\section{Introduction} Conformal field theory (CFT) is ubiquitous in theoretical physics. It is no surprise that there exists a vast amount of introductory books and notes aimed at anyone approaching the subject for the first time. A comprehensive introduction is provided by the book of Di Francesco et al. \cite{DiFrancesco:1997nk}. The spacetime dimension is relevant for a CFT. In two dimensions the conformal algebra is infinite-dimensional. In higher dimensions it is finite-dimensional, of the type $\mathfrak{so}(p,q)$. Most of the introductory literature focuses on two-dimensional conformal field theories, discussing only marginally the case $d>2$. This is motivated, on the one hand, by string theory. There, the CFT arises as a two-dimensional field theory living on the worldsheet of a string which moves in some ambient spacetime. Good references specialized in string theory applications are \cite{Polchinski:1998rq, Blumenhagen:2009zz, Ketov:1995yd, Schellekens:1996tg}. A more technical text, well suited to mathematicians, is \cite{Schottenloher:2008zz}. On the other hand, there exist two-dimensional models in statistical mechanics whose continuum description at a second-order phase transition is given by a conformal field theory. A remarkable example is the Ising model for ferromagnets. Good references that deal with statistical mechanics applications are \cite{Cardy:2008jc, Ginsparg:1988ui, Dotsenko:1986ca}. Further notes which provide a general introduction to conformal field theory, but always with a main focus on two dimensions, are \cite{Qualls:2015qjb, Rovai:2013gga, Efthimiou:2000gz}. Nevertheless, conformal field theories in $d>2$ dimensions are also important and do occur in modern research areas like the AdS/CFT correspondence \cite{Maldacena:1997re} or in the context of the conformal bootstrap program \cite{Rattazzi:2008pe}. Some of the introductory references that discuss conformal field theory in $d>2$ dimensions in more depth are indeed the ones originally written as an introduction to the AdS/CFT correspondence \cite{Ammon:2015wua, Nastase:2015wjb, Aharony:1999ti, Zaffaroni:2000vh}. Other sources with pedagogical intents discuss conformal field theories in $d>2$ dimensions employing the projective null cone or embedding space formalism \cite{Osborn:2013ConformalFT, Rychkov:2016iqz, Simmons-Duffin:2016gjk}. The fields in a unitary CFT can be characterized by a positive number called the conformal dimension. One is generally interested in irreducible representations of the conformal group. In $d>2$ spacetime dimensions there indeed exists a class of fields which not only transform irreducibly under the conformal group but also possess the lowest conformal dimension within their conformal multiplet. They are called \textit{conformal primary fields}\footnote{In $d=2$ dimensions there exists a difference between primary and quasi-primary fields. Since we are interested in $d>2$, their analysis falls outside the scope of this paper.}. The concept was originally introduced by Mack and Salam, who referred to them as interpolating fields \cite{Mack:1969rr}. In the literature there exist two definitions of a conformal primary field. Both of them define it in terms of its behavior under a conformal transformation. The difference is that the first definition is differential, referring to an infinitesimal conformal transformation, while the second refers to a finite one.
The first definition involves the commutators between the field, evaluated at $x=0$, and the generators of conformal transformations, in particular dilations and special conformal transformations; the second definition gives an explicit rule to compute the transformed field evaluated at the new point in spacetime. These two kinds of definitions are ubiquitous in quantum field theory; one can for example restrict oneself to the rotation group, and classify fields in terms of their behaviour under an infinitesimal or a finite rotation. Given the central role played by conformal primary fields in a CFT, a coherent discussion of both definitions would be expected. Unfortunately, some textbooks and lecture notes state only one of the two definitions \cite{DiFrancesco:1997nk, Ginsparg:1988ui, Qualls:2015qjb, Rovai:2013gga,Aharony:1999ti, Zaffaroni:2000vh}. Others give them both but do not show their equivalence \cite{Ammon:2015wua, Nastase:2015wjb}. When the equivalence is shown, the discussion proceeds rather quickly \cite{Rychkov:2016iqz, Simmons-Duffin:2016gjk}. In another case, one definition is immediately stated while the other emerges only after several chapters dealing with other aspects of the theory \cite{Osborn:2013ConformalFT}. This paper can be thought of as useful supplementary material accompanying an introductory lecture course on conformal field theory. As such, its goal is simple: to give a clear and concise review of both definitions and to provide a simple proof of their equivalence. To that end, we use some minimal results from classical and quantum field theory combined with basic properties of conformal field theory. The paper is organised as follows. In Section \ref{sec:review} we review the essential results in field theory that we need for the proof. To be precise, we review the concept of symmetry in classical and quantum field theory, and how it is possible to define a field in terms of its transformation under rotations. Next, we turn to the definition and basic properties of conformal transformations, and the conformal algebra. The discussion is mainly based on \cite{DiFrancesco:1997nk, Zee:2016fuk} and is not intended to be exhaustive. The reader is referred to the mentioned literature for a complete treatment of the topics. In Section \ref{sec:primaries} we illustrate both definitions of conformal primary fields. In Section \ref{sec:equivalence21} we show how the second definition implies the first one. Section \ref{sec:equivalence12} provides extended calculations of an argument proposed in \cite{Simmons-Duffin:2016gjk}, where it is shown that the first definition implies the second one. \section{Basic results in field theory}\label{sec:review} The content of this section is mainly inspired by Chapters 2 and 4 of the book of Di Francesco et al. \cite{DiFrancesco:1997nk}. Other useful references are \cite{Rychkov:2016iqz, Simmons-Duffin:2016gjk, Poland:2018epd}. \subsection{Symmetries}\label{sec:symmetries} In classical relativistic field theory, a field $\phi^M$ living in a $d$-dimensional flat spacetime is described by an action \begin{equation} S=\int d^dx\,\, \mathcal{L}\left(\phi^M, \partial_{\mu}\phi^M \right), \end{equation} where $\mathcal{L}$ is the Lagrangian density and $M$ represents some collection of indices. For example, a scalar has $M=\varnothing$; a rank-2 tensor has $M=\{\mu,\nu\}$. For the sake of brevity we suppress $M$ from now on. In addition, we interchangeably refer to the spacetime position as $x$ or $x^{\mu}$.
Unless otherwise stated, throughout the whole paper we take the flat metric to be in Minkowski signature, $\eta=\text{diag}(-1,+1,\ldots,+1)$. In general, performing a transformation of coordinates $x\mapsto x'$ has an effect also on the fields: \begin{equation}\label{eq:trafo_on_fields} \phi(x)\mapsto {\phi'}(x')=\mathcal{F}\left(\phi(x)\right), \end{equation} where $\mathcal{F}$ is a functional. A symmetry is a transformation that leaves the action invariant, $S=S'$. An important class of symmetries is constituted by continuous global symmetries. They can be written as \begin{equation}\label{eq:trafo_x_global_cont} \begin{split} x'^{\mu}&=x^{\mu}+\omega_a\frac{\delta x^{\mu}}{\delta\omega_a},\\ \phi'(x')&=\phi(x)+\omega_a\frac{\delta\mathcal{F}}{\delta\omega_a}(x), \end{split} \end{equation} where $\{\omega_a\}$ is a set of infinitesimal parameters independent of the spacetime position $x$. One defines the \textit{generators} $G_a$ of a continuous global symmetry in terms of the difference between the transformed field and the original field, both evaluated at the same position: \begin{equation}\label{eq:generators_diff} \delta_{\omega} \phi(x)= {\phi'}(x)-\phi(x):=-i\omega_a G_a\phi(x). \end{equation} In this definition, the $G_a$ are differential operators. The importance of symmetries in physics is summarized by Noether's theorem. It states that to every continuous global symmetry of the action there corresponds a current which is classically conserved, meaning that \begin{equation} \partial_{\mu}j^{\mu}_a =0. \end{equation} Together with the conserved current there is a conserved charge \begin{equation} Q_a=\int d^{d-1}x\,\, j^{0}_a, \end{equation} which is constant in time, $\frac{dQ_a}{dt}=0$. A classical field theory can be quantized in several ways, one of them being canonical quantization. In this approach, one promotes fields to operators, $\phi\to \hat{\phi}$, which are required to satisfy commutation or anticommutation relations, depending on their bosonic or fermionic nature. In quantum field theory, one can derive an important relation involving the commutator of the conserved charge and the field (see Section 2.4 of Di Francesco et al. \cite{DiFrancesco:1997nk} or Section 7.3 of Weinberg \cite{Weinberg:1995mt}): \begin{equation}\label{eq:comm_charge} \left[\hat{Q}_a,\hat{\phi}\right]=G_a\hat{\phi}. \end{equation} In other words, the conserved charge $Q_a$, promoted to an operator, is the generator of the symmetry transformation in the Hilbert space of quantum states. In order to avoid confusion, we explicitly keep the hat symbol on quantum operators. \subsection{Rotations} Before entering the realm of conformal field theory, it is convenient to step back and recall some known facts about a class of transformations familiar from classical mechanics: rotations. Let us suppose we have fixed a Cartesian coordinate system. A rotation $\mathcal{R}$ is a transformation of coordinates that leaves the length of a vector invariant. We can think of it as a matrix, $\vec{v}^{\,\prime}=\mathcal{R}\vec{v}$. We have \begin{equation} \vec{v}^{\,\prime 2}=\vec{v}^{\,\prime T}\cdot\vec{v}^{\,\prime}=\left(\mathcal{R}\vec{v}\right)^T\cdot \mathcal{R}\vec{v}=\vec{v}^T \mathcal{R}^T \mathcal{R}\vec{v}\overset{!}{=}\vec{v}^T\cdot\vec{v}. \end{equation} The matrices that satisfy the condition $\mathcal{R}^T\mathcal{R}=\mathbb{1}$ are called orthogonal. Recalling that $\det(\mathcal{R}_1\mathcal{R}_2)=\det\mathcal{R}_1\det\mathcal{R}_2$, their determinant can be either $1$ or $-1$.
The latter case characterizes reflections. Rotations are then the set of matrices with unit determinant that satisfy the orthogonality condition. In $d$ dimensions they make up the special orthogonal group $SO(d)$. It is a Lie group, meaning that a rotation through a finite angle can be obtained by performing infinitesimal rotations repeatedly. An infinitesimal rotation can be written as \begin{equation}\label{eq:rot_exp} \mathcal{R}=\mathbb{1}+A+O(A^2), \end{equation} where $A$ is a matrix proportional to the infinitesimal angle(s). Plugging \eqref{eq:rot_exp} into the orthogonality condition, \begin{equation} \mathcal{R}^T\mathcal{R}=(\mathbb{1}+A)^T(\mathbb{1}+A)+O(A^2)=\mathbb{1}, \end{equation} gives $A^T=-A$. In $d$ dimensions, there are $d(d-1)/2$ independent matrices $T_a$ that satisfy this condition. These matrices are called generators of rotations, and we can expand $A$ as $A=\sum_{a=1}^{d(d-1)/2}\theta_a T_a$, where the real numbers $\theta_a$ are the infinitesimal angles of rotations. If we want to consider a finite rotation, we can split the finite angles $\theta_a$ into $N$ pieces so that $\theta_a/N$ is infinitesimal for large $N$, and perform the infinitesimal rotation $N$ times. We then have \begin{equation} \mathcal{R}=\lim_{N\to\infty}\left(\mathbb{1}+\sum_{a}\frac{\theta_a T_a}{N}\right)^N=e^{\sum_a\theta_a T_a}. \end{equation} Of course not only vectors but also fields may be affected by rotations, transforming under some representation of the rotation group. One can even say what a certain kind of field is by looking at its behaviour under rotations. Since $SO(d)$ is a Lie group, it is possible to use in the definition either an infinitesimal or a finite rotation. For example, we can define a scalar field as a function of space that transforms under a finite rotation as $\phi'(x')=\phi(x)$, or $\phi'(x)=\phi\left(\mathcal{R}^{-1}x\right)$. For an infinitesimal rotation, we can expand $\phi'(x)=\phi(x)-A_{ij}x_j\partial_i\phi(x)=\phi(x)-(\theta_a T_a)_{ij}x_j\partial_i\phi(x)$. We can then define a scalar field as a function of space that, under an infinitesimal rotation, undergoes the variation $\delta\phi(x)=-(\theta_a T_a)_{ij}x_j\partial_i\phi(x)$. In particular, for $x=0$, $\delta\phi(0)=0$. If we call $\hat{J}_{ij}$ the quantum charge operator associated with rotational invariance, then in a quantum theory we can state two definitions of a scalar field in terms of its behaviour under rotation: $\left[\hat{J}_{ij}, \hat{\phi}(0)\right]=0$ or $\hat{\phi'}(x')=\hat{\phi}(x)$. One can proceed similarly in a conformal field theory. It is possible to define a field by its behaviour under a finite or infinitesimal conformal transformation. This is what we do in Section \ref{sec:primaries}. In a conformal field theory there are however additional subtleties. The conformal group contains not only rotations but also dilations and the so-called special conformal transformations. The goal of the next subsections is to get familiar with them. \subsection{Properties of conformal transformations}\label{sec:conformal} Let $g_{\mu \nu}$ be the metric tensor in a $d$-dimensional spacetime. A conformal transformation is a change of coordinates $x\mapsto x'=f(x)$ that leaves the metric tensor invariant up to a scale: \begin{equation}\label{eq:def_conformal} g_{\mu \nu}(x) \mapsto g'_{\mu \nu}(x')=\Omega^2(x)g_{\mu \nu}(x), \end{equation} where we assume $\Omega(x)>0$. From now on we consider flat spacetime, i.e. $g_{\mu \nu}=\eta_{\mu \nu}$.
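As a quick check of this definition, one can verify that the Poincar\'e group is contained in the conformal group. For $x'^{\mu}={\Lambda^{\mu}}_{\nu}x^{\nu}+a^{\mu}$, with $\Lambda$ a Lorentz transformation and $a^{\mu}$ constant, the transformation rule of the metric gives \begin{equation} \frac{\partial x^{\rho}}{\partial x'^{\mu}}\frac{\partial x^{\sigma}}{\partial x'^{\nu}}\eta_{\rho \sigma}={\left(\Lambda^{-1}\right)^{\rho}}_{\mu}{\left(\Lambda^{-1}\right)^{\sigma}}_{\nu}\eta_{\rho \sigma}=\eta_{\mu \nu}, \end{equation} so translations and Lorentz transformations are conformal transformations with trivial conformal factor, $\Omega(x)=1$.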
Recalling the transformation rule for rank-2 tensors with lower indices, we can rewrite Equation \eqref{eq:def_conformal} as \begin{equation}\label{eq:defining_conformal_flat} \frac{\partial x^{\rho}}{\partial {x'}^{\mu}}\frac{\partial x^{\sigma}}{\partial x'^{\nu}}\eta_{\rho \sigma}=\Omega^2(x)\eta_{\mu \nu}. \end{equation} Let us now consider an infinitesimal transformation of coordinates \begin{equation} x'^{\rho}=x^{\rho}+\epsilon^{\rho}(x)+O(\epsilon^2). \end{equation} Inserting it into Equation \eqref{eq:defining_conformal_flat} gives \begin{equation}\label{eq:def_calculations} \eta_{\mu \nu}-\partial_{\mu}\epsilon_{\nu}-\partial_{\nu}\epsilon_{\mu}+O\left(\epsilon^2\right)=\Omega^2\eta_{\mu \nu}. \end{equation} At first order in $\epsilon$, \begin{equation}\label{eq:k(x)} \partial_{\mu}\epsilon_{\nu}+\partial_{\nu}\epsilon_{\mu}=\xi(x)\eta_{\mu \nu}, \end{equation} for some function $\xi(x)$. Multiplying both sides by $\eta^{\mu \nu}$ and using $\eta^{\mu \nu}\eta_{\mu \nu}=d$ fixes $\xi(x)=\frac{2}{d}\partial \cdot \epsilon$. Hence, Equation \eqref{eq:k(x)} becomes \begin{equation}\label{eq: equation_conformal_trafo} \partial_{\mu}\epsilon_{\nu}+\partial_{\nu}\epsilon_{\mu}=\frac{2}{d}\left(\partial \cdot \epsilon \right)\eta_{\mu \nu}. \end{equation} Comparing Equations \eqref{eq: equation_conformal_trafo} and \eqref{eq:def_calculations} we can read off the conformal factor: \begin{equation}\label{eq:conformal_factor} \Omega^2(x)=1-\frac{2}{d}\left(\partial \cdot \epsilon \right) +O\left(\epsilon^2\right). \end{equation} The most general solution of Equation \eqref{eq: equation_conformal_trafo}, in $d>2$ dimensions, is \begin{equation}\label{eq:solution} \epsilon^{\mu}=a^{\mu} +{\omega^{\mu}}_{\nu}x^{\nu}+\alpha x^{\mu}+2(b\cdot x) x^{\mu} - b^{\mu}x^2, \end{equation} where $a^{\mu}$, ${\omega^{\mu}}_{\nu}$ (antisymmetric once both indices are lowered), $b^{\mu}$ and $\alpha\ll1$ are constants. The first two terms represent an infinitesimal translation and an infinitesimal Lorentz transformation, respectively. The third term, $x'^{\mu}=(1+\alpha)x^{\mu}$, is a dilation. Its finite version is \begin{equation} x'^{\mu}=\lambda x^{\mu}. \end{equation} The infinitesimal dilation is recovered via $\lambda=1+\alpha +O(\alpha^2)$. The last two terms of \eqref{eq:solution} represent a special conformal transformation (SCT). Its finite version is \begin{equation}\label{eq:sct} x'^{\mu}=\frac{x^{\mu}-b^{\mu}x^2}{1-2(b\cdot x)+b^2x^2}. \end{equation} A special conformal transformation can be split into more elementary transformations. Equation \eqref{eq:sct} can indeed be written as \begin{equation} \frac{x'^{\mu}}{x'^2}=\frac{x^{\mu}}{x^2}-b^{\mu}. \end{equation} The transformation $x'^{\mu}=\frac{x^\mu}{x^2}=\frac{1}{x_{\mu}}$ is called inversion (here and in the following we use the shorthand $\frac{1}{x_{\mu}}\equiv\frac{x^{\mu}}{x^2}$). An SCT can then be understood as an inversion of $x^{\mu}$, followed by a translation by $-b^{\mu}$, followed again by an inversion. We extensively exploit this property in Section \ref{sec:equivalence21}. \subsection{The conformal algebra in $d>2$ dimensions}\label{sec:algebra} The set of conformal transformations forms a group that has the Lorentz group as a subgroup. The algebra of the conformal group is determined by looking at the generators of conformal transformations.
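To illustrate how the generators follow from Equation \eqref{eq:generators_diff}, it may help to work out one case explicitly; we take a scalar field under an infinitesimal dilation $x'^{\mu}=(1+\alpha)x^{\mu}$. From $\phi'(x')=\phi(x)$ we get $\phi'(x)=\phi\left((1-\alpha)x\right)+O(\alpha^2)$, so that \begin{equation} \delta_{\alpha}\phi(x)=\phi'(x)-\phi(x)=-\alpha x^{\mu}\partial_{\mu}\phi(x)=:-i\alpha\,\mathcal{D}\phi(x), \end{equation} which identifies $\mathcal{D}=-ix^{\mu}\partial_{\mu}$. The remaining generators listed below are obtained in the same way from the corresponding terms of Equation \eqref{eq:solution}.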
The generators, defined as in Equation \eqref{eq:generators_diff}, are \begin{itemize} \item $\mathcal{P}_{\mu}=-i\partial_{\mu}$ for translations; \item $\mathcal{J}_{\mu \nu}=i\left(x_{\mu}\partial_{\nu}-x_{\nu}\partial_{\mu}\right)$ for Lorentz transformations; \item $\mathcal{D}=-ix^{\mu}\partial_{\mu}$ for dilations; \item $\mathcal{K}_{\mu}=-i\left(2x_{\mu}x^{\nu}\partial_{\nu}-x^2\partial_{\mu}\right)$ for special conformal transformations. \end{itemize} The non-vanishing commutators are \begin{equation}\label{eq:conf_algebra} \begin{split} &\left[\mathcal{D},\mathcal{P}_{\mu}\right]=i\mathcal{P}_{\mu},\\ &\left[\mathcal{D},\mathcal{K}_{\mu}\right]=-i\mathcal{K}_{\mu},\\ &\left[\mathcal{K}_{\mu},\mathcal{P}_{\nu}\right]=2i\left(\eta_{\mu\nu}\mathcal{D}-\mathcal{J}_{\mu\nu}\right),\\ &\left[\mathcal{K}_{\rho},\mathcal{J}_{\mu\nu}\right]=i\left(\eta_{\rho\mu}\mathcal{K}_{\nu}-\eta_{\rho\nu}\mathcal{K}_{\mu}\right),\\ & \left[\mathcal{P}_{\rho},\mathcal{J}_{\mu\nu}\right]=i\left(\eta_{\rho\mu}\mathcal{P}_{\nu}-\eta_{\rho\nu}\mathcal{P}_{\mu}\right),\\ &\left[\mathcal{J}_{\mu\nu},\mathcal{J}_{\rho\sigma}\right]=i\left(\eta_{\nu\rho}\mathcal{J}_{\mu\sigma}+\eta_{\mu\sigma}\mathcal{J}_{\nu\rho}-\eta_{\mu\rho}\mathcal{J}_{\nu\sigma}-\eta_{\nu\sigma}\mathcal{J}_{\mu\rho}\right), \end{split} \end{equation} which define the $\mathfrak{so}(d,2)$ algebra, or the $\mathfrak{so}(d+1,1)$ algebra in the case of Euclidean signature. The corresponding conformal group is the Lie group $SO(d,2)$ or $SO(d+1,1)$, respectively. In a quantum theory we also need the charge operators. Therefore, we introduce the operators $\hat{P}_{\mu}$, $\hat{J}_{\mu\nu}$, $\hat{D}$, $\hat{K}_{\mu}$, which are the generators, on the Hilbert space of quantum states, of translations, Lorentz transformations, dilations and special conformal transformations, respectively. They also obey the algebra \eqref{eq:conf_algebra}. The action of the generators on a field can be derived by using the method of induced representations \cite{Mack:1969rr}. One starts by defining the action of the generators on the field at $x=0$: \begin{equation}\label{eq:commutators_x0} \begin{split} &\left[\hat{J}_{\mu\nu},\hat{\phi}(0)\right]=S_{\mu \nu}\hat{\phi}(0),\\ &\left[\hat{D},\hat{\phi}(0)\right]=-i\Delta\hat{\phi}(0),\\ &\left[\hat{K}_{\mu},\hat{\phi}(0)\right]=\kappa_{\mu}\hat{\phi}(0). \end{split} \end{equation} $\Delta$ is a positive real number, called the conformal dimension; $S_{\mu \nu}$ is a matrix obeying the Lorentz algebra and represents the intrinsic angular momentum (spin) of the field. For instance, scalars have $S_{\mu \nu}=0$; for vectors one can choose ${\left(S^{\mu \nu}\right)^{\rho}}_{\sigma}=i\left(\eta^{\nu \rho}{\delta^{\mu}}_{\sigma}-\eta^{\mu \rho}{\delta^{\nu}}_{\sigma}\right)$; spinor fields have $S_{\mu \nu}=\frac{i}{4}\left[\gamma_{\mu},\gamma_{\nu}\right]$, where the $\gamma_{\mu}$ operators satisfy the $d$-dimensional Clifford algebra \begin{equation}\label{eq:clifford} \{\gamma_{\mu},\gamma_{\nu}\}=2\eta_{\mu \nu}. \end{equation} Subsequently, one uses the momentum operator to shift the relations \eqref{eq:commutators_x0} to a generic position $x$.
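Concretely, with the conventions of Equation \eqref{eq:comm_charge}, the field at a generic point is obtained by conjugation with the translation operator, \begin{equation} \hat{\phi}(x)=e^{i\hat{P}\cdot x}\,\hat{\phi}(0)\,e^{-i\hat{P}\cdot x}, \qquad \hat{P}\cdot x\equiv\hat{P}_{\mu}x^{\mu}. \end{equation} Differentiating this relation immediately gives $\partial_{\mu}\hat{\phi}(x)=i\left[\hat{P}_{\mu},\hat{\phi}(x)\right]$, i.e. the first commutator of Equation \eqref{eq:commutators_fields} below; the remaining ones can be obtained by expanding the conjugated generators with the Hausdorff formula.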
This, together with the commutation relations of the conformal algebra \eqref{eq:conf_algebra}, implies \begin{equation}\label{eq:commutators_fields} \begin{split} &\left[\hat{P}_{\mu},\hat{\phi}(x)\right]=\mathcal{P}_{\mu}\hat{\phi}(x)=-i\partial_{\mu}\hat{\phi}(x); \\ &\left[\hat{J}_{\mu\nu},\hat{\phi}(x)\right]=\mathcal{J}_{\mu\nu}\hat{\phi}(x)=i\left(x_{\mu}\partial_{\nu}-x_{\nu}\partial_{\mu}\right)\hat{\phi}(x)+S_{\mu \nu}\hat{\phi}(x);\\ &\left[\hat{D},\hat{\phi}(x)\right]=\mathcal{D}\hat{\phi}(x)=\left(-ix^{\nu}\partial_{\nu}-i\Delta\right)\hat{\phi}(x);\\ &\left[\hat{K}_{\mu},\hat{\phi}(x)\right]=\mathcal{K}_{\mu}\hat{\phi}(x)=\left(-2ix_{\mu}\Delta +ix^2\partial_{\mu}-2ix_{\mu}x^{\rho}\partial_{\rho}-2x^{\nu}S_{\mu \nu}+\kappa_{\mu}\right)\hat{\phi}(x). \end{split} \end{equation} \section{Conformal primary fields in $d>2$ dimensions}\label{sec:primaries} Loosely speaking, in $d>2$ spacetime dimensions conformal primary fields are fields that transform irreducibly under the conformal group and possess the lowest conformal dimension in their conformal multiplet. Since the conformal group has the Lorentz group and the group of dilations as subgroups, every irreducible representation of the conformal group is specified by an irreducible representation $\rho$ of the Lorentz group $SO(d-1,1)$ and a definite conformal dimension $\Delta$. In the special case $d=4$, the Lie algebra of the Lorentz group can be split in terms of two copies of the Lie algebra of $SU(2)$. Hence, in $d=4$ one labels a generic representation of the Lorentz group in terms of two half-integers $(j_L,j_R)$. In addition, one requires $\kappa_{\mu}=0$, as we want the spectrum of conformal dimensions to be bounded from below. First, we notice that $\hat{K}_{\mu}$ is a lowering operator with respect to the conformal dimension. Suppose $\lvert\Psi\rangle$ is an eigenstate of $\hat{D}$: $\hat{D}\lvert\Psi\rangle=i\Delta\lvert\Psi\rangle$. Then, using $\left[\hat{D},\hat{K}_{\mu}\right]=-i\hat{K}_{\mu}$, \begin{equation} \hat{D}\hat{K}_{\mu}\lvert\Psi\rangle=\left(\left[\hat{D},\hat{K}_{\mu}\right]+\hat{K}_{\mu}\hat{D}\right)\lvert\Psi\rangle=i(\Delta-1)\hat{K}_{\mu}\lvert\Psi\rangle. \end{equation} This allows us to justify the existence of primary operators, considered up to now as an axiom. Suppose we start with any local operator and keep hitting it with $\hat{K}_{\mu}$. Assuming that conformal dimensions are bounded from below, we must eventually hit zero, and this gives us a primary. In addition to the boundedness from below of the spectrum, let us suppose now that the theory is also unitary. Then there exists a lower bound for the conformal dimension of the fields. To show this, let us consider the matrix element \begin{equation} A_{\mu{\{t\}},\nu{\{s\}}}={}_{\{t\}}\langle \Delta,l \lvert \hat{K}_{\mu}\hat{P}_{\nu}\rvert\Delta,l\rangle_{\{s\}}, \end{equation} where $\rvert\Delta,l\rangle_{\{s\}}$ is a state created by a field operator with dimension $\Delta$ and spin $l$, and $\{s\},\{t\}$ are spin indices. In a unitary theory, this matrix must have no negative eigenvalues. If it had a negative eigenvalue $\sigma<0$ with eigenvector $\zeta_{\nu,{\{s\}}}$, the state $\lvert\Psi\rangle=\zeta_{\nu,{\{s\}}}\hat{P}^{\nu}\lvert\Delta,l\rangle_{\{s\}}$ (with summation over repeated labels implied) would have negative norm: \begin{equation} \langle \Psi\lvert\Psi\rangle=\zeta^{\dagger}A\zeta=\sigma\zeta^{\dagger}\zeta<0. \end{equation} From the form of the commutator $\left[\hat{K}_{\mu},\hat{P}_{\nu}\right]$ of the algebra \eqref{eq:conf_algebra}, we see that the eigenvalues of $A$ get two contributions.
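Indeed, assuming the state is created by a primary operator, so that $\hat{K}_{\mu}\lvert\Delta,l\rangle_{\{s\}}=0$, only the commutator survives in the matrix element: \begin{equation} A_{\mu{\{t\}},\nu{\{s\}}}={}_{\{t\}}\langle \Delta,l \lvert \left[\hat{K}_{\mu},\hat{P}_{\nu}\right]\rvert\Delta,l\rangle_{\{s\}}=2i\,{}_{\{t\}}\langle \Delta,l \lvert \left(\eta_{\mu\nu}\hat{D}-\hat{J}_{\mu\nu}\right)\rvert\Delta,l\rangle_{\{s\}}. \end{equation}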
The first is proportional to $\Delta$, the second one corresponds to the eigenvalues of a Hermitian matrix that depends only on the spin: \begin{equation} B_{\mu{\{t\}},\nu{\{s\}}}=\langle{\{t\}} \vert i\hat{J}_{\mu\nu} \rvert\{s\}\rangle. \end{equation} The condition that all the eigenvalues of $A$ must be non-negative, $\sigma_A\geq0$, translates to $\Delta \geq \sigma_{\text{max}}(B)$, where $\sigma_{\text{max}}(B)$ is the maximum eigenvalue of $B$. A more detailed calculation \cite{Rychkov:2016iqz} shows that, for a symmetric traceless field of spin $l$, \begin{equation} \Delta_{\text{min}}(l)=l+d-2 \hspace{3mm} \text{if} \hspace{3mm} l=1,2,\ldots \hspace{5mm} \text{and} \hspace{3mm}\Delta_{\text{min}}(0)=\frac{d}{2}-1. \end{equation} Analogous results hold for antisymmetric tensors and spinor fields. We are now ready to state the first definition. We reintroduce the set of indices $M$ to give it general validity. \begin{definition}\label{def:definition1} {\it In $d>2$ spacetime dimensions, let $\hat{D}$ be the generator of dilations and let $\hat{K}_{\mu}$ be the generator of special conformal transformations. A conformal primary field $\hat{\phi}^M_{\rho}(x)$, in the $\rho$ representation of the Lorentz group and with conformal dimension $\Delta$, satisfies the following conditions at $x=0$: } \begin{enumerate} \item $\left[\hat{D},\hat{\phi}^M_{\rho}(0)\right]=-i\Delta\hat{\phi}^M_{\rho}(0)$; \item $\left[\hat{K}_{\mu},\hat{\phi}^M_{\rho}(0)\right]=0$. \end{enumerate} \end{definition} Given a conformal primary, one can construct other operators with higher dimension by acting with the momentum operator $\hat{P}_{\mu}$, which is a raising operator for the conformal dimension (this can be checked as before using $\left[\hat{D},\hat{P}_{\mu}\right]=i\hat{P}_{\mu}$). These new operators are called descendants. Conformal primary fields can also be defined via their behaviour under a finite conformal transformation. The ingredient for the second definition is the tensor \begin{equation} {R^{\mu}}_{\lambda}(x)=\Omega^{-1}(x)\frac{\partial x^{\mu}}{\partial x'^{\lambda}}, \end{equation} where $\Omega(x)$ is the positive square root of the conformal factor of Equation \eqref{eq:def_conformal}. It satisfies \begin{equation} {R^{\mu}}_{\lambda}{R^{\nu}}_{\sigma}\eta_{\mu \nu}=\Omega^{-2}\frac{\partial x^{\mu}}{\partial x'^{\lambda}}\frac{\partial x^{\nu}}{\partial x'^{\sigma}}\eta_{\mu \nu}=\Omega^{-2}\Omega^2\eta_{\lambda\sigma}=\eta_{\lambda \sigma}. \end{equation} This means that ${R^{\mu}}_{\lambda}$ is a Lorentz transformation. Its explicit expression for an infinitesimal conformal transformation ${x'}^{\mu}=x^{\mu}+\epsilon^{\mu}$, with $\Omega(x)=1-\frac{1}{d}\partial\cdot\epsilon$, is \begin{equation}\label{eq:Run} {R^{\mu}}_{\nu}(x)={\delta^{\mu}}_{\nu}-\frac{1}{2}\left(\partial_{\nu}\epsilon^{\mu}-\partial^{\mu}\epsilon_{\nu}\right). \end{equation} We state now the second definition of a conformal primary field.
\begin{definition}\label{def:definition2} {\it In $d>2$ spacetime dimensions, a conformal primary field $\hat{\phi}^M_{\rho}(x)$, in the $\rho$ representation of the Lorentz group and with conformal dimension $\Delta$, transforms under a conformal transformation $\eta_{\mu \nu}\mapsto \Omega^2(x)\eta_{\mu \nu}$ as \begin{equation}\label{eq:def2} \hat{\phi'}^M_{\rho}(x')=\Omega^{\Delta}(x)\mathcal{D}{\left[R(x)\right]^M}_{N}\hat{\phi}^N_{\rho}(x), \end{equation} where ${R^{\mu}}_{\nu}(x)=\Omega^{-1}(x)\frac{\partial x^{\mu}}{\partial x'^{\nu}}$ and $\mathcal{D}{\left[R(x)\right]^M}_{N}$ implements the action of $R$ in the $SO(d-1,1)$ representation of $\hat{\phi}^{M}_{\rho}(x)$. } \end{definition} We now list some possibilities. \begin{itemize} \item Scalar fields have $\mathcal{D}\left[R(x)\right]=1$. \item Vector fields have $\mathcal{D}{\left[R(x)\right]^{\mu}}_{\nu}={R^{\mu}}_{\nu}(x)$. \item For generic tensor fields a product of ${R^{\mu}}_{\nu}$'s is involved. The position of the indices depends on what kind of tensor $\hat{\phi}^M$ is. For example, if we have ${\hat{\phi}^{\mu \nu}}_{\rho}(x)$, then ${\mathcal{D}\left[R(x)\right]^{\mu \nu \tau}}_{\lambda \sigma \rho}={R^{\mu}}_{\lambda}(x){R^{\nu}}_{\sigma}(x){R_{\rho}}^{\tau}(x)$. \item For spinor fields, $\mathcal{D}\left[R(x)\right]$ is a spinor representation, i.e. $\mathcal{D}\left[R(x)\right]=\exp\left(\frac{i}{2}\Theta_{\mu \nu}S^{\mu \nu}\right)$, where $\Theta_{\mu \nu}$ is an antisymmetric tensor with $d(d-1)/2$ independent components specifying which transformation is being performed, and $S_{\mu \nu} =\frac{i}{4}\left[\gamma_{\mu},\gamma_{\nu}\right]$, with $\gamma_{\mu}$ satisfying the algebra \eqref{eq:clifford}. \end{itemize} The existing introductory textbooks and lecture notes which deal with conformal field theory in $d>2$ present a plethora of possibilities regarding the definitions. \begin{itemize} \item \cite{DiFrancesco:1997nk, Ginsparg:1988ui, Qualls:2015qjb, Rovai:2013gga} provide only the second definition, restricting themselves to scalar fields. Moreover, they adopt the convention of calling the fields ``quasi-primary'' instead of primary. \item \cite{Aharony:1999ti, Zaffaroni:2000vh} give only the first definition. \item \cite{Ammon:2015wua, Nastase:2015wjb} give both definitions, without explicitly proving the equivalence. \item In \cite{Osborn:2013ConformalFT} one definition is immediately stated while the other emerges only after several chapters dealing with other aspects of the theory. \item \cite{Simmons-Duffin:2016gjk} provides both definitions and shows that the first one implies the second one by explicitly using the commutators \eqref{eq:commutators_fields}. Most of the calculations are omitted. \item \cite{Rychkov:2016iqz} provides both definitions and shows that the second one implies the first one. Again, only a quick calculation is provided. \end{itemize} \section{Derivation of the equivalence of the definitions: 2 implies 1}\label{sec:equivalence21} In this section we prove that Definition \ref{def:definition2} implies Definition \ref{def:definition1}. We omit the subscript $\rho$ whenever it is clear which type of field we are talking about. We consider a conformal transformation that changes the coordinates $x\mapsto f(x)=x_c$ and the field $\hat{\phi}^M(x)\mapsto{\hat{\phi}^M}_c(x_c)$. We outline here the steps that we are going to follow. \begin{enumerate} \item We compute the associated conformal factor $\Omega_c(x)$.
\item Starting from the expression of ${\hat{\phi}^M}_c(x_c)$ as given in Equation \eqref{eq:def2} we compute ${\hat{\phi}^M}_c(x)$ by noticing that \begin{equation} \begin{gathered} {\hat{\phi}^M}_c(x_c)=\Omega^{\Delta}(x)\mathcal{D}{\left[R(x)\right]^M}_{N}\hat{\phi}^N(x)=\\ =\Omega^{\Delta}\left(f^{-1}(x_c)\right)\mathcal{D}{\left[R\left(f^{-1}(x_c)\right)\right]^M}_{N}\hat{\phi}^N\left(f^{-1}(x_c)\right), \end{gathered} \end{equation} and renaming the indices on both sides we have \begin{equation}\label{eq:property} {\hat{\phi}^M}_c(x)=\Omega^{\Delta}\left(f^{-1}(x)\right)\mathcal{D}{\left[R\left(f^{-1}(x)\right)\right]^M}_{N}\hat{\phi}^N\left(f^{-1}(x)\right). \end{equation} \item We take the difference ${\hat{\phi}^M}_c(x)-{\hat{\phi}^M}(x)$. This, by Equations \eqref{eq:generators_diff} and \eqref{eq:comm_charge}, gives the commutator between the generator of the transformation and the field. At the end we set $x=0$. \end{enumerate} We follow these steps for a special conformal transformation ($c=\text{sct}$) and a dilation ($c=\text{dil}$). We begin by considering a scalar field, for which $\rho$ is trivial (or, in $d=4$, $(j_L,j_R)=(0,0)$) and $\mathcal{D}\left[R(x)\right]=1$. The proof for fields with higher spins is entirely based on the one for scalars. \subsection{Step 1: the conformal factors} Let us first consider a special conformal transformation. Recall that ultimately we want to compute ${\hat{\phi}^M}_c(x)$. However, if $\Tilde{x}=g(x)$ is the result of an SCT as given in Equation \eqref{eq:sct}, we cannot easily invert $g$ to get $x=g^{-1}(\Tilde{x})$. Therefore, it is more convenient to adopt the interpretation of an SCT as a composition of two inversions and a translation. Let us then compute the conformal factor for an inversion ${x'}^{\mu}=x^{\mu}/x^2$: \begin{equation} \begin{gathered} \frac{\partial {x'}^{\rho}}{\partial {x}^{\mu}}\frac{\partial {x}'^{\sigma}}{\partial {x}^{\nu}}=\left(\frac{{\delta^{\rho}}_{\mu}}{x^2}-\frac{2x^{\rho}x_{\mu}}{x^4}\right)\left(\frac{{\delta^{\sigma}}_{\nu}}{x^2}-\frac{2x^{\sigma}x_{\nu}}{x^4}\right)= \\ = \frac{{\delta^{\rho}}_{\mu}{\delta^{\sigma}}_{\nu}}{x^4}-\frac{2{\delta^{\rho}}_{\mu}x^{\sigma}x_{\nu}}{x^6}-\frac{2{\delta^{\sigma}}_{\nu}x^{\rho}x_{\mu}}{x^6}+\frac{4x^{\rho}x^{\sigma}x_{\mu}x_{\nu}}{x^8}. \end{gathered} \end{equation} Contracting with the metric, \begin{equation} \frac{\partial {x'}^{\rho}}{\partial {x}^{\mu}}\frac{\partial {x'}^{\sigma}}{\partial {x}^{\nu}}\eta_{\rho \sigma}=\frac{\eta_{\mu \nu}}{x^4}. \end{equation} Since the inversion is its own inverse, the same computation with the roles of $x$ and $x'$ exchanged gives $\frac{\partial x^{\rho}}{\partial x'^{\mu}}\frac{\partial x^{\sigma}}{\partial x'^{\nu}}\eta_{\rho \sigma}=\frac{\eta_{\mu \nu}}{x'^4}=x^4\eta_{\mu \nu}$, where we used $x'^2=1/x^2$. Comparing this with the defining property of a conformal transformation, Equation \eqref{eq:defining_conformal_flat}, we conclude that \begin{equation} \Omega_{\text{inv}}(x)=x^2. \end{equation} Next, let us consider a dilation ${x'}^{\mu}=\lambda x^{\mu}$. We have \begin{equation} \frac{\partial x^{\mu}}{\partial {x'}^{\rho}}\frac{\partial x^{\nu}}{\partial {x'}^{\sigma}}\eta_{\mu \nu}=\frac{{\delta^{\mu}}_{\rho}}{\lambda}\frac{{\delta^{\nu}}_{\sigma}}{\lambda}\eta_{\mu \nu}=\frac{\eta_{\rho \sigma}}{\lambda^2}, \end{equation} from which \begin{equation} \Omega_{\text{dil}}(x)=\frac{1}{\lambda}. \end{equation} Finally, we notice that for translations the conformal factor is trivially equal to one, $\Omega_{\text{tr}}(x)=1$. \subsection{Step 2: obtaining $\phi_c(x)$ from $\phi_c(x_c)$} Let $\Tilde{x}$ be the result of an infinitesimal SCT with parameter $b^{\mu}$. We know that $\Tilde{x}^{\mu}/\Tilde{x}^2=x^{\mu}/x^2-b^{\mu}$.
In other words, $\Tilde{x}$ is obtained by performing the following chain of transformations. \begin{enumerate} \item A first inversion $x^{\mu}\mapsto {x'}^{\mu}=f_1\left(x^{\mu}\right)=\frac{x^{\mu}}{x^2}=\frac{1}{x_{\mu}}$. The field transforms as \begin{equation} \hat{\phi}(x)\mapsto \hat{\phi}'(x')=x^{2\Delta}\hat{\phi}(x). \end{equation} \item A translation ${x'}^{\mu}\mapsto {x''}^{\mu}=f_2\left({x'}^{\mu}\right)={x'}^{\mu}-b^{\mu}=\frac{x^{\mu}}{x^2}-b^{\mu}=\frac{\Tilde{x}^{\mu}}{\Tilde{x}^2}$. The field transforms as \begin{equation} \hat{\phi}'(x')\mapsto\hat{\phi}''(x'')=1\cdot \hat{\phi}'(x')=x^{2\Delta}\hat{\phi}(x). \end{equation} \item A second inversion ${x''}^{\mu}\mapsto {x'''}^{\mu}=f_3\left({x''}^{\mu}\right)=\frac{{x''}^{\mu}}{{x''}^2}=\frac{1}{x''_{\mu}}=\Tilde{x}^{\mu}$. The field transforms as \begin{equation}\label{eq:final_step} \hat{\phi}''(x'')\mapsto\hat{\phi}'''(x''')=\left(x^{''}\right)^{2\Delta}\hat{\phi}^{''}(x'')=\left(x^{''}\right)^{2\Delta}x^{2\Delta}\hat{\phi}(x). \end{equation} \end{enumerate} We now proceed backwards, starting from $\hat{\phi}'''(x''')$ until we reach $\hat{\phi}'''(x)$. Firstly we express $\hat{\phi}'''(x''')$ in terms of $x''$ only. We notice that $x^{\mu}=\frac{1}{x'_{\mu}}=\frac{1}{x''_{\mu}+b_{\mu}}$. Hence, by Equation \eqref{eq:final_step}, \begin{equation} \hat{\phi}'''({x'''}^{\mu})=\left({x''}^{\mu}\right)^{2\Delta}\left(\frac{1}{x''_{\mu}+b_{\mu}}\right)^{2\Delta}\hat{\phi}\left(\frac{1}{x''_{\mu}+b_{\mu}}\right). \end{equation} We observe that $f^{-1}_3(x''')=\frac{1}{x'''}$. Thus, using \eqref{eq:property}, i.e. replacing $x''$ with $f^{-1}_3(x'')=\frac{1}{x''}$, we have \begin{equation} \hat{\phi}'''\left({x''}^{\mu}\right)=\left(\frac{1}{x''_{\mu}}\right)^{2\Delta}\left(\frac{1}{\frac{1}{{x''}^{\mu}}+b_{\mu}}\right)^{2\Delta}\hat{\phi}\left(\frac{1}{\frac{1}{{x''}^{\mu}}+b_{\mu}}\right). \end{equation} Secondly, we express $\hat{\phi}'''\left({x''}\right)$ in terms of $x'$ only: \begin{equation} \hat{\phi}'''\left({x''}^{\mu}\right)=\left(\frac{1}{{x'}_{\mu}-b_{\mu}}\right)^{2\Delta}\left(\frac{1}{\frac{1}{{x'}^{\mu}-b^{\mu}}+b_{\mu}}\right)^{2\Delta}\hat{\phi}\left(\frac{1}{\frac{1}{{x'}^{\mu}-b_{\mu}}+b_{\mu}}\right). \end{equation} Using \eqref{eq:property} again, i.e. replacing $x'$ with $f^{-1}_2(x')=x'+b$, we determine \begin{equation} \hat{\phi}'''\left({x'}^{\mu}\right)=\left(\frac{1}{{x'}_{\mu}}\right)^{2\Delta}\left(\frac{1}{\frac{1}{{x'}^{\mu}}+b_{\mu}}\right)^{2\Delta}\hat{\phi}\left(\frac{1}{\frac{1}{{x'}^{\mu}}+b_{\mu}}\right). \end{equation} Finally, we express $\hat{\phi}'''\left({x'}\right)$ in terms of $x$ only: \begin{equation} \hat{\phi}'''\left({x'}^{\mu}\right)=\left(\frac{x^2}{{x}_{\mu}}\right)^{2\Delta}\left(\frac{1}{\frac{x^2}{{x}^{\mu}}+b_{\mu}}\right)^{2\Delta}\hat{\phi}\left(\frac{1}{\frac{x^2}{{x}^{\mu}}+b_{\mu}}\right). \end{equation} Using \eqref{eq:property} one last time, i.e. replacing $x$ with $f^{-1}_1(x)=\frac{1}{x}$, we obtain \begin{equation}\label{eq:final_phi} \hat{\phi}'''\left(x^{\mu}\right)\equiv\hat{\phi}_{\text{sct}}(x^{\mu})=\left(\frac{x^{\mu}}{x^2}\right)^{2\Delta}\left(\frac{1}{\frac{x_{\mu}}{x^2}+b_{\mu}}\right)^{2\Delta}\hat{\phi}\left(\frac{1}{\frac{x_{\mu}}{x^2}+b_{\mu}}\right). \end{equation} Next, we focus on a dilation ${x}^{\mu}_{\text{dil}}=\lambda x^{\mu}$. Infinitesimally, $\lambda=1+\alpha+O\left(\alpha^2\right)$.
According to \eqref{eq:def2} a scalar field transforms as \begin{equation} \hat{\phi}_{\text{dil}}(x_{\text{dil}})=\hat{\phi}_{\text{dil}}(\lambda x)=\left(\frac{1}{1+\alpha}\right)^{\Delta}\hat{\phi}(x), \end{equation} from which \begin{equation}\label{eq:dilation_phi} \hat{\phi}_{\text{dil}}(x)=\left(\frac{1}{1+\alpha}\right)^{\Delta}\hat{\phi}\left(\frac{x}{1+\alpha}\right). \end{equation} \subsection{Step 3: the action of the generators on the field} Since $b_{\mu}$ is infinitesimal, we can Taylor expand the factors in Equation \eqref{eq:final_phi} up to first order in $b_{\mu}$. We have \begin{equation} \begin{gathered} \left(\frac{1}{\frac{x_{\mu}}{x^2}+b_{\mu}}\right)^{2\Delta}=\left[\frac{1}{\frac{x_{\mu}}{x^2}\left(1+b\cdot x\right)}\right]^{2\Delta}=\left(\frac{x^2}{x_{\mu}}\right)^{2\Delta}\left(\frac{1}{1+b\cdot x}\right)^{2\Delta}=\\ =\left(\frac{x^2}{x_{\mu}}\right)^{2\Delta}\left(1-2\Delta\, b\cdot x\right) + O\left(b^2\right). \end{gathered} \end{equation} Furthermore, \begin{equation} \hat{\phi}\left(\frac{1}{\frac{x_{\mu}}{x^2}+b_{\mu}}\right)=\hat{\phi}\left(\frac{\frac{x^{\mu}}{x^2}+b^{\mu}}{\left(\frac{x_{\mu}}{x^2}+b_{\mu}\right)^2} \right)=\hat{\phi}\left(\frac{\frac{x^{\mu}}{x^2}+b^{\mu}}{\frac{1}{x^2}+\frac{2x\cdot b}{x^2}}\right). \end{equation} The argument can be written as \begin{equation} \begin{gathered} \left(\frac{x^{\mu}}{x^2}+b^{\mu}\right)\left(\frac{1}{x^2}+\frac{2x\cdot b}{x^2}\right)^{-1}=\left(x^{\mu}+x^2b^{\mu}\right)(1-2x\cdot b) + O\left(b^2\right)=\\ =x^{\mu}-2x^{\mu}(x\cdot b) +x^2 b^{\mu}+O\left(b^2\right). \end{gathered} \end{equation} Thus, \begin{equation}\label{eq:sct_shifted} \begin{split} \hat{\phi}\left(\frac{1}{\frac{x_{\mu}}{x^2}+b_{\mu}}\right)=\hat{\phi}\left(x^{\mu}-2x^{\mu}(x\cdot b) +x^2 b^{\mu}+O\left(b^2\right)\right)=\\ =\hat{\phi}\left(x\right)-2(x\cdot b) x^{\mu}\partial_{\mu}\hat{\phi}(x) +x^2b^{\mu}\partial_{\mu}\hat{\phi}(x) +O\left(b^2\right). \end{split} \end{equation} Putting it all together, \begin{equation} \begin{gathered}\label{eq:final_phippp} \hat{\phi}_{\text{sct}}(x)=\left(\frac{x^{\mu}}{x^2}\right)^{2\Delta}\left(\frac{x^2}{x_{\mu}}\right)^{2\Delta}\left(1-2\Delta\, x\cdot b\right)\big[\hat{\phi}(x)-2(x\cdot b) x^{\mu}\partial_{\mu}\hat{\phi}(x)\\+x^2b^{\mu}\partial_{\mu}\hat{\phi}(x)+ O\left(b^2\right)\big]= \hat{\phi}(x)-2(x\cdot b) x^{\mu}\partial_{\mu}\hat{\phi}(x)+x^2b^{\mu}\partial_{\mu}\hat{\phi}(x)\\-2\Delta (x\cdot b) \hat{\phi}(x)+O\left(b^2\right). \end{gathered} \end{equation} As explained in Subsection \ref{sec:symmetries}, the differential operators $G_a$ are defined in terms of the difference $\hat{\phi}_{\text{sct}}(x)-\hat{\phi}(x):=-iG_{a}\omega^a\hat{\phi}(x)$. In our case we have, by Equation \eqref{eq:final_phippp}, \begin{equation}\label{eq:first_difference} \delta_{b}\hat{\phi}(x)=\hat{\phi}_{\text{sct}}(x)-\hat{\phi}(x)=b^{\mu}\left(-2x_{\mu}x^{\nu}\partial_{\nu}+x^2\partial_{\mu}-2\Delta x_{\mu}\right)\hat{\phi}(x)+O\left(b^2\right). \end{equation} If we identify $\omega^a=b^{\mu}$ then, by Equations \eqref{eq:generators_diff} and \eqref{eq:comm_charge}, we have \begin{equation} \left[\hat{K}_{\mu},\hat{\phi}(x)\right]=\left(-2ix_{\mu}x^{\nu}\partial_{\nu}+ix^2\partial_{\mu}-2i\Delta x_{\mu}\right)\hat{\phi}(x). \end{equation} In particular, setting $x=0$ gives \begin{equation} \left[\hat{K}_{\mu},\hat{\phi}(0)\right]=0, \end{equation} which is the second requirement of Definition \ref{def:definition1}. We can repeat the same procedure for the dilation.
Expanding \eqref{eq:dilation_phi} up to first order in $\alpha$, \begin{equation} \hat{\phi}_{\text{dil}}(x)=\left(1-\alpha\Delta\right)\left[\hat{\phi}(x)-\alpha x^{\mu}\partial_{\mu}\hat{\phi}(x)\right]+O\left(\alpha^2\right). \end{equation} The difference between the transformed field and the original field is \begin{equation} \delta_{\alpha}\hat{\phi}(x)=\hat{\phi}_{\text{dil}}(x)-\hat{\phi}(x)=\alpha\left(-x^{\mu}\partial_{\mu}-\Delta\right)\hat{\phi}(x) +O\left(\alpha^2\right). \end{equation} Hence, \begin{equation} \left[\hat{D},\hat{\phi}(x)\right] = -i\left(x^{\mu}\partial_{\mu}+\Delta\right)\hat{\phi}(x). \end{equation} Setting $x=0$ gives \begin{equation} \left[\hat{D},\hat{\phi}(0)\right] =-i\Delta \hat{\phi}(0), \end{equation} which is the first requirement of Definition \ref{def:definition1}. \subsection{Higher representations} For fields with higher spin, one needs in principle the explicit expression of ${R^{\mu}}_{\nu}$. In the case of a special conformal transformation, the infinitesimal parameter is $\epsilon^{\mu}=2(b\cdot x)x^{\mu}-x^2b^{\mu}$. Thus, by Equation \eqref{eq:Run}, \begin{equation} {R^{\mu}}_{\nu}(x)={\delta^{\mu}}_{\nu}-2\left(x^{\mu}b_{\nu}-b^{\mu}x_{\nu}\right). \end{equation} For an infinitesimal dilation with parameter $\epsilon^{\mu}=\alpha x^{\mu}$ the ${R^{\mu}}_{\nu}$ tensor is trivial: \begin{equation} {R^{\mu}}_{\nu}(x)={\delta^{\mu}}_{\nu}-\frac{1}{2}{\delta^{\mu}}_{\nu}\alpha+\frac{1}{2}{\delta^{\mu}}_{\nu}\alpha={\delta^{\mu}}_{\nu}. \end{equation} We can summarize both possibilities by saying that $R$ is equal to the identity plus, possibly, terms directly proportional to $x$. Any representation $\mathcal{D}\left[R(x)\right]$ must replicate this structure. Since at the end we are interested in $x=0$, $\mathcal{D}\left[R(x)\right]$ can be considered trivial and no additional features are introduced to the calculations previously carried out for scalars. However, let us be thorough and do the calculations anyway to convince ourselves. We consider a vector field $\hat{\phi}^{\mu}$. According to \eqref{eq:def2} it transforms, under a finite conformal transformation, as \begin{equation} {\hat{\phi}}^{\prime\mu}\left(x'\right)=\Omega^{\Delta}(x){R^{\mu}}_{\nu}(x){\hat{\phi}}^{\nu}(x). \end{equation} The overall procedure is similar to the scalar case. The only difference is that we have to evaluate ${R^{\mu}}_{\nu}(x)$ in the shifted position $x^{\mu}-2x^{\mu}(x\cdot b)+x^2b^{\mu}$, as we did for $\hat{\phi}(x)$ in Equation \eqref{eq:sct_shifted}. Up to first order, the original function and the shifted one coincide: \begin{equation} {R^{\mu}}_{\nu}\left(x-2x(x\cdot b)+x^2b\right)={\delta^{\mu}}_{\nu}-2\left(x^{\mu}b_{\nu}-b^{\mu}x_{\nu}\right)+O\left(b^2\right)={R^{\mu}}_{\nu}(x)+O\left(b^2\right). \end{equation} Hence, Equation \eqref{eq:final_phippp} generalizes to \begin{equation} \begin{gathered} {\hat{\phi}_{\text{sct}}}^{\mu}(x)=\\(1-2\Delta\, x\cdot b)\left[{\delta^{\mu}}_{\nu}-2\left(x^{\mu}b_{\nu}-b^{\mu}x_{\nu}\right)\right]\left[1-2(x\cdot b)x^{\rho}\partial_{\rho}+x^2b^{\rho}\partial_{\rho}\right]\hat{\phi}^{\nu}(x) \\+O\left(b^2\right)=\hat{\phi}^{\mu}(x)-2(x\cdot b)x^{\rho}\partial_{\rho}\hat{\phi}^{\mu}+x^2b^{\rho}\partial_{\rho}\hat{\phi}^{\mu}-2\Delta(x\cdot b)\hat{\phi}^{\mu}+\\ +2b^{\mu}x_{\nu}\hat{\phi}^{\nu} -2x^{\mu}b_{\nu}\hat{\phi}^{\nu}+O\left(b^2\right). \end{gathered} \end{equation} When $x=0$ this implies that \begin{equation} \left[\hat{K}^{\nu}, \hat{\phi}^{\mu}(0)\right]=0.
\end{equation} Since ${R^{\mu}}_{\nu}$ is already trivial for dilations, the calculations are identical to the scalar case without any additional modification and lead to \begin{equation} \left[\hat{D},\hat{\phi}^{\mu}(0)\right]=-i\Delta\hat{\phi}^{\mu}(0). \end{equation} Similar arguments apply for a generic tensor field of type $(p,q)$. According to \eqref{eq:def2} it transforms as \begin{equation} {{\hat{\phi}}^{\prime\mu_1\ldots\mu_p}}_{\nu_1\ldots\nu_q}(x')= \Omega^{\Delta}(x){R^{\mu_1}}_{\lambda_1}(x)\ldots{R^{\mu_p}}_{\lambda_p}(x){R_{\nu_1}}^{\rho_1}(x)\ldots{R_{\nu_q}}^{\rho_q}(x) {{\hat{\phi}}^{\lambda_1\ldots\lambda_p}}_{\rho_1\ldots\rho_q}(x). \end{equation} For an SCT, shifting the argument of the above product of $R$'s leaves it unchanged, since we are neglecting higher-order terms. For a dilation we get a product of Kronecker deltas. In the first case we obtain terms proportional to $x$, which vanish when $x=0$. In the second case, the calculations are a carbon copy of the ones for a scalar field. The argument carries over to spinors. We can expand the exponential \begin{equation} \mathcal{D}\left[R(x)\right]=\exp\left(\frac{i}{2}\Theta_{\mu \nu}S^{\mu \nu}\right)=1+\text{\small terms directly proportional to $x$}. \end{equation} Since at the end we set $x=0$, we do not have to care about the additional terms. \\ We conclude that Definition \ref{def:definition2} implies Definition \ref{def:definition1}. \section{Derivation of the equivalence of the definitions: 1 implies 2}\label{sec:equivalence12} To show that Definition \ref{def:definition1} implies Definition \ref{def:definition2}, one could start from the commutators between generators and field at $x=0$, with $\kappa_{\mu}=0$, and use the method of induced representations mentioned in Section \ref{sec:algebra} to obtain the full commutators \eqref{eq:commutators_fields}. At this point, one simply proceeds backwards through the steps outlined in the previous subsections until one arrives at the transformation \eqref{eq:def2}. Another possibility is the one proposed in \cite{Simmons-Duffin:2016gjk}. We review it here with more detailed calculations. We start with the most general form of an infinitesimal conformal transformation, namely $\epsilon_{\mu}$ in Equation \eqref{eq:solution}. Using the commutators \eqref{eq:commutators_fields}, the corresponding infinitesimal variation of a field, $\delta\hat{\phi}^M(x)=-i\omega_aG^a(x)\hat{\phi}^M(x)$, is \begin{equation} \begin{split} \delta\hat{\phi}^M&=-ia^{\mu}\left(-i\partial_{\mu}\right)\hat{\phi}^M-i\alpha\left(-i\Delta-ix^{\nu}\partial_{\nu}\right)\hat{\phi}^M\\&-i\frac{\omega_{\mu \nu}}{2}\left[i\left(x^{\mu}\partial^{\nu}-x^{\nu}\partial^{\mu}\right)+S^{\mu \nu}\right]\hat{\phi}^M \\&-ib^{\mu}\left(-2ix_{\mu}\Delta+ix^2\partial_{\mu}-2ix_{\mu}x^{\rho}\partial_{\rho}-2x^{\nu}S_{\mu \nu}\right)\hat{\phi}^M. \end{split} \end{equation} We notice that \begin{equation} \begin{gathered} \partial\cdot \epsilon=\partial_{\mu}\epsilon_{\nu}\eta^{\mu \nu}=\alpha d + 2(b\cdot x) d;\\ \partial_{\mu}\epsilon_{\nu}-\partial_{\nu}\epsilon_{\mu}=2\omega_{\nu \mu} - 4b_{\nu}x_{\mu}+4b_{\mu}x_{\nu};\\ \frac{1}{2}S_{\mu\nu}\partial^{[\mu}\epsilon^{\nu]}=\frac{1}{2}S_{\mu\nu}\frac{1}{2}\left(\partial^{\mu}\epsilon^{\nu}-\partial^{\nu}\epsilon^{\mu}\right)=\frac{1}{2}\left(-\omega^{\mu \nu}S_{\mu \nu}+4b^{\mu}x^{\nu}S_{\mu \nu}\right), \end{gathered} \end{equation} where the last equality follows from the antisymmetry of $S_{\mu \nu}$ and $\omega_{\mu \nu}$.
Hence, we can write $\delta \hat{\phi}^M$ as \begin{equation} \delta\hat{\phi}^M(x)=-\left[\epsilon(x)\cdot \partial +\frac{\Delta}{d}\partial \cdot \epsilon(x)-\frac{i}{2}\partial^{[\mu}\epsilon^{\nu]}(x)S_{\mu \nu}\right]\hat{\phi}^M(x). \end{equation} In a quantum theory, a symmetry generated by $\hat{Q}_{\epsilon}$ is implemented by a unitary operator $\hat{U}=e^{-i\hat{Q}_{\epsilon}\epsilon}$ such that \begin{equation}\label{eq:unitary_trafo} {\hat{\phi}}^{\prime M}(x')=\hat{U}\hat{\phi}^M(x)\hat{U}^{-1}. \end{equation} Using the Baker–Campbell–Hausdorff formula, \begin{equation} e^{A}Be^{-A}=B+[A,B]+\frac{1}{2!}\left[A,\left[A,B\right]\right]+\ldots, \end{equation} we can write Equation \eqref{eq:unitary_trafo} as \begin{equation} \hat{U}\hat{\phi}^M(x)\hat{U}^{-1}=\hat{\phi}^M(x)-i\epsilon\left[\hat{Q}_{\epsilon}, \hat{\phi}^M(x)\right]+O(\epsilon^2). \end{equation} The commutator is nothing but minus the variation of the field: \begin{equation} i \epsilon \left[\hat{Q}_{\epsilon}, \hat{\phi}^M(x)\right]=-\delta_{\epsilon}\hat{\phi}^M=\left[\epsilon(x)\cdot \partial +\frac{\Delta}{d}\partial \cdot \epsilon(x)-\frac{i}{2}\partial^{[\mu}\epsilon^{\nu]}(x)S_{\mu \nu}\right]\hat{\phi}^M(x). \end{equation} Hence \begin{equation} \begin{gathered} {\hat{\phi}}^{\prime M}(x')=\left(1-\epsilon\cdot \partial -\frac{\Delta}{d}\partial \cdot \epsilon+\frac{i}{2}\partial^{[\mu}\epsilon^{\nu]}S_{\mu \nu}\right)\hat{\phi}^M(x)=\\ = \left(1-\frac{\Delta}{d}\partial \cdot \epsilon \right)\left(1+\frac{i}{2}\partial^{[\mu}\epsilon^{\nu]}(x)S_{\mu \nu}\right)\hat{\phi}^M(x)-\epsilon^{\mu}\partial_{\mu}\hat{\phi}^M(x)+O\left(\epsilon^2\right). \end{gathered} \end{equation} To first order in $\epsilon$, we recognize the first factor as the Taylor expansion of $\Omega^{\Delta}(x)$, cf. Equation \eqref{eq:conformal_factor}. The second factor is the infinitesimal form of $\mathcal{D}\left[R(x)\right]$, cf. Equation \eqref{eq:Run}, and depends on the representation of the Lorentz group in which $\hat{\phi}^M(x)$ transforms. The remaining term, $-\epsilon^{\mu}\partial_{\mu}\hat{\phi}^M(x)$, simply shifts the argument of the field, accounting at first order for the difference between the points $x$ and $x'$, and does not affect the identification of the two factors. The final result is valid for an infinitesimal transformation. However, since the conformal group is a Lie group, it is possible to achieve a finite transformation by composition of an infinite number of infinitesimal ones. Therefore, we can conclude that \begin{equation} {\hat{\phi}}^{\prime M}(x')=\Omega^{\Delta}(x)\mathcal{D}{\left[R(x)\right]^M}_N\hat{\phi}^N(x), \end{equation} which is the content of Definition \ref{def:definition2}. \section*{Acknowledgements} The author thanks the Bethe Center for Theoretical Physics (BCTP) of the University of Bonn for its hospitality and Daniel Galviz for useful comments. This work was supported by the Bonn-Cologne Graduate School of Physics and Astronomy (BCGS). \bibliographystyle{utphys}
\section{SYK $t$-$J$ model} Doped Mott insulators are central to understanding the unusual normal state of high-temperature superconductors, particularly the cuprates. The low hole doping pseudogap phase of the cuprates is characterized by collinear antiferromagnetic order, low carrier concentration with poor metallic conduction, a small Fermi surface \cite{frachet2020hidden}, and an anomalously large thermal Hall conductivity even in the absence of doping \cite{grissonnanche2019giant}. At large hole doping, the system is a paramagnetic Landau Fermi liquid with a large Fermi surface and spinful fermionic quasiparticles. In the presence of magnetic order, spin-$1$ magnons are natural excitations \cite{arrachea2002infinite}, but the collinear magnetic order and the low hole concentration in the underdoped phase make it unlikely that magnons or holes are the mechanism for the observed large thermal Hall conductivity \cite{grissonnanche2019giant,grissonnanche2020chiral,samajdar2019enhanced,dalla2015fractional}. Instead, in analogy with the Kitaev model \cite{kitaev2006anyons}, where emergent free (Jordan-Wigner) Majorana fermions have been predicted to result in a half-quantized thermal Hall effect \cite{yokoi2021half,kasahara2018majorana}, reportedly confirmed in the Mott insulating Kitaev material $\alpha$-RuCl$_3$, it has been suggested that the large thermal Hall effect in the cuprates \cite{grissonnanche2020chiral} could similarly arise from phonons coupling to some emergent fermionic excitations in the pseudogap phase, including at zero doping. At a certain critical value of doping, magnons are not well-defined since magnetic order vanishes before this doping is reached \cite{frachet2020hidden}, and neither are Landau quasiparticles, since transport properties \cite{varma2020colloquium,cha2020linear,guo2020linear} are inconsistent with a Landau Fermi liquid. These observations motivate us to pose the question of the stability of different quasiparticles across the doping regimes of a Mott insulator. We identify the metamorphosis of quasiparticles upon varying the doping with many-body localization transitions in the Hilbert space. We shall study the fully-connected and random $t$-$J$ model, which has all-to-all hopping $t_{ij}$ and exchange $J_{ij}$ realized as independent random variables with zero mean. At zero doping, the model with SU($M$) symmetry, and $M$ large, realizes a Sachdev-Ye-Kitaev model of fractionalized fermionic spinons \cite{SY,kitaev_talk}. We are interested here in the case with SU(2) symmetry and non-zero doping: this has emerged as a useful theoretical paradigm for studying the normal state properties of doped Mott insulators \cite{tJreview}. Recent analytical treatments of this model \cite{joshi2020deconfined,tarnopolsky2020metal} present a picture of a spin glass phase in the low doping regime (with bosonic spinon and fermionic holon quasiparticles), surviving up to a critical value of doping $p_c$, separated from a disordered Fermi liquid at large doping. However, whether any of these quasiparticles, or something different, is responsible for the anomalously large thermal Hall effect in the underdoped regime remains an open question. In this paper, we study the stability of three different quasiparticles across doping regimes in the random $t$-$J$ model of a doped Mott insulator by casting the problem as one of localization in the many-body Hilbert space \cite{altshuler1997,aman_kitaev}.
We use a numerical exact diagonalization method based on the recently developed FEAST algorithm \cite{FEAST}, which allows the calculation of large numbers of eigenstates in user-specified energy windows, as required for this approach. The Hamiltonian of the random $t$-$J$ model is \begin{equation}{\label{eq:1}} H=\sum_{i\neq j;\alpha=\uparrow,\downarrow}^{N}{\frac{t_{ij}}{\sqrt{N}}Pc^\dagger_{i\alpha}c_{j\alpha}P}+\sum_{i<j}^{N}{\frac{J_{ij}}{\sqrt{N}}\mathbf{S}_{i}\cdot\mathbf{S}_{j}}, \end{equation} where $t_{ij}$ and $J_{ij}$ are both randomly chosen from a Gaussian distribution with mean zero and unit variance. $P$ is the projector prohibiting double occupancy at any site, and $N$ is the total number of sites. A fraction $p$ of the sites is empty, corresponding to hole doping. The spin operator at each site is $\mathbf{S}_i=\frac{1}{2}\sum_{\alpha\beta}c^\dagger_{i\alpha}\boldsymbol{\sigma}_{\alpha \beta}c_{i\beta}.$ At low hole doping, the system has magnetic order and we show that spin-$1$ magnon excitations are good quasiparticles, a fact confirmed in numerous numerical \cite{arrachea2002infinite,shackleton2021quantum,Dumi21} and theoretical studies \cite{joshi2020deconfined,tarnopolsky2020metal}. Remarkably, in this doping regime, we also find that emergent Jordan-Wigner (JW) fermionic excitations are good quasiparticles, reminiscent of the Mott insulating Kitaev materials \cite{banerjee2016proximate}. In contrast, spinful fermions are poor quasiparticles. In the opposite limit of large hole doping, we confirm that spinful fermionic quasiparticles are stable while the magnons and JW fermions are not. Our study indicates, similar to the cuprates, a critical point separating these regimes, which we identify as a many-body localization transition. We find that all the above quasiparticles are bad at criticality. We formulate the problem of quasiparticle stability as one of many-body localization (MBL) \cite{altshuler1997,aman_kitaev} in the Hilbert space. Quasiparticles are approximations to the exact many-body eigenstates. Their lifetime and decay can be understood as follows. Any quasiparticle state $|\phi_i(t)\rangle$ can always be expanded as a linear superposition of the exact many-body stationary states $|\psi_j\rangle$ of the Hamiltonian, $|\phi_i(0)\rangle = \sum_{j}b_{ij}|\psi_j\rangle.$ At a later time $t,$ the quasiparticle wavefunction is $|\phi_i(t)\rangle = \sum_{j}\exp(-i E_j t)b_{ij}|\psi_j\rangle,$ where $E_j$ are the energy eigenvalues. If the weights $b_{ij}$ are significant only in a finite window of energies, $\Delta E,$ then the wavepacket decays in a time $\Delta t \sim 1/\Delta E,$ since the survival amplitude $\langle\phi_i(0)|\phi_i(t)\rangle=\sum_{j}|b_{ij}|^2\exp(-iE_jt)$ dephases on this time scale. Since the number of energy eigenstates of the interacting system generally grows exponentially with the system size while the bandwidth grows much more slowly, the energy spread of the quasiparticle will be vanishingly small in the thermodynamic limit if the number of significant $b_{ij}$ (in other words, the support size of the quasiparticle in the many-body Hilbert space) does not scale in proportion to the dimension of the Hilbert space. On the other hand, if the support size scales in proportion to the Hilbert space dimension, the quasiparticle has a finite lifetime in the thermodynamic limit. These ideas are put on a quantitative footing below. The quasiparticle stability is determined as follows. We first define our magnon and fermionic quasiparticles.
The quasiparticle stability is determined as follows. We first define our magnon and fermionic quasiparticles. We construct the set of $N$ magnon states by applying the spin raising operator at each site on the ground state $|\chi\rangle$ of only the \textit{exchange} part of the Hamiltonian, $\{c^\dagger _{\uparrow,1} c_{\downarrow,1}|\chi\rangle,\ldots,c^\dagger _{\uparrow,N} c_{\downarrow,N}|\chi\rangle\}.$ We then use Gram-Schmidt orthonormalization to construct a set of $N$ orthonormal states. We project the Hamiltonian in Eq.~\ref{eq:1} onto this magnon basis, and diagonalization yields the magnon quasiparticle states. We represent this set of $N$ orthonormal magnon states as $\{|\chi'_i\rangle\}$; a sketch of the construction is given after this paragraph. To measure the resemblance of a given magnon state $|\chi'_i\rangle$ to the exact many-body states $|\psi_j\rangle$ of the full Hamiltonian, we express it as a linear superposition, \begin{equation} |\chi'_i\rangle=\sum_{j=1}^{D}{a_{ij}|\psi_j\rangle}, \end{equation} where $D$ is the dimension of the Fock space of physically relevant states (i.e., states with no doubly occupied sites). The JW fermion creation (annihilation) operators are constructed by taking the product of the spin raising (lowering) operator at some site $i$ and a (nonlocal) 1D Jordan-Wigner string $\prod_{j<i}[p_j I + n_j \sigma^{z}_{j}],$ where $n_j = 1-p_j.$ Note that our Jordan-Wigner strings differ from the string operator introduced in Ref.~\cite{grusdt2019microscopic} for the study of spinon-chargon correlations in the antiferromagnetic $t$-$J$ model. We repeat a similar procedure for the fermionic quasiparticles. In particular, the hole states are obtained by applying annihilation operators ($c_{i\alpha}$) on the ground state of only the \textit{hopping} part of the Hamiltonian. There are $2N$ hole excitations. After the aforementioned Gram-Schmidt orthonormalization, projection of the full Hamiltonian onto this basis, and subsequent diagonalization, we obtain a set of $2N$ fermionic hole quasiparticle states. The hole quasiparticles are then expressed as a linear superposition of the exact many-body states. Note that the magnon or hole quasiparticles are not eigenstates of the random $t$-$J$ model.
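A minimal sketch of this construction (assuming the candidate states and the Hamiltonian matrix in the restricted Fock basis are available as dense arrays; in practice these objects are sparse and far larger):

```python
import numpy as np
from scipy.linalg import qr, eigh

def quasiparticle_states(candidates, H):
    """Orthonormalize candidate states and diagonalize the projected Hamiltonian.

    candidates : (D, N) array whose columns are e.g. the magnon states
                 S^+_i |chi> described in the text.
    H          : (D, D) Hamiltonian in the restricted Fock basis.
    Returns the (D, N) array of quasiparticle states.
    """
    # Gram-Schmidt orthonormalization via a thin QR decomposition.
    Q, _ = qr(candidates, mode='economic')
    # Project the full Hamiltonian into the N-dimensional candidate subspace.
    H_proj = Q.conj().T @ H @ Q
    # Diagonalizing the projected Hamiltonian yields the quasiparticle states.
    _, V = eigh(H_proj)
    return Q @ V
```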
\begin{figure*} \includegraphics[width=1\columnwidth]{magnon} \includegraphics[width=\columnwidth]{magnon_scaling} \includegraphics[width=1\columnwidth]{spinon} \includegraphics[width=\columnwidth]{spinon_scaling.pdf} \includegraphics[width=\columnwidth]{fermion} \includegraphics[width=\columnwidth]{fermion_scaling.pdf} \caption{\label{fig:overlap}Disorder averaged squares of the overlap of a magnon (a), a JW fermion (b), and a hole (c) quasiparticle state (labeled $i$) with exact many-body states of the random $t$-$J$ model (excitation energy $E_j$ measured from the disorder averaged ground state energy), for different values of the doping density $p$ and system size $N.$ For low doping density $p<0.33$ (the spin glass phase), magnons (a) and JW fermions (b) have small support sizes (sharply peaked curves in (a), (b)), indicating a many-body localized state in the Fock space (a stable quasiparticle), while spinful fermions have large support sizes (broadly peaked curves in (c)), indicating a fully delocalized state in the Fock space (a decaying quasiparticle). At high doping density $p>0.33,$ spinful fermions have a small support size (sharply peaked curve in (c)) while magnons and JW fermions have large support sizes (broadly peaked curves in (a), (b)). At the critical value of doping, $p=0.33,$ all three excitations have large widths, indicating that they are bad quasiparticles. Localization and delocalization are further confirmed by a finite size scaling analysis (d-f) of the support sizes of (d) magnon, (e) JW fermion, and (f) spinful fermionic hole quasiparticle states for different doping densities, as a function of the many-body Hilbert space dimension $D.$ The solid lines are fits to the form $\xi = f D^c.$ The abscissae show values of $1/\log_2 D,$ while the ordinates show $\log_2 \xi/\log_2 D.$ An intercept $c\approx 1$ signifies a fully many-body delocalized state (bad quasiparticle), while a small value of the intercept, $c\approx 0,$ corresponds to a many-body localized or fractal (stable quasiparticle) state. Note that there is an abrupt jump in the exponent $c$ in all three cases upon crossing the localization threshold. At every doping except $p=0.33$, one or more quasiparticles are found to be good; at $p=0.33$, none of the quasiparticles are stable.} \end{figure*} The support size $\xi_i$ of the quasiparticle state $|\chi'_i\rangle$ in the many-body Hilbert space of exact eigenstates is given by $\xi_i=1/P_i,$ where $P_i$ is the inverse participation ratio (IPR), \begin{equation} P_i=\sum_{j=1}^{D}|a_{ij}|^4. \end{equation} The stability of the magnon and the two fermionic quasiparticles is quantitatively determined by performing a finite size scaling analysis of their respective support sizes. This requires the estimation of a large number of excited states, for which the recently developed FEAST \cite{FEAST} eigensolver is used. \iffalse The FEAST eigensolver is based on a contour integration projection technique, which enables us to compute a large number of eigenvectors (including degenerate states) in arbitrarily specified energy ranges. The action of the projection operator is given by \begin{align} \frac{1}{2\pi i}\oint_{C} & \frac{dE}{EI-H}|v\rangle=\sum_{n\in C}\langle n|v\rangle|n\rangle,\label{eq:feast} \end{align} where the contour $C$ in the complex $E$-plane contains all the energy eigenvalues, say $m$ of them, lying in a specified energy range. Here $|v\rangle$ is a random vector of dimensionality $D$, and $\{|n\rangle\}$ are the eigenvectors corresponding to these energy eigenvalues. We overestimate the number of eigenvalues in the contour $C$ and choose a set of linearly independent random vectors. This provides us with a set of linearly independent eigenvectors lying within the closed contour $C$. This technique has been used by two of us in an earlier study of spinonic effects in Kitaev-Heisenberg models in the magnetically ordered phase \cite{aman_kitaev}. For low-lying excited states, we also used a faster algorithm based on Krylov-Schur subspaces \cite{krylov_schur}. In this work we have performed calculations for systems with many-body Hilbert space dimensions reaching up to $\sim 2.5\times10^6.$ \fi
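For orientation, the spectral projection at the heart of FEAST, $\frac{1}{2\pi i}\oint_{C} dE\,(EI-H)^{-1}|v\rangle$, can be illustrated by a toy implementation that discretizes the contour integral on a circle; this is a sketch of the principle only (dense linear solves, no refinement loop), not the production algorithm:

```python
import numpy as np

def contour_projection(H, center, radius, n_quad=16, n_vec=8, seed=0):
    """Crude contour-integration projector onto eigenstates inside a circle.

    Discretizes (1/2*pi*i) \\oint dE (E - H)^{-1} |v> on a circular contour
    in the complex E-plane; eigenvalues inside contribute residue 1, those
    outside contribute 0, so the columns of the result (approximately) span
    the eigenspace inside the contour.
    """
    rng = np.random.default_rng(seed)
    D = H.shape[0]
    V = rng.normal(size=(D, n_vec))          # random starting vectors
    Q = np.zeros_like(V, dtype=complex)
    for k in range(n_quad):
        theta = 2 * np.pi * (k + 0.5) / n_quad
        z = center + radius * np.exp(1j * theta)              # quadrature node
        dz = 1j * radius * np.exp(1j * theta) * 2 * np.pi / n_quad
        Q += dz / (2j * np.pi) * np.linalg.solve(z * np.eye(D) - H, V)
    # A Rayleigh-Ritz step on the columns of Q would then yield eigenpairs.
    return Q
```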
Fig.~\ref{fig:overlap}(a) shows plots of the disorder averaged \footnote{For every disorder realization $\alpha$, the overlaps $|a^{(\alpha)}_{ij}|^2$ are computed, and the combined data for all realizations is binned in equally spaced energy intervals. The plots show the disorder averaged values, $|a_{ij}|^2 = (1/N_{\text{dis}})\sum_{\alpha}|a^{(\alpha)}_{ij}|^2$, vs the average many-body energy in the bin.} squares of the overlap, $|a_{ij}|^2$, of the second lowest magnon state $|\chi'_i\rangle$ with exact many-body states $|\psi_j\rangle$ of the random $t$-$J$ model, as a function of the many-body excitation energy measured from the disorder averaged ground state energy. Curves are shown for values of the hole doping in the underdoped (spin glass) phase, the overdoped (Fermi liquid) phase, and at the putative critical doping $p=p_c=1/3$. Figs.~1(b) and 1(c) show similar plots for, respectively, the second lowest Jordan-Wigner (JW) fermion and a spinful fermionic (hole) quasiparticle. At low doping densities $p<p_c,$ the magnon and JW quasiparticles have relatively small energy widths compared to the spinful quasiparticles. At high doping densities $p>p_c,$ the magnon and JW quasiparticles have large widths compared to the spinful quasiparticles. We also note that JW quasiparticles are generally less stable than magnons, but are still good quasiparticles in the underdoped phase. The plots shown are disorder averaged over $\sim 10^3$ configurations. Low-lying states have been chosen for the stability analysis here as they are more stable than the higher energy ones and also easier to compute. The above overlap plots include the effects of both disorder and interaction related broadening, with the former not associated with decay. In the SI, we present plots of the lifetime broadening $\Delta \epsilon$ as a function of the quasiparticle energy $\epsilon,$ which is analogous to the $\text{Im}\Sigma$ studied elsewhere \cite{georges2013bad,cha2020linear}. The expected superlinear energy dependence of $\Delta \epsilon$ for low-lying Landau quasiparticles in the overdoped regime is clearly seen. In the underdoped regime, we see a flat and large $\Delta \epsilon$, as expected for bad Landau quasiparticles. For magnons, the overlaps of many-body states with the quasiparticle wavefunction essentially determine the dynamical susceptibility \cite{shackleton2021quantum}. Starting from the underdoped side, we note the broadening of magnon quasiparticles with doping as well as energy, qualitatively consistent with Ref.~\cite{shackleton2021quantum}. However, our decay width analysis does not give a clear signature of the quasiparticle stability and transition, for which we find the IPR method to be better. To place the above observations on a more quantitative footing, and also to determine the stabilities near critical doping, we perform a finite size scaling analysis (for details see \cite{aman_kitaev,altshuler1997}) of the quasiparticle support sizes in the many-body Hilbert space. \iffalse This is necessary because a finite linewidth is generically observed in a finite sized system; however, it is important to understand how the linewidth evolves as one approaches infinite size. \fi If the support size $\xi$ of a quasiparticle state scales with the dimensionality $D$ of the Fock space as $\xi \sim D,$ the quasiparticle is said to be fully many-body delocalized, and has a finite lifetime that is inversely proportional to the energy width extracted from its support size. If, on the other hand, the support size scales as $\xi \sim D^0,$ the quasiparticle state is many-body localized in the Fock space - such a state has an infinite lifetime. A third possibility may also occur where the scaling of the support size is fractal, $\xi \sim D^c$ with $c<1$ - this represents a nonergodic delocalized state, which is also a good quasiparticle for our purposes. A perturbative treatment in Ref.~\cite{rivas2002numerical} shows that the quasiparticle broadening calculated from the imaginary part of the self-energy ($\text{Im}\Sigma$) is equivalent to an IPR-based analysis.
Figure \ref{fig:overlap} (d-f) shows plots of $(\log_2\xi)/(\log_2 D)$ versus $1/\log_2 D$ for (d) magnon, (e) JW fermion, and (f) spinful fermionic hole quasiparticles. The fits are to the law $\xi\sim f D^c,$ where $f\equiv f(D)$ has an arbitrary but weaker-than-power-law dependence on $D.$ Since the interaction term generally does not connect all fermion or magnon quasiparticle states to every many-body state, even in the fully delocalized phase, we expect $f<1$ in the delocalized regime. In the many-body localized phase, although $c=0,$ the support size $\xi \geq 1$ and is in general also an increasing function of $D;$ consequently, $f(D)\geq 1$ is expected to increase monotonically with $D.$ Fitted lines with positive (negative) slope $(-\log_2(1/f))$ thus correspond to localized (delocalized) phases. For small doping densities ($p<p_c$), the magnon state (Fig.~1(d)) has a small support size, a positive slope $(-\log_2(1/f))$, and $c\approx 0$, corresponding to magnon localization and stability, while beyond the critical doping $p_c=0.33,$ the support size of the magnon scales with $c\approx 1$ and the fit has a negative slope, corresponding to full delocalization and instability. Elsewhere in the underdoped phase $p<p_c,$ we observe for the magnon a negative slope with intercept $c<1$, corresponding to a nonergodic (fractal) delocalized state, which still describes well-defined quasiparticles. The same behavior is shown by the JW fermions (Fig.~1(e)), although they are systematically more unstable than the magnons in all doping regimes. The quasiparticle stability is reversed for the spinful fermions (Fig.~1(f)), which are stable for $p>p_c$ and unstable in the underdoped regime. These scaling trends are consistent with the behavior seen in the plots (a-c) of the overlap squares. \begin{figure} \includegraphics[width=\columnwidth]{intercept} \caption{\label{fig:intercept}The MBL scaling exponent $c$ versus doping density for all three quasiparticles. A sharp jump in $c$ (to values $c\approx 1$) is seen for all quasiparticles upon crossing to the doping regime where they are unstable.} \end{figure} We now discuss the behavior in the vicinity of $p_c.$ Note that the dimensionality $D$ of the Fock space takes its maximum value at $p_c=1/3;$ in the thermodynamic limit it is easily checked that at this critical doping, $D\sim 3^N.$ This indicates three degrees of freedom at each site, signifying some emergent symmetry at this point. In comparison, at zero doping there are only two degrees of freedom at each site ($D=2^N$), and at very high values of the doping, $D$ increases more slowly than exponentially. From the scaling analysis in Fig.~\ref{fig:overlap} (d-f), it is clear that the exponent $c\approx 1$ for all three types of excitations when $p\approx p_c = 0.33,$ consistent with the understanding from earlier field-theoretical studies \cite{joshi2020deconfined} that there are no good quasiparticles at critical doping in this model. Figure \ref{fig:intercept} shows the plot of the intercept $c$ versus the doping density $p.$ A sharp step in $c$ is observed in the vicinity of $p_c$ for all quasiparticles. The change in $c$ near $p_c$ for JW fermions is evident (showing they are good quasiparticles that decay on the overdoped side), but less sharp than the jump for magnons, which happen to be better quasiparticles. Interestingly, similar jumps in the scaling exponent have been seen in MBL transitions in disordered XXZ Heisenberg chains \cite{rivas2002numerical}.
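The exponent $c$ and the slope are obtained from an elementary linear fit; a sketch with placeholder support-size data standing in for the measured values:

```python
import numpy as np

# Hypothetical measured support sizes xi at several Hilbert-space dimensions D
# (placeholders for the exact-diagonalization results).
D = np.array([2**14, 2**16, 2**18, 2**20])
xi = np.array([1.1e3, 3.9e3, 1.5e4, 5.9e4])

# Fit log2(xi)/log2(D) = c + slope * (1/log2(D)), i.e. xi ~ f * D^c,
# with slope = log2(f) = -log2(1/f) as in the text.
x = 1.0 / np.log2(D)
y = np.log2(xi) / np.log2(D)
slope, c = np.polyfit(x, y, 1)
print(f"intercept c = {c:.2f}  (c ~ 1: delocalized; c ~ 0: localized)")
```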
In conclusion, using a many-body localization treatment, we confirm the existence of two phases as a function of doping density - a spin glass phase at low doping and a Fermi liquid phase at high values of doping. At large doping density $p>0.33,$ spinful fermionic excitations are well-defined, while at small doping $p<0.33,$ magnons and emergent Jordan-Wigner fermions are well-defined. Magnons and spinful fermions of comparable energy are not observed to be simultaneously good quasiparticles at any value of the doping. Near the critical point $p=0.33,$ we find that all three of these excitations are bad quasiparticles. A central observation is that the deconfined critical point is associated with a localization transition in the many-body Hilbert space. The existence of stable Jordan-Wigner fermions in the spin glass phase could be of experimental significance. In the ground state of the underdoped (spin glass) phase, the JW quasiparticle density is fairly high, at $(1-p)/2.$ Our analysis lends support to the recent view that these quasiparticles, via their coupling to phonons, are likely responsible for the anomalously large thermal Hall conductivity seen in the underdoped cuprates (concomitant with poor electrical conductivity). \begin{acknowledgments} AK and VT acknowledge support of the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4002, and the Department of Theoretical Physics, TIFR, for computational resources. SS was supported by the U.S. National Science Foundation grant No. DMR-2002850 and by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, S.S.). \end{acknowledgments}
\section{Introduction} Given an undirected graph $G=(V,E)$, where $V$ is the set of vertices and $E$ the set of edges, and an integer $k$, we ask for a subset $S \subseteq V$ of $k$ vertices whose deletion maximizes the number of connected components in the induced subgraph $G[V \setminus S]$. Note that $G[V \setminus S]$ denotes the subgraph induced by $V \setminus S$. This problem is known as the \emph{K-way vertex cut problem} \cite{ber14,shen12}. Its recognition version can be stated as follows. Let $c(G)$ denote the number of connected components in $G$.\vspace{0.3cm} \noindent\textbf{K-way Vertex Cut Problem (KVCP)} \vspace{0.2cm} \noindent\textbf{Instance:} A graph $G = (V, E)$, and two integers $k$ and $K$.\vspace{0.05cm} \noindent\textbf{Question:} Is there a subset of vertices $S \subseteq V$, where $|S| \leq k$, the deletion of which satisfies $c(G[V\setminus S]) \geq K$?\vspace{0.3cm} The objective is to find a subset $S \subseteq V$ of at most $k$ vertices, the deletion of which partitions the graph into at least $K$ connected components. This problem is the vertex-version of the well-known \emph{Minimum k-cut problem} \cite{gol94,kar96,kam06,sar95,dow03}, where we ask for the deletion of a set of edges instead of vertices, with the purpose of maximizing the number of connected components in the induced graph. Note that the number of connected components of a graph can be computed in linear time using either breadth-first or depth-first search \cite{cor09}. The \emph{K-way vertex cut problem} has been proven to be NP-complete on general graphs \cite{shen12} through a reduction from the \emph{Maximum Independent Set problem} (\emph{MIS}). Indeed, we can easily see that any subset of vertices whose deletion separates the graph into at least $K$ components identifies an independent set of size at least $K$. Accordingly, the \emph{K-way vertex cut problem} on $G$ is a natural generalization of the \emph{MIS} on $G$. Conversely, the \emph{MIS} is the particular case of the \emph{K-way vertex cut problem} where the connected component size has to be equal to one. The two problems are equivalent if $k\geq |V\setminus I|$, where $I$ is a maximum independent set of $G$. One can hope that the \emph{K-way vertex cut problem} becomes polynomial on classes of graphs for which the \emph{MIS} is polynomially solvable. However, this is not the case for the class of bipartite graphs: in this paper, we prove that the \emph{K-way vertex cut problem} remains NP-complete even on this class of graphs. For the class of split graphs, we provide an equivalence between the \textit{KVCP} and the \textit{CNP}, which allows the \textit{KVCP} to be solved using any algorithm for the \textit{CNP}. Figure \ref{graph_classes} reviews the complexity of the \textit{KVCP} on the different classes of graphs considered in the literature and highlights the contributions of this paper. \begin{figure}[htbp] \centerline{\includegraphics[scale=0.22]{MaxNum_graph_classes.eps}} \caption{The complexity of the K-way vertex cut problem on different classes of graphs. The contributions of this paper concern the colored classes. } \label{graph_classes} \end{figure} The rest of the paper is organized as follows. We complete this section with a state of the art of the \emph{K-way vertex cut problem}, reviewing the different works that have handled this problem in the literature. Also, we give some definitions needed in the rest of the paper.
In section \ref{sec-bG}, we provide the NP-completeness proof of the problem on bipartite graphs. In section \ref{sec-sG}, we deduce its equivalence to the \textit{CNP}, while in section \ref{sec-btw} we deduce its solvability in polynomial time on weighted graphs of bounded treewidth. We close the paper with some future works in section \ref{sec-conc}. \subsection{Related works} The \emph{K-way vertex cut problem} can be considered as a parametrized version of the graph separation problem \cite{mar06}, where we ask for the vertex-separator set that partitions the graph into the maximum number of connected components. As well, it can be considered as a variant of the \emph{Critical Nodes Detection Problem (CNDP)} \cite{lal17}. This problem (\emph{CNDP}) consists in finding the subset of vertices whose removal significantly degrades the graph connectivity according to some predefined connectivity metric, such as: minimizing the pairwise connectivity in the network \cite{add13,aru09,aru11,sum11}, minimizing (or limiting to a given bound) the largest component size \cite{she12,shen12,lal15}, etc. In the case of the \emph{K-way vertex cut problem}, the metric considered is maximizing the number of connected components. Despite its importance, the \emph{K-way vertex cut problem} has received less attention in the literature than would be expected for such an important problem. On general graphs, the problem has been shown to be NP-complete \cite{shen12,ber14}, and NP-hard to approximate within a factor of $n^{(1-\epsilon)}$, for any $\epsilon> 0$ \cite{ber14}. Also, it is W[1]-hard, \emph{i.e.} not fixed-parameter tractable, with respect to the two parameters, namely the number of deleted vertices $k$ and the number of connected components in the induced graph $K$ \cite{mar06}. We recall that when we deal with parametrized problems, the input instance has an additional part called \emph{parameters}. The complexity of the problem is then measured as a function of those parameters, and the problem is said to be fixed-parameter tractable if it can be solved using algorithms that are exponential only in the size of these parameters, while polynomial in the size of the problem instance. For solving the \emph{K-way vertex cut problem} on general graphs, a \emph{Mixed-Integer Program} formulation has been presented in \cite{shen12}, where bounds and valid inequalities for the proposed formulation have been studied. As well, an evolutionary framework that uses two greedy methods embedded within two main genetic operators has been presented in \cite{ari16b}. The two operators, namely reproduction and mutation, are used to repair the obtained solutions, while the greedy methods are used to guide the search in the feasible solution space. Considering the \emph{K-way vertex cut problem} on particular classes of graphs, it has been proved to be NP-complete on split and planar graphs \cite{ber14}. It has also been shown, by the same authors, that the problem is NP-hard to approximate on split graphs \cite{ber14}, while on planar graphs it can be approximated using a polynomial-time approximation scheme (PTAS) of complexity $O(nk^2f(\epsilon))$, where $\epsilon> 0$ and $f$ is a function depending only on $\epsilon$ \cite{ber14}. We note that a PTAS outputs an approximate solution of value at least $(1- \epsilon)$ times the optimum, with a running time polynomial in the size of the problem.
Considering the parametrized complexity on these two classes of graphs, the problem remains W[1]-hard with respect to the parameter $k$ (the number of vertices to be deleted) on split graphs \cite{ber14}; however, on planar graphs, a fixed-parameter tractable algorithm of complexity $O(nk^{O(k)})$, with respect to $k$, has been proposed \cite{ber14}. On trees, $k$-hole, and series-parallel graphs, polynomial dynamic programming algorithms have been developed for solving the problem, with complexity $O(n^3)$, $O(n^{3+k})$, and $O(n^3)$, respectively \cite{she12}. Also, on graphs of bounded treewidth, the problem can be solved in polynomial time using a dynamic programming algorithm with complexity $O(nk^2w^w)$, where $w-1$ is the treewidth \cite{ber14}. \emph{Table \ref{maxnum_comp}} summarizes the different results arising from the study of the \emph{K-way vertex cut problem} on different classes of graphs.\\ \begin{table*}[!ht] \caption{The different results obtained from studying the \emph{K-way vertex cut problem} on different classes of graphs.} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Graph class}& \textbf{Complexity} & \textbf{Solving approach} & \textbf{Time}\\ \hline\hline General graphs & \multirow{4}{*}{NP-complete \cite{shen12,ber14}} & Genetic algorithm \cite{ari16b} & NC\\ \cline{1-1}\cline{3-4} \multirow{2}{*}{Planar graphs} & \multirow{4}{*}{} & PTAS \cite{ber14}& $O(nk^2f(\epsilon))$\\ \cline{3-4} & & FPT \cite{ber14}& $O(nk^{O(k)})$\\ \cline{1-1}\cline{3-4} Split graphs & &\multicolumn{2}{c|}{$\setminus$} \\ \cline{1-4} Trees & \multirow{4}{*}{Polynomial \cite{she12}} & \multirow{3}{*}{Dynamic programming \cite{she12}} & $O(n^3)$ \\ \cline{1-1}\cline{4-4} k-hole graphs & & & $O(n^{3+k})$ \\ \cline{1-1}\cline{4-4} Series-parallel graphs & & & $O(n^3)$ \\ \cline{1-1}\cline{3-4} Graphs with bounded $T_w$ & & Dynamic programming \cite{ber14}& $O(nk^2w^w)$ \\ \hline \end{tabular} \end{center} \label{maxnum_comp} \end{table*} \subsection{Definitions and notations} Let $G = (V,E)$ be an undirected graph, where $V$ is the set of vertices and $E \subseteq V \times V$ is the set of edges. Two distinct vertices $u$ and $v$ are adjacent (or neighbours) if there exists an edge $uv \in E$ connecting them; $u$ and $v$ are called the endpoints of the edge $uv$. The neighbourhood set of a vertex $v \in V$ is defined as $N(v)=\{u \in V | \{u,v\} \in E\}$. Let $deg_G(v)$ denote the degree of the vertex $v$; we have $deg_G(v)=|N(v)|$. A \emph{chain} in $G$ is a sequence of distinct vertices $\{v_1, v_2,\ldots,v_k \}$ such that $v_iv_{i+1}$ is an edge for each $1 \leq i \leq k-1$. Given a subset of vertices $S \subseteq V$, $S$ is called an \emph{independent set} if there are no edges between any pair of vertices in $S$. We use $G[S]$ to denote the subgraph of $G$ induced by $S$, and hence $G[V\setminus S]$ denotes the subgraph induced by $V\setminus S$. Also, we use $c(G,S)$ to denote the number of connected components in $G[V\setminus S]$ obtained by removing $S$ from $G$. As well, $c(G,A)$ denotes the number of connected components obtained by deleting a set of edges $A \subseteq E$.
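As a concrete illustration of this notation, $c(G,S)$ can be computed in linear time; a minimal sketch (using the networkx package, an assumption of this illustration, rather than a hand-rolled BFS):

```python
import networkx as nx

def c(G, S):
    """Number of connected components of G[V \\ S], i.e. c(G, S) in the text."""
    H = G.subgraph(set(G.nodes) - set(S))
    return nx.number_connected_components(H)

# Small usage example: deleting the middle vertex of a path on 5 vertices
# splits it into 2 components.
G = nx.path_graph(5)
assert c(G, {2}) == 2
```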
A graph $G=(V,E)$ is a \emph{bipartite graph} if the vertex set $V$ can be divided into two disjoint subsets $V_1$ and $V_2$, such that every edge $e \in E$ has one endpoint in $V_1$ and the other endpoint in $V_2$. Each subset, $V_1$ or $V_2$, forms an independent set of $G$. $G$ is then denoted $G=(V_1,V_2,E)$, where $n_1 = |V_1|$, $n_2 = |V_2|$ and $n_1 + n_2 = n$. $G$ is said to be a complete bipartite graph, denoted $K_{n_1,n_2}$, if each vertex in $V_1$ is adjacent to all vertices in $V_2$. A graph whose vertex set can be partitioned into an independent set and a clique is called a split graph. \section{Bipartite graphs} \label{sec-bG} In this section, we consider the \emph{K-way vertex cut problem} on bipartite graphs. This case is relevant when the network to be decomposed into connected groups or communities has a bipartite structure, which is the case, for example, of users vs files in a P2P system, traders vs stocks in a financial trading system, conferences vs authors in scientific publication networks, and so on. In the following, we show that the \emph{K-way vertex cut problem} remains NP-complete even on this class of graphs. In order to establish the complexity proof, we first introduce the following transformation of the \emph{k-cut problem} on general graphs \cite{gar79} to the \emph{K-way vertex cut problem} on bipartite graphs. \vspace{0.3 cm} \noindent\textbf{\textit{The K-cut problem.}} Given a graph $G=(V,E)$ and an integer $K$, find a minimum subset of edges $A \subseteq E$ whose removal partitions the graph into at least $K$ connected components, \emph{i.e.} such that $c(G, A)\geq K$. This problem is NP-complete on general graphs \cite{gar79}, and its recognition version asks whether there exists an edge-cut set $A$ with $|A|\leq B$, for a given bound $B$. We give a polynomial-time reduction from the \emph{k-cut problem} to the \emph{K-way vertex cut problem}. Given an instance of the \emph{k-cut problem} on a general graph $G=(V,E)$, we define an instance of the \emph{K-way vertex cut problem} on a bipartite graph $G'=(V',E')$ as follows: \begin{enumerate} \item $G'$ contains all vertices and all edges of $G$, \emph{i.e.} $V \subseteq V'$ and $E \subseteq E'$. \item For each vertex $v \in V$, if $deg_G(v) \geq 2$, we add to $G'$ a chain $p_v=\{v_1,\dots,v_k\}$ of $k$ vertices, such that $v_1$ coincides with $v$ (see \emph{Figure \ref{proof_bipartite}}). \item For each edge $uv \in E'$, we add a vertex $x \in V'$ such that $uv$ is replaced with two new edges $ux, xv \in E'$, \emph{i.e.} we replace each edge $uv$ by a chain $\{u,x,v\}$ such that $x$ is an added vertex. We denote by $U$ the set of all added vertices $x$ for which $ux,xv \in E'$ and $u,v \in V$. \item For each chain $p_v=\{v_1,x_1,v_2,x_2,\dots,v_k\}$ we add two edges $xx_1, x'x_1 \in E'$, where $x_1\in p_v$ and $xv,vx'$ are two edges sharing the vertex $v\in V$ (see \emph{Figure \ref{proof_bipartite}}). Also, we add edges $vv_i$, where $2\leq i\leq k$, and $v_ix_{i+1}$, $x_iv_{i+1}$, where $1\leq i\leq k$, for each chain $p_v$. \end{enumerate} \begin{figure}[htbp] \centerline{\includegraphics[scale=0.18]{Proof_bipartite.eps}} \caption{The reduction \emph{k-cut problem $\propto$ K-way vertex cut problem} on bipartite graphs. The added vertices are the circled ones, and we have $U=\{x,x'\}$.} \label{proof_bipartite} \end{figure} Note that removing vertices of $V$ from $G'$ does not disconnect the graph $G'$; $G'$ becomes disconnected only by removing vertices of $U$. Also, it is clear that the transformation can be done in polynomial time, and that the graph $G'$ is bipartite. Now, we prove the following theorem.\vspace{0.2cm} \begin{theorem} \label{NP-C bipartite} The K-way vertex cut problem is NP-complete on bipartite graphs.
\end{theorem} \begin{proof} The \emph{K-way vertex cut problem} is in \emph{NP} since, given a graph, we can compute in polynomial time the number of connected components in the induced graph after deleting $k$ vertices. Now we prove that the \emph{K-cut problem} on general graphs $\leq_p$ the \emph{K-way vertex cut problem} on bipartite graphs. Given an instance $I$ of the \textit{K-cut problem} on a general graph $G=(V,E)$, we construct an instance $I'$ of the \emph{K-way vertex cut problem} on a bipartite graph $G'=(V',E')$ as described in (1)-(4). We show that $G$ has a cut-edge set $A \subseteq E$ of $k$ edges such that $c(G,A) \geq K$ if and only if $G'$ has a cut-vertex set $S \subseteq V'$ of $k$ vertices such that $c(G',S) \geq K$. First, let $A \subseteq E$ be a solution of $I$, so $A$ contains no more than $k$ edges whose deletion disconnects $G$ into at least $K$ components. In $I'$, we select the $k$ vertices of $S$ as follows: for each edge $uv \in A$, we select from $G'$ the corresponding vertex $x \in U$ such that $ux,xv \in E'$. By deleting the vertices in $S$ from $G'$, no more than $k$ vertices are deleted ($|S|\leq k$) and at least $K$ connected components are generated ($c(G',S) \geq K$). Hence, $S$ is a solution of the \emph{K-way vertex cut problem} on $G'$. Conversely, we prove that if there is a cut-vertex set $S$ of size $k$ for $G'$, then we have a cut-edge set of size $k$ for $G$. Let $S$ be a solution of $I'$, so $S$ contains a set of $k$ vertices whose deletion disconnects $G'$ into at least $K$ components. We can easily observe that $G'$ becomes disconnected only by removing vertices of $U$; thus, we may assume that the solution satisfies the condition that only vertices from $U$ are deleted. Indeed, if the condition is not satisfied, then $S$ contains original vertices of $G$ and/or vertices from the added chains $p_v$. Given such a solution, an equivalent solution satisfying the condition that only vertices from $U$ are deleted can be constructed in polynomial time. In doing so, we swap each such vertex $v \in S$ with a vertex $u \in U$, \emph{i.e.} we keep $v$ and delete $u$ instead, and hence we get an induced graph with possibly more components, since deleting vertices from $U$ can disconnect $G'$ and generate further components. Thus, the obtained solution is at least as good as $S$, and satisfies that only vertices from $U$ are deleted. Now, let $S \subseteq U$ be a solution of $I'$. In $I$, we select the $k$ edges of $A$ as follows: for each vertex $v \in S$, we select from $G$ the edge $uw \in E$ such that $uv,vw \in E'$. By deleting $A$ from $G$, no more than $k$ edges are deleted, $|A|\leq k$, and at least $K$ connected components are generated. Therefore, $A$ is a solution of the \emph{K-cut problem} on $G$.\\This completes the proof. \end{proof} \begin{remark} It is clear that for the complete bipartite graph $K_{n_1,n_2}$ the \emph{K-way vertex cut problem} is trivial: the solution is obtained by deleting the partition of smaller cardinality if $\min(n_1, n_2) \leq k$. Otherwise, the solution is to delete any $k$ vertices, which results in only one component. \end{remark}
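The heart of the reduction is step (3): subdividing an edge $uv$ with a vertex $x$ makes deleting $x$ in $G'$ act exactly like deleting the edge $uv$ in $G$. A simplified sketch of this step alone (omitting the chains of steps (2) and (4); networkx assumed):

```python
import networkx as nx

def subdivide_edges(G):
    """Step (3) of the construction: replace each edge uv by a chain u-x-v.

    Returns the subdivided graph and the map U from each original edge to
    its added vertex. Deleting x_uv from G' disconnects u from v exactly
    as deleting the edge uv does in G.
    """
    Gp = nx.Graph()
    Gp.add_nodes_from(G.nodes)
    U = {}
    for u, v in G.edges:
        x = ('x', u, v)                  # the added vertex for edge uv
        Gp.add_edges_from([(u, x), (x, v)])
        U[(u, v)] = x
    return Gp, U

# Mapping an edge-cut A of G to a vertex-cut S of G':
G = nx.cycle_graph(4)
Gp, U = subdivide_edges(G)
A = [(0, 1), (2, 3)]                     # cutting these edges leaves 2 components
S = [U[e] for e in A]
Gp.remove_nodes_from(S)
assert nx.number_connected_components(Gp) == 2
```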
\section{Split graphs} \label{sec-sG} Considering split graphs, we show that the \emph{K-way vertex cut problem} is equivalent to the \textit{Critical Node Problem} (\emph{CNP}) \cite{aru09}. \vspace{0.1cm} \begin{theorem} \label{splitg} The \emph{K-way vertex cut problem} and the CNP are equivalent on split graphs. \end{theorem} \begin{proof} Given a split graph $G=(V,E)$ and a set of vertices $S \subseteq V$, we can easily notice that $G[V\setminus S]$ always consists of at most one non-trivial connected component plus isolated vertices, if any (see \textit{Figure \ref{proof_split}}). Recall that $G$ is a split graph if the set of vertices can be partitioned into two subsets $V_1$ and $V_2$, $V=V_1 \cup V_2$, where $V_1$ is an independent set and $V_2$ is a clique. \\We recall that the recognition version of both the \emph{CNP} and the \emph{K-way vertex cut problem} seeks a set of at most $k$ vertices, the deletion of which, respectively, minimizes the pairwise connectivity (for the \textit{CNP}) or maximizes the number of components (for the \textit{K-way vertex cut problem}) in the remaining graph. According to the value of $k$, two cases can be considered: \begin{figure}[htbp] \centerline{\includegraphics[scale=0.18]{Proof_split.eps}} \caption{Deleting any subset of vertices (e.g., the circled vertices) from a split graph (see (a)) results in a non-trivial connected component and isolated vertices (see (b)).} \label{proof_split} \end{figure} \emph{Case 1:} \emph{$k\geq |N(V_1)|$}. This is a trivial case, where the optimal solution, for both variants, is to delete the vertices of $N(V_1)$ and any $k-|N(V_1)|$ vertices from $V_2$. We then obtain a residual graph that has $|V_1|$ isolated vertices and a connected component of size $|V_2|-(k - |N(V_1)|)$. \emph{Case 2:} \emph{$k < |N(V_1)|$}. In this case, we consider an optimal solution for the \emph{CNP} and prove that it is also an optimal solution for the \emph{K-way vertex cut problem}, and vice versa. Given an optimal solution $s^*$ for the \emph{CNP} on a split graph $G$, this solution finds a set of vertices $S \subseteq V$ so that the non-trivial connected component of $G[V \setminus S]$ is as small as possible and the number of surviving isolated vertices of the independent set is as large as possible. Therefore, we note that for $s^*$ only vertices in $V_2$ are removed from $G$ (\emph{i.e.,} $S \subseteq V_2$), and given any optimal solution for the \emph{CNP}, an equivalent solution satisfying this condition ($S \subseteq V_2$) can be constructed in polynomial time (for a proof see \cite{add13}). On the other hand, to solve the \emph{K-way vertex cut problem} we aim to obtain a maximal number of components in the residual graph. In doing so, we seek to maximize the number of isolated vertices from $V_1$ once the critical vertices have been deleted. For this purpose, only vertices in $V_2$ are removed from $G$, which is exactly the solution $s^*$. Hence, the solution $s^*$ is also an optimal solution of the \emph{K-way vertex cut problem}. \\Therefore, an optimal solution of one of the two problems is an optimal solution of the other, and so the \emph{CNP} and the \emph{K-way vertex cut problem} are equivalent. \end{proof} According to \emph{Theorem \ref{splitg}}, and since the \emph{CNP} is NP-complete on split graphs \cite{add13}, we have the following corollary:\vspace{0.2cm} \begin{corollary} The \emph{K-way vertex cut problem} remains NP-complete on split graphs. \end{corollary} \vspace{0.3cm} This is also what has been proven by Berger \emph{et al.} \cite{ber14} through a reduction from the k-clique problem.
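The structural fact used in this proof, namely that deleting any vertex subset from a split graph leaves at most one non-trivial connected component plus isolated vertices, is easy to verify exhaustively on small instances; a sketch (networkx assumed, with a hypothetical small split graph):

```python
import networkx as nx
from itertools import combinations

# A small split graph: clique {0,1,2} plus independent set {3,4} attached to it.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (0, 3), (1, 4)])

for r in range(len(G)):
    for S in combinations(G.nodes, r):
        H = G.subgraph(set(G.nodes) - set(S))
        nontrivial = [c for c in nx.connected_components(H) if len(c) > 1]
        # At most one component of size > 1 survives; all others are isolated.
        assert len(nontrivial) <= 1
```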
\section{Other results} \label{sec-btw} We mentioned above that the \textit{K-way vertex cut problem} is polynomially solvable on graphs of bounded treewidth \cite{ber14}; the graphs considered there are unweighted. In this section, we deduce that it remains polynomially solvable in the case of weighted graphs of bounded treewidth. In a weighted graph, a weight $w_i \geq 0$ is associated with each node $v_i \in V$. In this case, we ask for a subset of nodes of total weight (rather than cardinality) no more than $k$, whose removal maximizes the number of connected components in the induced graph. In \cite{add13}, the authors studied the \emph{MaxNumSC} problem, for \emph{Maximizing the Number of Small Components}, which can be formulated as follows. Let $f^c(S)$ be the function that returns the number of connected components in $G[V\setminus S]$ with cardinality at most $c$.\vspace{0.3cm} \textbf{Input:} A graph $G = (V, E)$, and two integers $c$ and $k$. \textbf{Output:} $\underset{S \subseteq V}{arg max}$ $f^c(S)$, where $|S|\leq k$.\vspace{0.3cm} Given a graph $G=(V,E)$ and two positive integers $k$ and $c$, the \emph{MaxNumSC} problem consists in maximizing the number of connected components of cardinality at most $c$ by deleting $k$ vertices from $G$. The authors showed that the problem is polynomially solvable on weighted graphs of bounded treewidth. It is clear that the \emph{K-way vertex cut problem} is the special case of the \emph{MaxNumSC} problem where $c=|V|$, and as the \emph{MaxNumSC} problem is polynomially solvable on weighted graphs (where $w_i \geq 0, \forall v_i \in V$), we have the following corollary.\vspace{0.2cm} \begin{corollary} The \emph{K-way vertex cut problem} is polynomially solvable on weighted graphs of bounded treewidth. \end{corollary} \section{Conclusion and future works} \label{sec-conc} In this paper, we studied the complexity of the \emph{K-way vertex cut problem} on some particular classes of graphs, namely bipartite and split graphs. This problem asks for the subset of vertices of a graph the deletion of which results in the maximum number of connected components in the induced subgraph. We proved its NP-completeness on bipartite graphs, while on split graphs we established its equivalence to the well-known \textit{Critical Node Problem} (\textit{CNP}). This allows any solving method for the \textit{CNP} to be used for solving the \textit{K-way vertex cut problem}, and vice versa. The problem still needs more investigation, in terms of both complexity and solving methods. For future work, we can consider it on subclasses of (or classes related to) bipartite and split graphs, which can help provide bounds for the problem hardness. In fact, we have already started to do so by considering bipartite-permutation graphs (a subclass of the bipartite graphs), for which we found that the problem can be solved polynomially using a dynamic programming approach. The problem can also be investigated on other important classes of graphs, such as chordal graphs, disk graphs, etc., which will allow us to find different applications of this problem in real-world networks.
\section{Introduction} Over the past decades, skin cancer has emerged as a pressing challenge in public health, accounting for 5.4 million new cases in the United States~\cite{rogers2015incidence,sarker2018slsdeep}. Melanoma, the most serious type of skin cancer, accounts for 75\% of skin cancer deaths~\cite{li2018dense}. Due to the dauntingly high incidence and mortality rates, early detection and prevention of skin cancer are critical. In response, recent studies have developed automated skin cancer diagnosis approaches, which achieved performance on par with board-certified dermatologists~\cite{esteva2017dermatologist,haenssle2018man}. However, these approaches formulated skin cancer diagnosis as a simple classification task, overlooking the potential benefit of lesion segmentation. In essence, the category of a skin lesion is determined by its asymmetry, border, intensity, and physical size~\cite{sharmeela2017classification}, whose assessment relies heavily on accurate lesion segmentation results. A faithful classification, on the other hand, can also serve as crucial guidance for lesion segmentation by extracting discriminant lesion features from the dermoscopic image. Naturally, in this paper, we seek to answer this critical question: \emph{How to beneficially integrate the task of segmentation with classification for skin cancer diagnosis?} To address this question, we propose a single generic model that jointly learns the classification and segmentation tasks in skin images. Our framework, called \textbf{M}ulti-\textbf{T}ask \textbf{TransUNet} (\textbf{MT-TransUNet}), inherits the merits of both Convolutional Neural Networks and Transformers to capture both local details (\eg skin lesion color and texture) and long-range context (\eg skin lesion shape and physical size) in a multi-task learning pipeline. What makes our framework most distinct from the latest medical transformers (\eg \cite{chen2021transunet}) is that instead of tailoring the model for different downstream tasks separately, we jointly learn the two complementary tasks with the new design of classification and segmentation tokens. To mediate these two types of tokens, consistencies are further enforced at different levels, \ie on the intermediate activated attention map as well as the final segmentation outputs, which we find is crucial for successful multi-task learning in skin images. In contrast to the existing approaches, MT-TransUNet offers the following \textbf{three unique advantages}. (1) Aggregating long-range dependencies in the image. Unlike conventional CNN architectures, the self-attention layers in the Transformer architecture are capable of capturing long-range dependencies~\cite{dosovitskiy2020image,chen2021transunet}, which sheds new light on identifying larger skin lesions in dermoscopic images. (2) Exploiting both pixel- and image-level annotation. Although Y-Net~\cite{mehta2018net} attempted to combine segmentation and classification back in 2018, there was little performance gain owing to the asymmetrical learning objectives of the two tasks. We have overcome this barrier by regulating the internal consistency between the class attention map and the lesion segmentation map. (3) Demonstrating superior computational efficiency. The previous state of the art, MB-DCNN~\cite{xie2020mutual}, was extremely time-consuming and resource-intensive in training and testing, as the three modules in its architecture must be trained individually.
Besides outperforming MB-DCNN by a large margin, our MT-TransUNet also presents a remarkable improvement in efficiency thanks to the design of a shared encoder. We validate the effectiveness of MT-TransUNet on three public datasets, \ie the ISIC-2017, ISIC Additional, and PH2 datasets. Our experiments show that: (1) Combining segmentation and classification is capable of boosting the performance of each task. (2) Regulating the internal consistency between the class attention map and the lesion segmentation map can mediate the learning objectives across classification and segmentation. (3) MT-TransUNet is more data-efficient and model-efficient in training and testing than prior art. These results are attributable to the following key observation: \textit{Skin lesion classification and segmentation share a similar goal---recognizing lesions from the image and distinguishing lesion categories---thus, it is advantageous to train them jointly with a shared encoder network.} To summarize, our contribution is three-fold: \begin{enumerate} \item We propose a single generic multi-task framework, named MT-TransUNet, that segments and classifies skin lesions simultaneously by exploiting the potential of dermoscopy images with either pixel-level or image-level annotation. \item We introduce dual-task and attended region consistency losses for mediating the classification and segmentation heads without pixel-level annotation, enhancing the robustness of the model when it encounters the same image undergoing various data augmentations. \item Our MT-TransUNet exceeds the previous state of the art~\cite{xie2020mutual} on both segmentation and classification tasks while, more importantly, preserving compelling computational efficiency regarding model parameters (48M~vs.~130M) and inference speed (0.17s~vs.~2.02s per image). \end{enumerate} \section{Related Works} \subsection{Skin Image Analysis} \paragraph{Skin Lesion Segmentation:} Before the deep learning era, most methods tended to leverage traditional strategies, including thresholding, active contour models, and clustering \cite{hemalatha2018active,ravichandran2009color,yogarajah2010dynamic}. In the deep learning era, deep neural networks have gradually taken the place of traditional feature extraction strategies, tackling skin image segmentation in an end-to-end manner. \cite{nasr2017dense} puts forward a 19-layer neural network to segment skin images in the absence of prior knowledge of the given datasets. \cite{mirikharaji2018star} incorporates prior knowledge about the structure of target objects: the authors encode the star shape prior with a novel loss function, which penalizes non-star-shaped predictions. \cite{sarker2018slsdeep} proposes an end point error to address the confusion of segmentation predictions around boundaries. \cite{liu2021skin} simultaneously predicts the segmentation mask and edge map to boost the performance. \cite{dong2021fac} leverages a feedback fusion block to concatenate later features with earlier ones, enabling the multiplexing of parameters. \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{image/architecture.pdf} \caption{The overall architecture of MT-TransUNet. It mainly comprises ``cls'' and ``seg'' tokens to tackle the tasks of classification and segmentation. Dual-task consistency is introduced to exploit the images without segmentation ground truth. We also put forward an attended region consistency between the segmentation and classification heads.
``T'', ``LN'', ``MSA'', and ``FFN'' denote the Transformer layer, Layer Normalization, Multi-head Self-Attention, and Feed-Forward Network, respectively. $\ell_{\text{LAB}}$ and $\ell_{\text{UNLAB}}$ denote the supervision for datasets with or without segmentation masks.} \label{fig:architecture} \end{figure*} \paragraph{Skin Image Classification:} Traditionally, handcrafted features were used to train a classifier, such as a support vector machine~\cite{alquran2017melanoma}, K-nearest-neighbors~\cite{narayanan2017automatic}, etc. However, the design of handcrafted features is time-consuming, and such low-capacity features hamper the improvement of classification performance. With the flourishing development of deep learning, many recent works leverage deep neural networks to design classifiers. For example, \cite{zhang2018skin} proposes attention residual learning to improve the ability of discriminative representation. \cite{hagerty2019deep} takes advantage of both hand-crafted and deep features extracted by a ResNet to construct a melanoma recognizer. \paragraph{Multi-task Learning:} Compared with the works that treat the segmentation and classification tasks separately, there are few works dealing with both tasks in one model. \cite{yu2016automated} uses a two-stage framework that first segments the skin regions, which are subsequently cropped for recognition. \cite{gonzalez2018dermaknet} leverages the predicted segmentation masks to train a better classifier. Actually, both of them ignore the benefits that classification brings to segmentation. MB-DCNN~\cite{xie2020mutual} uses a mutual bootstrapping manner to train the segmentation and classification networks. Nevertheless, features cannot be shared between the two tasks because they are not trained in an end-to-end manner. \subsection{Vision Transformer} The Transformer was first proposed to address problems in natural language processing. It mainly consists of a self-attention module to capture long-range dependencies and a feed-forward module to project features to a new latent space. Recently, ViT~\cite{dosovitskiy2020image} was proposed to tackle natural image recognition tasks based on the Transformer. It first splits the images into several non-overlapping patches, then utilizes the Transformer to calculate global information among the tokens. An additional token is appended for recognition tasks. Inspired by this, many researchers have leveraged this fashion for many purposes, such as TransUNet~\cite{chen2021transunet}, TransReID~\cite{he2021transreid}, etc. Furthermore, there are some variations of the Transformer, like the Swin Transformer~\cite{liu2021swin}, which employs shifted windows to calculate local self-attention, and PVT~\cite{wang2021pyramid}, which combines the pyramid network design with the Transformer to capture features from multiple stages. \subsection{Consistency Regularization} Consistency regularization is widely used in semi-supervised learning. For example, \cite{tarvainen2017mean} designs a teacher-student consistency to take advantage of datasets without segmentation labels. In detail, the student model is optimized by the consistency loss with regard to the targets predicted by the teacher model. \cite{luo2020semi} takes advantage of the consistency loss to boost detection performance. \cite{zamir2020robust} puts forward cross-task consistency based on inference-path invariance.
\cite{luo2020semi} introduces the prediction of the level set function as an additional task and constructs a dual-task consistency. \section{Method} In this section, we first introduce the basic design of MT-TransUNet, which incorporates the segmentation and classification tasks in one model, followed by the dual-task consistency (DTC), which is designed for leveraging unlabeled datasets. Then we further introduce the attended region consistency (ARC) between the segmentation and classification heads. Finally, the overall loss function is presented. \subsection{Multi-Task TransUNet} The single-task TransUNet~\cite{chen2021transunet} follows the basic design of UNet~\cite{ronneberger2015u}, which utilizes skip connections between corresponding layers to enhance local details in feature maps. Due to the intrinsic locality of convolution, UNet struggles to model long-range dependencies and is thus ill-suited to situations such as segmenting large regions in skin images. Under this circumstance, TransUNet introduces a strong encoder by incorporating several Transformer layers during feature extraction (refer to the green rectangles in Figure \ref{fig:architecture}). Specifically, given a skin image $I \in \mathbb{R}^{H \times W \times C}$, the encoder first downsamples $I$ four times using ResNet50~\cite{he2016deep} and generates a feature map $F \in \mathbb{R}^{H^{\prime} \times W^{\prime} \times C^{\prime}}$. The generated feature map is further split into $N$ non-overlapping patches $\{\mathbf{x}^{i}_{s} \in \mathbb{R}^{P^{2} \cdot C^{\prime}} | i=1,2,...,N\}$ as segmentation tokens, each of which is of size $P \times P$, with $N = \frac{H^{\prime}W^{\prime}}{P^{2}}$. Here we append an additional zero-initialized classification token $\mathbf{x}_{c}$ to build up a multi-task framework. Each of the tokens is mapped to a latent $D$-dimensional embedding space through a learnable linear projection $\mathbf{E}$. Different from the original implementation of TransUNet, we discard the positional embedding for the convenience of multi-scale inputs during testing. We highlight that the deep backbone can obtain positional cues through padding~\cite{islam2020much}. Hence, the list $\mathbf{z}_{0}$ of embedded tokens is as follows: \begin{equation} \label{equa:linear projection} \mathbf{z}_{0} = [\mathbf{x}^{1}_{s}\mathbf{E}; \mathbf{x}^{2}_{s}\mathbf{E}; \cdots; \mathbf{x}^{N}_{s}\mathbf{E}; \mathbf{x}_{c}\mathbf{E}] \end{equation} The Transformer layer mainly consists of Multi-head Self-Attention (MSA), to correlate the tokens through long-range dependencies, and a Feed-Forward Network (FFN), to project features to a new latent space. The outputs of the $l$-th layer are as follows: \begin{equation} \label{equa:MSA} \mathbf{z}^{\prime}_{l} = \text{MSA}(\text{LN}(\mathbf{z}_{l-1})) + \mathbf{z}_{l-1} \end{equation} \begin{equation} \label{equa:FFN} \mathbf{z}_{l} = \text{FFN}(\text{LN}(\mathbf{z}^{\prime}_{l})) + \mathbf{z}^{\prime}_{l} \end{equation}
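A minimal PyTorch sketch of one such pre-norm Transformer layer, following Eqs.~\ref{equa:MSA} and \ref{equa:FFN} (the width, number of heads, and MLP ratio are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim))

    def forward(self, z):
        # z: (B, N + 1, dim) -- N segmentation tokens plus the cls token (last).
        h = self.ln1(z)
        z = z + self.msa(h, h, h, need_weights=False)[0]  # MSA residual branch
        z = z + self.ffn(self.ln2(z))                     # FFN residual branch
        return z
```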
After being enhanced by $n$ successive Transformer layers, the generated tokens are separated into two parts. For the \textbf{classification} token, we simply use a fully connected layer to reduce the feature dimension (to the number of categories), and then employ a cross-entropy loss $\ell_{\text{CLS}}$ for supervision. The \textbf{segmentation} tokens are upsampled four times in a cascade manner with bilinear interpolation and two convolutional neural networks. Specifically, in the first three steps, the feature maps of the same size in the ResNet50 encoder are concatenated along the feature channels using skip connections to enhance the representation. Finally, we upsample the last feature map, resulting in two predictions: a segmentation mask $M$ and a level set function $L$, both of which are of spatial size $H \times W$. Please note that the level set function is introduced for conducting dual-task consistency (described in detail in the next section). In detail, the segmentation masks distinguish foregrounds and backgrounds, while the level set functions capture geometric active contour and distance information. The level set function is calculated as follows: \begin{equation} \label{equa:level set function} \mathcal{T}(x)=\left\{\begin{array}{lc}-\inf _{y \in \partial S}\|x-y\|_{2}, & x \in \mathcal{S}_{\text {in }} \\ 0, & x \in \partial \mathcal{S} \\ +\inf _{y \in \partial S}\|x-y\|_{2}, & x \in \mathcal{S}_{\text {out }}\end{array}\right. \end{equation} Specifically, the generated segmentation mask is supervised by a cross-entropy loss, and the predicted level set function is supervised by an L2 loss: \begin{equation} \label{equa:mask loss} \ell_{\text{MASK}} = \text{CrossEntropyLoss}(M, M_{\text{GT}}) \end{equation} \begin{equation} \label{equa:lsf loss} \ell_{\text{LSF}} = \text{L2}(L, L_{\text{GT}}) \end{equation} where $M_{\text{GT}}$ and $L_{\text{GT}}$ are the ground truths of the segmentation mask and level set function, respectively. In this manner, the predictions of segmentation and classification are generated at the same time, which distinguishes our model from MB-DCNN~\cite{xie2020mutual}, whose mutual-bootstrapping fashion is time-consuming. \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{image/dual_task_consistency.pdf} \caption{Dual-task consistency. For a dataset without segmentation labels, the predicted level set function is first transformed back to a mask representation, and a consistency loss is then enforced with the predicted segmentation mask.} \label{fig:dual_task_consistency} \end{figure} \subsection{Dual-Task Consistency} Generally, annotating pixel-level segmentation labels is time-consuming compared with annotating classification labels. Moreover, we observe that there is an additional set, designed only for classification tasks and consisting of 1,320 skin images, which has been widely used in previous methods~\cite{xie2020mutual,xie2019semi} to boost the classification performance. However, segmentation labels are not available for this part, which hinders the training of such a multi-task framework. One naive solution is to optimize only the classification branch when training on the additional set. However, we suggest that the additional dataset can indeed provide cues for the segmentation task by leveraging a dual-task consistency between the segmentation mask $M$ and the level set function $L$. Empirically, a robust model is capable of achieving high consistency between correlated predictions~\cite{zamir2020robust}. As demonstrated in Figure \ref{fig:dual_task_consistency}, we first multiply the generated level set function by $k$ (a large value), and employ the sigmoid function to transform it back to the mask representation. The calculation is as follows: \begin{equation} \label{equa:transform back} M^{\prime} = \text{Sigmoid}(k \cdot L) \end{equation} To formulate the dual-task consistency, we utilize an L2 loss to constrain the two masks: \begin{equation} \label{equa:two masks} \ell_{\text{DTC}} = \text{L2}(M, M^{\prime}) \end{equation}
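A sketch of how the level set target of Eq.~\ref{equa:level set function} and the consistency term might be computed (scipy and PyTorch assumed; the sign inside the sigmoid is an assumption chosen so that the lesion interior, where the level set is negative, maps close to 1, matching the mask convention):

```python
import torch
from scipy.ndimage import distance_transform_edt

def level_set_target(mask):
    """Signed distance map of a binary numpy mask: negative inside the lesion,
    zero on the boundary, positive outside, as in the level set definition."""
    inside = distance_transform_edt(mask)        # distance to the background
    outside = distance_transform_edt(1 - mask)   # distance to the foreground
    return outside - inside

def dtc_loss(mask_logits, lsf_pred, k=1500.0):
    """Dual-task consistency between the mask head and the level-set head."""
    m = torch.sigmoid(mask_logits)               # predicted mask M
    # Transform L back to a mask; the paper writes Sigmoid(k*L), and the minus
    # sign here reflects the negative-inside convention of level_set_target.
    m_from_lsf = torch.sigmoid(-k * lsf_pred)
    return torch.mean((m - m_from_lsf) ** 2)     # L2 consistency
```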
\begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{image/attended_region.pdf} \caption{The attention maps of classification tokens and the segmentation results focus mainly on the foregrounds. We propose the attended region consistency loss to address the distraction phenomenon in the attention maps.} \label{fig:attended regions} \end{figure} \subsection{Attended Region Consistency} When training MT-TransUNet, we observe an interesting phenomenon: the two tasks are inclined to attend to similar regions, such as the foregrounds of skin images (see Figure \ref{fig:attended regions}). However, there are some bad cases where the classification token attends to background regions. Through visualization, we find that the classification branch is sensitive to hair and artificial objects (such as rulers and scissors), which causes a distraction problem. According to the ABCD rules~\cite{kasmi2016classification}, it is the \textit{Asymmetry, Border, Color, and Diameter} of the lesion that determine the type of skin cancer, so the distraction problem should be penalized. Here we introduce an attended region consistency loss between the predicted segmentation masks and the attention maps of the classification token. We first average the attention maps over the head channel and denote the resulting map as $A = (a_{s}^{1}, a_{s}^{2}, ..., a_{s}^{P^{2}}, a_{c})$, where $\sum_{i=1}^{P^{2}} a_{s}^{i} + a_{c} = 1$. We discard $a_{c}$ and normalize the attention values of the segmentation tokens as follows: \begin{equation} \label{equa:normalize} A^{\prime} = (a_{s}^{1}, a_{s}^{2}, ..., a_{s}^{P^{2}})\quad, \quad a^{i}_{s} \gets a^{i}_{s} \bigg/ \sum_{j=1}^{P^{2}} a_{s}^{j} \end{equation} We also downsample the predicted mask $M$ to $\tilde{M} \in \mathbb{R}^{ P \times P }$ with bilinear interpolation (note that $\tilde{M}$ is distinct from the mask $M^{\prime}$ recovered from the level set function). The attended region consistency loss is then \begin{equation} \label{equa:attended region consistency} \ell_{\text{ARC}} = (1 - \tilde{M}) \cdot A^{\prime} \end{equation} A lower $\ell_{\text{ARC}}$ indicates that the attended regions are consistent with the predicted foregrounds in the segmentation masks.
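A minimal sketch of this loss (Eqs.~\ref{equa:normalize} and \ref{equa:attended region consistency}) is given below; the tensor shapes and names are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn.functional as F

def arc_loss(attn, mask_prob, P=14):
    """Attended region consistency (Eqs. 10-11).
    attn:      (B, N+1) attention of the classification token over all
               tokens, already averaged over heads; last entry is a_c.
    mask_prob: (B, 1, H, W) predicted foreground probability."""
    a = attn[:, :-1]                           # drop a_c
    a = a / a.sum(dim=1, keepdim=True)         # renormalize, Eq. (10)
    a = a.view(-1, 1, P, P)
    m = F.interpolate(mask_prob, size=(P, P), mode="bilinear",
                      align_corners=False)     # downsampled mask
    return ((1.0 - m) * a).sum(dim=(1, 2, 3)).mean()  # Eq. (11)
\end{verbatim}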
\subsection{Overall Loss Function} Our MT-TransUNet is capable of simultaneously tackling datasets with and without segmentation masks. The datasets with segmentation labels are supervised by \begin{equation} \label{equa:label data supervised} \ell_{\text{LAB}} = \ell_{\text{MASK}} + \lambda_{\text{C}}\ell_{\text{CLS}} + \lambda_{\text{L}}\ell_{\text{LSF}} + \lambda_{\text{D}}\ell_{\text{DTC}} + \lambda_{\text{A}}\ell_{\text{ARC}} \end{equation} and those without segmentation labels are supervised by \begin{equation} \label{equa:unlabel data supervised} \ell_{\text{UNLAB}} = \lambda_{\text{C}}\ell_{\text{CLS}} + \lambda_{\text{D}}\ell_{\text{DTC}} + \lambda_{\text{A}}\ell_{\text{ARC}} \end{equation} \section{Experiments} In this section, we first describe the datasets, evaluation metrics, and implementation details. Then we conduct ablation studies to verify the effectiveness of each component of the proposed architecture. \subsection{Dataset} \noindent\textbf{ISIC-2017 Dataset} contains 2000 dermoscopic images for training, 150 for validation, and 600 for testing\footnote{https:\/\/challenge.isic-archive.com\/landing\/2017}. Each image is paired with a segmentation ground truth (foreground and background) and a classification ground truth (melanoma, nevus, and seborrheic keratosis). \medskip\noindent\textbf{ISIC Additional Dataset} involves 1320 dermoscopic images paired only with classification labels. It derives from the ISIC archive\footnote{https:\/\/www.isic-archive.com\/}, the largest publicly available collection of skin lesion images. \medskip\noindent\textbf{PH2 Dataset}~\cite{mendoncya2013dermoscopic} comprises 200 dermoscopic images, including 160 nevi and 40 melanomas; it thus covers only these two categories. Both segmentation and classification labels are available. \subsection{Evaluation Metrics} For the \textbf{segmentation} results, we employ five common evaluation metrics: the Jaccard score (JA), Dice score (DI), pixel accuracy (pixel-AC), pixel sensitivity (pixel-SE), and pixel specificity (pixel-SP). For the \textbf{classification} results, we use four common evaluation metrics: accuracy (AC), sensitivity (SE), specificity (SP), and Area Under Curve (AUC). The detailed criteria are defined following~\cite{xie2020mutual}. \begin{table}[t] \centering \scalebox{0.75}{ \begin{tabular}{l||c|c|c|c} \hline \centering Setting & JA & DI & M\_ACC & K\_ACC \\ \hline (1) $\ell_{\text{MASK}}$ & 79.1 & 87.0 & - & - \\ \hline (2) $\ell_{\text{CLS}}$ & - & - & 88.2 & 92.5 \\ \hline (3) $\ell_{\text{MASK}}$ + $\ell_{\text{CLS}}$ & 78.1 & 86.5 & 88.0 & 92.2 \\ \hline (4) $\ell_{\text{MASK}}$ + $\ell_{\text{CLS}}$ + $\ell_{\text{LSF}}$ & 79.4 & 87.2 & 88.7 & 92.5 \\ \hline (5) $\ell_{\text{MASK}}$ + $\ell_{\text{CLS}}$ + $\ell_{\text{LSF}}$ + $\ell_{\text{DTC}}$ & 79.3 & \textbf{87.3} & 88.7 & \textbf{93.0} \\ \hline (6) $\ell_{\text{MASK}}$ + $\ell_{\text{CLS}}$ + $\ell_{\text{LSF}}$ + $\ell_{\text{DTC}}$ + $\ell_{\text{ARC}}$ & \textbf{79.5} & \textbf{87.3} & \textbf{89.0} & \textbf{93.0} \\ \hline \end{tabular}} \caption{Ablation studies on the multi-task framework, dual-task consistency, and attended region consistency.} \label{tab:ablation studies} \end{table} \begin{table}[t] \centering \scalebox{0.83}{ \begin{tabular}{p{1.5cm}<{\centering}|| p{1.5cm}<{\centering} | p{1.5cm}<{\centering} | p{1.5cm}<{\centering} | p{1.5cm}<{\centering}} \hline $n$ Layer & JA & DI & M\_ACC & K\_ACC \\ \hline 0 & 79.4 & 87.1 & - & - \\ \hline 2 & 79.2 & 86.9 & 87.6 & 90.3 \\ \hline 4 & \textbf{79.5} & \textbf{87.3} & \textbf{89.0} & \textbf{93.0} \\ \hline 6 & 79.4 & \textbf{87.3} & 88.6 & 91.5 \\ \hline 8 & 79.2 & 87.1 & 88.3 & 89.7 \\ \hline \end{tabular}} \caption{Ablation studies on the number of Transformer layers. The performance is best when $n=4$.} \label{tab:transformer layer} \end{table} Please note that we follow the ISIC-2017 competition and divide the classification task into two subtasks: melanoma classification and seborrheic keratosis classification. We use metric prefixes to distinguish the subtasks, \eg M\_ACC for melanoma classification and K\_ACC for keratosis classification. \subsection{Implementation Details} The proposed method is implemented in PyTorch. All experiments are carried out on one GTX1080Ti GPU with 11GB memory. The model is trained using the Adam~\cite{kingma2014adam} optimizer. We set the batch size to 8.
Specifically, we construct a two-stream sampler to simultaneously optimize samples with and without segmentation labels; the batch size of each part is set to 4. The learning rate is set to $10^{-5}$ and linearly decreases to 0 over 40000 iterations. The ResNet50 and Transformer layers are pre-trained on ImageNet~\cite{deng2009imagenet}. Empirically, we set $\lambda_{\text{C}}$, $\lambda_{\text{L}}$, and $\lambda_{\text{A}}$ to 0.25, 5.0, and 1.0. For $\lambda_{\text{D}}$, we utilize an exponential ramp-up with length 40 for better convergence. We set $k$ to 1500. We employ 4 Transformer layers to enhance features. The patch size is set to 16 $\times$ 16. During training, we randomly flip skin images horizontally or vertically, then crop the images from the center at multiple scales for data augmentation. During testing, we leverage three different input sizes, namely $224 \times 224$, $256 \times 256$, and $288 \times 288$. For each input size, we employ horizontal and vertical flipping, as well as multi-angle rotation ($90^{\circ}, 180^{\circ}, 270^{\circ}$), to augment the original images, and finally ensemble these results as the final predictions. \begin{table*}[t] \begin{center} \scalebox{0.83}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline Methods & \makecell[c]{ CDNN \\ \cite{yuan2017improving} } & \makecell[c]{DDN \\ \cite{li2018dense} } & \makecell[c]{FCN+SSP\\\cite{mirikharaji2018star}} & \makecell[c]{SLSDeep\\\cite{sarker2018slsdeep}} & \makecell[c]{Swin-Tiny\\\cite{liu2021swin}} & \makecell[c]{Swin-Base\\\cite{liu2021swin} } & \makecell[c]{Segmenter\\\cite{strudel2021segmenter} } & \makecell[c]{CCL+MSFA\\\cite{liu2021skin}} & \makecell[c]{MB-DCNN\\\cite{xie2020mutual}} & MT-TransUNet \\ \hline JA & $76.5$ & $76.5$ & $77.3$ & $78.2$ & 75.7 & 74.8 & 75.6 & 79.5 & 79.4 / 80.4 & 79.5 / \textbf{80.7} \\ \hline DI & $84.9$ & $86.6$ & $85.7$ & 87.8 & 86.2 & 85.6 & 86.1 & 87.1 & 87.0 / 87.8 & 87.3 / \textbf{88.0} \\ \hline pixel-AC & $93.4$ & $93.9$ & $93.8$ & $93.6$ & 92.4 & 92.8 & 93.8 & 94.3 & 94.3 / 94.7 & 94.6 / \textbf{94.9} \\ \hline pixel-SE & $82.5$ & $82.5$ & $85.5$ & $81.6$ & 86.6 & 87.1 & 83.3 & \textbf{88.8} & 87.3 / 87.4 & 88.0 / 88.2 \\ \hline pixel-SP & $97.5$ & \textbf{98.4} & $97.3$ & 98.3 & 96.9 & 96.5 & 97.2 & 96.5 & 96.4 / 96.8 & 96.5 / 96.4 \\ \hline \end{tabular}} \end{center} \caption{Experimental results on the segmentation task on ISIC-2017. Since MB-DCNN~\cite{xie2020mutual} employs five-model ensembling when testing, we also follow this setting for a fair comparison (the ensembling results are shown after the slash).
Our MT-TransUNet achieves state-of-the-art performance whether or not the model ensembling strategy is used.} \label{tab:segmentation} \end{table*} \begin{table*}[t] \begin{center} \scalebox{0.99}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*} { Methods } & \multicolumn{4}{c|} { Melanoma Classification } & \multicolumn{4}{c|} { Keratosis Classification } & \multirow{2}{*} {Average ACC } \\ \cline { 2 - 9 } & M\_ACC & M\_SE & M\_SP & M\_AUC & K\_ACC & K\_SE & K\_SP & K\_AUC & ~ \\ \hline ARL-CNN~\cite{zhang2019attention} & $85.0$ & $65.8$ & $89.6$ & $87.5$ & $86.8$ & $87.8$ & $86.7$ & $95.8$ & $85.9$ \\ \hline SSAC~\cite{xie2019semi} & $83.5$ & $55.6$ & $90.3$ & $87.3$ & $91.2$ & $88.9$ & $91.6$ & $95.9$ & $87.4$ \\ \hline SDL~\cite{zhang2019medical} & $88.8$ & $-$ & $-$ & $86.8$ & $92.5$ & $-$ & $-$ & $95.8$ & $90.7$ \\ \hline~\cite{matsunaga2017image} & $82.8$ & $\mathbf{73.5}$ & $85.1$ & $86.8$ & $80.3$ & $\mathbf{97.8}$ & $77.3$ & $95.3$ & $81.6$ \\ \hline~\cite{diaz2017incorporating} & $82.3$ & $10.3$ & $\mathbf{99.8}$ & $85.6$ & $87.5$ & $17.8$ & $\mathbf{99.8}$ & $96.5$ & 84.9 \\ \hline~\cite{menegola2017recod} & $87.2$ & $54.7$ & $95.0$ & $87.4$ & $89.5$ & $35.6$ & $99.0$ & $94.3$ & 88.4 \\ \hline~\cite{bi2017automatic} & $85.8$ & $42.7$ & $96.3$ & $87.0$ & $91.8$ & $58.9$ & $97.6$ & $92.1$ & 88.9 \\ \hline~\cite{yang2017novel} & $83.0$ & $43.6$ & $92.5$ & $83.0$ & $91.7$ & $70.0$ & $99.5$ & $94.2$ & 87.4 \\ \hline \hline MB-DCNN~\cite{xie2020mutual} & $86.7$ & $70.1$ & $90.7$ & $89.6$ & $92.3$ & $83.3$ & $93.9$ & $96.7$ &89.5 \\ \hline MT-TransUNet & 89.0 & $69.3$ & $91.2$ & 89.4 & 93.0 & $92.8$ & $96.3$ & $95.1$ & 91.0 \\ \hline \hline $\text{MB-DCNN}^{*}$~\cite{xie2020mutual} & $87.8$ & $72.7$ & $91.5$ & $90.3$ & $93.0$ & $84.4$ & $94.5$ & $\mathbf{97.3}$ & 90.4 \\ \hline $\text{MT-TransUNet}^{*}$ & \textbf{89.2} & 68.0 & 92.3 & \textbf{90.6} & \textbf{93.2} & 77.6 & 97.6 & 95.7 & \textbf{91.2} \\ \hline \end{tabular}} \end{center} \caption{Experimental results of the classification task on ISIC-2017. $^{*}$ denotes five-model ensembling. Our method surpasses MB-DCNN with regard to recognition accuracy in the two subtasks, with or without the ensembling strategy.} \label{tab:classification} \end{table*} \subsection{Ablation Studies} Here we carry out several ablation studies concerning the joint training strategy, the dual-task and attended region consistency, and the Transformer layers. The experimental results are shown in Table \ref{tab:ablation studies} and Table \ref{tab:transformer layer}, respectively. All ablation studies are conducted on ISIC-2017. \paragraph{Effectiveness of Joint Training:} Firstly, we conduct experiments with a single branch, \ie generating only segmentation or only classification predictions (Settings (1) and (2)). Then we jointly train the segmentation and classification tasks (Setting (3)). From this ablation study, we observe that naively combining the segmentation and classification branches in one model decreases the performance of both tasks. For example, the Jaccard score decreases by 1\% and M\_ACC decreases by 0.2\%. We suppose the reason is as follows: for the classification task, the model attempts to learn the most discriminative (usually high-level) features rather than exploit the whole information, whereas for the segmentation task, the model focuses more on low-level features such as edges and colors to judge whether a pixel belongs to the foreground or the background.
Based on these observations, we attempt to boost the performance of this multi-task framework in two ways: 1) utilize semi-supervised training to take advantage of data without segmentation masks; 2) exploit the consistency between the segmentation and classification heads. \paragraph{Effectiveness of Dual-Task Consistency:} Since annotating segmentation masks for skin images is time-consuming, there is a large number of skin images with only classification labels. Hence, we employ a semi-supervised manner to take advantage of the additional dataset through dual-task consistency (Settings (4) and (5)). When the level set function head is appended alongside the segmentation mask branch, the performance of both tasks increases and becomes better than that of the single-branch settings (1) and (2). Compared with the segmentation mask, the level set function captures more edge information, thus helping the segmentation task indirectly and achieving a better Jaccard score than setting (1). When the consistency loss is introduced into this architecture, the performance of both tasks is further boosted. For example, compared with (1), the Dice score is boosted by 0.2\% in the segmentation task, while compared with (2), the M\_ACC is boosted by 0.5\%. \paragraph{Effectiveness of Attended Region Consistency:} The dual-task consistency mainly concerns the consistency of two low-level tasks. To further exploit the consistency between the segmentation and classification heads, we put forward the attended region consistency. As shown in setting (6), the performance is the best among all counterparts. Intuitively, the attended region consistency rectifies the distraction phenomenon of the classification branch, resulting in higher classification performance. \paragraph{Effectiveness of Transformer Layers:} As shown in Table \ref{tab:transformer layer}, we conduct ablation studies to verify the effectiveness of the appended Transformer layers. The performance is best when four Transformer layers are used. We suppose that the Transformer layers are capable of correlating the grids of the feature maps through the self-attention mechanism, thus achieving better scores compared with the model without any Transformer layers. \begin{figure*}[t] \centering \includegraphics[width=1.02\textwidth]{image/visualization.pdf} \caption{The segmentation predictions of our method. Compared with MB-DCNN, our method is more sensitive to contour information, thus achieving higher performance in terms of Dice score. } \label{fig:visualization} \end{figure*} \subsection{Comparison with Previous Methods} The segmentation results are shown in Table \ref{tab:segmentation}. We also conduct experiments on existing Transformer-based architectures, namely Swin-Transformer~\cite{liu2021swin} and Segmenter~\cite{strudel2021segmenter}. We observe that these Transformer-based architectures are subpar on this task since they do not introduce any priors regarding skin images into the model. The results show that our method achieves state-of-the-art performance in terms of Jaccard score, pixel-AC, and pixel-SE, which verifies the ability of the proposed method. The results of the classification task are presented in Table \ref{tab:classification}. Our method achieves the best average accuracy compared with its counterparts.
\subsection{Generalization Ability}\label{section:ph2} In this section, we follow MB-DCNN~\cite{xie2020mutual} to validate the generalization ability of the proposed method in two ways: 1) test on the PH2 dataset using the model pre-trained on the ISIC-2017 dataset and the additional dataset; 2) use four-fold cross-validation, \textit{i.e.} regard the model trained on ISIC-2017 as a pre-trained one, fine-tune it on three folds of the PH2 dataset, and test the fine-tuned model on the remaining fold. The experimental results of the segmentation and classification tasks are shown in Table \ref{tab:ph2 segmentation} and Table \ref{tab:ph2 classification}. From the experimental results, we observe that our model is more robust than MB-DCNN when generalized to another dataset. For example, with fine-tuning, our method is 3.1\% better in terms of recognition accuracy and 2.8\% better in terms of Jaccard score compared with MB-DCNN~\cite{xie2020mutual}. \begin{table}[t] \centering \scalebox{0.76}{ \begin{tabular}{l|c|c|c||c|c} \hline Datasets & \multicolumn{5}{c} { PH2 } \\ \hline Methods & \makecell[c]{ mFCNPI\\\cite{bi2017dermoscopic}} & \makecell[c]{ RFCN\\\cite{yuan2017automatic}} & \makecell[c]{ SLIC\\\cite{patino2018automatic}} & \makecell[c]{MB-DCNN\\\cite{xie2020mutual}} & MT-TransUNet \\ \hline JA & 84.0 & - & - & 86.7 / 89.4 & 88.5 / \textbf{92.2} \\ \hline DI & 90.7 & 93.8 & - & 92.6 / 94.2 & 93.6 / \textbf{95.9} \\ \hline pixel-AC & 94.2 & - & 90.4 & 95.8 / 96.4 & \textbf{96.5} / 93.5 \\ \hline pixel-SE & 94.9 & - & 91.0 & \textbf{97.9} / 96.7 & 97.2 / 96.5 \\ \hline pixel-SP & 94.0 & - & 89.7 & 95.1 / 94.6 & 95.7 / \textbf{98.0} \\ \hline \end{tabular}} \caption{Experimental results of the segmentation task on the PH2 dataset. \textit{a/b} denotes the results under the \textit{direct testing} / \textit{fine-tuning} settings.} \label{tab:ph2 segmentation} \end{table} \begin{table}[t] \centering \scalebox{0.78}{ \begin{tabular}{l|c|c|c|c} \hline Methods & M\_AC & M\_SE & M\_SP & M\_AUC \\ \hline CICS~\cite{barata2017development} & - & \textbf{100.0} & 88.2 & - \\ \hline MFLF~\cite{barata2015melanoma} & - & 98.0 & 90.0 & - \\ \hline CCS~\cite{barata2014improving} & - & 92.5 & 76.3 & 84.3 \\ \hline \hline MB-DCNN~\cite{xie2020mutual} (test) & 88.5 & 82.5 & 90.0 & 95.7 \\ \hline Ours (test) & 95.0 & 82.5 & 98.1 & 96.0 \\ \hline \hline MB-DCNN~\cite{xie2020mutual} (ft) & 94.0 & 95.0 & 93.8 & 97.7 \\ \hline Ours (ft) & \textbf{97.1} & 91.2 & \textbf{99.0} & \textbf{98.6} \\ \hline \end{tabular}} \caption{Experimental results of the classification task on the PH2 dataset.} \label{tab:ph2 classification} \end{table} \subsection{Model Efficiency} In this section, we compare the efficiency of the proposed model with another multi-task framework, MB-DCNN~\cite{xie2020mutual}. Specifically, MB-DCNN comprises three separate networks: CoarseSN, MaskCN, and EnhancedSN. Its total number of parameters is 130M, far more than that of our model (48M). It takes only 8 hours to train our model, which is more efficient than MB-DCNN (48 hours). As for inference time, with a batch size of 1, our method generates both segmentation and classification results in 0.17 seconds, whereas MB-DCNN needs 2.02 seconds in total to generate both predictions. Furthermore, since EnhancedSN relies on the intermediate results produced by MaskCN, it also incurs large disk storage costs.
Hence, our model surpasses MB-DCNN in terms of model efficiency. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{image/failure_case.pdf} \caption{Some failure cases of segmentation predictions.} \label{fig:Failure case} \end{figure} \subsection{Failure Cases} Several failure cases are shown in Figure \ref{fig:Failure case}. We observe that low-contrast skin images still pose great difficulties for our model. Besides, when there is occlusion in the skin image, \eg hair or rulers, the segmentation performance also decreases. To tackle these failure cases, we suppose that additional preprocessing strategies, such as contrast enhancement and occlusion removal, should be utilized to enhance the original images. \section{Conclusion} In this paper, we proposed a multi-task framework called MT-TransUNet that combines the segmentation and classification tasks in one model. We represent each task with corresponding tokens and leverage Transformer layers to correlate the tasks in the feature domain. To utilize datasets without segmentation labels, we adopt a semi-supervised fashion by introducing the level set function as a dual task and constraining the consistency between the segmentation masks and the level set function. Besides, we put forward an attended region consistency between the segmentation and classification heads. The experimental results show that our method achieves state-of-the-art performance on both tasks while preserving better efficiency compared with previous methods. {\small \bibliographystyle{ieee_fullname}
\section{Conclusion} A versatile framework capable of simulating \ac{ISAC} systems in realistic scenarios, by virtue of Blender, was introduced. The workflow required for such a sensing framework to incorporate all necessary propagation attributes was demonstrated. Furthermore, four commonly used metrics from different fields were compared regarding their ability to capture sensing capabilities. As it turns out, prominence, due to its noise resilience and its boundedness, is a strong candidate for a common comparison technique among a large number of different algorithms. In a standard industrial use case, state-of-the-art clutter removal techniques were compared, demonstrating that clutter removal is a key element for the success of \ac{ISAC} systems. Additional scenarios and datasets will be provided in the future to reproduce and compare algorithms. \section{MaxRay: A versatile \ac{ISAC} Framework} \label{sec:framework} Fig.~\ref{fig:framework} depicts the workflow of \textit{MaxRay}, whose input is a dynamic scenario encapsulated as a Blender\cite{Blender} file together with a configuration file. The configuration file contains the communication settings, e.g. carrier frequency and number of subcarriers. The power of using Blender lies in modeling realistic environments, where fine-granular movement of parts and complex scenarios can be simulated. Exploiting the Python \ac{API} \cite{Blender}, the exact information about the environment can be leveraged by a ray tracer to obtain the communication channel impulse response. We give a rough overview of the pipeline before explaining its parts in detail. First, all possible paths considering the different propagation properties (reflection, scattering, diffraction, blockage) are calculated using this \ac{API}, and their interactions, based on the incident angle, outgoing angle, materials, and speed of the surface, are recorded. This computationally complex process is done within the Blender environment; the result is then passed through the following routine within \textit{MaxRay}: leveraging those parameters, all losses experienced per path (reflection loss, penetration loss, etc.) are calculated. Then, a Doppler component (phase shift) per path is added, corresponding to the movement of each interacting frame. Further, the \ac{OFDM} frame structure is superimposed by band-limiting the signal and applying the Fourier transform. From this \ac{OFDM} frame the corresponding sensing/\ac{RADAR} images can be calculated. In general, the outputs of \textit{MaxRay} can be a rendered camera image, a depth image and/or a \ac{LIDAR} image containing all back-scattering points and their distances. Further, the bounding boxes for the camera, the \ac{LIDAR} object identification per point, the depth bounding boxes, and the original positions and dimensions of the objects in the environment are created automatically. We emphasize that a full pipeline for different \ac{DL} or classical techniques is created, and the datasets will be made available to reproduce our results.
\subsection{Geometrical Ray Tracing} \begin{algorithm}[t] \caption{Geometrical ray-casting}\label{alg:cap} \begin{algorithmic} \Require probing vectors $\vec{P}$, max interactions $\ell$, RX pos $\vec{r}$, TX pos $\vec{t}$, Current number interaction $i = 0$ \State \empty \Procedure{trace-path}{$\vec{P},\vec{t},\vec{r},i,\ell$} \If{$i \leq \ell$} \Comment max interaction not reached \For{$\vec{p}$ in $\vec{P}$} \If{check-if-rx-hit($\vec{t},\vec{r},\vec{p}$)} \State store-trace(); \State break; \EndIf \State hit,\textbf{point},obj $\gets$ ray($\vec{t}$,$\vec{p}$) \If{hit} \State $i \gets i +1$; \State $\vec{t} \gets \textbf{point}$; \State $\hat{\vec{P}} \gets$ calc-new-probes($\vec{p}$,obj); \State trace-path($\hat{\vec{P}},\vec{t},\vec{r},i,\ell$); \Comment trace recursively \EndIf \EndFor \EndIf \EndProcedure \end{algorithmic} \end{algorithm} The pseudo-code of the geometrical ray-casting is given in Alg.~\ref{alg:cap}. The input is a set of probing vectors $\vec{P}$ pointing from the transmitter into the environment; compare the blue solid lines of Fig.~\ref{fig:Interaction-with-Objects} as an example. Thus, $\vec{P}$ defines the spatial resolution and accuracy of the ray-tracing engine, as the initial number and resolution of rays are set by the user. Note that in our case we set it to the full angle domain using a resolution of $1\textdegree$. Then, in a recursive fashion each ray is traced, allowing it to penetrate, diffract, back-scatter and reflect from the objects. The function "ray()" computes, from the position $\vec{t}$ into the direction $\vec{p}$, the next interacting object, returning whether and at which location ($\textbf{point}$) an interaction occurred. From this location, the function "calc-new-probes()" calculates, using all propagation effects, a new set of probing vectors, as explained in Fig.~\ref{fig:Interaction-with-Objects}. A path is stored and the recursion is stopped if at any point a bounding box at the receiver is hit, i.e. the function "check-if-rx-hit" returns true. To make this more efficient, a cube around the receiver with a side of four times the wavelength $\lambda$ is considered. After computing each path, up to the $\ell$-th interaction, the corresponding delays and losses are calculated. \begin{figure}[H] \centering \includegraphics{pdfs/main-figure1.pdf} \caption{Different interactions with objects} \label{fig:Interaction-with-Objects} \end{figure} Fig.~\ref{fig:Interaction-with-Objects} depicts for an input vector (blue) \begin{itemize} \item the back-scattering component, obtained by inverting the direction of the input vector, \item the reflected vector, obtained by mirroring the input vector, \item the scattering vectors, a set of rays around the reflected ray created by a pre-defined angular spread (e.g. 10 degrees) with a resolution of 1 degree, \item the penetration vector, obtained using Snell's law, \item the set of diffraction vectors, if the input vector hits close to an edge. \end{itemize} The difference between the reflected and scattered probing vectors is given by the angular spread $\theta$. From one input ray a manifold of new probing vectors emerges, allowing to model the most common propagation effects; a simplified code transcription is given below. In the next subsection we elaborate on how to compute the losses for each of these interactions.
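The following Python sketch transcribes Alg.~\ref{alg:cap} in simplified form. It is not the MaxRay implementation: scene.ray() stands in for the Blender API intersection query, and only the reflection and back-scattering probes are generated, the other interaction types being added analogously.

\begin{verbatim}
import numpy as np

WAVELENGTH = 0.08  # m, roughly a 3.75 GHz carrier

def new_probe_vectors(incoming, normal):
    """Reflection and back-scatter probes; scattering, penetration and
    diffraction would be added here in the full calc-new-probes()."""
    reflected = incoming - 2.0 * np.dot(incoming, normal) * normal
    return [reflected / np.linalg.norm(reflected), -incoming]

def rx_hit(origin, direction, rx, half_side=2 * WAVELENGTH):
    """True if the ray passes through the cube of side 4*lambda at RX."""
    t = max(np.dot(rx - origin, direction), 0.0)  # closest approach
    closest = origin + t * direction
    return bool(np.all(np.abs(closest - rx) <= half_side))

def trace_path(probes, tx, rx, depth, max_depth, scene, path=()):
    """Recursive ray-casting of Alg. 1; returns the stored traces."""
    if depth > max_depth:
        return []
    traces = []
    for p in probes:
        if rx_hit(tx, p, rx):                       # check-if-rx-hit()
            traces.append(path + ((tuple(tx), tuple(p)),))
            continue
        hit, point, normal = scene.ray(tx, p)       # next interaction
        if hit:
            traces += trace_path(new_probe_vectors(p, normal), point, rx,
                                 depth + 1, max_depth, scene,
                                 path + ((tuple(tx), tuple(p)),))
    return traces
\end{verbatim}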
\subsection{Propagation Properties: Losses} All traced paths undergo certain lossy interactions, which are accounted for by \begin{equation} p_{\text{loss}} = \Gamma_{\text{path}}\Gamma_{\text{beam}} \Gamma_{\text{reflect}} \Gamma_{\text{scat}} \Gamma_{\text{rough}} \Gamma_{\text{diffract}} \Gamma_{\text{backscatter}}, \end{equation} multiplying all effects along the path. Hereby \begin{itemize} \item $\Gamma_{\text{path}}$ is the free-space path loss, \item $\Gamma_{\text{beam}}$ is the antenna pattern loss, \item $\Gamma_{\text{reflect}}$ is the reflection loss, \item $\Gamma_{\text{scat}}$ is the additional scattering loss, \item $\Gamma_{\text{rough}}$ is the roughness loss, \item $\Gamma_{\text{diffract}}$ is the diffraction loss, \item $\Gamma_{\text{backscatter}}$ is the back-scatter loss. \end{itemize} Emphasizing that a full explanation of these effects would go beyond the scope of this paper, we provide the equations we used with some brief explanations. Note that these models can easily be changed, improved, or adjusted to different measurements. The free-space path loss \begin{equation} \Gamma_{\text{path}} = \left (\frac{\lambda}{4\pi d}\right)^2, \end{equation} is commonly calculated using the wavelength $\lambda$ and the distance $d$ travelled along the path between transmitter and receiver. The antenna pattern is used to calculate the beam loss for the transmitter and receiver pair and can be chosen from a perfect dipole/patch, a measured patch, or a beamset for mmWave systems. The reflection loss \begin{equation} \Gamma_\text{reflect} = \frac{\cos{\phi}-\sqrt{\mu_r(f) \epsilon_r(f) - \sin^{2}{\phi}}}{\cos{\phi}+\sqrt{\mu_r(f) \epsilon_r(f) - \sin^{2}{\phi}}} \label{eg:reflect_loss} \end{equation} is due to the material change from air to a material with specific frequency-dependent permittivity $\epsilon_r$ and permeability $\mu_r$, influenced by the input angle $\phi$. Note that for a perfect reflector the magnitude of the reflection coefficient goes to one. The scattering loss \cite{Rappaport2019} \begin{equation} \Gamma_\text{scatt} = \left(\frac{1+\cos{\theta}}{2}\right)^{\frac{\alpha_r}{2}}, \end{equation} accounts for the effect that further rays emerge around the original reflected ray due to the roughness of the material. Hereby $\alpha_r$ is a parameter modeling the angular spread around the reflected ray\cite{Rappaport2019}. In addition to the scattering component, a roughness loss \cite{Rappaport2019} \begin{equation} \Gamma_\text{rough} = \exp\left[{-8\left(\frac{\pi\rho\cos{\phi}}{\lambda }\right)^2}\right] J_0\left[8\left(\frac{\pi\rho\cos{\phi}}{\lambda }\right)\right], \end{equation} is required, using the Bessel function of zeroth order $J_0$ and the standard deviation of the surface roughness $\rho$. The roughness loss models the interaction of high-frequency waves with surfaces whose roughness is comparable to the wavelength. The diffraction loss \begin{equation} \Gamma_{\text{diffract}}(\nu) = \frac{\sqrt{\left(1-C(\nu)-S(\nu)\right)^2 + \left(C(\nu) + S(\nu)\right)^2}}{2}, \end{equation} is commonly modeled using the ITU-R P.526-14 standard \cite{ITU}. Hereby we use knife-edge diffraction, calculating the geometrical factor $\nu=\sqrt{\frac{2d}{\lambda}\alpha_1 \alpha_2}$, where the angles $\alpha_1,\alpha_2$ are measured between the top of the obstacle and one end as seen from the other end. Moreover, the Fresnel integral \begin{equation} F(\nu) = C(\nu) + jS(\nu) \end{equation} is approximated using the cosine integral $C(\nu)$ and the sine integral $S(\nu)$; for more details we refer to \cite{ITU}. The last effect considered is the back-scattering loss \begin{equation} \Gamma_{\text{backscatter}} = p_\text{scat} \left(\frac{1+\cos{\phi}}{2}\right)^{\frac{\alpha_r}{2}}, \end{equation} where we consider the input angle $\phi$ and the specific back-scatter loss $p_\text{scat}$ as the main back-reflection component. One can see that if the incoming angle is close to zero the effect of back-scattering is stronger than if the ray arrives from the side.
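To make the loss model concrete, the per-interaction factors above can be sketched as follows. This is a minimal illustration, not the MaxRay code; the material parameters are placeholders supplied by the caller.

\begin{verbatim}
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def path_loss(wl, d):
    return (wl / (4.0 * np.pi * d)) ** 2                  # free space

def reflection_loss(phi, eps_r, mu_r=1.0):
    root = np.sqrt(mu_r * eps_r - np.sin(phi) ** 2 + 0j)  # complex-safe
    return (np.cos(phi) - root) / (np.cos(phi) + root)

def scattering_loss(theta, alpha_r):
    return ((1.0 + np.cos(theta)) / 2.0) ** (alpha_r / 2.0)

def roughness_loss(phi, rho, wl):
    g = np.pi * rho * np.cos(phi) / wl
    return np.exp(-8.0 * g ** 2) * jv(0, 8.0 * g)         # J_0 term

def backscatter_loss(phi, p_scat, alpha_r):
    return p_scat * ((1.0 + np.cos(phi)) / 2.0) ** (alpha_r / 2.0)
\end{verbatim}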
\subsection{Baseband channel representation} Blender and its animation feature can be used to calculate the effective movement between animated frames, allowing to calculate the effective Doppler phase shift \begin{equation} \beta = 4\pi\frac{N_\text{sub}+ \text{CP}}{f_s} f_c \frac{v_s}{c_0} \end{equation} per path. Here $N_\text{sub}$ is the number of subcarriers, \text{CP} the cyclic prefix length in samples, $f_s$ the sampling rate, $f_c$ the carrier frequency, $c_0$ the speed of light in vacuum and $v_s$ the relative speed in each path element. This Doppler shift is applied per path at the receiver. As each transmission scheme has a certain bandwidth, the channel impulse response is then decimated. The desired OFDM frames are created by zero-padding this band-limited channel impulse response and applying the Fourier transform. The \ac{OFDM} frames, additional sensor outputs and the corresponding labels are saved into a large \ac{HDF} file. Although one would now expect measurements to verify the ray-tracing core, we defer this investigation to a later point, emphasizing that we do not claim a perfect match between measurements and simulation, but that the underlying structure behaves the same.
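A sketch of how a per-path Doppler phase shift and the subcarrier-dependent delay phase could be assembled into a frequency-domain OFDM frame is shown below. The function signature and default values are illustrative assumptions; the exact MaxRay implementation may differ.

\begin{verbatim}
import numpy as np

def channel_frame(delays, amps, speeds, n_sub=1024, n_symb=100,
                  f_s=100e6, f_c=3.75e9, cp=0, c0=3e8):
    """Frequency-domain OFDM frame H[k, n] assembled from ray-traced
    path delays [s], complex amplitudes and relative speeds [m/s]."""
    t0 = (n_sub + cp) / f_s                   # OFDM symbol duration
    df = f_s / n_sub                          # subcarrier spacing
    k = np.arange(n_sub)[:, None]             # subcarrier index
    n = np.arange(n_symb)[None, :]            # symbol index
    H = np.zeros((n_sub, n_symb), dtype=complex)
    for tau, a, v in zip(delays, amps, speeds):
        doppler = np.exp(2j * np.pi * n * t0 * f_c * v / c0)
        distance = np.exp(-2j * np.pi * k * df * tau)
        H += a * doppler * distance
    return H
\end{verbatim}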
\section{Introduction}\label{Sec:Introduction} \ac{ISAC} is widely seen as the natural evolution of communications-only networks. In the next generation of wireless network standards, \ac{ISAC} will enable the already existing communication links to extract environment information~\cite{TWild2021,Lima2021}. The synergy between communications and sensing could then be leveraged~\cite{viswanathan2020communications}. Although sensing is seen as a new topic in the world of communication, we want to emphasize that both worlds are learning from each other; e.g., \ac{RADAR} applications use communication principles like \ac{MIMO} and \ac{OFDM} \cite{Braun2019,Leen2012}. On the other hand, proofs of concept where communication networks locate their users have already been published~\cite{saur20205gcar}. Not only can active users be located, but wireless links can also be used to passively detect the activity of humans \cite{Jian2020}, their gestures \cite{Wenfeng2015}, or their position \cite{Arnold2019}. Although the two worlds are merging towards a joint system, a complete redesign to fully unveil their potential is still to be done. Among the open points, the channel models typically considered for communications are statistical~\cite{3GPP}. The main shortcoming of these models is that they do not consider the interaction between the environment and the signals propagating through it in a deterministic way, thus not allowing any meaningful investigation of sensing. Therefore, ray tracing can be considered, allowing to model the deterministic propagation of wireless network signals in the environment. Moreover, ray tracing offers the capability to reproduce the responses of additional sensors, like \ac{LIDAR} and cameras. This allows to investigate not only \ac{ISAC}, but also sensor fusion techniques. \begin{figure}[t] \centering \includegraphics{pdfs/main-figure0.pdf} \caption{Blockdiagram of the MaxRay Framework} \label{fig:framework} \vspace{-0.5cm} \end{figure} Therefore, we introduce \textit{MaxRay}, a versatile tool to simulate realistic scenarios, leveraging ray tracing to obtain \ac{ISAC} channel responses. Moreover, MaxRay allows to deploy and generate the acquisitions of multiple environmental sensors, at the moment cameras and \ac{LIDAR}. The proposed \textit{MaxRay} is used in an indoor factory scenario to generate an extensive dataset available to the community\footnote{Link available after review} for evaluating \ac{ISAC} and sensor fusion experiments. In this paper, we leverage the generated data to address a well-known problem in the \ac{RADAR} literature, i.e. clutter removal. Indeed, a \ac{RADAR} acquisition includes all the environment information that may disturb the sensing task. For instance, in our considered \ac{AGV} tracking scenario, we detect, in addition to the \ac{AGV}, the factory walls and static equipment, which are defined as clutter. One would like to remove the clutter to obtain clean acquisitions, improving the sensing-based \ac{AGV} tracking performance. Therefore, we systematically evaluate two standard clutter removal techniques with four different metrics used in computer vision, \ac{RADAR} and/or communication systems. The sensitivity of these metrics with respect to the \ac{AGV} tracking precision is discussed, providing suggestions on which metrics should be evaluated for network sensing applications. Summarizing our contributions, \begin{itemize} \item we discuss the viability of ray tracing for investigating \ac{ISAC}, \item we investigate clutter removal performance in an indoor factory scenario and the \ac{ISAC} parametrization, defining and assessing different metrics to measure the performance, \item we provide the labeled dataset to reproduce our results; the dataset can be used for other \ac{ISAC} and/or sensor fusion studies. \end{itemize} \section{Sensing Basics} \label{sec:sens} To understand the full leverage of \ac{ISAC}, the basics of sensing need to be understood first. Thus, we first explain the basic sensing concepts and show the challenges of such systems. \subsection{OFDM \ac{RADAR}} Considering that standard communication systems use \ac{OFDM}, we exploit the concept of \ac{OFDM} \ac{RADAR} \cite{Braun2019} for sensing. Knowing, for each of the $N_\text{symb}$ transmitted symbols, the transmitted data $\vec{X} \in \mathbb{C}^{N_\text{sub}\times N_\text{symb}}$, the channel \begin{equation} \vec{H}^{k,n} = \frac{\vec{Y}^{k,n}}{\vec{X}^{k,n}} \end{equation} is estimated for each sub-element $k,n$ using the single-tap equalizer. Exploiting the fact that only a limited number of paths $L$ is seen by the receiver, the channel can be rewritten as \begin{equation} \vec{H}^{k,n} = \sum_{\ell=0}^{L} p_{\text{loss}} \underbrace{e^{j2\pi nT_0f_{\ell}}}_{\text{Doppler}} \cdot \underbrace{e^{j2\pi k d_{\ell}\Delta f/c_0 }}_{\text{distance}} + \mathcal{N}^{k,n}, \end{equation} where the Doppler frequency shift $f_{\ell}$ of each path creates a phase shift over the \ac{OFDM} symbols, with $T_0$ being the \ac{OFDM} symbol duration.
The distance traveled creates a linear phase shift over the subcarriers, with $\Delta f$ being the subcarrier spacing. The zero-mean Gaussian distributed noise sample is given by $\mathcal{N}^{k,n}$. Thus, one can directly conclude that the phase information per object can be used to determine the relative speed and the range of the object. The angle of the object is estimated from the phase difference between antennas, i.e. having multiple $\vec{H}^{k,n}$. The baseline to exploit this orthogonality is the periodogram of the channel \cite{Braun2019} \begin{equation} \vec{P}^{k,n} = \left| \sum_{m=0}^{N_\text{symb}-1}\left(\sum_{p=0}^{N_\text{sub}-1}\vec{H}^{p,m}e^{j2\pi\frac{pn}{N_\text{sub}}}\right)e^{-j2\pi\frac{mk}{N_\text{symb}}}\right|^2 \end{equation} using the \ac{FFT} over the symbols and the inverse \ac{FFT} over the subcarriers. Thus, the phase shifts create a peak at the corresponding distance and speed of the respective object. This commonly used technique has limitations regarding resolution and accuracy\cite{Braun2019}. Therefore, subspace methods were proposed to exploit the underlying channel covariance matrix \begin{equation} \vec{R} = \vec{\vec{H}}\vec{\vec{H}}^{\text{H}}. \end{equation} One prominent candidate is \ac{MUSIC} \cite{Poro2010}, as it exploits the noise subspace of the covariance matrix, which is calculated using the \ac{EVD} of $\vec{R}$ and partitioning the eigenvectors into the signal subspace $\mathbf{U}_S$, corresponding to the $\mathit{Q}$ strongest eigenvalues, and the complementary noise subspace $\mathbf{U}_N$. This noise subspace is probed using the corresponding steering vectors \begin{align} \mathbf{a}(\phi_q) &= \begin{bmatrix} 1, \ e^{j2\pi \frac{d}{\lambda} \sin(\phi_{q})},\ \dots, \ e^{j2\pi (N_\text{ant} -1) \frac{d}{\lambda} \sin(\phi_{q}) } \end{bmatrix}^\text{T} \\ \mathbf{b}(d_{q}) &= \begin{bmatrix} 1, \ e^{-j2\pi \Delta f \cdot \frac{d_{q}}{c}}, \ \dots, \ e^{-j2\pi (N_\text{sub} - 1) \Delta f \cdot \frac{d_{q}}{c}} \end{bmatrix}^\text{T} \label{eq:channel_vectors} \end{align} where $\vec{a}$ is the angular and $\vec{b}$ the range steering vector, with $N_\text{ant}$ being the number of antennas of a uniform linear array with $\lambda/2$ spacing. The 2D \ac{MUSIC} spectrum (range-azimuth) is obtained by computing \begin{equation} P_{\text{MU}}(d, \phi) = \frac{1}{(\mathbf{a}(\phi) \otimes \mathbf{b}(d))^\text{H} \textbf{U}_{N}\textbf{U}_{N}^\text{H}(\mathbf{a}(\phi) \otimes\mathbf{b}(d))}, \label{eq:MUSIC} \end{equation} where $\otimes$ is the Kronecker product. In the following we always take the \ac{RADAR} plot to be the 2D-\ac{MUSIC} equivalent.
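Numerically, the periodogram above is simply a zero-padded 2D FFT of the estimated channel matrix. A minimal sketch follows; the padding factor is our own choice for interpolation of the peaks.

\begin{verbatim}
import numpy as np

def radar_periodogram(H, pad=4):
    """Range-Doppler periodogram: IFFT over the subcarriers (range
    axis) followed by an FFT over the OFDM symbols (Doppler axis)."""
    n_sub, n_symb = H.shape
    rng = np.fft.ifft(H, n=pad * n_sub, axis=0)     # range profile
    dop = np.fft.fft(rng, n=pad * n_symb, axis=1)   # Doppler profile
    return np.abs(dop) ** 2
\end{verbatim}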
\subsection{Challenges of sensing} In this paper we assume that the co-located transmitter and receiver are synchronized, giving an upper bound for the performance. \begin{figure}[t] \subfloat[]{ \includegraphics{pdfs/main-figure2.pdf} \label{fig:camera-scenario} } \subfloat[]{ \includegraphics{pdfs/main-figure3.pdf} \label{fig:scenario} } \caption{Different outputs of MaxRay: \protect\subref{fig:camera-scenario} the rendered camera image snapshot \protect\subref{fig:scenario} the 2D experiment setup, with the \ac{AGV} moving with constant y coordinate.} \label{fig:ExampleScenario} \vspace{-0.6cm} \end{figure} To show the algorithmic challenges we first consider a static scenario: Fig.~\ref{fig:camera-scenario} depicts the rendered camera image for a specific time instant (frame), and Fig.~\ref{fig:scenario} shows the \ac{AGV} movement within the experiment. Note that the floor map corresponds to a typical future production cell \cite{Arnold2021}, consisting of four robots in a specific configuration and an \ac{AGV} transporting materials within it. For this investigation we use a carrier frequency of $f_c=\SI{3.75}{\giga \hertz}$, a bandwidth of \SI{100}{\mega \hertz}, a $1\times4$ \ac{MIMO} configuration using a linear patch array, 100 symbols and 1024 subcarriers. \begin{figure*} \subfloat[]{ \includegraphics{pdfs/main-figure4.pdf} \label{fig:prob_static} } \subfloat[]{ \includegraphics{pdfs/main-figure5.pdf} \label{fig:sinr_static} } \subfloat[]{ \includegraphics{pdfs/main-figure6.pdf} \label{fig:prom_static} } \subfloat[]{ \includegraphics{pdfs/main-figure7.pdf} \label{fig:iso_static} } \caption{Comparison of different metrics for the performance estimation of clutter removal. \protect\subref{fig:prob_static} Probability of detection. \protect\subref{fig:sinr_static} SINR. \protect\subref{fig:prom_static} Prominence. \protect\subref{fig:iso_static} Isolation.} \label{fig:MetricsComparison} \vspace{-0.6cm} \end{figure*} \subsection{Clutter Removal} Clutter is defined in general as the interference, noise and reflections from unwanted targets. The task of clutter removal is therefore to remove any signal component not affected by the \ac{AGV}. \subsubsection{Reference method} The reference clutter removal method is based on measuring the averaged environment with and without the \ac{AGV}, yielding $\vec{H}$ and $\vec{H}_{\text{ref}}$ respectively. Then, we subtract the reference $\vec{H}_{\text{ref}}$ from the measurement. Fig.~\ref{fig:exmaple-diff-clutter} depicts (left) the original 2D-\ac{MUSIC} plot and (middle) the output of clutter removal using the reference method. The wanted target (\ac{AGV}) at a range of $\approx$ \SI{28}{\metre} and 50\textdegree $ $ is clearly visible. It can be shown that the clutter removal works very well as long as the scenario is kept unchanged. \subsubsection{Dynamic method} Another method to remove clutter and to detect a moving object is to estimate the phase shift of the individual impulses over time, \begin{equation} \Delta \vec{h} = \sum_{p=0}^{N_\text{sub}-1}\vec{H}^{p,0}e^{\frac{-j2\pi p}{N_\text{sub}}}- \sum_{p=0}^{N_\text{sub}-1}\vec{H}^{p,N_\text{symb}}e^{\frac{-j2\pi p}{N_\text{sub}}}. \end{equation} Using the condition that an impulse not affected by movement should show no phase difference over the frame (besides noise), the impulses in the time domain with $\Delta \vec{h} \leq \epsilon$ are set to zero, where $\epsilon$ is arbitrarily set to 1\%. Fig.~\ref{fig:exmaple-diff-clutter} shows the impact of this clutter removal technique (right), which partly removes the clutter. Thus, this technique enhances the \ac{RADAR} image by de-cluttering. The importance of finding a suitable metric to compare these two clutter-removal techniques is emphasized by the observation that the visual effect alone does not allow numerically assessing their performance for fine-granular comparisons; a code sketch of both techniques is given below.
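Both techniques amount to a few lines of code. The sketch below generalizes the dynamic method by comparing the full impulse responses of the first and last symbol; interpreting the 1\% threshold relative to the strongest tap is our assumption.

\begin{verbatim}
import numpy as np

def reference_removal(H, H_ref):
    """Subtract a calibrated empty-scene measurement (reference method)."""
    return H - H_ref

def dynamic_removal(H, eps=0.01):
    """Zero the impulse-response taps whose response does not change
    between the first and the last OFDM symbol (dynamic method)."""
    h = np.fft.ifft(H, axis=0)                 # time-domain taps per symbol
    delta = np.abs(h[:, 0] - h[:, -1])         # per-tap change over frame
    static = delta <= eps * np.max(np.abs(h[:, 0]))
    h[static, :] = 0.0                         # remove static clutter taps
    return np.fft.fft(h, axis=0)
\end{verbatim}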
\begin{figure}[t] \centering \includegraphics{pdfs/main-figure8.pdf} \caption{Example of clutter removal: left the original image, middle the result of the reference method, and right the result of the dynamic method.} \label{fig:exmaple-diff-clutter} \vspace{-0.6cm} \end{figure} \subsection{Metrics} Although different metrics have been introduced for vision-based systems, \ac{RADAR} images have the unique twist of not knowing if and how \begin{itemize} \item the target is "seen" by the receiver (there is a reflection), \item the size of the objects impacts the received image, \item the exact position and reflection are accounted for. \end{itemize} To demonstrate the advantages and disadvantages of these techniques we first leverage a static frame, where the ground truth is known. In general, we assume that peaks can be detected if their amplitude/power is at least 10\% higher than the average power of the \ac{RADAR} image. We consider four different metrics: the probability of detection, which emerged from vision technologies, the \ac{SINR} metric from communication, and the prominence and isolation from geology. \subsubsection{Probability of Detection} The probability of detection is defined as \begin{equation} P_D = \frac{N_\text{detected}}{N_\text{iteration}} \cdot 100 \% \end{equation} where $N_\text{detected}$ is the number of times the \ac{AGV} was detected within a range of $\lambda$ around the true target and $N_\text{iteration}$ is the number of experiment runs. Plotting this metric over the movement of the \ac{AGV} reveals potential blockers and can thus be leveraged to enhance communication. This metric is also heavily impacted by the allowed range difference, making erroneous peaks possible in low \ac{SNR} regions. \subsubsection{SINR} Another metric is the \ac{SINR} \begin{equation} \Gamma = \frac{P_\text{Signal}}{\sum P_{S}}, \end{equation} where the power at the specific position in the \ac{RADAR} image, $P_\text{Signal}$, is divided by the remaining power $P_S$ in the set $S$ outside the $\lambda$ circle. Note that this metric treats unwanted targets as interference and thus punishes clutter-removal techniques which keep background targets (dynamic method). \subsubsection{Prominence} Prominence \begin{equation} \kappa = \frac{P_\text{Signal}}{P_\text{c}} \end{equation} is a metric mostly used for topography maps; it relates the wanted signal peak $P_\text{Signal}$ to the level $P_\text{c}$ of the circumference around this peak where the gradient changes its sign. \subsubsection{Isolation} Isolation \begin{equation} \iota = \left|\vec{p}_\text{peak} - \vec{p}_\text{closest,peak} \right|^2 \end{equation} is given as the distance between the signal peak $\vec{p}_\text{peak}$ and the next peak $\vec{p}_\text{closest,peak}$. Note that this metric is unbounded if only one target is present.
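These peak-based metrics can be computed directly from a cut through the RADAR map. The sketch below uses SciPy's peak utilities; mapping SciPy's prominence (peak height minus the surrounding saddle level) onto the ratio $\kappa$ above is our interpretation, as is the 10\% detection threshold.

\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks, peak_prominences

def detection_probability(n_detected, n_iterations):
    return 100.0 * n_detected / n_iterations           # P_D in percent

def prominence_metric(spectrum, peak_idx):
    """Relates the peak power P_signal to the surrounding contour level
    P_c, recovered from SciPy's prominence = P_signal - P_c."""
    prom = peak_prominences(spectrum, [peak_idx])[0][0]
    p_signal = spectrum[peak_idx]
    return p_signal / (p_signal - prom)                # kappa

def isolation_metric(spectrum, peak_idx):
    """Squared distance (in bins) to the nearest other detected peak;
    unbounded (inf) if the target is the only peak."""
    peaks, _ = find_peaks(spectrum, height=1.1 * spectrum.mean())
    others = peaks[peaks != peak_idx]
    return np.inf if others.size == 0 else float(np.min((others - peak_idx) ** 2))
\end{verbatim}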
Fig.~\ref{fig:MetricsComparison} depicts, for this simple case, the performance of the different clutter removal techniques under the mentioned metrics. It can be seen that, since noise can create erroneous peaks at the wanted position, the probability of detection is falsely large at low \ac{SNR}, but converges to the real probability in the high \ac{SNR} region. As seen from the visualization in the clutter removal section, the reference method is better than the dynamic removal, but the latter still achieves much better results than no removal at all. Thus, both techniques seem viable at first. The \ac{SINR} metric punishes the fact that the second peak of the clutter is not removed; thus no real gain is shown, and it seems to be an unsuitable metric (comp. Fig.~\ref{fig:exmaple-diff-clutter}). The prominence in the corresponding sub-figure does not show the low-\ac{SNR} error, as the prominence in noise is almost zero. Further, it shows the respective gains of the clutter removal, rendering it a very viable metric for sensing algorithms. The isolation in the last sub-figure shows the gains, but becomes unbounded when the SINR is high enough, due to only one remaining peak. In conclusion, prominence, being bounded between zero and one, incorporates all underlying effects and is suitable for most classical and \ac{DL} techniques. \subsection{Dynamic environment} \begin{figure}[t] \centering \includegraphics{pdfs/main-figure9.pdf} \caption{Probability of detection over time for the two clutter removal techniques} \vspace{-0.2cm} \label{fig:prob_over_time} \vspace{-0.4cm} \end{figure} Fig.~\ref{fig:prob_over_time} shows the probability of detecting the \ac{AGV} using the two clutter removal techniques. It can be seen that the reference case only has two dips, at exactly the positions of the robotic arms (i.e. blockage). Further, due to the limited angular resolution the \ac{AGV} cannot always be resolved. In the case of the dynamic clutter removal, the channel impulse response in some cases cannot resolve the target, making it unsuitable for perfect tracking. Note that tracking over time can enhance this method. In this case communication can benefit from sensing systems by predicting blockage, and therefore performance drops, beforehand. \subsection{Environmental movement} Fig.~\ref{fig:environmental_change} depicts the performance if one of the robot arms moves while the \ac{AGV} moves, degrading the performance of the reference method. This is due to the fact that the reference method only incorporates one specific setting, and the reflections within the environment change drastically if the environment changes. Notably, the performance of the dynamic clutter removal stays constant and even improves slightly, due to a better separation of the static parts. Overall the dynamic case seems to achieve the most promising results, which could be used to calculate the \ac{RADAR} image.
\begin{figure}[t] \centering \includegraphics{pdfs/main-figure10.pdf} \caption{Average performance of clutter removal algorithms in the case of environmental movement.} \vspace{-0.2cm} \label{fig:environmental_change} \vspace{-0.4cm} \end{figure} \chapter*{Notations \label{chap:Notations}} \addcontentsline{toc}{chapter}{Notations} \markboth{Notations}{Notations} \setlength\extrarowheight{1em} \begin{flushleft} \begin{longtable}{>{\raggedright}p{0.3\textwidth} >{\raggedright}p{0.69\textwidth}} $x$ & scalar variables, especially in time domain: italic lower case letters \tabularnewline $\mathbf{X}$ & matrix variables: boldface upright upper case letters \tabularnewline $x\left(t\right)$ & functions of continuous variables: argument is placed in round parentheses \tabularnewline $x_{k}$ & scalar elements of a vector or element of a sequence: element index is a subscript \tabularnewline $X_{ij}$ & scalar elements of a matrix: row and column indices are subscripts \tabularnewline $\oplus$ & exclusive disjunction (xor) \tabularnewline $x \ast h $ & convolution of $x$ and $h$ \tabularnewline $P[x<X]$ & probability of $x$ being smaller than $X$ \tabularnewline $\mathbb{E}[X]$ & expected value of $X$ \tabularnewline $\bf{H} \odot \bf{I}$ & Hadamard product of matrix $\bf{H}$ and $\bf{I}$ \tabularnewline $j$ & imaginary unit \tabularnewline $<.>$ & Iverson brackets, turning the statement into Boolean \tabularnewline $\bf{H}^{\dagger}$ & pseudo inverse of $\bf{H}$ \end{longtable} \par \end{flushleft} \setlength\extrarowheight{0em}
\section{Introduction} In noble liquid detectors for dark matter searches \cite{Chepel13} and low-energy neutrino experiments \cite{Majumdar21}, the scattered particle produces two types of signals: that of primary scintillation, produced in the liquid and recorded promptly ("S1"), and that of primary ionization, produced in the liquid and recorded with a delay ("S2"). In two-phase (liquid-gas) detectors \cite{Akimov21}, the S2 signal is recorded via proportional electroluminescence (EL) produced by drifting electrons in the gas phase under sufficiently high electric fields. According to modern concepts~\cite{Buzulutskov20}, there are three mechanisms responsible for proportional EL in noble gases: that of excimer (e.g. Ar$^*_2$) emission in the vacuum ultraviolet (VUV) \cite{Oliveira11}, that of emission due to atomic transitions in the near infrared (NIR) \cite{Oliveira13,Buzulutskov17}, and that of neutral bremsstrahlung (NBrS) emission in the UV, visible and NIR range \cite{Buzulutskov18}. These three mechanisms are referred to as excimer (ordinary) EL, atomic EL and NBrS EL, respectively. NBrS EL is due to bremsstrahlung of drifting electrons scattered on neutral atoms: \begin{eqnarray} \label{Rea-NBrS-el} e^- + \mathrm{A} \rightarrow e^- + \mathrm{A} + h\nu \; . \end{eqnarray} The presence of NBrS EL in two-phase Ar detectors was demonstrated for the first time in our previous work~\cite{Buzulutskov18}, both theoretically and experimentally. Recently, a similar theoretical approach has been applied to all noble gases, i.e. to He, Ne, Ar, Kr and Xe, to calculate the photon yields and spectra of NBrS EL \cite{Borisova21}. NBrS EL in noble gases was further studied experimentally in \cite{Bondar20,Tanaka20,Kimura20,Takeda20,Takeda20a,Aoyama21,Aalseth21,Monteiro21} and theoretically in \cite{Amedo21}. On the other hand, much less is known about proportional EL in noble liquids \cite{Buzulutskov20,Masuda79,Schussler00,Aprile14,Ye14,Lightfoot09,Stewart10}. In a sense, the experimental data are even confusing. Indeed, in liquid Ar the observed threshold in the electric field for proportional EL, of about 60 kV/cm \cite{Buzulutskov20,Lightfoot09}, was 2 orders of magnitude less than expected for excimer EL \cite{Stewart10}. In liquid Xe, the EL threshold was more reasonable, around 400 kV/cm, but some puzzling EL events were observed below this threshold \cite{Aprile14}. In our previous works \cite{Buzulutskov18,Buzulutskov20} it was suggested that these puzzling events at unexpectedly low fields might be induced by proportional EL produced by drifting electrons in the noble liquid due to the NBrS effect, the latter having no threshold in the electric field. In this work we verify this hypothesis: namely, we extend the theoretical approach developed for noble gases to noble liquids, in order to develop a quantitative theory that can predict the photon yields and spectra of NBrS EL in all noble liquids. What is new in this work is that the electron energy and transport parameters in noble liquids are calculated in the framework of the rigorous Cohen-Lekner \cite{Cohen67} and Atrazhev \cite{Atrazhev85} theory. In this theory, the electron transport through the liquid is considered as a sequence of single scatterings on an effective potential. Therefore, such a parameter as the electron scattering cross section can be used in the liquid in a way similar to that of the gas \cite{Akimov21}.
An important concept of the theory is the distinction between energy transfer scattering, which changes the electron energy, and momentum transfer scattering, which only changes the direction of the electron velocity. Both processes have been assigned separate cross sections \cite{Cohen67,Atrazhev85,Stewart10}: that of energy transfer (or effective) scattering and that of momentum transfer. These are obvious analogs of those in the gas, namely of the total elastic and the momentum transfer (transport) cross sections, respectively. The latest modifications of the theory can be found elsewhere \cite{Boyle15,Boyle16}. Accordingly, in this work the photon yields and spectra are calculated for NBrS EL in all noble liquids: in liquid He, Ne, Ar, Kr and Xe. The relevance of the results obtained to the development of noble liquid detectors for dark matter searches and neutrino detection is also discussed. \section{Theoretical formulas} To calculate the photon yields and spectra for NBrS EL in noble liquids we used the approach developed for noble gases in~\cite{Buzulutskov18}. Let us briefly recall its main points. The differential cross section for NBrS photon emission is expressed via the electron-atom total elastic cross section ($\sigma _{el}(E)$)~\cite{Buzulutskov18,Park00,Firsov61,Kasyanov65,Dalgarno66,Biberman67}: \begin{eqnarray} \label{Eq-sigma-el} \frac{d\sigma}{d\nu} = \frac{8}{3} \frac{r_e}{c} \frac{1}{h\nu} \left(\frac{E - h\nu}{E} \right)^{1/2} \times \hspace{40pt} \nonumber \\ \times \ [(E-h\nu) \ \sigma _{el}(E) \ + \ E \ \sigma _{el}(E - h\nu) ] \; , \end{eqnarray} where $r_e=e^2/m c^2$ is the classical electron radius, $c=\nu \lambda$ is the speed of light, $E$ is the initial electron energy and $h\nu$ is the photon energy. To be able to compare results at different medium densities and temperatures, we need to calculate the reduced EL yield ($Y_{EL}/N$) as a function of the reduced electric field ($\mathcal{E}/N$), where $\mathcal{E}$ is the electric field and $N$ is the atomic density. The reduced EL yield is defined as the number of photons produced per unit drift path and per drifting electron, normalized to the atomic density; for NBrS EL it can be described by the following equation \cite{Buzulutskov18}: \begin{eqnarray} \label{Eq-NBrS-el-yield} \left( \frac{Y_{EL}}{N}\right)_{NBrS} = \int\limits_{\lambda_1}^{\lambda_2} \int\limits_{h\nu}^{\infty}\frac{\upsilon_e}{\upsilon_d} \frac{d\sigma}{d\nu} \frac{d\nu}{d\lambda} f(E) \ dE \ d\lambda \; , \end{eqnarray} where $\upsilon_e=\sqrt{2E/m_e}$ is the electron velocity of chaotic motion, $\upsilon_d$ is the electron drift velocity, $\lambda_1-\lambda_2$ is the sensitivity region of the photon detector, $d\nu/d\lambda=-c/\lambda^2$, and $f(E)$ is the electron energy distribution function normalized as \begin{eqnarray} \label{Eq-norm-f} \int\limits_{0}^{\infty} f(E) \ dE = 1 \; . \end{eqnarray} The distribution function with a prime, $f^\prime=f/E^{1/2}$, is often used instead of $f$, normalized as \begin{eqnarray} \label{Eq-norm-fprime} \int\limits_{0}^{\infty} E^{1/2} f^\prime(E) \ dE = 1 \; . \end{eqnarray} $f^\prime$ is considered to be more enlightening than $f$, since in the limit of zero electric field it tends to the Maxwellian distribution. Consequently, the spectrum of the reduced EL yield is \begin{eqnarray} \label{Eq-NBrS-el-yield-spectrum} \frac{d (Y_{EL}/N)_{NBrS}}{d\lambda} = \int\limits_{h\nu}^{\infty}\frac{\upsilon_e}{\upsilon_d} \frac{d\sigma}{d\nu} \frac{d\nu}{d\lambda} f(E) \ dE \ \; .
\end{eqnarray} In our previous works \cite{Buzulutskov18,Borisova21}, the electron energy distribution function and drift velocity in noble gases, at a given reduced electric field, were calculated using a Boltzmann equation solver \cite{Hagelaar05}. In this work, we exactly follow the Atrazhev paper \cite{Atrazhev85} to calculate the electron energy distribution function and drift velocity in noble liquids. Another modification is that the total elastic cross section in Eq.~\ref{Eq-sigma-el} is replaced with the energy transfer cross section for electron transport through the liquid. With these two modifications, all the Eqs.~\ref{Eq-sigma-el},\ref{Eq-NBrS-el-yield},\ref{Eq-norm-f},\ref{Eq-NBrS-el-yield-spectrum} can be directly applied to noble liquids. \section{Cross sections, electron energy distribution functions and drift velocities in noble liquids} According to the Cohen-Lekner and Atrazhev theory, the drift and heating of excess electrons by an external electric field in the liquid are determined by two parameters, the collision frequency of energy transfer ($\nu_{e}$) and that of momentum transfer ($\nu_{m}$)~\cite{Atrazhev85}: \begin{eqnarray} \label{Eq01} \nu_{e} = \delta N \sigma_{e}(E)(2E/m)^{1/2} \: , \\ \nu_{m} = N \sigma_{m}(E)(2E/m)^{1/2} \: , \\ \sigma_{m}(E) = \sigma_{e}(E)\widetilde{S}(E) \,. \end{eqnarray} \noindent Here $N$ is the atomic density of the medium; $E$ is the electron energy; $\delta = 2m/M$ is twice the electron-atom mass ratio; $\sigma_{e}(E)$ and $\sigma_{m}(E)$ are the energy transfer (effective) and momentum transfer electron scattering cross sections in the liquid, respectively; $\widetilde{S}(E)$ is a function that takes the liquid structure into account. To calculate the collision frequencies one needs to know $\sigma_{e}(E)$ and $\sigma_{m}(E)$; for liquid Ar, Kr and Xe these were given in \cite{Atrazhev85}: see Fig.~\ref{fig01} (top). For comparison, Fig.~\ref{fig01} (bottom) presents the total elastic cross sections for gaseous Ne, Ar, Kr, and Xe taken from the BSR database~\cite{DBBSR}; since for He it is not available, the momentum transfer cross section taken from the Biagi database~\cite{DBBiagi} is shown instead. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig01a} \includegraphics[width=0.99\columnwidth]{fig01b} \caption{Top: Electron scattering cross sections in liquid Ar, Kr and Xe as a function of electron energy, namely that of energy transfer (or effective), $\sigma_{e}$, and that of momentum transfer, $\sigma_{m}$, both taken from~\cite{Atrazhev85}. Bottom: Electron scattering cross sections in noble gases as a function of electron energy: that of total elastic for Ne, Ar, Kr, and Xe, taken from the BSR database~\cite{DBBSR}, and that of momentum transfer for He, taken from the Biagi database~\cite{DBBiagi}.} \label{fig01} \end{figure} The electron distribution function $f^\prime(E)$ in a strong electric field is expressed via both collision frequencies \cite{Atrazhev85}: \begin{eqnarray} \label{Eq02} f^\prime(E) = f(0) \exp\left(-\int\limits_{0}^{E} \frac{3m\nu_{e}(E)\nu_{m}(E)}{2e^{2}\mathcal{E}^{2}}dE\right). \end{eqnarray} The constant $f(0)$ is determined from the normalization condition of Eq.~\ref{Eq-norm-fprime}. Using the electron energy distribution functions, one can calculate the electron drift velocity in the liquid \cite{Atrazhev85}: \begin{eqnarray} \label{Eq03} \upsilon_d = -\frac{2}{3}\frac{e\mathcal{E}}{m} \int\limits_{0}^{\infty} \frac{E^{3/2}}{\nu_{m}(E)} \frac{df^\prime}{dE} dE.
\end{eqnarray} It is shown in Fig.~\ref{fig02} as a function of the reduced electric field, the latter being expressed in Td units: 1~Td~=~$10^{-17}$~V~cm$^2$. It is possible to check the correctness of the distribution functions by comparing the calculated and measured electron drift velocities: this is done in Fig.~\ref{fig02} using the experimental data compiled in~\cite{Miller68}. It can be seen that the theoretical and experimental drift velocities are in reasonable agreement, within a factor of 2, thus confirming the correctness of the calculated distribution functions for liquid Ar, Kr and Xe. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig02} \caption{Comparison of the electron drift velocity ($\upsilon_d$) in liquid Ar, Kr and Xe theoretically calculated in this work (curves) with that measured in experiment \cite{Miller68} (data points). The color of the curve and the data points is the same for a given noble liquid.} \label{fig02} \end{figure} It should be remarked that in light noble liquids, He and Ne, the Cohen-Lekner and Atrazhev theory cannot be applied to calculate the electron energy distribution functions, since the appropriate cross sections for electron transport in the liquid, $\sigma_{e}(E)$ and $\sigma_{m}(E)$, are not available in the literature. Therefore, in the following, a "compressed gas" approximation will be used for these liquids, similar to that developed in \cite{Borisova21}. In this approximation, Eqs.~\ref{Eq-sigma-el},\ref{Eq-NBrS-el-yield},\ref{Eq-norm-f},\ref{Eq-NBrS-el-yield-spectrum} apply directly as for the gas, i.e. with the electron energy distribution function and drift velocity obtained using a Boltzmann equation solver, with the input elastic cross sections taken for the gas from Fig.~\ref{fig01} (bottom), and with the atomic density $N$ equal to that of the liquid. \onecolumn \begin{table*} [h!] \caption{Properties of noble gases and liquids, and parameters of neutral bremsstrahlung (NBrS) electroluminescence (EL) theoretically calculated in this work.} \label{table} \begin{center} \begin{tabular}{p{0.5cm}p{6cm}p{1.5cm}p{1.5cm}p{1.54cm}p{1.5cm}p{1.5cm}} No & Parameter & He & Ne & Ar & Kr & Xe \\ \\ (1) & Boiling temperature at 1.0~atm, $T_b$~\cite{Fastovsky71} (K) & $4.215$ & $27.07$ & $87.29$ & $119.80$ & $165.05$ \\ (2) & Gas atomic density at $T_b$ and 1.0 atm, derived from~\cite{Fastovsky71} (cm$^{-3}$) & $2.37\cdot10^{21}$ & $3.41\cdot10^{20}$ & $8.62\cdot10^{19}$ & $6.18\cdot10^{19}$ & $5.75\cdot10^{19}$ \\ (3) & Liquid atomic density at $T_b$ and 1.0 atm, derived from~\cite{Fastovsky71} and from ~\cite{Theeuwes70} for Xe (cm$^{-3}$) & $1.89\cdot10^{22}$ & $3.59\cdot10^{22}$ & $2.10\cdot10^{22}$ & $1.73\cdot10^{22}$ & $1.35\cdot10^{22}$ \\ (4) & Threshold in electric field for excimer EL in noble liquids, deduced from the corresponding threshold in noble gases by reduction to the atomic density of the liquid, obtained using data of \cite{Borisova21} (kV/cm) & $1134$ & $538$& $840$ & $519$ & $472$\\ (5) & Number of photons for NBrS EL in noble liquids produced by a drifting electron in a 1~mm thick EL gap at $T_b$ and 1.0~atm, at an electric field of 100 kV/cm & $0.13$ & $2.5$& $0.93$ & $1.6$ & $1.1$\\ (6) & The same at 500 kV/cm & $4.3$ & $40$& $12$ & $24$ & $30$\\ \end{tabular} \end{center} \end{table*} \begin{multicols}{2} \twocolumn The values of the atomic densities for the gas and liquid phases at boiling temperatures at 1 atm are presented in Table~\ref{table}.
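As an illustration of how the electron energy distribution function of Eq.~\ref{Eq02} and the drift velocity of Eq.~\ref{Eq03} are evaluated in practice, the following Python sketch can be used. It is our own numerical illustration rather than the actual code used in this work; the trapezoidal quadrature scheme and the function name are assumptions. It computes $f^\prime(E)$ and $\upsilon_d$ in SI units, given the collision frequencies $\nu_e(E)$ and $\nu_m(E)$ tabulated on an energy grid:

\begin{verbatim}
import numpy as np
from scipy.constants import e, m_e
from scipy.integrate import cumulative_trapezoid, trapezoid

def fprime_and_vd(E, nu_e, nu_m, E_field):
    """Electron energy distribution f'(E) and drift velocity v_d (SI units).
    E: energy grid [J], strictly positive; nu_e, nu_m: collision
    frequencies tabulated on that grid [1/s]; E_field: field [V/m]."""
    # exponent of Eq. for f': -int_0^E 3 m nu_e nu_m / (2 (e E_field)^2) dE'
    integrand = 3.0 * m_e * nu_e * nu_m / (2.0 * (e * E_field) ** 2)
    fp = np.exp(-cumulative_trapezoid(integrand, E, initial=0.0))
    fp /= trapezoid(np.sqrt(E) * fp, E)   # normalization of f'
    dfp_dE = np.gradient(fp, E)
    # drift velocity integral; assumes nu_m > 0 on the whole grid
    vd = -(2.0 / 3.0) * (e * E_field / m_e) \
         * trapezoid(E ** 1.5 / nu_m * dfp_dE, E)
    return fp, vd
\end{verbatim}

Comparing the resulting $\upsilon_d$ with the measured drift velocities, as done in Fig.~\ref{fig02}, provides a quick sanity check of any tabulated cross section set.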
We will see in the following, using the example of heavy noble liquids, that the "compressed-gas" approximation works well: the difference in photon yields for NBrS EL between the "liquid" and "compressed-gas" approximations is not that large, remaining within a factor of 1.5. It should also be remarked that all the calculations in this work were performed for atomic densities of the medium, liquid or gas, corresponding to the boiling temperature of a given noble element at 1 atm. \section{Operational range of reduced electric fields in noble liquids for NBrS EL} It is obvious that NBrS EL in noble liquids is much weaker than excimer EL and thus becomes insignificant above the electric field threshold for excimer EL. Table~\ref{table} gives an idea of these thresholds in noble liquids, deduced from the corresponding thresholds in noble gases by reduction to the atomic density of the liquid, obtained using the data of \cite{Borisova21}. To compare with the results for noble gases, one also needs to determine the operational range of reduced electric fields for NBrS EL in noble liquids from the experimental works where it was presumably observed and where the operation electric field can be reliably estimated. Basically, three works fit these conditions: that of \cite{Buzulutskov12}, operating the gas electron multiplier (GEM, \cite{Sauli16}) in liquid Ar, that of \cite{Lightfoot09}, operating the thick GEM (THGEM, \cite{Breskin09}) in liquid Ar, and that of \cite{Aprile14}, operating the thin anode wire in liquid Xe. Deduced from the absolute electric field values given in \cite{Buzulutskov12} and \cite{Aprile14}, the required range of reduced electric fields within which NBrS EL was presumably observed amounts to 0.1-5 Td. In particular, for liquid Ar this range corresponds to electric fields ranging from 21 to 1040 kV/cm. We will restrict our calculations to this range of fields. \section{NBrS EL spectra and yields in noble liquids} Fig.~\ref{fig03} shows the NBrS spectra of the reduced EL yield for liquid Ar, Kr and Xe at different reduced electric fields. The spectra were calculated by numerical integration of Eq.~\ref{Eq-NBrS-el-yield-spectrum}. One can see that the NBrS EL spectra are similar in all noble liquids; moreover, they look almost identical to those obtained in noble gases at the same reduced electric field: compare Fig.~\ref{fig03} to Fig.~10 of \cite{Borisova21} at 5 Td. The spectra are rather flat, extending from the UV to the visible and NIR range at higher reduced electric fields, e.g. at 5 Td. In each noble liquid, the NBrS EL spectrum has a broad maximum that gradually moves to longer wavelengths with decreasing electric field. At lower reduced electric fields, in particular at 0.3 Td corresponding to 60 kV/cm in liquid Ar, the spectra move completely to the visible and NIR ranges. In all noble liquids, the spectra are mostly above 200 nm (in the UV, visible and NIR range), i.e. just in the sensitivity region of commonly used photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs).
\end{multicols} \twocolumn \begin{figure} \includegraphics[width=0.99\columnwidth]{fig03} \caption{Spectra of the reduced EL yield for NBrS EL in liquid Ar, Kr and Xe at different reduced electric fields (0.3, 1 and 5 Td), calculated using Eq.~\ref{Eq-NBrS-el-yield-spectrum}.} \label{fig03} \end{figure} \begin{figure} \includegraphics[width=0.99\columnwidth]{fig04} \caption{Reduced EL yield for NBrS EL at 0-1000 nm in liquid Ar, Kr and Xe as a function of the reduced electric field, calculated in this work in the framework of the Cohen-Lekner and Atrazhev theory using Eq.~\ref{Eq-NBrS-el-yield} (solid lines). For comparison, the reduced yield for NBrS EL at 0-1000 nm in noble gases is shown, calculated in \cite{Borisova21} using a Boltzmann equation solver (dashed lines). The colors of the curves are the same for a given noble element. The top scale shows the corresponding absolute electric field in liquid Ar.} \label{fig04} \end{figure} \begin{figure} \includegraphics[width=0.99\columnwidth]{fig05} \caption{Absolute EL yield (number of photons per drifting electron per 1 cm) for NBrS EL at 0-1000 nm in noble liquids as a function of the absolute electric field, calculated in this work. For heavy noble liquids (Ar, Kr and Xe) the rigorous Cohen-Lekner and Atrazhev theory was used to calculate the electron energy and transport parameters in the liquid, while for light noble liquids (He and Ne) the "compressed gas" approximation was used.} \label{fig05} \end{figure} The EL yield for NBrS EL in noble liquids is presented in Fig.~\ref{fig04}, obtained by numerical integration of Eq.~\ref{Eq-NBrS-el-yield}: the reduced EL yield is shown as a function of the reduced electric field. For comparison, the reduced yield for NBrS EL in noble gases is shown, calculated in \cite{Borisova21} using a Boltzmann equation solver. Surprisingly, this "compressed-gas" approximation, successfully applied before to describe NBrS EL in noble gases, has led to almost the same results as those of the rigorous "liquid" theory in terms of the reduced EL yields and spectra when formally extrapolated to the atomic density of the noble liquid: for a given noble element and given reduced electric field, the difference between them remains within a factor of 1.5 up to a reduced electric field of 5 Td. This fact indicates that the scaling law, stating that the reduced EL yield ($Y/N$) is a function of the reduced electric field ($\mathcal{E}/N$), is valid not only for noble gases but also for noble liquids to some extent, at least as far as the NBrS EL effect is concerned. It also indicates the applicability of the "compressed gas" approximation to noble liquids at moderate reduced electric fields, below 5 Td, thus justifying its use for light noble liquids, He and Ne, where the Cohen-Lekner and Atrazhev theory cannot be used due to the lack of data. Furthermore, Fig.~\ref{fig05} shows the practical photon yield suitable for verification in experimental conditions, namely the number of photons produced by a drifting electron per 1 cm in all noble liquids, as a function of the absolute electric field. In this figure, for heavy noble liquids (Ar, Kr and Xe) the rigorous Cohen-Lekner and Atrazhev theory was used to calculate the electron energy and transport parameters in the liquid, while for light noble liquids (He and Ne) the "compressed gas" approximation was used, with the calculations identical to those of \cite{Borisova21}. The appropriate NBrS EL spectra and yields for He and Ne can be found in \cite{Borisova21}.
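To make the numerical integration explicit, the following Python sketch evaluates Eq.~\ref{Eq-sigma-el} and then Eqs.~\ref{Eq-NBrS-el-yield-spectrum} and \ref{Eq-NBrS-el-yield} by trapezoidal quadrature, given $f^\prime(E)$ and $\upsilon_d$ evaluated as in the sketch above. It is our own illustration; the quadrature scheme and helper names are assumptions, and the supplied cross section should be the energy transfer one when applied to the liquid:

\begin{verbatim}
import numpy as np
from scipy.constants import c, h, m_e, physical_constants
from scipy.integrate import trapezoid

r_e = physical_constants['classical electron radius'][0]  # [m]

def dsigma_dnu(E, hnu, sigma):
    """NBrS differential cross section [m^2 s]; E, hnu in J;
    sigma(E) is the electron scattering cross section in m^2."""
    if hnu >= E:
        return 0.0
    return (8.0 / 3.0) * (r_e / c) / hnu * np.sqrt((E - hnu) / E) \
        * ((E - hnu) * sigma(E) + E * sigma(E - hnu))

def el_yield_spectrum(lam, E, fp, vd, sigma):
    """Spectrum d(Y_EL/N)/d(lambda) and reduced yield Y_EL/N [m^2]."""
    spec = np.zeros_like(lam)
    for i, l in enumerate(lam):
        hnu = h * c / l
        sel = E > hnu
        ve = np.sqrt(2.0 * E[sel] / m_e)      # chaotic-motion velocity
        f = np.sqrt(E[sel]) * fp[sel]         # f(E) = E^{1/2} f'(E)
        ds = np.array([dsigma_dnu(Ei, hnu, sigma) for Ei in E[sel]])
        # |dnu/dlambda| = c / lambda^2; sign absorbed by orientation
        spec[i] = trapezoid((ve / vd) * ds * (c / l ** 2) * f, E[sel])
    return spec, trapezoid(spec, lam)
\end{verbatim}

Multiplying the resulting reduced yield by the liquid atomic density $N$ (Table~\ref{table}, item 3) and by the EL gap thickness gives the number of photons per drifting electron, as quoted in items (5) and (6) of Table~\ref{table}.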
Table~\ref{table} (items 5 and 6) gives an idea of the magnitude of the NBrS EL effect in a practical parallel-plate EL gap of 1 mm thickness: at a field of 500 kV/cm the photon yield amounts to about 4, 40, 12, 24 and 30 photons for He, Ne, Ar, Kr and Xe, respectively. On the other hand, at 100 kV/cm the photon yield is reduced by about an order of magnitude, down to about 1 photon per drifting electron in almost all noble liquids. It is remarkable that up to 600 kV/cm, liquid Ne has the highest EL yield for NBrS EL, obviously due to its much lower elastic cross section at electron energies between 1 and 10 eV compared to other noble elements (see Fig.~\ref{fig01} (bottom)), resulting in stronger electron heating by the electric field and thus in more intense NBrS photon emission. \section{Possible applications and discussion} In order to produce noticeable NBrS EL in noble liquids, one should provide high enough electric fields, ranging from 50 to 500 kV/cm, in practical devices. Based on previous experience, such devices might be GEMs \cite{Buzulutskov12}, THGEMs \cite{Lightfoot09} and thin anode wires \cite{Aprile14}. A parallel-plate EL gap of 1 mm thickness can also be considered, albeit not yet tested in a real experiment in noble liquids at such high fields. It should be remarked that a larger EL gap thickness, e.g. 1 cm, can hardly be used in practice due to the existing limit on high-voltage breakdowns in noble liquids: the absolute voltage before breakdown cannot exceed values of about 100 kV in liquid He~\cite{Gerhold94} and several hundred kV in other noble liquids \cite{Buzulutskov20,Auger16,Tvrznikova19}. It looks natural to use GEMs or THGEMs as EL plates instead of parallel-plate EL gaps in noble liquids, since the former are more resistant to breakdowns than the latter. Note that the NBrS EL spectrum is mostly in the visible and NIR range: see Fig.~\ref{fig03}. This implies a possible practical application of NBrS EL in noble liquid detectors, namely the method of direct optical readout of the S2 signal in the visible range, i.e. without using a wavelength shifter (WLS). A similar technique has recently been demonstrated in a two-phase Ar detector with direct SiPM-matrix readout using NBrS EL in the gas phase \cite{Aalseth21}. These results have led us to the idea of using THGEM plates in combination with SiPM matrices that have high sensitivity in the visible and NIR range, to optically record the S2 signal in single-phase noble liquid detectors for dark matter search and neutrino experiments. In addition, the recently proposed transparent very-thick GEM \cite{Kuzniak21} can be used as an EL plate, with enhanced light collection efficiency. We can verify the theory of NBrS EL in noble liquids by experiments where it was presumably observed, where the electric field is explicitly known and where it is known how to convert the emitted photons into recorded photoelectrons. At first glance, only two works satisfy these criteria: \cite{Lightfoot09} and \cite{Aprile14}. In particular, in \cite{Lightfoot09} the operation electric field in the center of the THGEM hole (1.5 mm height), of 60 kV/cm \cite{Buzulutskov12}, corresponds to $\mathcal{E}/N$=0.3~Td in liquid Ar, resulting in about 0.6 photons per drifting electron predicted by the NBrS EL theory according to Figs.~\ref{fig04} and \ref{fig05}. However, this is more than 2 orders of magnitude smaller than the light gain reported in \cite{Lightfoot09}.
We therefore suggest interpreting the results of \cite{Lightfoot09} as caused by the presence of gas bubbles associated with the THGEM holes, inside which proportional EL in the gas phase took place, similarly to what happens in Liquid Hole Multipliers \cite{Erdal20}. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig06} \caption{Number of photoelectrons recorded in liquid Xe by a PMT as a function of the voltage on the $10~\mu m$ thick anode wire \cite{Aprile14}: the experimental data (data points) and a linear fit of proportional EL to the data (solid line) are shown, the latter defining the threshold of excimer EL. The top scale shows the corresponding reduced electric field on the anode wire surface. For comparison, the theoretical assessment of the number of photoelectrons due to NBrS EL obtained in this work is shown (area between dashed lines).} \label{fig06} \end{figure} In \cite{Aprile14}, where puzzling EL events were observed in liquid Xe below the threshold of excimer EL, the operation fields near the anode wire were much higher, around 400 kV/cm. Fig.~\ref{fig06} shows the experimental data and a linear fit of proportional EL to the data, the latter defining the threshold of excimer EL. In addition, the experimental conditions were explicitly described. This allowed us to predict the number of photoelectrons recorded by the PMT due to NBrS EL, although with some difficulties associated with the highly inhomogeneous field near the wire. Due to the latter, Eq.~\ref{Eq-NBrS-el-yield}, if applied directly, gives only a lower limit of the event amplitude, since it does not take into account the electron diffusion, which significantly increases the travel time of the electron to the wire and thus the overall photon yield. We tried to take the diffusion effect into account: as a result, the theoretical prediction in Fig.~\ref{fig06} is shown in the form of an area between two dashed curves, thus setting the theoretical uncertainty. Within this uncertainty, the NBrS EL theory describes well the puzzling under-threshold events, namely their absolute amplitudes and the dependence on the anode voltage, which might be treated as the first experimental evidence for NBrS EL in noble liquids. \section{Conclusion} In this work we systematically studied the effect of neutral bremsstrahlung (NBrS) electroluminescence (EL) in all noble liquids: the photon yields and spectra for NBrS EL have for the first time been theoretically calculated in liquid He, Ne, Ar, Kr and Xe. For heavy noble liquids, the calculations were done in the framework of the Cohen-Lekner and Atrazhev theory describing the electron energy and transport parameters in the liquid medium. Surprisingly, the "compressed-gas" approximation, successfully applied before to describe NBrS EL in noble gases, has led to almost the same results as those of the rigorous "liquid" theory in terms of the reduced EL yields and spectra when formally extrapolated to the atomic density of the noble liquid. The predicted magnitude of the NBrS EL effect in a practical parallel-plate EL gap of 1 mm thickness is noticeable: at a field of 500 kV/cm the photon yield amounts to 12, 30 and 40 photons per drifting electron in liquid Ar, Xe and Ne, respectively. The NBrS EL spectra in noble liquids are in the visible and NIR range.
The practical applications of the results obtained might be the use of THGEMs as EL plates in combination with SiPM matrices, to optically record the S2 signal in single-phase noble liquid detectors for dark matter search and neutrino experiments. \acknowledgments This work was supported by the Russian Science Foundation (project no. 19-12-00008). It was done within the R\&D program of the DarkSide-20k experiment. \bibliographystyle{eplbib}
\section{Introduction} The existence of non-Abelian anyons in two-dimensional condensed matter systems has been attracting growing attention due to their remarkable features and potential quantum-mechanical applications~\cite{lei77,wil82m,wil82q,nay08}. When non-Abelian anyons are exchanged by braiding them along their world lines, their wavefunction exchange behaviors are described by a unitary matrix intrinsically different from that of exchanging fermions or bosons~\cite{wil90,fre89}. Since matrix operations are generally non-commutative, the braiding results of non-Abelian anyons depend on the order of the braiding operations, which is predicted to be useful for intriguing applications~\cite{nay08}. Although non-Abelian braiding has been dominantly investigated in condensed matter systems~\cite{lut18,wan18,hua21,wil13,bar20,nak20}, its underlying mechanism can be related to the multi-mode geometric-phase effect in classical systems~\cite{zu14,che22}, indicating its universality and compatibility with photonics. Since the very first use of the geometric-phase effect in controlling light polarization in Pancharatnam’s study~\cite{pan56}, various applications have successfully leveraged the single-mode Pancharatnam-Berry phase for light manipulations, such as light steering with metasurfaces~\cite{zhu20}. However, the exploration of the physical consequences and useful effects of the multi-mode geometric-phase matrix has remained elusive until now. We show here that such effects can be demonstrated by introducing the concept of non-Abelian braiding into photonics. Recently, a variety of novel non-Abelian phenomena in photonic systems have been predicted and realized~\cite{dal11,oza19,umu13,iad16,dut18,bor19,kre19,che19,yan19,guo21,xu16,xu18,ma16,yan20,bro21,wan21}, including the synthesis of non-Abelian gauge fields~\cite{che19,yan19}, the observation of non-Abelian topological charges~\cite{guo21}, and the simulation of Majorana zero modes in bulk optical systems~\cite{xu16,xu18}. A recent study reports the realization of braiding in the dynamic evolution of one topological mode in a photonic lattice~\cite{noh20}, which produced a phase factor of Abelian nature. However, state permutations have still not been directly observed. One of the main challenges lies in the fact that non-Abelian braiding must operate on at least three degenerate states. Therefore, a systematic approach and a versatile, reliable photonic non-Abelian system are highly desirable for exploring the braiding-induced multi-mode geometric-phase effect and inspiring novel applications for photon and light manipulations. In this work, we experimentally realize the non-Abelian braiding of multiple photonic modes on photonic chips. The system is composed of evanescently coupled photonic waveguides, wherein the evolution of photons follows a Schrödinger-like paraxial equation~\cite{rec13,kla19}. Our scheme leverages chiral symmetry to ensure the degeneracy of multiple zero modes, and drives them in simultaneous adiabatic evolution that induces a unitary geometric-phase matrix that swaps photon dwell sites. On-chip non-Abelian braiding of up to five modes is observed with both classical light and single photons. We further show that our scheme can be straightforwardly expanded to realize the braiding of even more modes, making the system a versatile platform for studying non-Abelian physics as well as inspiring applications of on-chip non-Abelian photonic devices.
\begin{figure*}[th] \centering \includegraphics[width=12cm]{fig1.jpg} \caption{\label{f1}\textbf{Two-mode braiding in photonic waveguides.} \textbf{a,} A schematic diagram of the braiding structure consisting of four waveguides. The inset shows a photograph of the cross-section of the fabricated waveguides. \textbf{b,} The modulation profiles of the waveguide separations (upper) and the corresponding coupling coefficients (lower). \textbf{c,} The eigenvalues of the system as functions of $\kappa_{AX}$ and $\kappa_{SX}$ with $\kappa_{BX}=0$. The blue sheet represents two-fold degenerate states, on which the trajectory depicts the evolution of step I. The corresponding mode exchanges are shown in the insets. \textbf{d,} The calculated state vectors along the braiding direction by setting $\beta_0=0$, for injection at waveguide A (upper) and B (lower). \textbf{e,} The measured light diffraction patterns at the output for injection at waveguide A (left) and B (right). \textbf{f,} The measured coincidences per second at each output waveguide for single-photon injection at different waveguides. } \end{figure*} \section{Results and discussion} \subsection{Two-mode photonic braiding in coupled photonic waveguides} We begin our discussion with a two-mode braiding. Figure~\ref{f1}a illustrates a photonic chip housing the braiding structure consisting of three straight waveguides (A, B, and S) and a curved waveguide X. The straight waveguides have a length of $L = 50\ mm$. These waveguides were fabricated inside boroaluminosilicate glass using femtosecond laser direct writing techniques~\cite{dav96,yu21}, which can induce a refractive index contrast of $\sim 2.5\times 10^{-3}$ between the waveguide and the background. The cross-section of each waveguide measures $\sim6.9\ \mu m\times 5.3\ \mu m$, which meets the single-mode condition for photons polarized along the y-axis at a wavelength of $\sim 810\ nm$. Figure~\ref{f1}b shows the designed center-to-center gap distance $g_{iX}\ \left(i=A,B,S\right)$ between waveguide $i$ and waveguide X and the corresponding coupling coefficient $\kappa_{iX}$. The waveguides A, B, and S are sufficiently separated so that the direct coupling among them is negligible. When the modulations of $\kappa_{iX}$ are sufficiently slow, the dynamics of photon propagation in the waveguides follow a Schrödinger-like equation $H(z)|\psi(z)\rangle=-i\partial_z|\psi(z)\rangle$, where the Hamiltonian reads \begin{equation}\label{e1} H\left ( z \right ) =\begin{bmatrix} \beta_{X} &\kappa_{AX}\left ( z \right ) & \kappa_{BX}\left ( z \right ) & \kappa_{SX}\left ( z \right )\\ \kappa_{AX}\left ( z \right )&\beta_{A} & 0 & 0 \\ \kappa_{BX}\left ( z \right ) & 0 & \beta_{B} & 0 \\ \kappa_{SX}\left ( z \right ) & 0 & 0 & \beta_{S} \end{bmatrix} \end{equation} In our waveguide system, $\beta_{X,A,B,S}=\beta_0$, where $\beta_0$ is the waveguide propagation constant, and $|\psi(z)\rangle=\left[\varphi_X(z),\varphi_A(z),\varphi_B(z),\varphi_S(z)\right]^T$ is the state vector. The two-mode braiding is carried out in three steps as indicated in Fig.~\ref{f1}a,b. In step I, $\kappa_{SX}$ ($\kappa_{AX}$) smoothly decreases (increases) from its maximum (zero) to zero (maximum), while $\kappa_{BX}$ is kept at zero. Figure~\ref{f1}c plots the eigenvalues of Eq.~(\ref{e1}) as functions of $\kappa_{SX}$ and $\kappa_{AX}$, with $\kappa_{BX}=0$.
Two out of the four eigenvalues are independent of the changes in $\kappa_{iX}$ and are always degenerate at a constant eigenvalue $\beta_0$ (see the blue sheet). The two-fold degeneracy is protected by the chiral symmetry $\Gamma^{-1}H\Gamma=-H$ with $\Gamma=\left[\begin{matrix}-1& 0\\0& I_3\end{matrix}\right]$, where $I_3$ is a $3\times3$ identity matrix, when we set $\beta_0=0$. We prepare a single-site injection at waveguide A, i.e., $|\psi(0)\rangle=\left[0,1,0,0\right]^T$ (upper-left inset of Fig.~\ref{f1}c). The adiabatic evolution of $|\psi(z)\rangle$ follows the trajectory on the blue sheet and becomes $|\psi(\frac{1}{3}L)\rangle=\left[0,0,0,-1\right]^T$, which occupies waveguide S (lower-left inset of Fig.~\ref{f1}c) and picks up a geometric phase of $\pi$. On the other hand, a different injection at waveguide B, $|\psi(0)\rangle=\left[0,0,1,0\right]^T\rightarrow|\psi(\frac{1}{3}L)\rangle=\left[0,0,1,0\right]^T$, remains unchanged (right insets in Fig.~\ref{f1}c). The dynamical phases accumulated for both injections are $\beta_{0}L/3$. Therefore, the total phases accumulated in the above two processes differ by $\pi$, which is the geometric phase. Steps II and III can be understood similarly. In step II, the state dwelling in waveguide B is relocated to A and acquires a geometric phase $\pi$, i.e., $|\psi(\frac{1}{3}L)\rangle=\left[0,0,1,0\right]^T\rightarrow|\psi(\frac{2}{3}L)\rangle=\left[0,-1,0,0\right]^T$. In step III, the state occupying waveguide S, with $|\psi(\frac{2}{3}L)\rangle=\left[0,0,0,-1\right]^T$, transfers to the waveguide B state $|\psi(L)\rangle=\left[0,0,1,0\right]^T$ and also obtains the $\pi$ phase. The mode switching behaviors can be verified by the calculated state vectors in the braiding process, as plotted in Fig.~\ref{f1}d. To summarize, the net outcome of the evolution is the swapping of the states in waveguides A and B, i.e., $|\psi(0)\rangle=\left[0,1,0,0\right]^T\rightarrow|\psi(L)\rangle=\left[0,0,1,0\right]^T$ and $|\psi(0)\rangle=\left[0,0,1,0\right]^T\rightarrow|\psi(L)\rangle=\left[0,-1,0,0\right]^T$, and the output states differ by a geometric phase of $\pi$ (the accumulated dynamical phases are both $\beta_{0}L$, which is omitted in the expression). This result can be captured by a unitary matrix $Y=\left[\begin{matrix}0&-1\\1&0\\\end{matrix}\right]$, which is a $U\left(2\right)$ operation also known as the $Y$-gate in quantum logic~\cite{bar95}.
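The dark-state mechanism behind steps I-III can be reproduced numerically. The following Python sketch is our own illustration; the linear coupling ramps, the dimensionless units and the step count are assumptions chosen to keep the evolution adiabatic. It integrates the Schrödinger-like equation with the Hamiltonian of Eq.~(\ref{e1}) at $\beta_0=0$ and confirms the $Y$-gate action $|A\rangle\rightarrow|B\rangle$ and $|B\rangle\rightarrow-|A\rangle$:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def H(kA, kB, kS):
    # Hamiltonian of Eq. (1) with beta_0 = 0, basis ordering (X, A, B, S)
    return np.array([[0,  kA, kB, kS],
                     [kA, 0,  0,  0 ],
                     [kB, 0,  0,  0 ],
                     [kS, 0,  0,  0 ]], dtype=complex)

IDX = {'A': 0, 'B': 1, 'S': 2}   # slots for kappa_AX, kappa_BX, kappa_SX

def braiding_step(psi, src, tgt, kmax=1.0, steps=2000, dz=1.0):
    """One step: the coupling to the target waveguide ramps kmax -> 0
    while the coupling to the source ramps 0 -> kmax, so the zero mode
    adiabatically carries the amplitude from src to tgt."""
    for s in np.linspace(0.0, 1.0, steps):
        k = [0.0, 0.0, 0.0]
        k[IDX[src]], k[IDX[tgt]] = kmax * s, kmax * (1.0 - s)
        psi = expm(1j * H(*k) * dz) @ psi   # H|psi> = -i d_z |psi>
    return psi

for label, psi0 in [('A', [0, 1, 0, 0]), ('B', [0, 0, 1, 0])]:
    psi = np.array(psi0, dtype=complex)
    for src, tgt in [('A', 'S'), ('B', 'A'), ('S', 'B')]:  # steps I-III
        psi = braiding_step(psi, src, tgt)
    print(label, '->', np.round(psi.real, 2))
# expected: A -> [0, 0, 1, 0] and B -> [0, -1, 0, 0], i.e. the Y-gate
\end{verbatim}

Note that the zero modes have exactly zero eigenvalue here, so no dynamical phase accumulates and the $\pi$ phase on one of the two outputs is purely geometric.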
We show experimental results to verify the design. Injections at waveguides A and B are performed with a laser at 808 nm (CNI, MDL-III-808L). The light diffraction patterns at the output facet were recorded using a CCD (XG500, XWJG). The photographs are shown in Fig.~\ref{f1}e, where the swapping of light-dwelling sites is clearly seen. The braiding was also verified in the quantum-mechanical limit by single-photon injections, where indistinguishable pairs of photons at 810 nm were generated using a quantum setup. One photon was injected into the braiding structure via waveguide A or B, while the other one propagated in a single-mode reference optical fiber. We used two avalanche photodetectors to respectively collect the single photons at the output (i.e., waveguide A or B) and the reference fiber. Coincidence measurements were then performed using the collected data. In the results displayed in Fig.~\ref{f1}f, the output waveguide exhibiting the dominant coincidence is always different from the input one, which is clear evidence of the two-mode braiding described by the $Y$-gate. \subsection{Measurement of the geometric phase} To measure the geometric phase, which is a key characteristic of the two-mode braiding, we have designed an interference experiment, as illustrated in Fig.~\ref{f2}a. The structure consists of three stages. In the injection stage, injected photons are equally split into two identical waveguides. Two separate braidings are then carried out in the second stage, which contains two copies of braiding structures identical to the one in Fig.~\ref{f1}a. Two experiments are performed with different configurations. In experiment I (II), injection of the lower braiding structure is via waveguide $\mathrm{B}^{\prime}$ ($\mathrm{A}^{\prime}$). The upper braiding structure is injected at waveguide A for both experiments. After the braiding, in stage III, the two output waveguides are equally split into four arms, two of which are merged again so that photons from the upper and lower clusters can interfere. The three terminal ports are labelled Y1, Y2, and Y3. \begin{figure}[t] \centering \includegraphics[width=8.4cm]{fig2.jpg} \caption{\label{f2}\textbf{Measurement of the geometric phase in two-mode braiding.} \textbf{a,} A schematic diagram of the interference experiments. Two four-waveguide clusters produce the same braiding with different or identical injections, and the interference of their output states at port Y2 reveals the geometric phase. \textbf{b,c,} The measured light diffraction patterns at the output for injections at $\mathrm{A}$ and $\mathrm{B}^{\prime}$ (Exp I) \textbf{(b)} and at $\mathrm{A}$ and $\mathrm{A}^{\prime}$ (Exp II) \textbf{(c)}. In \textbf{(b)}, Y2 has almost no power output, indicating destructive interference. In \textbf{(c)}, Y2 lights up because the two arms interfere constructively.} \end{figure} The results of the two experiments with light are shown in Fig.~\ref{f2}b,c. Strong light intensities are seen at ports Y1 and Y3 in both cases, which indicates the successful braiding-induced mode switching. However, discrepancies are seen at Y2. In experiment I (injections at $\mathrm{A}$ and $\mathrm{B}^{\prime}$), the image at Y2 is dark (Fig.~\ref{f2}b), which suggests destructive interference of the light after braiding. Because the light propagating through the upper and lower braiding structures accumulates the same dynamical phase, this result indicates a phase difference of $\pi$, which can only be the consequence of a geometric phase. In experiment II, for comparison (injections at $\mathrm{A}$ and $\mathrm{A}^{\prime}$), the port Y2 lights up (Fig.~\ref{f2}c), which suggests constructive interference due to the same phase accumulation. These experimental results are strong evidence of the $\pi$ geometric phase difference induced by the two-mode braiding. \begin{figure*}[ht] \centering \includegraphics[width=14cm]{fig3.jpg} \caption{\label{f3}\textbf{Non-Abelian braiding of three modes.} \textbf{a,} A schematic diagram of a $G_2G_1$ braiding configuration. \textbf{b,} The modulation profiles of the coupling coefficients.
\textbf{c,} Experimental results of the $G_2G_1$ braiding (left: braid diagram where the black dashed lines represent the waveguides and the colored lines mark the braiding path of the wavefunctions), including the measured light diffraction patterns in the evolution process (middle) and the output light intensity distributions (right). The red arrows mark the injection and output waveguides. \textbf{d,} Measured light diffraction patterns at the output facet of the $G_1G_2$ braiding. \textbf{e,f,} The measured coincidences per second at each output waveguide with single-photon injections for $G_2G_1$ \textbf{(e)} and $G_1G_2$ \textbf{(f)}. The distinct outcomes of the two configurations are clear evidence of the non-Abelian nature of the braiding.} \end{figure*} \subsection{Three-mode non-Abelian braiding} The two-mode braiding demonstrates the effectiveness of our photonic platform, which we now expand to three modes. The three-mode braid group has two generating operations $G_1:|\psi^\prime\rangle=[\varphi_{1},\varphi_{2},\varphi_{3}]^T\rightarrow[-\varphi_{2},\varphi_{1},\varphi_{3}]^T$ and $G_2:|\psi^\prime\rangle=[\varphi_{1},\varphi_{2},\varphi_{3}]^T\rightarrow[\varphi_{1},-\varphi_{3},\varphi_{2}]^T$, where $|\psi^\prime\rangle$ is a truncated state vector whose elements successively denote the wavefunctions in waveguides A, B and C, respectively. Unlike the permutation of two modes, which has only one possibility, permutations of three or more modes are non-Abelian in character, i.e., $G_2G_1\neq G_1G_2$, which we will next demonstrate. Figure~\ref{f3}a illustrates the schematic diagram of the three-mode braiding structure. Here, the system has seven waveguides. Waveguides A, B, C are sufficiently far apart so that they can only couple via waveguides X1 and X2. The system's Hamiltonian thus has a structure similar to Eq.~(\ref{e1}) but sustains three degenerate zero modes that form the braiding subspace. The waveguide array has a length of 80 mm and is divided into two sections, with the fitted coupling coefficients shown in Fig.~\ref{f3}b. The first section swaps the modes in waveguides A and B, which executes $G_1$. The second section exchanges the modes in B and C, so that the net result is $G_2G_1:[\varphi_{1},\varphi_{2},\varphi_{3}]^T\rightarrow[-\varphi_{2},-\varphi_{3},\varphi_{1}]^T$ (see the braiding diagram in the left panel of Fig.~\ref{f3}c). The experimental results of $G_2G_1$ are summarized in Fig.~\ref{f3}c. We employ a double-exposure-assisted scattering technique, in which point-like scatterers were fabricated inside all the waveguides so that the light passages can be captured by a camera (Zyla 5.5 sCMOS, Andor), as shown in the middle panels. We find that when the injection is at waveguide A, the light successively propagates to waveguides S1, B, S2, and finally outputs at C. In contrast, when injected at waveguide B (C), the output is at A (B). The output patterns are shown in Fig.~\ref{f3}c (right panels), which clearly realize the intended three-mode permutation described by $G_2G_1$. We fabricated another system with the two sections arranged in the opposite order, so that $G_1G_2:[\varphi_{1},\varphi_{2},\varphi_{3}]^T\rightarrow[\varphi_{3},\varphi_{1},\varphi_{2}]^T$ is executed (see the braiding diagram in the inset of Fig.~\ref{f3}d). The experimental results shown in Fig.~\ref{f3}d demonstrate the intended outcome.
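At the matrix level, the non-commutativity of the two operations can be checked directly. A minimal Python sketch follows (our own illustration; the explicit $3\times3$ matrix representations are derived from the definitions of $G_1$ and $G_2$ given above):

\begin{verbatim}
import numpy as np

# G1: [p1, p2, p3] -> [-p2, p1, p3];  G2: [p1, p2, p3] -> [p1, -p3, p2]
G1 = np.array([[0, -1, 0],
               [1,  0, 0],
               [0,  0, 1]])
G2 = np.array([[1, 0,  0],
               [0, 0, -1],
               [0, 1,  0]])

print(G2 @ G1)   # maps [p1, p2, p3] -> [-p2, -p3, p1]
print(G1 @ G2)   # maps [p1, p2, p3] -> [ p3,  p1, p2]
print(np.array_equal(G2 @ G1, G1 @ G2))            # False: non-Abelian
print(np.array_equal(G1 @ G2 @ G1, G2 @ G1 @ G2))  # True: braid relation
\end{verbatim}

The last line also confirms the braid relation $G_1G_2G_1=G_2G_1G_2$ invoked below.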
Comparing Fig.~\ref{f3}c and d, it is evident that $G_2G_1\neq G_1G_2$, which unambiguously demonstrates the non-Abelian nature of the three-mode braiding. The non-Abelian braiding is also successfully realized in single-photon experiments using the same photonic chips. The measured coincidences confirm that single photons also follow non-Abelian braiding, as shown in Fig.~\ref{f3}e,f for $G_2G_1$ and $G_1G_2$, respectively. Another property of three-mode braiding is the equivalence of $G_2G_1G_2$ and $G_1G_2G_1$~\cite{lut18}, both of which induce $[\varphi_{1},\varphi_{2},\varphi_{3}]^T\rightarrow[\varphi_{3},-\varphi_{2},\varphi_{1}]^T$. These braiding operations can be used to design a quantum $X$-gate (by discarding one state after the operation), which is confirmed in separate experiments. \begin{figure*} \centering \includegraphics[width=11cm]{fig4.jpg} \caption{\label{f4}\textbf{Multi-mode braiding.} \textbf{a,} The expandability of the braiding design. \textbf{b,c,} Two examples of five-mode braiding (upper: braid diagram) and their experimental realization with light (lower). \textbf{d,e,} Results of single-photon five-mode braiding experiments.} \end{figure*} \subsection{Expandability of the photonic chips and multi-mode braiding} Comparing the structures of two-mode and three-mode braiding, it becomes clear that the photonic platform can be straightforwardly expanded to realize the braiding of an arbitrary number of modes, as shown in Fig.~\ref{f4}a. As a proof of concept, we present a five-mode braiding design. The targeted braid diagrams are shown in Fig.~\ref{f4}b,c (upper), which depict two operation sequences: $M_4M_3M_2M_1: [\varphi_{1},\varphi_{2},\varphi_{3},\varphi_{4},\varphi_{5}]^T\rightarrow[-\varphi_{2},-\varphi_{3},-\varphi_{4},-\varphi_{5},\varphi_{1}]^T$ and $M_1M_2M_3M_4: [\varphi_{1},\varphi_{2},\varphi_{3},\varphi_{4},\varphi_{5}]^T\rightarrow[\varphi_{5},\varphi_{1},\varphi_{2},\varphi_{3},\varphi_{4}]^T$. The measured diffraction patterns using lasers are given in Fig.~\ref{f4}b,c, and the measured coincidences using single photons are summarized in Fig.~\ref{f4}d,e. The observed permutation outcomes align well with theoretical predictions. We further remark that within each building block (red box in Fig.~\ref{f4}a), only one waveguide needs to be bent to achieve the modulation of the three coupling coefficients. As a result, the top and bottom rows are all identical straight waveguides that can be prefabricated, which makes the design of subsequent braiding quite flexible. With these characteristics and the expandability as demonstrated, our scheme becomes a versatile and convenient on-chip platform for realizing more complex non-Abelian operations. \section{Conclusion} To conclude, we have realized non-Abelian braiding for both classical light and single photons in a photonic on-chip platform. The key characteristics of non-Abelian braiding, namely the permutation of degenerate states and the order-dependent braiding outcomes, are both definitively observed. We remark that our scheme for photonic non-Abelian braiding is a purely geometric-phase effect on multiple degenerate modes protected by chiral symmetry. As a result, the braiding is robust to perturbations such as those on the evolution path and coupling coefficients. How to incorporate topological protection to make the braiding operations even more robust is a valuable future goal.
The proposed versatile photonic platform is expected to reveal more non-Abelian physics related to the multi-mode Berry phase, and its capability of generating a variety of unitary matrices may lead to a new generation of non-Abelian photonic devices for unprecedented light and photon manipulations.
\section{Introduction} We are at the cusp of an unprecedented revolution led by thousands of low earth orbit (LEO) satellites (SATs) in space. SpaceX has already launched more than 1,500 Starlink LEO SATs \cite{SAT_Survey1, SAT_UCL_1, Starlink}, covering 12 countries on 3 different continents \cite{SAT_Industry_Comparison}. Besides, the Federal Communications Commission (FCC) has recently authorized Amazon's Project Kuiper to launch half of its 3,236 LEO SATs by 2026 and the rest by 2029 \cite{FCC_Kuiper}. The ramifications of this LEO SAT mega-constellation are not limited to advancing conventional SAT applications such as high-precision earth observation \cite{SAT_Survey5}, but are also envisaged to open up a new type of wireless connectivity, i.e., non-terrestrial networks (NTNs) in beyond-fifth-generation (5G) or 6G cellular systems \cite{SAT_Survey6}. While each SAT plays the role of a mobile base station (BS), a mega-constellation of LEO SATs has great potential for provisioning fast and reliable connectivity to ground users anywhere on the globe, including oceans, rural areas, and disaster sites \cite{UAV_Survey1, UAV_Survey2, UAV_Survey3}. Notwithstanding, each SAT BS has an extremely wide coverage (e.g., $160 - 1000$ [km] inter-SAT BS distance \cite{SAT_Survey2}), disallowing sophisticated central coordination among SAT BSs and users in real time under limited communication and computing resources. Furthermore, as opposed to fixed terrestrial BSs, SAT BSs move, requiring location-specific resource management. Unfortunately, existing model-based and standardized protocols, such as carrier-sense multiple access (CSMA) based WiFi protocols and random access channel (RACH) based cellular protocols, cannot flexibly optimize their operations without incurring severe protocol fragmentation, not to mention the significant effort required to standardize protocols for a myriad of possible scenarios. To overcome the aforementioned fundamental challenges in LEO SAT networks, in this article we develop a novel model-free random access (RA) protocol that naturally emerges from a given local SAT random access environment, dubbed \emph{emergent random access channel protocol (eRACH)}. In eRACH, the emergence of the RA protocol is induced by deep reinforcement learning (DRL) at the ground user agents utilizing locally observable information, requiring neither inter-agent communication nor centralized training. Consequently, eRACH jointly takes into account SAT associations and RA collision avoidance in a given local environment, thereby achieving low RA latency and high downstream communication efficiency after eRACH. The fundamentals of eRACH are laid by \emph{protocol learning} via multi-agent DRL (MADRL). Several recent works have underpinned the effectiveness of protocol learning \cite{MARL_Foerster1, Semantic_Hoydis1, Semantic_Hoydis2} in that the learned protocols can adapt to various environments while achieving higher communication efficiency than model-based protocols. However, these benefits come at the cost of additional inter-agent communication for MADRL, which is non-negligible under non-stationary network topologies, questioning the feasibility of protocol learning for LEO SAT networks. In fact, the MADRL agents in \cite{MARL_Foerster1, Semantic_Hoydis1, Semantic_Hoydis2} learn to communicate by exchanging the local state information of each agent using a few bits, where the information exchange between agents is often referred to as \emph{cheap talk}.
Such cheap talks may no longer be cheap under the non-stationary topology of an LEO SAT network, which may require a large amount of local state information for RA and SAT BS associations. In this respect, our proposed eRACH is designed based on locally observable information. We extensively test which of the locally observable information candidates are essential for eRACH training. Surprisingly, it is shown that eRACH does not even require cheap talks. Instead, eRACH exploits \textit{1)} the expected SAT location, which is known a priori owing to the pre-determined SAT orbiting pattern, and \textit{2)} the collision events, which are inherently observable without additional cost. While training eRACH, we have validated that the expected SAT location contains sufficient information on the network topology, as long as the variance between the expected and actual SAT locations is less than 1~[km]. Furthermore, we have realized that the collision events contain sufficient information on how crowded each SAT BS is. Given \textit{1)} and \textit{2)}, thanks to the periodic orbiting pattern of LEO SATs, the MADRL problem frequently revisits an almost identical environment, which makes it possible to discover an optimal protocol carrying out SAT BS association and RA decisions. We summarize our contributions in this paper as follows. \begin{itemize} \item We propose a novel emergent RA protocol for a ground-to-LEO SAT communication network, dubbed eRACH. To the best of our knowledge, this is the first work of its kind to show that a new protocol enabling collaboration among multiple agents can emerge by utilizing only locally observable information, which is especially suited to LEO SAT networks. This is done by developing an actor-critic-based neural network architecture and a fully distributed MADRL framework without any inter-agent communication (see \textbf{Algorithm~\ref{Algorithm}} in Sec.~\ref{Body}). \item To provide an upper-bound performance, we additionally introduce cheap-talk inter-agent communication into eRACH, and propose a variant of eRACH, termed eRACH-Coop (see Sec.~\ref{SimulationSetting}). \item Numerical results corroborate that while eRACH-Coop achieves the highest average network throughput with the lowest collision rate at the cost of the cheap-talk overhead, eRACH still achieves up to $6.02$x and $54.6 \%$ higher average network throughput with $10$x and $2$x lower average RA latency than slotted ALOHA and the conventional cellular RACH, respectively (see \textbf{Table \ref{Table_Proposed}} in Sec.~\ref{Numerical Result}). \item Furthermore, the distributed operations and the limited local information of eRACH inherently promote equal RA opportunities for all users, yielding 23.6\% higher Jain's fairness than eRACH-Coop (see \textbf{Table \ref{Table_Proposed}} in Sec.~\ref{Numerical Result}). \end{itemize} The remainder of this article is organized as follows. In Sec.~\ref{Background}, we first summarize the RA protocols for traditional stationary SAT networks and for emerging non-stationary SAT networks. In Sec.~\ref{System Model}, the network model, system scenario, and performance metrics are presented. Then, the emergent contention-based RA for LEO SAT networks, called eRACH, is proposed and addressed by our multi-agent Actor-Critic based algorithm in Sec.~\ref{Body}. In Sec.~\ref{Numerical Result}, simulation results are provided, followed by concluding remarks in Sec.~\ref{conclusion}.
\textit{Notation:} Throughout this paper, we use normal-face font to denote scalars and boldface font to denote vectors. We use $\mathbb{R}^{D\times 1}$ to represent the $D$-dimensional space of real-valued vectors. We use $\|\cdot\|$ to denote the $L^2$-norm, i.e., the Euclidean norm, and $(\cdot)^{\dag}$ to represent the conjugate transpose. $\nabla_{\mathbf{x}} f(\mathbf{x})$ denotes the gradient vector of the function $f(\mathbf{x})$, i.e., its components are the partial derivatives of $f(\mathbf{x})$. $\mathbf{I}_{N}$ is the identity matrix of size $N$. \begin{figure} \centering \includegraphics[width=1\columnwidth]{Fig_Illustration_SATAccess} \caption{An illustration of random access in LEO satellite networks.} \label{Illustration} \end{figure} \section{Related Works and Backgrounds}\label{Background} \subsection{Random Access for Satellite Networks} In traditional SAT networks, most access protocols have been developed for geosynchronous equatorial orbit (GEO) SATs, also known as geostationary SATs \cite{Survey_RA+SAT}. The slotted ALOHA scheme is widely used for GEO SAT network systems owing to its simple implementation. Various types of slotted ALOHA have been further proposed, taking into account the characteristics of GEO SATs, including the large propagation latency \cite{ALOHA_SAT1, ALOHA_SAT2}. However, these ALOHA-based access protocols fail to support many user terminals due to a lack of collision control. When designing multiple access protocols for SAT networks, the long round-trip time (RTT) latency limits the adoption of conventional access techniques, such as CSMA/CA. Such long RTT latency is induced by the long one-way propagation delay in GEO SAT networks (approximately $120$ [ms]) and in LEO SAT networks (approximately $1$ - $5$ [ms]). Particularly for GEO SAT networks, centralized reservation schemes, which avoid failed access attempts prior to packet transmission, are employed to cope with such long RTT latency \cite{Survey_RA+SAT}. However, for LEO SAT networks, which connect a myriad of scattered terrestrial user terminals, such a coordinated method requires significant system overhead. In contrast to stationary GEO SAT networks, emerging LEO SAT networks, wherein numerous LEO SATs orbit rapidly at a speed of around $7.6$ [km/s] (see Fig. \ref{Illustration}), are dynamic and require frequent and fast handovers \cite{3GPP_NTN, P4C_JH}, calling for a channel access scheme that considers not only the long RTT latency but also the inherent dynamics of LEO SAT networks. In this regard, the contention-based random access channel (RACH), adopted in cellular networks (e.g., 5G NR, 4G LTE/LTE-A), is also considered for LEO SAT networks \cite{3GPP_NR_RACH, RA_SHW}. The contention-based RACH procedure is used to initialize and synchronize prior to allocating resources in the downlink (DL) and uplink (UL) channels. In the RACH, a sender randomly transmits short packet preambles to the receiver and then waits for a positive response from the receiver prior to transmitting the complete message. Particularly, the RACH protocol is initiated by the transmission of a randomly chosen preamble through the physical RACH (PRACH). The basic RACH is designed to follow a four-step message exchange procedure, that is, \textit{1)} PRACH transmission, \textit{2)} random access response (RAR), \textit{3)} radio resource control (RRC) connection request, and \textit{4)} RRC connection setup (and contention resolution).
By means of the four-step procedure and the use of Zadoff-Chu (ZC) sequences, RACH is able to control collision events while many users attempt access. Despite this, RACH is not ideally suited to LEO SAT networks, as it does not take SAT associations into account \cite{3GPP_NR_NTN_SSB, 3GPP_NR_NTN_TA}. As such, existing access protocols have limitations for non-stationary LEO SAT networks. To design an LEO SAT-oriented access protocol, we need to consider the following two conditions: \textit{1)} collisions need to be managed efficiently without centralized access control; and \textit{2)} SAT associations and backoff mechanisms need to be jointly optimized to achieve low access delay and high spectral efficiency. The current standardized protocols and archetypical model-based algorithms face several challenges here, mainly due to the non-stationarity of the LEO SAT network topology. Recently, model-free protocols have been studied in the context of emergent communication, using model-free DRL algorithms, e.g., Actor-Critic \cite{DRL_A3C}, $Q$-learning \cite{DRL_DQN}, and deep deterministic policy gradient (DDPG) \cite{DRL_DDPG}, which can be an alternative solution for the time-varying network topology. \subsection{Emergence of Protocol through MADRL} Pioneered by \cite{MARL_Foerster1, MARL_Foerster2}, the topic of emergent communication first arose in the deep learning literature. In \cite{MARL_Foerster1}, Foerster et al. studied how languages emerge and how to teach natural languages to artificial agents. Spurred by this trend of emergent communication, Hoydis et al. adopted the idea of emergent communication in cellular systems in \cite{Semantic_Hoydis1, Semantic_Hoydis2}. Here, MAC signaling is interpreted as a language spoken by the user equipment (UE) and the base station (BS) to coordinate while pursuing the goal of delivering traffic across a network. Therein, two distinct policies (i.e., a channel access policy and a signaling policy) are trained using off-the-shelf learning-to-communicate (L2C) techniques (e.g., DIAL and RIAL in \cite{MARL_Foerster1}). This approach has shown the potential of emergent protocol design in their particular scenario. Despite the novelty of \cite{Semantic_Hoydis1, Semantic_Hoydis2}, the superior performance of the learned protocol largely depends on the learning of cheap talk exchanged between network nodes, which incurs additional communication and learning overhead. Moreover, the proposed method assumes centralized training that requires each network node to obtain fully observable information, which is an obstacle to actual system deployment. Besides, in these works, the protocol was optimized only for stationary networks. Thus, it is still questionable whether a protocol for non-stationary networks can emerge without relying too heavily on information exchange between agents. \input{SECTION_3.tex} \input{SECTION_4.tex} \input{SECTION_5.tex} \section{Conclusion} \label{conclusion} In this article, we proposed a novel RA protocol for LEO SAT networks. To cope with the challenges incurred by their wide coverage and time-varying network topology, we proposed a model-free RA protocol that emerges from the given LEO SAT network environment via MADRL, dubbed eRACH. By simulations, we validated that eRACH better reflects the time-varying network topology than the model-based ALOHA and RACH baselines, thereby achieving higher average network throughput with lower collision rates.
Furthermore, eRACH is robust to SAT BS positioning errors, enabling its operation with known periodic patterns of SAT BS locations. Lastly, eRACH can flexibly adjust and optimize throughput-collision objectives in various user population scenarios. Extending the current throughput and collision objectives by considering a fairness-aware objective could be an interesting topic for future research. It is also worth investigating highly scalable MADRL frameworks to address complicated scenarios with more orbital lanes and users at different altitudes, such as high altitude platform systems (HAPS) and other GEO and MEO SATs. \bibliographystyle{IEEEtran} \section{System Model} \label{System Model} \subsection{Network and Channel Models} Consider a set $\mathcal{K}$ of orbital planes around the earth, sets $\mathcal{I}_k$ of LEO SATs orbiting on orbital plane $k$ for all $k\in \mathcal{K}$, and a set $\mathcal{J}$ of SAT user terminals (UTs) deployed on the ground inside an area $A$. The position of UT $j \in\mathcal{J}$ is expressed as a 3-dimensional real vector in Cartesian coordinates denoted by $\vb*{q}_{j} = (q^x_{j},q_{j}^{y},q_{j}^{z})\in \mathbb{R}^3$, and similarly, the position and velocity of SAT $i \in \bigcup_{k\in\mathcal{K}}\mathcal{I}_k$ at time $t\geq 0$ are denoted by $\vb*{q}_{i}(t) = (q_{i}^{x}(t),q_{i}^{y}(t),q_{i}^{z}(t)) \in \mathbb{R}^{3}$ and $\vb*{v}_i(t) = (v_{i}^{x}(t),v_{i}^{y}(t),v_{i}^{z}(t)) \in \mathbb{R}^3$, respectively. Suppose that the number of SATs on each orbital plane is the same, given as $|\mathcal{I}_k| = I$ for all $k\in\mathcal{K}$, and assume that all SATs move in uniform circular motion with the same orbital period $T$, while the arc length between any two neighboring SATs on the same orbital plane is the same. Consider that time is discretized into slots of length $\tau$ and let $\vb*{q}_{i}(0)$ be the initial position of SAT $i \in \bigcup_{k \in \mathcal{K}}\mathcal{I}_k$ at time $t = 0$. Then, following the discrete-time state-space model \cite{UAV_SCA_YZ, P4C_JH}, the position of SAT $i$ at time $t = m\tau$ can be expressed as \begin{align}\label{C_LEO_q} \vb*{q}_{i}(m \tau) \approx \vb*{q}_{i}(0) + \tau \sum_{m' = 1}^{m}\vb*{v}_i(m'\tau) + \vb*{n}_i, \end{align} where $\vb*{n}_i$ is additive random noise representing the perturbation of the $i$-th satellite position and the attitude determination error \cite{SAT_PositionError}, whose entries are independent and identically distributed zero-mean Gaussian random variables with $\mathbb{E}[\vb*{n}_i \vb*{n}_i^{\dag}] = \sigma_{i}^{2} \mathbf{I}_{3}$.
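As a concrete illustration of the state-space model in Eq.~(\ref{C_LEO_q}), the following Python sketch propagates a SAT position over discrete time slots with the Gaussian perturbation term. It is our own illustration; the circular-orbit parameterization and all numerical values are assumptions chosen for demonstration:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sat_positions(q0, v_of_t, tau, M, sigma):
    """Discrete-time positions following the state-space model:
    q(m*tau) ~= q(0) + tau * sum_{m'=1}^{m} v(m'*tau) + n_i,
    with perturbation n_i ~ N(0, sigma^2 I_3)."""
    q = np.empty((M + 1, 3))
    q[0] = q0
    acc = np.zeros(3)
    for m in range(1, M + 1):
        acc += v_of_t(m * tau)
        q[m] = q0 + tau * acc + rng.normal(0.0, sigma, 3)
    return q

# example: uniform circular motion in an equatorial orbital plane
R, T = 6.921e6, 5.7e3   # assumed orbit radius [m] and period [s]
omega = 2.0 * np.pi / T
v = lambda t: R * omega * np.array([-np.sin(omega * t),
                                    np.cos(omega * t), 0.0])
q = sat_positions(np.array([R, 0.0, 0.0]), v, tau=1.0, M=60, sigma=100.0)
\end{verbatim}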
Communication channels between SATs and UTs follow the characteristics of the ground-to-space (or space-to-ground) RF link \cite{SAT_Ch1}. The channel gain between SAT $i \in \bigcup_{k \in \mathcal{K}}\mathcal{I}_k$ and UT $j\in\mathcal{J}$ at time $t$ is expressed as \begin{equation}\label{channel_RF} h_{i,j}(t) = \tilde{h}_{i,j}(t) \sqrt{\gamma_{i,j}(t)}, \end{equation} where $\gamma_{i,j}(t)$ and $\tilde{h}_{i,j}(t)$ are the effects of large-scale fading (e.g., path loss and shadowing) and small-scale fading at time $t$, respectively, and $\mathbb{E}[|\tilde{h}_{i,j}(t)|^2]= 1$ for all $t\geq 0$. The large-scale fading is modeled by the tractable line-of-sight (LoS) probability model \cite{SAT_Ch1, UAV_Ch_YZ} with shadowing and blockage effects. In this model, the large-scale fading follows a generalized Bernoulli distribution over two events: the channel is LoS or non-LoS (NLoS) with a certain probability. The probability that the channel between SAT $i$ and UT $j$ at time $t$ is LoS is modeled as \begin{align}\label{eq:LoSprobability} \varphi^{\text{LoS}}_{i,j}(t) = \frac{1}{1+L_{1}\mathrm{exp}[-L_{2} (\theta_{i,j}(t) - L_{1})]}, \end{align} where $L_{1}$ and $L_{2}$ are environmental parameters depending on the propagation condition \cite{UAV_Rotary_YZ} and \begin{align}\label{eq:elevation_angle} \theta_{i,j}(t) = \frac{180}{\pi}\mathrm{sin}^{-1} \!\left(\frac{q_{i}^{z}(t) - q_{j}^{z}}{||\vb*{q}_i(t)-\vb*{q}_j||_2}\!\right) \end{align} is the elevation angle between SAT $i$ and UT $j$. Meanwhile, depending on whether the channel is LoS or NLoS, the large-scale fading $\gamma_{i,j}(t)$ is expressed as \begin{equation} \gamma_{i,j}(t) = \begin{cases} \beta_{o}||\vb*{q}_{i}(t) - \vb*{q}_{j}||_2^{-\alpha}, & \text{for LoS channel},\\ \kappa \beta_{o}||\vb*{q}_{i}(t) - \vb*{q}_{j}||_2^{-\alpha}, & \text{for NLoS channel},\\ \end{cases} \end{equation} where $\beta_{o}$ is the average power gain at the reference distance $d_o=1$ [m], $\alpha$ is the path loss exponent, and $\kappa$ is the attenuation scaling factor of the NLoS link \cite{UAV_Ch_YZ}. Note that $h_{i,j}(t)$ is random due to both the random occurrence of LoS and NLoS events and the small-scale fading. Accordingly, the expected channel gain over both sources of randomness is given by \begin{equation} \mathbb{E}\!\left[|h_{i,j}(t)|^2 \!\right] = \tilde{\varphi}^{\text{LoS}}_{i,j}(t)\beta_{o}||\vb*{q}_{i}(t) - \vb*{q}_{j}||_2^{-\alpha}, \end{equation} where $\tilde{\varphi}^{\text{LoS}}_{i,j}(t) = \varphi^{\text{LoS}}_{i,j}(t)+\kappa(1-\varphi^{\text{LoS}}_{i,j}(t))$.
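As a sanity check of the channel statistics above, a short Python sketch evaluates the elevation angle \eqref{eq:elevation_angle}, the LoS probability \eqref{eq:LoSprobability}, and the expected channel gain. The numerical parameters ($L_1$, $L_2$, $\beta_o$, $\alpha$, $\kappa$) are illustrative placeholders, not the values used in our evaluations.
\begin{verbatim}
import numpy as np

def expected_channel_gain(q_sat, q_ut, L1, L2, beta0, alpha, kappa):
    """E[|h_{i,j}(t)|^2] = phi_tilde * beta0 * d^(-alpha), where
    phi_tilde = p_LoS + kappa * (1 - p_LoS)."""
    d = np.linalg.norm(q_sat - q_ut)                         # SAT-UT distance [m]
    theta = np.degrees(np.arcsin((q_sat[2] - q_ut[2]) / d))  # elevation [deg]
    p_los = 1.0 / (1.0 + L1 * np.exp(-L2 * (theta - L1)))    # LoS probability
    phi = p_los + kappa * (1.0 - p_los)                      # LoS/NLoS mixture
    return phi * beta0 * d ** (-alpha)

# example: SAT directly overhead at 550 km (placeholder parameters)
g = expected_channel_gain(np.array([0.0, 0.0, 550e3]), np.zeros(3),
                          L1=9.6, L2=0.16, beta0=1e-4, alpha=2.0, kappa=0.2)
\end{verbatim}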
\begin{figure} \centering \includegraphics[width=1\columnwidth]{Figure_Illustration_FrameStructure} \caption{Time and frequency resource structure of the considered random access scenario.} \label{Illustration_Frame} \vspace{.5em} \end{figure} \subsection{LEO SAT Random Access Scenario} \label{Sec2_RA} Consider an LEO satellite random access (RA) scenario in which UTs attempt to access the LEO satellite network to be granted radio resources. Suppose that UTs always have data to transmit in their infinite-length queues, so that they intend to perform RA at every opportunity they have. Each UT has information on the periodic positions of the SATs on each orbital plane and attempts to access only the closest SAT on each orbital plane. For the sake of convenience, we assume that the closest SAT on each orbital plane is the same for every UT at any given time. Under this network model, note that the time duration during which one SAT remains the closest among all SATs on its orbital plane is $\frac{T}{I}$, where $T$ is the orbital period and $I$ is the number of SATs per orbital plane. Suppose that there are $N$ RA opportunities during the interval $\frac{T}{I}$ and that they are synchronized over all orbital planes, as shown in Fig.~\ref{Illustration_Frame}. The time duration of each RA opportunity is $\frac{T}{IN}$, which comprises the RA signaling duration $\tau_s$ and the data transmission duration $\tau_d$, i.e., $\tau_s+\tau_d = \frac{T}{IN}$, for some $\tau_s$ and $\tau_d$ such that ${\tau_s}/{\tau},{\tau_d}/{\tau} \in \mathbb{Z}^+$. For ease of exposition, we focus only on the $N$ RA opportunities in the rest of this section and suppose the first opportunity starts at $t = 0$; the $n$-th RA opportunity is indexed by the discretization $t = n \tau$ for all $n\in \{1,2,\dots,N\}$. At each RA opportunity, each UT chooses whether to access or back off, and further decides which SAT it will access. This set of actions is denoted by $\lbrace \texttt{BACKOFF},1, \ldots, K \rbrace$, in which $K:=|\mathcal{K}|$ is the number of orbital planes. The RA action of UT $j\in\mathcal{J}$ at RA opportunity $n$ is denoted by \begin{equation} a_{j}[n] \in \lbrace \texttt{BACKOFF},1, \ldots, K \rbrace. \label{AccessAction} \end{equation} Note that $a_{j}[n] = \texttt{BACKOFF}$ means that UT $j$ does not access at the $n$-th opportunity and waits for the next one. Moreover, UTs that attempt to access additionally choose a preamble uniformly at random, that is, \begin{equation} p_{j}[n] \in \lbrace 1,2,\dots,P \rbrace. \end{equation} Here, each of the $P$ preambles is associated with a resource that the SAT can grant during the data transmission duration. The RA signaling is done in two steps. First, the UTs that decided to access send preambles to the corresponding SATs. Second, the SATs send feedback confirming whether a collision occurred for each chosen preamble. UTs that chose collided preambles fail to access, while UTs that chose collision-free preambles succeed. \subsection{Collision Rate, Access Delay, and Network Throughput} The performance of RA in the LEO SAT network is evaluated in terms of collision rate, access delay, and average network throughput, as explained in the following. \subsubsection{Collision Rate} Denote the collision indicator of UT $j\in\mathcal{J}$'s RA at opportunity $n \in \{1,\dots,N\}$ by $c_{j}[n]$, and define it as \begin{align} c_{j}[n] \! = \!\begin{cases} 0, & (a_{j}[n],p_{j}[n])\! \neq\! (a_{j^{'}}[n],p_{j'}[n])\ \forall j' \! \in\! \mathcal{J}\backslash j\\ 1, & \text{otherwise}, \end{cases}\label{Collision} \end{align} if $a_j[n] \neq \texttt{BACKOFF}$; otherwise $c_j[n] = 0$. Then, the \emph{collision rate} is defined as: \begin{align} C = \frac{1}{|\mathcal{J}|} \sum_{n=1}^{N}\sum_{j \in \mathcal{J}}{c_{j}[n]}. \end{align} \subsubsection{Access Delay} Define the access indicator of UT $j\in\mathcal{J}$'s RA at opportunity $n \in \{1,\dots,N\}$ as \begin{align} \eta_{j}[n] = \begin{cases} (1 - c_{j}[n]), & a_{j}[n] \neq \texttt{BACKOFF} \\ 0, & a_{j}[n] = \texttt{BACKOFF} \end{cases}. \label{Access} \end{align} Let $N_{a}$ be the average number of successful accesses per UT out of the $N$ access attempts, given as \begin{align} N_{a} = \frac{1}{|\mathcal{J}|}\sum_{n=1}^{N}\sum_{j\in\mathcal{J}}\eta_j[n]. \end{align} Then, with $m_s = \tau_s/\tau$ and $m_d = \tau_d/\tau$, the \emph{average access delay} is given as: \begin{align} D = \frac{(N-N_{a}) (m_s+m_d)\tau}{N_{a}} + m_s\tau. \label{AccessDelay} \end{align} \subsubsection{Network Throughput} According to the channel model above, the uplink (UL) throughput from UT $j\in\mathcal{J}$ to SAT $i\in \bigcup_{k\in\mathcal{K}} \mathcal{I}_k$ can be expressed as \begin{align} R_{i,j}(t) = B \log_2\left( 1+ \frac{ \Gamma \left|h_{i,j}(t)\right|^{2}}{\sigma_{n}^2} \right) \mathrm{[bps]}, \end{align} where $B$ represents the bandwidth, $\sigma_{n}^2$ is the noise variance, and $\Gamma$ is the UL transmission power. Note that a UT can transmit data only when its access is successful.
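Before turning to throughput, the collision and delay bookkeeping above can be summarized in a short Python sketch; the array layout and names are our own illustrative choices.
\begin{verbatim}
import numpy as np

def ra_metrics(actions, preambles, m_s, m_d, tau):
    """Collision indicators c_j[n], collision rate C, and average access
    delay D per Eqs. (Collision)-(AccessDelay). actions: (N, J) ints with
    0 = BACKOFF and 1..K an orbital-plane choice; preambles: (N, J) ints."""
    N, J = actions.shape
    c = np.zeros((N, J), dtype=int)
    for n in range(N):
        for j in range(J):
            if actions[n, j] == 0:
                continue                 # backoff: no collision by definition
            same = ((actions[n] == actions[n, j])
                    & (preambles[n] == preambles[n, j]))
            c[n, j] = int(same.sum() > 1)  # same (SAT, preamble) chosen twice
    eta = (actions != 0) & (c == 0)      # access indicator eta_j[n]
    C = c.sum() / J                      # collision rate
    N_a = eta.sum() / J                  # successful accesses per UT
    D = (N - N_a) * (m_s + m_d) * tau / N_a + m_s * tau
    return c, C, D
\end{verbatim}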
When $a_j[n] \neq \texttt{BACKOFF}$, the attainable throughput of UT $j \in \mathcal{J}$ in the $n$-th RA opportunity can therefore be expressed as \begin{align} R_{j}[n] = \eta_j[n] \tau_d \sum_{t = (n-1)(\tau_s+\tau_d) + \tau_s}^{n(\tau_s+\tau_d)} R_{i_{a_j[n]},j}(t), \label{AchievableRate} \end{align} where $i_{k} \in \mathcal{I}_{k}$ denotes the closest SAT to UT $j$ among all SATs on the orbital plane $k\in\mathcal{K}$. Otherwise, i.e., if $a_j[n] = \texttt{BACKOFF}$, $R_{j}[n] = 0$. Consequently, the time-average network throughput $R$ within $N$ RA opportunities is given as \begin{align} R = \frac{1}{N}\sum_{n = 1}^{N}\sum_{j\in\mathcal{J}}R_j[n]. \end{align} In what follows, under the LEO SAT RA scenario and with the performance metrics of collision rate, access delay, and network throughput, we propose an emergent RA protocol for LEO SAT networks. \section{Emergent Random Access Protocol for \\LEO Satellite Networks} \label{Body} This section introduces the emergent contention-based RA protocol for LEO SAT networks and explains step by step how the protocol is designed. \subsection{Emergent RA Protocol for LEO SAT} We propose a novel RA solution for LEO SAT networks, coined \emph{emergent random access channel protocol (eRACH)}. As illustrated in Fig. \ref{Illustration}, a UT on the ground attempts to access an SAT by contention-based RA and then transmits a data frame only when it successfully accesses the intended LEO SAT. To concretely discuss and evaluate the proposed eRACH protocol, we formulate the following optimization problem, which aims to maximize throughput while minimizing the collision rate under constraints reflecting the practical conditions of LEO SAT networks: \begin{eqnarray} &\displaystyle \max_{ \scriptsize \begin{array}{c} \scriptsize a_{j}[n] \end{array} } & \sum_{n=1}^{N} R_{j}[n] - \rho c_{j}[n] , \ \forall j\in\mathcal{J}, \label{P1} \label{Eq:OptObj} \\ &\textrm{s.t.} & \eqref{C_LEO_q}, \ \eqref{Collision}, \nonumber \end{eqnarray} where $\rho$ denotes a normalization coefficient that strikes a balance between throughput and collision rate, and the horizon $N$ spans one orbital cycle of the SAT. Here, the constraint in \eqref{C_LEO_q} represents the orbital motion of the LEO SAT constellation, while the constraint in \eqref{Collision} models collisions in the LEO SAT network. Recall that a collision occurs when multiple UT agents attempt to access the same LEO SAT $i_{k}$ with the same preamble signature at the same time slot. In a nutshell, eRACH considers the following steps: \begin{enumerate} \item Association decision for SAT-UT, \item Backoff decision, \item RACH access (PRACH preamble $p$ is chosen uniformly at random), and \item UL data transmission (only when access succeeds), \end{enumerate} during which eRACH determines 1) and 2), while the rest are automatically determined according to the established protocol described in Sec. \ref{Sec2_RA}. As 3) and 4) follow the standard operations, the problem of interest focuses on the joint decision of 1) and 2). However, optimizing 1) and 2) for \eqref{P1} with traditional convex optimization methods (e.g., SCA \cite{P2J_JH}, BCD \cite{P3C_JH}) faces several challenges, mainly due to the time-varying LEO SAT network topology. To this end, we employ an MADRL algorithm in the eRACH protocol. While training the MADRL agents, we use only the locally observable information of each UT in the LEO SAT network.
Further, considering the specificity of LEO SAT networks, we carefully select the essential inputs by extensively testing candidates among the locally observable information, so as to minimize complexity while retaining near-optimality. To apply the MADRL method, first and foremost, it is necessary to design a Markov decision process (MDP) model, which includes a state, an action, and a reward function in an environment reflecting our network scenario, as discussed next. \subsection{MADRL Formulation for Emergent RA Protocol}\label{Sec4B} In what follows, we first reformulate the problem \eqref{P1} of eRACH using an MDP model, and then describe how to solve this problem via MADRL. \subsubsection{MDP Modeling} To recast \eqref{P1} as an MADRL problem, we model the SAT network scenario using an MDP, a discrete-time stochastic control process that mathematically characterizes decision-making in a random environment. In the MDP model, for a given state, a policy $\pi$ refers to the probability of choosing an action, and $\pi^{*}$ is the optimal policy maximizing the long-term average reward, which is the goal of MADRL. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{Figure_Illustration_Network.pdf} \caption{An illustration of the proposed eRACH training process based on the multi-agent Actor-Critic method.} \label{Figure_Illustration_Network} \end{figure} \textbf{Environment}.\quad As illustrated in Fig. \ref{Figure_Illustration_Network}, the MADRL framework under study consists of multiple UTs interacting with an environment that follows an MDP model. At each time step, each UT $j$ is an agent observing a state $\mathbf{s}_{j}[n]\in\mathcal{S}$ and taking an action $\mathbf{a}_{j}[n]\in\mathcal{A}$ based on a state-action policy $\pi$. Given this action, the state of agent $j$ transitions from $\mathbf{s}_{j}[n]$ to $\mathbf{s}_{j}[n+1]$, and the agent in return receives a reward $r_{j}[n]$ that drives it toward an optimal policy $\pi^*$. How these states, actions, and rewards are defined significantly affects the performance, communication, and computing costs of eRACH, as we elaborate next. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{Figure_Result/Figure_EffectiveState_2.pdf} \caption{Comparison of cumulative rewards with different information included in the state $s_j[n]$.} \label{fig_StateReasoning} \vspace{.2em} \end{figure} \textbf{State}.\quad In the aforementioned MDP model, we consider the following state of UT $j$: \begin{align} \vb*{s}_{j}[n] &= \lbrace n, \vb*{q}_{i_{1}}[n],\dots,\vb*{q}_{i_{K}}[n], R_{j}[n], c_{j}[n], a_{j}[n\!-\!1] \rbrace, \label{State_D} \end{align} where $\vb*{q}_{i_{k}}[n] \in\mathbb{R}^{3}$, $k\in \mathcal{K}$, denotes the position of the closest SAT $i_k$ on the orbital plane $k$, $R_{j}[n]$ is the throughput, $c_{j}[n]$ is a binary indicator of an RA collision event, and $a_j[0]$ and $\vb*{s}_{j}[1]$ are initial values chosen randomly at the beginning of Algorithm \ref{Algorithm}. Here, the previous action $a_{j}[n-1]$ and the current time slot $n$ are used as a \emph{fingerprint}, adopting the fingerprint-based method proposed in \cite{MARL_Stabilising} to stabilize experience replay in the MADRL. Note that the non-stationarity of the multi-agent environment is handled by this fingerprinting, which disambiguates the age of training samples and stabilizes the replay memory.
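For concreteness, a minimal sketch of how the state \eqref{State_D} could be flattened into a network input is given below. Whether all $I$ SAT positions per plane or only the closest one enters the state is a design choice; the sketch uses the closest SAT per plane, and all names are our own.
\begin{verbatim}
import numpy as np

def build_state(n, sat_positions, R_prev, c_prev, a_prev):
    """Flatten s_j[n] = {n, q_{i_1}[n], ..., q_{i_K}[n],
    R_j[n], c_j[n], a_j[n-1]} into one vector.
    sat_positions: (K, 3) positions of the closest SAT per orbital
    plane, known a priori from the periodic orbiting pattern."""
    return np.concatenate([
        [n],                                 # slot index (fingerprint)
        np.asarray(sat_positions).ravel(),   # q_{i_1}[n], ..., q_{i_K}[n]
        [R_prev, c_prev, a_prev],            # local observations + fingerprint
    ]).astype(np.float32)

# hypothetical example with K = 2 orbital planes
s = build_state(n=7, sat_positions=np.random.randn(2, 3),
                R_prev=1.2e6, c_prev=0, a_prev=1)
\end{verbatim}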
The state of an agent should contain sufficient information for carrying out decision-making. Meanwhile, this information should be minimal, both to reduce additional data collection overhead from the environment or other agents and to promote MADRL training convergence. To screen the important information to be included in the state among the locally observable information, we extensively test possible local information candidates and their impacts on eRACH performance. In this regard, we provide an ablation study in Fig. \ref{fig_StateReasoning}, which presents the contribution of each piece of information to the overall system. Each entry of $s_j[n]$ has a different importance. As shown in this figure, eRACH mainly exploits the following information: \textit{1)} the expected SAT locations, known a priori owing to the pre-determined LEO SAT orbiting pattern; and \textit{2)} the collision events, which are inherently observable. These two pieces of locally observable information contribute significantly to the training of eRACH, highlighting that eRACH does not even require cheap talk between agents. While training eRACH, we validate that the expected SAT location contains sufficient information on the network topology, as long as the variance between the expected and actual SAT positions is less than $1$~[km], as shown in Fig. \ref{fig_PositionError}. Moreover, it is confirmed that the collision event incorporates sufficient information on the local environment, e.g., how crowded the network is, as will be elaborated in Sec.~\ref{Numerical Result}. As the LEO SATs periodically orbit along pre-designed trajectories, MADRL frequently revisits the same SAT-UT RA environment, which facilitates the discovery of emergent protocols that make SAT-UT association and RA decisions without communication overhead between agents. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{Figure_Result/Figure_PositionError.pdf} \caption{Robustness to SAT BS positioning errors, where the shaded regions identify three different regimes: (a) same reward and convergence speed, (b) same reward, slower convergence, and (c) lower reward, slower convergence. Specific reward values and convergence times are reported in Table~\ref{Table_PositionError}.} \label{fig_PositionError} \vspace{.2em} \end{figure} \textbf{Action}.\quad The action space $\mathcal{A}$ in our environment is related to RA. Among the SATs $i_{k}$ in orbital plane~$k$, the agent UT $j$ chooses one SAT to access by using the access action $a_{j}[n]$ in \eqref{AccessAction}. For the set of access actions $\mathcal{A}$, we define a one-hot encoded action as follows \begin{equation} \mathbf{a}_j[n]=\{a_0,a_1,a_2,\cdots,a_K\}, \ \mathrm{s.t.} \sum_{\ell=0}^K a_\ell=1, \end{equation} where $a_\ell$ for $\ell\neq 0$ denotes the association with the $\ell$-th orbital plane, and $a_0$ implies backoff. Note that the UT is assumed to attempt to access the nearest SAT among the SATs $\mathcal{I}_k$ in orbital plane~$k$. \textbf{Reward}.\quad Our reward function should encourage UTs to take access actions that maximize the network throughput while penalizing access collisions. The objective function \eqref{Eq:OptObj} captures this goal, but it is not directly suitable as a reward function. Indeed, if there is no collision, \eqref{Eq:OptObj} gives a positive throughput without penalty, and otherwise imposes only a collision penalty with zero throughput. This incurs large and frequent reward variations, hindering training convergence.
To mitigate this issue, following \cite{DRL_DQN, DRL_Normalization}, we normalize \eqref{Eq:OptObj} and consider the reward function of UT $j$ as follows \begin{align} r_{j}[n] = g(R_{j}[n] - \rho c_{j}[n]). \label{Reward_D} \end{align} Here, the normalization function is given by $g(Y) = \frac{Y - \mu}{\Sigma}$, where $\Sigma$ is a scale parameter that shrinks the output value into $[-1, 1]$ and $\mu$ is a mean parameter; the mean value corresponds to the network throughput of the RACH baseline in Sec.~\ref{Numerical Result}. It is worth noting that the state and reward of each UT agent consist entirely of locally observable information, and thus the MDP model for the RA protocol can be trained and executed in a fully \emph{distributed} manner. \subsubsection{Actor-Critic MADRL} \label{Body_3} Following the standard RL setting, we consider the aforementioned SAT-UT RA environment in which many agents interact for a given number of discrete time steps. At each time step $n$, agent $j$ receives a state $\mathbf{s}_{j}[n]$ and selects an action $\mathbf{a}_{j}[n]$ from some set of possible actions $\mathcal{A}$ according to its policy $\pi_{\theta_{j}}$, where $\pi_{\theta_{j}}$ is a mapping from states $\mathbf{s}_{j}[n]$ to actions $\mathbf{a}_{j}[n]$. In return, the agent observes the next state $\mathbf{s}_{j}[n+1]$ and receives a scalar reward $r_{j}[n]$. The process continues until the agent reaches a terminal state, after which the process restarts. The return $\tilde{r}_{j}[n] = \sum\nolimits_{k=0}^{\infty} \gamma^{k} r_{j}[n+k]$ is the total accumulated reward from time step $n$ with discount factor $\gamma \in (0, 1]$. For our multi-agent scenario, we employ an Actor-Critic RL framework \cite{DRL_AC}, which combines the benefits of both policy gradient and value-based methods. The Actor-Critic framework comprises two NNs: an Actor NN seeks actions that obtain higher rewards based on the policy gradient method, while its paired Critic NN approximates the value function via the value-based method. In particular, each UT agent $j$ has an Actor~NN and a Critic~NN and follows the synchronous advantage Actor-Critic operation \cite{DRL_AC, DRL_A3C}, in which the Critic NN updates its model parameters $\phi_{j}$ according to the policy $\pi_{\theta_{j}}$ given by the Actor NN, while the Actor NN updates its model parameters $\theta_{j}$ according to the value function $V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[n]; \phi_{j})$ approximated by the Critic NN. Specifically, the Critic NN aims to minimize the loss function \begin{align} L^{j}_{\mathrm{Critic}}(\phi_{j}) = \kappa_{j}[n]^{2}, \end{align} where $\kappa_{j}[n] = r_{j}[n+1] + \gamma V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[n+1]; \phi_{j}) - V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[n]; \phi_{j})$ is referred to as the advantage of an action \cite{DRL_TDerror}. The Critic NN model parameters are then updated as \begin{equation} d \phi_{j} \leftarrow d \phi_{j} + \beta_{c} (R - V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[n]; \phi_{j}))\nabla_{\phi_{j}} V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[n]; \phi_{j}), \end{equation} where $\beta_{c}$ is the hyperparameter of the Critic NN related to the value estimation loss. Meanwhile, the Actor NN aims to minimize the loss function \begin{equation} L^{j}_{\mathrm{Actor}}(\theta_{j}) = -\kappa_{j}[n] \log\pi(\mathbf{a}_{j}[n] | \mathbf{s}_{j}[n];\theta_{j}).
\end{equation} Hereafter, the index $j$ identifies different actors in the multi-agent scenario and can be omitted in the single-agent case. Consequently, the Actor NN model parameters are updated~as \begin{equation} d \theta_{j} \leftarrow d \theta_{j} + \nabla_{\theta_{j}} \log{\pi(\mathbf{a}_{j}[n] | \mathbf{s}_{j}[n]; \theta_{j})}(R - V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[n]; \phi_{j})) + \beta_{e}\, {\partial H(\pi(\mathbf{a}_{j}[n] | \mathbf{s}_{j}[n]; \theta_{j}))}/{\partial \theta_{j}}, \end{equation} where $H(\pi)$ is the entropy of the policy and $\beta_{e}$ is the hyperparameter controlling the relative contribution of the entropy regularization term. The NN parameters are updated via gradient descent and backpropagation through time, using Advantage Actor-Critic as detailed in \cite{DRL_A3C}. We summarize the eRACH training process for each UT agent $j$ in Algorithm \ref{Algorithm}. \begin{algorithm}[] \small \caption{eRACH Training Algorithm} \label{Algorithm} Initialize step counter $n \leftarrow 1$ \\ Initialize episode counter $E \leftarrow 1$ \\ \Repeat{$E > E_{\mathrm{max}}$}{ Reset gradients: $d\theta_{j} \leftarrow 0$ and $d \phi_{j} \leftarrow 0$ \\ $n_{\mathrm{start}} = n$ \\ Get state $\mathbf{s}_{j}[n]$ \\ \Repeat{$\mathrm{terminal} \ \mathbf{s}_{j}[n]$}{ Perform access action $\mathbf{a}_{j}[n]$ according to policy $\pi(\mathbf{a}_{j}[n] | \mathbf{s}_{j}[n]; \theta_{j})$ \\ Receive reward $r_{j}[n]$ and new state $\mathbf{s}_{j}[n+1]$ \\ $n \leftarrow n + 1$ } $R = \begin{cases} 0, & \text{for terminal} \ \mathbf{s}_{j}[n]\\ V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[n]; \phi_{j}), & \text{for non-terminal} \ \mathbf{s}_{j}[n] \end{cases}$ \\ \For{$i \in \{ n-1, \dots, n_{\mathrm{start}} \}$}{ $R \leftarrow r_{j}[i] + \gamma R$ \\ Accumulate gradients w.r.t. $\theta_{j}$: \\ $ d \theta_{j} \leftarrow d \theta_{j} + \nabla_{\theta_{j}} \log{\pi(\mathbf{a}_{j}[i] | \mathbf{s}_{j}[i]; \theta_{j})(R - V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[i]; \phi_{j}))} $ \\ $ + \beta_{e} \partial H(\pi(\mathbf{a}_{j}[i] | \mathbf{s}_{j}[i]; \theta_{j})) / \partial \theta_{j} $ \\ Accumulate gradients w.r.t. $\phi_{j}$: \\ $d \phi_{j} \leftarrow d \phi_{j} + \beta_{c} (R - V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[i]; \phi_{j}))\nabla_{\phi_{j}} V^{\pi_{\theta_{j}}}(\mathbf{s}_{j}[i]; \phi_{j})$ } Perform update of $\theta_{j}$ using $d \theta_{j}$ and of $\phi_{j}$ using $d \phi_{j}$ \\ $E \leftarrow E + 1$ } \end{algorithm}
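For readers who prefer code, below is a minimal single-transition sketch of the Actor-Critic update above, written in TensorFlow 2 to match the framework used in our evaluations. The two-hidden-layer, 128-unit MLPs and the RMSprop learning rate follow the model architecture described in Sec.~\ref{Numerical Result}; the state dimension, discount factor, and entropy weight are illustrative placeholders, and the batched, backpropagation-through-time training of Algorithm \ref{Algorithm} is omitted for brevity.
\begin{verbatim}
import tensorflow as tf

def mlp(out_dim):  # 2 hidden layers of 128 ReLU units
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(out_dim)])

K_planes, state_dim = 2, 10                # state_dim is a placeholder
actor, critic = mlp(K_planes + 1), mlp(1)  # actions: BACKOFF + K planes
opt_a = tf.keras.optimizers.RMSprop(1e-4)
opt_c = tf.keras.optimizers.RMSprop(1e-4)
gamma, beta_e = 0.99, 0.01

def a2c_step(s, a, r, s_next, terminal):
    """One advantage Actor-Critic update on a transition (s, a, r, s')."""
    v_next = 0.0 if terminal else float(critic(s_next)[0, 0])
    target = r + gamma * v_next            # bootstrapped return R
    with tf.GradientTape(persistent=True) as tape:
        v = critic(s)[0, 0]
        kappa = tf.stop_gradient(target - v)   # advantage kappa_j[n]
        critic_loss = (target - v) ** 2        # L_Critic = kappa^2
        logits = actor(s)
        logp = tf.nn.log_softmax(logits)[0, a]
        probs = tf.nn.softmax(logits)
        entropy = -tf.reduce_sum(probs * tf.math.log(probs + 1e-8))
        actor_loss = -kappa * logp - beta_e * entropy  # with entropy bonus
    opt_c.apply_gradients(zip(
        tape.gradient(critic_loss, critic.trainable_variables),
        critic.trainable_variables))
    opt_a.apply_gradients(zip(
        tape.gradient(actor_loss, actor.trainable_variables),
        actor.trainable_variables))

s = tf.random.normal([1, state_dim])       # dummy transition for illustration
a2c_step(s, a=1, r=0.5, s_next=tf.random.normal([1, state_dim]),
         terminal=False)
\end{verbatim}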
\section{Numerical Evaluations} \label{Numerical Result} This section validates the proposed emergent RA method, eRACH, for LEO SAT networks with respect to average throughput, collision rate, and delay. \subsection{Simulation Settings} \label{SimulationSetting} \begin{table}[] \centering \resizebox{1\columnwidth}{!}{\begin{minipage}[t]{0.5\textwidth} \caption{Simulation parameters.} \centering \input{Table/Table_Parameter} \label{table_Paramter} \end{minipage}} \end{table} Unless otherwise stated, the UTs under study are located on the ground uniformly at random within an area of $1000 \times 1000$ [m$^2$], while $K=2$ orbital planes circulate over the UTs at an altitude of $550$ [km]. Each orbital lane consists of $22$ SATs with an orbital lane circumference of $43486$ [km], resulting in an inter-SAT distance of $1977$ [km]. Since the orbital lane circumference is much larger than the area of interest, each orbital lane is approximated as a line segment at the altitude of $550$ [km] \cite{Starlink}, on which $22$ SATs orbit with orbital speed $7.59$ [km/s], separated by the inter-SAT distance of $1977$ [km] and perturbed with $\sigma_{i}$. Here, the orbital period and speed are calculated using the relations $4\pi^{2}(r_{\mathrm{E}})^{3} = T^{2}GM$ and $V^{2}r_{\mathrm{E}} = GM$, respectively, where $r_{\mathrm{E}}$ is the orbital radius in metres; $T$, the orbital period in seconds; $V$, the orbital speed in [m/s]; $G$, the gravitational constant, approximately $6.673 \times 10^{-11}$ [m$^{3}$\,kg$^{-1}$\,s$^{-2}$]; and $M$, the mass of the Earth, approximately $5.98 \times 10^{24}$ [kg]. Unless otherwise stated, the simulation environment and parameters are as follows: an RA opportunity opens every $\tau = \tau_{s} + \tau_{d} = 100$ [ms]; the RA signaling duration and data transmission duration are set to $\tau_{s}=10$ [ms] and $\tau_{d}=90$ [ms], respectively; data transmission is conducted for the duration $\tau_d$ only if the access attempt succeeds; the objective function in our MDP model corresponds to the un-discounted accumulated rewards with $\rho = 1$ over an episode of up to $N = \frac{T}{I \tau} = 2604$ RA opportunities; and $J=5$ UT agents are considered, with $K=2$ orbital planes circulating over the UTs and $P=2$ preamble resources, in the MADRL environment described above. \begin{table*}[t] \centering \resizebox{2.\columnwidth}{!}{\begin{minipage}[h]{1.72\columnwidth} \centering \caption{Throughput, collision rate, and access latency of eRACH, compared with eRACH-Coop, RACH, and Slotted ALOHA. } \label{Table_Proposed} \input{Table/Table_Proposed} \vspace{.3em} \end{minipage}} \end{table*} Throughout this section, we consider two benchmark RA schemes and our proposed eRACH with or without cooperation among agents, as listed below. \begin{enumerate} \item \textbf{Slotted ALOHA} is a traditional contention-based channel access protocol. UTs agree upon the discrete slot boundaries. At the beginning of the access slot, i.e., during $\tau_{s}$, each UT chooses the SAT to access uniformly at random. For each SAT, if more than one UT attempts to access at the beginning of a slot, a collision occurs. \item \textbf{RACH} is another conventional contention-based channel access protocol used in NR and LTE. RACH selects a SAT to access uniformly at random and additionally chooses the preamble $p_j[n]$ at the beginning of the access slot, i.e., during $\tau_{s}$. When a collision occurs, the UT waits for a uniformly distributed backoff time and then repeats the process from the beginning. Here, the backoff time follows a discrete uniform distribution $\tau_{b} \sim DU(1, W\tau)$, where the backoff window size $W$ is assumed fixed at $10$. As in Release 16 of NR \cite{3GPP_NR_RACH_Rel16}, we consider that RA signaling is done in two steps, i.e., 2-step RACH. \item \textbf{eRACH-Coop} is a variant of our proposed RA scheme in which each UT selects its action while cooperatively communicating with the other UT agents. Unlike \emph{eRACH}, where each distributed UT agent uses partially observable information, the cooperative UT agents of \emph{eRACH-Coop} obtain full observability through cheap talk with other agents. In particular, the cooperative UT agents use the network-wide throughput resulting from the previous access actions as the reward, and include the collision information of all UTs in the state.
The reward and state for the cooperative agent $j$ are given by \begin{align} r^{\mathrm{C}}_{j}[n] &= g({\textstyle\sum}_{j' \in \mathcal{J}} R_{j'}[n]), \label{Reward_C} \\ \mathbf{s}^{\mathrm{C}}_{j}[n] &= \lbrace \mathbf{s}_{j}[n], {\textstyle\sum}_{j' \in \mathcal{J}}R_{j'}[n], {\textstyle\sum}_{j' \in \mathcal{J}}c_{j'}[n] \rbrace. \label{State_C} \end{align} This reward structure follows centralized training with decentralized execution (CTDE): during centralized training, the centralized reward makes the throughput and collision events of the other UT agents observable. Note that each agent has its own Actor-Critic network and decides its optimal access action via the trained policy. Here, cheap talk is necessary during both the training and exploitation phases. \item \textbf{eRACH} is our proposed RA scheme using the Actor-Critic framework, wherein each UT agent optimizes its access action in the environment in a fully \emph{distributed} manner. Each UT has partial state observability. The agents interact through the environment but do not communicate with other agents. The learning can thus be cast as a partially observable MDP (POMDP). \end{enumerate} It is worth noting that in eRACH-Coop, unlike the other baselines, each UT utilizes information from other UTs (see \eqref{Reward_C} and \eqref{State_C}). Accordingly, eRACH-Coop can achieve better performance by using this additional information, and one can regard its performance as an upper bound. The following conditions are considered when comparing the baselines: \textit{1)} all UTs have enough data to transmit; \textit{2)} if a collision occurs at a time slot for a SAT, all UTs that attempted that SAT in that slot fail to access; and \textit{3)} the RA signaling is done in a two-step procedure (see Sec.~\ref{Sec2_RA}). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figure_Result/Thr_Itr_Distributed_Agents.pdf} \caption{The convergence of eRACH for each UT agent ($J=5$).} \label{fig_Convergence_Agents} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figure_Result/Thr_Itr.pdf} \caption{Network throughput for different RA schemes.} \label{fig_Thr_Itr} \end{figure} \subsection{Model Architecture and Training} For all Actor and Critic NNs, we identically consider a $4$-layer fully-connected multi-layer perceptron (MLP) architecture. Each MLP has $2$ hidden layers of $128$ units each with rectified linear unit (ReLU) activation functions. Both Actor and Critic NNs are trained using RMSprop with learning rate $0.0001$, batch size $2604$, $2604$ iterations per episode and per update, and $1000$ training episodes. The simulations are implemented using TensorFlow Version 2.3. The main simulation parameters are summarized in Table \ref{table_Paramter}. Figs. \ref{fig_Convergence_Agents} and \ref{fig_Thr_Itr} plot the convergence of eRACH. First, Fig. \ref{fig_Convergence_Agents} shows the training convergence behavior of the Actor and Critic NNs for the five UT agents, in which the solid curves denote the cumulative reward of each UT agent. The results show that each Actor and Critic NN for $J=5$ converges within about $500$ episodes. Besides, Fig. \ref{fig_Thr_Itr} compares eRACH and eRACH-Coop with conventional RACH and slotted ALOHA over the number of training episodes, in which the shaded areas correspond to the maximum deviations over $5$ simulation runs.
Note that our cumulative reward corresponds to the average network throughput, so this figure also validates the convergence of the proposed Actor-Critic MADRL-based method in Algorithm \ref{Algorithm}. As identified in this figure, eRACH and eRACH-Coop converge and outperform the conventional RACH scheme within only around $200$ training episodes, suggesting that the data observed during a single day is enough to train an eRACH UT agent. \begin{table*}[t] \centering \resizebox{1\linewidth}{!}{\begin{minipage}[h]{\linewidth} \centering \caption{Top-view snapshots of SAT BS associations and preamble resource utilization (Resource Util.) under eRACH and RACH for $4$ consecutive time slots, where associated and backed-off UTs are drawn with or without solid lines, respectively ($K=2$, $I=22$, $J=5$, $P= 2$). For the same period, the RA snapshots are illustrated in Fig. \ref{Snapshot}. } \label{Illustration_Snapshot} \input{Table/Table_Snapshot} \end{minipage}} \end{table*} \begin{figure*} \centering \subfloat[RACH. \label{a-Snapshot}]{\includegraphics[width=0.33\linewidth]{Figure_Result/SnapShot_RACH.pdf}} \subfloat[eRACH. \label{c-Snapshot}]{\includegraphics[width=0.33\linewidth]{Figure_Result/SnapShot_Distribute_Collision.pdf}} \subfloat[eRACH-Coop. \label{d-Snapshot}]{\includegraphics[width=0.33\linewidth]{Figure_Result/SnapShot_Cooperative.pdf}} \caption{RA snapshots under eRACH, eRACH-Coop, and RACH for $10$ consecutive time slots, where $\{\textsf{SAT1},\textsf{SAT2}\}$ identifies SAT BS associations, and the red color of each box indicates the collision rate (the darker, the higher). For the first $4$ consecutive time slots in (a) and (b), the SAT BS association snapshots are illustrated in Fig. \ref{Illustration_Snapshot} ($K=2$, $I=22$, $J=5$, $P=2$).} \label{Snapshot} \vspace{.2em} \end{figure*} \subsection{RA Performance Analysis} Fig. \ref{fig_Thr_Itr} and Table \ref{Table_Proposed} compare eRACH with the other baselines in terms of network throughput, collision rate, and access delay. The results validate that eRACH achieves higher average network throughput with lower average access delay than the baselines. In particular, compared to RACH, eRACH and eRACH-Coop achieve 31.18\% and 54.61\% higher average network throughput, which corresponds to 5.08x and 6.02x the average network throughput of slotted ALOHA, respectively. Moreover, the results show that eRACH achieves 1.49x and 7.31x lower average access latency compared to RACH and slotted ALOHA, respectively. Next, in terms of RA collisions, the average collision rate of eRACH is 1.41x lower than that of slotted ALOHA, yet 4.94x higher than that of RACH. As opposed to the model-based protocols, eRACH is willing to risk collisions to yield higher throughput and lower access latency in a given network and channel environment. This suggests that eRACH flexibly optimizes access according to the relative importance of throughput and collisions in the LEO SAT network. Such flexibility is advantageous for best-effort services such as enhanced mobile broadband (eMBB) applications, but becomes a downside for mission-critical applications such as ultra-reliable and low-latency communication (URLLC). For the latter case, investigating emergent protocols that strictly abide by a collision constraint is interesting and deferred to future work.
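The comparison that follows reports Jain's fairness index; for reference, a minimal implementation with toy numbers. The index equals $(\sum_j x_j)^2 / (J \sum_j x_j^2)$ over the per-UT throughputs $x_j$ and is $1$ when all UTs obtain equal throughput.
\begin{verbatim}
import numpy as np

def jains_index(x):
    """Jain's fairness index of per-UT throughputs x; 1 = perfectly fair."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

print(jains_index([1.0, 1.0, 1.0, 1.0, 1.0]))  # 1.0
print(jains_index([5.0, 1.0, 1.0, 1.0, 1.0]))  # ~0.56: one UT dominates
\end{verbatim}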
Lastly, comparing the performance between eRACH and eRACH-Coop in Table \ref{Table_Proposed}, we conclude that eRACH is the fairer protocol, achieving 1.24x higher Jain's fairness than eRACH-Coop. The rationale is that the fully distributed operation of eRACH inherently hides the information that may promote selfish actions. Meanwhile, there is also room to improve the fairness of eRACH-Coop by applying a fairness-aware reward function during training and/or learning fairness-aware representations for cheap-talk communication, which could be an interesting direction for future research. \subsection{SAT Association and RA Operations} As shown in Table \ref{Illustration_Snapshot}, eRACH efficiently utilizes resources by optimizing the SAT-UT association while accounting for the non-stationary LEO SAT network. Besides, as shown in Fig. \ref{Snapshot}, eRACH backs off flexibly, thereby avoiding collisions in consideration of the given local environment, whereas RACH, when a collision occurs, often backs off more than necessary due to its randomly selected backoff window. In particular, eRACH-Coop in Fig. \ref{d-Snapshot} shows that a certain UT agent continuously backs off. Since the cooperative UT agents consider the network throughput rather than their own throughput in the reward function, the sacrifice by this agent is reasonable. In contrast, for eRACH in Fig. \ref{c-Snapshot}, the agents take turns deciding to back off. The distributed UT agents thus learn how to sacrifice for the entire network even without exchanging information between agents. This suggests that the distributed eRACH protocol is able to emerge from a given local environment during the training process. We are, however, aware of a few limitations of eRACH and eRACH-Coop, marked by a dotted purple line in Figs. \ref{c-Snapshot} and \ref{d-Snapshot}: both exhibit unnecessary backoff decisions that appear throughput-wise inefficient. This underlines that \textit{1)} DRL-based methods occasionally choose a poor action, especially when dealing with time-series data as in our environment, and \textit{2)} a fairness issue can arise in the cooperative case while a certain agent sacrifices itself for the sake of the entire network. Despite these limitations, we observe from the comparison results that eRACH can emerge from a local environment with the following remarkable features: \textit{1)} eRACH flexibly avoids unnecessary backoff with an understanding of the given network conditions, and \textit{2)} eRACH accesses the optimal LEO SAT considering the periodicity of the LEO SAT constellation. \subsection{Ablation Study of Key Hyperparameters} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figure_Result/Thr_Itr_Distributed} \caption{Impact of SAT location precision on average network throughput ($\sigma^2=100$). } \label{fig_Thr_Itr_Actual} \vspace{.4em} \end{figure} \textbf{Impact of SAT Location Precision}.\quad While training eRACH, the SAT location information dominantly affects the reward and convergence, as explained in Fig. \ref{fig_StateReasoning} of Sec.~\ref{Sec4B}. However, it is challenging to obtain the SAT location precisely. In this regard, to examine whether easy-to-acquire prior information can be used instead of actually observed information, we present Fig. \ref{fig_Thr_Itr_Actual} and Table \ref{Table_PositionError}. \begin{table}[] \centering \caption{Impact of SAT location precision on normalized reward and the number of episodes until convergence.
This table is also visualized in Fig. \ref{fig_PositionError}.} \resizebox{1\columnwidth}{!}{\begin{minipage}[]{.96\columnwidth} \centering \label{Table_PositionError} \input{Table/Table_PositionError} \end{minipage}} \end{table} Specifically, various physical forces perturb the SATs, e.g., Earth oblateness, solar and lunar gravitational effects, and gravitational resonance effects \cite{SAT_PositionError}. In actual SAT communication systems, various methods are used to track the location of a perturbed SAT, such as the pointing, acquisition, and tracking (PAT) mechanism \cite{SAT_PAT} and the global navigation satellite system (GNSS). Even with such tracking and navigation systems, some orbital positioning error of a LEO SAT is inevitable. In this regard, we demonstrate in Fig. \ref{fig_Thr_Itr_Actual} and Table \ref{Table_PositionError} the validity of using the periodic LEO SAT location information, which is prior knowledge, in eRACH. As shown in Fig. \ref{fig_Thr_Itr_Actual}, eRACH trained with the periodic LEO SAT position information converges within around $600$ episodes and closely approaches the one trained with the actual LEO SAT position information. However, the periodic SAT position is not always effective for training eRACH, as shown in Table \ref{Table_PositionError}. The number of episodes until convergence, which corresponds to the training time, grows as the positional error increases, and the overall reward decreases accordingly. Nevertheless, the periodic SAT position information can still be used effectively for eRACH up to the divergence point at around $\sigma^2 = 10^4$. These results suggest that eRACH can be sufficiently trained with only prior information, which emphasizes that eRACH can be trained not only by online RL but also by offline RL \cite{DRL_OnlineOffline}. \begin{figure}[t] \centering \subfloat[Sparse Case, $d_{j, j'} = 1000$. \label{Snapshot_Density_Sparse}]{\includegraphics[width=0.75\linewidth]{Figure_Result/SnapShot_Distributed_Sparse.pdf}}\hfill \\ \subfloat[Dense Case, $d_{j, j'} = 10$. \label{Snapshot_Density_Dense}]{\includegraphics[width=0.75\linewidth]{Figure_Result/SnapShot_Distributed_Dense.pdf}} \caption{Impact of UT distribution on the access action policy of eRACH. Here, $d_{j, j'}$ represents the average distance between UTs in [m].} \label{Snapshot_Density} \vspace{.2em} \end{figure} \textbf{Impact of UT Distribution}.\quad The performance of an access protocol depends not only on the available resources and the number of UTs attempting access but also on the deployment scenario, e.g., sparse versus dense networks. In this regard, we present Fig. \ref{Snapshot_Density} to demonstrate the impact of the UT distribution on eRACH. In sparse networks, each UT experiences different channel conditions toward the SATs; thus, each UT has a particularly advantageous SAT to connect to. Here, eRACH mainly focuses on designing the optimal association with a SAT rather than on the backoff action. In contrast, in dense networks, the channel difference between UTs is not noticeable, since the distance between UTs is small relative to the distance between the SAT and the UTs. In the sparse case of Fig. \ref{Snapshot_Density_Sparse}, eRACH does not back off during the entire time horizon. In contrast, in the dense case of Fig. \ref{Snapshot_Density_Dense}, eRACH backs off in up to about $30$\% of the time slots and focuses on the backoff action rather than on optimal association.
Notably, eRACH achieves similar performance in both cases even though it adopts different access behaviors. This further corroborates that eRACH, which flexibly decides its access action, can emerge from a given local network scenario without communication between agents. \begin{figure}[t] \centering \subfloat[\textit{Rate-Max} ($\rho=0$). \label{RateMax}]{\includegraphics[width=0.75\linewidth]{Figure_Result/SnapShot_Distribute_Rate.pdf}} \hfill \\ \subfloat[\textit{Collision-Aware} ($\rho=2$). \label{CollisionAware}]{\includegraphics[width=0.75\linewidth]{Figure_Result/SnapShot_Distribute_rho2.pdf}}\hfill \caption{Impact of the collision aversion factor $\rho$ on the access action policy of eRACH.} \label{Snapshot_RateMax} \end{figure} \textbf{Impact of the Collision Aversion Factor $\rho$}.\quad In the LEO SAT network, which suffers from a long one-way propagation delay, contention resolution is challenging, as discussed in Sec. \ref{Background}. Hence, collision aversion can be another significant consideration, and we further present numerical results for the collision-averse case. To see the effect clearly, we additionally present \textit{Rate-Max}, which considers only the throughput during eRACH training by setting $\rho = 0$; the reward function in \eqref{Reward_D} is then rewritten for the \textit{Rate-Max} case as \begin{align} r_{j}[n] = g(R_{j}[n]), \ \forall j, n. \label{Reward_D_Objective} \end{align} As shown in Table \ref{Table_Objective} and Figs. \ref{Snapshot_RateMax} and \ref{fig_Objective}, the eRACH agent learns to mainly maximize throughput or to mainly avoid collisions, depending on the collision aversion factor $\rho$. These results verify that the eRACH method can be flexibly adapted to various applications. \begin{table}[] \centering \caption{ Impact of $\rho$ on average network throughput and average collision rate of eRACH ($K=2$, $I=22$, $J=5$, $P=2$). } \resizebox{1\columnwidth}{!}{\begin{minipage}[h]{.96\columnwidth} \centering \label{Table_Objective} \input{Table/Table_Objective} \end{minipage}} \end{table} \begin{figure}[] \centering \includegraphics[width=.85\columnwidth, clip]{Figure_Result/Figure_Objective.pdf} \caption{Comparison of eRACH over the collision aversion factor $\rho$.} \label{fig_Objective} \end{figure}
\section{Introduction} Object detection is a heavily-investigated topic in the computer vision community, because it is fundamental and a prerequisite for many other vision tasks, such as instance segmentation \cite{Mask_RCNN_2017, ins_seg_2016} and high-level object-based reasoning \cite{NIPS2017}. \begin{figure}[t] \centering \includegraphics[width=1.2\linewidth]{./figures/image_bbox.pdf} \caption{\small{ Illustration of the \emph{misalignment} between Smooth-$\ell_1$ Loss and the metric $IoU$. Although the predicted box 1 matches the ground-truth box better than the predicted box 2 ($IoU$: 0.50 vs 0.14), the Smooth-$\ell_1$ Loss between the ground-truth box and the predicted box 1 is larger than that between the ground-truth box and the predicted box 2 (Smooth-$\ell_1$ Loss: 0.54 vs 0.50).}} \label{Fig.1} \vspace{-1em} \end{figure} With the advent of deep CNNs \cite{Krizhevsky2012ImageNet, BatchNorm2015, ResNet2016} in recent years, the performance of object detection has progressed substantially. There are generally two approaches to improve detection accuracy besides increasing training samples: constructing ingenious architectures and devising better losses. The construction of CNN architectures has made great strides in the past years \cite{Fast_RCNN_2015, Faster_RCNN_2017, FPN2017, Cascade_RCNN_2018, YOLOv3_2018, SSD_2016, Focal_loss_retinanet_2017, He2017Deep, Cheng1, Cheng2, Cheng3}. One tendency is to design more and more sophisticated architectures for better performance, but this commonly increases the computational cost. In contrast, devising better losses is more economical, since the improvement comes at little cost of extra training and inference time. However, research on devising losses, especially localization losses, has received much less attention in past years. Since R-CNN introduced a four-variable-independent-regression loss for localization in 2013~\cite{RCNN_2013}, the localization loss in modern deep detectors has changed little. Although the four-variable-independent-regression loss is simple and straightforward, it is not consistent with the final detection performance metric, $IoU$. The gap between the four-variable-independent-regression loss and $IoU$ inevitably results in misaligned cases -- the loss is small, but $IoU$ is also small, which means the predicted box and the ground-truth box overlap little, and vice versa. An example in Fig. 1 visually illustrates this gap between Smooth-$\ell_1$ Loss (the most widely-used four-variable-independent-regression loss) and $IoU$. It is intuitive that adopting an $IoU$-related loss can address this problem. However, standard $IoU$ based losses did not become popular in the past years, since the standard $IoU$ has two intrinsic deficiencies. {(i)} When the predicted box and the ground-truth box do not overlap, the standard $IoU$ itself is ill-defined since its value is constant zero. The gradient of any standard $IoU$ based loss then also becomes zero, so backpropagation cannot pull the predicted box close to the ground-truth in this non-overlapping case. {(ii)} The gradient of a simple standard $IoU$ loss at the minimum, where two boxes completely overlap, is non-zero, which brings about oscillation and slow convergence when applying gradient descent algorithms. Very recently, \cite{GIOU_2019} pioneeringly proposed $GIoU$, which adds a regularization term to a standard $IoU$ loss so that the new loss has non-zero gradients when two boxes are not overlapping.
However, the regularization term also makes {$GIoU$} not equivalent to the standard $IoU$ when two boxes are overlapping. Hence the performance of {$GIoU$} might be suboptimal, as the standard $IoU$ is the final evaluation metric. Moreover, $GIoU$ Loss still does not overcome the problems of oscillation and slow convergence. \cite{DIoU2020} presented the $CIoU$ loss by incorporating the normalized distance between two boxes into the standard $IoU$. Actually, $CIoU$ can be considered a combination of a four-variable-independent-regression loss and a standard $IoU$ loss. $CIoU$ converges much faster than $GIoU$, but it still cannot avoid oscillation due to non-zero gradients at the minimum. In this paper, we propose a systematic method to tackle all the problems above and introduce some new techniques to further improve detection accuracy. \begin{itemize} \item We propose a more generalized and well-defined $IoU$, namely $EIoU$. In the case of overlapping bounding boxes, $EIoU$ is identical to the standard $IoU$, while in the case of non-overlapping boxes, $EIoU$ becomes smaller as the two boxes separate further, which makes $EIoU$ trainable. \item We present a convexification technique (CT) to construct a new loss. It forces the gradient to become zero at the minimum, so the minimum can be reached by gradient descent algorithms. Moreover, just like Focal Loss, the convexification technique adaptively assigns higher weights to hard examples. \item We introduce a steady optimization technique (SOT) to make the loss approach the minimum steadily and smoothly. The convergence of the steady optimization technique is theoretically ensured. \item Harnessing the ground-truth $IoU$ score computed in the new loss above, we add a single-layer head trained to predict this $IoU$ score. We can then utilize the predicted $IoU$ score to help non-maximum suppression (NMS) select more precise bounding boxes in the inference stage. \end{itemize} \section{Related Work} \noindent\textbf{Architectures of CNN based detectors.} The architecture of modern CNN based detectors can generally be divided into two parts: the backbone network and the detection-specific network. The backbones are commonly borrowed from networks designed for classification, among which VGG \cite{VGG_2015}, ResNet \cite{ResNet2016}, and ResNeXt \cite{ResNeXt_2017} are often leveraged. Besides, some specially-designed backbones for detection were also proposed in past years, such as DarkNet \cite{YOLO9000_2017} and DetNet \cite{DetNet_2018}; Hourglass Net \cite{HourglassNet2016} is also frequently adopted. There are two different logics for designing a detection-specific network. The first is the two-stage network, which consists of two sub-networks: the first generates a sparse set of candidate proposals, and the other determines the accurate locations and categories based on the proposals. R-CNN \cite{RCNN_2013}, Fast R-CNN \cite{Fast_RCNN_2015} and Faster R-CNN \cite{Faster_RCNN_2017} shaped the basic network architecture of two-stage detectors, and then R-FCN \cite{R-FCN_2016} replaced the fully-connected sub-network with a convolutional sub-network to improve efficiency. FPN \cite{FPN2017} introduces a lateral network to produce object proposals at multiple scales with more contextual information. Cascade R-CNN devised a cascade structure that improves performance substantially \cite{Cascade_RCNN_2018}.
\cite{IoUNet_2018} proposed IoU-Net and IoU-guided NMS to acquire localization confidence for accurate detection. Grid R-CNN \cite{GridRCNN2019} can capture spatial information explicitly and enjoys the position-sensitive property of fully convolutional architectures. Very recently, TridentNet \cite{TrideNet2019} constructed a parallel multi-branch architecture aiming to generate scale-specific feature maps with a uniform representational power. The other is the one-stage network, which directly predicts the locations and categories of object instances. YOLO \cite{YOLO_2015} and SSD \cite{SSD_2016} first popularized one-stage methods by greatly reducing the computational cost while maintaining competitive performance. Then, DSSD \cite{DSSD_2017} and RON \cite{RON_2017} introduced networks similar to the hourglass network to combine low-level and high-level information. RetinaNet~\cite{Focal_loss_retinanet_2017} with Focal Loss was the first one-stage detector to outperform two-stage detectors. RefineDet \cite{RefineDet2018} designed an anchor refinement module and an object detection module to reduce negative boxes and improve detection. CornerNet \cite{CornerNet_2018} is an anchor-free framework that adopts two subnetworks to detect the top-left and bottom-right keypoints and then employs a grouping subnetwork to pair them. Later, other competitive anchor-free detectors, such as FSAF \cite{FSAF2019}, FCOS \cite{FCOS2019} and CenterNet \cite{zhou2019objects, CenterNet2019}, were developed. These ingenious architectures significantly promoted the evolution of object detection. It is worth noting that the improvement in detection performance is partly attributed to sophisticated backbones and detection-specific networks that commonly bring extra computational cost. \noindent\textbf{Losses of CNN based detectors.} Compared with the design of architectures, the exploration of losses is more economical, because a well-devised loss can yield performance gains with little additional training-time cost and no extra test-time cost. However, research on losses for detection was undervalued for a long time. Modern CNN based detectors were popularized by R-CNN in 2013~\cite{RCNN_2013}, which introduced the softmax loss and a four-variable-independent-regression loss for classification and localization, respectively. Since then, this type of classification loss and localization loss became mainstream and was applied in most detectors. As for the classification loss, YOLO~\cite{YOLO_2015} employed the $\ell_2$ loss for categorization, but the later improved YOLO9000~\cite{YOLO9000_2017} returned to the softmax loss. Afterwards, Focal Loss~\cite{Focal_loss_retinanet_2017} was specially developed to address the extreme foreground-background imbalance problem in one-stage detectors; it adaptively down-weights the overwhelming number of well-classified background examples to achieve better detection performance. Recently, \cite{Cheng1} devised new losses to address the object rotation problem and the within-class diversity problem. In terms of localization loss, Fast R-CNN substituted the four-variable-independent-regression $\ell_2$ loss used in R-CNN with the Smooth-$\ell_1$ loss \cite{Fast_RCNN_2015}. The localization losses of later CNN based detectors mostly follow the Smooth-$\ell_1$ loss with little or no change \cite{Faster_RCNN_2017, YOLO_2015, Focal_loss_retinanet_2017, FPN2017, IoUNet_2018}.
However, as illustrated in Section 1 and Fig. 1, there is a misalignment between the Smooth-$\ell_1$ loss and the evaluation metric $IoU$. Hence, \cite{IoU_Loss_2016} introduced a standard $IoU$ based loss to address this problem. Nevertheless, the standard $IoU$ has its own defect: as long as two boxes are detached, no matter how far apart they are, the standard $IoU$ is constant zero, so the gradient of a standard $IoU$ based loss also becomes zero and the loss is not trainable in this case. $GIoU$ \cite{GIOU_2019} introduced a well-designed term added to a standard $IoU$ based loss, so that the new loss becomes non-zero when two boxes are separated. This pioneering work made great progress towards making $IoU$ based losses feasible. But this very added term makes the new loss no longer equal the standard $IoU$; hence, $GIoU$ Loss in some cases of overlapping boxes may be larger than in some cases of non-overlapping boxes. In this work, we propose a systematic method to tackle the above problems of existing localization losses. \section{The Proposed Approach} In this section, we present our systematic approach. We first introduce the standard $IoU$ and explain why it fails when two boxes are non-overlapping. Next, we show how we devise a new extended $IoU$ that overcomes this difficulty. Then, we use a convexification/focal technique to construct an extended $IoU$ based loss. Afterward, we provide a steady optimization technique to make the training process steady and smooth. Finally, we present an interrelated IoU-predicting head to select more precise predicted bounding boxes. \subsection{ Standard IoU} Constructing an $IoU$ based loss is an intuitive way to tackle the unappealing problems that four-variable-independent-regression losses bring. However, the standard IoU ($SIoU$) has some deficiencies that hinder the prevalence of $IoU$ based losses, which we elaborate on in the following. Denote the target bounding box by the tuple $\left(x_1^t, y_1^t, x_2^t, y_2^t\right)$ and the predicted box by the tuple $\left(x_1^p, y_1^p, x_2^p, y_2^p \right)$, where $x_1$, $y_1$ and $x_2$, $y_2$ are the coordinates of the top-left and bottom-right corners of the bounding boxes, respectively. When two boxes are overlapping, the definition of the standard $SIoU$ is \begin{align} &x_1 = \max\left(x_1^t,~ x_1^p \right), \label{Eq.1}\\ &y_1 = \max\left(y_1^t,~ y_1^p \right), \\ &x_2 = \min\left(x_2^t,~ x_2^p \right), \\ &y_2 = \min\left(y_2^t,~ y_2^p \right), \label{Eq.4}\\ &I_{\rm{std}} = (x_2 - x_1)(y_2 - y_1), \\ &S_t = (x_2^t - x_1^t)(y_2^t - y_1^t), \label{Eq.6}\\ &S_p = (x_2^p - x_1^p)(y_2^p - y_1^p), \\ &U_{\rm{std}} = S_t + S_p - I_{\rm{std}},\label{Eq.8} \\ &SIoU = \frac{I_{\rm{std}}}{U_{\rm{std}}} \label{Eq.9}. \end{align} However, when two boxes are not overlapping, the values of the intersection $I_{\rm{std}}$ and of $SIoU$ are constant $0$, which brings two drawbacks. \begin{itemize} \item $SIoU$ cannot distinguish whether the two boxes are in close vicinity or separated remotely. \item The gradient of $SIoU$ for backpropagation also becomes zero. \end{itemize} Hence $SIoU$ is not trainable in this case\footnote{ Actually, $SIoU$ is not trainable only when all the box pairs are non-overlapping. In practice, it is common that a batch contains both overlapping and non-overlapping box pairs.
Hence the total gradient of a batch might not be zero. However, the presence of non-overlapping boxes in a batch still degrades the performance of $SIoU$, as can be seen in Table \ref{Tab.3}. }.
\subsection{Extended IoU} In this subsection, we introduce our extended IoU ($EIoU$), which is exactly equivalent to the standard $IoU$ in the case of overlapping boxes and has non-zero gradients in the case of non-overlapping boxes. Retaining the definitions of Eqs.(\ref{Eq.1}-\ref{Eq.4}), the extended intersection ($I_e$) is \begin{align} &x_0 = \min\left(x_1^t, ~ x_1^p \right) \label{Eq.11}\\ &y_0 = \min\left(y_1^t, ~ y_1^p \right) \\ &x_{\min} = \min\left(x_1, ~ x_2 \right) \\ &y_{\min} = \min\left(y_1, ~ y_2 \right) \\ &x_{\max} = \max\left(x_1, ~ x_2 \right) \\ &y_{\max} = \max\left(y_1, ~ y_2 \right)\label{Eq.15} \end{align} \begin{equation} \begin{aligned} I_e = &S_1 + S_2 - S_3 - S_4 \\ =&(x_2 - x_0)(y_2 - y_0) + (x_{\min} - x_0)(y_{\min} - y_0) \\ &- (x_1 - x_0)(y_{\max} - y_0) - (x_{\max} - x_0)(y_1 - y_0), \\ \end{aligned} \label{Eq.17} \end{equation} where $S_1$ is the area of the rectangle with top-left corner $(x_0,y_0)$ and bottom-right corner $(x_2, y_2)$; $S_2$ is the area of the rectangle with top-left corner $(x_0,y_0)$ and bottom-right corner $(x_{\min}, y_{\min})$; $S_3$ is the area of the rectangle with top-left corner $(x_0,y_0)$ and bottom-right corner $(x_1,y_{\max})$; and $S_4$ is the area of the rectangle with top-left corner $(x_0,y_{0})$ and bottom-right corner $(x_{\max}, y_{1})$.
\begin{figure}[tbp] \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=1\linewidth]{./figures/EIoU_a.pdf}} \vspace{-10pt} \centerline{ \tiny{(a) Overlapping ($x_1 < x_2$, $y_1<y_2$)}} \centerline{ \tiny{$IoU=EIoU=\frac{1}{11}$}} \end{minipage} \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=1\linewidth]{./figures/EIoU_b.pdf}} \vspace{-10pt} \centerline{ \tiny{(b) Non-Overlapping ($x_1 > x_2$, $y_1<y_2$)}} \centerline{ \tiny{$IoU=0$, $EIoU=-\frac{1}{5}$}} \end{minipage} \\ \vspace{10pt} \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=1\linewidth]{./figures/EIoU_c.pdf}} \vspace{-10pt} \centerline{ \tiny{(c) Non-Overlapping ($x_1 < x_2$, $y_1>y_2$)}} \centerline{ \tiny{$IoU=0$, $EIoU=-\frac{1}{4}$}} \end{minipage} \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=1\linewidth]{./figures/EIoU_d.pdf}} \vspace{-10pt} \centerline{ \tiny{(d) Non-Overlapping ($x_1 > x_2$, $y_1>y_2$)}} \centerline{ \tiny{$IoU=0$, $EIoU=-\frac{5}{11}$}} \end{minipage} \caption{\small{ Illustration of the difference between $EIoU$ and $SIoU$. Since $IoU = \frac{I}{S_t + S_p - I}$ and $S_t$ and $S_p$ are fixed, the difference between $I_e$ and $I_{\rm std}$ is the key. From Eqs.(\ref{Eq.11}-\ref{Eq.17}), we know $I_{\rm{e}} = S_1 + S_2 - S_3 -S_4$, where $S_1$ is the area of the rectangle with top-left corner $(x_0,y_0)$ and bottom-right corner $(x_2, y_2)$; $S_2$ is the area of the rectangle with top-left corner $(x_0,y_0)$ and bottom-right corner $(x_{\min}, y_{\min})$; $S_3$ is the area of the rectangle with top-left corner $(x_0,y_0)$ and bottom-right corner $(x_1,y_{\max})$; and $S_4$ is the area of the rectangle with top-left corner $(x_0,y_{0})$ and bottom-right corner $(x_{\max}, y_{1})$. The standard $I_{\rm{std}} = S_0$, where $S_0$ is the area of the rectangle with top-left corner $(x_1,y_{1})$ and bottom-right corner $(x_{2}, y_{2})$.
Thus, when two boxes are overlapping, as shown in (a) with $x_1 < x_2$ and $y_1<y_2$, $I_{\rm{e}}$ is always positive and exactly equivalent to the standard $I_{\rm{std}}$. When two boxes are not overlapping, there are three situations, shown in (b) with $x_1 > x_2$ and $y_1<y_2$, (c) with $x_1 < x_2$ and $y_1>y_2$, and (d) with $x_1 > x_2$ and $y_1>y_2$. In these cases, $I_{\rm{e}}$ becomes \emph{negative}. Moreover, unlike $I_{\rm{std}}$, which remains \emph{constant $0$}, the further apart the two boxes are, the smaller the value of $I_{\rm{e}}$, which conforms better to human intuition and makes the gradients of $I_{\rm{e}}$ \emph{non-zero}. Note that the light yellow regions in (a), (b) and (c) are counted once, and the deep yellow regions in (b), (c) and (d) are counted twice. } } \label{Fig.2} \vspace{-12pt} \end{figure}
We enumerate all four situations, overlapping or non-overlapping, for the proposed $I_{\rm e}$ below. (i) As shown in \emph{Fig \ref{Fig.2}(a)}, when two boxes are overlapping with $x_1 < x_2$ and $y_1 < y_2$, we have $x_{\min} = x_1$, $x_{\max} = x_2$, $y_{\min} = y_1$ and $y_{\max} = y_2$, and then \begin{equation} \begin{aligned} I_e = &(x_{\max} - x_0)(y_{\max} - y_0) + (x_{\min} - x_0)(y_{\min} - y_0) \\ &- (x_{\min} - x_0)(y_{\max} - y_0) - (x_{\max} - x_0)(y_{\min} - y_0) \\ =&(x_{\max} - x_{\min})(y_{\max} - y_{\min}) \\ =&(x_{2} - x_{1})(y_{2} - y_{1}) \\ >& 0. \end{aligned} \label{Eq.19} \end{equation} (ii) As shown in \emph{Fig \ref{Fig.2}(b)}, when two boxes are non-overlapping with $x_1 > x_2$ and $y_1 < y_2$, we have $x_{\min} = x_2$, $x_{\max} = x_1$, $y_{\min} = y_1$ and $y_{\max} = y_2$, and then \begin{equation} \begin{aligned} I_e = &(x_{\min} - x_0)(y_{\max} - y_0) + (x_{\min} - x_0)(y_{\min} - y_0) \\ &- (x_{\max} - x_0)(y_{\max} - y_0) - (x_{\max} - x_0)(y_{\min} - y_0) \\ =&(x_{\min} - x_{\max})(y_{\max} - y_{0}) + (x_{\min} - x_{\max})(y_{\min} - y_{0})\\ <& 0. \end{aligned} \end{equation} (iii) As shown in \emph{Fig \ref{Fig.2}(c)}, when two boxes are non-overlapping with $x_1 < x_2$ and $y_1 > y_2$, we have $x_{\min} = x_1$, $x_{\max} = x_2$, $y_{\min} = y_2$ and $y_{\max} = y_1$, and then \begin{equation} \begin{aligned} I_e = &(x_{\max} - x_0)(y_{\min} - y_0) + (x_{\min} - x_0)(y_{\min} - y_0) \\ &- (x_{\min} - x_0)(y_{\max} - y_0) - (x_{\max} - x_0)(y_{\max} - y_0) \\ =&(x_{\max} - x_{0})(y_{\min} - y_{\max}) + (x_{\min} - x_{0})(y_{\min} - y_{\max})\\ <& 0. \end{aligned} \end{equation} (iv) As shown in \emph{Fig \ref{Fig.2}(d)}, when two boxes are non-overlapping with $x_1 > x_2$ and $y_1 > y_2$, we have $x_{\min} = x_2$, $x_{\max} = x_1$, $y_{\min} = y_2$ and $y_{\max} = y_1$, and then \begin{equation} \begin{aligned} I_e = &(x_{\min} - x_0)(y_{\min} - y_0) + (x_{\min} - x_0)(y_{\min} - y_0) \\ &- (x_{\max} - x_0)(y_{\max} - y_0) - (x_{\max} - x_0)(y_{\max} - y_0) \\ =&2\left((x_{\min} - x_{0})(y_{\min} - y_0) - (x_{\max} - x_{0})(y_{\max} - y_0)\right)\\ <& 0. \end{aligned} \label{Eq.20} \end{equation} Therefore, $I_e$ is positive and reduces to $I_{\rm std}$ when the boxes overlap, and is negative, decreasing with the distance between the boxes, when they do not. Analogous to {$SIoU$} in Eqs.(\ref{Eq.6}-\ref{Eq.9}), we obtain $EIoU$ based on $I_{\rm e}$ in Eq.
(\ref{Eq.17}): \begin{align} &U_e = S_t + S_p - I_e, \label{Eq.21} \\ &EIoU = \frac{I_e}{U_e}, \label{Eq.21_1} \end{align} where $U_{\rm e}$ is a function of $I_{\rm e}$ and is always larger than zero, so the characteristics of $EIoU$ are similar to those of $I_{\rm e}$, summarized as follows: \begin{itemize} \item When two boxes are attached, $EIoU$ is \emph{exactly equivalent} to the standard ${IoU}$ and is always larger than zero. \item When the two boxes are detached, $EIoU$ is smaller than zero and decreases with the distance between the two boxes, so that gradient descent algorithms can be employed to train the predicted box to approach the targeted box until they match. \end{itemize}
\noindent \textbf{Differences From GIoU.} Both $GIoU$ \cite{GIOU_2019} and the proposed $EIoU$ aim to address the problem of zero gradients when two boxes do not overlap, but there are still some significant distinctions between them. As shown in Algorithm 1, $GIoU$ adds an extra term after ${SIoU}$, which can be considered a regularization term. The new term indeed gives \emph{GIoU} non-zero gradients when two boxes are detached, but it also means $GIoU$ is no longer equivalent to ${SIoU}$ when two boxes are attached. This change causes new problems. \emph{First}, it brings some counter-intuitive and unreasonable cases, one of which is visually illustrated in Fig \ref{Figure.3}. \emph{Second}, the performance of $GIoU$ might be suboptimal, since \emph{SIoU} is the final evaluation metric. In contrast, $EIoU$ is neither a regularization method nor an incremental modification of $GIoU$. We address the root of the problem by redefining ${IoU}$, so that it is trainable in the non-overlapping case and reduces to ${SIoU}$ in the overlapping case. Accordingly, $EIoU$ will never encounter plights similar to those shown in Fig \ref{Figure.3}.
\renewcommand\arraystretch{1.0} \begin{table}[!hptb] \small \begin{threeparttable} \centering \begin{tabular}{p{8cm}l} \hline \textbf{Algorithm 1}: $GIoU$ in \cite{GIOU_2019} \\ \hline \textbf{Input:} Two arbitrary bounding boxes: $A$ and $B$ \\ \textbf{Output:} $GIoU$ \\ \textbf{1.} Find the smallest bounding box $C$ that encloses $A$ and $B$\\ \textbf{2.} Compute the standard IoU: $SIoU = \frac{A\bigcap B}{A\bigcup B}$ \\ \textbf{3.} Compute \emph{GIoU}: $GIoU = SIoU -\frac{C \setminus \left(A\bigcup B\right)}{C}$ \\ \hline \end{tabular} \end{threeparttable} \end{table}
\begin{figure}[htbp] \centering \vspace{-10pt} \subfigure[]{\includegraphics[width=0.5\linewidth]{./figures/GIoU_a.pdf}}\hspace{-5pt} \subfigure[]{\includegraphics[width=0.5\linewidth]{./figures/GIoU_b.pdf}} \hspace{-10pt} {\caption{\small{ A \emph{counter-intuitive} case for $GIoU$. (a) We first compute the standard IoU: $SIoU = \frac{A\bigcap B}{A\bigcup B}=\frac{0.25}{2-0.25} = \frac{1}{7}$, and then compute $GIoU = SIoU -\frac{C \setminus \left(A\bigcup B\right)}{C}=\frac{1}{7} - \frac{0.5}{2.25} = -\frac{5}{63}$, while in (b) the two boxes are just attached, so $GIoU = SIoU = 0$.
The value of $GIoU$ in (a) is \emph{smaller} than that in (b), which is inconsistent with the fact that the two boxes match better in (a) than in (b).}} \label{Figure.3}} \end{figure}
\subsection{Convexification Technique (CT)} \begin{figure*}[ht] \centering \subfigure[$- EIoU$ Loss]{\includegraphics[width=0.3\linewidth]{./figures/nonconvex.pdf}} \subfigure[Smooth-$EIoU$ Loss]{\includegraphics[width=0.3\linewidth]{./figures/convex.pdf}} \subfigure[Convergence Behaviors]{\includegraphics[width=0.28\linewidth]{./figures/origin_vs_GI_IoU.pdf}} \caption{ \small{Visualization of \small{$- EIoU$} and the Smooth-$EIoU$ Loss constructed from \small{$- EIoU$} with CT in Eq. (\ref{Eq.22}), when the targeted box is fixed at $(0, 0, 1, 1)$ and the predicted box varies as $(0, 0, x, y)$. (a) shows the value space of \small{$-EIoU$}: \small{$- EIoU$} is smaller than zero and not smooth in the neighbourhood of the minimum, and its gradient at the minimum is \emph{non-zero}; (b) shows the value space of Smooth-$EIoU$ Loss: after employing CT, Smooth-$EIoU$ Loss is larger than zero and smooth everywhere, and its gradients gradually approach zero near the minimum. (c) Convergence behaviours of \small{$-EIoU$} and Smooth-$EIoU$ Loss when the targeted box is fixed at $(0, 0, 1, 1)$ and the predicted box is initialized at $(0, 0, 0.5, 0.5)$. Due to the non-zero gradients of \small{$- EIoU$} at the minimum and the non-smoothness in its neighbourhood, the \small{$-EIoU$} Loss \emph{oscillates} dramatically and cannot converge. In contrast, {Smooth-$EIoU$ Loss quickly converges} to the optimum.} } \label{Fig.3} \vspace{-1em} \end{figure*}
Loosely speaking, any decreasing function of $IoU$ can be treated as a localization loss, such as $\frac{1}{IoU}$, {$-IoU$} and $-\ln(IoU)$, but there are two problems with these simple $IoU$ based losses. \emph{First}, they are not guaranteed to be non-negative. \emph{Second}, their gradients at the minimum are not zero. It is well known that (stochastic) gradient methods ideally converge to a minimum at which the gradient is zero; thus, in theory, training with these losses cannot reach the minimum. To make matters worse, non-zero gradients at the minimum are likely to make the training process oscillatory or non-convergent, and may even cause it to collapse in practice. To tackle these problems, we present the convexification technique (CT) to modify the loss and make it practical during training. It requires two steps: \begin{enumerate}[(i)] \item Add the negation of the minimum of the original loss. \item Square the resulting sum. \end{enumerate} With CT, any decreasing function of $IoU$ becomes a well-defined loss that is always non-negative and whose gradient at the minimum is zero. Note that CT is general: it can be employed to modify any loss, not only the localization loss, and endow it with these appealing characteristics. In this paper, we present a new loss based on the simplest decreasing function of $EIoU$, namely {$-EIoU$}. The minimal value of $-EIoU$ is $-1$, so the loss obtained through CT is\footnote{More generally, the power is not limited to 2 but can be any number greater than 1, such as ${\cal L}_{\rm Smooth\textrm{-}EIoU} = \left(1- \frac{I_e}{U_e}\right)^{1.5}$.} \begin{align} {\cal L}_{\rm Smooth\textrm{-}EIoU} = \left(1- \frac{I_e}{U_e}\right)^2. \label{Eq.22} \end{align} CT smooths the loss function, so the new loss is referred to as Smooth-$EIoU$ Loss.
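To make the construction above concrete, the following minimal Python sketch computes $EIoU$ and the resulting Smooth-$EIoU$ Loss for a single pair of axis-aligned boxes. It simply transcribes the definitions of $I_{\rm e}$, $U_{\rm e}$ and CT given above; it is an illustrative sketch, not the implementation used in our experiments.
\begin{verbatim}
def smooth_eiou_loss(t, p):
    """t, p: boxes (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    Returns (loss, eiou)."""
    x1t, y1t, x2t, y2t = t
    x1p, y1p, x2p, y2p = p
    # Intersection corners; "inverted" when the boxes are detached.
    x1, y1 = max(x1t, x1p), max(y1t, y1p)
    x2, y2 = min(x2t, x2p), min(y2t, y2p)
    # Reference corner and ordered intersection coordinates.
    x0, y0 = min(x1t, x1p), min(y1t, y1p)
    xmin, xmax = min(x1, x2), max(x1, x2)
    ymin, ymax = min(y1, y2), max(y1, y2)
    # Extended intersection I_e = S1 + S2 - S3 - S4.
    i_e = ((x2 - x0) * (y2 - y0) + (xmin - x0) * (ymin - y0)
           - (x1 - x0) * (ymax - y0) - (xmax - x0) * (y1 - y0))
    s_t = (x2t - x1t) * (y2t - y1t)   # area of the targeted box
    s_p = (x2p - x1p) * (y2p - y1p)   # area of the predicted box
    u_e = s_t + s_p - i_e             # extended union
    eiou = i_e / u_e                  # equals SIoU when the boxes overlap
    return (1.0 - eiou) ** 2, eiou    # CT: square of (1 - EIoU)
\end{verbatim}
As a sanity check, for the targeted box $(0, 0, 1, 1)$ and the predicted box $(0.5, 0.5, 1.5, 1.5)$ the sketch returns $EIoU = 1/7$, identical to the standard $IoU$; moving the predicted box to the detached position $(2, 0, 3, 1)$ yields $EIoU = -1/3$, and the value keeps decreasing as the boxes are separated further.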
The new loss also behaves like Focal Loss: CT endows Smooth-$EIoU$ Loss with a focal capability that down-weights the gradients of well-localized predicted boxes, \emph{i.e.}, \begin{equation} \vspace{-0.6em} \frac{\partial{\cal L}_{\rm Smooth\textrm{-}EIoU}}{\partial z} = -2\left(1- \frac{I_e}{U_e}\right)\frac{\partial\left(\frac{I_e}{U_e}\right)}{\partial z} \label{Eq:20} \end{equation} \noindent where $z$ is any one of $\{x_1^p, y_1^p, x_2^p, y_2^p\}$. The $EIoU$ between a well-localized box and the ground-truth box is close to $1$, so $\left(1- \frac{I_e}{U_e}\right)$ is close to $0$. Thus, $\frac{\partial{\cal L}_{\rm Smooth\textrm{-}EIoU}}{\partial z}$ also becomes very small, which means Smooth-{${EIoU}$} Loss down-weights easy box pairs and pays more attention to hard box pairs during training. The following example illustrates the importance of CT. Given the targeted bounding box $(0, 0, 1, 1)$ and the predicted bounding box $(0, 0, x, y)$, the value spaces of {$-EIoU$} and of the Smooth-$EIoU$ Loss constructed from {$-EIoU$} with CT are shown in Fig.~\ref{Fig.3}(a)-(b). Smooth-$EIoU$ Loss becomes smooth after employing CT, and its gradients gradually approach zero near the minimum, so CT enables Smooth-$EIoU$ Loss to reach the minimum under a gradient descent algorithm, as observed in Fig.~\ref{Fig.3}(c), which shows the convergence behavior of $-EIoU$ and Smooth-$EIoU$ Loss when the predicted bounding box starts from the initial value $(0, 0, 0.5, 0.5)$. Not surprisingly, $-EIoU$ oscillates severely and shows no tendency to converge. In contrast, Smooth-$EIoU$ Loss quickly and smoothly converges to the optimum. Notably, the steady optimization technique (SOT), which we elaborate in the next subsection, is adopted for both $-EIoU$ and Smooth-$EIoU$ Loss in this experiment.
\subsection{Steady Optimization Technique (SOT)} For simplicity, we only derive the partial derivative of Smooth-$EIoU$ Loss in Eq. (\ref{Eq.22}) w.r.t. $x_1^p$ here; the others are similar and are presented in the appendix. We first compute the gradient of $I_e$ w.r.t. $x_1^p$, \emph{i.e.}, \begin{equation} \frac{\partial I_e}{\partial x_1^p} = \left\{ \begin{aligned} &y_{\rm min}- y_{\rm max}, ~~~~~~~~{\rm if} ~ x_1^p \ge x_1^t ~{\rm and} ~ x_1 \le x_2, \\ &2y_0 - y_{\rm max} - y_1, ~~{\rm if}~ x_1^p \ge x_1^t ~{\rm and}~ x_1 > x_2, \\ &0, ~~~~~~~~~~~~~~~~~~~~~~~~~~~\;{\rm if}~ x_1^p < x_1^t. \end{aligned} \right. \label{Eq.23} \end{equation} We then compute the gradient of $U_e$ w.r.t. $x_1^p$, \begin{equation} \frac{\partial U_e}{\partial x_1^p} = (y_1^p - y_2^p) - \frac{\partial I_e}{\partial x_1^p}. \label{Eq.24} \end{equation} Finally, we obtain the gradient of Smooth-$EIoU$ Loss w.r.t. $x_1^p$, \begin{equation} \frac{\partial {\cal L}_{\rm{Smooth}\textrm{-}\rm{EIoU}}}{\partial x_1^p} = 2\left(1- \frac{I_e}{U_e}\right)\frac{I_e\frac{\partial U_e}{\partial x_1^p} - \frac{\partial I_e}{\partial x_1^p}U_e }{U_e^2}. \label{Eq.25} \end{equation} The partial derivatives of Smooth-$EIoU$ Loss w.r.t. $y_1^p$, $x_2^p$ and $y_2^p$ are similar to Eq. (\ref{Eq.25}) (for details, see Appendix A). From Eq. (\ref{Eq.17}), Eq. (\ref{Eq.21}) and Eq. (\ref{Eq.24}), we know $I_{\rm e} \propto s^2$, $U_{\rm e} \propto s^2$, $\frac{\partial I_e}{\partial x_1^p} \propto s$ and $\frac{\partial U_e}{\partial x_1^p} \propto s$, where $s$ is the size (height or width) of the predicted box $(x_1^p, y_1^p, x_2^p, y_2^p)$. Hence, analyzing Eq.
(\ref{Eq.25}), we find $\frac{\partial {\cal L}_{\rm{Smooth}\textrm{-}\rm{EIoU}}}{\partial x_1^p} \propto \frac{1}{s}$, which means the gradient of Smooth-$EIoU$ Loss w.r.t. $x_1^p$ is inversely proportional to the size of the predicted box. This inverse proportionality makes the loss difficult to converge in training: when the predicted box is large, the absolute difference between the targeted and predicted boxes is typically also large, so the variables should be updated with a relatively large step; yet applying the gradient in Eq. (\ref{Eq.25}) produces a small update instead. When the boxes are small, a similar dilemma arises. A good iterative update for the variables should be proportional to the size, just like the gradients of the $\ell_2$ Loss. To achieve this, we change the update rule for the variables of $EIoU$. Taking $x_1^p$ as an example, \emph{i.e.}, \begin{equation} \begin{aligned} x_{1_k}^p &= x_{1_{k-1}}^p - \alpha \frac{\partial {\cal L}_{\rm{Smooth}\textrm{-}\rm{EIoU}}}{\partial x_1^p}U_e \\ &= x_{1_{k-1}}^p -2\alpha\left(1- \frac{I_e}{U_e}\right)\frac{I_e\frac{\partial U_e}{\partial x_1^p} - \frac{\partial I_e}{\partial x_1^p}U_e }{U_e} \end{aligned} \label{Eq.26} \end{equation} \noindent where $k$ is the iteration number and $\alpha$ is the learning rate. Compared with Eq. (\ref{Eq.25}), Eq. (\ref{Eq.26}) multiplies by $U_e$ to make the new update proportional to the scale of the boxes. We call this method the steady optimization technique (SOT). The technique may seem heuristic, but we prove below that it is theoretically sound. \begin{theorem} If the gradient of $f(x)$, denoted $\nabla f(x)$, is Lipschitz continuous, i.e., \begin{equation} \Vert \nabla f(x_1) - \nabla f(x_2)\Vert_2 \le L \Vert x_1 - x_2\Vert_2, \label{Eq.27} \end{equation} if the function $g(x)$ is positive and bounded, i.e., $0 < g(x) \le M$, and if the learning rate satisfies $\alpha < \frac{1}{LM}$, then the update rule \begin{equation} x_{k+1} = x_k - \alpha g(x_k) \nabla f(x_k), \label{Eq.28} \end{equation} makes $f(x)$ decrease steadily. \label{The.1} \end{theorem} We provide the proof in the appendix. $U_e$ in our Smooth-$EIoU$ Loss is always greater than zero; therefore, if we set the learning rate properly, SOT ensures that Smooth-$EIoU$ Loss decreases steadily. From Eq. (\ref{Eq.22}) we know Smooth-$EIoU$ Loss is nonnegative and bounded, hence SOT further guarantees convergence by the bounded monotone principle.
\begin{figure}[ht] \vspace{-1em} \centering \subfigure[]{\includegraphics[width=0.5\linewidth]{./figures/non_steady_vs_steady_0.pdf}}\hspace{-5 pt} \subfigure[]{\includegraphics[width=0.5\linewidth]{./figures/non_steady_vs_steady_1.pdf}} \caption{ \small{ Convergence of Smooth-$EIoU$ Loss optimized with/without SOT: (a) comparison when the sizes of the targeted box and predicted box vary proportionally: the targeted box is fixed at $(0, 0, 1, 1)$ (small), $(0, 0, 2, 2)$ (medium) and $(0, 0, 4, 4)$ (large), and the initial value of the predicted box is proportionally set to $(0, 0, 0.5, 0.5)$ (small), $(0, 0, 1, 1)$ (medium) and $(0, 0, 2, 2)$ (large). The convergence tendency of Smooth-$EIoU$ Loss with SOT is exactly the same regardless of the size, while Smooth-$EIoU$ Loss {without SOT is very sensitive to the varying size}. The larger the size, the \emph{slower} the convergence rate, as analyzed above.
(b) comparison when only the size of the predicted box varies: the targeted box is fixed at $(0, 0, 1, 1)$, and the initial value of the predicted box is set to $(0, 0, 0.5, 0.5)$ (small), $(0, 0, 2, 2)$ (medium) and $(0, 0, 4, 4)$ (large). Smooth-$EIoU$ Loss with {SOT still converges quickly}, but Smooth-$EIoU$ Loss without SOT is even more sensitive to the size under this circumstance. When the initial value of the predicted box is set to $(0, 0, 4, 4)$, the loss is even \emph{trapped} and cannot move toward the targeted box.} } \label{Fig.4} \vspace{-0.5em} \end{figure}
According to Theorem \ref{The.1}, SOT is very general and can be applied to optimize many types of losses for steady convergence, including but not limited to fractional losses, whose gradients are not linearly proportional to the size. We design two examples in Fig.~\ref{Fig.4} to further demonstrate the superiority of SOT: one in which the sizes of the targeted and predicted boxes vary proportionally (Fig.~\ref{Fig.4}(a)), and one in which only the size of the predicted box varies (Fig.~\ref{Fig.4}(b)). As shown in Fig.~\ref{Fig.4}, SOT makes the convergence of the loss steady and robust to the sizes of the initial predicted box and the targeted box, whereas without SOT the loss is very sensitive to scale: the larger the boxes, the slower the convergence, and in the extreme case the loss becomes trapped and cannot move toward the targeted box, just as analyzed above.
\subsection{IoU Head} It has been demonstrated in \cite{IoUNet_2018} that there is a misalignment between classification confidence and localization accuracy, and that utilizing precisely predicted $IoU$ scores of bounding boxes to guide NMS largely alleviates this problem. Taking advantage of the ground-truth $IoU$ already calculated in Smooth-$EIoU$ Loss, we add an IoU Head and train it to predict accurate $IoU$ scores. Since $IoU$ is distributed over $[0, 1]$, we first utilize the sigmoid function to compress the predicted $IoU$ score to $[0, 1]$, and then employ a Kullback-Leibler (KL) divergence loss in training, \emph{i.e.}, \begin{align} &q_p(x) = {\rm Sigmoid}(x), \\ &{\cal L}_{KL} = q_g\log\frac{q_g}{q_p(x)} + (1 - q_g)\log\frac{1-q_g}{1-q_p(x)}, \end{align} where $x$ is the output of the IoU Head, $q_p(x)$ is the predicted $IoU$ score and $q_g$ is the ground-truth $IoU$ score generated in Smooth-$EIoU$ Loss. Note that the IoU Head is a single layer and shares most parameters with the classification head and the bounding-box head. Hence, it adds little computational cost in training and testing.
\noindent\textbf{Differences From IoU-Net.} \cite{IoUNet_2018} pioneered IoU-Net, which learns to predict $IoU$ to promote localization accuracy. However, there are still some significant differences between our IoU Head and IoU-Net.
\emph{Firstly}, we use a KL loss, widely proven effective for deep neural networks, rather than a squared loss. \emph{Secondly}, IoU-Net requires manually constructed synthetic bounding-box sets and is trained individually, in addition to the training of the main classification and localization branches, while our IoU Head can be seamlessly embedded into the existing network and trained end-to-end. \emph{Thirdly}, the IoU Head is much lighter than IoU-Net: IoU-Net is an individual subnet that works in parallel with the classification subnet and the localization subnet, while the IoU Head is a single-layer branch that shares most layers with the main branches. The architectures of IoU-Net and the IoU Head are visually illustrated in Fig \ref{Figure.6}. \emph{Fourthly}, the ground-truth $IoU$ used in the IoU Head is generated by the localization head, so the IoU Head and the localization head are closely interrelated with better cooperativity, and the ``$1+1>2$'' effect between them is shown in Table \ref{Tab.3}; IoU-Net, by contrast, has little relation with the localization head.
\begin{figure}[htbp] \small \centering \subfigure[IoU-Net]{\includegraphics[width=0.8\linewidth]{./figures/IoU-Net.pdf}} \subfigure[IoU Head]{\includegraphics[width=0.8\linewidth]{./figures/IoU-Head.pdf}} \vspace{-5pt} {\caption{\small{ The network architectures of IoU-Net and IoU Head.}} \label{Figure.6}} \vspace{-10pt} \end{figure}
\section{Implementation} In modern deep CNN based detectors, the neural network does not directly estimate the coordinates of the bounding box; instead, it predicts the normalized difference between the corresponding coordinates of the anchor or proposal box (henceforth, we simply say anchor box) and the targeted box, where the normalization values are the width and height of the anchor box. We adopt a similar strategy to generate the predicted box, but we uniformly employ the square root of the area of the anchor box to normalize all the coordinates, rather than normalizing each independently by the corresponding anchor dimension, since the former preserves the width-height ratio of the predicted and targeted boxes. For implementation details, see Algorithm 2.
\renewcommand\arraystretch{1.3} \begin{table}[!htbp] \small \centering \begin{threeparttable} \begin{tabular}{p{8cm}l} \hline ~~~~~~~~~~~~~~~~~~~~\textbf{Algorithm 2.} Training $EIoU$ Loss \\ \hline \textbf{Input}: the anchor box $(x_1^a, y_1^a, x_2^a, y_2^a)$, the target box $(x_1^t, y_1^t, x_2^t, y_2^t)$ and the CNN-predicted normalized difference value $(x_{1_0}^d, y_{1_0}^d, x_{2_0}^d, y_{2_0}^d)$ \\ \textbf{Output}: the $EIoU$ Loss ${\cal L}_{\rm{Smooth}\textrm{-}\rm{EIoU}}$ \\ Compute $S = \sqrt{(x_2^a - x_1^a)(y_2^a - y_1^a)}$ \\ Compute $(x_1^{a_n}, y_1^{a_n}, x_2^{a_n}, y_2^{a_n}) = (\frac{x_1^a}{S}, \frac{y_1^a}{S}, \frac{x_2^a}{S}, \frac{y_2^a}{S}) $ \\ Compute $(x_1^{t_n}, y_1^{t_n}, x_2^{t_n}, y_2^{t_n}) = (\frac{x_1^t}{S}, \frac{y_1^t}{S}, \frac{x_2^t}{S}, \frac{y_2^t}{S}) $ \\ \textbf{while} \emph{not convergence} \textbf{do} \\ \quad Compute $(x_{1_k}^p, y_{1_k}^p, x_{2_k}^p, y_{2_k}^p) = (x_{1_k}^d, y_{1_k}^d, x_{2_k}^d, y_{2_k}^d)$ \\ \quad \quad $ + (x_1^{a_n}, y_1^{a_n}, x_2^{a_n}, y_2^{a_n})$ \\ \quad Using Eq. (\ref{Eq.1})-(\ref{Eq.4}), (\ref{Eq.11})-(\ref{Eq.22}) to compute ${\cal L}_{\rm{Smooth}\textrm{-}\rm{EIoU}}$ \\ \quad Using Eq.
(\ref{Eq.23})-(\ref{Eq.25}) to compute $(\frac{\partial {\cal L}}{\partial x_{1_k}^p}, \frac{\partial {\cal L}}{\partial y_{1_k}^p}, \frac{\partial {\cal L}}{\partial x_{2_k}^p}, \frac{\partial {\cal L}}{\partial y_{2_k}^p} )$ \\ \quad $(\frac{\partial {\cal L}}{\partial x_{1_k}^d}, \frac{\partial {\cal L}}{\partial y_{1_k}^d}, \frac{\partial {\cal L}}{\partial x_{2_k}^d}, \frac{\partial {\cal L}}{\partial y_{2_k}^d} ) = (\frac{\partial {\cal L}}{\partial x_{1_k}^p}, \frac{\partial {\cal L}}{\partial y_{1_k}^p}, \frac{\partial {\cal L}}{\partial x_{2_k}^p}, \frac{\partial {\cal L}}{\partial y_{2_k}^p} )$ \\ \quad Using Eq. (\ref{Eq.26}) to update $(x_{1_{k+1}}^d, y_{1_{k+1}}^d, x_{2_{k+1}}^d, y_{2_{k+1}}^d)$ \\ \textbf{end while}\\ \hline \end{tabular} \end{threeparttable} \end{table}
\renewcommand\arraystretch{1.2} \begin{table*}[!hptb] \small \centering \caption{ Ablation study using Faster R-CNN with ResNet50+FPN as the backbone. Models are trained on the union set of VOC\_2007\_trainval and VOC\_2012\_trainval. The results are reported on the set of VOC\_2007\_test.} \begin{tabular}{|p{0.2cm}<{\centering} p{1.5cm}<{\centering} p{0.8cm}<{\centering} p{0.8cm}<{\centering} p{0.8cm}<{\centering} p{0.8cm}<{\centering} p{0.8cm}<{\centering}p{1.5cm}<{\centering}|l|} \hline &Smooth-$\ell_1$ &$SIoU$ &$GIoU$ &$EIoU$ &CT &SOT &IoU Head &AP \\ \hline \hline \textcircled{1} &\checkmark &- &- &- &- &- &- &45.5 (Smooth-$\ell_1$ Loss, Baseline) \\ \textcircled{2} &- &\checkmark &- &- &- &- &- &46.6 (Standard $IoU$ Loss) \\ \textcircled{3} &- &- &\checkmark &- &- &- &- &46.9 ($GIoU$ Loss) \\ \hline \hline \textcircled{4} &\checkmark &- &- &- &- &- &\checkmark &46.2 (Smooth-$\ell_1$ Loss with IoU Head) \\ \textcircled{5}&- &- &- &\checkmark &- &- &- &47.5 ($EIoU$ Loss) \\ \textcircled{6}&- &- &- &\checkmark &\checkmark &- &- &47.9 ($EIoU$ Loss with CT)\\ \textcircled{7}&- &- &- &\checkmark &\checkmark &\checkmark &- &48.2 ($EIoU$ Loss with CT, SOT)\\ \textcircled{8}&- &- &- &\checkmark &\checkmark &\checkmark &\checkmark &{49.7} ($EIoU$ Loss with CT, SOT and IoU Head)\\ \hline \end{tabular} \label{Tab.3} \end{table*}
\section{Experiment} \subsection{Experimental Setting} All the experiments are conducted on the benchmark datasets PASCAL VOC and MS COCO. Detectors are implemented in Facebook AI Research's Detectron system \cite{Detectron2018}. Following the default settings in Detectron, we trained all the detectors on 8 NVIDIA P100 GPUs. Each mini-batch contains 16 images in total, uniformly distributed across the 8 GPUs. Input images are resized to 500 and 800 pixels along the short side on PASCAL VOC and MS COCO, respectively. No data augmentation other than standard horizontal image flipping is employed. Standard SGD with a weight decay of $0.0001$ and a momentum of $0.9$ is adopted. We train the detectors for $20k$ iterations on PASCAL VOC and $90k$($180k$) iterations on MS COCO; the learning rate is set to 0.02 at the beginning and then decreased by a factor of $0.1$ after $12k$ and $18k$ iterations for PASCAL VOC and after $60k$($120k$) and $80k$($160k$) iterations for MS COCO, respectively. We comply with the MS COCO evaluation protocol to report the experimental results.
\subsection{Ablation Study} We implement ablation experiments on PASCAL VOC to clarify the contributions of the proposed $EIoU$, CT, SOT and IoU Head, and the results are reported in Table \ref{Tab.3}.
As shown in Table \ref{Tab.3}, replacing the baseline Smooth-$\ell_1$ Loss with the standard {$SIoU$} based loss improves the performance to some extent (+1.1\% mAP, comparing \textcircled{1} and \textcircled{2}). Substituting {$SIoU$} with {$GIoU$} further boosts the performance by +0.3\% mAP (comparing \textcircled{2} and \textcircled{3}), which is consistent with the results in Table 5 of \cite{GIOU_2019}. Compared with {$GIoU$}, individually equipping $EIoU$ brings a more substantial improvement (+0.9\% mAP, comparing \textcircled{2} and \textcircled{5}), which indicates that $EIoU$ may be more practically powerful than $GIoU$. With the help of CT, the performance is further promoted (+0.4\% mAP, comparing \textcircled{5} and \textcircled{6}). Exploiting SOT in training yields a further gain of +0.3\% mAP (comparing \textcircled{6} and \textcircled{7}). Adding the IoU Head to the network significantly improves the performance (+1.5\% mAP, comparing \textcircled{7} and \textcircled{8}). Interestingly, $EIoU$ Loss with the IoU Head generates better cooperativity than Smooth-$\ell_1$ Loss with the IoU Head (+1.5\% mAP vs +0.7\% mAP, comparing \textcircled{7}, \textcircled{8} and comparing \textcircled{1}, \textcircled{4}). The reason is that the IoU Head is closely related to an IoU based loss, so together they achieve a ``$1+1>2$'' effect. In total, the proposed systematic method, including $EIoU$, CT, SOT and the IoU Head, yields significant gains: 4.2\% mAP higher than the baseline Smooth-$\ell_1$ Loss that is overwhelmingly used in popular detectors (comparing \textcircled{1} and \textcircled{8}).
\renewcommand\arraystretch{1.1} \begin{table*}[!hptb] \small \centering \caption{ Comparisons of Average Precision (AP) of Smooth-$\ell_1$ Loss, $GIoU$ Loss, $CIoU$ Loss and $EIoU$ Loss attached to RetinaNet and Faster-RCNN with ResNet50+FPN as the backbone. Models are trained on the union set of VOC\_2007\_trainval and VOC\_2012\_trainval. The results are reported on the set of VOC\_2007\_test.} \begin{tabular}{|l|p{3.0cm}<{\centering}|p{1.0cm}<{\centering}|p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}| p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}|} \hline loss &Net &$\rm{mAP}$ &$\rm{AP_{50}}$ &$\rm{AP_{75}}$ &$\rm{AP_{90}}$ &$\rm{AP_{S}}$ &$\rm{AP_{M}}$ &$\rm{AP_{L}}$\\ \hline \hline Smooth-$\ell_1$ Loss (Baseline) \cite{FPN2017} &RetinaNet &44.2 &{70.5} &47.5 &25.0 &9.8 &28.3 &54.3 \\ $GIoU$ Loss \cite{GIOU_2019} &RetinaNet &45.2 &{70.0} &48.2 &28.8 &9.9 &29.1 &55.5 \\ $CIoU$ Loss \cite{DIoU2020} &RetinaNet &45.9 &{70.3} &50.1 &30.3 &10.1 &30.2 &55.9 \\ Ours &RetinaNet &\textbf{46.4} &\textbf{71.1} &\textbf{49.1} &\textbf{32.2} &\textbf{10.5} &\textbf{31.4} &\textbf{56.3} \\ \hline \hline Smooth-$\ell_1$ Loss (Baseline) \cite{FPN2017} &Faster-RCNN+FPN &45.5 &72.6 &49.8 &25.2 &10.0 &29.5 &55.6\\ $GIoU$ Loss \cite{GIOU_2019} &Faster-RCNN+FPN &46.9 &73.1 &50.8 &28.6 &9.6 &31.0 &57.2\\ $CIoU$ Loss \cite{DIoU2020} &Faster-RCNN+FPN &48.0 &73.5 &51.2 &28.8 &9.8 &31.8 &57.9 \\ Ours &Faster-RCNN+FPN &\textbf{49.7} &\textbf{73.7} &\textbf{54.1} &\textbf{33.4} &\textbf{11.9} &\textbf{32.8} &\textbf{59.8}\\ \hline \end{tabular} \label{Tab.1} \end{table*}
\renewcommand\arraystretch{1.1} \begin{table*}[!hptb] \small \centering \caption{ Comparisons of Average Precision (AP) of Smooth-$\ell_1$ Loss, $GIoU$ Loss, $CIoU$ Loss and $EIoU$ Loss attached to RetinaNet and Faster-RCNN with ResNet50+FPN as the backbone. Models are trained on the set of COCO\_2017\_train.
The results are reported on the set of COCO\_2017\_val.} \begin{tabular}{|l|p{3.0cm}<{\centering}|p{1.0cm}<{\centering}|p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}| p{1.0cm}<{\centering}p{1.0cm}<{\centering}p{1.0cm}<{\centering}|} \hline loss &Net &$\rm{mAP}$ &$\rm{AP_{50}}$ &$\rm{AP_{75}}$ &$\rm{AP_{90}}$ &$\rm{AP_{S}}$ &$\rm{AP_{M}}$ &$\rm{AP_{L}}$\\ \hline \hline Smooth-$\ell_1$ Loss (Baseline) \cite{FPN2017} &RetinaNet &35.7 &54.7 &38.5 &22.8 &19.5 &39.9 &47.5 \\ $GIoU$ Loss \cite{GIOU_2019} &RetinaNet &36.2 &54.5 &38.9 &24.4 &19.6 &40.3 &48.3\\ $CIoU$ Loss \cite{DIoU2020} &RetinaNet &36.7 &54.7 &39.3 &25.0 &20.1 &40.9 &48.9\\ Ours &RetinaNet &\textbf{37.1} &\textbf{54.8} &\textbf{39.6} &\textbf{25.7} &\textbf{20.6} &\textbf{41.3} &\textbf{49.2} \\ \hline \hline Smooth-$\ell_1$ Loss (Baseline) \cite{FPN2017} &Faster-RCNN+FPN &36.7 &{58.5} &39.6 &21.2 &{21.1} &39.8 &48.1\\ $GIoU$ Loss \cite{GIOU_2019} &Faster-RCNN+FPN &37.8 &58.4 &41.0 &24.8 &21.2 &40.9 &49.8 \\ $CIoU$ Loss \cite{DIoU2020} &Faster-RCNN+FPN &38.4 &58.3 &41.5 &25.2 &21.2 &41.3 &50.2 \\ Ours &Faster-RCNN+FPN &\textbf{39.0} &\textbf{58.7} &\textbf{42.1} &\textbf{26.5} &\textbf{22.3} &\textbf{41.7} &\textbf{50.7} \\ \hline \end{tabular} \label{Tab.2} \end{table*}
\begin{figure}[t] \vspace{-1em} \centering \subfigure[]{\includegraphics[width=0.5\linewidth]{./figures/IoU_VS_mAR_0.pdf}} \hspace{-10 pt} \subfigure[]{\includegraphics[width=0.5\linewidth]{./figures/IoU_VS_mAR_1.pdf}} \caption{\small{IoU threshold against Average Recall (AR) of Faster RCNN + FPN with Smooth-$\ell_1$ Loss, $GIoU$ Loss and our approach on (a) PASCAL VOC and (b) MS COCO. Models are trained on (a) the union set of VOC 2007 trainval and VOC 2012 trainval and (b) the COCO 2017 train set. The results are reported on (a) the VOC 2007 test set and (b) the COCO 2017 val set.}} \vspace{-1em} \label{Fig.5} \end{figure}
\renewcommand\arraystretch{1.1} \begin{table*}[!hptb] \small \centering \caption{ Performance of state-of-the-art detectors on the COCO test-dev set.
Our model is trained on the set of COCO\_2017\_train.} \begin{tabular}{|l|l| p{1.2cm}<{\centering}|p{1.2cm}<{\centering}p{1.2cm}<{\centering}p{1.2cm}<{\centering}|p{1.2cm}<{\centering} p{1.2cm}<{\centering}p{1.2cm}<{\centering}|} \hline Method &Backbone &Year &$\rm{mAP}$ &$\rm{AP_{50}}$ &$\rm{AP_{75}}$ & $\rm{AP_S}$ & $\rm{AP_M}$ & $\rm{AP_L}$\\ \hline \hline YOLOv3 \cite{YOLOv3_2018} &DarkNet-53 &2018 &33.0 &57.9 &34.4 &18.3 &35.4 &41.9\\ SSD513 \cite{SSD_2016} &ResNet-101 &2016 &31.2 &50.4 &33.3 &10.2 &34.5 &49.8\\ RetinaNet800\cite{Focal_loss_retinanet_2017} &ResNeXt-101 &2017 &39.1 &59.1 &42.3 &21.8 &42.7 &50.2\\ DSSD513 \cite{DSSD_2017} &ResNet-101 & 2017 &33.2 &53.3 &35.2 &13.0 &35.4 &51.1\\ RefineDet512\cite{RefineDet2018} &ResNet-101 & 2018 &36.4 &57.5 &39.5 &16.6 &39.9 &51.4\\ CornerNet511\cite{CornerNet_2018} &Hourglass-104 &2018 &40.5 &56.5 &43.1 &19.4 &42.7 &53.9 \\ CenterNet \cite{CenterNet2019} &Hourglass-104 &2019 &42.1 &61.1 &45.9 &24.1 &45.5 &52.8 \\ FCOS\cite{FCOS2019} &ResNeXt-101 &2019 &43.2 &62.8 & 46.6 &26.5 &46.2 &53.3 \\ \hline \hline Faster-R-CNN +++ \cite{ResNet2016} &ResNet-101 &2016 &34.9 &55.7 &37.4 &15.6 &38.7 &50.9\\ Faster-RCNN w FPN \cite{FPN2017} &ResNet-101 &2016 &36.2 &59.1 &39.0 &18.2 &39.0 &48.2\\ Mask R-CNN \cite{Mask_RCNN_2017} &ResNeXt-101 &2017 &39.8 &{62.3} &43.4 &22.1 &43.2 &51.2\\ DetNet \cite{DetNet_2018} &DetNet &2018 &40.3 &62.1 &43.8 &{23.6} &42.6 &50.0 \\ IoU-Net \cite{IoUNet_2018} &ResNet-101 &2018 &40.6 &59.0 &- &- &- &-\\ TridentNet w \emph{2fc} \cite{TrideNet2019} &ResNet-101 &2019 &42.0 &63.5 &45.5 &24.9 &47.0 &56.9 \\ Grid R-CNN \cite{GridRCNN2019} &ResNeXt-101 &2019 &43.2 &63.0 &46.6 &25.1 &46.5 &55.2 \\ \hline \hline Ours &ResNet-101 &2019 &{42.2} &61.8 &{46.1} &24.4 &{45.2} &{55.4} \\ Ours &ResNeXt-101 &2019 &\textbf{44.1} &\textbf{63.7} &\textbf{47.6} &\textbf{26.8} &\textbf{47.6} &\textbf{57.1} \\ \hline \end{tabular} \vspace{-0.5em} \label{Tab.4} \end{table*}
\subsection{Comparison to the Related Localization Losses} The proposed systematic method is mainly built on the localization loss, so in this subsection we extensively compare it to the widely used Smooth-$\ell_1$ Loss and the related $GIoU$ Loss \cite{GIOU_2019} and $CIoU$ Loss \cite{DIoU2020}. For simplicity, our systematic method is referred to as $EIoU$ Loss henceforth. All the losses are attached to RetinaNet (a typical one-stage model) and Faster-RCNN (a typical two-stage detection model) during training. The overall mean Average Precision (mAP) for all the losses is reported in Tables \ref{Tab.1}-\ref{Tab.2}. Besides, the results of Average Precision (AP) at the IoU thresholds $[0.5, 0.75, 0.90]$ and for small, medium and large objects are also listed for detailed comparison. As shown in Tables \ref{Tab.1} and \ref{Tab.2}, compared with Smooth-$\ell_1$ Loss and $GIoU$ Loss, $EIoU$ Loss steadily yields gains in both one-stage and two-stage detectors on PASCAL VOC and MS COCO. Specifically, against the baseline Smooth-$\ell_1$ Loss that is dominant in popular detectors, our approach combined with Faster R-CNN substantially boosts mAP by $4.2\%$ and $2.3\%$ on PASCAL VOC and COCO, respectively. When compared with $GIoU$ Loss, $EIoU$ Loss still consistently surpasses it, by a more than $2.0\%$ margin on PASCAL VOC and a more than $1.0\%$ margin on COCO. There is an interesting phenomenon: when the $IoU$ threshold is set to $0.5$, the performance of our approach is close to that of Smooth-$\ell_1$ Loss.
However, when the threshold grows higher, $EIoU$ Loss gradually outperforms Smooth-$\ell_1$ Loss and $GIoU$ Loss. In particular, at $\rm{AP}_{90}$, compared with Smooth-$\ell_1$ Loss, $EIoU$ Loss improves by 8.2\% on PASCAL VOC and by 5.3\% on MS COCO. The reason is that $EIoU$ Loss helps a detector predict more accurate bounds than Smooth-$\ell_1$ Loss. There is a known gap between Smooth-$\ell_1$ Loss and the final evaluation metric, IoU, and the relative gap enlarges as two boxes gradually match, while $EIoU$ is exactly equivalent to $IoU$ when two boxes overlap. Moreover, Smooth-$\ell_1$ Loss decreases more quickly than $EIoU$ Loss as two boxes gradually match, so during training Smooth-$\ell_1$ Loss commonly gives less attention to better-matched box pairs. Therefore, compared to Smooth-$\ell_1$ Loss, $EIoU$ Loss receives more gains when the final evaluation metric (IoU) is stricter. Another phenomenon observed from Tables \ref{Tab.1} and \ref{Tab.2} is that $EIoU$ Loss appears superior in detecting small objects, compared to {$GIoU$} Loss. Although the overall performance of {$GIoU$} Loss is 1.4\% higher than Smooth-$\ell_1$ Loss on PASCAL VOC with Faster-RCNN, Smooth-$\ell_1$ Loss and {$GIoU$} Loss obtain similar scores (10.0\% and 9.6\%) for small objects, which means {$GIoU$} Loss is still weak at detecting small objects; $EIoU$ Loss achieves 11.9\% under the same conditions. The superiority of $EIoU$ Loss in detecting small objects stems from the IoU-predicting head. In post-processing, classification confidence is conventionally used to guide non-maximum suppression (NMS) to filter redundant bounding boxes, and the correlation between classification confidence and localization confidence is commonly weaker when detecting smaller objects. In our method, we use the predicted IoU confidence to correct the bias between classification confidence and localization confidence; hence, our method has a better capacity for finding smaller objects. Additionally, in terms of improvement, Faster-RCNN + FPN with $EIoU$ Loss performs better than RetinaNet with $EIoU$ Loss. This may be because RetinaNet has denser anchor boxes, so it is less difficult for Smooth-$\ell_1$ Loss to regress the targeted boxes exactly. As shown in Fig \ref{Fig.5}, the superior performance of $EIoU$ Loss for \emph{Average Recall (AR)} is more obvious than that for AP across different $IoU$ thresholds, which means $EIoU$ Loss is more powerful at finding objects than the popular localization losses.
\subsection{Comparisons to State-of-the-Art Detectors} We evaluate $EIoU$ Loss attached to FPN on the MS COCO 2019 $test\textrm{-}dev$ set with $180k$ iterations and compare the results to state-of-the-art one-stage and two-stage detectors. The experimental results are presented in Table \ref{Tab.4}. For a fair comparison, we only list the results of competitors using a single model with no sophisticated data augmentation in training or testing. Without bells and whistles, our method with ResNeXt-64x4d-101+FPN achieves $44.1\%$ mAP, which surpasses the counterparts in Table \ref{Tab.4} by a large margin. Compared to the closest competitor, Grid R-CNN \cite{GridRCNN2019}, the superiority of the proposed approach is more substantial at the higher IoU threshold (0.75), improving by $1.0\%$ ($47.6\%$ vs $46.6\%$), which is consistent with our method predicting more precise bounding boxes.
\section{Conclusion and Discussion} Smooth-$\ell_1$ Loss and its variants dominate the localization loss in modern CNN based detectors. Nevertheless, their oversimplified assumption that the four coordinate variables of a bounding box are independent does not accord with reality, so the localization performance of these detectors may suffer. In light of this, we propose a generalized $EIoU$ to address this problem. To keep the $EIoU$ based loss from oscillating in the neighbourhood of the minimum and to optimize it steadily during training, we introduce CT and SOT. Moreover, we present the IoU Head to further improve localization accuracy. Very recently, a wide variety of anchor-free detectors \cite{CornerNet_2018,FSAF2019,FCOS2019,zhou2019objects,CenterNet2019} have been developed and are receiving more and more attention. We believe the proposed $EIoU$ Loss may be even more applicable to these detection models because, without anchors, there may be more non-overlapping box pairs. We provide a new route for designing $IoU$ based losses: any decreasing function of $IoU$ can be turned into an applicable localization loss through CT. We have only tried the simplest, $-IoU$; many other functions, not limited to $\frac{1}{IoU}$ and $-\ln(IoU)$, might be more appropriate, so there is great potential to further improve performance by exploiting these techniques. More importantly, CT and SOT are so general that they can reach beyond the field of detection. CT can help any loss have a zero gradient at the minimum and make it possible to achieve the minimum through gradient descent algorithms. SOT can help many types of losses, including but not limited to fractional losses (fractional losses are common in machine learning tasks, since we often need to minimize one objective function while simultaneously maximizing another), to steadily and smoothly arrive at the minimum. Therefore, CT and SOT may find applications in other fields. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} We live in exciting times for healthcare, standing on the edge of a fourth industrial revolution. Yet, in many ways, we are mired in the past, crippled by technical burden and antiquated ways of using technology. The advent of embedded computers in medical practice, and to some extent the internet and the mobile communication revolution, have made isolated impacts in closed-loop systems, diagnostics, and remote consultation. However, the mode of practice of medicine remains firmly hierarchical and rooted in traditional social constructs, such as the practice of `rounding,' which dates back to World War I and II \cite{united1962medical} and was a necessary product of mass casualty situations. The oral transmission of information at each shift change or round, with the laying on of hands, can be likened to traditional `campfire songs,' used before writing to transmit useful cultural heuristics and memories. Never-the-less, the recently articulated notion of the `unreasonable effectiveness' of machine learning (ML)\footnote{In 2014, Yann LeCun delivered a lecture with this title, borrowed from the title of Eugene Wigner's seminal article from 1960, `The Unreasonable Effectiveness of Mathematics in the Natural Sciences' \cite{Wigner1960}.} has created a bubble of excitement. (Although I use the eye-catching term Artificial Intelligence (AI) in the title, I will avoid its use in this article from hereon. AI is an overlapping and somewhat larger field, which generally refers to technology that mimics how humans behave, including the process of learning, whereas the term ML generally refers to the field of {\it how} the learning is achieved, and is used both in AI and in other prediction or classification tasks unrelated to AI.) ML, and deep learning in particular, has been used to successfully attack a series of complex problems, ranging from image searches to speech recognition, partially delivering on the promise of neural networks, which disappointed everyone back in the 1990's. It's important to note that neural networks have been around for decades, and deep learning itself is not as novel as the media might lead us to believe. Rather, Moore's Law for computation and its analog in storage capacity, together with the development of massive multicore GPUs (designed initially for computer gaming), have combined to push us across an imperceptible threshold. This is perhaps most notably embodied by the realization of self-driving cars, in which deep learning running on relatively low cost, low energy GPU-based (edge computing) systems provides real-time navigation in life-critical situations. Although high profile accidents have occurred, these happen at a much lower rate than the mistakes made by humans (3.1 fatalities per billion vehicle-km, compared to 7.3 per billion vehicle-km driven in the US in 2018 \cite{wikipediaSelfDrivingCarFatalities}). The potential is enormous when you consider that road traffic accidents are the largest global killer of people between 5 and 29 years old \cite{CDC2017}, costing an estimated 3\% of each nation's gross domestic product. However, it is arguably in healthcare where the greatest opportunity to reduce preventable errors exists. Diagnostic errors contribute an estimated 40,000 to 80,000 lost lives per year in the US \cite{PinnacleCare2015,Slawomirski2017}.
These numbers are comparable to (and, at an estimated US\$750 billion, cost far more than) the reported $\approx$170,000 unintentional injury deaths in the US, most of which are due to accidental poisoning ($\approx$65,000), motor vehicle accidents ($\approx$40,000) and falls ($\approx$36,000) \cite{WorldHealthOrganization2015,Slawomirski2017}. Given the scale of these health burdens, reduction through better systems has been a perennial call of healthcare professionals and thought leaders over the last decade or more \cite{Gawande2007,Balogh2016}. By analogy with the self-driving car industry, there is great potential to make enormous progress in the (similarly) life-critical healthcare arena. In particular, the same technology that is releasing humans from the errors we cause by driving is likely to revolutionize medicine in a similar manner - by providing additional oversight during processes that demand sustained attention. These changes have increased the capacity of the user (or healthcare professional) to deal with other simultaneous tasks. However, the barriers to success are different in healthcare, since the data and labels are far noisier, and the consequences of the predictions or classifications are less certain. Although it is exciting to see that the scientific literature has exploded with applications of ML to medical data, the vast majority of research (if not all) focuses on retrospectively trained ML algorithms. For routine classification tasks, such as reading images, there is enormous potential, although the use of algorithms outside of the population used to train and test them remains a barrier to adoption. Perhaps more worrying are the many works claiming to predict a spectrum of problems and events, from stroke to sepsis, to readmission and death. Several common mistakes are present in almost all publications, and most probably in commercial algorithms\footnote{It is worth noting that I believe that these issues apply to commercial algorithms as well as open research, because commercial algorithms are subject to less rigorous scientific review/scrutiny than those that are published in the scientific literature.}. Specifically, these errors include: \begin{itemize} \item Including the missingness of data in the models, along with other culture-dependent variables such as length of stay (thus encoding clinical behaviors rather than underlying physiology); \item Training and testing on either a single database or across all databases as a single set (reducing the chances that the model will work on new databases/hospitals); \item Not accounting for under-represented groups in the data; \item Not taking into account the temporal nature of the data, and confusing classification with prediction. Training on a retrospective cohort to predict an event, such as sepsis, even if you account for the relative rate of sepsis per unit time, ignores the fact that an algorithm must monitor continuously, so the false alarm rate is likely to be extremely high; \item Using the wrong (traditional) cost functions borrowed from classical information retrieval, such as AUC, AUPRC, Precision, Recall, Accuracy, F1, etc. In reality, the cost function should reflect the clinical behavior that would result from a prediction or classification, with a continuous time-variant fuzzy cost function related to repeated testing or the relative clinical cost of downstream treatment.
\item Ignoring how humans may respond to the alert/prediction, assuming it would be a one-time decision, rather than a watch-and-wait approach, or a series of downstream decisions; \item Providing a binary prediction, rather than a fuzzy membership, which more closely reflects human thinking, where you can be a member of more than one category; \item Not accounting for the differences in noise in the observations or the varying quality of labels; \item Failing to provide confidence intervals in a prediction; \item Assuming that there is one `best' algorithm. \end{itemize} The lattermost issue is perhaps the most under-appreciated in modern data science. There are multiple cultural reasons for this. First, although many sporting tournaments are team events, there is a tendency for us to idolize the individual best performer, and credit the one who pushed the ball over the goal line, rather than the ones that orchestrated the complex build-up that created the unique opportunity. (Although it is interesting to observe the differences in US sport, where `assists' are given some partial credit.) In addition, the mode of approving devices and software from commercial companies could also drive this singular `best athlete' mindset. A company usually patents a specific method of approaching a problem, rather than an array of specific methods, since the latter is harder to defend and implement. (Patents are often written to {\it generalize} a given method, but this just obfuscates the method, rather than opening the door for multiple algorithms to be used together.) The medical device or software approval process then also applies to a single approach. I do not recall having read a patent for an ensemble method in medicine, and it's hard to imagine how the FDA or other regulatory bodies would view such an approach. In many ways, it's entirely natural to imagine our own way is the best. If those of us at the cutting edge weren't over-confident in our ability to develop world-leading research, we wouldn't have the confidence to try in the first place. In reality, the pay-off for the effort is pretty remote. Never-the-less, if the decision is extremely important, the wise person polls many people they see as experts or trusted parties, and aggregates the decision from these pieces of advice, with a weighting based on the perceived integrity or expertise of each opinion source, plus some contextual factors (or other biases). Why do we not then extend this approach to scientific decision making more often, and place it in a robust mathematical framework? In this article, I argue that an ensemble of independent algorithms, or a {\em product of experts}, can work in a formal aggregation framework to guarantee a better algorithm than any single algorithm. Importantly, the ensemble algorithm deals with edge cases and minority classes in a far better way than a single generalized algorithm. In addition, this type of approach also provides the ability to estimate the confidence in the prediction, in very much the same way that we start to trust hurricane predictions when all the track lines start to converge and indicate landfall at the same location. I also argue that independence is extremely difficult to achieve, particularly with the increasing ease with which we can download each other's code, and the proliferation of standard data science libraries such as TensorFlow, Caffe, PyTorch, and Scikit-learn.
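To make the aggregation idea concrete, the sketch below (in Python) implements a minimal weighted {\em product of experts} for a binary prediction, where each algorithm reports a probability for the positive class and each is assigned a trust weight. The weights and the example numbers are purely illustrative assumptions, not a prescription; the spread of the individual predictions plays the role of the converging hurricane track lines.
\begin{verbatim}
import numpy as np

def product_of_experts(probs, weights):
    """probs:   K predicted probabilities of the positive class,
                one per independent algorithm.
       weights: K non-negative trust weights (hypothetical here).
       Returns (fused probability, spread of the expert opinions)."""
    probs = np.clip(np.asarray(probs, float), 1e-6, 1 - 1e-6)
    # A weighted product of Bernoulli experts reduces to a
    # weighted sum of log-odds, squashed back to a probability.
    log_odds = np.log(probs / (1.0 - probs))
    fused = 1.0 / (1.0 + np.exp(-np.dot(weights, log_odds)))
    # Low spread ~ the experts agree ~ more confidence in the fusion.
    return fused, probs.std()

# Example: three algorithms; the most trusted is weighted highest.
p, spread = product_of_experts([0.9, 0.7, 0.8], [0.5, 0.2, 0.3])
\end{verbatim}
Note that when the individual probabilities diverge, the spread grows, giving exactly the kind of confidence signal described above; the formal framework then amounts to choosing the weights in a principled, data-driven manner.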
Ironically, it is the spread of open-source codebases and the culture of posting repeatable science through public code repositories that has increased this tendency for much of the field of ML to manifest as minor variants of the same formulae. This increasing homogeneity of research software isn't an argument for concealing scientific methods and discouraging public libraries. Still, it does indicate that we must build better ways of identifying when a piece of code is genuinely novel or independent and is adding value to the overall prediction or classification task. In the following section, I describe a public challenge framework that could create such a tool. While the framework may be rather general, I will try to stick to examples from the field of ML in healthcare. \section{Designing a challenge to encourage independent methods and code} While this section focuses on the elements of a challenge, much of it can apply to research in general, particularly data set composition, label quality, and evaluation metrics. Of course, the primary consideration is the topic of the research. \subsection{Choosing a topic} In the field of healthcare, it is essential to have a well-defined target outcome that is meaningful. In that sense, it has to be something that leads to a change in action that is likely to cause an improvement in an individual. For example, the 2017 PhysioNet Challenge focused on classifying 30-second single-lead electrocardiograms into one of three rhythms (normal, atrial fibrillation, or other), or as too noisy to classify. Although there are many more rhythms, it is difficult to differentiate them without more clinical leads. Never-the-less, diagnosis of atrial fibrillation alone is enough to set off a sequence of referrals for further testing. This structure makes the 2017 Challenge an example of a well-defined problem. A poorly defined topic, by contrast, can sometimes appear the simplest and most obvious at first sight. A few years ago, a well-known ML-focused competition forum requested data from my institution to run a competition on mortality prediction. I advised against the idea because mortality is a poor treatment target. Everyone dies, but it's the timing and reason for death that is important, and potentially what you can do, if anything, to avoid it. Death before discharge from intensive care, discharge from hospital, or twenty-eight days later is arbitrary, creating a false dichotomy. Edge cases (those that die at twenty-nine days, for example) then create significant confusion for any classification or prediction algorithm. Moreover, the utility of such an algorithm is highly questionable (outside benchmarking the performance of institutions), since there is no specific intervention or treatment that can be prescribed, and it is not clear that predicting death early in a hospital stay could change the outcome even if the underlying cause of the prediction could be identified. In contrast to this, the 2019 PhysioNet Challenge's target was predicting sepsis six hours ahead of clinical suspicion, using the Sepsis 3 Criteria. These criteria provide a clear set of clinical markers, and predicting sepsis six hours before any clinician can identify its signs provides an actionable window that can significantly change a patient's trajectory and affect outcomes. \subsection{Data set composition} The data are the most important part of any challenge. It is essential to have at least three V's of big data: {\it Volume, Variety} and {\it Veracity}.
For time-series data problems, {\it Velocity} is also important - that is, we must sample fast enough to avoid classic errors such as aliasing. This latter point is often ignored in medical data and can lead to `ghost' signals that mislead the data analyst. \subsubsection{Data variety} It is critical that the data sets represent the population in an unbiased manner. It is tempting just to include all the data you can find. Of course, this leads to a hidden sampling bias, driven by the manner in which the data were collected, or the access to healthcare of the sampled population. In addition, it is important that multiple datasets are used to represent the variety of ways in which different systems or institutions collect medical data. Both of these issues are examined in more detail later in section \ref{sect:TestData}. Conversely, selecting data based on which patients have full rank (no missingness), or artificially enhancing the representation of certain subgroups, can lead to misleading results if the wrong metric is chosen. This issue is addressed more deeply in section \ref{sect:scoring_metrics}. \subsubsection{Data volume} Data volume (quantity) depends on the question you are asking, the variety of the data, and the technique applied to the data to solve a given problem. There is a meme in deep learning and big data that, as the data set size increases, the performance of conventional ML approaches will level off. Deep learning, on the other hand, is expected to improve performance as the dataset size increases. Assuming an infinite network topology and compute time, this may be true. (See solid lines in figure \ref{fig:MLmyth}.) However, just how big is big? In the 2017 Challenge, we posted over 10,000 ECGs as a public dataset for a relatively simple four-class problem. Yet, the winning entry was a standard ML approach based on hand-crafted, domain-expert-driven clinical features. Deep learning did not outperform this technique, and a well-publicized method by a Ph.D. student of a well-known Silicon Valley ML expert ranked behind some novice deep learners! This result demonstrates that ten thousand ECGs are not enough for deep learning to make an impact. It does seem that one million ECGs may be enough to provide significant improvements through deep learning \cite{Attia2019}. However, this well-publicized study did not strictly compare deep learning to standard ML, or even multivariate regression for that matter. While standard statistical methods are amenable to power calculations, it's hard to determine, {\it a priori}, just how big a dataset needs to be for an ML approach to reach a given level of performance. The general approach, therefore, is to attempt to assemble as much data from as many subjects as time and funding permit. However, this in itself leads to another limitation, that of data quality or veracity. \subsubsection{Data veracity - the emperor's new clothes} The quality of the data and the associated labels is perhaps the most important, yet understated, problem that ML faces in this domain. Traditional databases, such as those found on PhysioNet prior to 2005, were relatively small by today's standards but were meticulously hand-annotated. With the advent of large public datasets, it became impractical to have expert (or even non-expert) annotation of all the significant events in the database.
An example is the MIMIC II database, comprising over 30,000 patient stays, each lasting multiple days, and, for a significant subset, including hundreds of thousands of hours of electrocardiographic and other bedside monitor data \cite{Saeed2011}. With monitors triggering events every few minutes on almost every patient, and up to 95\% of them being false \cite{Aboukhalil2008}, it is impossible to determine the veracity of each event. Even when data are verified in real time by the clinical staff, this process can lead to significant errors \cite{Hug_CCM2011}. There has been much interest in the application of ML to medical databases, and EMRs in particular, in the hope that the complex associations of the data with specific events could be identified. However, this dream has been thwarted because there are vast limitations in these databases that inhibit their use. I often refer to EMRs as the `Emperor's new data', because they have been touted as a gold mine of finery, but during the weaving process the fabric was left out, and we were left naked (as far as a useful predictive tool is concerned). Data collected for routine clinical activities are recorded for human review or billing. EMRs were never designed for predictive analytics and ML. In the next section, I explore this fundamental issue, and in particular, the issue of errors in the labels. \begin{figure*} \centering \includegraphics[width=17cm]{img/DL_myth_medicine.png} \caption{The deep learning myth in medical applications. DL = deep learning. SVM = support vector machine. LR = logistic regression. Solid lines indicate how the algorithms should perform as dataset sizes increase. Dotted lines reflect the reality, because label quality drops as dataset size increases. Figure adapted from \cite{CliffordCCMtalk2020} under the Creative Commons Attribution License 3.0 (CCAL). }\label{fig:MLmyth} \end{figure*} \subsection{Label quality limitations and the big data challenge} Label errors come in many forms in medical databases. These include missing labels, false or phantom events, incorrect class labels (because humans do not over-read the labels), or temporal misalignments. (In EMRs, for example, the label can be provided hours after the event, increasing the likelihood of remembering the event incorrectly, and leading to a timestamp that is minutes or hours before/after the event, destroying causality in the data!) Noise can also creep in from differences in ontologies, protocols, and the end-usage of the data. For example, when using billing codes as targets, it is important to realize that the codes are sometimes optimized for reimbursement, not medical accuracy. Even in well-annotated databases, the over-read `gold standard' labels still contain significant noise and errors, since the target labels are usually provided by humans. Moreover, the quality of the labels is bound to drop with increasing dataset size, as the humans have to cut corners, annotators with lower expertise are used, or labels are generated by algorithms trained on the smaller data sets. Nevertheless, it is important that the data set used has high inter-observer agreement levels. To improve agreement levels, multiple independent experts are required to label the data. In the 2017 PhysioNet Challenge \cite{clifford2017af}, we found that for a four-class (arrhythmia classification) problem, even eight experts were insufficient to reach a consensus for a significant portion of the labels.
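To make the notion of inter-observer agreement concrete, the minimal Python sketch below computes Fleiss' kappa ($\kappa_F$, the same statistic used later in section \ref{sect:crowdsource}) from a table of annotation counts. The numbers are invented purely for illustration and are not Challenge data.
\begin{verbatim}
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) table of label
    counts, where counts[i, j] is the number of annotators who
    assigned item i to category j. Every row must sum to the same
    number of annotators n."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                # annotators per item
    # Per-item agreement: fraction of annotator pairs that agree.
    p_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()                       # mean observed agreement
    p_j = counts.sum(axis=0) / counts.sum()  # category prevalences
    p_e = np.sum(p_j**2)                     # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 5 recordings, 4 rhythm classes, 8 annotators each.
table = np.array([[8, 0, 0, 0],   # unanimous
                  [6, 2, 0, 0],
                  [4, 2, 1, 1],   # equivocal
                  [2, 2, 2, 2],   # maximal disagreement
                  [0, 7, 1, 0]])
print(f"kappa_F = {fleiss_kappa(table):.3f}")
\end{verbatim}
The per-item agreement values ($p_i$ in the sketch) are also useful on their own, since they flag the equivocal recordings that most need expert re-review.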
In reality, there is an optimal data set size that a finite group of humans can curate, and beyond that, the algorithms must be trained on increasingly lower quality data. In fact, the highly nonlinear nature of more complex algorithms means that simpler algorithms may even be preferred in such situations. (See the dotted lines in figure \ref{fig:MLmyth}.) Michael I. Jordan, a well-known and respected ML expert at Berkeley, has been famed for predicting the `Big Data Winter', or rather the failure of ML to live up to the hype. I believe that label quality will be one of the main factors that may lead to such a winter. When subsequently asked for his opinion on which ML algorithm he was most excited about, Jordan was reported to have said, in all seriousness I believe, `Logistic Regression'. We hear about `the unreasonable effectiveness of ML', but little is said about {\it the unreasonable effectiveness of logistic regression}, an algorithm with roots that stretch back to the early 19th century \cite{RePEc:tin:wpaper:20020119}. Notably, Claudia Perlich won several international competitions using this relatively simple approach. It is remarkable that this algorithm, which is essentially a simple neural network\footnote{with one hidden layer with a single hidden node, an identity activation function, and a single output node with the logistic sigmoid activation function}, or a discriminative form of na\"{i}ve Bayes \cite{NIPS2001_2020}, performs so well. One of the reasons for this is that logistic regression, particularly when combined with some form of penalization for overfitting, has far fewer free parameters than most classifiers, and is, therefore, less likely to overfit on noisy labels. Recent developments in ML aimed at dealing with noisy labels hold promise \cite{NIPS2013_5073}, but have yet to demonstrate anything significant in medical data. There is, however, another approach - that of {\it crowd sourcing}. The `wisdom of crowds' asserts that, given enough independent annotators, the average will tend to the right answer. For most challenges we use a crowd of experts and vote the labels together to create a more accurate label, and a confidence interval (quantified by the spread of values or categories for the annotation of the same event). There are multiple issues with this approach, though. First, aggregating individuals, even low-cost individuals sourced through websites such as Amazon's Mechanical Turk, is not that scalable. Each annotation costs around \$0.10. With typically 20-50 annotations required to collapse the uncertainty of a label to acceptable levels, large databases can require millions of dollars to label accurately. Secondly, it's not clear how much one should weight each annotator, but equal weighting is unlikely to be optimal. A third, related issue is that there may be multiple schools of thought about an annotation or label. If the distributions of annotations are highly skewed or multimodal, then the average might be completely wrong, and not reflective of any school of thought. Finally, it is very hard to determine if annotations are truly independent. In reality, annotators are rarely fully independent, with individuals having been educated in similar schools, or by the same expert. Experts also often confer, reducing the information gain provided by voting. In section \ref{sect:crowdsource}, I discuss how these issues can be addressed.
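The cost of this lost independence is easy to demonstrate in simulation. The hypothetical Python sketch below compares majority voting over fully independent annotators with voting over annotators who share a common `school of thought' (a shared error source); the accuracies and sample sizes are assumptions chosen purely for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_items, n_annot, acc = 10_000, 15, 0.7     # assumed values
truth = rng.integers(0, 2, n_items)         # binary ground truth

def vote(labels):
    # Majority vote across annotators (rows).
    return (labels.mean(axis=0) > 0.5).astype(int)

# Independent annotators: each errs on their own random items.
indep = np.array([np.where(rng.random(n_items) < acc, truth, 1 - truth)
                  for _ in range(n_annot)])

# Correlated annotators: half of each annotator's errors come from
# a shared "school of thought" that is wrong on the same items.
shared_err = rng.random(n_items) < (1 - acc)
corr = []
for _ in range(n_annot):
    own_err = rng.random(n_items) < (1 - acc)
    err = np.where(rng.random(n_items) < 0.5, shared_err, own_err)
    corr.append(np.where(err, 1 - truth, truth))
corr = np.array(corr)

print("single annotator accuracy:", np.mean(indep[0] == truth))
print("independent majority vote:", np.mean(vote(indep) == truth))
print("correlated majority vote :", np.mean(vote(corr) == truth))
\end{verbatim}
With independent annotators the vote accuracy climbs well above any individual's; with a shared error source it barely improves, because the shared mistakes cannot be voted away.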
\subsection{How much preprocessing and cherry-picking should we do?} In addition to label errors, we also expect noise and outliers in the source data. The question of how much data to filter out or change before releasing them to the public is one of the most fundamental questions for any challenge. Ideally, the data would be provided in as raw a representation as possible. However, this sets up a large barrier to entry, and reduces the chances of accelerating innovation in certain areas. To address this tension, we provide example source code - a baseline algorithm - which implements significant preprocessing and attempts to solve the problem, sometimes with state-of-the-art algorithms. In this way, users have the opportunity to significantly build on prior work. The downside to this is that it may drive challengers to produce similar code bases, reducing independence. When providing multiple databases, some level of normalization is needed to make sure that the differences in acquisition systems or their settings aren't learned as features. If one or more data sets have biases in the distribution of the classes of outcomes of interest, then it is possible that an algorithm, particularly a deep learning approach, would learn the nuances of the differences between the formats of the data, and associate them with the differences in class distributions. For example, one dataset may be drawn from one hospital that deals with more severe cases and has a higher proportion of a given race. A deep learning algorithm might learn that this particular race is always sicker, and create a high false positive rate of treatment for this class \cite{Obermeyer447}. On the other hand, it is sometimes important to preserve these differences, particularly in the unseen data, to encourage users to develop algorithms that are insensitive to these differences. Unfortunately, even when two or more public databases are provided, challengers often develop algorithms over all the training data, and do not exploit the differences. Worse, they may add a flag that represents from which database the data are drawn, and thus ensure a lack of generalizability. \subsection{Challenge length and phases} \label{sect:challengelength} The length of public challenges ranges from 24 hours (the usual `hackathon' format) to several years (as in the case of the X-prizes). As noted already, to solve a significant problem in healthcare, a thoughtfully labeled data set (or set of data sets) is required. It can take several years to assemble, format, label, and prepare such data for public dissemination. We often search for datasets years in advance. However, even after the first release of a well-curated dataset, there needs to be a significant {\it Beta Test Phase}, in which the public can comment on the data and the metric(s) proposed for the target. Community feedback on the design of the challenge is an essential ingredient, allowing our peers to provide input without being prevented from participating. We usually run the Beta Test Phase for two to three months, to allow us to stress-test the entire framework by which challengers submit code, and the behavior of any performance metrics. To incentivize challengers to participate in this stress-test of the challenge framework, a single successful entry during the Beta Test Phase is required to be eligible for the final prize. Moreover, challenge teams are allowed to submit up to five entries during this phase, giving them more experience and time to develop a better algorithm.
Although entries during this phase don't officially count towards a team's final score, they do provide insight, and recently we showed that submitting several early entries is associated with a higher final ranking. The challenge is closed for a week after the Beta Test Phase, and the organizers regroup to examine what rules, metrics, and data should change in response to feedback from the challengers. The Official Phase of the competition then runs for at least four, and sometimes five, months, providing challengers ample time to develop world-class algorithms. In general, ten official entries are allowed, which are run on a subset of the hidden test data, and the scores are pushed to a public leaderboard within a day or two of submission. In general, the subset is restricted to less than 30\% of the hidden test data to prevent sequential over-training on these data. Approximately one month after the Official Phase opens, the teams are required to submit an abstract describing their preliminary approach to a conference at which they will present their approach. The abstract is reviewed for quality, and low-quality ones are rejected. Since the publication of a scientific article is an important element of the challenge, this review step is essential. After the closing date of the challenge, the teams are given the option to nominate their preferred algorithm, out of the ones they have submitted, to be run on the full test data. (If no nomination is received, then the algorithm that produced the best score so far is selected.) This score is the only one that counts, since it includes the full test data. \subsection{Compute cost, time and memory limits} In any public challenge, there have to be limits on the amount of time any algorithm takes to complete the task. Setting the maximum compute time and memory capacity is challenging. One must weigh the overall computation cost against the downstream benefit, and consider the practicality of implementing the algorithm in real time on affordable hardware. There is also the psychology of the software development team to consider. Most teams wish to react to their code within 12-16 hours of submission, particularly if it failed to complete a run across all the data. Therefore, we set an upper limit of 24 hours of wall-clock time on a well-provisioned cloud system. Each year this changes, but it turns out to be equivalent to costing about \$0.10/hour, or much less than \$0.01 per patient/subject. \subsection{Scoring metrics} \label{sect:scoring_metrics} The metric for a given challenge is always one of the hardest issues to debate. The choice, of course, depends on the data, the problem, and the downstream practical application. We modify the challenge framework at least once, sometimes more, during the course of the event. This sometimes includes our metrics. We do this because we acknowledge that our choice may not be `optimal', and we feel the scientific community should have a vote in this (within reason). Our aim isn't to run an artificial competition where thousands of teams beat the data to death to achieve the highest score on a standard metric, then claim they are the best team in the world. Rather, our aim is to push forward the frontiers of knowledge and research on a specific research topic, encouraging researchers to question everything about the structure of the challenge, from the data to the scoring metric.
(Indeed, one of the most important outputs of the challenges is the novel scoring metrics\footnote{The other key products of the challenges are the exchanges between the researchers, the resultant codebases, and the subsequent scientific publications.}.) In the 2020 Challenge, focused on classifying arrhythmias from twelve-lead diagnostic electrocardiograms, we created yet another novel scoring metric. This metric treated any rhythm that would lead to the same (or very similar) treatment as almost the same. The result was a large confusion matrix that represented the cost of classifying any rhythm as any other rhythm. In the 2019 Challenge, we developed a new scoring metric for predicting future events that accounted for the repeated nature of the prediction. Metrics such as `area under the curve', recall, precision, and other conventional performance measures deal poorly with the relative rarity of the event being predicted, and with the fact that there isn't just a single point in time at which it is useful to make the prediction. That is, predicting sepsis four hours ahead of time is almost as useful as six hours ahead of time, and both predictions are much more useful than one made two hours ahead of the event. We have long debated whether we should include a metric for computational efficiency or cost in the competition, but this adds another time-varying dimension to the metric. Does it matter if you spend \$10k on cloud compute or do it on a recycled mobile phone? Probably, but what if the two costs become negligible? Perhaps we should consider the environmental impact of the source of power that is used to generate the entry, or the memory needed. What if you can do it on a cheaper chipset - should GPUs be valued differently from traditional multicore processors? Perhaps there should be an equity angle. For example, if more than 90\% of the Earth's population cannot afford or access the computing needed to do this, then should your entry be allowed? The answer probably is: it depends on how the code is going to be used, and on what computing and memory will cost in the future. A real-time vigilance detection system will need low energy consumption and a small memory footprint. An offline system for sleep staging for later review could use as much memory and compute power as the market will tolerate (e.g., compared with human labor, which is considerable in this case). \begin{figure*} \centering \includegraphics[width=12cm]{img/challenge_team_utility_sliding.pdf} \caption{\label{fig:PerformanceChallenge2019} Performance of the top 70 competitors in the 2019 PhysioNet Challenge \cite{PhysioNet2019} on three hospital databases for sequestered (hidden) test data. Color coding indicates overall score across all databases (red indicating higher performance). Only database C was completely hidden from the competitors. Lines join the same algorithm/team so that performance can be traced across datasets. Note that the scale for database C is different (right-hand side) and significantly lower. Note also that the relative ranking and absolute score of the top-performing algorithms on databases A and B (and overall) did not correlate with performance on database C, indicating that no high-scoring algorithm was able to fully generalize to a new hospital.
Figure based on \cite{PhysioNet2019}, courtesy of Matt Reyna, under the Creative Commons Attribution License 3.0 (CCAL).} \end{figure*} \begin{figure*} \centering \includegraphics[width=16.5cm]{img/LR_plot_Cerner.png} \caption{\label{fig:LR_plot_Cerner} Cross-validation performance of a logistic regression model trained on 25 hospitals (200,000 patient stays) to predict mortality. Average AUROC is indicated by the black horizontal broken lines. Color codes indicate the region in the US per the map, and the size of each diamond indicates the relative contribution to the model in terms of the number of patients. Note that the performance is biased towards the West Coast and Midwest. Figure adapted from \cite{Clifford2011} under the Creative Commons Attribution License 3.0 (CCAL).} \end{figure*} \subsection{Independent and hidden data - submitting code, not labels} \label{sect:TestData} For each challenge we aim to provide at least three datasets of the same type of data, drawn from three independent sources. This is important because it allows users to create algorithms that generalize across databases and can be tested on at least one database that is completely hidden from the user. This means that, in contrast to other competitions, we require the challengers to submit the code, not the labels. This prevents the challengers from training on the test data, or attempting to learn labels through repeated testing, unsupervised learning approaches, or even hand-labeling the data. The simplest way to cheat on the test data is to use its mean and variance to normalize the data for the algorithm. The test framework is constructed to prevent this: challengers cannot access more than one individual's data at a time, and gaming the test data in this manner is prohibited. Of course, repeated testing allows the challengers to learn {\em some} non-specific information from the test data. This is consistent with the fact that we have observed that, on repeated testing on hidden data, almost every entrant in the challenges has improved their score sequentially. For this reason, we only test on a small subset (typically 20\%), up to 10 times per entrant. When the challengers' best algorithms are run on the final data set, they observe an average drop in performance of about 5\%. \subsection{Generalization performance on multiple datasets} It cannot be over-emphasized just how important it is to have multiple databases, and for one or more of those to be hidden. It is unreasonable to expect that an algorithm trained on one database would generalize beyond the center at which the data were collected. Johnson {\it et al.} \cite{Johnson2018-DBLP:journals/corr/abs-1812-02275} illustrate how poorly an algorithm trained on the MIMIC II database performs on the Philips VISICU eICU data from multiple other centers. This effect is also seen in the 2019 PhysioNet Challenge, where groups that performed well on the two public databases tended to perform poorly on the hidden data from another medical center (Fig. \ref{fig:PerformanceChallenge2019}). This effect plays out in a non-random manner, based on the sampling bias. Figure \ref{fig:LR_plot_Cerner} illustrates how a logistic regression model for predicting mortality, trained on multiple hospitals using the Cerner ICU EMR, exhibits a wide range of performances, with the southeast of the US exhibiting particularly poor performance compared to hospitals in the western and mid-western areas of the US.
This is illustrative of how important diversity is, yet on its own, it is not enough to solve the issues of generalizability beyond the training data. \subsection{Publication and defense of solutions} Many competitions take an easy way out, posting the unlabelled test data and requiring only the submission of the estimated labels or predictions. Without the code that actually implements the algorithm that produced the test labels, and a test to ensure the code actually runs and performs on a general system, any submission is of little value. Moreover, if the authors are not willing to defend their approach in a public forum (Computing in Cardiology), then there is little incentive to describe the code and approach clearly. Preliminary scores are emailed to challenge teams about a week after the end of the challenge, and teams must then prepare a four-page preprint describing the scientific approach, to be posted on the conference website for public viewing. The teams must then attend a public forum in person and orally defend the work. Importantly, the final scores are not provided until after the challengers meet at the international conference to discuss and defend their approaches at a public forum. This discussion is essential to the challenge, creating new knowledge and enhancing the challengers' articles. After the prize ceremony at the conference, challengers receive their final scores and are required to update their articles to reflect these scores. A second peer review is then performed to ensure that each article is coherent and accurately reflects the final algorithm's performance. It's always surprising how often authors will provide misleading results, using inappropriate metrics other than the official ones in the challenge, quoting training scores, or quoting scores on a small subset of the test data. Ensuring that each team reports results that are directly comparable is key to the scientific integrity of the challenges. Although we allow teams to enter the challenge unofficially, and provide them with results, we always stress that such results should not be taken too seriously. Those teams that are unwilling or unable to fully describe their methods in peer-reviewed articles contribute little to the field, and may have scored well by pure chance. \subsection{Why go to a conference at all?} While most of this article was written before the current pandemic, the pivot away from in-person meetings, particularly scientific conferences, has made me think even more deeply about whether we should even meet in person to defend our ideas. I'm on the board of Computing in Cardiology, so I've been privileged to witness and participate in the torment of having to rapidly move from in-person to online, and back again, depending on the ebb and flow of the pandemic and the risk tolerances of everyone involved. While I'm not attending in person this year, I do feel in-person attendance at conferences is important. It's almost impossible to know whether you trust an individual's research until you have an in-person meeting with them. It allows you to go beyond the biases you have, and get to know them as a person. Perhaps this creates new biases, because you may resonate with their love of a given sport, or have children the same age. This is something we struggle with constantly, often without an awareness of the issue. But it's very hard to use a video link to sit with someone for three hours over dinner and dive deeply into their thought processes.
In the end, science is based on trust. We must trust that our peers are honest and want to push the field to new heights, with an open mind to new ideas that are not their own. Opening up code and data helps, but the trust is bidirectional. We also need the challengers to trust that we are not manipulating the test data or scoring functions. For these reasons, we continue to require challengers to defend their approaches publicly. \section{Other considerations for maximizing the utility of solutions} \subsection{Code quality and independence} \label{sect:Code_quality_and_independence} The utility of the software submitted to data science competitions depends on four main factors: \begin{itemize} \item Reusability: To ensure a codebase entered into a competition is fully reusable, it is important that it can be run on an entirely new system from the one on which it was developed. By containerizing the pipeline, we reduce the probability that the code will fail, but we do not remove it completely, since even small differences in random number generators can lead to differences in the training, or even in the forward predictions, of a complex model. \item Documentation: Without good documentation concerning how the code was constructed, and importantly why the code was designed the way it was (justifying choices of coefficients and hyperparameters), it is very hard to trust any codebase or its authors. \item Generalizability: By keeping a dataset completely hidden from public access, we are able to test how well any code generalizes to data from an unknown source. \item Independence: This is perhaps the most overlooked issue. In section \ref{sect:crowdsource2} we discuss the utility of combining large numbers of algorithms in a rigorous weighted framework to leverage the strengths of individual algorithms in different contexts. Although there has been much work on combining multiple labels on data from independent labelers (which could be humans or algorithms), almost all research has focused on combining independent annotators. When the annotators (or the algorithms that generate the labels) are not independent, there is no guarantee that an optimal combination can be found. Rather, incorrect but similarly behaving algorithms can reinforce each other and lead to a biased or incorrect aggregate label or prediction. \end{itemize} \subsection{Prize money - does size matter?} Data science competition prizes have ranged from millions of dollars to nothing (i.e., kudos/bragging rights). The question of the size of the purse is entirely open. Obviously, the larger the amount, the more teams are likely to enter, but this doesn't guarantee a linear increase in quality, or a linear increase in independence between codebases (see section \ref{sect:Code_quality_and_independence}). At PhysioNet, we never reveal the actual dollar amount of the prize. This is partially because the amount changes from year to year depending on the sponsor, but also because we feel that this is beside the point. The driving motivation to enter the Challenge should be the desire to solve the problem itself, although we understand that humans are often motivated by more than one factor. This issue is discussed in more detail, together with why someone might want to {\em run} a challenge, in section \ref{sect:incentives}. \subsection{Software and data licenses} Several authors have discussed the benefits of crowdsourced code from competitions \cite{Bender2016, Ledford2017, Guinney2018}.
For the PhysioNet Challenges, we encourage teams to use an open-source license, so that others may use the work for research or even translation/commercialization. It is notable that not all open-source licenses prevent commercialization by third parties. One such example is the Berkeley Software Distribution (BSD) License. Moreover, this type of license does not prevent the author from patenting the approach and licensing it out to a commercial partner. Of course, it does not stop an industrial competitor from stealing the idea, but then neither does publication in a journal or as a patent (in theory). Patents were never intended to create intellectual `property,' and bestow `ownership' of an idea to a particular entity. Rather, they were intended to provide a limited reward for contributing to the public domain the knowledge of how to reproduce the `invention.' The reward is limited to a decade or two without competition, giving the patent holder a commercialization head start. Of course, industry generally feels uncomfortable with this, and (almost) all patents in the computational field are as far from a scientific paper enabling reproducibility as possible, making general statements and sweeping claims, and omitting important details (such as the exact parameters used, the precise model architecture, the details of the preprocessing steps, and the exact values of the thresholds, coefficients, and weights in the model). However, it is not just patents that suffer from these ills. Many scientific papers do also, although not in such an extreme manner. No complex piece of software can be properly described in an eight- to twelve-page paper (the limit most journals require), and so important details are edited out. It is also true that an author will often assume some concepts are obvious or trivial (which they may be, to them at least), but to almost everyone else, they are not. It is for these particular reasons that we require all code to be submitted for testing on our servers (or rather, on our cloud providers' servers), rather than providing unlabelled test data and asking the challengers to submit labels. In this way, we know the code works in an environment beyond the developer's machine (and doesn't contain hardcoded paths, hidden files, libraries, etc.), and the public can inspect the code to work out {\it exactly} how the authors actually designed it. Nevertheless, it is important that industry participates in such challenges, so that the challenge cannot be accused of being an academic exercise, and so that the winners reflect the best approaches in the field. We hope that the challenge data serve as a benchmark for a given problem or field, and that industry uses it for commercial testing and reporting. It is worth noting that several public competitions include back doors that mostly benefit the designers of the competition. While one may argue that there should be some reward for the enormous effort it takes to design and run such a competition, the intentions of the organizers should be front and center. Notably, Kaggle hosts competitions for third parties, which offer data and prize money in exchange for ``a worldwide, perpetual, irrevocable and royalty-free license ... to use the winning entry'', assigned exclusively to the third party sponsor of the competition \cite{KAggleTermsofUse2019}.
While this can encourage third parties to provide large datasets and significant prize money, attracting thousands of competitors, it may also skew the results towards those hunting for large sums of money. (See the section on prizes for more discussion of this.) Some competitions (e.g., the Orange Data for Development cellular network competitions \cite{Blondel2012,DeMontjoye2014}) only make the data available for a limited period of time and require the user to request permission to perform research on the data every time they want to use the data after the competition. This barrier significantly devalues the utility of the public data, since there is an arbitrary bottleneck for future use. Open datasets, without restriction, can provide benchmarks for the field, and help spur innovation into the future. The original MIT-BIH Arrhythmia Database on PhysioNet has been around for almost thirty years, and although relatively small by modern standards, it is still used as a benchmark database in FDA filings and many publications. Perhaps one of the more open competitions (in terms of licensing and publications) is the {\em Dialogue for Reverse Engineering Assessments and Methods (DREAM)} Challenges, which are run in collaboration with Sage Bionetworks. The organizers state that the objective is to run `Community competitions on fundamental questions about systems biology and translational medicine, and advance computational methods.' When submitting model code, participants should provide it under an ``open-source license of their choice'' which must ``permit the DREAM Challenges and Sage Bionetworks to distribute the code to the public for noncommercial research and development use via Synapse''. Authors are allowed to retain copyright and, of course, to patent the idea before submission \cite{DREAMChallengesFAQs,DREAMChallengesAbout}. \subsection{Repeatable code - requiring the training code?} While there is an increasing tendency for journals to request open-source code (and, more commonly, open-access data), evaluating code for replication of results is a non-trivial task. Only the diligent reviewer will thoroughly vet the code. However, I have never seen a journal or competition demand that the authors submit the code used to train the model. Without this step, the research will never be truly replicable. For this reason, in the 2020 PhysioNet Challenge, we have required the users to submit the training code as well as the forward model. \subsection{Ethical computing} With the rise of big data and ML, we are confronted by two critical issues: 1) the environmental impact, and 2) the potential for bias to create algorithms that disproportionately disempower minorities and disadvantaged populations. The impact of the energy consumption required for large-scale machine learning has recently attracted attention, with estimates that data centers could account for 10\% of total electricity consumption by 2025 \cite{Andrae2015,Andrae2017}. It has often been quoted that training a single AI model can emit as much carbon as five cars over their total lifetimes \cite{DBLP:journals/corr/abs-1906-02243}! (Specifically, a Transformer consisting of 213 million parameters, trained with a neural architecture search.) Of course, promising developments in machine learning are reducing the computational complexity of ML algorithms, often with little impact on performance.
Frankle and Carbin recently showed that neural networks contain subnetworks that are as little as one-tenth the size of the overall system, yet are capable of being trained to make the same predictions without loss of performance, and sometimes can learn to do so even faster than the original, much larger network \cite{DBLP:journals/corr/abs-1803-03635}. We restrict both the training and (separately) the test computation to 24 hours on state-of-the-art equipment\footnote{In 2020 we ran the training code on Google Cloud using 8 vCPUs, 54 GB RAM, and an optional NVIDIA T4 Tensor Core GPU, and the trained model on Google Cloud using 2 vCPUs, 13 GB RAM, and an optional NVIDIA T4 Tensor Core GPU.}. We have debated whether future Challenges should restrict this compute allowance to lower-performance edge computing systems such as the Coral TPU running TensorFlow Lite. However, this may distract from the main task of solving the medical problem first. The task of converting a solution to a more efficient model usually follows. With increasingly larger carbon footprints, this paradigm may need to change, however. The concept of bias in ML is perhaps trickier, but nonetheless no less important. Multiple researchers have raised the issue of race and gender bias in AI \cite{ONeil2016,pmlr-v81-buolamwini18a,10.1145/3306618.3314244, Obermeyer447}. Several have even proposed frameworks to measure and deal with this issue \cite{DBLP:journals/corr/abs-1710-03184,DBLP:journals/corr/abs-1710-06876, NIPS2017_6988, DBLP:journals/corr/abs-1808-00089,Sahil2018,Bellamy2019}. These issues, and frameworks, should be increasingly addressed in public competitions, both in the scoring systems and in the literature exploring the results of each event. In section \ref{sect:crowdsource2}, I discuss ways to reduce bias from multiple biased algorithms. I also argue that, because we cannot escape our intrinsic biases, public challenges, which generate a spectrum of biases, may indeed be the best way to mitigate the effect of these biases. Future challenges will explicitly incorporate these issues into the scoring metrics whenever possible. For example, in the 2020 challenge, the race and gender of many of the subjects are known. A post-challenge assessment will identify how algorithms perform across race. In later challenges, this information could be used to explicitly address bias, and perhaps become part of the scoring metric, penalizing those that show unbalanced performance across race and gender. It is unclear how we might offer a trade-off between performance, bias, and efficiency. Therefore, it may make sense to run three challenge categories, constrained by having to be in the top 10 (say) of each category. \section{A commons of models?} As dataset sizes increase and cloud computing costs drop, we are increasingly running the challenges in the cloud, swapping the paradigm from downloading data to uploading code. This has the advantage that not only is the code used for running the forward (trained) model open and easy to inspect and use, but the framework used to train the model is also open for everyone to inspect. This removes the final barrier to code reuse. However, there is another important opportunity here. With the increasing use of transfer learning, we are starting to see important gains in using large public datasets to pre-train models. Similarly, we are increasingly observing PhysioNet users combining datasets to improve the effectiveness of complex ML (and DL in particular).
The rigorous modern paradigm for doing this is `transfer learning', where an ML model developed for one task is reused as the starting point for a model on a second task. Classic image networks such as VGGNet for face recognition have attracted much attention as starting points for thousands of new pieces of research \cite{simonyan2014very}. The challenges, and PhysioNet in general, provide a unique opportunity to create a repository of the equivalent DL networks for physiological signal processing and classification, in much the same way as ModelZoo.co and TFHub.dev have done for images and other related areas. This commons of models would lead to a significant acceleration of the field and would allow groups to leverage the private data of others, which the owners are not allowed to share (for various logistical, political, or other reasons). By posting a model trained on medical data from millions of in-house patients, another group could use such a model as a starting point and continue the training. If the new data set is small, then the new model could be specific to that new population, yet be less likely to overfit. \section{Crowd sourcing and aggregation of weak and strong labels} \label{sect:crowdsource} Any public competition should produce several key products, including a collection of publicly accessible data, labels/predictions, algorithms/models, and publications that result from the event, which may lead to inventions, products, and other resources. In its most general sense, this is crowdsourcing. More specifically, the labels or predictions provided by each annotator can be aggregated to produce a more robust label on the data or the resultant prediction. In the PhysioNet Challenges we leverage the inter-competitor agreement levels (measured by Fleiss' Kappa, $\kappa_F$) during the first (unofficial) 10 weeks of the competition. During the immediately following (week-long) hiatus we identify the patients or events for which the prediction was most problematic. The assumption is that the training and testing examples for which $\kappa_F$ is lowest are the hardest for most participants to classify, and are therefore the most likely to be poorly labelled. These poorly labelled examples can then be reviewed in detail, and relabelled by multiple experts, to raise the $\kappa_F$ towards that of the less equivocal examples. Of course, such an approach can be repeated over and over, depending on the time and resources available. As long as there is no collaboration on labels (i.e., groupthink, where the dominant authority in a group enforces their opinion) and the labels are independent, the labels should asymptotically approach some sort of `ground truth'. This may be one of the most important contributions of any competition - the relabelling of the database to boost the accuracy of the labels. Without high-confidence labels, there is little chance that an automated algorithm, whether it is the latest ML algorithm or just a simple threshold, will learn a set of weights (thresholds) that can label or predict accurately. \begin{figure*} \centering \includegraphics[width=8cm]{img/Sandy1.jpg} \includegraphics[width=8cm]{img/Sandy2.jpg} \includegraphics[width=8cm]{img/Sandy3.jpg} \includegraphics[width=8.2cm]{img/Sandy5.png} \caption{\label{fig:Sandy} Hurricane tracks for various models of Hurricane Sandy in 2012 as it passes over Jamaica and travels up the East Coast of the USA. The red line indicates the most probable path. The lower right image indicates the cone of uncertainty.
Figure adapted from \cite{NHC} under the Creative Commons Attribution License 3.0 (CCAL).} \end{figure*} \section{Combining algorithms} \label{sect:crowdsource2} There is one field in which combining algorithms to deal with uncertainty is common: tropical cyclone (typhoon, hurricane, tropical storm or tropical depression) track prediction. For example, the Florida State Super Ensemble involves 11 models, which are combined using a regression model, and produces forecasts better than each individual model or the mean of their predictions \cite{palmer2006predictability}. The ever-increasing frequency of destructive weather events has made us all far more familiar with these ensemble models, and with the fact that we tend to respond to landfall predictions when the models begin to converge. In other words, when models agree, our confidence in their ensemble prediction goes up, and we feel confident we need to act. Figure \ref{fig:Sandy} illustrates these storm tracks for Hurricane Sandy in 2012 as it progresses over Jamaica and up the Eastern seaboard of the USA, finally making landfall in New Jersey. The sudden pivot to the west in the last 24 hours before this final landfall was only well predicted after the hurricane had passed over Jamaica, because the complexities of modeling over landmasses (i.e., the Caribbean islands) made the uncertainties too high to make a call on where the final landfall might be. The confidence intervals in the predictions are portrayed as a cone of uncertainty. Inside the cone is the forecast line that represents the probable track of the center of the tropical cyclone, over time increments in a set of circles, up to five days ahead of the current observation. The size of the cone is based on the historical official forecast errors over five years. It is therefore reasonable to ask: why don't we apply this form of ensemble averaging (of strong predictors) to the medical domain? In fact, this is what we have been doing over the last few years. In previous challenges, we have shown that formally voting multiple algorithms together can be guaranteed to produce a higher-performing algorithm than any single algorithm, without knowing the actual performance of any given algorithm. In particular, in past challenges, we have shown that anywhere from 20 to 50 algorithms provided significant gains in applications ranging from ECG repolarization interval estimation (to identify adverse drug events), to arrhythmia alarm calls, to the prediction of sepsis \cite{Zhu2014,Zhu2015a,clifford2017af,PhysioNet2019}. The key to identifying which algorithms to use lies in determining the relative weights of each algorithm, based on cross-validation and regularization on the training data. However, many algorithms do not provide independent information, and so should not be weighted equally, even when they exhibit high performance. In earlier work we showed that a Bayesian voting framework can be used to combine any number of algorithms and produce an estimate superior to any of the contributing algorithms on new, unseen data \cite{Zhu2014,Zhu2015a}. The weights of each algorithm are found by regressing their performances on the training data, and on features from the source data. In this way, we discover the strengths of each algorithm, and the specific circumstances in which each one excels and in which each one underperforms. The combination is then highly likely to use the right algorithms in each particular context.
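To make the idea concrete without reproducing the full Bayesian machinery of \cite{Zhu2014,Zhu2015a}, the minimal Python sketch below implements the simplest linear analogue: aggregation weights are learned by regressing the experts' training predictions onto the known training targets, and the disagreement (spread) between experts is retained as a crude confidence proxy. All of the `expert' algorithms, biases, and noise levels here are simulated assumptions, purely for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Simulate 5 "expert" algorithms estimating a continuous target
# (e.g., a repolarization interval in ms); each expert has its own
# systematic bias and noise level. All values are assumptions.
n_train, n_test, n_exp = 500, 200, 5
y_train = rng.normal(400, 20, n_train)
y_test = rng.normal(400, 20, n_test)
bias = rng.normal(0, 5, n_exp)
noise = rng.uniform(2, 15, n_exp)

def experts(y):
    # Each column is one expert's prediction of the target y.
    return np.column_stack([y + b + rng.normal(0, s, len(y))
                            for b, s in zip(bias, noise)])

P_train, P_test = experts(y_train), experts(y_test)

# Learn aggregation weights (with an intercept) on the training set.
X = np.column_stack([np.ones(n_train), P_train])
w, *_ = np.linalg.lstsq(X, y_train, rcond=None)
combined = np.column_stack([np.ones(n_test), P_test]) @ w

# Disagreement between experts acts as a per-case confidence proxy.
spread = P_test.std(axis=1)

mae = lambda p: np.mean(np.abs(p - y_test))
print("best single expert MAE :",
      min(mae(P_test[:, k]) for k in range(n_exp)))
print("weighted ensemble MAE  :", mae(combined))
print("mean inter-expert spread:", spread.mean())
\end{verbatim}
On this toy problem, the regression downweights the noisiest and most biased experts, and the combined estimate typically beats the best individual expert; the per-case spread plays the role of the converging hurricane tracks, flagging when the ensemble should (or should not) be trusted.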
This, of course, can only happen if each algorithm performs in very different ways, and this requires a measure of independence between algorithms. Another useful byproduct of voting algorithms together is that the variance over all the contributing algorithms can provide an approximation of the confidence one might have in any prediction. Strangely, no medical monitor currently gives you any sense of how confident it is in its information. (It might have an intermediate `unsure' or `orange-light' warning, but that's not particularly useful for making decisions.) Feeding a continuously evolving, high-resolution confidence interval back to the clinical team may allow them to identify spurious one-off events to ignore, and to see when it is time to request new information or try something new. For example, if a confidence interval is continually increasing over time, it suggests the metric you are being fed is becoming less useful/relevant, and you should perhaps try to make an observation that would add value when integrated into the metric. The algorithm could even suggest which parameter is most likely to collapse the confidence intervals! As Michael Jordan noted, ``unless you are actually doing the full-scale engineering statistical analysis to provide some error bars and quantify the errors, it's gambling'' \cite{Gomes2014}. It should be noted that non-trivial algorithmic methods for voting have been around for decades, including expectation-maximization \cite{DawidandSkene1979}, products of experts \cite{Hinton00trainingproducts}, and Bayesian frameworks that account for both bias and variance \cite{Zhu2015a}. Although this is a large and evolving field, it has mostly focused on combining categorical/ordinal estimates, and on either humans {\em or} algorithms (as two separate research fields). There has been little work on combining independent humans and algorithms (in a non-collaborative manner). Since humans and algorithms are likely to demonstrate significant differences in their error distributions, it is important to consider the multimodal distributions of each type of voter, accounting for large differences in biases and variances, as we have done in Zhu {\it et al.} \cite{Zhu2014,Zhu2015a}. This can also be thought of as a way to address different schools of thought around best practices. The question to ask, then, is: should clinicians collaborate on a diagnosis, or should we use a formal framework to maintain independence? Mathematically speaking, we preserve more information by preventing collaboration. This seems counter-intuitive to clinical teams, where discussion can lead to higher confidence in the decision. However, the consensus may sometimes be driven by the biggest personality, the most articulate speaker, or the most senior person in attendance. \section{Consensus of experts or independence of thought?} It is worth noting that, for the first time in twenty years of running these challenges, we also introduced a hackathon at the end of the challenge, one day before the start of the conference at which the challengers met to describe and defend their approaches.
Although I criticized short hackathons earlier in this article (and have done so for many years\footnote{As noted in section \ref{sect:challengelength}, a research project of less than six months is unlikely to lead to a useful result, and will require a lot of shortcuts, resulting in the over-use of public code and reducing novelty, independence, and robustness.}), holding a hackathon at the end of an eight-month competition meant that all entrants had the benefit of exposure to the challenge for a significant period of time. Many challengers leave their best work until the last minute and then wish for a second chance. The hackathon provided just that chance. Moreover, it provided the opportunity for challengers to work on-site with the competition organizers and collaborate. This provided a direct test of whether collaborative teams could produce better entries than a voting strategy - an attempt to address the question of whether a consensus of experts is preferable to a weighted vote of independent algorithms. We found that a weighted voting system of the top 10-20 algorithms provided a boost in performance on all the hidden data, and in particular on the completely sequestered test set C, which proved problematic for almost all algorithms. Most importantly, the combined outputs were always better than any individual algorithm, including the best individual algorithm on any given data set. Perhaps even more importantly, the voting algorithm also beat all the teams from the hackathon. The take-home message from this, if it holds in future experiments, is that maintaining independence provides a practical boost in performance: no one algorithm was best, but a weighted vote of the algorithms provided a significant improvement over any single algorithm. I made this key point in a recent keynote talk given at the Society of Critical Care Medicine \cite{CliffordCCMtalk2020}. \section{Incentives to organize and participate in public competitions} \label{sect:incentives} Before I conclude this rather lengthy editorial, I wanted to address just {\em why} people will (and should) participate in public competitions or challenges. Participation in such events is rarely for prize money. In 20 years, we have never advertised the award amounts of our prizes, and we have only been asked once (via email) how much the purse might be. This is partially because we change the amounts from year to year, and partially because we want people to enter for the scientific challenge. We have found, through emails and informal conversations, that challengers enter: \begin{itemize} \item To gain early access to new data; \item To supplement their research (often forming part of a doctoral or master's thesis); \item To gain a deeper insight into a specific problem; \item To pit their talents against the world's best groups; \item To be part of a close-knit community, to network, and to have the chance to publish early; \item For the kudos of winning (often to enhance career prospects); and \item To be the first to publish (or patent) on a new problem. \end{itemize} It is harder to understand why someone might run a challenge. There is certainly some altruistic motivation, believing that data and code must exist in the public commons. There is also an element of feeling that we have benefited from public data and challenges in the past, and are therefore obliged to run these events.
You could also argue that the publications by the challenge organizers describing each challenge become instantly highly cited (gaining over 100 citations in the first year or two). However, each challenge takes several years to run, and it would take a decade to increase an H-index by just 10 points in this way. I would argue that this is a pretty poor return in terms of impact factor, and that your efforts might be better focused elsewhere. I personally have many more highly cited manuscripts that took less than six months to complete. Perhaps one of the most important incentives is the ability to define an important topic in the field, and to push forward a branch of research. From a personal perspective, I find the insights that can be derived by comparing 50-100 approaches to the same problem on the same data to be unique. You see themes that are hard to extract from the literature, because of the heterogeneity of the data, the metrics, and the quality of the articles. Often, novel and powerful features or approaches emerge from several groups, indicating that they are not just random luck or false discoveries, but are really new potential solutions or research directions. Inventions in science are often co-discovered almost simultaneously, because true leaps in innovation and discovery are a product of all the shoulders on which we stand. We see this in the way that Nobel prizes are often awarded to multiple independent researchers for the same idea. Novel breakthroughs rarely come from one genius or a single team, but grow in multiple areas, informing each other, and sparking ideas. By holding challenges we accelerate this process, converging teams on an annual basis. These teams then exchange ideas and spark new ones. The winning themes or discoveries bubble to the surface quickly. The gamification of science, if you like, can be a positive thing. \section{Should grant awardees be required to run public competitions, and should we use this to revamp peer review and the process of selecting new grant awardees?} Over the last two decades, the NIH, NSF, Wellcome Trust, and many other funding bodies, and a few journals, have enacted requirements for grantees/authors to disseminate their data. While this is a laudable idea, it is rarely enforced, and grantees usually only pay lip service to the idea. There are many reasons for this, ranging from not wanting to let your competition get ahead of you, to not having enough resources to disseminate data properly. The former problem can be obviated by placing a 2-4 year embargo on the data, allowing the researcher ample time to follow up on the research and gain the next tranche of funding. It is not hard to argue that after seven to ten years, it is time to share the data, and to give the rest of the academic community (and the taxpayers who funded the work) an opportunity to create something novel from the raw resources. The PhysioNet databases are an obvious example of how the return on investment for researchers who make data public can be significant in terms of reputation, citation rates, and community building. Of course, without explicit funding for the publication of the data (and the support to understand the data), this process will rarely happen in any meaningful sense. I estimate it takes around one to two full-time researchers an entire year to curate the average dataset, and to publish manuals and code that facilitate the use of the data.
For highly complex datasets, like MIMIC II \cite{Saeed2011}, it took closer to five full-time research engineer years (not including the time it took to assemble the data in the first place). Perhaps the solution is to mandate that a minimum level of effort be allocated to a grant, to be used in the year after the grant would normally end. Given that the return on investment of standard peer-reviewed grants has come into question in recent years \cite{Brian2011}, perhaps the requirements should be extended to include the requirement to run a challenge in this final year of funding for all recipients of large awards (say, over \$10 million, including renewals). Obviously, we would be overwhelmed by challenges if this extended beyond some key grant mechanisms, and it may indeed be best suited to a specific funding agency or foundation. The funding required to run a challenge is perhaps double that of disseminating the data as a static database, but arguably creates ten times the impact in just one or two years. Moreover, many large grants\footnote{such as the National Center for Advancing Translational Sciences Clinical and Translational Science Awards Program} already award small subgrants mid-stream to allow other researchers outside the original applicants to work on the data. Wouldn't it be a better use of money to take these funds and use them to disseminate the data more widely and support access to the data based on results rather than proposals? The logical end-point is then to award moderately large follow-on grants to the highest-scoring challenge teams (as well as to the original researchers, to support continued research on the data and to act as a central repository curating the new products of the data). Given the relatively weak correlation between peer-reviewed grant application scores and the productivity of the researchers \cite{a6c0983d44f445068e7ee47bf1f78550}, it seems that an alternative method of awarding grants should be piloted. Of course, I am not proposing we throw the standard grant peer-review system out completely, but rather that a small subset of data-rich grants should be targeted to pilot the idea of awarding grants based on challenge performance, rather than peer opinion. Perhaps we could even run a controlled study on matched researchers to evaluate the two systems? \section{Final thoughts} Beyond the impressive collection of data, the open-source algorithms on which others can build, and the peer-reviewed high-quality articles that result from a public competition, all of which accelerate the field, there is a more exciting opportunity that such a competition generates: the opportunity to perform research on the way in which independent teams solve problems. We have seen over and over that a voting system derived from scores of multiple experts (or algorithms) can outperform any single expert (or algorithm), and this points the way towards the future. It seems to suggest that the old paradigm of using a single algorithm to make predictions in medicine is doomed to underperform and make biased decisions. In a field where there is ever-increasing awareness of bias, this seems more important than ever. Moreover, a group of algorithms provides an estimate of the confidence in a prediction or classification, allowing a clinician to judge whether to exercise caution before reacting, perhaps to retest in the near future, or even to make new measurements to narrow the confidence intervals.
Such an approach will allow us to bootstrap the quality of data sets, which in turn will lead to an improvement in the performance of the algorithms. This may be our only hope for dealing with the noisy-label problem in enormous data sets. \section*{Acknowledgements} This work was funded by the Gordon and Betty Moore Foundation and by the National Institute of General Medical Sciences (NIGMS) and the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under NIH grant number 2R01GM104987-09. The content and all opinions are entirely the author's own, and do not represent the views of the Gordon and Betty Moore Foundation or the NIH. Of course, the thoughts in this article don't arise in a vacuum. Rather, they are the product of many years of conversations with mentors, mentees and colleagues. It's impossible to list them all, but it's important to call out Ary Goldberger, George Moody and Roger Mark, who had the vision and founded PhysioNet and these challenges over a decade before the ML field started running similar public events. George, in particular, was the driving force behind the detail in these challenges: a polymath with a mission (and the skills) to establish open-source/open-access science as a standard, long before the rest of the scientific community understood its importance (and acted upon it). You cannot overstate the work and diligence it takes to assemble and run these challenges. I was lucky enough to learn from the best! Thank you, Ary, George and Roger! Thanks also to the wonderful members of my research group (both current and past), who have put in so much hard work. Joachim Behar, Qiao Li, Erick Andres Perez, Matt Reyna and Salman Seyedi have consistently contributed over the last few years. Alistair Johnson and Matt Reyna did most of the hard work involved in generating figures 2 and 3, respectively. Matt Reyna read the first draft of this article and made many useful suggestions. He has also taken on the large burden of co-leading the challenges with me over the last couple of years. All the errors and views in this article are, however, my own. \bibliographystyle{unsrt}
\section{Introduction} The numerical analysis of elastic shells is a vast field with important applications in physics and engineering. In most cases, it is carried out via the finite element method. In the physics and computer graphics literature, there have been suggestions to use simpler methods based on discrete differential geometry \cite{meyer2003discrete,bobenko2008discrete}. Discrete differential geometry of surfaces is the study of triangulated polyhedral surfaces. (The epithet ``simpler'' has to be understood as ``easier to implement''.) We mention in passing that models based on triangulated polyhedral surfaces have applications in materials science beyond the elasticity of thin shells. For example, these models have recently been used to describe defects in nematic liquids on thin shells \cite{canevari2018defects}. This amounts to a generalization to arbitrary surfaces of the discrete-to-continuum analysis for the XY model in two dimensions that leads to Ginzburg-Landau type models in the continuum limit \cite{MR2505362,alicandro2014metastability}. \medskip Let us describe some of the methods mentioned above in more detail. Firstly, there are the so-called \emph{polyhedral membrane models} which in fact can be used for a whole array of physical and engineering problems (see e.g.~the review \cite{davini1998relaxed}). In the context of plates and shells, the so-called Seung-Nelson model \cite{PhysRevA.38.1005} is widely used. This associates membrane and bending energy to a piecewise affine map $y:\R^2\supset U\to \R^3$, where the pieces are determined by a triangulation $\mathcal T$ of the polyhedral domain $U$. The bending energy is given by \begin{equation} E^{\mathrm{SN}}(y)= \sum_{K,L} |n(K)-n(L)|^2\,,\label{eq:1} \end{equation} where the sum runs over those unordered pairs of triangles $K,L$ in $\mathcal T$ that share an edge, and $n(K)$ is the surface normal on the triangle $K$. In \cite{PhysRevA.38.1005}, it has been argued that for a fixed limit deformation $y$, the energy \eqref{eq:1} should approximate the Willmore energy \begin{equation} E^{\mathrm{W}}(y)=\int_{y(U)} |Dn|^2\; \mathrm{d}{\mathscr H}^2\label{eq:2} \end{equation} when the grid size of the triangulation $\mathcal T$ is sent to 0, and the argument of the discrete energy \eqref{eq:1} approximates the (smooth) map $y$. In \eqref{eq:2} above, $n$ denotes the surface normal and $\H^2$ the two-dimensional Hausdorff measure. These statements have been made more precise in \cite{schmidt2012universal}, where it has been shown that the result of the limiting process depends on the triangulations used. In particular, the following has been shown in this reference: For $j\in\N$, let $\mathcal T_j$ be a triangulation of $U$ consisting of equilateral triangles such that one of the sides of each triangle is parallel to the $x_1$-direction, and such that the triangle size tends to 0 as $j\to\infty$. Then the limit energy reads \[ \begin{split} E^{\mathrm{FS}}(y)=\frac{2}{\sqrt{3}}\int_U &\big(g_{11}(h_{11}^2+2h_{12}^2-2h_{11}h_{22}+3h_{22}^2)\\ &-8g_{12}h_{11}h_{12}+2 g_{22}(h_{11}^2+3h_{12}^2)\big)(\det g_{ij})^{-1}\; \mathrm{d} x\,, \end{split} \] where \[ \begin{split} g_{ij}&=\partial_i y\cdot\partial_j y\\ h_{ij}&=n\cdot \partial_{ij} y \,. \end{split} \] More precisely, if $y\in C^2(U)$ is given, then the sequence of maps $y_j$ obtained by piecewise affine interpolation of the values of $y$ on the vertices of the triangulations $\mathcal T_j$ satisfies \[ \lim_{j\to \infty}E^{\mathrm{SN}}(y_j)=E^{\mathrm{FS}}(y)\,.
\] Secondly, there is the more recent approach to using discrete differential geometry for shells pioneered by Grinspun et al.~\cite{grinspun2003discrete}. Their energy does not depend on an immersion $y$ as above, but is defined directly on triangulated surfaces. Given such a surface $\mathcal T$, the energy is given by \begin{equation} E^{\mathrm{GHDS}}(\mathcal T)=\sum_{K,L} \frac{l_{KL}}{d_{KL}} \alpha_{KL}^2\label{eq:3} \end{equation} where the sum runs over unordered pairs of neighboring triangles $K,L\in\mathcal T$, $l_{KL}$ is the length of the interface between $K,L$, $d_{KL}$ is the distance between the centers of the circumcircles of $K,L$, and $\alpha_{KL}$ is the difference of the angle between $K,L$ and $\pi$, or alternatively the angle between the like-oriented normals $n(K)$ and $n(L)$, i.e. the \emph{dihedral angle}. In \cite{bobenko2005conformal}, Bobenko has defined an energy for piecewise affine surfaces $\mathcal T$ that is invariant under conformal transformations. It is defined via the circumcircles of triangles in $\mathcal T$, and the external intersection angles of circumcircles of neighboring triangles. Denoting this intersection angle for neighboring triangles $K,L$ by $\beta_{KL}$, the energy reads \begin{equation}\label{eq:4} E^\mathrm{B} (\mathcal T) = \sum_{K,L}\beta_{KL}-\pi\, \#\,\text{Vertices}(\mathcal T)\,. \end{equation} Here $\text{Vertices}(\mathcal T)$ denotes the vertices of the triangulation $\mathcal T$; the sum is again over nearest neighbors. It has been shown in \cite{bobenko2008surfaces} that this energy is the same as \eqref{eq:3} up to terms that vanish as the size of triangles is sent to zero (assuming sufficient smoothness of the limiting surface). The reference \cite{bobenko2008surfaces} also contains an analysis of the energy in this limit. If the limit surface is smooth, and it is approximated by triangulated surfaces $\mathcal T_\varepsilon$ with maximal triangle size $\varepsilon$ that satisfy a number of technical assumptions, then the Willmore energy of the limit surface is smaller than or equal to the limit of the energies \eqref{eq:3} for the approximating surfaces, see Theorem 2.12 in \cite{bobenko2008surfaces}. The technical assumptions are \begin{itemize} \item each vertex in the triangulation $\mathcal T_\varepsilon$ is connected to six other vertices by edges, \item the lengths of the sides of the hexagon formed by six triangles that share one vertex differ by at most $O(\varepsilon^4)$, \item neighboring triangles are congruent up to $O(\varepsilon^3)$. \end{itemize} Furthermore, it is stated that the limit is achieved if additionally the triangulation approximates a ``curvature line net''. \medskip The purpose of the present paper is to generalize this convergence result, and to put it into the framework of $\Gamma$-convergence \cite{MR1968440,MR1201152}. Instead of fixing the vertices of the polyhedral surfaces to lie on the limiting surfaces, we are going to assume that the convergence is weakly * in $W^{1,\infty}$ as graphs. This approach allows us to completely drop the assumptions on the connectivity of vertices in the triangulations, and the assumptions of congruence -- we only need to require a certain type of regularity of the triangulations that prevents the formation of small angles.
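For concreteness, the following minimal Python sketch (ours, purely for illustration, and not taken from any of the cited references) shows one way to evaluate an energy of the type \eqref{eq:3} on a triangle mesh given by a vertex array and a list of vertex triples. It assumes the triangles are consistently oriented, so that the cross-product normals are like-oriented, and it computes circumcenters from the linear system that reappears in Lemma \ref{lma: circumcenter regularity} below.
\begin{verbatim}
import numpy as np

def circumcenter(a, b, c):
    # Center of the circumcircle of [a,b,c] in R^3, solving
    # (q-a).(b-a) = |b-a|^2/2, (q-a).(c-a) = |c-a|^2/2,
    # (q-a).((b-a)x(c-a)) = 0.
    u, v = b - a, c - a
    A = np.stack([u, v, np.cross(u, v)])
    return a + np.linalg.solve(A, np.array([u @ u, v @ v, 0.0]) / 2.0)

def unit_normal(a, b, c):
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def ghds_energy(V, F):
    # Sum over unordered pairs of edge-adjacent triangles K, L of
    # (l_KL / d_KL) * alpha_KL^2, with alpha_KL the dihedral angle.
    V = np.asarray(V, dtype=float)
    edge_to_triangles = {}
    for t, (i, j, k) in enumerate(F):
        for e in ((i, j), (j, k), (k, i)):
            edge_to_triangles.setdefault(frozenset(e), []).append(t)
    energy = 0.0
    for edge, adjacent in edge_to_triangles.items():
        if len(adjacent) != 2:
            continue  # boundary edge: no neighboring triangle
        i, j = tuple(edge)
        qs, ns = [], []
        for t in adjacent:
            a, b, c = (V[m] for m in F[t])
            qs.append(circumcenter(a, b, c))
            ns.append(unit_normal(a, b, c))
        l_KL = np.linalg.norm(V[i] - V[j])
        d_KL = np.linalg.norm(qs[0] - qs[1])  # vanishes for cospherical pairs
        alpha = np.arccos(np.clip(ns[0] @ ns[1], -1.0, 1.0))
        energy += (l_KL / d_KL) * alpha ** 2
    return energy
\end{verbatim}
Replacing \texttt{alpha**2} by \texttt{np.sum((ns[0]-ns[1])**2)} yields the closely related functional \eqref{eq:5} that we work with below.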
We are going to work with the energy \begin{equation}\label{eq:5} E(\mathcal T)=\sum_{K,L} \frac{l_{KL}}{d_{KL}} |n(K)-n(L)|^2\,, \end{equation} which in a certain sense is equivalent to \eqref{eq:3} and \eqref{eq:4} in the limit of vanishing triangle size, see the arguments from \cite{bobenko2008surfaces} and Remark \ref{rem:main} (ii) below. \medskip To put this approach into its context in the mathematical literature, we point out that it is another instance of a discrete-to-continuum limit, which has been a popular topic in mathematical analysis over the last few decades. We mention the seminal papers \cite{MR1933632,alicandro2004general} and the fact that a variety of physical settings have been approached in this vein, such as spin and lattice systems \cite{MR1900933,MR2505362}, bulk elasticity \cite{MR2796134,MR3180690}, thin films \cite{MR2429532,MR2434899}, magnetism \cite{MR2186037,MR2505364}, and many more. \medskip The topology that we are going to use in our $\Gamma$-convergence statement is much coarser than the one that corresponds to Bobenko's convergence result; however, it is not the ``natural'' one that would yield compactness from finiteness of the energy \eqref{eq:5} alone. For a discussion of why we do not choose the latter, see Remark \ref{rem:main} (i) below. Our topology is instead defined as follows: Let $M$ be some fixed compact oriented two-dimensional $C^\infty$ submanifold of $\R^3$ with normal $n_M:M\to S^2$. Let $h_j\in W^{1,\infty}(M)$, $j=1,2,\dots$, such that $\|h_j\|_{W^{1,\infty}}<C$ and $\|h_j\|_{\infty}<\delta(M)/2$ (where $\delta(M)$ is the \emph{radius of injectivity} of $M$, see Definition \ref{def:radius_injectivity} below) such that $\mathcal T_j:= \{x+h_j(x)n_M(x):x\in M\}$ are triangulated surfaces (see Definition \ref{def:triangular_surface} below). We say $\mathcal T_j\to \mathcal S:=\{x+h(x)n_M(x):x\in M\}$ if $h_j\to h$ in $W^{1,p}(M)$ for all $1\leq p<\infty$. Our main theorem, Theorem \ref{thm:main} below, is a $\Gamma$-convergence result in this topology. The regularity assumptions that we impose on the triangulated surfaces under consideration are ``$\zeta$-regularity'' and the ``Delaunay property''. The definition of these concepts can be found in Definition \ref{def:triangular_surface} below. \begin{thm} \label{thm:main} \begin{itemize} \item[(o)] Compactness: Let $\zeta>0$, and let $h_j$ be a bounded sequence in $W^{1,\infty}(M)$ such that $\mathcal T_j=\{x+h_j(x)n_M(x):x\in M\}$ is a $\zeta$-regular triangulated surface and $\|h_j\|_\infty\leq\delta(M)/2$ for $j\in \N$ with $\limsup_{j\to\infty}E(\mathcal T_j)<\infty$. Then there exists a subsequence $h_{j_k}$ and $h\in W^{2,2}(M)$ such that $h_{j_k}\to h $ in $W^{1,p}(M)$ for every $1\leq p < \infty$. \item[(i)] Lower bound: Let $\zeta>0$. Assume that for $j\in\N$, $h_j\in W^{1,\infty}(M)$ with $\|h_j\|_\infty\leq \delta(M)/2$, $\mathcal T_j:=\{x+h_j(x)n_M(x):x\in M\}$ is a $\zeta$-regular triangulated surface fulfilling the Delaunay property, and that $\mathcal T_j\to S=\{x+h(x)n_M(x):x\in M\}$ for $j\to\infty$. Then \[ \liminf_{j\to\infty} E(\mathcal T_j)\geq \int_{S} |Dn_S|^2\; \mathrm{d}\H^2\,. \] \item[(ii)] Upper bound: Let $h\in W^{1,\infty}(M)$ with $\|h\|_\infty\leq \delta(M)/2$ and $S=\{x+h(x)n_M(x):x\in M\}$.
Then there exists $\zeta>0$ and a sequence $(h_j)_{j\in\N}\subset W^{1,\infty}(M)$ such that $\mathcal T_j:=\{x+h_j(x)n_M(x):x\in M\}$ is a $\zeta$-regular triangulated surface satisfying the Delaunay property for each $j\in \N$, and we have $\mathcal T_j\to S$ for $j\to \infty$ and \[ \lim_{j\to\infty} E(\mathcal T_j)= \int_{S} |Dn_S|^2\; \mathrm{d}\H^2\,. \] \end{itemize} \end{thm} \begin{rem}\label{rem:main} \begin{itemize} \item[(i)] We are not able to derive a convergence result in a topology that yields convergence from boundedness of the energy \eqref{eq:5} alone. Such an approach would necessitate the interpretation of the surfaces as varifolds or currents. To the best of our knowledge, the theory of integral functionals on varifolds (see e.g.~\cite{menne2014weakly,hutchinson1986second,MR1412686}) is not developed to the point of allowing a treatment of this question. In particular, there does not exist a sufficiently general theory of lower semicontinuity of integral functionals for varifold-function pairs. \item[(ii)] We can state analogous results based on the energy functionals \eqref{eq:3}, \eqref{eq:4}. To do so, our proofs only need to be modified slightly: As soon as we have reduced the situation to the graph case (which we do by assumption), the upper bound construction can be carried out as here; the smallness of the involved dihedral angles assures that the arguments from \cite{bobenko2005conformal} suffice to carry through the proof. Concerning the lower bound, we also reduce to the case of small dihedral angles by a blow-up procedure around Lebesgue points of the derivative of the surface normal of the limit surface. (Additionally, one can show smallness of the contribution of a few pairs of triangles whose dihedral angle is not small.) Again, the considerations from \cite{bobenko2005conformal} allow for a translation of our proof to the case of the energy functionals \eqref{eq:3}, \eqref{eq:4}. \item[(iii)] As we will show in Section \ref{sec:necess-dela-prop}, we need to require the Delaunay property in order to obtain the lower bound statement. Without this requirement, a hollow cylinder can be approximated by triangulated surfaces with arbitrarily low energy, see Proposition~\ref{prop: optimal grid}. \item[(iv)] Much more general approximations of surfaces by discrete geometrical objects have recently been proposed in \cite{buet2017varifold,buet2018discretization,buet2019weak}, based on tools from the theory of varifolds. \end{itemize} \end{rem} \subsection*{Plan of the paper} In Section \ref{sec:defin-prel}, we will fix definitions and make some preliminary observations on triangulated surfaces. The proofs of the compactness and lower bound parts will be developed in parallel in Section \ref{sec:proof-comp-lower}. The upper bound construction is carried out in Section \ref{sec:surf-triang-upper}, and in Section \ref{sec:necess-dela-prop} we demonstrate that the requirement of the Delaunay property is necessary in order to obtain the lower bound statement. \section{Definitions and preliminaries} \label{sec:defin-prel} \subsection{Some general notation} \begin{notation} For a two-dimensional submanifold $M\subset\R^3$, the tangent space of $M$ in $x\in M$ is denoted by $T_{x}M$. For functions $f:M\to\R$, we denote their gradient by $\nabla f\in T_xM$; the norm $|\cdot|$ on $T_xM\subset\R^3$ is the Euclidean norm inherited from $\R^3$.
For $1\leq p\leq \infty$, we denote by $W^{1,p}(M)$ the space of functions $f\in L^p(M)$ such that $\nabla f\in L^p(M;\R^3)$, with norm \[ \|f\|_{W^{1,p}(M)}=\|f\|_{L^p(M)}+\|\nabla f\|_{L^p(M)}\,. \] For $U\subset\R^n$ and a function $f:U\to\R$, we denote the graph of $f$ by \[ \mathrm{Gr}\, f=\{(x,f(x)):x\in U\}\subset\R^{n+1}\,. \] For $x_1,\dots,x_m\in \R^k$, the convex hull of $\{x_1,\dots,x_m\}$ is denoted by \[ [x_1,\dots,x_m]=\left\{\sum_{i=1}^m \lambda_ix_i:\lambda_i\in [0,1] \text{ for } i=1,\dots,m, \, \sum_{i=1}^m\lambda_i=1\right\}\,. \] We will identify $\R^2$ with the subspace $\R^2\times\{0\}$ of $\R^3$. The $d$-dimensional Hausdorff measure is denoted by $\H^d$, the $k$-dimensional Lebesgue measure by $\L^k$. The symbol ``$C$'' will be used as follows: A statement such as ``$f\leq C(\alpha)g$'' is shorthand for ``there exists a constant $C>0$ that only depends on $\alpha$ such that $f\leq Cg$''. The value of $C$ may change within the same line. For $f\leq C g$, we also write $f\lesssim g$. \end{notation} \subsection{Triangulated surfaces: Definitions} \begin{defi} \label{def:triangular_surface} \begin{itemize} \item [(i)] A \textbf{triangle} is the convex hull $[x,y,z]\subset \R^3$ of three points $x,y,z \in \R^3$. A \textbf{regular} triangle is one where $x,y,z$ are not collinear, or equivalently ${\mathscr H}^2([x,y,z])>0$. \item[(ii)] A \textbf{triangulated surface} is a finite collection ${\mathcal T} = \{K_i\,:\,i = 1,\ldots, N\}$ of regular triangles $K_i = [x_i,y_i,z_i] \subset \R^3$ so that $\bigcup_{i=1}^N K_i \subset \R^3$ is a topological two-dimensional manifold with boundary, and the intersection of two different triangles $K,L\in {\mathcal T}$ is either empty, a common vertex, or a common edge. We identify ${\mathcal T}$ with its induced topological manifold $\bigcup_{i=1}^N K_i \subset \R^3$ whenever convenient. We say that ${\mathcal T}$ is \textbf{flat} if there exists an affine subplane of $\R^3$ that contains ${\mathcal T}$. \item[(iii)] The \textbf{size} of the triangulated surface, denoted $\size({\mathcal T})$, is the maximum diameter of all its triangles. \item[(iv)] The triangulated surface ${\mathcal T}$ is called $\zeta$\textbf{-regular}, with $\zeta > 0$, if the minimum angle in all triangles is at least $\zeta$ and $\min_{K\in {\mathcal T}} \diam(K) \geq \zeta \size({\mathcal T})$. \item[(v)] The triangulated surface satisfies the \textbf{Delaunay} property if for every triangle $K = [x,y,z] \in {\mathcal T}$ the following property holds: Let $B(q,r)\subset \R^3$ be the smallest ball such that $\{x,y,z\}\subset \partial{B(q,r)}$. Then $B(q,r)$ contains no vertex of any triangle in ${\mathcal T}$. The point $q = q(K)\in \R^3$ is called the \textbf{circumcenter} of $K$, $\overline{B(q,r)}$ its \textbf{circumball} with circumradius $r(K)$, and $\partial B(q,r)$ its \textbf{circumsphere}. \end{itemize} \end{defi} Note that triangulated surfaces have normals defined on all triangles and are compact and rectifiable. For the argument of the circumcenter map $q$, we do not distinguish between triples of points $(a,b,c)\in \R^{3\times 3}$ and the triangle $[a,b,c]$ (presuming $[a,b,c]$ is a regular triangle). \begin{notation} If ${\mathcal T}=\{K_i:i=1,\dots,N\}$ is a triangulated surface, and $g:{\mathcal T}\to \R$, then we identify $g$ with the function $\cup_{i=1}^N K_i\to \R$ that is constant on the (relative) interior of each triangle $K$, and equal to $0$ on $K\cap L$ for $K\neq L\in {\mathcal T}$.
In particular we may write in this case $g(x)=g(K)$ for $x\in \mathrm{int}\, K$. \end{notation} \begin{defi} Let ${\mathcal T}$ be a triangulated surface and $K,L \in {\mathcal T}$. We set \[ \begin{split} \l{K}{L} &:= \H^1(K\cap L)\\ d_{KL} &:= |q(K) - q(L)| \end{split} \] If $K,L$ are \textbf{adjacent}, i.e. if $\l{K}{L} > 0$, we may define $|n(K) - n(L)|\in \R$ as the norm of the difference of the normals $n(K),n(L)\in S^2$ which share an orientation, i.e. $2\sin \frac{\alpha_{KL}}{2}$, where $\alpha_{KL}$ is the dihedral angle between the triangles, see Figure \ref{fig:dihedral}. The discrete bending energy is then defined as \[ E({\mathcal T}) = \sum_{K,L\in {\mathcal T}} \frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2. \] Here, the sum runs over all unordered pairs of triangles. If $|n(K) - n(L)| = 0$ or $\l{K}{L} = 0$, the energy density is defined to be $0$ even if $d_{KL}=0$. If $|n(K) - n(L)| > 0$, $\l{K}{L} > 0$ and $d_{KL} = 0$, the energy is defined to be infinite. \end{defi} \begin{figure}[h] \begin{subfigure}{.45\textwidth} \begin{center} \includegraphics[height=5cm]{dihedral_triangles_v2.pdf} \end{center} \caption{ \label{fig:dihedral}} \end{subfigure} \hspace{5mm} \begin{subfigure}{.45\textwidth} \includegraphics[height=5cm]{d_KL_l_KL.pdf} \caption{\label{fig:dkllkl}} \end{subfigure} \caption{($\mathrm{A}$) The dihedral angle $\alpha_{KL}$ for triangles $K,L$. It is related to the norm of the difference between the normals via $|n(K)-n(L)|=2\sin\frac{\alpha_{KL}}{2}$. ($\mathrm{B}$) Definitions of $d_{KL}$, $l_{KL}$.} \end{figure} \begin{notation} \label{not:thetaKL} Let $H$ be an affine subplane of $\R^3$. For triangles $K,L\subset H$ that share an edge and $v\in\R^3$ parallel to $H$, we define the function $\mathds{1}^v_{KL}:H \to \{0,1\}$ as $\mathds{1}_{KL}^v(x) = 1$ if and only if $[x,x+v]\cap (K\cap L) \neq \emptyset$. If the intersection $K\cap L$ does not consist of a single edge, then $\mathds{1}_{KL}^v\equiv 0$. Furthermore, we let $\nu_{KL}\in \R^3$ denote the unit vector parallel to $H$ orthogonal to the shared edge of $K,L$ pointing from $K$ to $L$ and \[ \theta_{KL}^v=\frac{|\nu_{KL}\cdot v|}{|v|}\,. \] \end{notation} See Figure \ref{fig:parallelogram} for an illustration of Notation \ref{not:thetaKL}. \begin{figure} \includegraphics[height=5cm]{char_fun_1.pdf} \caption{Definition of $\theta_{KL}^v$: The parallelogram spanned by $v$ and the shared side $K\cap L$ has area $\theta^v_{KL}l_{KL}|v|$. This parallelogram translated by $-v$ is the support of $\mathds{1}_{KL}^v$. \label{fig:parallelogram}} \end{figure} \medskip We collect the notation that we have introduced for triangles and triangulated surfaces for the reader's convenience in abbreviated form: Assume that $K=[a,b,c]$ and $L=[b,c,d]$ are two regular triangles in $\R^3$. 
Then we have the following notation: \begin{equation*} \boxed{ \begin{split} q(K)&: \text{ center of the smallest circumball for $K$}\\ r(K)& :\text{ radius of the smallest circumball for $K$}\\ d_{KL}&=|q(K)-q(L)|\\ l_{KL}&:\text{ length of the shared edge of $K,L$}\\ n(K)&: \text{ unit vector normal to $K$ } \end{split} } \end{equation*} The following are defined if $K,L$ are contained in an affine subspace $H$ of $\R^3$, and $v$ is a vector parallel to $H$: \begin{equation*} \boxed{ \begin{split} \nu_{KL}&:\text{ unit vector parallel to $H$ orthogonal to}\\&\quad\text{ the shared edge of $K,L$ pointing from $K$ to $L$}\\ \theta_{KL}^v&=\frac{|\nu_{KL}\cdot v|}{|v|}\\ \mathds{1}_{KL}^v&: \text{ function defined on $H$, with value one if}\\ &\quad\text{ $[x,x+v]\cap (K\cap L)\neq \emptyset$, zero otherwise} \end{split} } \end{equation*} \subsection{Triangulated surfaces: Some preliminary observations} For two adjacent triangles $K,L\in {\mathcal T}$, we have $d_{KL} = 0$ if and only if the vertices of $K$ and $L$ have the same circumsphere. The following lemma states that for noncospherical configurations, $d_{KL}$ grows linearly with the distance between the circumsphere of $K$ and the opposite vertex in $L$. \begin{lma}\label{lma: circumcenter regularity} The circumcenter map $q:\R^{3\times 3} \to \R^3$ is $C^1$ and Lipschitz when restricted to $\zeta$-regular triangles. For two adjacent triangles $K = [x,y,z]$, $L = [x,y,p]$, we have that \[d_{KL} \geq \frac12 \big| |q(K)-p| -r(K) \big|\,. \] \end{lma} \begin{proof} The circumcenter $q = q(K)\in \R^3$ of the triangle $K = [x,y,z]$ is the solution to the linear system \begin{equation} \begin{cases} (q - x)\cdot (y-x) = \frac12 |y-x|^2\\ (q - x)\cdot (z-x) = \frac12 |z-x|^2\\ (q - x)\cdot ((z-x)\times (y-x)) = 0. \end{cases} \end{equation} Thus, the circumcenter map $(x,y,z)\mapsto q$ is $C^1$ when restricted to $\zeta$-regular $K$. To see that the map is Lipschitz on this set, it suffices to note that it is translation covariant and $1$-homogeneous under dilations in $(x,y,z)$, so that the derivative bound on the compact set of $\zeta$-regular triangles with unit diameter (and, say, barycenter at the origin) extends to all $\zeta$-regular triangles. For the second point, let $s=q(L)\in \R^3$ be the circumcenter of $L$. Then by the triangle inequality, we have \begin{equation} \begin{aligned} |p-q|\leq |p-s| + |s-q| = |x-s| + |s-q| \leq |x-q| + 2|s-q| = r + 2d_{KL},\\ |p-q| \geq |p-s| - |s-q| = |x-s| - |s-q| \geq |x-q| - 2 |s-q| = r - 2d_{KL}. \end{aligned} \end{equation} This completes the proof. \end{proof} \begin{lma} \label{lem:char_func} Let $\zeta>0$, and $a,b,c,d\in \R^2$ such that $K=[a,b,c]$ and $L=[b,c,d]$ are $\zeta$-regular. \begin{itemize}\item[(i)] For every $v\in\R^2$, we have that \begin{equation*} \int_{\R^2} \mathds{1}_{KL}^v(x)\d x = |v|l_{KL}\theta_{KL}^v\,. \end{equation*} \item[(ii)] Let $v,w\in\R^2$, $\bar v=(v,v\cdot w)\in \R^3$, $\bar a=(a,a\cdot w)\in \R^3$ and $\bar b, \bar c,\bar d\in \R^3$ defined analogously. Let $\bar K=[\bar a,\bar b,\bar c]$, $\bar L=[\bar b,\bar c,\bar d]$. Then \[ \int_{\R^2} \mathds{1}_{KL}^v(x)\, \d x = \frac{|\bar v|}{\sqrt{1+|w|^2}} \theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}\,. \] \end{itemize} \end{lma} \begin{proof} The equation (i) follows from the fact that $\mathds{1}_{KL}^v$ is the characteristic function of a parallelogram, see Figure \ref{fig:parallelogram}. To prove (ii) it suffices to observe that $\int_{\R^2} \mathds{1}_{KL}^v(x)\sqrt{1+|w|^2}\d x$ is the volume of the parallelogram from (i) pushed forward by the map $\tilde h(x)= (x,x\cdot w)$, see Figure \ref{fig:char_fun_2}.
\end{proof} \begin{figure}[h] \includegraphics[height=5cm]{char_fun_2.pdf} \caption{The parallelogram pushed forward by an affine map $x\mapsto (x,x\cdot w)$. \label{fig:char_fun_2}} \end{figure} \subsection{Graphs over manifolds} \begin{assump} \label{ass:Mprop} We assume $M\subset\R^3$ is an oriented compact two-dimensional $C^\infty$-submanifold of $\R^3$. \end{assump} This manifold will be fixed in the following. We denote the normal of $M$ by $n_M:M\to S^2$, and the second fundamental form at $x_0\in M$ is denoted by $S_M(x_0):T_{x_0}M\to T_{x_0}M$. \medskip \begin{defi} \label{def:radius_injectivity} The \emph{radius of injectivity} $\delta(M)>0$ of $M$ is the largest number such that the map $\phi:M\times (-\delta(M),\delta(M))\to \R^3$, $(x,h) \mapsto x + h n_M(x)$ is injective and the operator norm of $\delta(M)S_M(x)\in\mathcal{L}(T_xM)$ is at most $1$ at every $x\in M$. \end{defi} We define a graph over $M$ as follows: \begin{defi} \label{def:Mgraph} \begin{itemize} \item[(i)] A set $M_h = \{x+ h(x)n_M(x)\,:\,x\in M\}$ is called a \emph{graph} over $M$ whenever $h:M\to \R$ is a continuous function with $\|h\|_\infty \leq \delta(M)/2$. \item[(ii)] The graph $M_h$ is called a ($Z$-)Lipschitz graph (for $Z > 0$) whenever $h$ is ($Z$-)Lipschitz, and a smooth graph whenever $h$ is smooth. \item[(iii)] A set $N\subset B(M,\delta(M)/2)$ is said to be locally a tangent Lipschitz graph over $M$ if for every $x_0\in M$ there exist $r>0$ and a Lipschitz function $h:(x_0 +T_{x_0}M)\cap B(x_0,r)\to \R$ such that the intersection of $N$ with the cylinder $C(x_0,r,\frac{\delta(M)}{2})$ over $(x_0 +T_{x_0}M)\cap B(x_0,r)$ with height $\delta(M)/2$ in both directions of $n_M(x_0)$, where \[ C(x_0,r,s) := \left\{x + tn_M(x_0)\,:\,x\in (x_0 + T_{x_0}M) \cap B(x_0,r), t\in [-s,s] \right\}, \] is equal to the graph of $h$ over $(x_0+T_{x_0}M)\cap B(x_0,r)$, \[ N \cap C\left(x_0,r,\frac{\delta(M)}{2}\right) =\{x+h(x)n_M(x_0):x\in (x_0+T_{x_0}M)\cap B(x_0,r)\}\,. \] \end{itemize} \end{defi} \begin{lma}\label{lma: graph property} Let $N\subset B(M,\delta(M)/2)$ be locally a tangent Lipschitz graph over $M$. Then $N$ is a Lipschitz graph over $M$. \end{lma} \begin{proof} By Definition \ref{def:Mgraph} (iii), we have that for every $x\in M$, there exists exactly one element \[ x'\in N\cap \left( x+n_M(x)[-\delta(M),\delta(M)]\right)\,. \] We write $h(x):=(x'-x)\cdot n_M(x)$, which obviously implies $N=M_h$. For every $x_0\in M$ there exists a neighborhood of $x_0$ such that $h$ is Lipschitz continuous in this neighborhood by the locally tangent Lipschitz property and the regularity of $M$. The global Lipschitz property for $h$ follows from the local one by a standard covering argument. \end{proof} \begin{lma} \label{lem:graph_rep} Let $h_j\in W^{1,\infty}(M)$ with $\|h_j\|_{\infty}\leq\delta(M)/2$ and $h_j\weakstar h\in W^{1,\infty}(M)$ for $j\to \infty$. Then for every point $x\in M$, there exists a neighborhood $V\subset x+T_xM$, a Euclidean motion $R$ with $U:=R(V)\subset \R^2$, functions $\tilde h_j:U\to\R$ and $\tilde h:U\to \R$ such that $\tilde h_j\weakstar \tilde h$ in $W^{1,\infty}(U)$ and \[ \begin{split} R^{-1}\mathrm{Gr}\, \tilde h_j&\subset M_{h_j} \\ R^{-1}\mathrm{Gr}\, \tilde h&\subset M_{h} \,. \end{split} \] \end{lma} \begin{proof} This follows immediately from Assumption \ref{ass:Mprop} (in particular, $M$ is $C^\infty$) and the boundedness of $\|\nabla h_j\|_{L^\infty}$.
\end{proof} \section{Proof of compactness and lower bound} \label{sec:proof-comp-lower} \begin{notation} \label{not:push_gen} If $U\subset\R^2$, ${\mathcal T}$ is a flat triangulated surface ${\mathcal T}\subset U$, $h:U\to\R$ is Lipschitz, and $K=[a,b,c]\in{\mathcal T}$, then we write \[ h_*K=[(a,h(a)),(b,h(b)),(c,h(c))]\,. \] We denote by $h_*{\mathcal T}$ the triangulated surface defined by \[ K\in{\mathcal T}\quad\Leftrightarrow \quad h_*K\in h_*{\mathcal T}\,. \] \end{notation} For an illustration of Notation \ref{not:push_gen}, see Figure \ref{fig:push_gen}. \begin{figure}[h] \includegraphics[height=5cm]{pushforward_general.pdf} \caption{Definition of the push forward of a triangulation $\mathcal T\subset \R^2$ by a map $h:\R^2 \to \R$. \label{fig:push_gen}} \end{figure} \begin{lma} \label{lem:CS_trick} Let $U\subset\R^2$, let ${\mathcal T}$ be a flat triangulated surface with $U\subset {\mathcal T}\subset\R^2$, let $h$ be a Lipschitz function ${\mathcal T}\to \R$ that is affine on each triangle of ${\mathcal T}$, ${\mathcal T}^*=h_*{\mathcal T}$, let $g$ be a function that is constant on each triangle of ${\mathcal T}$, $v\in \R^2$, $U^v=\{x\in\R^2:[x,x+v]\subset U\}$, and $W\subset U^v$. \begin{itemize}\item[(i)] Then \[ \begin{split} \int_{W}& |g(x+v)-g(x)|^2\d x\\ &\leq | v| \left(\sum_{K,L\in{\mathcal T}} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g(K)-g(L)|^2\right) \max_{x\in W} \sum_{K,L\in{\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta_{KL}^vl_{KL}d_{K^*L^*}}{l_{K^*L^*}} \,, \end{split} \] where we have written $K^*=h_*K$, $L^*=h_*L$ for $K,L\in {\mathcal T}$. \item[(ii)] Let $w\in\R^2$, and denote by $\bar K$, $\bar L$ the triangles $K,L$ pushed forward by the map $x\mapsto (x,x\cdot w)$. Then \[ \begin{split} \int_{W}& |g(x+v)-g(x)|^2\d x\\ &\leq \frac{|\bar v|}{\sqrt{1+|w|^2}} \left(\sum_{K,L\in{\mathcal T}} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g(K)-g(L)|^2\right) \max_{x\in W} \sum_{K,L\in{\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}\,. \end{split} \] \end{itemize} \end{lma} \begin{proof} By the Cauchy-Schwarz inequality, for $x\in W$, we have that \[ \begin{split} | g(x+v)- g(x)|^2&\leq \left(\sum_{K,L\in {\mathcal T}} \mathds{1}_{K L}^v(x)| g(K)- g(L)|\right)^2\\ &\leq \left(\sum_{K,L\in {\mathcal T}} \frac{l_{K^*L^*}}{\theta_{KL}^vl_{KL}d_{K^*L^*}}\mathds{1}_{K L}^v(x)| g(K)- g(L)|^2\right)\\ &\qquad \times\left(\sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x)\frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}\right)\,. \end{split} \] Using these estimates and Lemma \ref{lem:char_func} (i), we obtain \begin{equation} \begin{aligned} &\int_{W} | g(x+v) - g(x)|^2\,\d x\\ \leq & \int_{W} \left( \sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x)\frac{l_{K^*L^*}}{\theta^v_{KL}l_{KL}d_{K^*L^*}} | g(K) - g(L)|^2 \right)\\ &\quad \times \left( \sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}} \right) \,\d x\\ \leq & |v|\left( \sum_{K,L\in {\mathcal T}} \frac{l_{K^*L^*}}{d_{K^*L^*}} | g(K) - g(L)|^2 \right) \max_{x\in W} \sum_{K,L\in {\mathcal T}} \mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}}\,. \end{aligned} \end{equation} This proves (i). The claim (ii) is proved analogously, using $ \frac{\theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}$ instead of $ \frac{\theta_{ K L}^{v}l_{ K L}d_{K^*L^*}}{l_{K^*L^*}}$ in the Cauchy-Schwarz inequality, and then Lemma \ref{lem:char_func} (ii).
\end{proof} In the following proposition, we will consider sequences of flat triangulated surfaces ${\mathcal T}_j$ with $U\subset{\mathcal T}_j\subset\R^2$ and sequences of Lipschitz functions $h_j:U\to \R$. We write ${\mathcal T}_j^*=(h_j)_*{\mathcal T}_j$, and for $K\in {\mathcal T}_j$, we write \[ K^*=(h_j)_*K\,. \] \begin{prop}\label{prop:lower_blowup} Let $U,U'\subset\R^2$ be open, $\zeta>0$, $({\mathcal T}_j)_{j\in \N}$ a sequence of flat $\zeta$-regular triangulated surfaces with $U\subset{\mathcal T}_j\subset U'$ and $\mathrm{size} ({\mathcal T}_j) \to 0$. Let $(h_j)_{j\in\N}$ be a sequence of Lipschitz functions $U'\to \R$ with uniformly bounded gradients such that $h_j$ is affine on each triangle of ${\mathcal T}_j$ and the triangulated surfaces ${\mathcal T}_j^*=(h_j)_*{\mathcal T}_j$ satisfy the Delaunay property. \begin{itemize} \item[(i)] Assume that \[ \begin{split} h_j&\weakstar h\quad \text{ in } W^{1,\infty}(U')\,, \end{split} \] and $\liminf_{j\to \infty} \sum_{K,L\in{\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |n(K^*) - n(L^*)|^2<\infty$. Then $h\in W^{2,2}(U)$. \item[(ii)] Let $U=Q=(0,1)^2$, and let $(g_j)_{j\in\N}$ be a sequence of functions $U'\to\R$ such that $g_j$ is constant on each triangle in ${\mathcal T}_j$. Assume that \[ \begin{split} h_j&\to h\quad \text{ in } W^{1,2}(U')\,,\\ g_j&\to g \quad\text{ in } L^2(U')\,, \end{split} \] where $h(x)=w\cdot x$ and $g(x)=u\cdot x$ for some $u,w\in \R^2$. Then we have \[ u^T\left(\mathds{1}_{2\times 2}+w\otimes w\right)^{-1}u \sqrt{1+|w|^2}\leq \liminf_{j\to \infty} \sum_{K,L\in{\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g_j(K) - g_j(L)|^2\,. \] \end{itemize} \end{prop} \begin{proof}[Proof of (i)] We write \[ E_j:= \sum_{K,L\in {\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |n(K^*) - n(L^*)|^2 \,. \] Fix $v\in B(0,1)\subset\R^2$, write $U^v=\{x\in\R^2:[x,x+v]\subset U\}$, and fix $k\in \{1,2,3\}$. Define the function $N_j^k:U\to \R$ by requiring $N_j^k(x)=n(K^*)\cdot e_k$ for $x\in K\in{\mathcal T}_j$. By Lemma \ref{lem:CS_trick} with $g=N_j^k$, we have that \begin{equation} \label{eq:11} \int_{U^v} |N_{j}^k(x+v) - N_j^k(x)|^2\,\d x \leq |v| \left(\max_{x\in U^v} \sum_{K,L\in{\mathcal T}_j}\mathds{1}^v_{KL}(x) \frac{\theta^v_{KL}l_{KL}d_{K^*L^*}}{l_{K^*L^*}} \right) E_j\,. \end{equation} Since $h_j$ is uniformly Lipschitz, there exists a constant $C>0$ such that \[ \frac{l_{KL}}{l_{K^*L^*}} d_{K^*L^*}<C d_{KL}\,. \] We claim that \begin{equation}\label{eq:15} \begin{split} \max_{x\in U^v} \sum_{K,L\in {\mathcal T}_j} \mathds{1}_{KL}^v(x) \theta_{KL}^vd_{KL} &\lesssim |v|+C\size({\mathcal T}_{j})\,. \end{split} \end{equation} Indeed, let $K_0,\ldots,K_N\in {\mathcal T}_{j}$ be the sequence of triangles so that there is $i:[0,1]\to \{0,\ldots,N\}$ non-decreasing with $x+tv\in K_{i(t)}$. We have that for all pairs $K_i,K_{i+1}\in {\mathcal T}_{j}$, \begin{equation} \label{eq:12} \theta^v_{K_iK_{i+1}} d_{K_iK_{i+1}} = \left|(q(K_{i+1})-q(K_i)) \cdot \frac{v}{|v|}\right| \,. \end{equation} Each term on the right-hand side of \eqref{eq:12} is bounded by $C(\zeta)\size({\mathcal T}_{j})$, and by $\zeta$-regularity the segment $[x,x+v]$ meets at most $C(\zeta)\left(|v|/\size({\mathcal T}_{j})+1\right)$ triangles, which yields \eqref{eq:15}. Inserting this in \eqref{eq:11} yields \begin{equation} \int_{U^v} |N_{j}^k(x+v) - N_j^k(x)|^2\,\d x \leq C|v|(|v|+C\size({\mathcal T}_{j})) E_j\,. \end{equation} By passing to the limit $j\to\infty$ and standard difference quotient arguments, it then follows that the limit $N^k=\lim_{j\to\infty} N_j^k$ is in $W^{1,2}(U)$. Since $h$ is also in $W^{1,\infty}(U)$ and $(N^k)_{k=1,2,3}=(\nabla h,-1)/\sqrt{1+|\nabla h|^2}$ is a unit normal to the graph of $h$, it follows that $h\in W^{2,2}(U)$.
\end{proof} \bigskip \begin{proof}[Proof of (ii)] We write \[ E_j:= \sum_{K,L\in {\mathcal T}_j} \frac{l_{K^*L^*}}{d_{K^*L^*}} |g_j(K) - g_j(L)|^2 \] and may assume without loss of generality that $\liminf_{j\to \infty} E_j<\infty$. Fix $\delta > 0$. Define the set of bad triangles as \[ {\mathcal B}_j^\delta := \{K \in {\mathcal T}_{j}\,:\,\left|\nabla h_j(K)- w\right| > \delta\}. \] Fix $v\in B(0,1)$, and write $Q^v=\{x\in \R^2:[x,x+v]\subset Q\}$. Define the set of good points as \[ A_j^{\delta,v} := \left\{x\in Q^v: \#\{K\in {\mathcal B}_j^\delta\,: \,K \cap [x,x+v] \neq \emptyset\} \leq \frac{\delta|v|}{\size({\mathcal T}_{j})}\right\}. \] We claim that \begin{equation}\label{eq:17} \L^2(Q^v \setminus A_j^{\delta,v}) \to 0\qquad\text{ for } j\to\infty\,. \end{equation} Indeed, let $v^\bot=(-v_2,v_1)$, and let $P_{v^\bot}:\R^2\to v^\bot \R$ denote the projection onto the linear subspace parallel to $v^\bot$. Now by the definition of $A_j^{\delta,v}$, we may estimate \[ \begin{split} \int_{Q^v}|\nabla h_j-w|^2\d x \gtrsim & \# \mathcal B_j^{\delta} \left(\size {\mathcal T}_j \right)^2 \delta^2\\ \gtrsim & \frac{\L^2(Q^v\setminus A_j^{\delta,v})}{|v|\size{\mathcal T}_j}\frac{\delta|v|}{\size {\mathcal T}_j} \left(\size {\mathcal T}_j \right)^2 \delta^2\\ \gtrsim &\L^2(Q^v \setminus A_j^{\delta,v})\delta^3\,, \end{split} \] and hence \eqref{eq:17} follows by $h_j\to h$ in $W^{1,2}(Q)$. For the push-forward of $v$ under the affine map $x\mapsto (x,h(x))$, we write \[ \bar v= (v,v\cdot w)\in\R^3\,. \] Also, for $K=[a,b,c]\in {\mathcal T}_j$, we write \[ \bar K=[(a,a\cdot w),(b,b\cdot w),(c,c\cdot w)]=h_*K\,. \] By Lemma \ref{lem:CS_trick}, we have that \begin{equation} \label{eq: difference quotient estimate} \begin{split} \int_{A_j^{\delta, v}} &| g_{j}(x+v) - g_j(x)|^2\d x \\ & \leq \frac{|\bar v|}{\sqrt{1+|w|^2}} \left(\max_{x\in A_j^{\delta, v}} \sum_{K,L\in {\mathcal T}_j} \mathds{1}^v_{KL}(x) \frac{\theta_{\bar K\bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}}\right) E_j\,. \end{split} \end{equation} We claim that \begin{equation} \max_{x\in A_j^{\delta, v}} \sum_{K,L\in {\mathcal T}_j} \mathds{1}^v_{KL}(x) \frac{\theta_{\bar K \bar L}^{\bar v}l_{\bar K\bar L}d_{K^*L^*}}{l_{K^*L^*}} \leq (1+C\delta)\left(|\bar v|+C\size({\mathcal T}_j)\right)\,.\label{eq:16} \end{equation} Indeed, let $K_0,\ldots,K_N\in {\mathcal T}_{j}$ be the sequence of triangles so that there is $i:[0,1]\to \{0,\ldots,N\}$ non-decreasing with $x+tv\in K_{i(t)}$. For all pairs $K_i,K_{i+1}\in {\mathcal T}_{j} $ we have \begin{equation} \theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}d_{\bar K_i\bar K_{i+1}} = (q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|} \,. \end{equation} Also, we have that for $K_i,K_{i+1}\in {\mathcal T}_{j} \setminus {\mathcal B}_j^\delta$, \begin{equation*} \begin{split} \frac{l_{K_i^*K_{i+1}^*}d_{\bar K_i\bar K_{i+1}}}{l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}&\leq 1+C\delta\,.
\end{split} \end{equation*} Hence \begin{equation}\label{eq: good triangles} \begin{split} \sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_j^\delta = \emptyset}& \frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}\\ & \leq (1+C\delta)\sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_j^\delta = \emptyset} \left(\left(q(\bar K_{i+1})-q(\bar K_i)\right)\cdot \frac{\bar v}{|\bar v|}\right)\,. \end{split} \end{equation} If one of the triangles $K_i,K_{i+1}$ is in ${\mathcal B}_j^\delta$, then we may estimate \[ \left|\left(q(\bar K_{i+1})-q(\bar K_i)\right) \cdot \frac{\bar v}{|\bar v|}\right|\leq C\size{\mathcal T}_j\,. \] Since there are few bad triangles along $[x,x+v]$, we have, using $x\in A_j^{\delta,v}$, \begin{equation}\label{eq: bad triangles} \begin{split} \sum_{i\,:\,\{K_i,K_{i+1}\}\cap {\mathcal B}_j^\delta \neq \emptyset}& \frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}-(q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|}\\ &\leq C\#\{K\in {\mathcal B}_j^\delta\,:\,K \cap [x,x+v] \neq \emptyset\} \size({\mathcal T}_j) \\ &\leq C\delta|\bar v|\,. \end{split} \end{equation} Combining \eqref{eq: good triangles} and \eqref{eq: bad triangles} yields \begin{equation*} \begin{split} \sum_{i = 0}^{N-1}\frac{\theta_{\bar K_{i}\bar K_{i+1}}^{\bar v}l_{\bar K_i\bar K_{i+1}}d_{K_i^*K_{i+1}^*}}{l_{K^*_iK_{i+1}^*}}& \leq (1+C\delta)\sum_{i = 0}^{N-1}(q(\bar K_{i+1})-q(\bar K_i)) \cdot \frac{\bar v}{|\bar v|}+C\delta|\bar v|\\ &= (1+C\delta)(q(\bar K_N) - q(\bar K_0)) \cdot \frac{\bar v}{|\bar v|} + C \delta |\bar v| \\ &\leq (1+C\delta)\left(|\bar v| + C\size({\mathcal T}_{j})\right). \end{split} \end{equation*} This proves \eqref{eq:16}. \medskip Inserting \eqref{eq:16} in \eqref{eq: difference quotient estimate}, and passing to the limits $j\to\infty$ and $\delta\to 0$, we obtain \[|v\cdot u |^2 \leq \frac{|\bar v|^2}{\sqrt{1+|w|^2}}\liminf_{j\to \infty}E_j\,. \] Now let \[ \underline{u}:=\left(\mathds{1}_{2\times 2},w\right)^T \left(\mathds{1}_{2\times 2}+w\otimes w\right)^{-1}u\,. \] Then we have $|\underline{u}\cdot \bar v|=|u\cdot v|$ and hence \[ \begin{split} |\underline{u}|^2&=\sup_{v\in \R^2\setminus \{0\}}\frac{|\underline{u}\cdot \bar v|^2}{|\bar v|^2}\\ &\leq \frac{1}{\sqrt{1+|w|^2}}\liminf_{j\to \infty}E_j\,. \end{split} \] This proves the proposition. \end{proof} \subsection{Proof of compactness and lower bound in Theorem \ref{thm:main}} \begin{proof}[Proof of Theorem \ref{thm:main} (o)] For a subsequence (no relabeling), we have that $h_j\weakstar h$ in $W^{1,\infty}(M)$. By Lemma \ref{lem:graph_rep}, ${\mathcal T}_j$ may be locally represented as the graph of a Lipschitz function $\tilde h_j:U\to \R$, and $M_h$ as the graph of a Lipschitz function $\tilde h:U\to \R$, where $U\subset\R^2$ and $\tilde h_j\weakstar \tilde h$ in $W^{1,\infty}(U)$. \medskip It remains to prove that $\tilde h\in W^{2,2}(U)$. Since the gradients are uniformly bounded, $\|\nabla \tilde h_j\|_{L^\infty(U)}<C$, we have that the projections of ${\mathcal T}_j$ to $U$ are (uniformly) regular flat triangulated surfaces. Hence by Proposition \ref{prop:lower_blowup} (i), we have that $\tilde h\in W^{2,2}(U)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main} (i)] Let $\mu_j = \sum_{K,L\in {\mathcal T}_j} \frac{1}{d_{KL}} |n(K) - n(L)|^2 \H^1|_{K\cap L}\in {\mathcal M}_+(\R^3)$.
Note that either a subsequence of $\mu_j$ converges narrowly to some $\mu \in {\mathcal M}_+(M_h)$ or there is nothing to show. We will show in the first case that \begin{equation} \frac{d\mu}{d\H^2}(z) \geq |Dn_{M_h}|^2(z)\label{eq:7} \end{equation} at $\H^2$-almost every point $z\in M_h$, which implies in particular the lower bound. By Lemma \ref{lem:graph_rep}, we may reduce the proof to the situation that $M_{h_j}$, $M_h$ are given as graphs of Lipschitz functions $\tilde h_j:U\to \R$, $\tilde h:U\to \R$ respectively, where $U\subset \R^2$ is some open bounded set. We have that $\tilde h_j$ is piecewise affine on some (uniformly in $j$) regular triangulated surface $\tilde {\mathcal T}_j$ that satisfies \[ (\tilde h_j)_*\tilde {\mathcal T}_j={\mathcal T}_j\,. \] Writing down the surface normal to $M_h$ in the coordinates of $U$, \[N(x)=\frac{(-\nabla \tilde h, 1)}{\sqrt{1+|\nabla \tilde h|^2}}\,, \] we have that almost every $x\in U$ is a Lebesgue point of $\nabla N$. We write $N^k=N\cdot e_k$ and note that \eqref{eq:7} is equivalent to \begin{equation} \label{eq:8} \frac{\d\mu}{\d\H^2}(z)\geq \sum_{k=1}^3\nabla N^k(x)\cdot \left(\mathds{1}_{2\times 2}+\nabla \tilde h(x)\otimes\nabla \tilde h(x)\right)^{-1}\nabla N^k(x)\,, \end{equation} where $z=(x,\tilde h(x))$. Also, we define $N_j^k:U\to\R$ by letting $N_j^k(x)=n((\tilde h_j)_*K)\cdot e_k$ for $x\in K\in \tilde {\mathcal T_j}$. (We recall that $n((\tilde h_j)_*K)$ denotes the normal of the triangle $(\tilde h_j)_*K$.) \medskip Let now $x_0\in U$ be a Lebesgue point of $\nabla \tilde h$ and $\nabla N$. We write $z_0=(x_0,\tilde h(x_0))$. Combining the narrow convergence $\mu_j\to\mu$ with the Radon-Nikodym differentiation theorem, we may choose a sequence $r_j\downarrow 0$ such that \[ \begin{split} r_j^{-1}\size{{\mathcal T}_j}&\to 0\\ \liminf_{j\to\infty}\frac{\mu_j(Q^{(3)}(x_0,r_j))}{r_j^2}&= \frac{\d\mu}{\d\H^2}(z_0)\sqrt{1+|\nabla \tilde h(x_0)|^2}\,, \end{split} \] where $Q^{(3)}(x_0,r_j)=x_0+[-r_j/2,r_j/2]^2\times \R$ is the cylinder over $Q(x_0,r_j)$. Furthermore, let $\bar N_j^k,\bar h_j,\bar N^k,\bar h: Q\to \R$ be defined by \[ \begin{split} \bar N_j^k(x)&=\frac{N_j^k(x_0+r_j x)-N_j^k(x_0)}{r_j}\\ \bar N^k(x)&=\nabla N^k(x_0)\cdot x\\ \bar h_j(x)&=\frac{\tilde h_j(x_0+r_j x)-\tilde h_j(x_0)}{r_j}\\ \bar h(x)&=\nabla \tilde h(x_0)\cdot x\,. \end{split} \] We recall that by assumption we have that $N^k\in W^{1,2}(U)$. This implies in particular that, unless $x_0$ lies in a certain set of measure zero (which we discard), \begin{equation}\label{eq:9} \bar N_j^k\to \bar N^k\quad\text{ in } L^2(Q)\,. \end{equation} Also, let $T_j$ be the blowup map \[ T_j(x)=\frac{x-x_0}{r_j} \] and let ${\mathcal T}_j'$ be the triangulated surface one obtains by blowing up $\tilde{\mathcal T}_j$, defined by \[ \tilde K\in \tilde {\mathcal T}_j\quad \Leftrightarrow \quad T_j\tilde K \in {\mathcal T}_j'\,. \] Now let $\mathcal S_j$ be the smallest subset of ${\mathcal T}_j'$ (as sets of triangles) such that $Q\subset\mathcal S_j$ (as subsets of $\R^2$). Note that $\size\mathcal S_j\to 0$, $\bar N_j^k$ is constant and $\bar h_j$ is affine on each $K\in \mathcal S_j$.
Furthermore, for $x\in K\in \tilde {\mathcal T}_j$, we have that \[ \nabla \tilde h_j(x)=\nabla \bar h_j(T_jx) \] This implies in particular \begin{equation} \bar h_j\to \bar h\quad \text{ in } W^{1,2}(Q)\,.\label{eq:6} \end{equation} Concerning the discrete energy functionals, we have for the rescaled triangulated surfaces $({\mathcal T}_j')^*=(\bar h_j)_* {\mathcal T}_j'$, with $K^*=(\bar h_j)_*K$ for $K\in {\mathcal T}_j'$, \begin{equation}\label{eq:10} \liminf_{j\to\infty} \sum_{K,L\in {\mathcal T}_j'}\frac{l_{K^*L^*}}{d_{K^*L^*}} |\bar N_j(K)-\bar N_j(L)|^2\leq \liminf_{j\to\infty}r_j^{-2}\mu_j(Q^{(3)}(x_0,r_j)) \,. \end{equation} Thanks to \eqref{eq:9}, \eqref{eq:6}, we may apply Proposition \ref{prop:lower_blowup} (ii) to the sequences of functions $(\bar h_j)_{j\in\N}$, $(\bar N_j^k)_{j\in\N}$. This yields (after summing over $k\in\{1,2,3\}$) \[ \begin{aligned} |Dn_{M_h}|^2(z_0)&\sqrt{1+|\nabla \tilde h(x_0)|^2}\\ & = \nabla N(x_0)\cdot \left(\mathds{1}_{2\times 2} +\nabla \tilde h(x_0)\otimes \nabla \tilde h(x_0)\right)^{-1}\nabla N(x_0)\sqrt{1+|\nabla \tilde h(x_0)|^2} \\ & \leq \liminf_{j\to\infty} \sum_{K, L\in {\mathcal T}_j'}\frac{l_{K^*L^*}}{d_{K^*L^*}} |\bar N_j(K)-\bar N_j(L)|^2\,, \end{aligned} \] which in combination with \eqref{eq:10} yields \eqref{eq:8} for $x=x_0$, $z=z_0$ and completes the proof of the lower bound. \end{proof} \section{Surface triangulations and upper bound} \label{sec:surf-triang-upper} Our plan for the construction of a recovery sequence is as follows: We shall construct optimal sequences of triangulated surfaces first locally around a point $x\in M_h$. It turns out the optimal triangulation must be aligned with the principal curvature directions at $x$. By a suitable covering of $M_h$, this allows for an approximation of the latter in these charts (Proposition \ref{prop: local triangulation}). We will then formulate sufficient conditions for a vertex set to supply a global approximation (Proposition \ref{prop: Delaunay existence}). The main work that remains to be done at that point to obtain a proof of Theorem \ref{thm:main} (ii) is to add vertices to the local approximations obtained from Proposition \ref{prop: local triangulation} such that the conditions of Proposition \ref{prop: Delaunay existence} are fulfilled. \subsection{Local optimal triangulations} \begin{prop}\label{prop: local triangulation} There are constants $\delta_0, C>0$ such that for all $U \subset \R^2$ open, convex, and bounded; and $h\in C^3(U)$ with $\|\nabla h\|_\infty \eqqcolon \delta \leq \delta_0$, the following holds: Let $\varepsilon > 0$, $ C\delta^2 < |\theta| \leq \frac12$, and define $X \coloneqq \{(\varepsilon k + \theta \varepsilon l , \varepsilon l, h(\varepsilon k + \theta \varepsilon l, \varepsilon l))\in U\times \R\,:\,k,l\in \Z\}$. Then any Delaunay triangulated surface ${\mathcal T}$ with vertex set $X$ and maximum circumradius $\max_{K\in {\mathcal T}} r(K) \leq \varepsilon$ has \begin{equation}\label{eq: local error} \begin{aligned} \sum_{K,L\in {\mathcal T}}& \frac{\l{K}{L}}{d_{KL}}|n(K) - n(L)|^2\\ \leq &\left(1+ C(|\theta|+\delta+\varepsilon)\right) \L^2(U) \times\\ &\times\left(\max_{x\in U} |\partial_{11} h(x)|^2 + \max_{x\in U} |\partial_{22} h(x)|^2 + \frac{1}{|\theta|} \max_{x\in U} |\partial_{12} h(x)|^2 \right)+C\varepsilon\,. \end{aligned} \end{equation} \end{prop} \begin{proof} We assume without loss of generality that $\theta > 0$. 
We consider the projection of $X$ to the plane, \[ \bar X:=\{(\varepsilon k + \theta \varepsilon l , \varepsilon l)\in U:k,l\in\Z\}\,. \] Let $\bar{\mathcal T}$ be the flat triangulated surface that consists of the triangles of the form \[ \begin{split} \varepsilon[ ke_1+l(\theta e_1+e_2),(k+1)e_1+l(\theta e_1+e_2),ke_1+(l+1)(\theta e_1+e_2)]\\ \text{ or } \quad \varepsilon[ ke_1+l(\theta e_1+e_2),(k+1)e_1+l(\theta e_1+e_2),ke_1+(l-1)(\theta e_1+e_2)]\,, \end{split} \] with $k,l\in \Z$ such that the triangles are contained in $U$, see Figure \ref{fig:upper2d_barT}. \begin{figure}[h] \centering \includegraphics[height=5cm]{upper2d_barT.pdf} \caption{The flat triangulated surface $\bar {\mathcal T}$. \label{fig:upper2d_barT}} \end{figure} Obviously, the flat triangulated surface $\bar{\mathcal T}$ has vertex set $\bar X$. Also, we have that \begin{equation}\label{eq:19} |x-y|\leq |(x,h(x))-(y,h(y))|\leq (1+C\delta)|x-y| \end{equation} for all $x,y \in \bar X$. We claim that for $\delta$ chosen small enough, we have the implication \begin{equation}\label{eq:18} h_*K=[(x,h(x)),(y,h(y)),(z,h(z))]\in {\mathcal T}\quad \Rightarrow \quad K= [x,y,z]\in \bar{\mathcal T} \,. \end{equation} Indeed, if $K\not\in \bar {\mathcal T}$, then either $r(K)>\frac32\varepsilon$ or there exists $w\in \bar X$ with $|w-q(K)|<(1 -C\theta)r(K)$. In the first case, $r(h_*K)>(1-C\delta)\frac32\varepsilon$ by \eqref{eq:19} and hence $h_*K\not\in {\mathcal T}$ for $\delta$ small enough. In the second case, we have by \eqref{eq:19} and Lemma \ref{lma: circumcenter regularity} that \[ |(w,h(w))-q(h_*K)|<(1+C\delta)(1 -C\theta)r(h_*K)\,, \] and hence $h_*K$ does not satisfy the Delaunay property for $\delta$ small enough. This proves \eqref{eq:18}. Let $[x,y]$ be an edge with either $x,y \in X$ or $x,y \in \bar X$. We call this edge \emph{horizontal} if $(y-x) \cdot e_2 = 0$, \emph{vertical} if $(y-x) \cdot (e_1 - \theta e_2)= 0$, and \emph{diagonal} if $(y-x) \cdot (e_1 + (1-\theta) e_2) = 0$. By its definition, $\bar {\mathcal T}$ consists only of triangles with exactly one horizontal, vertical, and diagonal edge each. By what we have just proved, the same is true for ${\mathcal T}$. \medskip To calculate the differences between normals of adjacent triangles, let us consider one fixed triangle $K\in {\mathcal T}$ and its neighbors $K_1,K_2,K_3$, with which $K$ shares a horizontal, diagonal and vertical edge respectively, see Figure \ref{fig:upper2d}. \begin{figure}[h] \includegraphics[height=5cm]{upper2d.pdf} \caption{Top view of a triangle $K\in{\mathcal T}$ with its horizontal, diagonal and vertical neighbors $K_1,K_2,K_3$. \label{fig:upper2d}} \end{figure} We assume without loss of generality that one of the vertices of $K$ is the origin. We write $x_0=(0,0)$, $x_1=\varepsilon(1-\theta,-1)$, $x_2=\varepsilon(1,0)$, $x_3=\varepsilon(1+\theta,1)$, $x_4=\varepsilon(\theta,1)$, $x_5=\varepsilon(\theta-1,1)$, and $y_i=(x_i, h(x_i))$ for $i=0,\dots,5$. With this notation we have $K=[y_0,y_2,y_4]$, $K_1=[y_0,y_1,y_2]$, $K_2=[y_2,y_3,y_4]$ and $K_3=[y_4,y_5,y_0]$. See Figure \ref{fig:upper2d_barT}. As approximations of the normals, we define \[ \begin{split} v(K)&=\varepsilon^{-2}y_2\wedge y_4\,\\ v(K_1)&=\varepsilon^{-2} y_1\wedge y_2\\ v(K_2)&= \varepsilon^{-2}(y_3-y_2)\wedge(y_4-y_2)\\ v(K_3)&= \varepsilon^{-2} y_4\wedge y_5\,. \end{split} \] Note that $v(L)$ is parallel to $n(L)$ and $|v(L)|\geq 1$ for $L\in \{K,K_1,K_2,K_3\}$. Hence for $i=1,2,3$, we have that \[ |n(K)-n(K_i)|^2\leq |v(K)-v(K_i)|^2\,.
\] For each $x_i$, we write \[ h(x_i)= x_i \cdot \nabla h(0) + \frac12 x_i \nabla^2 h(0) x_i^T+O(\varepsilon^3)\,, \] where $O(\varepsilon^3)$ denotes terms $f(\varepsilon)$ that satisfy $\limsup_{\varepsilon\to 0}\varepsilon^{-3}|f(\varepsilon)|<\infty$. By an explicit computation we obtain that \[ \begin{split} \left|v(K)-v(K_1)\right|^2&= \varepsilon^2\left|(\theta-1)\theta \partial_{11} h+2(\theta-1)\partial_{12}h+\partial_{22}h\right|^2+O(\varepsilon^3)\\ \left|v(K)-v(K_2)\right|^2&= \varepsilon^2\left(\left|\theta\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\theta\partial_{11} h+(\theta-1)\partial_{12}h\right|^2\right)+O(\varepsilon^3)\\ \left|v(K)-v(K_3)\right|^2&=\varepsilon^2\left( \theta^2\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2\right)+O(\varepsilon^3)\,, \end{split} \] where all derivatives of $h$ are taken at $0$. Using the Cauchy-Schwarz inequality and $|1-\theta|\leq 1$, we may estimate the term on the right hand side in the first line above, \[ \left|(\theta-1)\theta\partial_{11} h+2(\theta-1)\partial_{12}h+\partial_{22}h\right|^2 \leq (1+\theta) |\partial_{22}h|^2+ \left(1+\frac{C}{\theta}\right)\left(\theta^2 |\partial_{11} h|^2+|\partial_{12}h|^2\right)\,. \] In a similar way, we have \[ \begin{split} \left|\theta\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\theta\partial_{11} h+(\theta-1)\partial_{12}h\right|^2&\leq C(|\partial_{12}h|^2 +\theta^2|\partial_{11} h|^2)\\ \theta^2\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2+\left|(\theta-1)\partial_{11} h+\partial_{12}h\right|^2&\leq (1+\theta)|\partial_{11} h|^2+\frac{C}{\theta}|\partial_{12}h|^2\,, \end{split} \] so that \[ \begin{split} \left|n(K)-n(K_1)\right|^2&\leq \varepsilon^2(1+\theta) |\partial_{22}h|^2+ C\varepsilon^2 \left(\theta |\partial_{11} h|^2+ \frac1\theta |\partial_{12}h|^2\right)+O(\varepsilon^3)\\ \left|n(K)-n(K_2)\right|^2&\leq C\varepsilon^2 (|\partial_{12}h|^2 +\theta^2|\partial_{11} h|^2)+O(\varepsilon^3)\\ \left|n(K)-n(K_3)\right|^2&\leq \varepsilon^2(1+\theta)|\partial_{11} h|^2+\frac{C}{\theta}\varepsilon^2|\partial_{12}h|^2+O(\varepsilon^3)\,. \end{split} \] Also, we have by Lemma \ref{lma: circumcenter regularity} that \[ \begin{split} \frac{l_{KK_1}}{d_{KK_1}}&\leq 1+C(\delta+\varepsilon+\theta)\\ \frac{l_{KK_2}}{d_{KK_2}}&\leq \left(1+C(\delta+\varepsilon+\theta)\right) \frac{C}{\theta}\\ \frac{l_{KK_3}}{d_{KK_3}}&\leq 1+C(\delta+\varepsilon+\theta)\,. \end{split} \] Combining all of the above, and summing up over all triangles in ${\mathcal T}$, we obtain the statement of the proposition. \end{proof}
\subsection{Global triangulations} We are going to use a known fact about triangulations of point sets in $\R^2$, and transfer it to $\R^3$. We first cite a result for planar Delaunay triangulations, Theorem \ref{thm: planar Delaunay} below, which can be found in e.g. \cite[Chapter 9.2]{berg2008computational}. This theorem states the existence of a Delaunay triangulated surface associated to a \emph{protected} set of points.
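Before giving the precise statements, the following Python sketch (ours; \texttt{scipy.spatial.Delaunay} provides the triangulation, and the circumcenter helper is written ad hoc for this check) illustrates how the protection constant of a planar point set can be measured empirically: it is the smallest gap $\left||p-q|-r\right|$ between a non-vertex point and the circumsphere of a Delaunay triangle.
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
k, l = np.meshgrid(np.arange(10), np.arange(10))
X = np.c_[k.ravel(), l.ravel()] + 0.2 * rng.uniform(-1, 1, (100, 2))

tri = Delaunay(X)

def circumcenter(a, b, c):
    # solve |q-a|^2 = |q-b|^2 = |q-c|^2 as a 2x2 linear system
    A = 2.0 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(A, rhs)

gap = np.inf
for simplex in tri.simplices:
    a, b, c = X[simplex]
    q = circumcenter(a, b, c)
    r = np.linalg.norm(a - q)
    others = np.delete(np.arange(len(X)), simplex)
    gap = min(gap,
              np.min(np.abs(np.linalg.norm(X[others] - q, axis=1) - r)))
print("empirical protection constant:", gap)
\end{verbatim}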
\begin{defi} Let $N\subset\R^3$ be compact, $X\subset N$ a finite set of points and \[ D(X,N)=\max_{x\in N}\min_{y\in X}|x-y|\,. \] We say that $X$ is $\bar \delta$-protected if whenever $x,y,z \in X$ form a regular triangle $[x,y,z]$ with circumball $\overline{B(q,r)}$ satisfying $r \leq D(X,N)$, then $\left| |p-q| - r \right| \geq \bar\delta$ for any $p\in X \setminus \{x,y,z\}$. \end{defi} \begin{thm}[\cite{berg2008computational}]\label{thm: planar Delaunay} Let $\alpha > 0$. Let $X\subset \R^2$ be finite and not collinear. Define $\Omega := \conv(X)$. Assume that \[\min_{x\neq y \in X} |x-y| \geq \alpha D(X,\Omega)\,, \] and that $X$ is $\delta D(X,\Omega)$-protected for some $\delta>0$. Then there exists a unique maximal Delaunay triangulated surface ${\mathcal T}$ with vertex set $X$, given by all regular triangles $[x,y,z]$, $x,y,z\in X$, with circumdisc $\overline{B(q,r)}$ such that $B(q,r) \cap X = \emptyset$. The triangulated surface ${\mathcal T}$ forms a partition of $\Omega$, in the sense that \[ \sum_{K\in {\mathcal T}} \mathds{1}_K = \mathds{1}_\Omega\quad {\mathscr H}^2\text{-almost everywhere}\,, \] where $\mathds{1}_A$ denotes the characteristic function of $A\subset \R^2$. Further, any triangle $K\in {\mathcal T}$ with $\dist(K,\partial \Omega) \geq 4D(X,\Omega)$ is $c(\alpha)$-regular, and $d_{KL} \geq \frac{\delta}{2} D(X,\Omega)$ for all pairs of triangles $K \neq L \in {\mathcal T}$. \end{thm} We are now in a position to formulate sufficient conditions for a vertex set to yield a triangulated surface that serves our purpose. \begin{prop}\label{prop: Delaunay existence} Let $N\subset\R^3$ be a 2-dimensional compact smooth manifold, and let $\alpha, \delta > 0$. Then there is $\varepsilon = \varepsilon(N,\alpha,\delta)>0$ such that whenever $X\subset N$ satisfies \begin{itemize} \item [(a)]$D(X,N) \leq \varepsilon$, \item [(b)] $\min_{x\neq y\in X} |x-y| \geq \alpha D(X,N)$, \item [(c)] $X$ is $\delta D(X,N)$-protected, \end{itemize} then there exists a triangulated surface ${\mathcal T}(X,N)$ with the following properties: \begin{itemize} \item [(i)] $\size({\mathcal T}(X,N)) \leq 2D(X,N)$. \item [(ii)] ${\mathcal T}(X,N)$ is $c(\alpha)$-regular. \item [(iii)] ${\mathcal T}(X,N)$ is Delaunay. \item [(iv)] Whenever $K\neq L \in {\mathcal T}(X,N)$, we have $d_{KL} \geq \frac{\delta}{2} D(X,N)$. \item [(v)] The vertex set of ${\mathcal T}(X,N)$ is $X$. \item [(vi)] ${\mathcal T}(X,N)$ is a $C(\alpha, N)D(X,N)$-Lipschitz graph over $N$. In particular, ${\mathcal T}(X,N)$ is homeomorphic to $N$. \end{itemize} \end{prop} The surface case we treat here can be viewed as a perturbation of Theorem \ref{thm: planar Delaunay}. We note that the protection property (c) is vital to the argument. A very similar result to Proposition \ref{prop: Delaunay existence} was proved in \cite{boissonnat2013constructing}, but we present a self-contained proof here. \begin{proof}[Proof of Proposition \ref{prop: Delaunay existence}] We construct the triangulated surface ${\mathcal T}(X,N)$ as follows: Consider all regular triangles $K=[x,y,z]$ with $x,y,z\in X$ such that the Euclidean Voronoi cells $V_x,V_y,V_z$ intersect in $N$, i.e. there is $\tilde q \in N$ such that $|\tilde q - x| = |\tilde q - y| = |\tilde q - z| \leq |\tilde q - p|$ for any $p\in X\setminus \{x,y,z\}$. \emph{Proof of (i):} Let $[x,y,z]\in {\mathcal T}(X,N)$. Let $\tilde q \in V_x \cap V_y \cap V_z \cap N$, set $\tilde r := |\tilde q - x|$.
Then $\tilde r = \min_{p\in X} |\tilde q - p| \leq D(X,N)$, and because $[x,y,z]\subset \overline{B(\tilde q, \tilde r)}$ we have $\diam([x,y,z])\leq 2 \tilde r \leq 2D(X,N)$. \emph{Proof of (ii):} Let $\overline{B(q,r)}$ denote the Euclidean circumball of $[x,y,z]$. Then $r\leq \tilde r$ by the definition of the circumball. Thus $\min(|x-y|,|x-z|,|y-z|) \geq \alpha r$, and $[x,y,z]$ is $c(\alpha)$-regular by the following argument: Rescaling such that $r = 1$, consider the class of all triangles $[x,y,z]$ with $x,y,z \in S^1$, $\min(|x-y|,|x-z|,|y-z|) \geq \alpha$. All these triangles are $\zeta$-regular for some $\zeta>0$, and by compactness there is a least regular triangle in this class. That triangle's regularity is $c(\alpha)$. \emph{Proof of (iii):} Because of (ii), $N\cap \overline{B(q,r)}$ is a $C(\alpha, N)\varepsilon$-Lipschitz graph over a convex subset $U$ of the plane $ x + \R(y-x) + \R(z-x)$, say $N\cap \overline{B(q,r)} = U_h$. It follows that $\tilde q - q = h(\tilde q) n_U$. Because $h(x)= 0$, it follows that $|\tilde q - q| = |h(\tilde q)| \leq C(\alpha, N) D(X,N)^2$. Thus, for $D(X,N) \leq \delta(2C(\alpha,N))^{-1}$, we have that $|\tilde q - q| \leq \frac{\delta}{2}D(X,N)$. This together with (c) suffices to show the Delaunay property of ${\mathcal T}(X,N)$: Assume there exists $p\in (X \setminus \{x,y,z\}) \cap B(q,r)$. Then by (c) we have $|p-q| \leq r - \delta D(X,N)$, and by the triangle inequality $|p-\tilde q| \leq |p- q| + \frac{\delta}{2}D(X,N) < \tilde r$, a contradiction. \emph{Proof of (iv):} It follows also from (c) and Lemma \ref{lma: circumcenter regularity} that for all adjacent $K,L\in {\mathcal T}(X,N)$ we have $d_{KL} \geq \frac{\delta}{2} D(X,N)$. \emph{Proof of (v) and (vi):} Let $\eta>0$, to be fixed later. There is $s>0$ such that for every $x_0\in N$, the orthogonal projection $\pi:\R^3 \to x_0 + T_{x_0}N$ is an $\eta$-isometry when restricted to $N\cap B(x_0,s)$, in the sense that $|D\pi - \mathrm{id}_{TN}|\leq \eta$. Let us write $X_\pi=\pi(X\cap B(x_0,s))$. This point set fulfills all the requirements of Theorem \ref{thm: planar Delaunay} (identifying $x_0+T_{x_0}N$ with $\R^2$), except for possibly protection. We will prove below that \begin{equation}\label{eq:23} X_\pi\text{ is } \frac{\delta}{4}D(X,N) \text{-protected}. \end{equation} We will then consider the planar Delaunay triangulated surface ${\mathcal T}' \coloneqq {\mathcal T}(X_\pi, x_0 + T_{x_0}N)$, and show that for $x,y,z\in B(x_0,s/2)$ we have \begin{equation}\label{eq:22} K:=[x,y,z]\in {\mathcal T}(X,N)\quad \Leftrightarrow \quad K_\pi:=[\pi(x),\pi(y),\pi(z)]\in {\mathcal T}'\,. \end{equation} If we prove these claims, then (v) follows from Theorem \ref{thm: planar Delaunay}, while (vi) follows from Theorem \ref{thm: planar Delaunay} and Lemma \ref{lma: graph property}. \medskip We first prove \eqref{eq:23}: Let $\pi(x),\pi(y),\pi(z)\in X_\pi$, write $K_\pi= [\pi(x),\pi(y),\pi(z)]$, and assume $r(K_\pi)\leq D(X_\pi,\mathrm{conv}(X_{\pi}))$. For a contradiction, assume that there is $\pi(p)\in X_\pi\setminus \{\pi(x),\pi(y),\pi(z)\}$ such that \[ \left||q(K_\pi)-\pi(p)|-r(K_\pi)\right|<\frac{\delta}{4}D(X,N)\,. \] Using again $|D\pi-\mathrm{id}_{TN}|<\eta$ and Lemma \ref{lma: circumcenter regularity}, we obtain, with $K=[x,y,z]$, \[ \left||q(K)-p|-r(K)\right|<(1+C\eta)\frac{\delta}{4}D(X,N)\,. \] Choosing $\eta$ small enough, we obtain a contradiction to (c). This completes the proof of \eqref{eq:23}.
\medskip Next we show the implication $K\in {\mathcal T}\Rightarrow K_\pi\in {\mathcal T}'$: Let $p\in X\cap B(x_0,s) \setminus \{x,y,z\}$. Assume for a contradiction that $\pi(p)$ is contained in the circumball of $K_\pi$, \[ |\pi(p) - q(K_\pi)|\leq r(K_\pi)\,. \] Then by $|D\pi-\mathrm{id}_{TN}|<\eta$ and Lemma \ref{lma: circumcenter regularity}, \[ |p-q(K)|\leq r(K) + C(\alpha)\eta D(X,N)\,. \] Choosing $\eta<\delta/(2C(\alpha))$, we have by (c) that \[|p-q(K)| \leq r(K) - \delta D(X,N)\,, \] which in turn implies $|p-\tilde q| < \tilde r$. This is a contradiction to $\tilde q \in V_x \cap V_y \cap V_z$, since $p$ is closer to $\tilde q$ than any of $x,y,z$. This shows $K_\pi\in {\mathcal T}'$. Now we show the implication $K_\pi\in {\mathcal T}'\Rightarrow K\in {\mathcal T}$: Let $x,y,z\in X\cap B(x_0,s/2)$ with $[\pi(x),\pi(y),\pi(z)]\in {\mathcal T}'$. Let $p\in X\cap B(x_0,s) \setminus \{x,y,z\}$. Assume for a contradiction that $|p-\tilde q| \leq \tilde r$. Then again by Lemma \ref{lma: circumcenter regularity} we have \[ |p - \tilde q| \leq \tilde r \Rightarrow |p-q| < r + \delta D(X,N) \Rightarrow |p-q| \leq r - \delta D(X,N) \Rightarrow |\pi(p) - q'| < r'. \] Here $q'$ and $r'$ denote the circumcenter and the circumradius of $K_\pi$; we again used (c) and the fact that $D(X,N)$ is small enough. The last inequality is a contradiction, completing the proof of \eqref{eq:22}, and hence the proof of the present proposition. \end{proof} \begin{rem} A much shorter proof exists for the case of the two-sphere, $N = \mathcal{S}^2$. Here, any finite set $X\subset \mathcal{S}^2$ such that no four points of $X$ are coplanar and every open hemisphere contains a point of $X$ admits a Delaunay triangulation homeomorphic to $\mathcal{S}^2$, namely $\partial \conv(X)$. Because no four points are coplanar, every face of $\partial \conv(X)$ is a regular triangle $K = [x,y,z]$. The circumcircle of $K$ then lies on $\mathcal{S}^2$ and $q(K) = n(K)|q(K)|$, where $n(K)\in \mathcal{S}^2$ is the outer normal. (The case $q(K)=-|q(K)|n(K)$ is forbidden because the hemisphere $\{x\in \mathcal{S}^2\,:\,x \cdot n(K)>0\}$ contains a point in $X$.) To see that the circumball contains no other point $p\in X\setminus \{x,y,z\}$, we note that since $K\subset \partial \conv(X)$ we have $(p-x)\cdot n(K)< 0$, and thus $|p-q(K)|^2 = 1 + |q(K)|^2 - 2p \cdot q(K) > 1 + |q(K)|^2 - 2x \cdot q(K) = |x-q(K)|^2$. Finally, $\partial \conv(X)$ is homeomorphic to $\mathcal{S}^2$ since $\conv(X)$ contains a regular tetrahedron. \end{rem} We are now in a position to prove the upper bound of our main theorem, Theorem \ref{thm:main} (ii). \begin{figure} \includegraphics[height=5cm]{upper_global.pdf} \caption{The global triangulation of a smooth surface is achieved by first covering a significant portion of the surface with the locally optimal triangulation, then adding additional points in between the regions, and finally finding a global Delaunay triangulation. \label{fig:upper bound}} \end{figure} \begin{proof}[Proof of Theorem \ref{thm:main} (ii)] We first note that it suffices to show the result for $h\in C^3(M)$ with $\|h\|_\infty < \frac{\delta(M)}{2}$. To see this, in the general case $h\in W^{2,2}(M)\cap W^{1,\infty}(M)$ with $\|h\|_\infty \leq \frac{\delta(M)}{2}$, we approximate $h$ by smooth functions $h_{\beta} := H_\beta h$, where $(H_\beta)_{\beta >0 }$ is the heat semigroup. Clearly $H_\beta h \in C^\infty(M)$, and $\nabla H_\beta h \to \nabla h$ uniformly, so that $\|h_{\beta}\|_{\infty}\leq \frac{\delta(M)}{2}$ and $\|\nabla h_{\beta}\|_\infty <\|\nabla h\|_{\infty}+1$ for $\beta$ small enough.
Then \[ \int_M f(x,h_\beta(x),\nabla h_\beta(x), \nabla^2 h_\beta) \,\d{\mathscr H}^2 \to \int_M f(x,h(x),\nabla h(x), \nabla^2 h) \,\d{\mathscr H}^2 \] for $\beta\to 0$ whenever \[f:M \times [-\delta(M)/2, \delta(M)/2] \times B(0,\|\nabla h\|_{\infty}+1) \times (TM \times TM) \to \R \] is continuous with quadratic growth in $\nabla^2 h$. The Willmore functional \[ h\mapsto \int_{M_h} |Dn_{M_h}|^2\d{\mathscr H}^2\,, \] which is our limit functional, may be written in this way. This proves our claim that we may reduce our argument to the case $h\in C^3(M)$, since the above approximation allows for the construction of suitable diagonal sequences in the strong $W^{1,p}$ topology, for every $p<\infty$. \medskip For the rest of the proof we fix $h\in C^3(M)$. We choose a parameter $\delta>0$. By compactness of $M_h$, there is a finite family of pairwise disjoint closed sets $(Z_i)_{i\in I}$ such that \[ {\mathscr H}^2\left(M_h \setminus \bigcup_{i\in I} Z_i\right) \leq \delta \] and such that, after applying a rigid motion $R_i:\R^3\to \R^3$, the surface $R_i(M_h \cap Z_i)$ is the graph of a function $h_i\in C^2(U_i)$ for some open sets $(U_i)_{i\in I}$ with $\|\nabla h_i\|_\infty \leq \delta$ and $\|\nabla^2 h_i - \diag(\alpha_i,\beta_i)\|_\infty \leq \delta$. \medskip We can apply Proposition \ref{prop: local triangulation} to $R_i(M_h \cap Z_i)$ with global parameters $\theta := \delta$ and $\varepsilon>0$ such that $\dist(Z_i,Z_j)>2\varepsilon$ for $i\neq j$, yielding point sets $X_{i,\varepsilon}\subset M_h \cap B_i$. The associated triangulated surfaces ${\mathcal T}_{i,\varepsilon}$ (see Figure~\ref{fig:upper bound}) have the Delaunay property, have vertex sets $X_{i,\varepsilon}$ and maximum circumball radius at most $\varepsilon$. Furthermore, we have that \begin{equation}\label{eq: sum local interactions} \begin{aligned} \sum_{i\in I} &\sum_{K,L\in {\mathcal T}_{i,\varepsilon}} \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2\\ & \leq (1+C(\delta+\varepsilon)) \sum_{i\in I} \L^2(U_i)\times\\ &\quad \times \left(\max_{x\in U_i}|\partial_{11}h_i(x)|^2+\max_{x\in U_i}|\partial_{22}h_i(x)|^2+\delta^{-1}\max_{x\in U_i}|\partial_{12}h_i(x)|^2\right)+C\varepsilon\\ &\leq (1+C(\delta+\varepsilon)) \sum_{i\in I} \int_{M_h \cap Z_i} |Dn_{M_h}|^2 \,\d{\mathscr H}^2+C(\varepsilon+\delta)\,, \end{aligned} \end{equation} where in the last line we have used $\|\nabla h_i\|_{\infty}\leq \delta$, $\|\nabla^2h_i-\diag(\alpha_i,\beta_i)\|_{\infty}\leq \delta$, and the identity \[ \begin{split} \int_{M_h \cap Z_i} |Dn_{M_h}|^2 \,\d{\mathscr H}^2&= \int_{(U_i)_{h_i}}|Dn_{(U_i)_{h_i}}|^2\d\H^2\\ &=\int_{U_i}\left|(\mathbf{1}_{2\times 2}+\nabla h_i\otimes \nabla h_i)^{-1}\nabla^2 h_i\right|^2(1+|\nabla h_i|^2)^{-1/2}\d x\,. \end{split} \] We shall use the point set $Y_{0,\varepsilon} := \bigcup_{i\in I} X_{i,\varepsilon}$ as a basis for a global triangulated surface. We shall successively augment the set by a single point $Y_{n+1,\varepsilon} := Y_{n,\varepsilon} \cup \{p_{n,\varepsilon}\}$ until the construction below terminates after finitely many steps. We claim that we can choose the points $p_{n,\varepsilon}$ in such a way that for every $n\in\N$ we have \begin{itemize} \item [(a)] $\min_{x,y\in Y_{n,\varepsilon}, x\neq y} |x-y| \geq \frac{\varepsilon}{2}$. \item [(b)] Whenever $x,y,z,p\in Y_{n,\varepsilon}$ are four distinct points such that the circumball $\overline{B(q,r)}$ of $[x,y,z]$ exists and has $r\leq \varepsilon$, then \[ \left| |p-q| - r \right| \geq \frac{\delta}{2} \varepsilon.
\] If at least one of the four points $x,y,z,p$ is not in $Y_{0,\varepsilon}$, then \begin{equation}\label{eq:21} \left| |p-q| - r \right| \geq c \varepsilon, \end{equation} where $c>0$ is a universal constant. \end{itemize} First, we note that both (a) and (b) are true for $Y_{0,\varepsilon}$. Now, assume we have constructed $Y_{n,\varepsilon}$. If there exists a point $x\in M_h$ such that $B(x,\varepsilon) \cap Y_{n,\varepsilon} = \emptyset$, we consider the set $A_{n,\varepsilon}\subset M_h \cap B(x,\frac{\varepsilon}{2})$ consisting of all points $p\in M_h \cap B(x,\frac{\varepsilon}{2})$ such that for all regular triangles $[u,v,w]$ with $u,v,w\in Y_{n,\varepsilon}$ and circumball $\overline{B(q,r)}$ satisfying $r\leq 2\varepsilon$, we have $\left||p-q| - r\right| \geq c \varepsilon$. Since $Y_{n,\varepsilon}$ satisfies (a), the set $A_{n, \varepsilon}$ is nonempty if $c>0$ is chosen small enough, since for all triangles $[u,v,w]$ as above we have \[ {\mathscr H}^2\left(\left\{ p\in B(x,\frac{\varepsilon}{2})\cap M_h\,:\,\left||p-q| - r\right| < c \varepsilon \right\}\right) \leq 4c \varepsilon^2, \] and the total number of regular triangles $[u,v,w]$ with $r\leq 2\varepsilon$ and $\overline{B(q,r)}\cap B(x,\varepsilon)\neq \emptyset$ is universally bounded as long as $Y_{n,\varepsilon}$ satisfies (a). We pick $p_{n,\varepsilon}\in A_{n,\varepsilon}$; then clearly $Y_{n+1,\varepsilon} \coloneqq Y_{n,\varepsilon} \cup \{p_{n,\varepsilon}\}$ satisfies (a) by the triangle inequality. We now have to show that $Y_{n+1,\varepsilon}$ still satisfies (b). This is obvious whenever $p = p_{n,\varepsilon}$ by the definition of $A_{n,\varepsilon}$. If $p_{n,\varepsilon}$ is none of the points $x,y,z,p$, then (b) is inherited from $Y_{n,\varepsilon}$. It remains to consider the case $p_{n,\varepsilon} = x$. Then $x$ has distance at least $c\varepsilon$ to all circumspheres of nearby triples with radius at most $2\varepsilon$. We now assume that the circumball $\overline{B(q,r)}$ of $[x,y,z]$ has radius $r \leq \varepsilon$ and that some point $p\in Y_{n,\varepsilon}$ is close to $\partial B(q,r)$. To quantify this, define \[ \eta \coloneqq \frac{\left||p-q| - r \right|}{\varepsilon}\,. \] We show that $\eta \geq \eta_0$ for some universal constant $\eta_0>0$. To this end, we set \[ p_t \coloneqq (1-t)p + t\left(q+r\frac{p-q}{|p-q|}\right) \] (see Figure \ref{fig:pt}) and note that if $\eta\leq \eta_0$, all triangles $[y,z,p_t]$ are uniformly regular. \begin{figure}[h] \centering \includegraphics[height=5cm]{pt.pdf} \caption{The definition of $p_t$ as linear interpolation between $p_0$ and $p_1$. \label{fig:pt}} \end{figure} Define the circumcenters $q_t \coloneqq q(y,z,p_t)$, and note that $q_1 = q$. By Lemma \ref{lma: circumcenter regularity}, we have $|q_1 - q_0| \leq C|p_1 - p_0| = C\eta \varepsilon$ if $\eta\leq \eta_0$. Thus the circumradius of $[y,z,p_0]$ is bounded by \[ |y-q_0| \leq |y-q| + |q-q_0| \leq (1+C\eta)\varepsilon \leq 2\varepsilon \] if $\eta\leq \eta_0$. Because $x\in Y_{n+1,\varepsilon} \setminus Y_{n,\varepsilon} \subset A_{n,\varepsilon}$, we have, using the definition of $A_{n,\varepsilon}$, \[ c\varepsilon \leq \left| |x-q_0| - |p-q_0|\right| \leq \left| |x-q| - |p-q| \right| + 2 |q - q_0| \leq (1+2C)\eta\varepsilon, \] i.e. that $\eta \geq \frac{c}{1+2C}$. This shows (b).
Since $M_h$ is compact, this construction eventually terminates, resulting in a set $X_\varepsilon := Y_{N(\varepsilon),\varepsilon} \subset M_h$ with the properties (a), (b), and $D(X_\varepsilon,M_h) \leq \varepsilon$. \medskip Consider a Lipschitz function $g:M_h\to \R$. Since $M_h$ is a $C^2$ surface, we have that for $\|g\|_{W^{1,\infty}}$ small enough, $(M_h)_g$ is locally a tangent Lipschitz graph over $M$, see Definition \ref{def:Mgraph} (iii). By Lemma \ref{lma: graph property}, this implies that $(M_h)_g$ is a graph over $M$. Invoking Proposition \ref{prop: Delaunay existence} yields a Delaunay triangulated surface ${\mathcal T}_\varepsilon \coloneqq {\mathcal T}(X_\varepsilon, M_h)$ with vertex set $X_\varepsilon$ that is $\zeta_0$-regular for some $\zeta_0>0$, and $\bigcup_{K\in {\mathcal T}_\varepsilon} K = (M_h)_{g_\varepsilon}$ with $\|g_\varepsilon\|_{W^{1,\infty}}\leq C(\delta)\varepsilon$. By the above, there exist Lipschitz functions $h_\varepsilon:M\to \R$ such that $(M_h)_{g_\varepsilon} = M_{h_\varepsilon}$, with $h_\varepsilon \to h$ in $W^{1,\infty}$, $\|h_\varepsilon\|_\infty \leq \frac{\delta(M)}{2}$ and $\|\nabla h_\varepsilon\|_\infty\leq \|\nabla h\|_{\infty}+1$. \medskip It remains to estimate the energy. To do so, we look at the two types of interfaces appearing in the sum \[ \sum_{K,L\in {\mathcal T}_\varepsilon} \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2. \] First, we look at pairwise interactions where $K,L\in {\mathcal T}_{i,\varepsilon}$ for some $i$. These are bounded by \eqref{eq: sum local interactions}. Next, we note that if $\varepsilon < \min_{i\neq j \in I} \dist(B_i,B_j)$, it is impossible for $X_{i,\varepsilon}$ and $X_{j,\varepsilon}$, $i\neq j$, to interact. Finally, we consider all interactions of neighboring triangles $K,L\in {\mathcal T}_\varepsilon$ where at least one vertex is not in $Y_{0,\varepsilon}$. By \eqref{eq:21}, these pairs all satisfy $\frac{l_{KL}}{d_{KL}} \leq C$ for some universal constant $C$ independent of $\varepsilon,\delta$, and $|n(K) - n(L)|\leq C\varepsilon$ because ${\mathcal T}_\varepsilon$ is $\zeta_0$-regular and $M_h$ is $C^2$. Further, no points were added inside any $B_i$. Thus \[ \begin{split} \sum_{\substack{K,L\in {\mathcal T}_\varepsilon\,:\,\text{at least}\\\text{ one vertex is not in }Y_{0,\varepsilon}}}& \frac{l_{KL}}{d_{KL}} |n(K) - n(L)|^2 \\ &\leq C{\mathscr H}^2\left(M_h \setminus \bigcup_{i\in I}B(x_i, r_i - 2\varepsilon)\right)\\ &\leq C \delta + C(\delta)\varepsilon. \end{split} \] Choosing an appropriate diagonal sequence $\delta(\varepsilon) \to 0$ yields a sequence ${\mathcal T}_\varepsilon = M_{h_\varepsilon}$ with $h_\varepsilon\to h$ in $W^{1,\infty}(M)$ such that \[ \limsup_{\varepsilon \to 0} \sum_{K,L\in {\mathcal T}_\varepsilon} \frac{l_{KL}}{d_{KL}} |n(K) -n(L)|^2 \leq \int_{M_h} |Dn_{M_h}|^2\,d{\mathscr H}^2.
\] \end{proof} \section{Necessity of the Delaunay property} \label{sec:necess-dela-prop} We now show that without the Delaunay condition, it is possible to achieve a lower energy. In contrast to the preceding sections, we are going to choose an underlying manifold $M$ with boundary (the ``hollow cylinder'' $S^1\times[-1,1]$). By ``capping off'' the hollow cylinder one can construct a counterexample to the lower bound in Theorem \ref{thm:main}, where it is assumed that $M$ is compact without boundary. \begin{prop}\label{prop: optimal grid} Let $M =S^1\times[-1,1] \subset \R^3$ be a hollow cylinder and $\zeta>0$. Then there are $\zeta$-regular triangulated surfaces ${\mathcal T}_j\subset \R^3$ with $\size({\mathcal T}_j) \to 0$ and ${\mathcal T}_j \to M$ for $j\to\infty$ with \[ \limsup_{j\to\infty} \sum_{K,L\in {\mathcal T}_j} \frac{\l{K}{L}}{d_{KL}} |n(K)-n(L)|^2 < c(\zeta) \int_M |Dn_M|^2\,d\H^2\,, \] where the positive constant $c(\zeta)$ satisfies \[ c(\zeta)\to 0 \quad \text{ for } \quad\zeta\to 0\,. \] \end{prop} \begin{figure} \includegraphics[height=5cm]{cylinder2.pdf} \caption{A non-Delaunay triangulated cylinder achieving a low energy. \label{fig:cylinder}} \end{figure} \begin{proof} For every $\varepsilon = 2^{-j}$ and $s\in\{2\pi m^{-1}:m=3,4,5,\dots\}$, we define a flat triangulated surface ${\mathcal T}_j\subset \R^2$ with $\size({\mathcal T}_j) \leq \varepsilon$ as follows: As manifolds with boundary, ${\mathcal T}_j=[0,2\pi]\times [-1,1]$ for all $j$; all triangles are isosceles, with one side a translation of $[0,\varepsilon]e_2$ and height $s\varepsilon$ in $e_1$-direction. We neglect the triangles close to the boundary $[0,2\pi]\times\{\pm 1\}$, and leave it to the reader to verify that their contribution will be negligible in the end. \medskip We then wrap this triangulated surface around the cylinder, mapping the corners of triangles onto the surface of the cylinder via $(\theta,t) \mapsto (\cos\theta, \sin\theta, t)$, to obtain a triangulated surface $\tilde {\mathcal T}_j$. Obviously, the topology of $\tilde {\mathcal T}_j$ is $S^1\times[-1,1]$. Then we may estimate all terms $\frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2$. We first find the normal of the reference triangle $K\in \tilde {\mathcal T}_j$ spanned by the points $x = (1,0,0)$, $y = (1,0,\varepsilon)$, and $z = (\cos(s\varepsilon),\sin(s\varepsilon),\varepsilon/2)$. We note that \[ n(K) = \frac{(z-x) \times (y-x)}{|(z-x) \times (y-x)|} = \frac{(\varepsilon\sin(s\varepsilon), \varepsilon(1-\cos(s\varepsilon)),0)}{\varepsilon\sqrt{2-2\cos(s\varepsilon)}} = (1,0,0) + O(s\varepsilon). \] We note that the normal is the same for all translations $K+te_3$ and for all triangles bordering $K$ diagonally. The horizontal neighbor $L$ also has $n(L) = (1,0,0) + O(s\varepsilon)$. However, we note that the dimensionless prefactor satisfies $\frac{\l{K}{L}}{d_{KL}} \leq \frac{2\varepsilon}{\varepsilon/s} = 2s$. Summing up the $O(s^{-1}\varepsilon^{-2})$ contributions yields \[ \sum_{K,L\in \tilde{\mathcal T}_j} \frac{\l{K}{L}}{d_{KL}} |n(K) - n(L)|^2 \leq C \frac{s^3\varepsilon^2}{s\varepsilon^2} = Cs^2. \] This holds provided that $\varepsilon$ is small enough. Letting $s\to 0$, we see that this energy is arbitrarily small. \end{proof} \bibliographystyle{alpha}
\section{Introduction}\label{sec:Intro} This paper is a further exploration in our research on Gauss quadrature for the classical orthogonal polynomials; earlier publications are \cite{Gil:2018:FGH}, \cite{Gil:2018:GHL}, \cite{Gil:2018:AJP}, \cite{Gil:2019:NIG}. Other recent relevant papers on this topic are \cite{Bogaert:2014:IFC}, \cite{Hale:2013:FAC}, \cite{Town:2016:IMA}. When we assume that the degree $n$ and the two parameters $\alpha$ and $\beta$ of the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$ are large, and we consider the variable $x$ as a parameter that causes nonuniform behavior of the polynomial, it can be expected that, for a detailed and optimal description of the asymptotic approximation, we need a function of three variables. Candidates for this are the Gegenbauer and the Laguerre polynomial. The Gegenbauer polynomial can be used when the ratio $\alpha/\beta$ does not tend to zero or to infinity. When it does, the Laguerre polynomial is the best option. It is possible to transform an integral of $P_n^{(\alpha,\beta)}(x)$ into an integral resembling one of the Gegenbauer or the Laguerre polynomial (and similarly when we are working with differential equations). From a theoretical point of view this may be of interest; however, for practical purposes, when using the results for Gauss quadrature, the transformations and the coefficients in the expansions become rather complicated. In addition, computing the approximants, that is, large-degree polynomials with a large additional parameter and a variable in domains where nonuniform behavior of these polynomials may happen, gives an extra nontrivial complication. Even when we use the Bessel functions or Hermite polynomials as approximants, these complications are still quite relevant. For this reason we consider in this paper expansions in terms of elementary functions, and we will see that evaluating even a modest number of coefficients already gives quite complicated expressions. For large values of $\beta$ with fixed degree $n$ we have quite simple results derived in \cite{Gil:2018:AJP}, a paper inspired by \cite{Dimitrov:2016:ABJ}. Large-degree results valid near $x=1$ are given in \cite[\S28.4]{Temme:2015:AMI}, and for the case that $\beta$ is large as well we refer to \cite[\S28.4.1]{Temme:2015:AMI}. \section{Several asymptotic phenomena}\label{sec:phen} To describe the behavior of the Jacobi polynomial for large degree and parameters $\alpha$ and $\beta$, with $x\in[-1,1]$, it is instructive to consider the differential equation of the function \begin{equation}\label{eq:Intro01} W(x)=(1-x)^{\frac12(\alpha+1)}(1+x)^{\frac12(\beta+1)}P_n^{(\alpha,\beta)}(x). \end{equation} By using the Liouville--Green transformations as described in \cite{Olver:1997:ASF}, uniform expansions can be derived for all combinations of the parameters $n$, $\alpha$, $\beta$. Let $\sigma$, $\tau$ and $\kappa$ be defined by \begin{equation}\label{eq:Intro02} \sigma=\frac{\alpha+\beta}{2\kappa},\quad \tau=\frac{\alpha-\beta}{2\kappa},\quad \kappa=n+\tfrac12(\alpha+\beta+1). \end{equation} Then $W(x)$ satisfies the differential equation \begin{equation}\label{eq:Intro03} \frac{d^2}{dx^2}W(x)=-\frac{\kappa^2(x_+-x)(x-x_-) +\frac14(x^2+3)}{(1-x^2)^2} W(x), \end{equation} where \begin{equation}\label{eq:Intro04} x_\pm=-\sigma\tau\pm\sqrt{(1-\sigma^2)(1-\tau^2)}; \end{equation} $x_-$ and $x_+$ are called turning points. We have $-1\le x_-\le x_+\le 1$ when $\alpha$ and $\beta$ are positive.
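For orientation, the following small Python sketch (ours, for illustration only) evaluates $\kappa$, $\sigma$, $\tau$ from \eqref{eq:Intro02} and the turning points $x_\pm$ from \eqref{eq:Intro04}; the parameter values are those used later in Figure~\ref{fig:fig01}.
\begin{verbatim}
import math

def turning_points(n, alpha, beta):
    kappa = n + 0.5 * (alpha + beta + 1)
    sigma = (alpha + beta) / (2 * kappa)
    tau = (alpha - beta) / (2 * kappa)
    s = math.sqrt((1 - sigma**2) * (1 - tau**2))
    return kappa, sigma, tau, -sigma*tau - s, -sigma*tau + s

print(turning_points(125, 90.0, 75.0))
# kappa = 208, sigma = 165/416, tau = 15/416,
# x_minus ~ -0.931, x_plus ~ 0.903
\end{verbatim}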
When $\sigma^2+\tau^2=1$, one of the turning points $x_\pm$ is zero. When we skip the term $\frac14(x^2+3)$ in the numerator of \eqref{eq:Intro03}, the differential equation becomes one for the Whittaker or Kummer functions, with special case the Laguerre polynomial, and when we take $\alpha=\beta$ the equation becomes a differential equation for the Gegenbauer polynomial. When $\kappa$ is large we can make a few observations. \begin{enumerate} \item If $n\gg \alpha+\beta$, then $\sigma\to0$ and $\tau\to0$. Hence, $x_-\to-1$ and $x_+\to1$. This is the standard case for large degree: the zeros are spread over the complete interval $(-1,1)$. \item When $\alpha$ and/or $\beta$ become large as well, the zeros are inside the interval $(x_-,x_+)$. When, in addition, $\alpha/\beta\to0$, the zeros shift to the right; when $\beta/\alpha\to0$, they shift to the left. See also the limit in \eqref{eq:Intro09}. The zeros become all positive when $x_-\ge0$. In that case $\sigma^2+\tau^2\ge1$. \item When $x$ is in a closed neighborhood around $x_-$ that does not contain $-1$ and $x_+$, an expansion in terms of Airy functions can be given. Similar for $x$ in a closed neighborhood around $x_+$ that does not contain $x_-$ and $1$. The points $x_\pm$ are called turning points of the equation in \eqref{eq:Intro03}. \item When $-1\le x\le x_-(1+a)<x_+$, with $a$ a fixed positive small number, an expansion in terms of Bessel functions can be given. Similar for $x_-<x_+(1-a)\le x\le 1$. The latter case corresponds to the limit \begin{equation}\label{eq:Intro05} \lim_{n\to\infty} n^{-\alpha}P_n^{(\alpha,\beta)}\left(1-\frac{x^2}{2n^2}\right)=\left(\frac{2}{x}\right)^\alpha J_\alpha(x). \end{equation} Also, $\sqrt{x}J_\alpha\left(\alpha\sqrt{x}\right)$ satisfies the differential equation \begin{equation}\label{eq:Intro06} \frac{d^2}{dx^2}w(x)=\left(\alpha^2 \frac{1-x}{4x^2}-\frac{1}{4x^2}\right)w(x), \end{equation} in which $x=1$ is a turning point when $\alpha $ is large. \item If $ \alpha+\beta\gg n$, then $\sigma\to1$ and the turning points $x_-$ and $x_+$ coalesce at~$-\tau$. When $\alpha$ and $\beta$ are of the same order, the point $-\tau$ lies properly inside $(-1,1)$, and this case has been studied in \cite{Olver:1980:UAE} to obtain approximations of Whittaker functions in terms of parabolic cylinder functions. In the present case the parameters are such that the parabolic cylinder functions become Hermite polynomials. This corresponds to the limit (see \cite{Lopez:1999:AOP}) \begin{equation}\label{eq:Intro07} \lim_{\alpha,\beta\to\infty} \left(\frac{8}{\alpha+\beta}\right)^{n/2}\, P_n^{(\alpha,\beta)}\left(x\sqrt{{\frac{2}{\alpha+\beta}}}- \frac{\alpha-\beta}{\alpha+\beta}\right)=\frac1{n!}\,H_n(x), \end{equation} derived under the conditions \begin{equation}\label{eq:Intro08} x={\cal O}(1),\quad n={\cal O}(1),\quad \frac{\alpha-\beta}{\alpha+\beta}=o(1),\quad \alpha, \beta\to\infty. \end{equation} \item If $\alpha\gg\beta$, then $\tau\to1$, and $x_-$ and $x_+$ coalesce at $-\sigma$; if $\beta/\kappa=o(1)$, then the collision will happen at $-1$. Approximations in terms of Laguerre polynomials can be given. This corresponds to the limit \begin{equation}\label{eq:Intro09} \lim_{\alpha\to\infty}P^{(\alpha,\beta)}_{n}\bigl((2x/\alpha)-1\bigr)=(-1)^{n}L^{(\beta)}_{n}(x). \end{equation} Similar for $\beta\gg\alpha$, in which case $L^{(\alpha)}_{n}(x)$ becomes the approximant.
\end{enumerate} As explained earlier, we consider in this paper the second case: new expansions of $P^{(\alpha,\beta)}_{n}(x)$, and its zeros and weights in terms of elementary functions. Preliminary results regarding the role of Gegenbauer and Laguerre polynomials as approximants can be found in \cite{Temme:1990:PAE}. \section{An integral representation and its saddle points}\label{sec:Jacnabelfunint} The Rodrigues formula for the Jacobi polynomials reads (see \cite[\S18.15(ii)]{Koornwinder:2010:OPS}) \begin{equation}\label{eq:int01} P_n^{(\alpha,\beta)}(x)=\frac{(-1)^n}{2^n n!\,w(x)}\frac{d^n}{dx^n}\left(w(x)(1-x^2)^n\right), \end{equation} where \begin{equation}\label{eq:int02} w(x)=(1-x)^\alpha(1+x)^\beta. \end{equation} This gives the Cauchy integral representation \begin{equation}\label{eq:int03} P_n^{(\alpha,\beta)}(x)=\frac{(-1)^n}{2^n\,w(x)}\frac{1}{2\pi i}\int_{{\cal C}} \frac{w(z)(1-z^2)^n}{(z-x)^{n+1}}\,dz, \quad x\in(-1,1), \end{equation} where the contour ${{\cal C}}$ is a circle around the point $z=x$ with radius small enough to have the points $\pm1$ outside the circle. We write this in the form\footnote{The multi-valued functions of the integrand are discussed in Remark~\ref{rem:rem01}.} \begin{equation}\label{eq:int04} P_n^{(\alpha,\beta)}(x)=\frac{-1}{2^n\,w(x)}\frac{1}{2\pi i}\int_{{\cal C}} e^{-\kappa \phi(z)}\,\frac{dz}{\sqrt{(1-z^2)(x-z)}}, \end{equation} where \begin{equation}\label{eq:int05} \kappa=n+\tfrac12(\alpha+\beta+1), \end{equation} and \begin{equation}\label{eq:int06} \phi(z)=-\frac{n+\alpha+\frac12}{\kappa}\ln(1-z)-\frac{n+\beta+\frac12}{\kappa}\ln(1+z)+\frac{n+\frac12}{\kappa}\ln(x-z). \end{equation} We introduce the notation \begin{equation}\label{eq:int07} \sigma=\frac{\alpha+\beta}{2\kappa},\quad \tau=\frac{\alpha-\beta}{2\kappa}, \end{equation} and it follows that \begin{equation}\label{eq:int08} \phi(z)=-(1+\tau)\ln(1-z)-(1-\tau)\ln(1+z)+(1-\sigma)\ln(x-z). \end{equation} The saddle points $z_{\pm}$ follow from the zeros of \begin{equation}\label{eq:int09} \phi^\prime(z)= - \frac{(1+\sigma)z^2+2(\tau-x)z+1-\sigma-2\tau x}{(1-z^2)(x-z)}, \end{equation} and are given by \begin{equation}\label{eq:int10} \begin{array}{@{}r@{\;}c@{\;}l@{}} z_{\pm}&=&\displaystyle{\frac{x-\tau\pm iU(x)}{1+\sigma},}\\[8pt] U(x)&=&\sqrt{1-2\sigma\tau x-\tau^2-\sigma^2-x^2}=\sqrt{(x_+-x)(x-x_-)}, \end{array} \end{equation} where (see also \eqref{eq:Intro04}) \begin{equation}\label{eq:int11} x_{\pm}=-\sigma\tau\pm\sqrt{(1-\sigma^2)(1-\tau^2)}. \end{equation} In this representation we assume that $x_-\le x \le x_+$, the $x$-interval in which the zeros of the Jacobi polynomial are located. \begin{remark}\label{rem:rem01} The starting integrand in \eqref{eq:int03} has a pole at $z=x$, while the one of \eqref{eq:int04} shows an algebraic singularity at $z=x$ and $\phi(z)$ defined in \eqref{eq:int06} has a logarithmic singularity at this point. To handle this from the viewpoint of multi-valued functions, we can introduce a branch cut for the functions involved from $z=x$ to the left, assuming that the phase of $z-x$ is zero when $z>x$, equals $-\pi$ when $z$ approaches $-1$ on the lower part of the saddle point contour of the integral in \eqref{eq:int04}, and $+\pi$ on the upper side. Because the saddle points $z_\pm$ stay off the interval $(-1,1)$, we do not need to consider function values on the branch cuts for the asymptotic analysis.
\eoremark \end{remark} \section{Deriving the asymptotic expansion}\label{sec:Jacnabelfun} We derive an expansion in terms of elementary functions which is valid for $x\in[x_-(1+\delta),x_+(1-\delta)]$, where $x_\pm$ are the turning points defined in \eqref{eq:int11} and $\delta$ is a fixed positive small number. Also, we assume that $\sigma\in[0,\sigma_0]$ and $\tau\in[-\tau_0,\tau_0]$, where $\sigma_0$ and $\tau_0$ are fixed positive numbers smaller than $1$. The case $\sigma\to1$ is explained in Case~5 of Section~\ref{sec:phen}. A similar phenomenon occurs when $\tau\to\pm1$. First we consider contributions from the saddle point $z_+$ using the transformation \begin{equation}\label{eq:Jacasymp01} \phi(z)-\phi(z_+)=\tfrac12w^2 \end{equation} for the contour from $z=+1$ to $z=-1$ through $z_+$, with $\phi(z)$ and $z_+$ given in \eqref{eq:int08} and \eqref{eq:int10}. This transforms the part of the integral in \eqref{eq:int04} that runs with $\Im z\ge0$ into \begin{equation}\label{eq:Jacasymp02} P^+=\frac{e^{-\kappa\phi(z_+)}}{2^n\,w(x)}\frac{1}{2\pi i}\int_{-\infty}^\infty e^{-\frac12\kappa w^2}f_+(w)\,dw, \end{equation} where \begin{equation}\label{eq:Jacasymp03} f_+(w)= \frac{1}{\sqrt{(1-z^2)(x-z)}}\frac{dz}{dw},\quad \frac{dz}{dw}=\frac{w}{\phi^\prime(z)}. \end{equation} We expand $\displaystyle{f_+(w)=\sum_{j=0}^\infty f_j^+w^j}$, where \begin{equation}\label{eq:Jacasymp04} f_0^+= \frac{1}{\sqrt{(1-z_+^2)(x-z_+)\phi^{\prime\prime}(z_+)}}=\frac{e^{\frac14\pi i}}{\sqrt{2U(x)}}, \end{equation} and $U(x)$ is defined in \eqref{eq:int10}. Because the contribution from the saddle point $z_-$ is the complex conjugate of that from $z_+$\footnote{We assume that $x\in(x_-,x_+)$ and that $\alpha$ and $\beta$ are positive.}, we take twice the real part of the contribution from $z_+$ and obtain the expansion \begin{equation}\label{eq:Jacasymp05} P_n^{(\alpha,\beta)}(x)\sim\Re\frac{e^{-\kappa\phi(z_+)-\frac14\pi i}}{2^{n}\,w(x)\sqrt{\pi \kappa U(x)}}\,\sum_{j=0}^\infty \frac{c_{j}^+}{\kappa^j}, \quad c_{j}^+=2^j\left(\tfrac12\right)_j \frac{f_{2j}^+}{f_0^+}. \end{equation} Evaluating $\phi(z_+)$ we find \begin{equation}\label{eq:Jacasymp06} \begin{array}{@{}r@{\;}c@{\;}l@{}} \phi(z_+)&=&-\ln 2+\psi+\xi(x)+i\chi(x),\\[8pt] \psi&=&-\frac12(1-\tau)\ln(1-\tau)-\frac12(1+\tau)\ln(1+\tau)\ +\\[8pt] &&\frac12(1+\sigma)\ln(1+\sigma)+\frac12(1-\sigma)\ln(1-\sigma),\\[8pt] \xi(x)&=&-\frac12(\sigma+\tau)\ln(1-x)-\frac12(\sigma-\tau)\ln(1+x),\\[8pt] \chi(x)&=&\displaystyle{(\tau+1)\arctan\frac{U(x)}{1-x+\sigma+\tau}+(\tau-1)\arctan\frac{U(x)}{1+x+\sigma-\tau}\ +}\\[8pt] &&(1-\sigma)\,{\rm{atan}}2(-U(x),\tau+x\sigma). \end{array} \end{equation} \begin{figure} \begin{center} \epsfxsize=5cm \epsfbox{chi.eps} \caption{ \label{fig:fig01} The quantity $\chi(x)$ defined in \eqref{eq:Jacasymp06} for $x\in(x_-,x_+)$; $\alpha=90$, $\beta=75$, $n=125$. For these values, $\kappa=208$, $\sigma=\frac{165}{416}$, $\tau=\frac{15}{416}$, $x_-=-0.931$, $x_+=0.903$.} \end{center} \end{figure} In Figure~\ref{fig:fig01} we show a graph of $\chi(x)$ on $(x_-,x_+)$ for $\alpha=90$, $\beta=75$, $n=125$. For these values, $\kappa=208$, $\sigma=\frac{165}{416}$, $\tau=\frac{15}{416}$, $x_-=-0.931$, $x_+=0.903$. At the left endpoint we have $\chi(x_-)=-(1-\sigma)\pi=-1.896$. \begin{remark}\label{rem:rem02} The denominators of the first and second arctan functions of $\chi(x)$ in \eqref{eq:Jacasymp06} are always positive on $(x_-,x_+)$; this follows easily from the relations in \eqref{eq:int07}.
The function ${\rm{atan}}2(y,x)$ in the third term of $\chi(x)$ denotes the phase $\in(-\pi,\pi]$ of the complex number $x+iy$. Because $\tau+x\sigma$ may be negative on $(x_-,x_+)$ we cannot use the standard arctan function for that term. \eoremark \end{remark} Observe that $e^{-\kappa\xi(x)}=\sqrt{w(x)}$, with $w(x)$ defined in \eqref{eq:int02}. To compute $x$ from $\chi(x)$, for example by using a Newton procedure, it is convenient to know that \begin{equation}\label{eq:Jacasymp07} \frac{d\chi(x)}{dx}=\frac{U(x)} {\left(1-x^2\right)}. \end{equation} We return to the result in \eqref{eq:Jacasymp05} and split the coefficients of \eqref{eq:Jacasymp05} into real and imaginary parts. We write $c_j^+=p_j+iq_j$, and obtain \begin{equation}\label{eq:Jacasymp08} \begin{array}{@{}r@{\;}c@{\;}l@{}} P_n^{(\alpha,\beta)}(x)&=&\displaystyle{\frac{2^{\frac12(\alpha+\beta+1)}e^{-\kappa\psi}} {\sqrt{\pi \kappa w(x)U(x)}}W(x)},\\[8pt] W(x)&=&\displaystyle{\cos\left(\kappa\chi(x)+\tfrac14\pi\right)P(x)+\sin\left(\kappa\chi(x)+\tfrac14\pi\right)Q(x),} \end{array} \end{equation} with expansions \begin{equation}\label{eq:Jacasymp09} P(x)\sim \sum_{j=0}^\infty \frac{p_{j}}{\kappa^j},\quad Q(x)\sim \sum_{j=0}^\infty \frac{q_{j}}{\kappa^j}. \end{equation} Because $c_0^+=1$, we have $p_0=1$, $q_0=0$. To evaluate the coefficients $f_{2j}^+$ of the expansion in \eqref{eq:Jacasymp05}, we need the coefficients $z_j^+$ of the expansion $z=z_++\sum_{j=1}^\infty z_j^+ w^j$ that follow from \eqref{eq:Jacasymp01}. The first values are \begin{equation}\label{eq:Jacasymp10} \begin{array}{@{}r@{\;}c@{\;}l@{}} z_2^+&=&-\tfrac16 z_1^4\phi_3,\quad z_3^+=\displaystyle{\tfrac1{72}z_1^5\left(5z_1^2\phi_3^2-3\phi_4\right)},\\[8pt] z_4^+&=&\displaystyle{-\tfrac1{1080}z_1^6\left(9\phi_5-45z_1^2\phi_3\phi_4+40z_1^4\phi_3^3\right),} \end{array} \end{equation} where $z_1=z_1^+=1/\sqrt{\phi^{\prime\prime}(z_+)}$ and $\phi_j$ denotes the $j$th derivative of $\phi(z)$ at the saddle point $z=z_+$ defined in \eqref{eq:int10}. With these coefficients we expand $f_+(w)$ defined in \eqref{eq:Jacasymp03}. This gives \begin{equation}\label{eq:Jacasymp11} \begin{array}{@{}r@{\;}c@{\;}l@{}} c_1^+&=&\displaystyle{-\frac{ z_+}{8z_1 (1- z_+^2)^2 (x- z_+)^2}}\Bigl(-6 z_1^3 z_+^2+3 z_1^3-72 z_1 z_2 z_+^2 x\ +\\[8pt] &&24 z_1 z_+ z_2 x^2-24 z_1 z_+^3 z_2 x^2-48 z_3 x z_+-48 z_3 z_+^2 x^2+96 z_3 z_+^3 x\ +\\[8pt] &&24 z_3 z_+^4 x^2-48 z_3 z_+^5 x-12 z_1 z_+ z_2+48 z_1 z_+^3 z_2-48 z_3 z_+^4\ +\\[8pt] &&24 z_3 z_+^6+12 z_1 z_2 x-36 z_1 z_2 z_+^5-4 z_1^3 x z_++8 z_1^3 z_+^2 x^2-20 z_1^3 z_+^3 x\ + \\[8pt] &&4 z_1^3 x^2+15 z_1^3 z_+^4+24 z_3 x^2+24 z_3 z_+^2+60 z_1 z_2 z_+^4 x\Bigr), \end{array} \end{equation} where $z_j$ denotes $z_j^+$. The coefficients $p_1$ and $q_1$ of the expansions in \eqref{eq:Jacasymp09} follow from $c_1^+=p_1+iq_1$. \subsection{Expansion of the derivative}\label{sec:Pderiv} For the weights of the Gauss quadrature it is convenient to have an expansion of $\displaystyle{\frac{d}{dx}}P_n^{(\alpha,\beta)}(x)$. Of course this follows from using \eqref{eq:Jacasymp08} with different values of $\alpha$ and $\beta$ and the relation \begin{equation}\label{eq:Jacasymp12} \frac{d}{dx}P_n^{(\alpha,\beta)}(x)=\tfrac{1}{2}\left(\alpha+\beta+n+1\right)P_{n-1}^{(\alpha+1,\beta+1)}(x), \end{equation} but it is useful to have a representation in terms of the same parameters.
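Before turning to the derivative, we note that the expansion \eqref{eq:Jacasymp08} is easy to test numerically. The following Python sketch (ours; it keeps only the leading order $P(x)\approx1$, $Q(x)\approx0$ of \eqref{eq:Jacasymp09} and uses SciPy's \texttt{eval\_jacobi} as reference) implements $\chi(x)$, $\psi$ and $U(x)$ literally as defined above, for the parameter values of Example~\ref{exemp:ex01} below; for $x$ well inside $(x_-,x_+)$ the two values agree up to a relative error of order $1/\kappa$.
\begin{verbatim}
import numpy as np
from scipy.special import eval_jacobi

n, alpha, beta = 25, 50.0, 41.0          # parameters of the example in Sec. 5
kappa = n + 0.5 * (alpha + beta + 1)
sigma = (alpha + beta) / (2 * kappa)
tau = (alpha - beta) / (2 * kappa)

def U(x):
    return np.sqrt(1 - 2*sigma*tau*x - tau**2 - sigma**2 - x**2)

def chi(x):
    u = U(x)
    return ((tau + 1) * np.arctan(u / (1 - x + sigma + tau))
            + (tau - 1) * np.arctan(u / (1 + x + sigma - tau))
            + (1 - sigma) * np.arctan2(-u, tau + x * sigma))

psi = (-0.5*(1 - tau)*np.log(1 - tau) - 0.5*(1 + tau)*np.log(1 + tau)
       + 0.5*(1 + sigma)*np.log(1 + sigma)
       + 0.5*(1 - sigma)*np.log(1 - sigma))

def P_asym(x):
    w = (1 - x)**alpha * (1 + x)**beta
    amp = (2**(0.5*(alpha + beta + 1)) * np.exp(-kappa * psi)
           / np.sqrt(np.pi * kappa * w * U(x)))
    return amp * np.cos(kappa * chi(x) + 0.25 * np.pi)

for x in (-0.4, 0.0, 0.4):
    print(x, P_asym(x), eval_jacobi(n, alpha, beta, x))
\end{verbatim}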
By straightforward differentiation of \eqref{eq:Jacasymp08} we obtain \begin{equation}\label{eq:Jacasymp13} \begin{array}{@{}r@{\;}c@{\;}l@{}} \displaystyle{\frac{d}{dx}P_n^{(\alpha,\beta)}(x)}&=& \displaystyle{-\sqrt{\frac{\kappa}{\pi}}\,2^{\frac12(\alpha+\beta+1)}e^{-\kappa\psi}\chi^\prime(x)A(x) \ \times}\\[8pt] &&\displaystyle{\left(\sin\left(\kappa\chi(x)+\tfrac14\pi\right)R(x)-\cos\left(\kappa\chi(x)+\tfrac14\pi\right)S(x)\right)}, \end{array} \end{equation} where $\chi^\prime(x)$ is given in \eqref{eq:Jacasymp07} and \begin{equation}\label{eq:Jacasymp14} \begin{array}{@{}r@{\;}c@{\;}l@{}} A(x)&=&\displaystyle{\frac{1}{\sqrt{w(x)U(x)}}},\\[8pt] R(x)&=&\displaystyle{P(x)-\frac{1}{\kappa\chi^\prime(x)}Q^\prime(x)-\frac{A^\prime(x)}{\kappa A(x)\chi^\prime(x)}Q(x)},\\[8pt] S(x)&=&\displaystyle{Q(x)+\frac{1}{\kappa\chi^\prime(x)}P^\prime(x)+\frac{A^\prime(x)}{\kappa A(x)\chi^\prime(x)}P(x)}. \end{array} \end{equation} We have the expansions \begin{equation}\label{eq:Jacasymp15} R(x)\sim \sum_{j=0}^\infty \frac{r_{j}}{\kappa^j},\quad S(x)\sim \sum_{j=0}^\infty \frac{s_{j}}{\kappa^j}, \end{equation} where the coefficients follow from the relations in \eqref{eq:Jacasymp14}. The first coefficients are $r_0=p_0=1$, $s_0=q_0=0$, and \begin{equation}\label{eq:Jacasymp16} r_1=p_1,\quad s_1=q_1+\frac{A^\prime(x)}{A(x)\chi^\prime(x)}. \end{equation} \section{Expansion of the zeros}\label{sec:Jacnabzer} A zero $x_\ell$, $1\le \ell\le n$, of $P_n^{(\alpha,\beta)}(x)$ follows from the zeros of (see \eqref{eq:Jacasymp08}) \begin{equation}\label{eq:Jaczeros01} W(x)=\cos\left(\kappa\chi(x)+\tfrac14\pi\right)P(x)+\sin\left(\kappa\chi(x)+\tfrac14\pi\right)Q(x), \end{equation} where $\chi(x)$ is defined in \eqref{eq:Jacasymp06}. For a first approximation we put the cosine term equal to zero. That is, we can write \begin{equation}\label{eq:Jaczeros02} \kappa\chi(x)+\tfrac14\pi=\tfrac12\pi-(n+1-\ell)\pi, \end{equation} where $\ell$ is some integer. It appears that this choice of the right-hand side is convenient for finding the $\ell$th zero. Because the expansions in \eqref{eq:Jacasymp09} are valid for $x$ properly inside $(x_-,x_+)$, we may expect that the approximations of the zeros in the middle of this interval will be much better than those near the endpoints. We describe how to compute approximations of all $n$ zeros by considering the zeros of $\cos\left(\kappa\chi(x) +\tfrac14\pi\right)$. We start with $\ell=1$ and using \eqref{eq:Jaczeros02} we compute $\chi_1=\left(\frac14-n\right)\pi/\kappa$. Next we compute an approximation of the zero $x_1$ by inverting the equation $\chi(x)=\chi_1$, where $\chi(x)$ is defined in \eqref{eq:Jacasymp06}. For a Newton procedure we can use $x_-+1/n$ as a starting value. \begin{example}\label{exemp:ex01} When we take $\alpha=50$, $\beta=41$, $n=25$, we have $\kappa=71$, $\sigma=91/142$, $\tau=9/142$. We find $\chi_1= -1.095133$ and the starting value of the Newton procedure is $x= -0.7667437$. We find $x_1\doteq -0.7415548$. Comparing this with the first zero computed by using the solver of Maple with Digits = 16, we find a relative error $0.00074$. For the next zero $x_2$, we compute $\chi_2$ from \eqref{eq:Jaczeros02} with $\ell=2$, use $x_1$ as a starting value for the Newton procedure, and find $x_2 \doteq-0.682106$, with relative error $0.00032$. And so on. The best result is for $x_{13}$ with relative error $0.000013$, and the worst result is for $x_{25}$ with a relative error $0.0010$.
\eoexample \end{example} \begin{remark}\label{rem:rem03} We don't have a proof that the zero found in this way always corresponds to the $\ell$th zero when we start with \eqref{eq:Jaczeros02}. In all the tests we have performed, this correspondence was confirmed. \eoremark \end{remark} To obtain higher approximations of the zeros, we use the method described in our earlier papers. We assume that the zero $x_\ell$ has an asymptotic expansion \begin{equation}\label{eq:Jaczeros03} x_\ell=\xi_0+{\varepsilon},\quad {\varepsilon}\sim \frac{\xi_2}{\kappa^2}+\frac{\xi_4}{\kappa^4}+\ldots, \end{equation} where $\xi_0$ is the value obtained as a first approximation by the method just described. The function $W(x)$ defined in \eqref{eq:Jaczeros01} can be expanded at $\xi_0$ and we have \begin{equation}\label{eq:Jaczeros04} W(x_\ell)=W(\xi_0+{\varepsilon})=W(\xi_0)+\frac{{\varepsilon}}{1!}W^\prime(\xi_0)+ \frac{{\varepsilon}^2}{2!}W^{\prime\prime}(\xi_0)+\ldots = 0, \end{equation} where the derivatives are with respect to $x$. We find upon substituting the expansions of ${\varepsilon}$ and those of $P$ and $Q$ given in \eqref{eq:Jacasymp09}, and comparing equal powers of $\kappa$, that the first coefficients are \begin{equation}\label{eq:Jaczeros05} \begin{array}{@{}r@{\;}c@{\;}l@{}} \xi_2&=&\displaystyle{\frac{\left(1-x^2\right)q_1(x)}{U(x)}},\\[8pt] \xi_4&=&\displaystyle{\frac{1}{6U(x)^4}}\Bigl(3x^5q_1^2+3x^4q_1^2\sigma\tau-6x^3q_1^2-6x^2q_1^2\sigma\tau+3q_1^2x+3q_1^2\sigma\tau\ +\\[8pt] &&\bigl(6q_1^\prime q_1x^4+6x^3q_1^2-12q_1^\prime x^2q_1-6xq_1^2+6q_1^\prime q_1\bigr)U(x)^2\ +\\[8pt] &&\bigl(6p_2x^2q_1+2q_1^3x^2+6q_3-6p_2q_1-6q_3x^2-2q_1^3\bigr)U(x)^3\Bigr), \end{array} \end{equation} where $U(x)$ is defined in \eqref{eq:int10}, and $x$ takes the value of the first approximation of the zero, obtained as described above. When we take the same values $\alpha=50$, $\beta=41$, $n=25$ as in Example~\ref{exemp:ex01}, and use \eqref{eq:Jaczeros03} with the term $\xi_2/\kappa^2$ included, we obtain for the zero $x_{13}$ a relative error $0.80\times10^{-9}$. With the term $\xi_4/\kappa^4$ also included, we find for $x_{13}$ a relative error $0.13\times10^{-12}$. A more extensive test of the expansion is shown in Figure~\ref{fig:fig02}. The label $\ell$ on the abscissa represents the order of the zero (starting from $\ell = 1$ for the smallest zero). In this figure we compare the approximations to the zeros obtained with the asymptotic expansion against the results of a Maple implementation (with a large number of digits) of an iterative algorithm which uses the global fixed point method of \cite{Segura:2010:RCO}. The Jacobi polynomials used in this algorithm are computed by using the intrinsic Maple function. As before, we use \eqref{eq:Jaczeros03} with the term $\xi_2/\kappa^2$ included. As can be seen, for $n = 100$ the use of the expansion allows the computation of the zeros $x_\ell$, $10\le\ell\le90$, with absolute error less than $10^{-8}$. When $n = 1000$, an absolute accuracy better than $10^{-12}$ can be obtained for about 90\% of the zeros of the Jacobi polynomials. The results become less accurate for the zeros near the endpoints $\pm1$, as expected.
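The procedure just described is compact enough to fit in a short Python sketch (ours; it computes only the first approximations $\xi_0$, without the corrections of \eqref{eq:Jaczeros03}, and uses SciPy's Golub--Welsch routine \texttt{roots\_jacobi} as reference). With $\chi(x)$ and $U(x)$ as in the earlier sketch, the Newton iteration uses $\chi^\prime(x)=U(x)/(1-x^2)$ from \eqref{eq:Jacasymp07}; the observed accuracy is of the order quoted in Example~\ref{exemp:ex01}.
\begin{verbatim}
import numpy as np
from scipy.special import roots_jacobi

n, alpha, beta = 25, 50.0, 41.0
kappa = n + 0.5 * (alpha + beta + 1)
sigma = (alpha + beta) / (2 * kappa)
tau = (alpha - beta) / (2 * kappa)
s = np.sqrt((1 - sigma**2) * (1 - tau**2))
x_minus, x_plus = -sigma*tau - s, -sigma*tau + s

def U(x):
    return np.sqrt(1 - 2*sigma*tau*x - tau**2 - sigma**2 - x**2)

def chi(x):
    u = U(x)
    return ((tau + 1) * np.arctan(u / (1 - x + sigma + tau))
            + (tau - 1) * np.arctan(u / (1 + x + sigma - tau))
            + (1 - sigma) * np.arctan2(-u, tau + x * sigma))

zeros, x = [], x_minus + 1.0 / n          # starting value from the text
for ell in range(1, n + 1):
    chi_ell = (0.25 - (n + 1 - ell)) * np.pi / kappa   # eq. (Jaczeros02)
    for _ in range(30):                   # Newton, chi'(x) = U/(1 - x^2)
        x = x - (chi(x) - chi_ell) * (1 - x*x) / U(x)
        x = np.clip(x, x_minus + 1e-12, x_plus - 1e-12)
    zeros.append(x)

ref = roots_jacobi(n, alpha, beta)[0]
print(np.max(np.abs(np.array(zeros) - ref)))   # worst absolute error
\end{verbatim}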
\begin{figure} \epsfxsize=15cm \epsfbox{figure1.eps} \caption{ \label{fig:fig02} Performance of the asymptotic expansion for computing the zeros of $P^{(\alpha,\beta)}_n(x)$ for $\alpha=50$, $\beta=41$ and $n=100,\,1000$.} \end{figure} In Figure~\ref{fig:fig03} we show the absolute errors for $n=100$ and $\alpha=50$, $\beta=41$ compared with $\alpha=150$, $\beta=141$. We see that the accuracy is slightly better for the larger parameters, and that the asymptotics is quite uniform when $\alpha$ and $\beta$ assume larger values. \begin{figure} \epsfxsize=15cm \epsfbox{figure2.eps} \caption{ \label{fig:fig03} Performance of the asymptotic expansion for computing the zeros for $n=100$ and $\alpha=50$, $\beta=41$ compared with $\alpha=150$, $\beta=141$. } \end{figure} \section{The weights of the Gauss-Jacobi quadrature}\label{sec:weights} As we did in \cite{Gil:2019:NIG}, and in our earlier paper \cite{Gil:2018:GHL} for the Gauss--Hermite and Gauss--Laguerre quadratures, it is convenient to introduce scaled weights. In terms of the derivatives of the Jacobi polynomials, the classical form of the weights of the Gauss-Jacobi quadrature can be written as \begin{equation}\label{eq:weights01} \begin{array}{@{}r@{\;}c@{\;}l@{}} w_\ell &=& \displaystyle{\frac{M_{n,\alpha,\beta}}{ \left(1-x_\ell^2\right) P_n ^{(\alpha ,\beta)\prime}(x_\ell)^2}}, \\ &&\\ M_{n,\alpha,\beta}&=&\displaystyle{2^{\alpha+\beta+1}\frac{\Gamma (n+\alpha+1)\Gamma (n+\beta+1)}{n! \Gamma (n+\alpha+\beta+1 )}}. \end{array} \end{equation} In Figure~\ref{fig:fig04} we show the relative errors in the computation of the weights $w_\ell$ defined in \eqref{eq:weights01}, with the derivative of the Jacobi polynomial computed by using the relation in \eqref{eq:Jacasymp12}. We have used the representation in \eqref{eq:Jacasymp08}, with the asymptotic series \eqref{eq:Jacasymp09} truncated after $j = 3$ and the expansion \eqref{eq:Jaczeros03} for the nodes with the term $\xi_2/\kappa^2$ included. The relative errors are obtained by comparison with high-precision results computed with Maple. \begin{figure} \epsfxsize=15cm \epsfbox{weights2.eps} \caption{ \label{fig:fig04} Performance of the computation of the weights $w_\ell$ by using the asymptotic expansion of the Jacobi polynomial for $\alpha=50$, $\beta=41$ and $n=100,\,1000$.} \end{figure} As an alternative we consider the scaled weights defined by \begin{equation}\label{eq:weights02} \omega_\ell=\frac{1}{v^\prime(x_\ell)^{2}}, \end{equation} where \begin{equation}\label{eq:weights03} v(x)=C_{n,\alpha,\beta}\, (1-x)^{a}(1+x)^{b} P_n^{(\alpha,\beta)}(x), \end{equation} and we choose $a$ and $b$ such that $v^{\prime\prime}(x_\ell)=0$; $C_{n,\alpha,\beta}$ does not depend on $x$, and will be chosen later. We have \begin{equation}\label{eq:weights04} \begin{array}{@{}r@{\;}c@{\;}l@{}} v^\prime(x)&=&C_{n,\alpha,\beta}\bigl( \left(-a(1-x)^{a-1}(1+x)^{b} +b(1-x)^{a}(1+x)^{b-1}\right)P_n^{(\alpha,\beta)}(x)\ +\\[8pt] &&(1-x)^{a}(1+x)^{b}P_n^{(\alpha,\beta)\prime}(x)\bigr). \end{array} \end{equation} Evaluating $v^{\prime\prime}(x_\ell)$, we find \begin{equation}\label{eq:weights05} \begin{array}{@{}r@{\;}c@{\;}l@{}} v^{\prime\prime}(x_\ell)&=&C_{n,\alpha,\beta}\displaystyle{\frac{(1-x_\ell)^{a}(1+x_\ell)^{b}}{1-x_\ell^2}}\ \times\\[8pt] &&\left((1-x_\ell^2)P_n^{(\alpha,\beta)\prime\prime}(x_\ell)+ 2\left(b-a-(a+b)x_\ell\right)P_n^{(\alpha,\beta)\prime}(x_\ell)\right), \end{array} \end{equation} where we skip the term containing $P_n^{(\alpha,\beta)}(x_\ell)$, because $x_\ell$ is a zero.
The differential equation of the Jacobi polynomials is \begin{equation}\label{eq:weights06} \left(1-x^2\right) y^{\prime\prime}(x)+\left(\beta-\alpha-(\alpha+\beta+2)x\right)y^\prime(x)+n(\alpha+\beta+n+1)y(x)=0, \end{equation} and we see that $v^{\prime\prime}(x_\ell)=0$ if we take $a=\frac12(\alpha+1)$, $b=\frac12(\beta+1)$. We obtain \begin{equation}\label{eq:weights07} v(x)=C_{n,\alpha,\beta}\, (1-x)^{\frac12(\alpha+1)}(1+x)^{\frac12(\beta+1)} P_n^{(\alpha,\beta)}(x), \end{equation} with properties \begin{equation}\label{eq:weights08} v^\prime(x_\ell)=C_{n,\alpha,\beta}\, (1-x_\ell)^{\frac12(\alpha+1)}(1+x_\ell)^{\frac12(\beta+1)} P_n^{(\alpha,\beta)\prime}(x_\ell), \quad v^{\prime\prime}(x_\ell)=0. \end{equation} The weights $w_\ell$ are related to the scaled weights $\omega_\ell$ by \begin{equation}\label{eq:weights09} w_\ell =M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}(1-x_\ell)^{\alpha}(1+x_\ell)^{\beta} \omega_\ell. \end{equation} The advantage of computing scaled weights is that, as described in \cite{Gil:2018:GHL}, scaled weights do not underflow/overflow for large parameters. In addition, they are well-conditioned as a function of the roots $x_\ell$. Indeed, introducing the notation \begin{equation}\label{eq:weights10} V(x)=\frac{1}{v^\prime(x)^{2}}, \end{equation} the scaled weights are $\omega_\ell=V(x_\ell)$ and $V^\prime(x_\ell)=0$ because $v^{\prime\prime}(x_\ell)=0$. The vanishing derivative of $V(x)$ at $x_\ell$ may result in a more accurate numerical evaluation of the scaled weights. When considering the representation of the Jacobi polynomials in \eqref{eq:Jacasymp08}, the function $v(x)$ can be written as \begin{equation}\label{eq:weights11} v(x)= \frac{2^{\frac12(\alpha+\beta+1)}}{\sqrt{\pi \kappa}}\,C_{n,\alpha,\beta}e^{-\kappa\psi}Z(x)W(x),\quad Z(x)=\sqrt{\frac{1-x^2}{U(x)}}, \end{equation} where $U(x)$ is defined in \eqref{eq:int10}. For scaling $v(x)$ we choose \begin{equation}\label{eq:weights12} C_{n,\alpha,\beta}=2^{-\frac12(\alpha+\beta+1)}e^{\kappa\psi}. \end{equation} This gives \begin{equation}\label{eq:weights13} v(x)= \frac{Z(x)W(x)}{\sqrt{\pi \kappa}}. \end{equation} For the numerical computation of $\psi$ defined in \eqref{eq:Jacasymp06} for small values of $\sigma$ or $\tau$, we can use the expansion \begin{equation}\label{eq:weights14} (1-x)\ln(1-x)+(1+x)\ln(1+x)=\sum_{k=1}^\infty\frac{x^{2k}}{k(2k-1)},\quad \vert x\vert < 1. \end{equation} For computing the scaled Gauss weights it is convenient to have an expansion of the derivative of the function $v(x)$ of \eqref{eq:weights13}, with $W(x)$ defined in \eqref{eq:Jacasymp08} and $Z(x)$ in \eqref{eq:weights11}. We have \begin{equation}\label{eq:weights20} \frac{d}{dx}v(x)=-\sqrt{\frac{\kappa}{\pi}}\chi^\prime(x)Z(x) \left(\sin\left(\kappa\chi(x)+\tfrac14\pi\right)M(x)-\cos\left(\kappa\chi(x)+\tfrac14\pi\right)N(x)\right), \end{equation} where $\chi^\prime(x)$ is given in \eqref{eq:Jacasymp07} and \begin{equation}\label{eq:weights21} \begin{array}{@{}r@{\;}c@{\;}l@{}} M(x)&=&\displaystyle{P(x)-\frac{1}{\kappa}p(x)Q^\prime(x)-\frac{1}{\kappa}q(x)Q(x)},\\[8pt] N(x)&=&\displaystyle{Q(x)+\frac{1}{\kappa} p(x)P^\prime(x)+\frac{1}{\kappa}q(x)P(x)}, \end{array} \end{equation} where \begin{equation}\label{eq:weights22} \begin{array}{@{}r@{\;}c@{\;}l@{}} p(x)&=&\displaystyle{\frac{1}{\chi^\prime(x)}=\frac{1-x^2}{U(x)},}\\[8pt] q(x)&=&\displaystyle{\frac{Z^\prime(x)}{Z(x)\chi^\prime(x)}=\frac{(1-x^2)(x+\sigma\tau)-2xU^2(x)}{2U^3(x)}}.
\end{array}
\end{equation}
We have the expansions
\begin{equation}\label{eq:weights23}
M(x)\sim \sum_{j=0}^\infty \frac{m_{j}}{\kappa^j},\quad N(x)\sim \sum_{j=0}^\infty \frac{n_{j}}{\kappa^j},
\end{equation}
where the coefficients follow from the relations in \eqref{eq:Jacasymp14}. The first coefficients are $m_0=p_0=1$, $n_0=q_0=0$, and for $j=1,2,3,\ldots$
\begin{equation}\label{eq:weights24}
\begin{array}{@{}r@{\;}c@{\;}l@{}}
m_j&=&\displaystyle{p_j-p(x)q_{j-1}^\prime-q(x)q_{j-1},}\\[8pt]
n_j&=&\displaystyle{q_j+p(x)p_{j-1}^\prime+q(x)p_{j-1}.}
\end{array}
\end{equation}
As an example, Figure~\ref{fig:fig05} shows the performance of the asymptotic expansion \eqref{eq:weights20} for computing the scaled weights \eqref{eq:weights02} for $\alpha=50$, $\beta=41$ and $n=1000$. The computation of the non-scaled weights \eqref{eq:weights01} is shown for comparison.
\begin{figure}
\epsfxsize=15cm \epsfbox{scaledweights.eps}
\caption{ \label{fig:fig05} Comparison of the performance of the asymptotic expansions for computing non-scaled \eqref{eq:weights01} and scaled \eqref{eq:weights02} weights for $\alpha=50$, $\beta=41$ and $n=1000$.}
\end{figure}
In Figure~\ref{fig:fig06} and Figure~\ref{fig:fig07} we compare the effect of computing the weights $w_\ell$ defined in \eqref{eq:weights01} and the scaled weights $\omega_\ell$ defined in \eqref{eq:weights02} when the asymptotic expansion of the zeros in \eqref{eq:Jaczeros03} is used with the term $\xi_4/\kappa^4$ either included or not included. From these computations it follows that the scaled weights are well-conditioned as a function of the nodes and therefore not so critically dependent on the accuracy of the nodes. By contrast, the non-scaled weights are less well conditioned, and the accuracy of the nodes is more important.
\begin{figure} \begin{center}
\epsfxsize=15cm \epsfbox{NSWcompa.eps}
\caption{ \label{fig:fig06} Performance of the computation of the weights $w_\ell$ defined in \eqref{eq:weights01} by using the asymptotic expansion of the Jacobi polynomial for $\alpha=50$, $\beta=41$ and $n=1000$. The comparison is between the expansion of the zeros in \eqref{eq:Jaczeros03} with the term $\xi_4/\kappa^4$ included or not included.}
\end{center} \end{figure}
\begin{figure}
\epsfxsize=15cm \epsfbox{SWcompa.eps}
\caption{ \label{fig:fig07} Same as in Figure~\ref{fig:fig06} for the scaled weights $\omega_\ell$ defined in \eqref{eq:weights02}.}
\end{figure}
\subsection{About quantities appearing in the weights}\label{sec:weightscoeff}
First we consider the term $e^{\kappa\psi}$, with $\psi$ given in \eqref{eq:Jacasymp06}.
Using the relations in \eqref{eq:int07}, we have
\begin{equation}\label{eq:weights15}
\begin{array}{ll}
\kappa(1+\tau)=n+\alpha+\frac12, & \kappa(1-\tau)=n+\beta+\frac12,\\[8pt]
\kappa(1+\sigma)=n+\alpha+\beta+\frac12, & \kappa(1-\sigma)=n+\frac12,
\end{array}
\end{equation}
and this gives
\begin{equation}\label{eq:weights16}
\begin{array}{@{}r@{\;}c@{\;}l@{}}
e^{2\kappa\psi}&=&\displaystyle{\frac{\left(n+\alpha+\beta+\frac12\right)^{n+\alpha+\beta+\frac12} \left(n+\frac12\right)^{n+\frac12}} {\left(n+\alpha+\frac12\right)^{n+\alpha+\frac12} \left(n+\beta+\frac12\right)^{n+\beta+\frac12}}}\\[8pt]
&=&\displaystyle{\frac{\Gamma\left( n+\alpha+\beta+\frac12\right)\Gamma\left(n +\frac12 \right)}{\Gamma\left(n+\alpha+\frac12 \right)\Gamma\left( n+\beta+\frac12\right)} \ \frac {\Gamma^*\left(n+\alpha+\frac12 \right)\Gamma^*\left( n+\beta+\frac12\right)} {\Gamma^*\left( n+\alpha+\beta+\frac12\right)\Gamma^*\left(n+\frac12 \right)} \times}\\[8pt]
&&\displaystyle{\sqrt{\frac{\left( n+\alpha+\beta+\frac12\right)\left( n+\frac12\right)}{\left( n+\alpha+\frac12\right)\left( n+\beta+\frac12\right)}}},
\end{array}
\end{equation}
where
\begin{equation}\label{eq:weights17}
\Gamma^*(z)=\sqrt{{z/(2\pi)}}\,e^z z^{-z}\Gamma(z),\quad {\rm ph}\,z\in(-\pi,\pi),\quad z\ne0.
\end{equation}
We have $\Gamma^*(z)=1+{\cal O}(1/z)$ as $z\to\infty$. It follows that (see \eqref{eq:weights01} and \eqref{eq:weights09})
\begin{equation}\label{eq:weights18}
\begin{array}{@{}r@{\;}c@{\;}l@{}}
M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}&=& \displaystyle{ \frac {\Gamma\left(n+\alpha+1 \right)\Gamma\left( n+\beta+1\right)\Gamma\left( n+\alpha+\beta+\frac12\right)\Gamma\left(n+\frac12 \right)} {\Gamma\left(n+\alpha+\frac12 \right)\Gamma\left( n+\beta+\frac12\right)\Gamma\left( n+\alpha+\beta+1\right)\Gamma\left(n+1 \right)}}\ \times\\[8pt]
&&\displaystyle{ \frac {\Gamma^*\left(n+\alpha+\frac12 \right)\Gamma^*\left( n+\beta+\frac12\right)} {\Gamma^*\left( n+\alpha+\beta+\frac12\right)\Gamma^*\left(n+\frac12 \right)} \sqrt{\frac{\left( n+\alpha+\beta+\frac12\right)\left( n+\frac12\right)}{\left( n+\alpha+\frac12\right)\left( n+\beta+\frac12\right)}}. }\end{array}
\end{equation}
Using $\Gamma\left(z+\frac12\right)/\Gamma(z)\sim z^{\frac12}$ as $z\to\infty$, we see that $M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}\sim1$ when $\alpha$, $\beta$ and $n$ are all large. Using more detailed expansions of gamma functions and ratios thereof (see \cite[\S6.5]{Temme:2015:AMI}), we can obtain
\begin{equation}\label{eq:weights19}
M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}\sim 1+\frac{\sigma^2-\tau^2}{12(1-\sigma^2)(1-\tau^2)\kappa}+ \frac{(\sigma^2-\tau^2)^2}{288(1-\sigma^2)^2(1-\tau^2)^2\kappa^2}+\ldots,
\end{equation}
again for $\alpha$, $\beta$ and $n$ all large. As observed in the first lines of Section~\ref{sec:Jacnabelfun}, in the present asymptotics we assume that $\sigma$ and $\vert\tau\vert$ are bounded away from $1$.
\section*{Acknowledgments}
We acknowledge financial support from Ministerio de Ciencia e Innovaci\'on, Spain, projects MTM2015-67142-P (MINECO/FEDER, UE) and PGC2018-098279-B-I00 (MCIU/AEI/FEDER, UE). NMT thanks CWI, Amsterdam, for scientific support.
\section{Introduction}
Ultrashort light pulses with a duration of only a few attoseconds provide direct access to the quantum dynamics of electrons in atoms, molecules, and bulk systems \cite{pazourek2015,calegari2016,nisoli2017,biswas2020}. Thus, attosecond spectroscopy promises unprecedented possibilities for testing fundamental concepts of chemistry, like electronic structure principles or reaction mechanisms. However, the necessary theoretical modeling of molecules interacting with strong and short light pulses is challenging, in particular because both bound electrons and ionized electrons need to be described accurately \cite{palacios2019}. This situation leads to new technical and methodological developments in the area of quantum dynamics, e.g.\ to testing the applicability of time-dependent Density Functional Theory (DFT) \cite{bruner2017,sato2018}, explicit coupling of bound and continuum states \cite{palacios2019}, or first steps towards one-dimensional many-electron models \cite{majorosi2018,majorosi2020}. Despite these developments, analytical and numerical approaches to electron dynamics in strong laser fields often rely on a Single-Active Electron (SAE) assumption \cite{Schafer1993,Yang1993,Walker1994,Awasthi2008,Ivanov2014} where the dynamics of one electron in some effective potential is considered. The underlying idea is that only one ``active'' electron is mainly influenced by the laser field and the other electrons are treated as ``frozen''. By making the SAE assumption, a time-dependent Schr\"odinger equation (TDSE) for one electron in a (classical) laser field is solved and, in this way, many experimental findings can be explained qualitatively. However, the SAE assumption has challenges. For example, finding suitable effective one-electron potentials is an obstacle, especially for systems with more than one nucleus. In such systems the potential is not spherically symmetric and educated guesses based e.g.\ on the density may be useful \cite{abu-samha2010}. Additionally, the SAE assumption is an assumption and not an approximation in the sense that, to our knowledge, there exists no procedure which yields an SAE picture as a limit and which can systematically be improved towards the exact result. It is also sometimes implied that the SAE assumption does not allow one to treat many-electron effects \cite{gordon2006,Ishikawa2015}, although recent studies suggest that e.g.\ field screening effects due to polarization of the other electrons can be included in an SAE approach by hand \cite{romanov2020,abu-samha2020}. To clarify, the SAE assumption does include many-electron effects via the effective potential. It does not, however, describe dynamic changes of the effective many-electron interaction as they may occur e.g.\ during an ionization process. Notwithstanding this, a one-electron theory can in principle incorporate all many-electron effects exactly via time-dependent potentials. In particular, a many-electron description can be reduced to a one-electron description when one-electron observables are of interest. Then, the observables may be obtained in a straightforward way via the one-electron wavefunction obtained from a one-electron Schr\"odinger equation. Effects like the interaction with the laser field, with nuclei, and with other electrons are then part of effective one-electron potentials, and neither photons, nor nuclei, nor other electrons need to be included in the quantum description explicitly as particles.
A reduction of a quantum system of, say, $n$ particles, to a quantum system of $m < n$ particles, is typically based on a semi-classical approximation, e.g.\ when an electron-laser interaction is modeled with a classical laser field or when nuclei are treated as classical particles in the Born-Oppenheimer approximation. However, such a reduction of a quantum system can be made without making approximations by using the Exact Factorization method \cite{abedi2010,abedi2012,gonze2018}: The $n$-particle probability density $|\psi|^2$ is written as the product of a marginal $m$-particle probability density $|\chi|^2$ and a conditional $(n-m)$-particle probability density $|\phi|^2$. The wavefunction $\phi$ describes the $(n-m)$-particle subsystem but also depends parametrically on the remaining $m$ particles. $\phi$ can be used e.g.\ to include quantum effects of the nuclei in a Born-Oppenheimer-like treatment of electrons in a molecule \cite{agostini2018}, or to understand why time is a parameter in quantum mechanics \cite{briggs2000,schild2018}, and the Exact Factorization can naturally be applied multiple times up to a point where only single-particle wavefunctions $\phi_1(1), \phi_2(2;1), \phi_3(3;1,2), \dots$ are left that depend successively on more and more parameters \cite{cederbaum2015}. In contrast, $\chi$ represents the full $n$-particle system but in terms of only $m$ particles, with the effect of the remaining $(n-m)$ particles included as scalar and/or vector potentials. For instance, $\chi$ can represent the dynamics of a molecule in terms of a nuclear wavefunction $\chi$ alone, where the effect of the electrons is contained in potentials. A more abstract use of $\chi$, relevant for the theoretical treatment of many-electron systems, is that it can represent an electronic wavefunction via a set of spin orbitals (a ``fragment'') embedded in an environment of other spin orbitals \cite{lacombe2020}. Another interesting case arises if $\psi(1,\dots,n)$ is an $n$-electron wavefunction and we choose
\begin{align}
|\psi(1,\dots,n)|^2 = |\chi(1)|^2 \, |\phi(2,\dots,n;1)|^2.
\label{eq:eef_ansatz}
\end{align}
Then $|\chi(1)|^2$ is the one-electron density and $\chi(1)$ is a one-electron wavefunction that is obtained as the solution of a one-electron TDSE. It represents the whole $n$-electron system, because it yields (together with the effective one-electron potentials) the one-electron observables of the $n$-electron system, e.g.\ the expectation value of the position or momentum operator. One of the authors introduced \eqref{eq:eef_ansatz} as the Exact Electron Factorization (EEF) \cite{schild2017}, but the idea was already introduced some time before \cite{hunter1986,hunter1987} and is also closely related to Orbital-free DFT \cite{kraisler2020}. A related static approach to tunnel ionization inspired by the Born-Oppenheimer approximation was also proposed \cite{brabec2005,zhao2007}. The EEF extends previous developments by providing equations to calculate the effective one-electron potentials and by applying the formalism to time-dependent processes, in particular to the electron dynamics in strong ultrashort laser fields. Thus, the effective one-electron potentials in the EEF are time-dependent. The main topic of the article at hand is the question of how the time-dependent many-electron effects are encoded in the exact effective potentials, as a step towards the ultimate aim of reproducing those effects approximately but efficiently.
In the following, we first describe the EEF in section \ref{sec:eef}. Finding the exact one-electron EEF potentials seems to be at least as hard as solving the full problem. Hence, we want to identify the relevant features of the exact one-electron potentials for different laser field parameters and the origin of these features, with the aim of learning what needs to be approximated. To achieve this, we consider a simple (spinless) two-electron model of an atom in one dimension, because it already shows relevant many-electron effects but can also be solved numerically for a variety of field parameters. The system is presented in section \ref{sec:model}, which is followed in section \ref{sec:sae} by a conceptual comparison to an SAE assumption based on Kohn-Sham (KS) DFT. In section \ref{sec:tdd} the time-dependent behavior of the effective one-electron potentials is presented and analyzed. Finally, in section \ref{sec:con} we discuss what still needs to be learned and what the path towards the simulation of realistic many-electron systems may look like.
\section{The Exact Electron Factorization}
\label{sec:eef}
In non-relativistic quantum mechanics, the wavefunction of a system of $n$ electrons can be written as a sum of electron permutations of a product $\psi(1,\dots,n;t) \times \xi(1,\dots,n)$, where $\psi(1,\dots,n;t) = \psi(\mathbf{r}_1,\dots,\mathbf{r}_n;t)$ is a spatial wavefunction that depends on the time parameter $t$, and $\xi(1,\dots,n)$ is a spin wavefunction \cite{shpilkin1996}. To simplify the discussion, we write the equations for $n=2$ and we only consider the spatial wavefunction $\psi(\mathbf{r}_1, \mathbf{r}_2;t)$. Generalization to $n>2$ is straightforward by considering $\mathbf{r}_2$ to be the coordinates of all but one electron, see the supplemental information of \cite{schild2017}. In the EEF we write the joint probability density $|\psi(\mathbf{r}_1, \mathbf{r}_2;t)|^2$ as the product of a marginal probability density $|\chi (\mathbf{r}_1;t)|^2$ and a conditional probability density $|\phi (\mathbf{r}_2; \mathbf{r}_1, t)|^2$, or
\begin{equation}
\psi(\mathbf{r}_1, \mathbf{r}_2;t) = \chi(\mathbf{r}_1;t) \phi(\mathbf{r}_2; \mathbf{r}_1, t),
\label{eq:eef}
\end{equation}
where $\chi(\mathbf{r}_1;t)$ is the marginal amplitude and $\phi(\mathbf{r}_2; \mathbf{r}_1, t)$ is the conditional amplitude. Below, $\psi$, $\chi$ and $\phi$ are functions that always depend on $\mathbf{r}_1, \mathbf{r}_2$, and $t$ as indicated in \eqref{eq:eef} and those dependencies are only repeated for emphasis. We require the partial normalization condition
\begin{equation}
\braket{\phi(\mathbf{r}_2; \mathbf{r}_1, t)| \phi(\mathbf{r}_2; \mathbf{r}_1, t)}_2 = 1 , ~~~~ \forall \mathbf{r}_1, t
\label{eq:norm}
\end{equation}
where $\braket{\dots | \dots}_2$ denotes the inner product over electron coordinate(s) $\mathbf{r}_2$. If $\psi$ is normalized to the number of electrons, $\Braket{\psi|\psi} = n$, we have that
\begin{align}
|\chi(\mathbf{r}_1;t)|^2 = \Braket{\psi(\mathbf{r}_1, \mathbf{r}_2;t)|\psi(\mathbf{r}_1, \mathbf{r}_2;t)}_2
\label{eq:chi2}
\end{align}
is the one-electron density, which is also normalized to the total number of electrons, $\Braket{\chi(\mathbf{r}_1;t)|\chi(\mathbf{r}_1;t)}_1 = n$. We note that the magnitude of $\chi$ is determined by \eqref{eq:chi2} but its phase can be chosen, as discussed below. Otherwise, \eqref{eq:eef} and \eqref{eq:norm} define the marginal amplitude $\chi$ and the conditional amplitude $\phi$ unambiguously.
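Numerically, the factorization is straightforward once $\psi$ is given on a grid. A minimal Python sketch (with an arbitrary antisymmetric toy wavefunction, not the model studied later) that computes $|\chi|^2$ from \eqref{eq:chi2}, takes $\chi$ real and positive, and checks the partial normalization condition \eqref{eq:norm}:
\begin{verbatim}
import numpy as np

x = np.linspace(-10.0, 10.0, 401); dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing='ij')
g = lambda u: np.exp(-0.5*u**2)
psi = (X2 - X1)*g(X1)*g(X2)                    # antisymmetric toy state
psi *= np.sqrt(2.0/(np.sum(np.abs(psi)**2)*dx*dx))  # <psi|psi> = n = 2

rho = np.sum(np.abs(psi)**2, axis=1)*dx        # one-electron density
chi = np.sqrt(rho)                             # marginal amplitude (real gauge)
phi = psi/chi[:, None]                         # conditional amplitude

# partial normalization: <phi|phi>_2 = 1 for every x1
print(np.max(np.abs(np.sum(np.abs(phi)**2, axis=1)*dx - 1.0)))
\end{verbatim}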
The wavefunction $\psi(\mathbf{r}_1, \mathbf{r}_2;t)$ is determined from the TDSE (we use atomic units throughout the text)
\begin{align}
i \partial_t \psi &= \left(\sum_{j=1}^2 \left(\hat{h}(j) + \mathbf{F}(t) \cdot \mathbf{r}_j \right) + V(\mathbf{r}_1,\mathbf{r}_2) \right) \psi \label{eq:tdse}
\end{align}
with electron-electron interaction $V(\mathbf{r}_1,\mathbf{r}_2)$ and with $t$-independent one-electron Hamiltonian
\begin{align}
\hat{h}(j) = -\frac{\nabla_j^2}{2} + V_{\rm ext}(\mathbf{r}_j),
\end{align}
where $V_{\rm ext}$ is the external potential due to the presence of the nuclei. We write the interaction with the laser field $\mathbf{F}(t)$ in \eqref{eq:tdse} using the dipole approximation and in the length gauge. The EEF formalism extends to more complicated Hamiltonians, e.g.\ those that include a vector potential, but these will not be discussed here. The equations of motion for the marginal and conditional amplitude can be derived algebraically or variationally. For the marginal amplitude $\chi(\mathbf{r}_1;t)$, the equation of motion is
\begin{equation}
i \partial_t \chi = \left(\frac{1}{2} \left(- i \nabla_1 + \mathbf{A}(\mathbf{r}_1;t)\right)^2 + \varepsilon(\mathbf{r}_1;t) \right) \chi ,
\label{eq:tDSE-MA}
\end{equation}
with $t$-dependent scalar potential $\varepsilon(\mathbf{r}_1;t)$ and vector potential $\mathbf{A}(\mathbf{r}_1;t)$. The scalar potential (hereafter called the EEF potential) is given by
\begin{equation}
\varepsilon(\mathbf{r}_1;t) = V\m{ext}(\mathbf{r}_1) + \varepsilon\m{av} + \varepsilon\m{F} + \varepsilon\m{FS} + \varepsilon\m{GD}
\label{eq:eps}
\end{equation}
where
\begin{equation}
\varepsilon\m{av}(\mathbf{r}_1;t) = \Braket{\phi|\hat{h}(2) + V(\mathbf{r}_1,\mathbf{r}_2)|\phi}_2
\label{eq:epsav}
\end{equation}
is the average kinetic and potential energy of the electron(s) at $\mathbf{r}_2$, given that one electron is clamped at $\mathbf{r}_1$,
\begin{equation}
\varepsilon\m{F}(\mathbf{r}_1;t) = \varepsilon\m{F1}(\mathbf{r}_1;t) + \varepsilon\m{F2}(\mathbf{r}_1;t)
\end{equation}
represents the interaction with the laser field via the usual one-electron interaction
\begin{align}
\varepsilon\m{F1}(\mathbf{r}_1;t) &= \mathbf{F}(t) \cdot \mathbf{r}_1
\end{align}
and an additional interaction
\begin{align}
\varepsilon\m{F2}(\mathbf{r}_1;t) &= \mathbf{F}(t) \cdot \mathbf{d}(\mathbf{r}_1;t)
\label{eq:epsf2}
\end{align}
with a $t$-dependent dipole contribution $\mathbf{d}(\mathbf{r}_1;t) = \braket{\phi| \mathbf{r}_2 |\phi}_2$,
\begin{equation}
\varepsilon\m{FS}(\mathbf{r}_1;t) = \frac{1}{2} \braket{\nabla_1 \phi| \left(1 - \ket{\phi} \bra{\phi} \right) | \nabla_1 \phi}_2 ,
\label{eq:epsfs}
\end{equation}
is a geometric term, related to the Fubini-Study metric \cite{provost1980}, which is needed because the electron at $\mathbf{r}_1$ is actually not clamped, and
\begin{equation}
\varepsilon\m{GD}(\mathbf{r}_1;t) = \braket{\phi|-i \partial_t|\phi}_2
\label{eq:epsgd}
\end{equation}
is a gauge-dependent term. The (gauge-dependent) vector potential $\mathbf{A}(\mathbf{r}_1;t)$ is
\begin{equation}
\mathbf{A}(\mathbf{r}_1;t) = \braket{\phi|-i \nabla_1|\phi}_2.
\label{eq:epsvp}
\end{equation}
All these potentials carry a $t$-dependence because of the $t$-dependent conditional wavefunction $\phi$ and, in this way, encode the $t$-dependent many-electron interaction.
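In one spatial dimension and at a fixed time, the geometric quantities \eqref{eq:epsfs} and \eqref{eq:epsvp} can be evaluated directly from a gridded conditional amplitude; a sketch (the function name and the finite-difference treatment are our choices, not part of the formalism):
\begin{verbatim}
import numpy as np

def eef_geometric_terms(phi, x1, x2):
    # phi[i, j] = phi(x2[j]; x1[i], t) at a fixed time t
    dx2 = x2[1] - x2[0]
    dphi = np.gradient(phi, x1, axis=0)        # derivative w.r.t. x1
    inner = lambda f, g: np.sum(np.conj(f)*g, axis=1)*dx2
    A = np.real(-1j*inner(phi, dphi))          # <phi|-i d1|phi>_2
    # eps_FS = (1/2)(<d1 phi|d1 phi>_2 - |<phi|d1 phi>_2|^2)
    eps_fs = 0.5*(np.real(inner(dphi, dphi)) - np.abs(inner(phi, dphi))**2)
    return A, eps_fs
\end{verbatim}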
As mentioned above, the phase $\arg\left( \chi(\mathbf{r}_1;t) \right)$ of the marginal amplitude is arbitrary and the transformation $(\chi,\phi) \rightarrow (\widetilde\chi,\widetilde\phi)$ with
\begin{align}
\widetilde\chi(\mathbf{r}_1;t) &= \mathrm{e}^{-i S(\mathbf{r}_1;t)} \chi(\mathbf{r}_1;t) \\
\widetilde\phi(\mathbf{r}_2;\mathbf{r}_1, t) &= \mathrm{e}^{+i S(\mathbf{r}_1;t)} \phi(\mathbf{r}_2;\mathbf{r}_1, t),
\end{align}
for real-valued $S(\mathbf{r}_1;t)$ leaves the total wavefunction \eqref{eq:eef} unchanged, fulfills the partial normalization condition \eqref{eq:norm}, and leaves the equations of motion for $\chi$ and $\phi$ (see below) invariant, provided the potentials are changed as
\begin{align}
\widetilde{\mathbf{A}} &= \mathbf{A} + \nabla_1 S \\
\widetilde\varepsilon\m{GD} &= \varepsilon\m{GD} + \partial_t S.
\end{align}
Thus, the choice of $S(\mathbf{r}_1;t)$ fixes a gauge. For the EEF it is important to note that $\chi$ is determined from a one-electron TDSE \eqref{eq:tDSE-MA}, that $\rho(\mathbf{r}_1;t) = |\chi|^2$ is the exact one-electron probability density, and that $\mathbf{j}(\mathbf{r}_1;t) = \operatorname{Im}\left(\chi^* \nabla_1 \chi \right) + \mathbf{A} |\chi|^2$ is the exact one-electron probability current density. Also, the one-electron expectation values for position, momentum, and kinetic energy are given as
\begin{align}
\braket{\mathbf{r}_1}(t) &= \frac{1}{N} \Braket{\chi | \mathbf{r}_1 | \chi}_1 \\
\braket{\mathbf{p}_1}(t) &= \frac{1}{N} \Braket{\chi | -i \nabla_1 + \mathbf{A} | \chi}_1 \\
\braket{T_1}(t) &= \frac{1}{N} \Braket{\chi | \frac{1}{2} [- i \nabla_1 + \mathbf{A}]^2 + \varepsilon\m{FS} | \chi }_1
\end{align}
with $N = \Braket{\chi | \chi}_1$. Consequently, the marginal one-electron amplitude $\chi(\mathbf{r}_1;t)$ together with $\mathbf{A}$ and $\varepsilon\m{FS}$ yields essentially all relevant one-electron quantities (operators that contain any power of $\mathbf{r}_1$ as well as the first and second derivative with respect to $\mathbf{r}_1$), but for the total $n$-electron system. Thus, we call $\chi$ the EEF wavefunction in the following. We note that $(-i \nabla_1 + \mathbf{A})$ is the canonical momentum operator and that $\mathbf{A}$ and $\varepsilon\m{FS}$ can be combined into the quantum geometric tensor, which describes the effect that the presence of the electron(s) at $\mathbf{r}_2$ has on the wavefunction $\chi(\mathbf{r}_1;t)$ for infinitesimal changes of $\mathbf{r}_1$ \cite{berry1989}.
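For completeness, a sketch of how the position and momentum expectation values above could be evaluated on a one-dimensional grid (illustrative only; the function is ours):
\begin{verbatim}
import numpy as np

def one_electron_expectations(chi, A, x1):
    dx = x1[1] - x1[0]
    N = np.sum(np.abs(chi)**2)*dx              # = n by construction
    dchi = np.gradient(chi, x1)
    r1 = np.real(np.sum(np.conj(chi)*x1*chi))*dx/N
    p1 = np.real(np.sum(np.conj(chi)*(-1j*dchi + A*chi)))*dx/N
    return r1, p1
\end{verbatim}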
In attoscience, typical observables are the high-harmonic generation spectrum and one-electron ionization rates. For the choice of gauge $\mathbf{A} = 0$ (which is possible if and only if $\nabla_1 \wedge \mathbf{A} = 0$ \cite{requist16}), those observables can be determined from $\chi$ alone, hence we would only need to solve a one-electron TDSE \eqref{eq:tDSE-MA} to obtain the observables of the many-electron system. However, for this purpose we need the $t$-dependent effective potential $\varepsilon(\mathbf{r}_1;t)$, for which we need to know the conditional amplitude $\phi(\mathbf{r}_2;\mathbf{r}_1,t)$. The equation of motion for the conditional amplitude is
\begin{align}
\left(i \partial_t + \hat{C} + \varepsilon(\mathbf{r}_1;t) \right) \phi(\mathbf{r}_2;\mathbf{r}_1,t) &= \left(\hat{h}(2) + \hat{U} \right) \phi(\mathbf{r}_2;\mathbf{r}_1,t),
\label{eq:tDSE-CO}
\end{align}
which is a generalized TDSE with operators
\begin{align}
\hat{C} &= -\frac{(-i \nabla_1 + \mathbf{A})\chi}{\chi} \cdot (-i \nabla_1 - \mathbf{A}) \\
\hat{U} &= \frac{(-i \nabla_1 - \mathbf{A})^2}{2}.
\end{align}
In terms of these operators, the EEF potential is given by the expression
\begin{align}
\varepsilon(\mathbf{r}_1;t) = \Braket{\phi|\hat{h}(2) + \hat{U} - i\partial_t - \hat{C}|\phi}_2.
\end{align}
Solving the coupled equations \eqref{eq:tDSE-MA} and \eqref{eq:tDSE-CO} exactly seems harder than solving the full many-electron problem \eqref{eq:tdse}, in particular because solving \eqref{eq:tDSE-CO} numerically is mathematically challenging. However, if there were a way to find the exact scalar potential $\varepsilon(\mathbf{r}_1;t)$ approximately, only the one-electron TDSE \eqref{eq:tDSE-MA} would need to be solved. Thus, in the following we want to learn how the exact scalar potential $\varepsilon(\mathbf{r}_1;t)$ behaves during an ionization process in a strong and ultrashort laser field.
\section{Model}
\label{sec:model}
For this purpose, we study a one-dimensional two-electron system similar to those used e.g.\ in \cite{Bauer1997}. It is described by the $t$-dependent wavefunction $\psi(x_1, x_2;t)$ obtained as solution of the TDSE
\begin{equation}
i \partial_t \psi = \left(\hat{H} + F(t) (x_1 + x_2) \right) \psi ,
\label{eq:model_tdse}
\end{equation}
with
\begin{equation}
\hat{H} = \sum_{j=1}^2 \left( -\frac{\partial_j^2}{2} + V_{\rm ext}(x_j) \right) + V(x_1,x_2),
\end{equation}
where we use the soft-Coulomb potentials
\begin{align}
V_{\rm ext}(x) &= -\frac{2}{\sqrt{c_{\rm en}+x^2}} \\
V(x_1,x_2) &= \frac{1}{\sqrt{c_{\rm ee}+(x_1-x_2)^2}}
\end{align}
with parameters \unit[$c_{\rm en} = c_{\rm ee} = 0.55$]{$a_0^2$} to describe the interaction of the electrons with one nucleus and the electron-electron interaction, respectively. We consider ``spinless'' electrons for which the spatial wavefunction is antisymmetric, hence we use as initial state the lowest eigenstate $\psi_0$ of
\begin{align}
\hat{H} \psi_j = E_j \psi_j,
\label{eq:hpsi}
\end{align}
where we only consider states with the correct symmetry property, $\psi_j(x_1,x_2) = -\psi_j(x_2,x_1)$. Our model may also be interpreted as a one-dimensional model of a helium atom; in that case, $\psi_0$ corresponds to its lowest triplet state. We chose this state because, for the symmetric ground state of $\hat{H}$, KS-DFT and the EEF are identical, as there is only one orbital which both electrons share, and the electron interaction effects which we describe below are largely absent. There are also electron-interaction effects for spin-paired electrons occupying the same orbital (see e.g.\ \cite{pazourek2012}) which are, however, not the focus of the investigations presented here. We find that the qualitative features of the EEF potentials change when the number of orbitals occupied in a Kohn-Sham picture changes.
Hence, our spinless two-electron model contains effects similar to those that occur for a spin-paired three- or four-electron model where (in the KS picture) only two orbitals are occupied, even though neither the number of electrons nor the nuclear charge matches -- as can be seen by comparing the behavior of the model as presented below with the spin-paired three-electron model used in \cite{schild2017}.
\begin{figure}[htbp] \begin{center}
\includegraphics[scale=0.6]{pic_eigenstates.pdf}
\caption{Lowest two anti-symmetric eigenstates $\psi_j$ of \eqref{eq:hpsi} (top) and corresponding EEF potentials $\varepsilon_j$ (bottom). In the bottom panels, the one-electron density $\rho$ is also shown as a filled area. Vertical lines indicate the position of maxima of $\varepsilon_j$ which correspond to minima or some depletion of $\rho$.}
\label{fig:es}
\end{center} \end{figure}
In Fig.\ \ref{fig:es} the two energetically lowest states $\psi_j$ of \eqref{eq:hpsi} with correct symmetry are shown together with the one-electron densities $\rho_j = |\chi_j|^2$ and the potentials $\varepsilon_j$ appearing in the time-independent version of \eqref{eq:tDSE-MA},
\begin{align}
E_j \chi_j(x_1) = \left( -\frac{\partial_1^2}{2} + \varepsilon_j(x_1) \right) \chi_j(x_1)
\label{eq:mtise}
\end{align}
with $\varepsilon_j(x_1) = V_{\rm ext}(x_1) + \varepsilon_{\rm av}(x_1) + \varepsilon_{\rm FS}(x_1)$. Each state $\psi_j$ corresponds to a reduced potential $\varepsilon_j$ which has $\chi_j$ as its ground state, with the energy eigenvalue $E_j$ of the full system. The electronic structure is encoded in $\varepsilon_j$, which thus has features like barriers in the core region (see the gray vertical lines in the panels of Fig.\ \ref{fig:es}) that correspond to a suppression of probability density in those regions. We note that smaller values of the one-electron density correspond to higher barriers, but that the one-electron density $\rho_j$ and hence $\chi_j$ is never zero. The appearance of those barriers (see below) is well known from orbital-free DFT, where the potential $\varepsilon_j(x_1)$ of \eqref{eq:mtise} is to be approximated, typically as a functional of the one-electron density \cite{finzel2016}. We choose the 6-cycle laser pulse
\begin{align}
F(t) = F_0 E_{\rm e}(t) \cos(\omega_0 t)
\end{align}
with envelope function $E_{\rm e}(t)$ that increases quadratically as $(t/t_{\rm on})^2$ during the first two cycles, $t_{\rm on} = 2 \frac{2 \pi}{\omega_0}$, is $1$ during the next two cycles, decreases quadratically during the following two cycles, and is zero otherwise. The 6-cycle laser pulse depends on two parameters: the central angular frequency $\omega_0$ and the maximum amplitude of the laser field $F_0$. We consider values of the angular frequency $\omega_0$ ranging from \unit[$0.1$]{$E\m{h} /\hbar$} (wavelength \unit[$456$]{$\mathrm{nm}$}) to \unit[$1.0$]{$E\m{h} /\hbar$} (wavelength \unit[$46$]{$\mathrm{nm}$}) and three different maximal amplitudes of the laser field $F_0$, \unit[$0.015$]{$E\m{h} /(e a_0)$}, \unit[$0.030$]{$E\m{h} /(e a_0)$}, and \unit[$0.050$]{$E\m{h} /(e a_0)$}, which correspond to the intensities \unit[$7.9 \times 10^{12}$]{$\mathrm{W/cm^2}$}, \unit[$3.2 \times 10^{13}$]{$\mathrm{W/cm^2}$}, and \unit[$8.8 \times 10^{13}$]{$\mathrm{W/cm^2}$}, respectively.
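As a concrete illustration, the pulse can be implemented as follows (a sketch; we take the quadratic turn-off to mirror the turn-on, which is how we read the description above):
\begin{verbatim}
import numpy as np

def pulse(t, F0, w0):
    T = 2.0*np.pi/w0                  # one cycle
    t_on = 2.0*T                      # two-cycle turn-on and turn-off
    if t < 0.0 or t > 6.0*T:
        env = 0.0
    elif t < t_on:
        env = (t/t_on)**2             # quadratic turn-on
    elif t < 4.0*T:
        env = 1.0                     # flat top
    else:
        env = ((6.0*T - t)/t_on)**2   # quadratic turn-off
    return F0*env*np.cos(w0*t)
\end{verbatim}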
There are qualitatively different regimes depending on the field parameters $\omega_0$ and $F_0$, as well as on the Keldysh parameter $\gamma = \sqrt{2 I\m{p}} \omega_0 / F_0$ \cite{Keldysh1965}. The Keldysh parameter combines the ionization potential $I\m{p}$, the relevant system parameter, with the field parameters $\omega_0$ and $F_0$. One regime, defined by $F_0$, is over-the-barrier ionization, where the field strength is strong enough that the electron(s) can freely leave the core region without the necessity of tunneling. Assuming an asymptotic Coulomb potential $-Z/|x|$, the field strength needs to be larger than $F\m{over} = I\m{p}^2/(4 Z)$ for over-the-barrier ionization \cite{kiyan1991}, with $Z$ being the nuclear charge and $I\m{p}$ being the ionization potential. For our model \unit[$I\m{p} = 0.377$]{$E\m{h}$}, hence calculations with the field strength \unit[$F_0 = 0.050$]{$E\m{h} /(e a_0)$} correspond to over-the-barrier ionization. A second regime is tunnel ionization, which happens for $\gamma < 1$ or $\gamma \ll 1$ and $F_0 < F\m{over}$, as the electron then has enough time to tunnel through the barrier within a laser cycle. The parameter space of the laser field can also be separated based on the minimum number of absorbed photons required for ionization, given by $\lceil I\m{p} / \omega_0 \rceil$: for $\hbar \omega_0 > I_{\rm p}$ we have single-photon ionization, while otherwise multi-photon ionization takes place. The parameter space of the laser field for our model is shown in Fig.\ \ref{fig:para}. A more detailed discussion of the regimes can e.g.\ be found in \cite{amini2019}.
\begin{figure}[htbp] \begin{center}
\includegraphics[scale=0.6]{E0w0.pdf}
\caption{ Parameter space of the 6-cycle laser pulse used in our simulations with a central angular frequency $\omega_0$ and maximal amplitude of the laser field $F_0$ for our model with an ionization potential \unit[$I\m{p} = 0.377$]{$E\m{h}$}. We mark the necessary electric field strength $F\m{over}$ for over-the-barrier ionization, for $Z=1$, as well as regions of different Keldysh parameters $\gamma$ and the minimal number of absorbed photons required for ionization, given by $\lceil I\m{p} / \omega_0 \rceil$. The colorbar below the abscissa indicates the position of the visible part of the spectrum.}
\label{fig:para}
\end{center} \end{figure}
For all numerical eigenstate calculations and time propagations we use QMstunfti \cite{qmstunfti}, a Python toolbox designed for solving grid-based quantum-mechanical problems. In particular, we use a sparse-matrix representation of the respective Hamiltonian, where derivatives are obtained within a finite-difference approximation. Both for the eigenstate calculation and for the propagation we rely on functionalities of the scipy.sparse module \cite{scipy}, which partially uses the ARPACK library \cite{arpack}. We use a grid spacing of $0.01 / \omega_0$ for the $t$-grid and of \unit[$0.2$]{$a_0$} for the spatial grid with \unit[$|x_j| < 100$]{$a_0$}. To avoid reflections at the grid boundaries, we absorb the wavefunction in the region $90 \, a_0 < |x_j| < 100 \, a_0$ by multiplication with a mask function that is $1$ at \unit[$|x_j| = 90$]{$a_0$} and decreases to $0$ at \unit[$|x_j| = 100$]{$a_0$} as $\cos^{1/8}$.
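The following Python sketch illustrates this kind of setup with \texttt{scipy.sparse} (it is not the QMstunfti implementation; the box is smaller than in the production calculations so that the example runs quickly, and the symmetry test and mask parameters are our choices):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

x = np.linspace(-20.0, 20.0, 201); dx = x[1] - x[0]
cen = cee = 0.55

# one-electron part: finite-difference kinetic energy + soft Coulomb
T1 = -0.5*sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(x.size, x.size))/dx**2
h1 = T1 + sp.diags(-2.0/np.sqrt(cen + x**2))
I1 = sp.identity(x.size)

# two-electron Hamiltonian H = h(1) + h(2) + V(x1,x2) on the product grid
X1, X2 = np.meshgrid(x, x, indexing='ij')
H = (sp.kron(h1, I1) + sp.kron(I1, h1)
     + sp.diags((1.0/np.sqrt(cee + (X1 - X2)**2)).ravel()))

# lowest eigenstates; keep the energetically lowest antisymmetric one
E, V = eigsh(H.tocsc(), k=4, which='SA')
antisym = [np.allclose(v.reshape(x.size, x.size),
                       -v.reshape(x.size, x.size).T, atol=1e-6) for v in V.T]
print(E[antisym.index(True)])   # assumes one of the k states qualifies

# absorbing mask: 1 for |x| <= x_a, decaying to 0 at the boundary as cos^(1/8)
def mask(x, x_a=15.0, x_b=20.0):
    r = np.clip((np.abs(x) - x_a)/(x_b - x_a), 0.0, 1.0)
    return np.cos(0.5*np.pi*r)**0.125
\end{verbatim}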
\section{Comparison to a Single-Active Electron assumption}
\label{sec:sae}
A standard approach to attoscience modeling is the SAE assumption, but there are different ways in which this assumption can be implemented. Here, we consider an SAE model based on KS-DFT with the exact KS-potential. In KS-DFT, the one-electron density for a spinless $n$-electron system is obtained as
\begin{align}
\rho(\mathbf{r}_1) = \sum_{j=0}^{n-1} |\varphi_j^{\rm KS}(\mathbf{r}_1)|^2
\label{eq:ksdft}
\end{align}
where $\varphi_j^{\rm KS}(\mathbf{r}_1)$ are the KS-orbitals that are eigenstates of a one-electron Hamiltonian with KS-potential $V^{\rm KS}(\mathbf{r}_1)$,
\begin{align}
\left( -\frac{\nabla_1^2}{2} + V^{\rm KS}(\mathbf{r}_1) \right) \varphi_j^{\rm KS}(\mathbf{r}_1) = \varepsilon_j^{\rm KS} \varphi_j^{\rm KS}(\mathbf{r}_1).
\end{align}
An SAE approach can be defined by
\begin{equation}
i \partial_t \chi\n{SAE}(\mathbf{r}_1;t) = \left(- \frac{1}{2} \nabla_1^2 + V^{\rm KS}(\mathbf{r}_1) + \mathbf{F}(t) \cdot \mathbf{r}_1 \right) \chi\n{SAE}(\mathbf{r}_1;t)
\label{eq:tdse_sae}
\end{equation}
with initial state $\chi\n{SAE}(\mathbf{r}_1;t=0) = \varphi_j^{\rm KS}(\mathbf{r}_1)$ being one of the KS-orbitals, typically the highest occupied KS-orbital. Thus, in this SAE approach only one orbital is propagated while the others are kept frozen, and we assume that $V^{\rm KS}(x_1)$ is $t$-independent.
\begin{figure}[htbp] \begin{center}
\includegraphics[width=0.9\textwidth]{pic_KSvsEEF.pdf}
\caption{ Kohn-Sham potential and lowest two Kohn-Sham orbitals (left) as well as the Exact Electron Factorization potential and wavefunction (right) for the antisymmetric initial state of the considered two-electron model. For comparison, the external soft-Coulomb potential $V_{\rm ext}$ is also shown. }
\label{fig:pic_KSvsEEF}
\end{center} \end{figure}
The presented SAE approach is close to the idea of reconstructing effective SAE potentials for molecules from the static KS potential \cite{Awasthi2008}, and it uses the correct ionization potential, which is considered to be a decisive parameter in the SAE assumption \cite{Hofmann2014}. For our one-dimensional model, the exact KS-potential and KS-orbitals are shown in Fig.\ \ref{fig:pic_KSvsEEF} together with the EEF quantities for the ground state. While the EEF describes all electrons with a single one-electron wavefunction $\chi$, KS-DFT relies on multiple orbitals. In some sense this is an advantage, as KS-DFT maps the interacting many-electron problem to a non-interacting many-electron problem with a wavefunction that is a Slater determinant of the KS-orbitals, and thus has the (anti-)symmetry requirements already contained in the ansatz. In contrast, while the product of $\chi$ and $\phi$ is the exact many-electron wavefunction and hence fulfills all relevant symmetry constraints, the ansatz \eqref{eq:eef} does not include the symmetry requirements explicitly. Thus, the effective one-electron potential $\varepsilon$ of the EEF is very different from the KS-potential. In particular, while the KS-potential looks qualitatively like the soft-Coulomb potential, the EEF potential for the ground state has an additional local barrier at ca.\ \unit[1]{$a_0$}. This barrier reflects the electronic structure, but in a way that is somewhat less intuitive than the multi-orbital picture of KS-DFT. However, we note that similar barriers can also appear in the KS potential, e.g.\ for an excited symmetric state of a model similar to the one used here \cite{elliott2012b}. For Fig.\ \ref{fig:pic_KSvsEEF} the potentials are shifted such that the asymptotic energy for $|x_1| \rightarrow \infty$ corresponds to the energy of the cation.
Thus, the KS-eigenvalue $\varepsilon_1^{\rm KS}$ of the highest occupied KS-orbital is equal to the energy of the two-electron system, which is also the EEF eigenvalue for $\chi$ in the absence of the laser field. In \cite{schild2017}, a first approximation to the EEF was proposed which looks very similar to the SAE approach and which is computationally feasible for realistic many-electron systems. In the time-independent conditional amplitude (TICA) approximation it is assumed that the conditional amplitude does not change during the interaction with the laser field, i.e., $\phi(\mathbf{r}_2; \mathbf{r}_1, t) \approx \phi_0 (\mathbf{r}_2;\mathbf{r}_1)$ for all times $t$. When we choose the gauge such that the vector potential is zero, $\mathbf{A}(\mathbf{r}_1;t) = 0$, the TICA Schr\"odinger equation is
\begin{equation}
i \partial_t \chi\n{TICA}(\mathbf{r}_1;t) = \left(- \frac{1}{2} \nabla_1^2 + \varepsilon\n{TICA}(\mathbf{r}_1) + \mathbf{F}(t) \cdot \left(\mathbf{r}_1 + \mathbf{d}_0(\mathbf{r}_1)\right) \right) \chi\n{TICA}(\mathbf{r}_1;t) ,
\label{eq:TICA}
\end{equation}
with time-independent potential $\varepsilon\n{TICA}(\mathbf{r}_1)$ that can be obtained from $\phi_0(\mathbf{r}_2;\mathbf{r}_1)$ or from the initial electron density $\rho_0(\mathbf{r}_1) = |\chi\n{TICA}(\mathbf{r}_1;0)|^2$, up to a constant, as
\begin{align}
\varepsilon\n{TICA}(\mathbf{r}_1) = \frac{\nabla_1^2 \sqrt{\rho_0}}{2 \sqrt{\rho_0}},
\end{align}
assuming the initial phase of $\chi\n{TICA}$ is zero. The dipole term $\mathbf{d}_0(\mathbf{r}_1)$ is given by
\begin{equation}
\mathbf{d}_0(\mathbf{r}_1) = \braket{\phi_0| \mathbf{r}_2 |\phi_0}_2.
\end{equation}
If we compare the TDSE for the TICA approximation \eqref{eq:TICA} and the TDSE for the SAE assumption \eqref{eq:tdse_sae}, we see that both are one-electron approaches with a time-independent effective potential that models the many-electron dynamics. However, the initial states and the effective potentials are very different, and the SAE approach models only one electron while the TICA in principle models all electrons. Thus, we can expect that their applications are rather different.
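On a grid, $\varepsilon\n{TICA}$ is obtained from the initial density by two differentiations. A minimal Python sketch (ours, not part of \cite{schild2017}), checked against the analytically known result for a Gaussian density, for which $\rho_0 \propto e^{-x^2}$ gives $\varepsilon\n{TICA} = (x^2-1)/2$:
\begin{verbatim}
import numpy as np

def tica_potential(rho0, x):
    s = np.sqrt(rho0)
    return np.gradient(np.gradient(s, x), x)/(2.0*s)

x = np.linspace(-5.0, 5.0, 501)
rho0 = np.exp(-x**2)                       # Gaussian test density
err = tica_potential(rho0, x)[50:-50] - (x[50:-50]**2 - 1.0)/2.0
print(np.max(np.abs(err)))                 # small away from the boundaries
\end{verbatim}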
\section{Time-dependent dynamics}
\label{sec:tdd}
In the following, we discuss results for the spinless one-dimensional two-electron model. We choose the gauge where the vector potential is zero, $A(x_1;t) = 0$, and we calculate all quantities from the solution $\psi(x_1,x_2;t)$ of the two-electron problem for the different laser field parameters. Thus, we have both the EEF wavefunction $\chi$ as well as the EEF potential $\varepsilon$ and can compare how features of one of these functions manifest in the other function.
\begin{figure}[htbp] \begin{center}
\includegraphics[width=0.5\textwidth]{pic_steps_spikes.pdf}
\caption{Representative snapshot of the exact one-electron potential $\varepsilon$ (top) and the corresponding one-electron density $\rho = |\chi|^2$ (bottom, shown logarithmically) at some time during the interaction with the laser pulse. Vertical lines indicate the presence of spikes and steps that appear at minima of the density.}
\label{fig:pic_sss}
\end{center} \end{figure}
Prominent time-dependent features of the EEF potential are spikes and steps outside the core region, illustrated in Fig.\ \ref{fig:pic_sss}. Those spikes and steps appear for some parameters $t$ but also quickly disappear, and it seems that they are rather unimportant features for the construction of suitable approximations. Spikes typically appear at the same place and time in the components of the EEF scalar potential, $\varepsilon_{\rm av}$, $\varepsilon_{\rm FS}$, $\varepsilon_{\rm GD}$, $\varepsilon_{\rm F2}$, and in the EEF vector potential $\mathbf{A}$. From the mathematical formalism of the EEF, the origin of the spikes can be understood by writing \eqref{eq:epsav}, \eqref{eq:epsf2}, \eqref{eq:epsfs}, \eqref{eq:epsgd}, and \eqref{eq:epsvp} in terms of $\psi$ and $\chi$, because each term is found to be proportional to $1/|\chi|^2$. Alternatively, the spikes can be analyzed by looking at a feature of the EEF wavefunction $\chi$: this function can be written in polar representation as
\begin{align}
\chi(x_1;t) = e^{i \theta(x_1;t)} \sqrt{\rho(x_1;t)}
\end{align}
with phase $\theta(x_1;t) \in \mathbb{R}$ determined by the choice of gauge and with one-electron density
\begin{align}
\rho(x_1;t) = \Braket{\psi(x_1,x_2;t)|\psi(x_1,x_2;t)}_2.
\end{align}
For $|\chi|$ to be zero, we need that $|\psi(x_{1,0},x_2;t_0)| = 0$ for all $x_2$ at some $x_{1,0}$ and some $t_0$. While $\psi$ clearly may have nodes, we find that they never lie exactly along a line at some $x_{1,0}$ in $x_2$-direction -- a wavefunction with this property can be obtained as an eigenstate of a suitably designed potential, but that such an exactly ``vertical'' node appears during a time-dependent simulation is extremely unlikely. Hence, the magnitude $|\chi|$ never reaches zero, but it may become very small. However, while such a node is very unlikely from the perspective of the full wavefunction $\psi$, a propagation of $\chi$ by solving the TDSE \eqref{eq:tDSE-MA} for some potential allows in principle for nodes in $\chi$, i.e., for $|\chi|$ to become exactly zero. The appearance of nodes is a very common situation when a wavefunction is propagated in some static potential. In contrast, the EEF potential $\varepsilon$ has time-dependent spikes of finite height in regions and at times where $|\chi|$ becomes small. Scattering at these spikes changes (the phase of) $\chi$ such that the sign change is avoided. For an approximate simulation we find that we can ignore the spikes and simply allow the one-electron wavefunction to have nodes, thus these spikes are of little relevance. The steps appear in the gauge-dependent potential and are equivalent to spikes in the vector potential if a different gauge were chosen. From the simulation, we have the vector potential for the gauge $\tilde{\chi} = \sqrt{\rho}$ ($\tilde{\chi}$ being real-valued) given by
\begin{align}
\tilde{A}(x_1;t) = \frac{1}{\rho} \Braket{\psi| -i \partial_1 \psi}_2,
\label{eq:ag}
\end{align}
and we determine the phase $\theta$ of $\chi$ for the gauge $A \stackrel{!}{=} 0$ from
\begin{align}
\theta(x_1;t) = -\int_{-\infty}^{x_1} \tilde{A}(x';t) dx'
\label{eq:gt}
\end{align}
such that $A(x_1;t) = \tilde{A}(x_1;t) + \partial_1 \theta(x_1;t) \equiv 0$. When $\tilde{A}$ has a spike, we get from \eqref{eq:gt} that the phase $\theta(x_1;t)$ has a step, which transfers to the gauge-dependent potential via $\varepsilon_{\rm GD} = \tilde{\varepsilon}_{\rm GD}+ \partial_t \theta$.
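Numerically, the gauge fixing \eqref{eq:ag}, \eqref{eq:gt} amounts to a single integration; a sketch (we take the real part of the expectation value in \eqref{eq:ag}, since for $\tilde{\chi} = \sqrt{\rho}$ its imaginary part merely reflects $\partial_1 \rho$):
\begin{verbatim}
import numpy as np

def chi_in_A0_gauge(psi, x):
    # psi[i, j] = psi(x1[i], x2[j]; t) at a fixed time t
    dx = x[1] - x[0]
    rho = np.sum(np.abs(psi)**2, axis=1)*dx
    dpsi = np.gradient(psi, x, axis=0)
    A_tilde = np.real(np.sum(np.conj(psi)*(-1j)*dpsi, axis=1))*dx/rho
    theta = -np.cumsum(A_tilde)*dx          # crude quadrature of eq. (gt)
    return np.sqrt(rho)*np.exp(1j*theta)    # chi in the A = 0 gauge
\end{verbatim}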
The steps seem to be related to steps found in DFT \cite{elliott2012,hodgson2017,kraisler2020b} and hint at some qualitative change in the behavior of the electron density $\rho$, but we do not yet have a clear understanding of the steps within the framework of the EEF. In DFT, the steps are for example relevant to correctly describe charge transfer and are related to ionization phenomena \cite{lein2005}, hence they might be important in some situations. However, as \eqref{eq:ag} suggests, we find that the steps in $\varepsilon_{\rm GD}$ always appear where $\rho$ is small (as $\tilde{A} \propto \frac{1}{\rho}$), and we find that they can be ignored for the considered simulations. For the description of an ionization dynamics, a more important many-electron effect encoded in the effective potential is the time-dependence of the core region. When we compared DFT with the EEF in Fig.\ \ref{fig:pic_KSvsEEF} above, we noted that the electronic structure of the ground state translates to the EEF as an additional barrier at ca.\ \unit[$|x_1| = 1.0$]{$a_0$}. During the interaction with the laser field, this barrier can change significantly in height and width. Also, the depth of the potential well in the core region may vary.
\begin{figure}[htbp] \begin{center}
\includegraphics[width=0.9\textwidth]{pic_potential_comparison_w0_0p1.pdf}
\caption{ (a) 6-cycle laser pulse used in the simulations. The black dot indicates the $t$-parameter at which the potentials in the other panels are shown. The other panels are, for laser frequency \unit[$\omega_0=0.1$]{$E_{\rm h}/\hbar$} and for different values of the field strength $F_0$, the norm of the wavefunction (b), the one-electron potential $\varepsilon$ (c) as well as its contributions $\varepsilon_{\rm av}$, $\varepsilon_{\rm FS}$ (d) and $\varepsilon_{\rm F1}$, $\varepsilon_{\rm F2}$, $\varepsilon_{\rm GD}$ (e), (f).}
\label{fig:potc1}
\end{center} \end{figure}
In Fig.\ \ref{fig:potc1} we compare different parts of the EEF potential for a laser frequency \unit[$\omega_0 = 0.1$]{$E_{\rm h}/\hbar$} at an instant of time where the laser field amplitude is maximal and the effects in the exact potential are most pronounced. Panel (a) of Fig.\ \ref{fig:potc1} shows the laser pulse and the time at which the potentials in the other panels are depicted, while panel (c) shows the exact potential $\varepsilon$ for the three different $F_0$ in comparison to the external one-electron potential $V_{\rm ext}$. The larger $F_0$, the higher the effective barrier. This barrier keeps the electrons bound: in panel (b) of Fig.\ \ref{fig:potc1} the norm of the wavefunction for these cases is shown, which indicates that there is significant ionization happening for the higher field strengths. If we think about such an ionization, e.g.\ in the SAE KS picture, we expect the electron from the higher orbital to be ionized more easily compared to that in the lower orbital. The equivalent in the EEF seems to be the appearance of the higher barrier, which makes it harder to ionize the second electron: ignoring the time-dependence of the barrier in such a simulation, e.g.\ by using the TICA approximation, would overestimate the amount of electron density leaving the core region. The barrier is also a sign of excited states in the core region. Looking at Fig.\ \ref{fig:es}, we see that the EEF potential of the first excited state $\psi_1$ has a higher barrier in the core region compared to that of the ground state, and the corresponding average energy $\varepsilon_{\rm av}$ as well as the Fubini-Study potential $\varepsilon_{\rm FS}$ look very similar to those shown for \unit[$F_0 = 0.050$]{$E\m{h} /(e a_0)$} in Fig.\ \ref{fig:potc1}.
Occupation numbers $|c_j|^2$ with
\begin{align}
c_j(t) = \braket{\psi_j(x_1,x_2)|\psi(x_1,x_2;t)}
\label{eq:ccc}
\end{align}
confirm this observation, as they show significant population of lower excited states of our model Hamiltonian. An interesting case is given by the simulations for \unit[$\omega_0 = 0.2$]{$E_{\rm h}/\hbar$}, where the laser frequency is very close to the transition between $\psi_0$ and $\psi_1$ (\unit[$E_1 - E_0 = 0.201$]{$E_{\rm h}$}). There, the initial potential $\varepsilon(x_1;t)$ resembles $\varepsilon_0(x_1)$ but becomes close to $\varepsilon_1(x_1)$ in the core region during the pulse, with a $t$-dependent variation that indicates some population of other states.
\begin{figure}[htbp] \begin{center}
\includegraphics[width=0.5\textwidth]{pic_potential_comparison_others.pdf}
\caption{Like panels (d), (e) of Fig.\ \ref{fig:potc1}, but for laser frequencies \unit[$\omega_0=0.3$]{$E_{\rm h}/\hbar$} (top) and \unit[$\omega_0=0.5$]{$E_{\rm h}/\hbar$} (bottom).}
\label{fig:potc}
\end{center} \end{figure}
Further information about the EEF potential $\varepsilon$ can be gained by looking at the one-electron laser interaction potential $\varepsilon_{\rm F1}$, the effective interaction potential $\varepsilon_{\rm F2}$ with the laser field, and the gauge-dependent part $\varepsilon_{\rm GD}$ of the EEF potential. For SAE calculations, recent publications have found that there is a screening effect due to polarization of the ``other'' electrons which cancels the effect of the laser potential in the core region \cite{romanov2020,abu-samha2020}. In the EEF, the behavior is somewhat different: First, we note that $\varepsilon_{\rm F2}$ and $\varepsilon_{\rm GD}$ mostly cancel each other, as illustrated in Fig.\ \ref{fig:potc1}, panels (e) and (f), for a laser frequency \unit[$\omega_0 = 0.1$]{$E_{\rm h}/\hbar$}. What remains is a potential well in the core region that is more pronounced for higher field strength $F_0$. It partially counteracts the one-electron laser interaction potential $\varepsilon_{\rm F1}$, as can be seen by comparing $\varepsilon_{\rm F1}$ and $\varepsilon_{\rm F2} + \varepsilon_{\rm GD}$ in the bottom-right panel of Fig.\ \ref{fig:potc1}. For smaller $F_0$ the effect of $\varepsilon_{\rm F2} + \varepsilon_{\rm GD}$ may indeed be approximated by ``switching off'' $\varepsilon_{\rm F1}$ in the core region, but for larger $F_0$ the potential well is relevant and modeling within the EEF framework seems to be more involved than what was proposed for SAE approaches. The situation is different for higher frequencies $\omega_0$, as illustrated in Fig.\ \ref{fig:potc}. We find that for higher $\omega_0$ both the average energy $\varepsilon_{\rm av}$ and the Fubini-Study potential $\varepsilon_{\rm FS}$ have little $t$-dependence and can thus be approximated by the initial potentials. Also, the effective interaction potential $\varepsilon_{\rm F2}$ with the laser field and the gauge-dependent part of the potential $\varepsilon_{\rm GD}$ cancel almost perfectly in the core region, leaving only the one-electron laser interaction potential $\varepsilon_{\rm F1}$ as contribution to the total potential $\varepsilon$. From those findings, we expect that a TICA simulation should be appropriate for high frequencies of the laser field, because the TICA potential is then close to the EEF potential, which represents the exact dynamics.
To quantify this statement, we computed the integrated absolute difference
\begin{align}
\Delta_{\varepsilon} = \frac{1}{6 T} \int\limits_0^{6T} \int |\varepsilon(x_1;t) - \varepsilon'(x_1;t) | \rho(x_1;t) \, dx_1 dt
\label{eq:deleps}
\end{align}
between the EEF potential $\varepsilon(x_1;t)$ and the potential in a TICA simulation,
\begin{align}
\varepsilon'(x_1;t) = \varepsilon^{\rm TICA} + F(t) (x_1 + d_0(x_1)),
\label{eq:tica01}
\end{align}
and without the dipole modification,
\begin{align}
\varepsilon'(x_1;t) = \varepsilon^{\rm TICA} + F(t) x_1,
\label{eq:tica02}
\end{align}
as well as the integrated density differences
\begin{align}
\Delta_{\rho} = \frac{1}{6 T} \int\limits_0^{6T} \int |\rho(x_1;t) - \rho'(x_1;t)| \, dx_1 dt
\label{eq:delrho}
\end{align}
with the exact electron density $\rho(x_1;t)$ and with $\rho'(x_1;t)$ being either the density from an SAE simulation, the density from a TICA simulation using the potential \eqref{eq:tica01}, or the density from a TICA simulation without the modified dipole, using the potential \eqref{eq:tica02}. The absolute difference of the potentials in \eqref{eq:deleps} is weighted by the one-electron density $\rho(x_1;t)$ so that only the relevant parts of the potential are counted, and the time integration is performed over the duration $6 T$ of the 6-cycle laser pulse. As $T$ changes with the frequency, we also divide both differences by the pulse duration.
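Both diagnostics are plain sums over the numerical grid; as an illustration, $\Delta_{\rho}$ of \eqref{eq:delrho} may be evaluated as in the following sketch (the array layout is our choice):
\begin{verbatim}
import numpy as np

def density_difference(rho_exact, rho_approx, x, t):
    # rho_* have shape (number of time steps, number of grid points)
    dx = x[1] - x[0]
    d = np.sum(np.abs(rho_exact - rho_approx), axis=1)*dx  # L1 distance per t
    # trapezoidal rule in t, divided by the pulse duration t[-1] - t[0]
    return np.sum(0.5*(d[1:] + d[:-1])*np.diff(t))/(t[-1] - t[0])
\end{verbatim}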
The results are shown in Figure \ref{fig:ticac}. As expected, $\Delta_{\varepsilon}$ becomes smaller with increasing frequency $\omega_0$, hence the TICA potential becomes closer to the EEF potential. A notable exception is when \unit[$\omega_0 = 0.2$]{$E_{\rm h}/\hbar$}, which is close to the resonance of the transition between the ground state and the first excited state: the TICA potential has, by construction, always the character of the ground state. In contrast, the time-dependent EEF potential resembles the ground-state EEF potential only initially but becomes close to the EEF potential of the first excited state during the propagation. Neglecting $d_0$ always makes the agreement in $\Delta_{\varepsilon}$ better, but only slightly. However, it needs to be tested whether $d_0$ plays a more prominent role when more electrons are part of the system.
\begin{figure}[htbp] \begin{center}
\includegraphics[scale=0.99]{pic_tica_comp.pdf}
\caption{ Left: Integrated absolute difference $\Delta_{\varepsilon}$ between the EEF potential and the TICA potential (black) as well as the TICA potential neglecting the modified dipole $d_0$ (magenta), for different field strengths $F_0$ and laser frequencies $\omega_0$. Right: Like left panel, but for the density difference $\Delta_{\rho}$. The gray lines show $\Delta_{\rho}$ for the SAE simulation.}
\label{fig:ticac}
\end{center} \end{figure}
The density difference $\Delta_{\rho}$ illustrates how the different potentials influence the dynamics. Clearly, the agreement of the TICA densities (with and without the modified dipole) becomes better with higher frequencies, while the SAE simulation is better than the TICA approximation for low frequencies but worse for high frequencies. A closer look at the dynamics shows what can be expected from the neglect of the time-dependent barrier in the TICA simulation for low frequencies: far too much electron density leaves the core region and becomes highly delocalized. In contrast, the SAE simulation captures the dynamics qualitatively correctly for low frequencies. The applicability of the TICA approximation thus seems complementary to the SAE assumption, as the latter is often applied for relatively low frequencies $\omega_0$ (in the visible regime, e.g.\ for \unit[800]{nm} laser radiation) and is considered a good description of tunnel ionization in the strong-field regime. We note that, interestingly, the ionization yield at the grid boundaries is well reproduced with the TICA simulations and is, for low frequencies, in even better agreement with the exact ionization yield than what is obtained from an SAE simulation. However, this finding is a coincidence for our model, because the dynamics of the TICA simulation differs drastically from the exact simulation for these low frequencies of the laser field.
\begin{figure}[htbp] \begin{center}
\includegraphics[scale=0.6]{com.pdf}
\caption{Like Fig.\ \ref{fig:para}, but with the number of eigenstates needed to model the Exact Electron Factorization potential indicated as color/shade. The thick black dashed line indicates the approximate bound where the Time-Independent Conditional Amplitude approximation is valid for our model.}
\label{fig:com}
\end{center} \end{figure}
The TICA approximation is based on one electronic state only. To understand better what is needed in the EEF framework to correctly describe the dynamics beyond the TICA approximation, we determined how many states $\psi_j$ are actually needed to reproduce the dynamics of the system. Based on the expansion coefficients \eqref{eq:ccc} of the bound states $\psi_j$ during a propagation, EEF potentials were constructed from the truncated wavefunction
\begin{align}
\psi_{n_{\rm e}}(x_1,x_2;t) = \sum_{j=0}^{n_{\rm e}} c_j(t) \psi_j(x_1,x_2)
\end{align}
and compared to the exact potentials in the core region. Fig.\ \ref{fig:com} shows graphically the index $n_{\rm e}$ of the highest excited state needed to reasonably reproduce the EEF potential. Fewer states may be necessary, as it may happen that some states with $j < n_{\rm e}$ are not populated. To estimate the highest state to be included, we also calculated the occupation numbers based on a Rabi model of the ground and the first excited state. Starting with the initial occupation numbers $|c_0|^2 = 1$, $|c_1|^2 = 0$, within the rotating-wave approximation the occupation numbers evolve with $t$ as
\begin{equation}
\begin{pmatrix} |c_0 (t)|^2 \\ |c_1 (t)|^2 \end{pmatrix} = \begin{pmatrix} \frac{\delta^2+ |\Omega|^2 \cos^2(\omega\m{R} t)}{ \delta^2+|\Omega|^2} \\ \frac{|\Omega|^2 \sin^2(\omega\m{R} t)}{ \delta^2+|\Omega|^2} \end{pmatrix} \, ,
\end{equation}
where $\delta = E_1 - E_0 - \omega_0$, $\Omega = F_0 \braket{\psi_0|x_1+x_2 |\psi_1}$, and $\omega\m{R} = \frac{1}{2} \sqrt{\delta^2+|\Omega|^2}$. Here, we consider the occupation numbers only until the end of the pulse.
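For reference, a sketch of the rotating-wave populations used for this estimate (a constant field amplitude $F_0$ is assumed in the formula):
\begin{verbatim}
import numpy as np

def rabi_populations(t, delta, Omega):
    # delta = E1 - E0 - w0, Omega = F0*<psi0|x1 + x2|psi1>
    wR = 0.5*np.sqrt(delta**2 + np.abs(Omega)**2)
    p1 = np.abs(Omega)**2*np.sin(wR*t)**2/(delta**2 + np.abs(Omega)**2)
    return 1.0 - p1, p1                    # (|c0|^2, |c1|^2)
\end{verbatim}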
The thick dashed line in Fig.\ \ref{fig:com} shows where the transition from a one-state to a multi-state model is approximately located, based on the criterion that $|c_1 (t)|^2$ does not exceed \unit[0.8]{\%}. From Fig.\ \ref{fig:com} we see where TICA is expected to reproduce the dynamics accurately: for high laser field frequencies $\omega_0$ or large Keldysh parameter $\gamma$, as well as for small $\omega_0$ and laser field strength $F_0$ (smaller than the field strengths $F_0$ used in our simulations), where the initial state is only little perturbed. For parameter regions around $\gamma = 1$ and the frequencies of visible light, which is where tunnel ionization happens and where a lot of activity in attoscience was focused in recent years, many states are needed to reproduce the exact EEF potential and thus the exact dynamics.
\section{Conclusions}
\label{sec:con}
In the framework of the EEF, a many-electron dynamics can be mapped to a one-electron dynamics exactly. The effective potentials appearing in this one-electron dynamics carry a heavy burden, as they encode the time-dependent many-electron effects and the anti-symmetry requirements of the many-electron wavefunction. From the study of our simple model we found that, to describe ionization dynamics in laser fields, a correct description of low-lying excitations in the core region is of central importance for obtaining good effective potentials. However, we also found that some terms in the effective potential can be neglected. It will be interesting to study how the features found in the EEF potential carry over to more electrons. Additionally, we found that the simplest approximation of the EEF, the TICA approximation, provides a good description of the dynamics for relatively high frequencies of the laser field. It is thus complementary to the SAE assumption, which is typically used for comparably low frequencies. A TICA simulation for a realistic system is computationally as expensive as an SAE simulation, hence it is worthwhile to test how the TICA approach performs compared to experimental data. However, the TICA approximation is based on only one electronic state of the many-electron system and does, for our model, not reproduce the exact dynamics for relatively low frequencies of the laser field. In this regime further electronic states are populated and a TICA simulation does not describe the core dynamics correctly. Methods improving on the TICA approximation are conceivable, e.g.\ by simulating the dynamics of the bound states with an approach based on a few many-electron Slater determinants. Although there are problems due to a truncated dynamics in the core region \cite{ruggenthaler2009}, an approach based on that idea may provide suitable one-electron EEF potentials which reproduce the exact dynamics well. In this way, it would also be possible to avoid the difficult explicit coupling of bound and continuum states by simulating the bound-state dynamics and the ionization dynamics separately. Further developments are needed to find a practical method based on the EEF, but our analysis shows that the features of the exact EEF potentials can be understood and seem to be accessible from a computational point of view also for systems of experimental interest.
\section{Acknowledgement}
This research is supported by an Ambizione grant of the Swiss National Science Foundation (grant number 174212).
\bibliographystyle{unsrt}
\section{\label{sec:level1}Introduction\protect} Quantum nonlocality is one of the most important properties in quantum information theory, and the most well-known manifestation of quantum nonlocality is Bell nonlocality [1], which means that a quantum state violates Bell-type inequalities [2-5]. Apart from the foundational implications, Bell nonlocality has many applications in quantum technologies [6-8]. In fact, nonlocal properties also have another class which is different from Bell nonlocality. Specifically, when a set of orthogonal quantum states cannot be perfectly distinguished by local operations and classical communication (LOCC), it reflects a fundamental feature of quantum mechanics which is called nonlocality [9]. Locally distinguishing orthogonal quantum states refers to the following task: in a quantum system which consists of several parts held by separated observers, a state secretly chosen from a prespecified set of orthogonal quantum states is shared by these parties, and the goal is to determine the shared state using LOCC only [10-21]. In addition, the nonlocality of orthogonal quantum states can be used for practical purposes, such as data hiding [22,23], quantum secret sharing [24], and so on. Thus, in the past two decades, the local distinguishability of orthogonal quantum states and the relationship between quantum nonlocality and quantum entanglement have received considerable attention [25-39]. All the above results prompt researchers to study more extensive forms of quantum nonlocality. Recently, in $d \otimes d \otimes d, d=3,4$, Halder \emph{et al.} proved that two sets of orthogonal product states are strongly nonlocal because these states are locally irreducible in all bipartitions [40]. Here, local irreducibility means that it is impossible to locally eliminate one or more states from the set of states while preserving orthogonality of the post-measurement states. Then, the authors in [41] presented the general definition of strong quantum nonlocality based on local irreducibility. However, local irreducibility is only a sufficient but not necessary condition for local indistinguishability. Furthermore, for orthogonal product states, the authors in [42,43] presented a classification based on local distinguishability when all the parties are spatially separated or when different subsets of the parties come together, and designed local state discrimination protocols for such states with additional entangled resources. Thus, the above results leave the following questions: (i) for entangled states, how to define the property of strong quantum nonlocality based on local indistinguishability, and (ii) whether we can find strongly nonlocal orthogonal entangled states, and how many states can have strong nonlocality. Motivated by the above questions, in this work we first present a general definition of strong quantum nonlocality based on local indistinguishability; our definition is more general than the definition in [40] because local irreducibility is only a sufficient but not necessary condition for local indistinguishability. Then, in $2 \otimes 2 \otimes 2$, we show one set of strongly nonlocal orthogonal entangled states to illustrate the above definition, that is, these states are locally reducible but locally indistinguishable. Furthermore, in N-qubit quantum systems, where $N\geqslant 3$, we generalize the above quantum states and prove that these states are strongly nonlocal.
Finally, we also construct a class of strongly nonlocal entangled states in $d\otimes d\otimes \cdots \otimes d, d\geqslant 3$. In addition, our results can lead to a better understanding of the relationship between entanglement and nonlocality. The rest of this paper is organized as follows. In Sec. \uppercase\expandafter{\romannumeral2}, we present a general definition of strong quantum nonlocality. Then, we show that some sets of orthogonal entangled states are strongly nonlocal in N-qubit quantum systems, where $N\geqslant 3$. Furthermore, in $d\otimes d\otimes \cdots \otimes d, d\geqslant 3$, we construct a class of strongly nonlocal entangled states in Sec. \uppercase\expandafter{\romannumeral3}. Finally, we summarize in Sec. \uppercase\expandafter{\romannumeral4}. \theoremstyle{remark} \newtheorem{definition}{\indent Definition} \newtheorem{lemma}{\indent Lemma} \newtheorem{theorem}{\indent Theorem} \newtheorem{corollary}{\indent Corollary} \def\mbox{\rule[0pt]{1.3ex}{1.3ex}}{\mbox{\rule[0pt]{1.3ex}{1.3ex}}} \def\QEDclosed{\mbox{\rule[0pt]{1.3ex}{1.3ex}}} \def\indent{\em Proof}.{\indent{\em Proof}.} \def\hspace*{\fill}~\QED\par\endtrivlist\unskip{\hspace*{\fill}~\QEDclosed\par\endtrivlist\unskip} \section{The general definition of strong quantum nonlocality} In this section, we present a general definition of strong quantum nonlocality based on the local indistinguishability of orthogonal quantum states. In Ref. [40], for a $d_{1}\otimes d_{2}\otimes \cdots \otimes d_{n}$ quantum system, the authors defined the local irreducibility of a set of orthogonal quantum states, which means that it is not possible to locally eliminate one or more states from the set while preserving orthogonality of the post-measurement states. Then, based on local irreducibility, the authors in [40, 41] defined strong quantum nonlocality. From the definition of local irreducibility, we know that a set of locally irreducible orthogonal quantum states must be locally indistinguishable, so these states have nonlocality; the converse, however, does not always hold. Thus, we think the most appropriate definition of strong nonlocality should be based on local indistinguishability. In the following, we present a general definition of strong quantum nonlocality based on local indistinguishability. \begin{definition} In $d_{1}\otimes d_{2}\otimes \cdots \otimes d_{n}, n\geqslant 3$, a set of orthogonal quantum states is strongly nonlocal if it is locally indistinguishable in every bipartition, where a bipartition means that the whole quantum system is divided into two parts. \end{definition} From Definition 1, we can see that it differs from the definition of strong nonlocality in [40] and is more general. In [40], the authors show that the three-qubit GHZ basis (unnormalized): $|000\rangle \pm |111\rangle$, $|011\rangle \pm |100\rangle$, $|001\rangle \pm |110\rangle$, $|010\rangle \pm |101\rangle$ is locally reducible in every bipartition. Nevertheless, we find that these states are still locally indistinguishable in every bipartition. According to our definition, these states are strongly nonlocal. Thus, our definition is more general. In the following, we show that even a subset of the three-qubit GHZ basis is strongly nonlocal.
First, we present the following states in $2\otimes 2\otimes 2$: \begin{eqnarray} \label{eq.2} \begin{split} &|\phi_{1,2}\rangle=|000\rangle \pm |111\rangle, \\ &|\phi_{3}\rangle=|011\rangle + |100\rangle,\\ &|\phi_{4}\rangle=|001\rangle + |110\rangle, \\ &|\phi_{5}\rangle=|010\rangle + |101\rangle.\\ \end{split} \end{eqnarray} Then, we prove that the above states are locally indistinguishable in every bipartition. \begin{theorem} In $2\otimes 2\otimes 2$, the above $5$ states are strongly nonlocal. \end{theorem} \begin{proof} First, we consider the bipartition $AB|C$. Physically this means that the subsystems $A$ and $B$ are treated together as a four-dimensional subsystem $AB$ on which joint measurements are now allowed. To reflect this, we rewrite the states $|\phi_{1,2,4}\rangle$ as follows: \begin{eqnarray} \begin{split} &|\phi_{1,2}\rangle=|0_2 0\rangle \pm |1_21\rangle, \\ &|\phi_{4}\rangle=|0_21\rangle + |1_20\rangle,\\ \end{split} \end{eqnarray} where $|0_2\rangle$ denotes $|00\rangle$ and $|1_2\rangle$ denotes $|11\rangle$ of the joint subsystem $AB$. Then, the states (2) cannot be locally distinguished across $AB|C$ because they are locally equivalent to the three Bell states $|00\rangle \pm |11\rangle$, $|01\rangle + |10\rangle$, which are locally indistinguishable [37, 40]. Thus, the states (1) cannot be locally distinguished across $AB|C$. From the structure of the states (1), we know that these states have a cyclic property, analogous to the cyclic property of the trace. Then, the states (1) are also locally indistinguishable across $B|CA$ and $C|AB$. Therefore, the states (1) are strongly nonlocal. This completes the proof. \end{proof} In the following, we generalize the above result to N-qubit quantum systems. In the same way, we first present some N-qubit GHZ basis states as follows: \begin{eqnarray} \begin{split} &|\phi_{1}\rangle=|00\ldots0\rangle - |11\ldots1\rangle, \\ &|\phi_{2+a_{2}2^{N-2}+a_{3}2^{N-3}+\ldots+a_{N}2^0}\rangle=|a_{1}\ldots a_{N}\rangle + |\bar{a_{1}}\ldots \bar{a_{N}}\rangle,\\ \end{split} \end{eqnarray} where $a_{1}=0$, $a_{2},\ldots, a_{N}=0$ or $1$, and $\bar{a_{i}}=(a_{i}+1)\bmod 2, i=1,\ldots, N$. Then, we have the following result. \begin{theorem} In N-qubit quantum systems $2\otimes 2\otimes \ldots \otimes2$, the above $2^{N-1}+1$ states of (3) are strongly nonlocal. \end{theorem} \begin{proof} First, we consider any bipartition $A_{1} \ldots A_{j}|A_{j+1} \ldots A_{N}$. Physically this means that the subsystems $A_{1}, \ldots, A_{j}$ are treated together as a $2^{j}$-dimensional subsystem $A_{1} \ldots A_{j}$ on which joint measurements are now allowed. The other parties are treated similarly. Here $A_{1}, \ldots, A_{N}$ can be any parties. Then, there must be one state in which the first $j$ parties are all in state $|0\rangle$ (or $|1\rangle$) and simultaneously the last $N-j$ parties are all in state $|1 \rangle$ (or $|0\rangle$); thus we can rewrite this state as $|\phi \rangle=|0_{j}\rangle |1_{N-j}\rangle +|1_{j} \rangle |0_{N-j}\rangle$ in the new basis. Similar to Theorem 1, we consider the states $|\phi \rangle$ and $|\phi_{1,2}\rangle$, which are locally equivalent to the three Bell states $|00\rangle \pm |11\rangle$, $|01\rangle + |10\rangle$. Thus, the states (3) cannot be locally distinguished by LOCC in the bipartition $A_{1} \ldots A_{j}|A_{j+1} \ldots A_{N}$. Therefore, the states (3) are strongly nonlocal. This completes the proof. \end{proof}
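The bipartition argument used in Theorems 1 and 2 is easy to verify numerically for the three-qubit case. The following minimal sketch (our own illustration in Python, assuming NumPy) builds the five states of (1), reshapes them across the cut $AB|C$, and checks that $|\phi_{1,2,4}\rangle$ are supported on $\mathrm{span}\{|00\rangle,|11\rangle\}_{AB}$ only, where they reduce to the three Bell states invoked in the proof:
\begin{verbatim}
import numpy as np

# The five states of Eq. (1) in the computational basis |abc>, index 4a+2b+c.
e = np.eye(8)
states = [e[0] + e[7], e[0] - e[7],   # |000> +- |111>
          e[3] + e[4],                # |011> + |100>
          e[1] + e[6],                # |001> + |110>
          e[2] + e[5]]                # |010> + |101>

# Across AB|C each state is a 4x2 matrix (rows: AB, columns: C).
for k in (0, 1, 3):                   # |phi_1>, |phi_2>, |phi_4>
    m = states[k].reshape(4, 2)
    assert not m[1].any() and not m[2].any()  # rows |01>_AB, |10>_AB vanish
    print(m[[0, 3]])                  # the effective 2x2 Bell matrices
\end{verbatim}
The printed $2\times 2$ blocks are exactly the matrices of $|00\rangle\pm|11\rangle$ and $|01\rangle+|10\rangle$, as claimed.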
In addition, the states of (3) are locally reducible in every bipartition. However, we have proved that these states are locally indistinguishable and hence strongly nonlocal. Thus, our definition should be more general and more suitable. In [40], the authors show that such product states cannot be found in systems where one of the subsystems has dimension two. However, for entangled states, the minimum dimension can be two according to the above results. In the following, we construct strongly nonlocal maximally entangled states (MESs) in more general quantum systems. \section{Strong nonlocality of orthogonal maximally entangled states} In this section, we will present explicit sets of strongly nonlocal MESs in $d \otimes d \otimes d$ and $d \otimes d \otimes \cdots \otimes d$ quantum systems respectively, where $d \geqslant 3$. \subsection{Strongly nonlocal MESs in tripartite quantum systems} To clearly explain the general strong nonlocality in tripartite quantum systems, we need to start with the construction of strongly nonlocal MESs in the $3 \otimes 3 \otimes 3$ quantum system. \begin{lemma} In the $3 \otimes 3 \otimes 3$ quantum system, the following $6$ MESs \begin{eqnarray} |000\rangle + |111\rangle + |222\rangle,\nonumber \\ |000\rangle + \omega |111\rangle + \omega^2 |222\rangle, \nonumber \\ |000\rangle + \omega^2 |111\rangle + \omega |222\rangle,\nonumber \\ |100\rangle + |211\rangle + |022\rangle,\nonumber \\ |010\rangle + |121\rangle + |202\rangle,\nonumber \\ |001\rangle + |112\rangle + |220\rangle \nonumber \end{eqnarray} are strongly nonlocal, where $\omega=e^{\frac{2\pi i}{3}}$. \end{lemma} \begin{proof} From the definition of strong nonlocality, if we prove the nonlocality in every bipartite separation, i.e., $A|BC$, $B|AC$ and $C|AB$, then the above $6$ MESs have strong nonlocality. In the $A|BC$ separation, we denote the last two subsystems' $|00\rangle$ by $|0_2\rangle$, $|11\rangle$ by $|1_2\rangle$, and $|22\rangle$ by $|2_2\rangle$; thus the first four states of the above set can be rewritten as \begin{eqnarray} |00_2\rangle + |11_2\rangle + |22_2\rangle,\nonumber \\ |00_2\rangle + \omega |11_2\rangle + \omega^2 |22_2\rangle, \nonumber \\ |00_2\rangle + \omega^2 |11_2\rangle + \omega |22_2\rangle,\nonumber \\ |10_2\rangle + |21_2\rangle + |02_2\rangle.\nonumber \end{eqnarray} These four states can be regarded as states of a new $3 \otimes 3$ quantum system, in which the computational basis is $\{ |00_2\rangle, |01_2\rangle, |02_2\rangle, |10_2\rangle, |11_2\rangle, |12_2\rangle, |20_2\rangle, |21_2\rangle, |22_2\rangle \}$. Based on the result in [18] that any $d+1$ MESs cannot be distinguished by LOCC, the set of these four states has quantum nonlocality in this $A|BC$ separation. Similarly, we can prove that the set including the first three and the fifth states cannot be locally distinguished in the $B|AC$ separation, and the set including the first three and the sixth states cannot be locally distinguished in the $C|AB$ separation. \end{proof} It is not hard to construct strongly nonlocal MES sets in the most general tripartite quantum system according to the idea in the above proof. Thus we can derive the following theorem. \begin{theorem} In the $d \otimes d \otimes d$ quantum system, where $d \geqslant 3$, the following $d+3$ MESs \begin{eqnarray} \sum_{l=0}^{d-1} \omega^{jl} |lll\rangle, j=0,1,\cdots, d-1, \nonumber \\ |100\rangle + |211\rangle + \cdots + |0,d-1,d-1\rangle,\nonumber \\ |010\rangle + |121\rangle + \cdots + |d-1,0,d-1\rangle,\nonumber \\ |001\rangle + |112\rangle + \cdots + |d-1,d-1,0\rangle \nonumber \end{eqnarray} are strongly nonlocal, where $\omega=e^{\frac{2\pi i}{d}}$. \end{theorem}
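Before moving on, the construction of Lemma 1 can be checked directly with a small numerical script. The sketch below (our own illustration, again assuming NumPy) builds the six unnormalized MESs, verifies that they are mutually orthogonal, and confirms that each has Schmidt rank $3$ across the $A|BC$ cut, i.e.\ that each is maximally entangled in that separation:
\begin{verbatim}
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
idx = lambda a, b, c: (a * d + b) * d + c      # index of |abc> in C^27

def ghz(phase, shift):
    # sum_l phase^l |l+s0, l+s1, l+s2> (unnormalized, addition mod d)
    v = np.zeros(d**3, dtype=complex)
    for l in range(d):
        v[idx((l + shift[0]) % d, (l + shift[1]) % d,
              (l + shift[2]) % d)] = phase**l
    return v

S = [ghz(1, (0, 0, 0)), ghz(w, (0, 0, 0)), ghz(w**2, (0, 0, 0)),
     ghz(1, (1, 0, 0)), ghz(1, (0, 1, 0)), ghz(1, (0, 0, 1))]

G = np.array([[np.vdot(u, v) for v in S] for u in S])   # Gram matrix
print(np.allclose(G, d * np.eye(6)))                    # mutual orthogonality

# Across A|BC each state is a 3x9 matrix; rank 3 means maximally entangled.
print([np.linalg.matrix_rank(v.reshape(d, d * d)) for v in S])
\end{verbatim}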
Actually, in a $d \otimes d$ quantum system, any $d+1$ MESs have already been used to represent quantum nonlocality [18]. From this theorem, we know that adding just $2$ more MESs to $d+1$ nonlocal MESs can realize strong nonlocality in the $d \otimes d \otimes d$ quantum system. \subsection{Strongly nonlocal MESs in more than tripartite quantum systems} When the number of quantum subsystems is bigger than $3$, the construction of strongly nonlocal MESs becomes a little different. In this subsection, we will present the explicit form of strongly nonlocal MESs in quantum systems with more than three parties. \begin{theorem} In a $k$-partite quantum system $d \otimes d \otimes \cdots \otimes d$, where $k \geqslant 4, d \geqslant 3$, the following $(k+1)d$ MESs \begin{eqnarray} \sum_{l=0}^{d-1} \omega^{jl} |ll \cdots l\rangle, j=0,1,\cdots, d-1, \nonumber \\ \sum_{l=0}^{d-1} \omega^{jl} |l\oplus_d1,l, \cdots, l\rangle, j=0,1,\cdots, d-1, \nonumber \\ \sum_{l=0}^{d-1} \omega^{jl} |l,l\oplus_d1, \cdots, l\rangle, j=0,1,\cdots, d-1, \nonumber \\ \vdots \nonumber \\ \sum_{l=0}^{d-1} \omega^{jl} |l,l, \cdots, l\oplus_d1\rangle, j=0,1,\cdots, d-1, \nonumber \end{eqnarray} are strongly nonlocal, where $\omega=e^{\frac{2\pi i}{d}}$. \end{theorem} \begin{proof} To prove the strong nonlocality of these $(k+1)d$ MESs, we need to prove that they are nonlocal in every separation. Next we will complete this proof case by case. In the ``1|(k-1)'' separation, without loss of generality, we will take $P_1 | P_2 \cdots P_{k}$ as an example. The first $d+1$ MESs in the above set can be rewritten as \begin{eqnarray} \sum_{l=0}^{d-1} \omega^{jl} |ll_{k-1}\rangle, j=0,1,\cdots, d-1, \nonumber \\ \sum_{l=0}^{d-1} |l\oplus_d1,l_{k-1}\rangle, \nonumber \\ \end{eqnarray} where $| l_{k-1}\rangle$ denotes $| l \rangle ^{\otimes (k-1)}$. Then these $d+1$ MESs can be regarded as states of a new $d \otimes d$ quantum system where the computational basis of the second subsystem is $\{ | l_{k-1} \rangle \}_{l=0}^{d-1}$. Thus these MESs have nonlocality in the $P_1 | P_2 \cdots P_{k}$ separation. Similarly, we can prove quantum nonlocality in the other ``1|(k-1)'' separations. In the ``2|(k-2)'' separation, the number of new basis states increases. Taking the $P_1 P_2 | P_3 \cdots P_{k}$ separation as an example, the new computational basis for ``$P_1 P_2$'' is $\{ |jj\rangle, |j \oplus_d 1,j \rangle, |j, j\oplus_d 1\rangle \} _{j=0}^{d-1}$, that is, ``$P_1 P_2$'' can be regarded as a new $3d$-dimensional subsystem. Meanwhile, the latter subsystem ``$P_3 \cdots P_{k}$'' can be regarded as a new $(k-1)d$-dimensional subsystem because its computational basis is $\{ |jj \cdots j \rangle, |j\oplus_d 1,j,\cdots, j \rangle, |j,j,\cdots, j\oplus_d 1 \rangle \}_{j=0}^{d-1}$. Thus the new system for the $P_1 P_2 | P_3 \cdots P_{k}$ separation has dimension $3d \otimes (k-1)d$, where $k \geqslant 4$. Then we need $(k-1)d+1$ MESs to establish the quantum nonlocality in this separation, which is ensured by the fact that we have $(k+1)d$ MESs in the original set. The other ``2|(k-2)'' separations can be treated similarly. Next, we consider the ``3|(k-3)'' separation. Actually, the number of new basis states is no more than in the ``2|(k-2)'' separation, or the case is even exactly symmetric to one of the former cases, so the original $(k+1)d$ MESs also assure the quantum nonlocality in every ``3|(k-3)'' separation. The remaining ``m|(k-m)'' separations with $m \geqslant 4$ are even easier, or exactly symmetric to one of the former cases.
So far, we have proved the quantum nonlocality in every possible separation, which exactly satisfies the definition of strong quantum nonlocality. \end{proof} \section{Conclusion and Discussion} In summary, we have presented the definition of strong nonlocality of orthogonal quantum states and constructed some sets of strongly nonlocal orthogonal quantum states in $d\otimes d\otimes \ldots \otimes d$, thus extending the concept of strong nonlocality. Our results can lead to a better understanding of the relationship between nonlocality and entanglement. In addition, for orthogonal product states, there are some local discrimination protocols assisted by entanglement resources [42-47]. However, for entangled states, such protocols are very few, so it is interesting to investigate the minimal entanglement resources required to distinguish entangled states, especially the above entangled states. However, for more-than-tripartite quantum systems, the definition of nonlocality is not complete. For example, in a $d_{A}\otimes d_{B}\otimes d_{C} \otimes d_{D}$ quantum system, (i) when a set of orthogonal quantum states is locally indistinguishable in the 4-partition $A|B|C|D$, we know that these states have the nonlocality that we presently understand; (ii) when a set of orthogonal quantum states is locally indistinguishable in every bipartition (e.g., $AB|CD$ and $ABC|D$), these states have strong nonlocality, such as our results in the above sections; but (iii) when a set of orthogonal quantum states is locally indistinguishable in every tripartition (e.g., $AB|C|D$ and $BD|A|C$), but locally distinguishable in some bipartition, the nonlocality of these states remains to be defined. As in [41, 42], we can also classify the different strengths of nonlocality of orthogonal quantum states based on local indistinguishability. Here, we use $\mathcal{N}$ to indicate the strength of nonlocality of a set of orthogonal quantum states and get the following relationship: \begin{eqnarray} \begin{split} \mathcal{N}_{2}>\cdots>\mathcal{N}_{i+}>\mathcal{N}_{i}>\cdots\\ >\mathcal{N}_{n+}>\mathcal{N}_{n}, i=3, \cdots, n-1, \end{split} \end{eqnarray} where $\mathcal{N}_{j}$, $j=2,\cdots ,n$, means that a set of orthogonal quantum states is only locally indistinguishable in every $j$-partition, and $\mathcal{N}_{j+}$, $j=3,\cdots ,n$, means that a set of orthogonal quantum states is locally indistinguishable in every $j$-partition and also locally indistinguishable in only some $(j-1)$-partitions. Therefore, from the above relationship, we can present the definition of strong nonlocality. In the following, super-LOCC means that there are at least two parties treated together as a subsystem on which joint measurements are allowed, and the $n$ parties are divided into at least $2$ parts. \begin{definition} In an $n$-partite quantum system, where $n>2$, a set of orthogonal quantum states is strongly nonlocal if it cannot be perfectly distinguished by super-LOCC. \end{definition} From Definition 2, we know that super-LOCC is more powerful than LOCC, but less powerful than global operations. Thus the definition should be more general and appropriate. In addition, we recently found that the authors in Ref. [48] have also presented a class of strongly nonlocal entangled states based on local irreducibility in $d \otimes d \otimes d, d\geqslant 3$; in their construction, strong nonlocality needs almost $d^3$ entangled states.
In contrast, because our construction method uses local indistinguishability, it only needs $d+3$ entangled states to exhibit our newly defined strong nonlocality, and we generalize the construction to $d\otimes d\otimes \ldots \otimes d, d\geqslant 2$. \begin{acknowledgments} The authors are grateful for the anonymous referees' suggestions to improve the quality of this paper. This work was supported by the Beijing Natural Science Foundation (Grant No. 4194088), the NSFC (Grants No. 61901030, No. 61801459, No. 61701343 and No. 11847210), the National Postdoctoral Program for Innovative Talent (Grant No. BX20180042 and No. BX20190335), the China Postdoctoral Science Foundation (Grant No. 2018M640070), and the Anhui Initiative in Quantum Information Technologies (Grant No. AHY150100). \end{acknowledgments}
\section{Introduction} Stefan problems model heat transfer processes that involve a change of phase. They constitute a broad field of study since they appear in a great number of problems of mathematical and industrial significance \cite{AlSo1993}, \cite{Ca1984}, \cite{Gu2003}, \cite{Lu1991}. A large bibliography on the subject is given in \cite{Ta2000} and a review on analytical solutions in \cite{Ta2011}. The Stefan problem with a space-dependent latent heat can be found in several physical processes. In \cite{SwVoPa2000}, a mathematical model for the shoreline movement in a sedimentary basin was developed using an analogy with the one-phase melting Stefan problem with a variable latent heat. Besides, in \cite{ZhShZh2018}, a two-phase Stefan problem with a general type of space-dependent latent heat was introduced from the background of the artificial ground-freezing technique. The assumption of a variable latent heat is meaningful not only in the study of the shoreline movement or in soil freezing techniques, but also in nanoparticle melting \cite{RiMy2016} and in the one-dimensional consolidation with threshold gradient \cite{ZhBuLu2013}. More references dealing with a non-constant latent heat can be found in: \cite{VoSwPa2004}, \cite{ZhWaBu2014}, \cite{ZhXi2015}, \cite{SaTa2011-a}, \cite{BoTa2018-CAA}, \cite{BoTa2018-ZAMP}, \cite{Do2014}, \cite{Mc1991}, \cite{ZhHuLiZhZh2018}, \cite{Pr1970}, \cite{Go2002}. In this paper we are going to consider two different Stefan-like problems (P) and (P$_h$) with space-dependent latent heat, imposing different conditions at the fixed boundary. The first problem to consider can be stated as follows: \noindent\textbf{Problem (P)}. Find the location of the free boundary $x=s(t)$ and the temperature $T=T(x,t)$ in the liquid region $0<x<s(t)$ such that:\\[-0.25cm] \begin{subequations}\label{Pinfty-A} \be \pder[T]{t}=a^2 \pder[^2T]{x^2},\qquad 0<x<s(t), \quad t>0, \label{EcCalor:1F-pos-tempinfty-A} \ee \be T(0,t)=\theta_{_\infty} t^{\alpha/2},\qquad t>0, \label{FrontFija:1F-pos-tempinfty-A} \ee \be T(s(t),t)=0,\qquad t>0, \label{TempFront:1F-pos-tempinfty-A} \ee \be k\pder[T]{x}(s(t),t)=-\gamma s(t)^{\alpha} \dot s(t), \qquad t>0, \label{CondStefan:1F-pos-tempinfty-A} \ee \be s(0)=0.\label{FrontInicial:1F-pos-tempinfty-A} \ee \end{subequations} Equation (\ref{EcCalor:1F-pos-tempinfty-A}) is the heat conduction equation in the liquid region, where $a^2=\frac{k}{\rho c}$ is the diffusion coefficient, with $k$ the thermal conductivity, $\rho$ the mass density and $c$ the specific heat capacity. At $x=0$, a Dirichlet condition (\ref{FrontFija:1F-pos-tempinfty-A}) is imposed. It must be noticed that the temperature at the fixed boundary is time-dependent and is characterized by a parameter $\theta_{_\infty}>0$. In addition, condition (\ref{TempFront:1F-pos-tempinfty-A}) represents the fact that the phase change temperature is assumed to be 0 without loss of generality, condition (\ref{CondStefan:1F-pos-tempinfty-A}) is the corresponding Stefan condition and (\ref{FrontInicial:1F-pos-tempinfty-A}) is the initial position of the free boundary. The remarkable feature of the problem is related to the condition at the interface given by the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}), where the latent heat per unit of mass is space-dependent, defined by a power function of the position, $\tfrac{\gamma}{\rho}\, x^{\alpha}$, with $\gamma$ a given positive constant and $\alpha$ an arbitrary non-negative real value.
The second problem (P$_h$) arises by imposing a convective (Robin) condition at the fixed face $x=0$ instead of a Dirichlet one. In mathematical terms, we can define (P$_h$) as: \noindent \textbf{Problem (P$_h$)}. Find the location of the free boundary $x=s_{_{h}}(t)$ and the temperature $T_{h}=T_{h}(x,t)$ in the liquid region $0<x<s_{h}(t)$ such that equations (\ref{EcCalor:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A})-(\ref{FrontInicial:1F-pos-tempinfty-A}) are satisfied, together with the Robin condition \begin{equation}\label{FrontFijaConvectiva} k\pder[T]{x}(0,t)=\frac{h}{\sqrt{t}}\left[T(0,t)-\theta_{_\infty}t^{\alpha/2} \right],\qquad t>0. \tag{\ref{FrontFija:1F-pos-tempinfty-A}$^\star$} \end{equation} Condition (\ref{FrontFijaConvectiva}) states that the incoming heat flux at the fixed face is proportional to the difference between the material temperature and the ambient temperature. Here, $\theta_{_\infty}t^{\alpha/2}$ characterizes the bulk temperature at a large distance from the fixed face $x = 0$ and $h$ represents the heat transfer coefficient at the fixed face. We will work under the assumption that $h>0$ and \mbox{$0<T_{{h}}(0,t)<\theta_{_\infty}t^{\alpha/2}$} in order to guarantee the melting process. The exact solution to problem (P) was given in \cite{ZhWaBu2014} for integer non-negative values of $\alpha$ and was generalized in \cite{ZhXi2015} by taking $\alpha$ as a real non-negative constant. Besides, the exact solution of the problem (P$_h$) was provided in \cite{BoTa2018-CAA}. It is known that, due to the non-linear nature of the Stefan problem, exact solutions are limited to a few cases, and it is therefore necessary to solve these problems either numerically or approximately. The idea in this paper is to take advantage of the exact solutions available in the literature by testing the accuracy of different approximate integral methods. The heat balance integral method, introduced by Goodman \cite{Go1958}, is an approximate technique which is usually employed for finding the location of the free boundary in phase-change problems. It consists in the transformation of the heat equation into an ordinary differential equation in time, assuming a quadratic profile in space for the temperature. For those profiles, several variants have been introduced in \cite{Wo2001} and \cite{SaSiCo2006}. In addition, in \cite{Hr2009-a}, \cite{Hr2009-b}, \cite{Mi2012}, \cite{MiMy2010-a} this method has been applied by defining new accurate temperature profiles. Moreover, for the case $\alpha=0$, the explicit solution to the problem (P$_{{h}}$) for the two-phase process was given in \cite{Ta2017}, and this was useful for assessing the accuracy of different heat balance integral methods applied to problem (P$_{{h}}$) in \cite{BoSeTa2018}. The paper is structured as follows: in Section 2 we give a brief introduction to the approximate methods to be implemented. Then, in Section 3, we recall the exact solution to problem (P), which considers a Dirichlet condition at the fixed face, and we obtain different approximate solutions that are tested against the exact one. In Section 4, we present the exact solution to the problem with a Robin condition at the fixed face, i.e.\ problem (P$_{{h}}$). We implement the different approximate methods and test their accuracy. In all cases, we provide numerical examples and comparisons.
In addition, we show that the approximate solutions to problem (P$_{{h}}$) converge to the approximate solutions to problem (P) when the heat transfer coefficient $h$ goes to infinity. Finally, in Section 5, we implement an approximate method that consists in minimizing the least-squares error, as in \cite{RiMyMc2019}. For the case $\alpha=0$ we obtain different approximations for the problems (P) and (P$_{{h}}$) by using the least-squares approximate method. \section{Heat balance integral methods} The classical heat balance integral method, described for the first time in \cite{Go1958}, was designed to approximate problems involving phase changes. This method consists in replacing the heat equation (\ref{EcCalor:1F-pos-tempinfty-A}) by an ordinary differential equation in time that arises by: assuming a suitable temperature profile consistent with the boundary conditions, integrating (\ref{EcCalor:1F-pos-tempinfty-A}) with respect to the spatial variable in an appropriate interval, and replacing the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}) by a new equation obtained from the phase-change temperature (\ref{TempFront:1F-pos-tempinfty-A}). Therefore, if we differentiate condition (\ref{TempFront:1F-pos-tempinfty-A}) with respect to time and take into account the heat equation (\ref{EcCalor:1F-pos-tempinfty-A}), we get \be \pder[T]{x}(s(t),t) \dot s(t)+a^2 \pder[^2 T]{x^2} (s(t),t)=0. \ee Solving for $\dot s$ and substituting into the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}) gives \be \frac{k}{\gamma s^{\alpha}(t)}\left[ \pder[T]{x}(s(t),t) \right]^2= a^2 \pder[^2 T]{x^2} (s(t),t). \tag{\ref{CondStefan:1F-pos-tempinfty-A}$^\star$} \ee This last condition will substitute the Stefan condition in the approximate problem obtained from the classical heat balance integral method. On the other hand, using equation (\ref{EcCalor:1F-pos-tempinfty-A}) and the condition (\ref{TempFront:1F-pos-tempinfty-A}) we have \begin{subeqnarray*} \dfrac{d}{d t} \int\limits_{0}^{s(t)} T(x,t) d x&=&\int\limits_{0}^{s(t)} \pder[T]{t}(x,t)d x +T(s(t),t)\dot{s}(t) \\ &=& \int\limits_{0}^{s(t)} a^2 \pder[^2 T]{x^2}(x,t)d x=a^2\left[\pder[T]{x}(s(t),t)-\pder[T]{x}(0,t) \right]. \end{subeqnarray*} Then, by applying the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}) it results that \be \frac{d}{d t} \int\limits_{0}^{s(t)} T(x,t) d x= -a^2\left[\frac{\gamma}{k}s^{\alpha} (t) \dot s(t)+\pder[T]{x}(0,t) \right].\tag{\ref{EcCalor:1F-pos-tempinfty-A}$^\star$} \ee The {\bf classical heat balance integral method} approximates problem (P) through a new problem that arises from replacing the heat equation (\ref{EcCalor:1F-pos-tempinfty-A}) by (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$) and the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}) by (\ref{CondStefan:1F-pos-tempinfty-A}$^\star$), keeping the rest of the conditions of (P) the same. In short, the method consists in solving the problem governed by (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$), (\ref{FrontFija:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}$^\star$) and (\ref{FrontInicial:1F-pos-tempinfty-A}). In \cite{Wo2001}, a {\bf modified integral balance method} is presented. It postulates to change only the heat equation, keeping the rest of the conditions, even the Stefan condition, the same. A priori, this method should work better than the classical one due to the fact that it changes fewer conditions of the exact problem.
It means that it consists in solving an approximate problem given by (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$), (\ref{FrontFija:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}) and (\ref{FrontInicial:1F-pos-tempinfty-A}). On the other hand, from the heat equation (\ref{EcCalor:1F-pos-tempinfty-A}) and the condition (\ref{TempFront:1F-pos-tempinfty-A}) we have \begin{subeqnarray*} \int\limits_0^{s(t)} \int\limits_0^x \pder[T]{t}(z,t) d z d x &=& \int\limits_0^{s(t)} \int\limits_0^x a^2 \pder[^2T]{z^2}(z,t) d z\; d x \\ &=& \int\limits_0^{s(t)} a^2 \left[\pder[T]{x}(x,t)-\pder[T]{x}(0,t) \right] d x \\ &=& a^2 \left[T(s(t),t)-T(0,t)-\pder[T]{x}(0,t)s(t) \right], \end{subeqnarray*} that is to say \be \int\limits_0^{s(t)} \int\limits_0^x \pder[T]{t}(z,t) d z d x = -a^2 \left[T(0,t)+\pder[T]{x}(0,t)s(t) \right].\tag{\ref{EcCalor:1F-pos-tempinfty-A}$^{\dag}$} \ee The {\bf refined integral method} introduced in \cite{SaSiCo2006} suggests solving an approximate problem given by (\ref{EcCalor:1F-pos-tempinfty-A}$^\dag$), (\ref{FrontFija:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}) and (\ref{FrontInicial:1F-pos-tempinfty-A}); that is to say, it replaces the heat equation (\ref{EcCalor:1F-pos-tempinfty-A}) by (\ref{EcCalor:1F-pos-tempinfty-A}$^{\dag}$). In all cases, to solve the above approximate problems it is necessary to adopt a suitable profile for the temperature. Throughout this paper we will assume a quadratic profile in space \begin{equation}\label{Perfil} \widetilde{T}(x,t)=t^{\alpha/2}\theta_{_\infty}\left[ \widetilde{A}\left( 1-\frac{x}{\widetilde{s}(t)}\right)+\widetilde{B}\left( 1-\frac{x}{\widetilde{s}(t)}\right)^2\right], \end{equation} where $\widetilde{T}$ and $\widetilde{s}$ will be approximations of $T$ and $s$, respectively. We can notice that in the chosen profile a power function of time arises in order to be compatible with the boundary conditions imposed in the exact problem. It is worth mentioning that, for the approximations to the problem (P$_{h}$), it will be enough to consider the same approximate problems stated for (P), changing only the boundary condition (\ref{FrontFija:1F-pos-tempinfty-A}) by (\ref{FrontFija:1F-pos-tempinfty-A}$^\star$). \section{One-phase Stefan problem with Dirichlet condition} \subsection{Exact solution} \medskip Before introducing the different approximation methods for problem (P), we present the exact solution, which was given in \cite{ZhWaBu2014} and \cite{ZhXi2015} for the cases when $\alpha\in\mathbb{N}_0$ and $\alpha\in\mathbb{R}^+\setminus\mathbb{N}_0$, respectively. Let us define the following non-dimensional parameter \be \text{Ste}=\frac{k\theta_{_\infty}}{\gamma a^{\alpha+2}}\label{Ste} \ee which is called the generalized Stefan number. We use the word ``generalized'' since, in the case that the latent heat $l$ is constant, i.e.\ $\alpha=0$, we recover the usual formula for the Stefan number, which, assuming a zero phase-change temperature, is given by $\text{Ste}=\frac{c\theta_\infty}{l}$. Notice that if we take $\alpha=0$ then the Dirichlet condition at the fixed face is given by $\theta_\infty$ and from the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}) the latent heat becomes $l=\gamma/\rho$.
Then, if we combine the results found in \cite{ZhWaBu2014} and \cite{ZhXi2015}, we can rewrite the solution of the problem (P) (as was done in the appendix of \cite{BoTa2018-EJDE}), obtaining for each $\alpha\in\mathbb{R}^+_0$ that: \begin{align} &T(x,t)=t^{\alpha/2}\left[AM\left(-\frac{\alpha}{2},\frac{1}{2},-\eta^2 \right) +B\eta M\left(-\frac{\alpha}{2}+\frac{1}{2},\frac{3}{2},-\eta^2 \right)\right],\\ &s(t)=2a\nu \sqrt{t}, \end{align} where $\eta=\frac{x}{2a\sqrt{t}}$ is the similarity variable, \be A=\theta_{_\infty},\qquad \qquad B=\frac{-\theta_{_\infty}M\left( -\frac{\alpha}{2},\frac{1}{2},-{\nu^2}\right)}{\nu M\left(-\frac{\alpha}{2}+\frac{1}{2},\frac{3}{2},-{\nu^2} \right)}, \ee and $\nu$ is the unique positive solution to the following equation \be\label{EcNuA} \frac{\text{Ste}}{2^{\alpha+1}}f(z)=z^{\alpha+1}, \qquad z>0, \ee where $f$ is defined by \be f(z)=\frac{1}{zM\left(\frac{\alpha}{2}+1,\frac{3}{2},z^2 \right)} \label{f} \ee and $M(a,b,z)$ is the Kummer function defined by \begin{align} & M(a,b,z)=\sum\limits_{s=0}^{\infty}\frac{(a)_s}{(b)_s s!}z^s ,\qquad\qquad \text{($b$ cannot be a non-positive integer)} \label{M} \end{align} where $(a)_s$ is the Pochhammer symbol: \begin{equation} (a)_s=a(a+1)(a+2)\dots (a+s-1), \quad \quad (a)_0=1. \end{equation} \begin{remark}\label{ExactaMenor1} If $0<\text{Ste}<1$, the unique solution $\nu$ of equation (\ref{EcNuA}) belongs to the interval $ (0,1)$. In fact, define $H(z)=\tfrac{\text{Ste}}{2^{\alpha+1}}f(z)-z^{\alpha+1}$. On one hand, we have $H(0^+)=+\infty$ due to the fact that $M\left(\tfrac{\alpha}{2}+1,\tfrac{3}{2},0\right)=1$. On the other hand, we obtain $H(1)<0$ as $\tfrac{\text{Ste}}{2^{\alpha+1}}<1< M\left(\tfrac{\alpha}{2}+1,\tfrac{3}{2},1 \right)$. \end{remark}
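Equation (\ref{EcNuA}) can be solved numerically in a few lines, since the Kummer function $M(a,b,z)$ is available in SciPy as \texttt{hyp1f1}. The following minimal sketch (our own illustration, not part of the original solution) rearranges (\ref{EcNuA}) as $\tfrac{\text{Ste}}{2^{\alpha+1}}-z^{\alpha+2}M\left(\tfrac{\alpha}{2}+1,\tfrac{3}{2},z^2\right)=0$ and applies a bracketing root finder on $(0,1)$, as justified by Remark \ref{ExactaMenor1}:
\begin{verbatim}
from scipy.special import hyp1f1     # Kummer function M(a, b, z)
from scipy.optimize import brentq

def nu_exact(Ste, alpha):
    # Eq. (EcNuA) with f(z) = 1/(z*M(alpha/2+1, 3/2, z^2)), multiplied
    # through by z*M(...) so the root finder sees a smooth function.
    g = lambda z: (Ste / 2**(alpha + 1)
                   - z**(alpha + 2) * hyp1f1(alpha/2 + 1, 1.5, z**2))
    return brentq(g, 1e-9, 1.0)      # the root lies in (0,1) for 0<Ste<1

print(nu_exact(0.5, 0))              # ~0.4648 (cf. the tables below)
\end{verbatim}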
\subsection{Approximate solutions} \medskip We are going to implement the different approximate techniques for the problem (P) and test their accuracy, taking advantage of the knowledge of the exact solution. First of all, we introduce a problem (P$_{1}$) which arises when applying the classical heat balance integral method to (P). According to the previous section, the \textbf{problem (P$_{1}$)} consists in finding the free boundary $s_{1}=s_{1}(t)$ and the temperature $T_{1}=T_{1}(x,t)$ in $0<x<s_{1}(t)$ such that conditions (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$), (\ref{FrontFija:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}$^\star$) and (\ref{FrontInicial:1F-pos-tempinfty-A}) are verified. Provided that $T_{1}$ assumes a quadratic profile in space like (\ref{Perfil}), we get the following result. \begin{teo}\label{TeoP1} If $0<\text{Ste}<1$, there exists at least one solution to problem $\mathrm{(P_{1})}$, given by \begin{eqnarray} T_{1}(x,t)&=&t^{\alpha/2}\theta_{_\infty} \left[ A_{1}\left(1-\frac{x}{s_{1}(t)}\right) +B_{1} \left(1-\frac{x}{s_{1}(t)}\right)^{2}\right], \label{PerfilT1}\\ s_{1}(t)&=& 2a\nu_{1} \sqrt{t}, \end{eqnarray} where the constants $A_{1}, B_{1}$ are defined as functions of $\nu_{1}$ by: \begin{align} A_{1}&=\frac{-2\left[ 3\; 2^\alpha \nu_{1}^{\alpha+2}+\mathrm{Ste}\left(-3+(1+\alpha)\nu_{1}^2\right)\right]}{\mathrm{Ste} \left(3+(1+\alpha)\nu_{1}^2\right)},\qquad\label{A1} \\ B_{1}&= \frac{3\left[2^{\alpha+1}\nu_{1}^{\alpha+2}+\mathrm{Ste}\left(-1+(1+\alpha)\nu_{1}^2\right) \right]}{\mathrm{Ste} \left(3+(1+\alpha)\nu_{1}^2\right)},\label{B1} \end{align} and the coefficient $\nu_{1}$ is a solution to the following equation \begin{align} & z^{2\alpha+4}(-3)\; 2^{2\alpha+1} (\alpha-2)+z^{2\alpha+2}(-9)\; 2^{2\alpha+1} +z^{4+\alpha} \left(-3\right)\; 2^\alpha (\alpha-3)(\alpha+1)\mathrm{Ste}\nonumber\\ &+z^{\alpha+2}\left( -3\right) \; 2^{\alpha+1} (\alpha+7)\mathrm{Ste}+z^{\alpha} 9\; 2^{\alpha} \mathrm{Ste}+z^4 2 (\alpha+1)^2\mathrm{Ste}^2\nonumber\\ &+z^2 (-12)(\alpha+1)\mathrm{Ste}^2+18\mathrm{Ste}^2=0, \qquad z>0.\label{EcNu1} \end{align} \end{teo} \begin{proof} First of all, we shall notice that if $T_{1}$ adopts the profile (\ref{PerfilT1}), it is evident that the condition (\ref{TempFront:1F-pos-tempinfty-A}) is automatically verified. From the imposed Dirichlet condition at the fixed boundary (\ref{FrontFija:1F-pos-tempinfty-A}) we get \be \label{1} A_{1}+B_{1}=1. \ee In addition, we have that $$\pder[T_{1}]{x}(x,t)=-t^{\alpha/2} \theta_{_\infty}\left[\frac{A_{1}}{s_{1}(t)}+\frac{2B_{1}}{s_{1}(t)}\left(1-\frac{x}{s_1(t)}\right) \right],$$ and $$ \pder[^2T_{1}]{x^2}(x,t)=t^{\alpha/2}\theta_{_\infty} \frac{2B_{1}}{s_1^2(t)}. $$ Therefore, from condition (\ref{CondStefan:1F-pos-tempinfty-A}$^\star$) we obtain $$\frac{k}{\gamma s^{\alpha}_{1}(t)} t^{\alpha} \theta_{_\infty}^2 \frac{A_{1}^2}{s_1^2 (t)}=a^2 t^{\alpha/2} \theta_{_\infty}\frac{2B_{1}}{s_1^2(t)}. $$ Then, it follows that $$s_{1}(t)= \left( \frac{A_{1}^2}{2 B_{1}} \frac{k \theta_{_\infty}}{\gamma a^2}\right)^{1/\alpha} \sqrt{t}.$$ Defining $\nu_{1}$ such that $\nu_{1}=\frac{1}{2a}\left( \frac{A_{1}^2}{2 B_{1}} \frac{k \theta_{_\infty}}{\gamma a^2}\right)^{1/\alpha}$, we deduce that \be \label{s1Demo} s_{1}(t)=2a\nu_{1}\sqrt{t} \ee where $\nu_{1}$, $A_{1}$ and $B_{1}$ are related as \be\label{2} A_{1}^2 =\frac{2^{\alpha+1} \nu_{1}^{\alpha}}{\text{Ste}} B_{1}. \ee Condition (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$) and \begin{align*} \frac{d}{d t} \int\limits_{0}^{s_1(t)} T_1(x,t) d x& =\frac{d}{d t} \int\limits_{0}^{s_1(t)} t^{\alpha/2} \theta_{_\infty} \left[ A_{1}\left(1-\frac{x}{s_{1}(t)}\right) +B_{1}\left(1-\frac{x}{s_{1}(t)}\right)^{2}\right] d x \\ &= \theta_{_\infty}\left(\frac{A_{1}}{2}+\frac{B_{1}}{3} \right) \left(\frac{\alpha}{2}t^{\alpha/2-1}s_{1}(t)+t^{\alpha/2}\dot s_{1}(t) \right), \end{align*} give \be \theta_{_\infty}\left(\tfrac{A_{1}}{2}+\tfrac{B_{1}}{3} \right) \left(\tfrac{\alpha}{2}t^{\alpha/2-1}s_{1}(t)+t^{\alpha/2}\dot s_{1}(t) \right)= -a^2 \left[\tfrac{\gamma}{k}s_{1}^{\alpha} (t) \dot s_{1}(t)+t^{\alpha/2}\theta_{_\infty}\tfrac{(A_{1}+2B_{1})}{s_{1}(t)} \right].
\ee According to (\ref{s1Demo}), it results that \be \label{3} A_{1}\left( (\alpha+1)\nu_{1}^2-1 \right) +B_{1} \left( \tfrac{2}{3} (\alpha+1)\nu_{1}^2-2\right)=\tfrac{-2^{\alpha+1}\nu_{1}^{\alpha+2}}{\text{Ste}}. \ee Thus, we have obtained three equations (\ref{1}), (\ref{2}) and (\ref{3}) for the unknown coefficients $A_{1}$, $B_{1}$ and $\nu_{1}$. From (\ref{1}) and (\ref{3}) we obtain that $A_{1}$ and $B_{1}$ are given as functions of $\nu_{1}$ by (\ref{A1}) and (\ref{B1}), respectively. Then, equation (\ref{2}) leads to the fact that $\nu_{1}$ must be a positive solution to (\ref{EcNu1}). For the existence of a solution to problem (P$_{1}$) it remains to prove that the function $w_{1}=w_{1}(z)$, defined as the left hand side of equation (\ref{EcNu1}), has at least one positive root. This can be easily checked by evaluating $w_{1}(0)=18\text{Ste}^2>0$ and $$w_{1}(1)=-\alpha^2 (3\; 2^{\alpha}-2\text{Ste})\text{Ste}-2\alpha(3 \; 4^{\alpha}+4\text{Ste}^2)-2(3\; 4^{\alpha}+3\; 2^{\alpha+2}\text{Ste}-4\text{Ste}^2).$$ From the assumption that $0<\text{Ste}<1$, we obtain $3\; 2^{\alpha}-2\text{Ste}>0$, and $$3\; 4^{\alpha}+3\; 2^{\alpha+2}\text{Ste}-4\text{Ste}^2>3\; 2^{\alpha+2}\text{Ste}-4\text{Ste}^2=4\text{Ste}(3\; 2^{\alpha}-\text{Ste})>0.$$ Therefore $w_{1}(1)<0$. Consequently, we can assure that there exists at least one positive solution to equation (\ref{EcNu1}) in the interval $(0,1)$. \end{proof} \begin{remark} The approximate free boundary $s_{1}$ behaves as a square root of time, just like the exact one $s$; that is, $s_{1}(t)= 2a\nu_{1} \sqrt{t}$ while $s(t)= 2a\nu \sqrt{t}$. \end{remark} \begin{remark} After Theorem \ref{TeoP1}, the question about uniqueness of solution follows. We found that there exist different values of $\alpha$ and $0<\text{Ste}<1$ that lead to multiple roots of equation (\ref{EcNu1}), i.e.\ $w_1(z)=0,\; z>0$ (see Figure \ref{Fig:Nu1NoUnico}). \begin{figure}[h!!] \begin{center} \includegraphics[scale=0.2]{NoUnicidad.eps} \end{center} \caption{{\footnotesize Plot of $w_1(z)$ for $\alpha=1$ and $\text{Ste}=0.5$ }}\label{Fig:Nu1NoUnico} \end{figure} However, our study can be restricted to the roots of $w_1(z)$ located in the interval $(0,1)$, in view of the proof of Theorem \ref{TeoP1} and also in view of Remark \ref{ExactaMenor1}. For the particular case $\alpha=0$, the uniqueness analysis was given in \cite{BoSeTa2018}. Although we could not prove it analytically, by setting different values for $\alpha$ and $\text{Ste}$ we can see that there exists just one root of the polynomial $w_1(z)$ located in the interval $(0,1)$. In Figure \ref{Fig:Nu1-Ste05} we illustrate this fact setting $\alpha=0.5, 1,1.5,2,3,5,10$ and $\text{Ste}=0.5$. We only plot the range $0\leq z\leq 0.5$ in order to better appreciate this fact. \begin{center} \begin{figure}[h!!] \begin{center} \includegraphics[scale=0.3]{Nu1Ste05.eps} \end{center} \caption{{\footnotesize Plot of $w_1(z)$ for different values of $\alpha$ setting $\text{Ste}=0.5$ }}\label{Fig:Nu1-Ste05} \end{figure} \end{center} \end{remark} \medskip With the purpose of testing the classical integral balance method, and in view of the above remark, we will only compare graphically the coefficient $\nu_{1}$ that characterizes the approximate free boundary $s_{1}$ with the coefficient $\nu$ that characterizes the exact free boundary $s$. In Figure \ref{Fig:Nu1VsNu}, we illustrate these comparisons for different values of $0<\text{Ste}<1$ and $\alpha$.
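The root of $w_1$ in $(0,1)$ can also be located numerically. The sketch below (again our own illustration, assuming SciPy) transcribes the left-hand side of (\ref{EcNu1}) term by term and applies a bracketing root finder, which is justified by the sign change $w_1(0)>0$, $w_1(1)<0$ established in the proof:
\begin{verbatim}
from scipy.optimize import brentq

def nu1(Ste, alpha):
    # Left-hand side w1(z) of Eq. (EcNu1), transcribed term by term.
    def w1(z):
        return (-3 * 2**(2*alpha+1) * (alpha-2) * z**(2*alpha+4)
                - 9 * 2**(2*alpha+1) * z**(2*alpha+2)
                - 3 * 2**alpha * (alpha-3) * (alpha+1) * Ste * z**(alpha+4)
                - 3 * 2**(alpha+1) * (alpha+7) * Ste * z**(alpha+2)
                + 9 * 2**alpha * Ste * z**alpha
                + 2 * (alpha+1)**2 * Ste**2 * z**4
                - 12 * (alpha+1) * Ste**2 * z**2
                + 18 * Ste**2)
    return brentq(w1, 1e-9, 1.0)     # w1(0) > 0 and w1(1) < 0

print(nu1(0.5, 0))                   # ~0.4869 for alpha=0, Ste=0.5
\end{verbatim}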
\begin{figure}[h!!] \begin{center} \includegraphics[scale=0.4]{Nu1VsNu.eps} \end{center} \begin{center} \caption{{\footnotesize Plot of $\nu$ and $\nu_{_1}$ against $\text{Ste}$ for different values of $\alpha=0.5,1,5$ }}\label{Fig:Nu1VsNu} \end{center} \end{figure} For the comparisons we have assumed that $0<\text{Ste}<1$, not only due to the hypothesis in Theorem \ref{TeoP1}, but also because, in general, the majority of phase change materials under a realistic temperature present a Stefan number that does not exceed 1 (see \cite{So1979}). \vspace{1cm} Now, we turn to the modified integral balance method. In this case we state an approximate \textbf{problem (P$_{2}$)} for the problem (P), as follows: find the free boundary $s_{2}=s_{2}(t)$ and the temperature $T_{2}=T_{2}(x,t)$ in $0<x<s_{2}(t)$ such that equation (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$) and conditions (\ref{FrontFija:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}) and (\ref{FrontInicial:1F-pos-tempinfty-A}) are satisfied. Assuming a quadratic profile in space for $T_{2}$, we obtain the next theorem. \begin{teo}\label{TeoP2} The problem $\mathrm{(P_{2})}$ has a unique solution given by: \begin{eqnarray} T_{2}(x,t)&=&t^{\alpha/2}\theta_{_\infty} \left[ A_{2}\left(1-\frac{x}{s_{2}(t)}\right) +B_{2} \left(1-\frac{x}{s_{2}(t)}\right)^{2}\right],\label{PerfilT2} \\ s_{2}(t)&=& 2a\nu_{2} \sqrt{t}, \end{eqnarray} where the constants $A_{2}$ and $B_{2}$ are given by \begin{align} A_{2}&=\frac{6\mathrm{Ste}-2\mathrm{Ste}\;\nu_{2}^2 (\alpha+1)-3\; 2^{\alpha+1} \nu_{2}^{\alpha+2}}{\mathrm{Ste }\;\left(\nu_{2}^2 (\alpha+1)+3 \right)},\label{A2}\\ B_{2}&=\frac{-3\mathrm{Ste}\;+3\mathrm{Ste}\;\nu_{2}^2(\alpha+1)+3\; 2^{\alpha+1} \nu_{2}^{\alpha+2}}{\mathrm{Ste }\;\left(\nu_{2}^2 (\alpha+1)+3 \right)},\label{B2} \end{align} and where $\nu_2$ is the unique positive solution to the equation \be \label{EcNu2} z^{\alpha+4} 2^{\alpha} (\alpha+1)+z^{\alpha+2}3\; 2^{\alpha+1}+z^2 \mathrm{Ste} (\alpha+1)-3\mathrm{Ste}=0, \qquad z>0. \ee \end{teo} \begin{proof} Condition (\ref{TempFront:1F-pos-tempinfty-A}) is clearly verified by the chosen temperature profile. From the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}) we obtain \be -kt^{\alpha/2}\theta_{_\infty}\tfrac{A_{2}}{s_{2}(t)}=-\gamma s_{2}^{\alpha}(t)\dot s_{2}(t). \ee Therefore it results that \be s_{2}(t)=\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{2}\right)^{1/(\alpha+2)} \sqrt{t}. \ee If we introduce the coefficient $\nu_{2}$ such that $ \nu_{2}=\frac{1}{2a}\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{2}\right)^{1/(\alpha+2)}$, the free boundary can be expressed as \be s_{2}(t)=2a\; \nu_{2}\sqrt{t}, \ee where the following relation holds \be\label{RelP2-10} A_{2}=\frac{2^{\alpha+1} \nu_{2}^{\alpha+2}}{\text{Ste}} . \ee Taking into account the boundary condition at the fixed face (\ref{FrontFija:1F-pos-tempinfty-A}) we get \be \label{RelP2-20} A_{2}+B_{2}=1. \ee In addition, by virtue of equation (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$) we get \be \label{RelP2-30} A_{2}\left( (\alpha+1)\nu_{2}^2-1 \right) +B_{2} \left( \tfrac{2}{3} (\alpha+1)\nu_{2}^2-2\right)=\tfrac{-2^{\alpha+1}\nu_{2}^{\alpha+2}}{\text{Ste}}. \ee From equations (\ref{RelP2-10}), (\ref{RelP2-20}) and (\ref{RelP2-30}) we conclude that $A_{2}$ and $B_{2}$ can be written as functions of $\nu_{2}$ through formulas (\ref{A2}) and (\ref{B2}), respectively.
In addition, $\nu_{2}$ must be a solution to the equation (\ref{EcNu2}). So, to finish the proof, it remains to show that equation (\ref{EcNu2}) has a unique positive solution, i.e.\ that the function defined by the left hand side of this equation, $w_{2}=w_{2}(z)$, has a unique positive root. This is easily checked by noting that $$w_{2}(0)=-3\text{Ste}<0,\qquad w_{2}(+\infty)=+\infty, \qquad \frac{d w_{2}}{d z}(z)>0,\quad \forall z>0.$$ \end{proof} In Figure \ref{Fig:Nu2VsNu}, as we did for the classical heat balance integral method, we compare the coefficients $\nu_{2}$ (approximate) and $\nu$ (exact) for different values of $0<\text{Ste}<1$ and $\alpha$. \begin{figure}[h!!] \begin{center} \includegraphics[scale=0.4]{Nu2VsNu.eps} \end{center} \caption{{\footnotesize Plot of $\nu$ and $\nu_{2}$ against $\text{Ste}$ for different values of $\alpha=0.5,1,5$ }}\label{Fig:Nu2VsNu} \end{figure} The refined integral method intends to approximate the problem (P) through solving a \textbf{problem (P$_{3}$)} that consists in finding the free boundary $s_{3}=s_{3}(t)$ and the temperature $T_{3}=T_{3}(x,t)$ in $0<x<s_{3}(t)$ such that equation (\ref{EcCalor:1F-pos-tempinfty-A}$^\dag$) and conditions (\ref{FrontFija:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}) and (\ref{FrontInicial:1F-pos-tempinfty-A}) are satisfied. Under the assumption that $T_{3}$ adopts a quadratic profile in space like (\ref{Perfil}), we can state the following result: \begin{teo}\label{TeoP3} The unique solution to problem $\mathrm{(P_{3})}$ is given by \begin{eqnarray} T_{3}(x,t)&=&t^{\alpha/2}\left[ A_{3}\theta_{_\infty}\left(1-\frac{x}{s_{3}(t)}\right) +B_{3}\theta_{_\infty} \left(1-\frac{x}{s_{3}(t)}\right)^{2}\right],\label{PerfilT3} \\ s_{3}(t)&=& 2a\nu_{3} \sqrt{t}, \end{eqnarray} where the constants $A_{3}$ and $B_{3}$ are given by \begin{align} A_{3}&=\frac{6\mathrm{Ste}-2\mathrm{Ste}\;\nu_{3}^2 (\alpha+1)-3\; 2^{\alpha+1} \nu_{3}^{\alpha+2}}{\mathrm{Ste }\;\left(\nu_{3}^2 (\alpha+1)+3 \right)},\label{A3}\\ B_{3}&=\frac{-3\mathrm{Ste}\;+3\mathrm{Ste}\;\nu_{3}^2(\alpha+1)+3\; 2^{\alpha+1} \nu_{3}^{\alpha+2}}{\mathrm{Ste }\;\left(\nu_{3}^2 (\alpha+1)+3 \right)},\label{B3} \end{align} and where $\nu_{3}$ is the unique positive solution to the equation \be \label{EcNu3} z^{\alpha+4} 2^{\alpha+1} \alpha+z^{\alpha+2}3\; 2^{\alpha+2}+z^2 \mathrm{Ste} (2+3\alpha)-6\mathrm{Ste}=0, \qquad z>0. \ee \end{teo} \begin{proof} The proof is similar to that of Theorem \ref{TeoP2}. The only difference to take into account is the fact that equation (\ref{EcCalor:1F-pos-tempinfty-A}$^{\dag}$) is equivalent to \be \nu_{3}^2 \left[A_{3}\left( \frac{1}{3}+\frac{2}{3}\alpha\right)+B_{3}\left( \frac{1}{3}+\frac{\alpha}{2}\right) \right]=B_{3}. \ee \end{proof} \medskip In Figure \ref{Fig:Nu3VsNu} we compare graphically the coefficient $\nu_{3}$ that characterizes the approximate free boundary $s_{3}$ with the coefficient $\nu$ that characterizes the exact free boundary $s$. \begin{figure}[h!!] \begin{center} \includegraphics[scale=0.4]{Nu3VsNu.eps} \end{center} \caption{{\footnotesize Plot of $\nu$ and $\nu_{3}$ against $\text{Ste}$ for different values of $\alpha=0.5,1,5$ }}\label{Fig:Nu3VsNu} \end{figure}
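As with (\ref{EcNu1}), the coefficients $\nu_{2}$ and $\nu_{3}$ are easily obtained numerically from (\ref{EcNu2}) and (\ref{EcNu3}); a minimal sketch (our own illustration, assuming SciPy) reads:
\begin{verbatim}
from scipy.optimize import brentq

def nu2(Ste, alpha):
    # Eq. (EcNu2); w2(0) = -3*Ste < 0 and w2 is increasing, so any
    # bracket [0, Z] with w2(Z) > 0 works.
    w2 = lambda z: (2**alpha * (alpha+1) * z**(alpha+4)
                    + 3 * 2**(alpha+1) * z**(alpha+2)
                    + Ste * (alpha+1) * z**2 - 3 * Ste)
    return brentq(w2, 0.0, 5.0)

def nu3(Ste, alpha):
    # Eq. (EcNu3), with the same structure as above.
    w3 = lambda z: (2**(alpha+1) * alpha * z**(alpha+4)
                    + 3 * 2**(alpha+2) * z**(alpha+2)
                    + Ste * (2 + 3*alpha) * z**2 - 6 * Ste)
    return brentq(w3, 0.0, 5.0)

print(nu2(0.5, 0), nu3(0.5, 0))   # ~0.4723 and ~0.4804 for alpha=0, Ste=0.5
\end{verbatim}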
\subsection{Comparisons between the approximate solutions and the exact one} \medskip In the previous section we have applied three different methods to approximate the solution to the Stefan problem (P), with a Dirichlet condition at the fixed face and a variable latent heat. For each method we have stated a problem (P$_{{i}}$), ${i}=1,2,3$, and we have compared graphically the dimensionless coefficients $\nu_{{i}}$ that characterize their free boundaries $s_{{i}}$ with the coefficient $\nu$ that characterizes the exact free boundary $s$. The goal now is to compare numerically, for different Stefan numbers, the coefficient $\nu$ given by (\ref{EcNuA}) with the approximate coefficients $\nu_{1}$, $\nu_{2}$ and $\nu_{3}$ defined by (\ref{EcNu1}), (\ref{EcNu2}) and (\ref{EcNu3}), respectively. To make the comparisons more representative, in Tables \ref{Tabla:NuiVsNu1}-\ref{Tabla:NuiVsNu3} we show the exact value obtained for $\nu$, the approximate values $\nu_{{i}}$, and the percentage relative error committed in each case, $E_{\text{rel}}(\nu_{{i}})=100\left\vert\frac{\nu-\nu_{{i}}}{\nu} \right\vert$, ${i}=1,2,3$, for different values of $\text{Ste}$ and $\alpha$. \begin{table} \small \caption{{\footnotesize Dimensionless coefficients of the free boundaries and their percentage relative error for $\alpha=0$.}} \label{Tabla:NuiVsNu1} \begin{center} \begin{tabular}{cc|cc|cc|cc} \hline Ste & $\nu$ & $\nu_1$ & $E_{\text{rel}}(\nu_1)$ & $ \nu_{2} $ & $E_{\text{rel}}(\nu_{2})$ & $\nu_{3}$ & $E_{\text{rel}}(\nu_{3})$ \\ \hline 0.1 & 0.2200 & 0.2232 & 1.4530 \% & 0.2209 & 0.3947 \% & 0.2218 & 0.7954 \% \\ 0.2 & 0.3064 & 0.3143 & 2.5729 \% & 0.3087 & 0.7499 \% & 0.3111 & 1.5213 \% \\ 0.3 & 0.3699 & 0.3827 & 3.4575 \% & 0.3738 & 1.0707 \% & 0.3780 & 2.1856 \% \\ 0.4 & 0.4212 & 0.4388 & 4.1687 \% & 0.4270 & 1.3618 \% & 0.4330 & 2.7953 \% \\ 0.5 & 0.4648 & 0.4869 & 4.7478 \% & 0.4723 & 1.6266 \% & 0.4804 & 3.3561 \% \\ 0.6 & 0.5028 & 0.5290 & 5.2236 \% & 0.5122 & 1.8683 \% & 0.5222 & 3.8729 \% \\ 0.7 & 0.5365 & 0.5666 & 5.6173 \% & 0.5477 & 2.0895 \% & 0.5599 & 4.3501 \% \\ 0.8 & 0.5669 & 0.6006 & 5.9443 \% & 0.5799 & 2.2923 \% & 0.5941 & 4.7913 \% \\ 0.9 & 0.5946 & 0.6316 & 6.2165 \% & 0.6094 & 2.4786 \% & 0.6255 & 5.1999 \% \\ 1.0 & 0.6201 & 0.6600 & 6.4432 \% & 0.6365 & 2.6500 \% & 0.6547 & 5.5786 \% \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \small \caption{{\footnotesize Dimensionless coefficients of the free boundaries and their percentage relative error for $\alpha=0.5$.}} \label{Tabla:NuiVsNu2} \begin{center} \begin{tabular}{cc|cc|cc|cc} \hline Ste & $\nu$ & $\nu_{_1}$ & $E_{\text{rel}}(\nu_{_1})$ & $ \nu_{_2} $ & $E_{\text{rel}}(\nu_{_2})$ & $\nu_{_3}$ & $E_{\text{rel}}(\nu_{_3})$ \\ \hline 0.1 & 0.2569 & 0.2587 & 0.6956 \% & 0.2574 & 0.2001 \% & 0.2580 & 0.4012 \% \\ 0.2 & 0.3339 & 0.3372 & 0.9999 \% & 0.3349 & 0.3147 \% & 0.3360 & 0.6321 \% \\ 0.3 & 0.3876 & 0.3921 & 1.1718 \% & 0.3891 & 0.3974 \% & 0.3907 & 0.7995 \% \\ 0.4 & 0.4298 & 0.4353 & 1.2678 \% & 0.4318 & 0.4596 \% & 0.4338 & 0.9260 \% \\ 0.5 & 0.4650 & 0.4711 & 1.3143 \% & 0.4674 & 0.5067 \% & 0.4698 & 1.0225 \% \\ 0.6 & 0.4953 & 0.5018 & 1.3264 \% & 0.4980 & 0.5423 \% & 0.5007 & 1.0959 \% \\ 0.7 & 0.5220 & 0.5288 & 1.3133 \% & 0.5249 & 0.5684 \% & 0.5280 & 1.1508 \% \\ 0.8 & 0.5458 & 0.5528 & 1.2814 \% & 0.5491 & 0.5869 \% & 0.5523 & 1.1905 \% \\ 0.9 & 0.5675 & 0.5745 & 1.2352 \% & 0.5709 & 0.5989 \% & 0.5744 & 1.2173 \% \\ 1.0 & 0.5873 & 0.5943 & 1.1777 \% & 0.5909 & 0.6054 \% & 0.5946 & 1.2334 \% \\ \hline \end{tabular} \end{center} \end{table} \begin{table}\small \caption{{\footnotesize Dimensionless coefficients of the free boundaries and their percentage relative error for $\alpha=5$.}} \label{Tabla:NuiVsNu3} \begin{center} \begin{tabular}{cc|cc|cc|cc} \hline Ste & $\nu$ & $\nu_{_1}$ &
$E_{\text{rel}}(\nu_{_1})$ & $ \nu_{_2} $ & $E_{\text{rel}}(\nu_{_2})$ & $\nu_{_3}$ & $E_{\text{rel}}(\nu_{_3})$ \\ \hline 0.1 & 0.3793 & 0.3563 & 6.0700 \% & 0.3723 & 1.8469 \% & 0.3656 & 3.6135 \% \\ 0.2 & 0.4151 & 0.3849 & 7.2853 \% & 0.4055 & 2.3333 \% & 0.3963 & 4.5496 \% \\ 0.3 & 0.4374 & 0.4020 & 8.0816 \% & 0.4256 & 2.6810 \% & 0.4145 & 5.2154 \% \\ 0.4 & 0.4537 & 0.4143 & 8.6859 \% & 0.4403 & 2.9615 \% & 0.4276 & 5.7505 \% \\ 0.5 & 0.4667 & 0.4239 & 9.1776 \% & 0.4518 & 3.2010 \% & 0.4377 & 6.2058 \% \\ 0.6 & 0.4775 & 0.4317 & 9.5943 \% & 0.4612 & 3.4122 \% & 0.4460 & 6.6060 \% \\ 0.7 & 0.4869 & 0.4384 & 9.9572 \% & 0.4693 & 3.6025 \% & 0.4529 & 6.9656 \% \\ 0.8 & 0.4950 & 0.4442 & 10.2795 \% & 0.4763 & 3.7766 \% & 0.4589 & 7.2936 \% \\ 0.9 & 0.5023 & 0.4492 & 10.5699 \% & 0.4826 & 3.9376 \% & 0.4642 & 7.5962 \% \\ 1.0 & 0.5090 & 0.4538 & 10.8345 \% & 0.4881 & 4.0880 \% & 0.4689 & 7.8780 \% \\ \hline \end{tabular} \end{center} \end{table} \newpage From the tables, we can notice that for $\alpha=0.5$ the error committed by each method is lower than for $\alpha=0$ or $\alpha=5$. In all cases, the method which shows the greatest accuracy is the modified integral balance method. In other words, the best approximate problem for (P) is given by problem (P$_{_2}$). Besides, we can also provide an illustration of the exact temperature $T$ together with the approximate temperatures $T_{_{i}}$, ${i}=1,2,3$, given by (\ref{PerfilT1}), (\ref{PerfilT2}) and (\ref{PerfilT3}), respectively. If we consider $\alpha=5$, $\text{Ste}=0.5$, $\theta_{_\infty}=30$ and $a=1$, we obtain Figures (4)-(7). \begin{figure}[h!]\centering \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{Exacta.eps} \caption{{\footnotesize Colour map for $T$ }} \end{center} \label{Fig:Exacta} \end{minipage} \begin {minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{BIC.eps} \caption{ {\footnotesize Colour map for $T_{_1}$}} \end{center} \label{Fig:BIC} \end{minipage} \end{figure} \begin{figure}[h!]\centering \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{BIM.eps} \caption{{\footnotesize Colour map for $T_{_2}$ }} \end{center} \label{Fig:BIM} \end{minipage} \begin {minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{RIM.eps} \caption{ {\footnotesize Colour map for $T_{_3}$}} \end{center} \label{Fig:RIM} \end{minipage} \end{figure}
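Putting the above sketches together, the rows of Tables \ref{Tabla:NuiVsNu1}-\ref{Tabla:NuiVsNu3} can be regenerated directly; for instance, assuming the functions \texttt{nu\_exact}, \texttt{nu1}, \texttt{nu2} and \texttt{nu3} defined in the earlier sketches:
\begin{verbatim}
# Regenerate one row of Table 1 (alpha = 0, Ste = 0.5).
Ste, alpha = 0.5, 0
nu = nu_exact(Ste, alpha)
for name, solver in (("nu1", nu1), ("nu2", nu2), ("nu3", nu3)):
    v = solver(Ste, alpha)
    print(name, round(v, 4), f"{100 * abs(nu - v) / nu:.4f} %")
# Expected: 0.4869 (4.75 %), 0.4723 (1.63 %), 0.4804 (3.36 %).
\end{verbatim}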
\subsection{Exact solution} \medskip We recall that the exact solution to problem (P$_{h}$), governed by equations (\ref{EcCalor:1F-pos-tempinfty-A}), (\ref{FrontFijaConvectiva}) and (\ref{TempFront:1F-pos-tempinfty-A})-(\ref{FrontInicial:1F-pos-tempinfty-A}) and given in \cite{BoTa2018-CAA}, can be written as \begin{align} &T_{h}(x,t)= t^{\alpha/2}\left[ A_{h} M\left(-\frac{\alpha}{2}, \frac{1}{2},-\eta^2\right)+B_{h} \eta M\left(-\frac{\alpha}{2}+\frac{1}{2},\frac{3}{2},-\eta^2\right)\right], \\ &s_{h}(t) =2 a \nu_{h}\sqrt{ t}, \end{align} where $\eta=\frac{x}{2a\sqrt{t}}$ is the similarity variable, the coefficients $A_{h}$ and $B_{h}$ are given by \begin{align} & A_{h}=\frac{-\nu_{h} M\left(-\frac{\alpha}{2}+\frac{1}{2},\frac{3}{2},-\nu_{h}^2\right)}{M\left(-\frac{\alpha}{2},\frac{1}{2},-\nu_{h}^2\right)}B_{h}, \\ & B_{h}=\frac{- \theta_{_\infty} M\left( -\frac{\alpha}{2},\frac{1}{2},-\nu_{h}^2\right)}{\left[\frac{1}{2\text{Bi}} M\left( -\frac{\alpha}{2},\frac{1}{2},-\nu_{h}^2\right)+ \nu_{h} M\left( -\frac{\alpha}{2}+\frac{1}{2},\frac{3}{2},-\nu_{h}^2\right) \right]}, \end{align} and with $\nu_{h}$ defined as the unique solution to the following equation \be \label{EcNuh} \frac{\text{Ste}}{ 2^{\alpha+1} } \frac{1}{\left[\frac{1}{f(z)}+\frac{1}{2\text{Bi} } M\left(\frac{\alpha}{2}+\frac{1}{2},\frac{1}{2},z^2 \right)\right]} =z^{\alpha+1},\qquad z>0, \ee where $\text{Ste}$ and $f$ are given by (\ref{Ste}) and (\ref{f}), respectively, and where the Biot number is defined by $\text{Bi}=\frac{ah}{k}$. In \cite{BoTa2018-CAA} it was also proved that the unique solution to the exact problem with convective condition (P$_{h}$) converges pointwise to the unique solution to the problem with temperature condition (P) when the Biot number goes to infinity (i.e., $h\to \infty$). \subsection{Approximate solutions and convergence} \medskip As was done for problem (P), we will now apply the classical integral balance method, the modified integral balance method and the refined integral method to the problem (P$_{h}$). For each method we are going to state an approximate problem (P$_{_{{ih}}}$), $i=1,2,3$. Assuming a quadratic profile in space, we are going to obtain the solutions to the approximate problems. Finally, we will show that the solution of each problem (P$_{_{{ih}}}$) converges to the solution of the problem (P$_{i}$) defined in the previous section when $h\to\infty$. This fact is intuitively expected because the same happens for the exact problems (P$_{h}$) and (P). We introduce an approximate \textbf{problem (P$_{_{{1h}}}$)} that arises when applying the classical heat balance integral method to the problem (P$_{h}$). It consists in finding the free boundary $s_{_{{1h}}}=s_{_{{1h}}}(t)$ and the temperature $T_{_{{1h}}}=T_{_{{1h}}}(x,t)$ in $0<x<s_{_{{1h}}}$ such that conditions (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$), (\ref{FrontFijaConvectiva}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}$^\star$) and (\ref{FrontInicial:1F-pos-tempinfty-A}) are satisfied.
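Before turning to the analysis of (P$_{_{1h}}$), we remark that, although $\nu_{h}$ has to be obtained numerically from the transcendental equation (\ref{EcNuh}), once it is available the exact solution recalled above can be evaluated with standard scientific software, since Kummer's function $M$ is widely implemented. The following Python sketch is a minimal illustration (the parameter values, and in particular the value of $\nu_{h}$, are illustrative assumptions only), based on \texttt{scipy.special.hyp1f1}:
\begin{verbatim}
import numpy as np
from scipy.special import hyp1f1  # Kummer's function M(a, b, z)

alpha, a, theta_inf, Bi = 5.0, 1.0, 30.0, 10.0  # illustrative values
nu_h = 0.45  # assumed to be precomputed from equation (EcNuh)

M1 = hyp1f1(-alpha/2, 0.5, -nu_h**2)
M2 = hyp1f1(-alpha/2 + 0.5, 1.5, -nu_h**2)
B_h = -theta_inf*M1/(M1/(2*Bi) + nu_h*M2)
A_h = -nu_h*M2/M1*B_h

def T_h(x, t):
    # exact temperature profile, valid for 0 < x < s_h(t) = 2*a*nu_h*sqrt(t)
    eta = x/(2*a*np.sqrt(t))
    return t**(alpha/2)*(A_h*hyp1f1(-alpha/2, 0.5, -eta**2)
                         + B_h*eta*hyp1f1(-alpha/2 + 0.5, 1.5, -eta**2))
\end{verbatim}
The same routine can be used to generate colour maps of $T_{h}$ such as the ones presented later in this section.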
Provided that $T_{_{1h}}$ adopts a quadratic profile in space like (\ref{Perfil}), we can prove the following result: \begin{teo} If $0<\mathrm{Ste}<1$, $\alpha\geq 0$ and $\mathrm{Bi}$ is large enough, there exists at least one solution to problem $\mathrm{(P_{_{1h}})}$, which is given by \begin{eqnarray} T_{_{1h}}(x,t)&=&t^{\alpha/2}\theta_{_\infty} \left[ A_{_{1h}}\left(1-\frac{x}{s_{_{1h}}(t)}\right) +B_{_{1h}} \left(1-\frac{x}{s_{_{1h}}(t)}\right)^{2}\right],\label{PerfilT1h} \\ s_{_{1h}}(t)&=& 2a\nu_{_{1h}} \sqrt{t}, \end{eqnarray} where the constants $A_{_{1h}}$ and $B_{_{1h}}$ are defined as functions of $\nu_{_{1h}}$ by \begin{align} A_{_{1h}}&=\frac{6\mathrm{Ste}-2\mathrm{Ste}\; \nu^2_{_{1h}}(\alpha+1)-\frac{3}{\mathrm{Bi}} 2^{\alpha+1} \nu_{_{1h}}^{\alpha+1}-3\; 2^{\alpha+1}\nu_{_{1h}}^{\alpha+2} }{\mathrm{Ste}\left[\nu_{_{1h}}^2 (\alpha+1)+\frac{2}{\mathrm{Bi}} \nu_{_{1h}}(\alpha+1)+3 \right]},\qquad\label{A1h} \\[0.2cm] B_{_{1h}}&= \frac{-3\mathrm{Ste}+3\mathrm{Ste}\; \nu_{_{1h}}^2(\alpha+1)+\frac{3}{\mathrm{Bi}} 2^{\alpha} \nu_{_{1h}}^{\alpha+1}+3\;2^{\alpha+1} \nu_{_{1h}}^{\alpha+2} }{\mathrm{Ste}\left[\nu_{_{1h}}^2 (\alpha+1)+\frac{2}{\mathrm{Bi}} \nu_{_{1h}}(\alpha+1)+3 \right]},\label{B1h} \end{align} and where $\nu_{_{1h}}$ is a solution to the following equation \begin{align} &z^{2\alpha+4} (-3)2^{2\alpha+1}(\alpha-2)+ z^{2\alpha+3}(-3)\frac{2^{2\alpha}}{\mathrm{Bi}}(5\alpha-7)+z^{2\alpha+2}(-3)2^{2\alpha+1}\left(\frac{\alpha-2}{\mathrm{Bi}^2} +3\right)\nonumber \\ &+z^{2\alpha+1}(-9)\frac{2^{2\alpha}}{\mathrm{Bi}}+z^{\alpha+4} (-3)2^{\alpha}\mathrm{Ste} (\alpha-3)(\alpha+1)+z^{\alpha+3}(-3)\frac{2^{\alpha+1}}{\mathrm{Bi}} \mathrm{Ste} (\alpha-1) (\alpha+1)\nonumber \\ &+z^{\alpha+2} (-3)2^{\alpha+1}\mathrm{Ste} (\alpha+7)+z^{\alpha+1} 3 \frac{2^{\alpha+1}}{\mathrm{Bi}}\mathrm{Ste} (\alpha-5)+z^{\alpha} 9 \;2^\alpha \mathrm{Ste}+z^4 2 \mathrm{Ste}^2 (1+\alpha)^2\nonumber \\ &+ z^2 (-12)\mathrm{Ste}^2 (\alpha+1)+18\mathrm{Ste}^2=0, \qquad z>0. \label{EcNu1h} \end{align} \end{teo} \begin{proof} We shall first notice that the chosen profile (\ref{PerfilT1h}) makes condition (\ref{TempFront:1F-pos-tempinfty-A}) hold automatically. In addition, we have $$\pder[T_{_{1h}}]{x}(x,t)=-t^{\alpha/2} \theta_{_\infty}\left[\frac{A_{_{1h}}}{s_{_{1h}}(t)}+\frac{2B_{_{1h}}}{s_{_{1h}}(t)}\left(1-\frac{x}{s_{_{1h}}(t)}\right) \right],$$ and $$ \pder[^2T_{_{1h}}]{x^2}(x,t)=t^{\alpha/2}\theta_{_\infty} \frac{2B_{_{1h}}}{s_{_{1h}}^2(t)}. $$ By virtue of condition (\ref{CondStefan:1F-pos-tempinfty-A}$^\star$), the following equality holds $$\frac{k}{\gamma s^{\alpha}_{_{1h}}(t)} t^{\alpha} \theta_{_\infty}^2 \frac{A_{_{1h}}^2}{s_{_{1h}}^2 (t)}=a^2 t^{\alpha/2} \theta_{_\infty}\frac{2B_{_{1h}}}{s_{_{1h}}^2(t)}. $$ Consequently $$s_{_{1h}}(t)= \left( \frac{A_{_{1h}}^2}{2 B_{_{1h}}} \frac{k \theta_{_\infty}}{\gamma a^2}\right)^{1/\alpha} \sqrt{t}.$$ Defining $\nu_{_{1h}}$ such that $\nu_{_{1h}}=\frac{1}{2a}\left( \frac{A_{_{1h}}^2}{2 B_{_{1h}}} \frac{k \theta_{_\infty}}{\gamma a^2}\right)^{1/\alpha}$, we conclude that \be \label{s1hDemo} s_{_{1h}}(t)=2a\nu_{_{1h}}\sqrt{t}, \ee where $\nu_{_{1h}}$ is an unknown that is related to $A_{_{1h}}$ and $B_{_{1h}}$ in the following way \be\label{1h} A_{_{1h}}^2 =\frac{2^{\alpha+1} \nu_{_{1h}}^{\alpha}}{\text{Ste}} B_{_{1h}}. \ee Then, condition (\ref{EcCalor:1F-pos-tempinfty-A}$^{\star}$) leads to \be\label{2h} A_{_{1h}}\left[(\alpha+1)\nu^2_{_{1h}}-1 \right]+B_{_{1h}}\left[\frac{2}{3}(\alpha+1)\nu_{_{1h}}^2-2 \right]=-\frac{2^{\alpha+1}}{\text{Ste}}\nu_{_{1h}}. \ee In addition, according to (\ref{FrontFijaConvectiva}), we have \be\label{3h} A_{_{1h}}\left( 1+2\text{Bi}\; \nu_{_{1h}} \right)+2B_{_{1h}}\left(1+\text{Bi}\; \nu_{_{1h}} \right)=2\text{Bi}\; \nu_{_{1h}}. \ee Thus, we have obtained three equations, (\ref{1h}), (\ref{2h}) and (\ref{3h}), for the three unknown coefficients $A_{_{1h}}$, $B_{_{1h}}$ and $\nu_{_{1h}}$. From (\ref{2h}) and (\ref{3h}) we obtain that $A_{_{1h}}$ and $B_{_{1h}}$ are given by (\ref{A1h}) and (\ref{B1h}), respectively. Then, equation (\ref{1h}) leads to $\nu_{_{1h}}$ as a positive solution to equation (\ref{EcNu1h}). If we denote by $\omega_{_{1h}}=\omega_{_{1h}}(z)$ the left-hand side of equation (\ref{EcNu1h}), we have \be \omega_{_{1h}}(0)=18\; \text{Ste}^2>0 \ee and \begin{align} \omega_{_{1h}}(1)&=-\alpha^2\left( 3\; 2^{\alpha}-2\text{Ste}+\tfrac{3}{\text{Bi}}2^{\alpha+1}\right)\text{Ste}-2\alpha\left(3\; 4^{\alpha}+4\text{Ste}^2+\tfrac{21}{\text{Bi}}2^{\alpha-1}-\tfrac{3}{\text{Bi}}2^{\alpha}\text{Ste} \right)\nonumber\\ &-2\left( 3\; 4^{\alpha}+3\; 2^{2+\alpha}\text{Ste}-4\text{Ste}^2\right)+\tfrac{3}{\text{Bi}}\left( 2^{2\alpha+3}-2^{3+\alpha}\text{Ste}\right). \end{align} It can be noticed that if $0<\text{Ste}<1$ and $\alpha\geq 0$, we have \begin{align*} & 3\; 2^{\alpha}-2\text{Ste}+\frac{3}{\text{Bi}}2^{\alpha+1}>0,\\ & 3\; 4^{\alpha}+3\; 2^{2+\alpha}\text{Ste}-4\text{Ste}^2>0, \end{align*} and $$3\; 4^{\alpha}+4\text{Ste}^2+\frac{21}{\text{Bi}}2^{\alpha-1}-\frac{3}{\text{Bi}}2^{\alpha}\text{Ste}=3\; 4^{\alpha}+4\text{Ste}^2+ \frac{3}{\text{Bi}} 2^{\alpha }\left(\frac{7}{2}-\text{Ste} \right)>0.$$ As $ 2^{2\alpha+3}-2^{3+\alpha}\text{Ste}=2^{\alpha}2^3 (2^\alpha-\text{Ste})>0$, there exists a large enough Biot number $\text{Bi}$ that makes $\omega_{_{1h}}(1)<0$. Consequently, there exists at least one solution to equation (\ref{EcNu1h}). \end{proof} With the aim of testing the accuracy of the classical heat balance integral method, and taking into account that the exact free boundary is given by $s_{h}(t)=2a\nu_{h}\sqrt{t}$ and the approximate one by $s_{_{1h}}(t)=2a\nu_{_{1h}}\sqrt{t}$, we compare graphically only the coefficient $\nu_{h}$ with $\nu_{_{1h}}$ for different values of $\text{Bi}$ and $\alpha$, fixing $\text{Ste}=0.5$ (see Figure \ref{Fig:Nu1hVsNuh}). \begin{figure}[h!!] \begin{center} \includegraphics[scale=0.4]{Nu1hVsNuh.eps} \end{center} \caption{{\footnotesize Plot of $\nu_{h}$ and $\nu_{_{1h}}$ against $\text{Bi}$ for $\alpha=1$ or 5 and $\text{Ste}=0.5$}} \label{Fig:Nu1hVsNuh} \end{figure} The modified integral balance method defines a new approximate problem for (P$_{h}$), which will be called \textbf{problem (P$_{_{2h}}$)} and which consists in finding the free boundary $s_{_{2h}}=s_{_{2h}}(t)$ and the temperature $T_{_{2h}}=T_{_{2h}}(x,t)$ in $0<x<s_{_{2h}}(t)$ such that equations (\ref{EcCalor:1F-pos-tempinfty-A}$^\star$), (\ref{FrontFijaConvectiva}), (\ref{TempFront:1F-pos-tempinfty-A})-(\ref{FrontInicial:1F-pos-tempinfty-A}) are satisfied.
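Before analysing problem (P$_{_{2h}}$), let us note that equation (\ref{EcNu1h}), although involved, is easy to solve numerically: by the proof above, its left-hand side $\omega_{_{1h}}$ is positive at $0$ and negative at $1$ when $\mathrm{Bi}$ is large enough, so any bracketing root finder applies. A minimal Python sketch (the parameter values are illustrative assumptions) could read:
\begin{verbatim}
from scipy.optimize import brentq

alpha, Ste, Bi = 0.0, 0.5, 10.0  # illustrative values

def w1h(z):
    # left-hand side of equation (EcNu1h)
    return (-3*2**(2*alpha+1)*(alpha-2)*z**(2*alpha+4)
            - 3*2**(2*alpha)/Bi*(5*alpha-7)*z**(2*alpha+3)
            - 3*2**(2*alpha+1)*((alpha-2)/Bi**2 + 3)*z**(2*alpha+2)
            - 9*2**(2*alpha)/Bi*z**(2*alpha+1)
            - 3*2**alpha*Ste*(alpha-3)*(alpha+1)*z**(alpha+4)
            - 3*2**(alpha+1)/Bi*Ste*(alpha-1)*(alpha+1)*z**(alpha+3)
            - 3*2**(alpha+1)*Ste*(alpha+7)*z**(alpha+2)
            + 3*2**(alpha+1)/Bi*Ste*(alpha-5)*z**(alpha+1)
            + 9*2**alpha*Ste*z**alpha
            + 2*Ste**2*(1+alpha)**2*z**4
            - 12*Ste**2*(alpha+1)*z**2
            + 18*Ste**2)

nu_1h = brentq(w1h, 1e-9, 1.0)  # w1h changes sign on (0, 1)
print(nu_1h)
\end{verbatim}
The value obtained in this way can be compared with the corresponding entry of Table \ref{Tabla:NuihVsNu1h} below.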
Once again assuming a quadratic profile in space as (\ref{Perfil}) for the temperature $T_{_{2h}}$, we can state the following result. \begin{teo}\label{TeoP2h} Given $\mathrm{Ste}>0$ and $\alpha\geq 0$, there exists a unique solution to the problem $\mathrm{(P_{_{2h}})}$, which is given by \begin{eqnarray} T_{_{2h}}(x,t)&=&t^{\alpha/2}\left[ A_{_{2h}}\theta_{_\infty}\left(1-\frac{x}{s_{_{2h}}(t)}\right) +B_{_{2h}}\theta_{_\infty} \left(1-\frac{x}{s_{_{2h}}(t)}\right)^{2}\right],\label{PerfilT2h} \\ s_{_{2h}}(t)&=& 2a\nu_{_{2h}} \sqrt{t}, \end{eqnarray} where the constants $A_{_{2h}}$ and $B_{_{2h}}$ are given by \begin{align} A_{_{2h}}&=\frac{6\mathrm{Ste}-2\mathrm{Ste}\; \nu^2_{_{2h}}(\alpha+1)-\frac{3}{\mathrm{Bi}} 2^{\alpha+1} \nu_{_{2h}}^{\alpha+1}-3\; 2^{\alpha+1}\nu_{_{2h}}^{\alpha+2} }{\mathrm{Ste}\left[\nu_{_{2h}}^2 (\alpha+1)+\frac{2}{\mathrm{Bi}} \nu_{_{2h}}(\alpha+1)+3 \right]},\qquad\label{A2h} \\[0.2cm] B_{_{2h}}&= \frac{-3\mathrm{Ste}+3\mathrm{Ste}\; \nu_{_{2h}}^2(\alpha+1)+\frac{3}{\mathrm{Bi}} 2^{\alpha} \nu_{_{2h}}^{\alpha+1}+3\;2^{\alpha+1} \nu_{_{2h}}^{\alpha+2} }{\mathrm{Ste}\left[\nu_{_{2h}}^2 (\alpha+1)+\frac{2}{\mathrm{Bi}} \nu_{_{2h}}(\alpha+1)+3 \right]},\label{B2h} \end{align} and where the coefficient $\nu_{_{2h}}$ is the unique solution to the following equation \begin{align} &z^{\alpha+4} 2^{\alpha} (\alpha+1)+z^{\alpha+3}\; \tfrac{2^{\alpha+1}}{\mathrm{Bi}}(\alpha+1)+z^{\alpha+2}3\; 2^{\alpha+1}\nonumber\\ &+z^{\alpha+1}3\tfrac{2^{\alpha}}{\mathrm{Bi}}+z^2 \mathrm{Ste} (\alpha+1)-3\mathrm{Ste}=0, \qquad z>0. \label{EcNu2h} \end{align} \end{teo} \begin{proof} It is immediate that the chosen temperature profile makes condition (\ref{TempFront:1F-pos-tempinfty-A}) hold automatically. From condition (\ref{CondStefan:1F-pos-tempinfty-A}) we obtain \be -kt^{\alpha/2}\theta_{_\infty}\frac{A_{_{2h}}}{s_{_{2h}}(t)}=-\gamma s_{_{2h}}^{\alpha}(t)\dot s_{_{2h}}(t). \ee Therefore \be s_{_{2h}}(t)=\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{_{2h}}\right)^{1/(\alpha+2)} \sqrt{t}. \ee Introducing the new coefficient $\nu_{_{2h}}$ such that $ \nu_{_{2h}}=\frac{1}{2a}\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{_{2h}}\right)^{1/(\alpha+2)}$, the free boundary can be expressed as \be s_{_{2h}}(t)=2a\; \nu_{_{2h}}\sqrt{t}, \ee where the following equality holds \be\label{Ec1:P2h} A_{_{2h}}=\frac{2^{\alpha+1} \nu_{_{2h}}^{\alpha+2}}{\text{Ste}} . \ee The convective boundary condition at $x=0$, i.e. condition (\ref{FrontFijaConvectiva}), leads to \be \label{Ec2:P2h} A_{_{2h}}(1+2\text{Bi}\; \nu_{_{2h}})+2B_{_{2h}}(1+\text{Bi}\;\nu_{_{2h}})=2\text{Bi}\;\nu_{_{2h}}. \ee In addition, from (\ref{EcCalor:1F-pos-tempinfty-A}$^{\star}$) it results that \be \label{Ec3_P2h} A_{_{2h}}\left( (\alpha+1)\nu_{_{2h}}^2-1 \right) +B_{_{2h}} \left( \tfrac{2}{3} (\alpha+1)\nu_{_{2h}}^2-2\right)=\tfrac{-2^{\alpha+1}\nu_{_{2h}}^{\alpha+2}}{\text{Ste}}. \ee Taking into account equations (\ref{Ec1:P2h})-(\ref{Ec3_P2h}), we obtain that $A_{_{2h}}$ and $B_{_{2h}}$ can be given as functions of $\nu_{_{2h}}$ through formulas (\ref{A2h}) and (\ref{B2h}), respectively. Moreover, we get that $\nu_{_{2h}}$ must be a solution to equation (\ref{EcNu2h}). To finish the proof it remains to show that equation (\ref{EcNu2h}) has a unique positive solution.
If we define the function $\omega_{_{2h}}=\omega_{_{2h}}(z)$ as the left-hand side of equation (\ref{EcNu2h}), we have that $$\omega_{_{2h}}(0)=-3\text{Ste}<0,\qquad \omega_{_{2h}}(+\infty)=+\infty, \qquad \frac{d \omega_{_{2h}}}{d z}(z)>0,\quad \forall z>0.$$ So we conclude that $\omega_{_{2h}}$ has a unique positive root. \end{proof} In what follows, we will show that the unique solution to the problem (P$_{_{2h}}$) converges to the unique solution to the problem (P$_{_2}$) when $h\to \infty$. \begin{teo}\label{ConvergenciaP2h} The solution to problem (P$_{_{2h}}$) given in Theorem \ref{TeoP2h} converges to the solution to problem (P$_{_2}$) given by Theorem \ref{TeoP2} when the coefficient $h$, which characterizes the heat transfer at the fixed boundary, goes to infinity. \end{teo} \begin{proof} The free boundary of the problem (P$_{_{2h}}$) is characterized by a dimensionless coefficient $\nu_{_{2h}}$ which is the unique positive root of the function $\omega_{_{2h}}=\omega_{_{2h}}(z)$ defined as the left-hand side of equation (\ref{EcNu2h}). On the one hand, we can notice that if $h_1<h_2$ then $\omega_{\mathrm{2h_1}}(z)>\omega_{\mathrm{2h_2}}(z)$, and consequently their unique positive roots satisfy $\nu_{_\mathrm{2h_1}}<\nu_{_\mathrm{2h_2}}$.\\ On the other hand, if we define $\omega_{_2}=\omega_{_2}(z)$ as the left-hand side of equation (\ref{EcNu2}), we get $$\omega_{_{2h}}(z)-\omega_{_2}(z)= z^{\alpha+3}\frac{2^{\alpha+1}}{\mathrm{Bi}}(\alpha+1)+z^{\alpha+1}3\frac{2^{\alpha}}{\mathrm{Bi}}>0, \quad \forall z>0.$$ Therefore $\lbrace \nu_{_{2h}}\rbrace_{h}$ is increasing and bounded from above by $\nu_{_2}$. In addition, it is easily seen that when $h\to\infty$, or equivalently when $\text{Bi}\to\infty$, we obtain $\omega_{_{2h}}\to\omega_{_2}$ and so $\nu_{_{2h}}\to\nu_{_2}$. Therefore $s_{_{2h}}(t)\to s_{_2}(t)$ for every $t>0$. Showing that $A_{_{2h}}\to A_{_2}$ and $B_{_{2h}}\to B_{_2}$, we get $T_{_{2h}}(x,t)\to T_{_2}(x,t)$ when $h\to\infty$ for every $t>0$ and $0<x<s_{_2}(t)$. \end{proof} In Figure \ref{Fig:Nu2hVsNuh} we compare graphically, for different values of $\text{Bi}>1$ and $\alpha$, fixing $\text{Ste}=0.5$, the coefficient $\nu_{_{2h}}$ that characterizes the free boundary $s_{_{2h}}$ with the coefficient $\nu_{h}$ that characterizes the exact free boundary $s_{h}$. We shall notice that when the Biot number increases, the value of $\nu_{_{2h}}$ gets closer to the value of $\nu_{_2}$. \begin{figure}[h!!] \begin{center} \includegraphics[scale=0.4]{Nu2hVsNuh.eps} \end{center} \caption{{\footnotesize Plot of $\nu_{h}$ and $\nu_{_{2h}}$ against $\text{Bi}$ for $\alpha=1$ or 5 and $\text{Ste}=0.5$}}\label{Fig:Nu2hVsNuh} \end{figure} \newpage Lastly, we turn to the refined integral method applied to problem (P$_{h}$). We define a new approximate \textbf{problem (P$_{_{3h}}$)}, which consists in finding the free boundary $s_{_{3h}}=s_{_{3h}}(t)$ and the temperature $T_{_{3h}}=T_{_{3h}}(x,t)$ in $0<x<s_{_{3h}}(t)$ such that equations (\ref{EcCalor:1F-pos-tempinfty-A}$^\dag$), (\ref{FrontFijaConvectiva}), (\ref{TempFront:1F-pos-tempinfty-A})-(\ref{FrontInicial:1F-pos-tempinfty-A}) are verified.
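In connection with Theorem \ref{ConvergenciaP2h}, the monotone behaviour of $\nu_{_{2h}}$ with respect to the Biot number is also easy to observe numerically, since the left-hand side of (\ref{EcNu2h}) is increasing in $z$ and decreasing in $\mathrm{Bi}$. A minimal Python sketch (the parameter values are illustrative assumptions):
\begin{verbatim}
from scipy.optimize import brentq

alpha, Ste = 0.0, 0.5  # illustrative values

def w2h(z, Bi):
    # left-hand side of equation (EcNu2h)
    return (2**alpha*(alpha+1)*z**(alpha+4)
            + 2**(alpha+1)/Bi*(alpha+1)*z**(alpha+3)
            + 3*2**(alpha+1)*z**(alpha+2)
            + 3*2**alpha/Bi*z**(alpha+1)
            + Ste*(alpha+1)*z**2 - 3*Ste)

# the unique positive root increases with Bi, approaching nu_2
for Bi in (1.0, 10.0, 100.0, 1000.0, 10000.0):
    print(Bi, brentq(lambda z: w2h(z, Bi), 1e-9, 2.0))
\end{verbatim}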
Provided that $T_{_{3h}}$ adopts a profile like (\ref{Perfil}), we state the following theorem. \begin{teo}\label{TeoP3h} Let $0<\mathrm{Ste}<1$, $\alpha\geq 0$ and $\mathrm{Bi}> 0$. Then there exists a unique solution to problem $\mathrm{(P_{_{3h}})}$ which is given by \begin{eqnarray} T_{_{3h}}(x,t)&=&t^{\alpha/2}\left[ A_{_{3h}}\theta_{_\infty}\left(1-\frac{x}{s_{_{3h}}(t)}\right) +B_{_{3h}}\theta_{_\infty} \left(1-\frac{x}{s_{_{3h}}(t)}\right)^{2}\right],\label{PerfilT3h} \\ s_{_{3h}}(t)&=& 2a\nu_{_{3h}} \sqrt{t}, \end{eqnarray} where the constants $A_{_{3h}}$ and $B_{_{3h}}$ are defined by \begin{align} A_{_{3h}}&=\frac{12\nu_{_{3h}} \left(1-\nu_{_{3h}}^2 \left( \frac{\alpha}{2}+\frac{1}{3}\right) \right) }{2\alpha \nu_{_{3h}}^3+\left( \frac{5\alpha+2}{\mathrm{Bi}}\right)\nu_{_{3h}}^2+\frac{6}{\mathrm{Bi}}+12\nu_{_{3h}} },\label{A3h}\\ B_{_{3h}}&=\frac{12\nu_{_{3h}}^3 \left(\frac{2}{3}\alpha+\frac{1}{3}\right) }{2\alpha \nu_{_{3h}}^3+\left( \frac{5\alpha+2}{\mathrm{Bi}}\right)\nu_{_{3h}}^2+\frac{6}{\mathrm{Bi}}+12\nu_{_{3h}} },\label{B3h} \end{align} and where $\nu_{_{3h}}$ is the unique solution to the following equation \begin{align} &z^{\alpha+4} 2^{\alpha+1} \alpha+z^{\alpha+3}\left(\tfrac{2^{\alpha}(2+5\alpha)}{\mathrm{Bi}}\right) +z^{\alpha+2}3\; 2^{\alpha+2}+z^{\alpha+1} \tfrac{3\; 2^{\alpha+1}}{\mathrm{Bi}}\nonumber\\ &+z^2 \mathrm{Ste} (2+3\alpha)-6\mathrm{Ste}=0, \qquad z>0. \label{EcNu3h} \end{align} \end{teo} \begin{proof} The proof is similar to the one given in Theorem \ref{TeoP2h}. The only difference lies in the fact that equation (\ref{EcCalor:1F-pos-tempinfty-A}$^{\dag}$) is equivalent to \be \nu_{_{3h}}^2 \left[A_{_{3h}}\left( \tfrac{1}{3}+\tfrac{2}{3}\alpha\right)+B_{_{3h}}\left( \tfrac{1}{3}+\tfrac{\alpha}{2}\right) \right]=B_{_{3h}}. \ee \end{proof} The approximate problem (P$_{_{3h}}$), obtained when applying the refined integral method, verifies the same convergence property as the exact problem (P$_{h}$). \begin{teo} \label{ConvergenciaP3h} The unique solution to problem (P$_{_{3h}}$) given by Theorem \ref{TeoP3h} converges to the unique solution to problem (P$_{_3}$), given by Theorem \ref{TeoP3}, when the coefficient $h$ that characterizes the heat transfer at the fixed face goes to infinity. \end{teo} \begin{proof} The proof is analogous to the proof given in Theorem \ref{ConvergenciaP2h}. \end{proof} \medskip In Figure \ref{Fig:Nu3hVsNuh} we compare graphically, for different values of $\text{Bi}>1$ and $\alpha$, fixing $\text{Ste}=0.5$, the coefficient $\nu_{_{3h}}$ that characterizes the approximate free boundary $s_{_{3h}}$ with the coefficient $\nu_{h}$ corresponding to the exact free boundary $s_{h}$. Once again, as $\text{Bi}$ increases, the value $\nu_{_{3h}}$ becomes closer to the value $\nu_{_3}$. \begin{figure}[h!!] \begin{center} \includegraphics[scale=0.4]{Nu3hVsNuh.eps} \end{center} \caption{{\footnotesize Plot of $\nu_{h}$ and $\nu_{_{3h}}$ against $\text{Bi}$ for $\alpha=1$ or 5 and $\text{Ste}=0.5$}}\label{Fig:Nu3hVsNuh} \end{figure} \newpage \subsection{Comparisons between the approximate solutions and the exact one} \medskip In this section we are going to compare the exact solution to the problem with a convective condition at the fixed face (P$_{h}$) with the approximate solutions obtained by applying the integral balance methods proposed in the previous sections.
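All three approximate coefficients are roots of explicitly given equations, so the comparisons below can be reproduced with a few lines of code. For instance, the coefficient $\nu_{_{3h}}$ solving (\ref{EcNu3h}) and its percentage relative error can be computed as follows (a minimal Python sketch; the value of $\nu_{h}$, which must be obtained from equation (\ref{EcNuh}), is an illustrative assumption taken from Table \ref{Tabla:NuihVsNu1h}):
\begin{verbatim}
from scipy.optimize import brentq

alpha, Ste, Bi = 0.0, 0.5, 10.0  # illustrative values

def w3h(z):
    # left-hand side of equation (EcNu3h)
    return (2**(alpha+1)*alpha*z**(alpha+4)
            + 2**alpha*(2+5*alpha)/Bi*z**(alpha+3)
            + 3*2**(alpha+2)*z**(alpha+2)
            + 3*2**(alpha+1)/Bi*z**(alpha+1)
            + Ste*(2+3*alpha)*z**2 - 6*Ste)

nu_3h = brentq(w3h, 1e-9, 2.0)
nu_h = 0.4422  # exact coefficient for these data (illustrative assumption)
E_rel = 100*abs((nu_h - nu_3h)/nu_h)
print(nu_3h, E_rel)
\end{verbatim}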
For each method we have defined a new problem (P$_{_{ih}}$), $i=1,2,3$, and we have compared graphically the coefficient $\nu_{_{ih}}$ that characterizes each free boundary $s_{_{ih}}$ with the coefficient $\nu_{h}$ that corresponds to the exact free boundary $s_{h}$. The goal is to compare numerically the coefficient $\nu_{h}$ given by (\ref{EcNuh}) with the approximate coefficients $\nu_{_{1h}}$, $\nu_{_{2h}}$ and $\nu_{_{3h}}$ given by (\ref{EcNu1h}), (\ref{EcNu2h}) and (\ref{EcNu3h}), respectively. In order to make the comparisons more representative, in Tables \ref{Tabla:NuihVsNu1h}-\ref{Tabla:NuihVsNu3h} we show the exact value $\nu_{h}$, the approximate values $\nu_{_{ih}}$ and the percentage relative error committed in each case, $E_{\text{rel}}(\nu_{_{ih}})=100\left\vert\frac{\nu_{h}-\nu_{_{ih}}}{\nu_{h}} \right\vert$, $i=1,2,3$, for different values of $\text{Bi}$ and $\alpha$, fixing $\text{Ste}=0.5$.
\begin{table} \small \caption{{\footnotesize Dimensionless coefficients of the free boundaries and their percentage relative error for $\alpha=0$ and $\text{Ste}=0.5$.}} \label{Tabla:NuihVsNu1h} \begin{center} \begin{tabular}{cc|cc|cc|cc} \hline \text{Bi} & $\nu_{h}$ & $\nu_{_{1h}}$ & $E_{\text{rel}}(\nu_{_{1h}})$ & $ \nu_{_{2h}} $ & $E_{\text{rel}}(\nu_{_{2h}})$ & $\nu_{_{3h}}$ & $E_{\text{rel}}(\nu_{_{3h}})$ \\ \hline 1& 0.2926 & 0.2966 & 1.3828 \% & 0.2937 & 0.3939 \% & 0.2899 & 0.9103 \% \\ 10 & 0.4422 & 0.4681 & 5.8548 \% & 0.4484 & 1.4111 \% & 0.4545 & 2.7969 \% \\ 20 & 0.4533 & 0.4776 & 5.3525 \% & 0.4602 & 1.5151 \% & 0.4672 & 3.0744 \% \\ 30 & 0.4571 & 0.4807 & 5.1622 \% & 0.4642 & 1.5514 \% & 0.4716 & 3.1679 \% \\ 40 & 0.4590 & 0.4822 & 5.0628 \% & 0.4662 & 1.5699 \% & 0.4738 & 3.2148 \% \\ 50 & 0.4601 & 0.4832 & 5.0019 \% & 0.4674 & 1.5811 \% & 0.4751 & 3.2430 \% \\ 60 & 0.4609 & 0.4838 & 4.9606 \% & 0.4682 & 1.5886 \% & 0.4759 & 3.2618 \% \\ 70 & 0.4615 & 0.4842 & 4.9309 \% & 0.4688 & 1.5940 \% & 0.4766 & 3.2752 \% \\ 80 & 0.4619 & 0.4845 & 4.9085 \% & 0.4693 & 1.5980 \% & 0.4771 & 3.2853 \% \\ 90 & 0.4622 & 0.4848 & 4.8909 \% & 0.4696 & 1.6012 \% & 0.4774 & 3.2932 \% \\ 100 & 0.4625 & 0.4850 & 4.8768 \% & 0.4699 & 1.6037 \% & 0.4777 & 3.2994 \% \\ \hline \end{tabular} \end{center} \end{table}
\begin{table} \small \caption{{\footnotesize Dimensionless coefficients of the free boundaries and their percentage relative error for $\alpha=5$ and $\text{Ste}=0.5$.}} \label{Tabla:NuihVsNu2h} \begin{center} \begin{tabular}{cc|cc|cc|cc} \hline \text{Bi} & $\nu_{h}$ & $\nu_{_{1h}}$ & $E_{\text{rel}}(\nu_{_{1h}})$ & $ \nu_{_{2h}} $ & $E_{\text{rel}}(\nu_{_{2h}})$ & $\nu_{_{3h}}$ & $E_{\text{rel}}(\nu_{_{3h}})$ \\ \hline 1& 0.3274 & 0.3293 & 0.5908 \% & 0.3280 & 0.1779 \% & 0.3160 & 3.4746 \% \\ 10 & 0.4459 & 0.4551 & 2.0484 \% & 0.4480 & 0.4543 \% & 0.4474 & 0.3370 \% \\ 20 & 0.4553 & 0.4631 & 1.7173 \% & 0.4574 & 0.4798 \% & 0.4583 & 0.6724 \% \\ 30 & 0.4585 & 0.4657 & 1.5912 \% & 0.4607 & 0.4886 \% & 0.4621 & 0.7874 \% \\ 40 & 0.4601 & 0.4671 & 1.5250 \% & 0.4623 & 0.4931 \% & 0.4640 & 0.8456 \% \\ 50 & 0.4610 & 0.4679 & 1.4844 \% & 0.4633 & 0.4958 \% & 0.4651 & 0.8807 \% \\ 60 & 0.4617 & 0.4684 & 1.4569 \% & 0.4640 & 0.4976 \% & 0.4659 & 0.9042 \% \\ 70 & 0.4622 & 0.4688 & 1.4370 \% & 0.4645 & 0.4989 \% & 0.4664 & 0.9210 \% \\ 80 & 0.4625 & 0.4691 & 1.4220 \% & 0.4648 & 0.4999 \% & 0.4668 & 0.9336 \% \\ 90 & 0.4628 & 0.4693 & 1.4103 \% & 0.4651 & 0.5006 \% & 0.4672 & 0.9434 \% \\ 100 & 0.4630 & 0.4695 & 1.4009 \% & 0.4653 & 0.5012 \% & 0.4674 & 0.9513 \% \\ \hline \end{tabular} \end{center} \end{table}
\begin{table} \small \caption{{\footnotesize Dimensionless coefficients of the free boundaries and their percentage relative error for $\alpha=0.5$ and $\text{Ste}=0.5$.}} \label{Tabla:NuihVsNu3h} \begin{center} \begin{tabular}{cc|cc|cc|cc} \hline \text{Bi} & $\nu_{h}$ & $\nu_{_{1h}}$ & $E_{\text{rel}}(\nu_{_{1h}})$ & $ \nu_{_{2h}} $ & $E_{\text{rel}}(\nu_{_{2h}})$ & $\nu_{_{3h}}$ & $E_{\text{rel}}(\nu_{_{3h}})$ \\ \hline 1 & 0.4073 & 0.3834 & 5.8702 \% & 0.4005 & 1.6647 \% & 0.3730 & 8.4069 \% \\ 10 & 0.4569 & 0.4170 & 8.7307 \% & 0.4437 & 2.8806 \% & 0.4259 & 6.7799 \% \\ 20 & 0.4616 & 0.4203 & 8.9507 \% & 0.4476 & 3.0301 \% & 0.4315 & 6.5196 \% \\ 30 & 0.4632 & 0.4214 & 9.0256 \% & 0.4489 & 3.0845 \% & 0.4335 & 6.4217 \% \\ 40 & 0.4641 & 0.4220 & 9.0633 \% & 0.4496 & 3.1126 \% & 0.4345 & 6.3703 \% \\ 50 & 0.4646 & 0.4224 & 9.0861 \% & 0.4501 & 3.1298 \% & 0.4351 & 6.3387 \% \\ 60 & 0.4649 & 0.4226 & 9.1012 \% & 0.4503 & 3.1414 \% & 0.4356 & 6.3173 \% \\ 70 & 0.4652 & 0.4228 & 9.1121 \% & 0.4505 & 3.1497 \% & 0.4359 & 6.3018 \% \\ 80 & 0.4654 & 0.4229 & 9.1203 \% & 0.4507 & 3.1560 \% & 0.4361 & 6.2901 \% \\ 90 & 0.4655 & 0.4230 & 9.1266 \% & 0.4508 & 3.1609 \% & 0.4363 & 6.2809 \% \\ 100 & 0.4656 & 0.4231 & 9.1317 \% & 0.4509 & 3.1649 \% & 0.4364 & 6.2736 \% \\ \hline \end{tabular} \end{center} \end{table}
\newpage
From the above tables we can deduce that for $\alpha=0.5$ the percentage error committed is smaller than for the other cases. In all cases, as it happened with the problem (P), the method with the best accuracy for approximating the problem (P$_{h}$) is the modified integral method, i.e., the best approximate problem is given by (P$_{_{2h}}$). We can also compare the exact temperature $T_{h}$ with the approximate ones $T_{_{ih}}$, $i=1,2,3$, given by (\ref{PerfilT1h}), (\ref{PerfilT2h}) and (\ref{PerfilT3h}), respectively. In Figures \ref{Fig:ExactaConv}-\ref{Fig:RIMConv} we show colour maps for $\alpha=5$, $\text{Ste}=0.5$, $\theta_{_\infty}=30$ and $a=1$.
\begin{figure}[h!]\centering \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{ExactaConv.eps} \caption{{\footnotesize Colour map for $T_{h}$}} \end{center} \label{Fig:ExactaConv} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{BICConv.eps} \caption{{\footnotesize Colour map for $T_{_{1h}}$}} \end{center} \label{Fig:BICConv} \end{minipage} \end{figure} \begin{figure}[h!]\centering \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{BIMConv.eps} \caption{{\footnotesize Colour map for $T_{_{2h}}$}} \end{center} \label{Fig:BIMConv} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{center} \includegraphics[scale=0.3]{RIMConv.eps} \caption{{\footnotesize Colour map for $T_{_{3h}}$}} \end{center} \label{Fig:RIMConv} \end{minipage} \end{figure}
\newpage \section{Minimising the least-squares error in the heat balance integral method}
In this section, we are going to analyse the least-squares error that we commit when assuming a quadratic profile in space.
If we have an approximate solution to the heat equation given by $\hat{T}$, $\hat{s}$ such that \begin{equation}\label{PerfilQuadTemp} \hat{T}(x,t)=t^{\alpha/2}\theta_{\infty}\left[\hat{A}\left( 1-\frac{x}{\hat{s}(t)}\right)+\hat{B}\left( 1-\frac{x}{\hat{s}(t)}\right)^2\right], \end{equation} with adequate coefficients $\hat{A}$, $\hat{B}$ and $\hat{s}$, then we can measure how far we are from the heat equation by computing the least-squares error (see \cite{RiMyMc2019}) given by \begin{equation}\label{ErrorTemp} E=\int\limits_0^{\hat{s}(t)} \left(\pder[\hat{T}]{t}(x,t)-a^2\pder[^2 \hat{T}]{x^2}(x,t) \right)^2 d x. \end{equation} Taking into account that \begin{align} \pder[\hat{T}]{t}(x,t)&=\frac{\alpha}{2} t^{\alpha/2-1} \theta_{_\infty}\left[ \hat{A}\left(1-\frac{x}{\hat{s}(t)}\right)+\hat{B}\left(1-\frac{x}{\hat{s}(t)}\right)^2\right]\nonumber\\ & +t^{\alpha/2} \frac{\dot{\hat{s}}(t) }{\hat{s}^2(t)} x \theta_{_\infty} \left[ \hat{A}+2\hat{B}\left(1-\frac{x}{\hat{s}(t)}\right)\right] \end{align} and \begin{align} \pder[^2\hat{T}]{x^2}(x,t)=t^{\alpha/2}\frac{2\hat{B}\theta_{_\infty}}{\hat{s}^2(t)}, \end{align} we get \begin{align} E&=\tfrac{\alpha^2}{4} \theta_{_\infty}^2 t^{\alpha-2}\left(\tfrac{\hat{A}^2}{3}+\tfrac{\hat{A}\hat{B}}{2}+\tfrac{\hat{B}^2}{5} \right)+t^{\alpha} \theta_{_\infty}^2 \tfrac{\dot{\hat{s}}^2(t)}{\hat{s}^2(t)} \left( \tfrac{\hat{A}^2}{3}+\tfrac{\hat{A} \hat{B}}{3}+\tfrac{2\hat{B}^2}{15}\right)\nonumber\\ &+ 4t^{\alpha}a^4 \theta_{_\infty}^2 \tfrac{\hat{B}^2}{\hat{s}^4(t)}+\alpha \theta_{_\infty}^2 t^{\alpha-1}\tfrac{\dot{\hat{s}}(t)}{\hat{s}(t)} \left( \tfrac{\hat{A}^2}{6}+\tfrac{\hat{A} \hat{B}}{4}+\tfrac{\hat{B}^2}{10}\right)-2\alpha a^2\theta_{_\infty}^2 t^{\alpha-1} \tfrac{\hat{B}}{\hat{s}^2(t)}\left( \tfrac{\hat{A}}{2}+\tfrac{\hat{B}}{3}\right)\nonumber\\ &-4a^2 \theta_{_\infty}^2 t^{\alpha} \tfrac{\dot{\hat{s}}(t)}{\hat{s}^3(t)} \hat{B}\left(\tfrac{\hat{A}}{2}+\tfrac{\hat{B}}{3}\right).\label{ErrorCuadraticoGenerico} \end{align} In case the free boundary is of the form $\hat{s}(t)=2a\xi\sqrt{t}$ with $\xi>0$, simple computations show that the least-squares error becomes $E=E(\xi)$, given by the following expression: \begin{align} E(\xi)& =t^{\alpha-2} \tfrac{\theta_{_\infty}^2}{\xi^4} \left[ \tfrac{\xi^4}{4}\left( \alpha^2 \left( \tfrac{\hat{A}^2}{3}+\tfrac{\hat{A}\hat{B}}{2}+\tfrac{\hat{B}^2}{5} \right)+2\alpha \left(\tfrac{\hat{A}^2}{6}+\tfrac{\hat{A}\hat{B}}{4}+\tfrac{\hat{B}^2}{10} \right)+\tfrac{\hat{A}^2}{3}+\tfrac{\hat{A}\hat{B}}{3}+\tfrac{2\hat{B}^2}{15} \right)\right.\nonumber\\ &\left.-\tfrac{\xi^2}{2}\hat{B}(\alpha+1) \left( \tfrac{\hat{A}}{2}+\tfrac{\hat{B}}{3}\right)+\tfrac{\hat{B}^2}{4} \right].\label{ErrorCuadraticoGenerico2} \end{align} Let us then define a new approximate \textbf{problem (P$_{_4}$)} for the problem (P), which consists in finding the free boundary $s_{_4}=s_{_4}(t)$ and the temperature $T_{_4}=T_{_4}(x,t)$ in the domain $0<x<s_{_4}(t)$, given by the profile (\ref{PerfilQuadTemp}), such that they minimize the least-squares error (\ref{ErrorCuadraticoGenerico}) subject to the conditions (\ref{FrontFija:1F-pos-tempinfty-A}), (\ref{TempFront:1F-pos-tempinfty-A}), (\ref{CondStefan:1F-pos-tempinfty-A}) and (\ref{FrontInicial:1F-pos-tempinfty-A}).
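The minimization defining problem (P$_{_4}$) can be carried out numerically. As will be shown in the proof of Theorem \ref{TeoP4} below, the Stefan condition and the condition at the fixed face force $\hat{A}=\frac{2^{\alpha+1}\xi^{\alpha+2}}{\mathrm{Ste}}$ and $\hat{B}=1-\hat{A}$, so that (\ref{ErrorCuadraticoGenerico2}) reduces to a function of $\xi$ alone. A minimal Python sketch (the parameter values are illustrative assumptions):
\begin{verbatim}
from scipy.optimize import minimize_scalar

alpha, Ste = 0.0, 0.5  # illustrative values

def E_profile(xi):
    # xi-dependent factor of (ErrorCuadraticoGenerico2); the positive
    # prefactor t^(alpha-2)*theta_inf^2 does not affect the minimizer
    A = 2**(alpha+1)*xi**(alpha+2)/Ste  # from the Stefan condition
    B = 1.0 - A                         # from the condition at x = 0
    c4 = (alpha**2*(A**2/3 + A*B/2 + B**2/5)
          + 2*alpha*(A**2/6 + A*B/4 + B**2/10)
          + A**2/3 + A*B/3 + 2*B**2/15)
    return (xi**4/4*c4 - xi**2/2*B*(alpha+1)*(A/2 + B/3) + B**2/4)/xi**4

res = minimize_scalar(E_profile, bounds=(1e-3, 1.0), method='bounded')
print(res.x)  # the minimizer nu_4
\end{verbatim}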
\begin{teo}\label{TeoP4} If a free boundary $s_{_4}$ and a temperature $T_{_4}$ constitute a solution to problem (P$_{_4}$), then they are given by the expressions: \begin{eqnarray} T_{_4}(x,t)&=&t^{\alpha/2}\theta_{_\infty} \left[ A_{_4}\left(1-\frac{x}{s_{_{4}}(t)}\right) +B_{_{4}} \left(1-\frac{x}{s_{_{4}}(t)}\right)^{2}\right],\label{PerfilT4} \\ s_{_{4}}(t)&=& 2a\nu_{_{4}} \sqrt{t}, \end{eqnarray} where the constants $A_{_{4}}$ and $B_{_{4}}$ are defined as functions of $\nu_{_{4}}$ as \begin{align} \label{A4} A_{_4}=\frac{2^{\alpha+1} \nu_{_4}^{\alpha+2} }{\mathrm{Ste}}, \qquad \qquad B_{_4}= 1- \frac{2^{\alpha+1} \nu_{_4}^{\alpha+2} }{\mathrm{Ste}}, \end{align} and where $\nu_{_{4}}>0$ must minimize, for every $t>0$, the function \begin{align} E(\xi)=\frac{t^{\alpha-2} \theta_{_\infty}^2}{60 \mathrm{Ste}^2} \frac{p(\xi)}{\xi^4},\quad \forall t>0, \label{EcNu4} \end{align} with \begin{align} p(\xi)&= \xi^{8+2\alpha} 2^{2\alpha+1}(\alpha^2+\alpha+4)+ 5 \;\xi^{2\alpha+6} 2^{2\alpha+2}(1+\alpha)+ 15\;\xi^{2\alpha+4} 2^{2\alpha+2}\nonumber\\ &+\xi^{\alpha+6} 2^{\alpha} \mathrm{Ste} (2+3\alpha+3\alpha^2)+5\; \xi^{\alpha+4} 2^{\alpha+1}\mathrm{Ste} (1+\alpha)\nonumber\\ &-15\; \xi^{2+\alpha} 2^{\alpha+2} \mathrm{Ste}+\xi^4 \mathrm{Ste}^2 (2+3\alpha+3\alpha^2)\nonumber\\ &-10\xi^2\mathrm{Ste}^2 (1+\alpha)+15\mathrm{Ste}^2.\label{p4} \end{align} \end{teo} \begin{proof} Provided that $T_{_4}$ adopts a quadratic profile in space given by (\ref{PerfilT4}), the condition (\ref{TempFront:1F-pos-tempinfty-A}) holds immediately and the Stefan condition (\ref{CondStefan:1F-pos-tempinfty-A}) becomes equivalent to \be -kt^{\alpha/2}\theta_{_\infty}\frac{A_{_4}}{s_{_4}(t)}=-\gamma s_{_4}^{\alpha}(t)\dot s_{_4}(t). \ee Then \be s_{_4}(t)=\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{_4}\right)^{1/(\alpha+2)} \sqrt{t}. \ee Introducing $\nu_{_4}=\frac{1}{2a}\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{_4}\right)^{1/(\alpha+2)}$, the free boundary becomes \be s_{_4}(t)=2a\; \nu_{_4}\sqrt{t},\label{s4} \ee and \be\label{RelA4} A_{_4}=\frac{2^{\alpha+1} \nu_{_4}^{\alpha+2}}{\text{Ste}} . \ee In addition, from the boundary condition at the fixed face (\ref{FrontFija:1F-pos-tempinfty-A}) we get \be \label{RelA4B4-2} A_{_4}+B_{_4}=1. \ee Then we obtain formulas (\ref{A4}) for the coefficients $A_{_4}$ and $B_{_4}$. Finally, as the free boundary $s_{_4}$ is defined by (\ref{s4}), we have to minimize the least-squares error $E$ given by (\ref{ErrorCuadraticoGenerico2}). In addition, replacing $A_{_4}$ and $B_{_4}$ by the formulas given in (\ref{A4}), we get that $\nu_{_4}$ must minimize (\ref{EcNu4}). \end{proof} \begin{corollary} \label{CorolarioAlpha0-Temp} For the classical Stefan problem, i.e. for the case $\alpha=0$, we get that problem (P$_{_4}$) has a unique solution given by \begin{eqnarray} T_{_{4}}^{(0)}(x,t)&=&\theta_{_\infty} \left[ A_{_4}^{(0)}\left(1-\frac{x}{s_{_{4}}^{(0)}(t)}\right) +B_{_{4}}^{(0)} \left(1-\frac{x}{s_{_{4}}^{(0)}(t)}\right)^{2}\right],\label{PerfilT40} \\ s_{_{4}}^{(0)}(t)&=& 2a\nu_{_{4}}^{(0)} \sqrt{t}, \end{eqnarray} where the superscript $(0)$ refers to the value $\alpha=0$ and the constants $A_{_{4}}^{(0)}$ and $B_{_{4}}^{(0)}$ are defined as functions of $\nu_{_{4}}^{(0)}$ as \begin{align} \label{A40} A_{_4}^{(0)}=\frac{2 (\nu_{_4}^{(0)})^{2} }{\mathrm{Ste}}, \qquad \qquad B_{_4}^{(0)}= 1- \frac{2 (\nu_{_4}^{(0)})^{2} }{\mathrm{Ste}}, \end{align} where $\nu_{_{4}}^{(0)}>0$ is the value at which the function $E^{(0)}$ attains its minimum, \begin{align} E^{(0)}(\xi)=\frac{t^{-2} \theta_{_\infty}^2}{60 \mathrm{Ste}^2} \frac{p^{(0)}(\xi)}{\xi^4},\quad \forall t>0, \label{EcNu40} \end{align} with \begin{align} p^{(0)}(\xi)&=8\xi^8+2(10+\mathrm{Ste})\xi^6+2(30+5\mathrm{Ste}+\mathrm{Ste}^2)\xi^4\nonumber\\ &-10\mathrm{Ste}(6+\mathrm{Ste})\xi^2+15\mathrm{Ste}^2. \label{p40} \end{align} In addition, $\nu_{_4}^{(0)}$ can be obtained as the unique positive root of the following real polynomial: \begin{equation} r(\xi)=32\xi^8+4(10+\mathrm{Ste})\xi^6+20\mathrm{Ste}(6+\mathrm{Ste})\xi^2-60\mathrm{Ste}^2.\label{r} \end{equation} \end{corollary} \begin{remark} Due to formula (\ref{EcNu40}), we have that the error we commit when approximating with problem (P$_{_4}$) for the case $\alpha=0$ is inversely proportional to the square of time, i.e., $E^{(0)}\propto 1/t^2$. \end{remark} \begin{proof} From Theorem \ref{TeoP4}, we need to minimize the function $E(\xi)$ given by (\ref{EcNu4}) for the case $\alpha=0$. So it is evident that we need to minimize the function $E^{(0)}(\xi)$ given by (\ref{EcNu40}), which is equivalent to minimizing the function $F^{(0)}(\xi)=\frac{p^{(0)}(\xi)}{\xi^4}$. Therefore, let us show that $F^{(0)}$ attains its minimum at a unique positive value. Observe that $F^{(0)}$ is a continuous function on $\mathbb{R}^{+}$. Moreover, if we compute its derivative, we obtain $$F'^{(0)}(\xi)=\frac{r(\xi)}{\xi^5},$$ with $r$ given by (\ref{r}). As $r$ is a polynomial that verifies $r(0)=-60 \text{Ste}^2<0$, $r(+\infty)=+\infty$ and $r'(\xi)>0$, $\forall \xi>0$, we obtain that there exists a unique value $\xi_0>0$ such that $r(\xi_0)=0$. In addition, we can assert that $r(\xi)<0$ for every $\xi<\xi_0$ and $r(\xi)>0$ for every $\xi>\xi_0$. Consequently we have $$F'^{(0)}(\xi)<0, \;\;\forall \xi<\xi_0,\qquad F'^{(0)}(\xi_0)=0,\qquad F'^{(0)}(\xi)>0, \;\; \forall \xi>\xi_0. $$ We can conclude that $F^{(0)}$ decreases in $(0,\xi_{0})$ and increases in $(\xi_0,+\infty)$. This means that $F^{(0)}$ has a unique minimum, which is attained at $\xi_0$. Setting $\nu_{_4}^{(0)}=\xi_0$, we get that $\nu_{_4}^{(0)}$ is the unique positive root of $r$ and minimizes the error function $E^{(0)}$. \end{proof} Taking into account the last result, we show in Table \ref{Tabla:NuiVsNu4} the coefficient $\nu$ that characterizes the exact free boundary of problem (P), the approximate coefficient $\nu_{_2}$ obtained by the modified integral balance method (which until now was the most accurate technique) and the coefficient $\nu_{_4}$ defined in Corollary \ref{CorolarioAlpha0-Temp}, for different Stefan numbers $\text{Ste}$.
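In particular, for $\alpha=0$ the coefficient $\nu_{_4}^{(0)}$ can be computed directly as the unique positive root of the polynomial (\ref{r}); a minimal Python sketch (the value of $\mathrm{Ste}$ is an illustrative assumption):
\begin{verbatim}
from scipy.optimize import brentq

Ste = 0.5  # illustrative value

def r(xi):
    # polynomial (r); its unique positive root is nu_4^(0)
    return (32*xi**8 + 4*(10 + Ste)*xi**6
            + 20*Ste*(6 + Ste)*xi**2 - 60*Ste**2)

nu_4_0 = brentq(r, 1e-9, 1.0)  # r(0) < 0 < r(1) brackets the root
print(nu_4_0)
\end{verbatim}
The value obtained in this way can be checked against the column $\nu_{_4}$ of Table \ref{Tabla:NuiVsNu4}.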
By computing also the percentage relative error committed in each case, we can see that the approximate problem (P$_{_4}$) provides the best approximation that can be obtained by adopting a quadratic profile in space for the temperature. \begin{table} \small \caption{{\footnotesize Dimensionless coefficients of the free boundaries and their percentage relative error for $\alpha=0$.}} \label{Tabla:NuiVsNu4} \begin{center} \begin{tabular}{cc|cc|cc} \hline Ste & $\nu$ & $ \nu_{_2} $ & $E_{\text{rel}}(\nu_{_2})$ & $\nu_{_4}$ & $E_{\text{rel}}(\nu_{_4})$ \\ \hline 0.1 & 0.2200 & 0.2209 & 0.3947 \% & 0.2209 & 0.3855 \% \\ 0.2 & 0.3064 & 0.3087 & 0.7499 \% & 0.3086 & 0.7168 \% \\ 0.3 & 0.3699 & 0.3738 & 1.0707 \% & 0.3736 & 1.0040 \% \\ 0.4 & 0.4212 & 0.4270 & 1.3618 \% & 0.4265 & 1.2551 \% \\ 0.5 & 0.4648 & 0.4723 & 1.6266 \% & 0.4716 & 1.4762 \% \\ 0.6 & 0.5028 & 0.5122 & 1.8683 \% & 0.5112 & 1.6722 \% \\ 0.7 & 0.5365 & 0.5477 & 2.0895 \% & 0.5464 & 1.8470 \% \\ 0.8 & 0.5669 & 0.5799 & 2.2923 \% & 0.5783 & 2.0037 \% \\ 0.9 & 0.5946 & 0.6094 & 2.4786 \% & 0.6074 & 2.1449 \% \\ 1.0 & 0.6201 & 0.6365 & 2.6500 \% & 0.6342 & 2.2727 \% \\ \hline \end{tabular} \end{center} \end{table} \newpage In a similar way, we can define a new approximate \textbf{problem (P$_{_{4h}}$)} for the problem (P$_{h}$), which consists in finding the free boundary $s_{_{4h}}=s_{_{4h}}(t)$ and the temperature $T_{_{4h}}=T_{_{4h}}(x,t)$ in $0<x<s_{_{4h}}(t)$ given by the profile (\ref{PerfilQuadTemp}) such that they minimize the least-squares error (\ref{ErrorCuadraticoGenerico}) subject to the conditions (\ref{FrontFija:1F-pos-tempinfty-A}$^\star$), (\ref{TempFront:1F-pos-tempinfty-A})-(\ref{FrontInicial:1F-pos-tempinfty-A}). \begin{teo} \label{TeoP4h} If a free boundary $s_{_{4h}}$ and a temperature $T_{_{4h}}$ constitute a solution to problem (P$_{_{4h}}$), then they are given by the expressions: \begin{eqnarray} T_{_{4h}}(x,t)&=&t^{\alpha/2}\theta_{_\infty} \left[ A_{_{4h}}\left(1-\frac{x}{s_{_{4h}}(t)}\right) +B_{_{4h}} \left(1-\frac{x}{s_{_{4h}}(t)}\right)^{2}\right],\label{PerfilT4h} \\ s_{_{4h}}(t)&=& 2a\nu_{_{4h}} \sqrt{t},\label{s4h} \end{eqnarray} where the constants $A_{_{4h}}$ and $B_{_{4h}}$ are defined as functions of $\nu_{_{4h}}$ as \begin{align} \label{A4h} A_{_{4h}}=\frac{2^{\alpha+1} \nu_{_{4h}}^{\alpha+2} }{\mathrm{Ste}}, \qquad \qquad B_{_{4h}}= \frac{2 \mathrm{Bi}\; \nu_{_{4h}}-A_{_{4h}} (1+2\mathrm{Bi}\; \nu_{_{4h}})}{2(1+\mathrm{Bi}\; \nu_{_{4h}})}, \end{align} and where $\nu_{_{4h}}>0$ must minimize, for every $t>0$, the real function \begin{align} E_{_h}(\xi)&=\frac{t^{\alpha-2} \theta_{_\infty}^2}{60\; \mathrm{Ste}^2\, \xi^2 (\frac{1}{\mathrm{Bi}}+\xi)^2}\; \cdot \; \nonumber\\ & \left\lbrace p(\xi)+ \frac{1}{{\mathrm{Bi}}} \left[ 2^{2 \alpha } \left(7 \alpha ^2+7 \alpha +18\right) \xi^{2 \alpha +7} +25\; 2^{2 \alpha +1} (\alpha +1) \xi^{2 \alpha +5}\right.\right.\nonumber\\ & +2^{\alpha } \left(9 \alpha ^2+9 \alpha +6\right) \mathrm{Ste}\, \xi^{\alpha +5}+15\; 2^{2 \alpha +2} \xi^{2 \alpha +3}\nonumber\\ &\left.-5\; 2^{\alpha +1} (\alpha +1) \mathrm{Ste}\; \xi^{\alpha +3} -15 \;2^{\alpha +1} \mathrm{Ste}\; \xi^{\alpha +1}\right]\nonumber\\ &+ \left. \frac{1}{\mathrm{Bi}^2} \left[4^{\alpha +1} \left(2 \alpha ^2+2 \alpha +3\right) \xi^{2 \alpha +6}+ 5\; 4^{\alpha +1} (\alpha +1) \xi^{2 \alpha +4}+ 15\; 4^{\alpha } \xi^{2 \alpha +2} \right] \right\rbrace, \label{EcNu4h} \end{align} with $p(\xi)$ given by formula (\ref{p4}). \end{teo} \begin{proof} It is immediate that the chosen temperature profile makes condition (\ref{TempFront:1F-pos-tempinfty-A}) hold automatically. From condition (\ref{CondStefan:1F-pos-tempinfty-A}) we obtain \be -kt^{\alpha/2}\theta_{_\infty}\frac{A_{_{4h}}}{s_{_{4h}}(t)}=-\gamma s_{_{4h}}^{\alpha}(t)\dot s_{_{4h}}(t). \ee Therefore \be s_{_{4h}}(t)=\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{_{4h}}\right)^{1/(\alpha+2)} \sqrt{t}. \ee Introducing the new coefficient $\nu_{_{4h}}$ such that $ \nu_{_{4h}}=\frac{1}{2a}\left( \frac{(\alpha+2)}{(\frac{\alpha}{2}+1)} \frac{k\theta_{_\infty}}{\gamma}A_{_{4h}}\right)^{1/(\alpha+2)}$, the free boundary can be expressed as \be s_{_{4h}}(t)=2a\; \nu_{_{4h}}\sqrt{t}, \ee where the following equality holds \be\label{Ec1:P4h} A_{_{4h}}=\frac{2^{\alpha+1} \nu_{_{4h}}^{\alpha+2}}{\text{Ste}} . \ee The convective boundary condition at $x=0$, i.e. condition (\ref{FrontFijaConvectiva}), leads to \be \label{Ec2:P4h} A_{_{4h}}(1+2\text{Bi}\; \nu_{_{4h}})+2B_{_{4h}}(1+\text{Bi}\;\nu_{_{4h}})=2\text{Bi}\;\nu_{_{4h}}. \ee Therefore we obtain the formulas given in (\ref{A4h}). Replacing $A_{_{4h}}$, $B_{_{4h}}$ and $s_{_{4h}}$ by their expressions as functions of $\nu_{_{4h}}$, minimizing the least-squares error (\ref{ErrorCuadraticoGenerico}) is equivalent to minimizing (\ref{EcNu4h}) (the latter expression was obtained with the Mathematica software). \end{proof} \begin{corollary} \label{CorolarioAlpha0-Conv} For the classical Stefan problem, i.e. for the case $\alpha=0$, we get that if $\mathrm{Bi}>\frac{1}{\sqrt{12}}$ and $\mathrm{Ste}<\frac{1}{2\mathrm{Bi}^2}$, then (P$_{_{4h}}$) has a unique solution given by \begin{eqnarray} T_{_{4h}}^{(0)}(x,t)&=&\theta_{_\infty} \left[ A_{_{4h}}^{(0)}\left(1-\frac{x}{s_{_{4h}}^{(0)}(t)}\right) +B_{_{4h}}^{(0)} \left(1-\frac{x}{s_{_{4h}}^{(0)}(t)}\right)^{2}\right],\label{PerfilT40h} \\ s_{_{4h}}^{(0)}(t)&=& 2a\nu_{_{4h}}^{(0)} \sqrt{t}, \end{eqnarray} where the superscript $(0)$ refers to the value $\alpha=0$ and the constants $A_{_{4h}}^{(0)}$ and $B_{_{4h}}^{(0)}$ are defined as functions of $\nu_{_{4h}}^{(0)}$ as \begin{align} \label{A40h} A_{_{4h}}^{(0)}=\frac{2 (\nu_{_{4h}}^{(0)})^{2} }{\mathrm{Ste}}, \qquad \qquad B_{_{4h}}^{(0)}=\frac{2\mathrm{Bi}\;\nu_{_{4h}}^{(0)}-A_{_{4h}}^{(0)} (1+2\mathrm{Bi}\;\nu_{_{4h}}^{(0)} ) }{2(1+\mathrm{Bi}\;\nu_{_{4h}}^{(0)})}, \end{align} where $\nu_{_{4h}}^{(0)}>0$ is the value at which the function $E_{h}^{(0)}$ attains its minimum, \begin{align} E_{h}^{(0)}(\xi)&=\frac{t^{-2} \theta_{_\infty}^2}{60 \mathrm{Ste}^2 \xi^2(\frac{1}{\mathrm{Bi}}+\xi)^2}\left\lbrace p^{(0)}(\xi)+\frac{1}{{\mathrm{Bi}}}\left[2\xi(9\xi^6+(3\mathrm{Ste}+25)\xi^4\right.\right.\nonumber\\ &\left.\left. +5(6-\mathrm{Ste})\xi^2-15\mathrm{Ste} )\right]+ \frac{1}{{\mathrm{Bi}^2}} \xi^2(12\xi^4+20\xi^2+15)\right\rbrace, \label{EcNu40h} \end{align} where $p^{(0)}$ is given by (\ref{p40}).
Moreover, $\nu_{_{4h}}^{(0)}$ can be obtained as the unique positive root of the following polynomial: \begin{align} r_{_h}(\xi)&= 16 \mathrm{Bi}^3 \xi^9+51 \mathrm{Bi}^2 \xi^8+\xi^7 \left(2 \mathrm{Bi}^3 \mathrm{Ste}+20 \mathrm{Bi}^3+57 \mathrm{Bi}\right)\nonumber\\ &+\xi^6 \left(7 \mathrm{Bi}^2 \mathrm{Ste}+65 \mathrm{Bi}^2+24\right)\nonumber\\ &+\mathrm{Bi} (3 \mathrm{Ste}+25) \xi^5+\xi^4 \left(\mathrm{Bi}^2 \left(2 \mathrm{Ste}^2+15 \mathrm{Ste}+30\right)+20\right)\nonumber\\ &+5 \mathrm{Bi} (3 + (-1 + 12 \mathrm{Bi}^2) \mathrm{Ste} + 2 \mathrm{Bi}^2 \mathrm{Ste}^2) \xi^3+45 \mathrm{Bi}^2 \mathrm{Ste} \xi^2\nonumber\\ &+15 \mathrm{Bi} \mathrm{Ste} (1 - 2 \mathrm{Bi}^2 \mathrm{Ste}) \xi-15 \mathrm{Bi}^2 \mathrm{Ste}^2.\label{rh} \end{align} \end{corollary} \begin{proof} When we replace $\alpha=0$ in Theorem \ref{TeoP4h}, we immediately obtain formulas (\ref{A40h}) and (\ref{EcNu40h}). In order to prove that there exists a unique value that minimizes the least-squares error, we compute $E'_{h}(\xi)$ and get that $E_{h}'(\xi)=\frac{\theta_{_\infty}^2}{30 \mathrm{Ste}^2 t^2 \xi^3 (\mathrm{Bi} \xi+1)^3}r_{h}(\xi)$, with $r_{h}$ given by (\ref{rh}). We can observe that $r_{h}(0)<0$, $r_{h}(+\infty)=+\infty$ and $r'_{h}>0$ under the hypotheses that $\mathrm{Bi}>\frac{1}{\sqrt{12}}$ and $\mathrm{Ste}<\frac{1}{2\mathrm{Bi}^2}$. Therefore, we can assert that there exists a unique $\xi_{_{h0}}$ such that $r_{h}(\xi_{_{h0}})=0$. In addition, we have that $r_{h}(\xi)<0$ for all $\xi<\xi_{_{h0}}$ and $r_{h}(\xi)>0$ for all $\xi>\xi_{_{h0}}$. Then we get that $E_{h}(\xi)$ decreases for all $\xi<\xi_{_{h0}}$ and increases for all $\xi>\xi_{_{h0}}$. Consequently, $\xi_{_{h0}}$ constitutes the unique minimizer of the least-squares error. \end{proof} In view of the above result, for $\alpha=0$ we compare the coefficient $\nu_{h}$ that characterizes the exact free boundary with the coefficient $\nu_{_{2h}}$ corresponding to the modified integral method, which was until now the most accurate, and also with the coefficient $\nu_{_{4h}}$ obtained when minimizing the least-squares error. We fix $\text{Ste}=0.02$ and vary $\text{Bi}$ between 1 and 5. The values of these parameters are chosen in order to satisfy the hypotheses of Corollary \ref{CorolarioAlpha0-Conv}. By computing the percentage relative error of each method, we conclude that the approximate problem (P$_{_{4h}}$) gives us the best approximate solution to problem (P$_{h}$). \begin{table} \small \caption{{\footnotesize Coefficients of the free boundaries and their percentage relative error for $\alpha=0$ and $\text{Ste}=0.02$.}} \label{Tabla:NuihVsNu4h} \begin{center} \begin{tabular}{cc|cc|cc} \hline $\text{Bi}$ & $\nu_{h}$ & $ \nu_{_{2h}} $ & $E_{\text{rel}}(\nu_{_{2h}})$ & $\nu_{_{4h}}$ & $E_{\text{rel}}(\nu_{_{4h}})$ \\ \hline 1.0000 & 0.0193 & 0.0193 & 0.0002 \% & 0.0193 & 0.0002 \% \\ 2.0000 & 0.0350 & 0.0350 & 0.0023 \% & 0.0350 & 0.0022 \%\\ 3.0000 & 0.0468 & 0.0468 & 0.0066 \% & 0.0468 & 0.0064 \%\\ 4.0000 & 0.0553 & 0.0553 & 0.0120 \% & 0.0553 & 0.0117 \%\\ 5.0000 & 0.0617 & 0.0617 & 0.0175 \% & 0.0617 & 0.0172 \%\\ \hline \end{tabular} \end{center} \end{table} In case we decide to use formula (\ref{rh}) to compute $\nu_{_{4h}}$ without satisfying the hypotheses of Corollary \ref{CorolarioAlpha0-Conv}, fixing $\text{Ste}=0.5$ and varying $\text{Bi}$ from 1 to 100, we get the results shown in Table \ref{Tabla:NuihVsNu4h1}.
\begin{table} \small \caption{{\footnotesize Coefficients of the free boundaries and their percentage relative error for $\alpha=0$ and $\text{Ste}=0.5$.}} \label{Tabla:NuihVsNu4h1} \begin{center} \begin{tabular}{cc|cc|cc} \hline $\text{Bi}$ & $\nu_{h}$ & $ \nu_{_{2h}} $ & $E_{\text{rel}}(\nu_{_{2h}})$ & $\nu_{_{4h}}$ & $E_{\text{rel}}(\nu_{_{4h}})$ \\ \hline 1 & 0.2926 & 0.2937 & 0.3939 \% & 0.2933 & 0.2600 \%\\ 10 & 0.4422 & 0.4484 & 1.4111 \% & 0.4477 & 1.2478 \%\\ 20 & 0.4533 & 0.4602 & 1.5151 \% & 0.4595 & 1.3576 \%\\ 30 & 0.4571 & 0.4642 & 1.5514 \% & 0.4635 & 1.3962 \%\\ 40 & 0.4590 & 0.4662 & 1.5699 \% & 0.4655 & 1.4158 \%\\ 50 & 0.4601 & 0.4674 & 1.5811 \% & 0.4667 & 1.4277 \%\\ 60 & 0.4609 & 0.4682 & 1.5886 \% & 0.4675 & 1.4357 \%\\ 70 & 0.4615 & 0.4688 & 1.5940 \% & 0.4681 & 1.4414 \%\\ 80 & 0.4619 & 0.4693 & 1.5980 \% & 0.4686 & 1.4457 \%\\ 90 & 0.4622 & 0.4696 & 1.6012 \% & 0.4689 & 1.4491 \%\\ 100 & 0.4625 & 0.4699 & 1.6037 \% & 0.4692 & 1.4518 \%\\ \hline \end{tabular} \end{center} \end{table} \newpage \section{Conclusions} In this paper we have studied different approximate methods for one-dimensional one-phase Stefan problems whose main feature consists in taking a space-dependent latent heat. We have considered two different problems that differ from each other in their boundary condition at the fixed face $x=0$: a Dirichlet or a Robin condition. We have implemented the classical heat balance integral method, a modified integral method and the refined integral method. Exploiting the knowledge of the exact solution of both problems (available in the literature), we have studied the accuracy of the different approximations obtained. All the analysis has been carried out using dimensionless parameters such as the Stefan and Biot numbers. Furthermore, we have studied the case when Bi goes to infinity in the problem with a convective condition, recovering the approximate solutions obtained when a temperature condition is imposed at the fixed face. We have provided some numerical simulations and concluded that in the majority of cases the modified integral method is the most reliable in terms of accuracy. When approximating by minimizing the least-squares error, we get better approximations, but only for the case $\alpha=0$ (where we could prove existence and uniqueness of the solution). The least accurate method was the classical heat balance integral method, not only due to the high percentage error committed but also because we could not obtain a result that ensures uniqueness of the approximate solution. \section*{Acknowledgement} The present work has been partially sponsored by the Project PIP No. 0275 from CONICET-UA, Rosario, Argentina, by the Project ANPCyT PICTO Austral 2016 No. 0090, and by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement 823731 CONMECH. The authors would like to thank the two anonymous referees for their helpful comments. \bibliographystyle{elsarticle-num}
\section{Introduction} Let $\mathcal H$ be a real Hilbert space with its inner product denoted by $\langle \cdot, \cdot \rangle$ and the induced norm denoted by $\|\cdot\|$. In this paper we study the asymptotic properties of the simultaneous projection method as applied to a possibly countably infinite number of closed and linear subspaces of $\mathcal H$. We begin by briefly recalling some of the known results which have so far been established only for a finite number of subspaces. Moreover, we recall relevant results related to the cyclic projection method. We do not discuss here the case of general closed and convex sets, for which we refer the interested reader to \cite{BauschkeBorwein1996, BauschkeNollPhan2015, BorweinLiTam2017, Cegielski2012, CegielskiReichZalas2018}. For other examples of projection methods studied in the setting of closed and linear subspaces, we refer the reader to \cite{ArtachoCampoy2019, BadeaSeifert2017, BauschkeCruzNghiaPhanWang2014, BauschkeCruzNghiaPhanWang2016, Tam2020}. \subsection{Related Work} For now let $r \in \mathbb Z_+$. For each $i =1,2, \ldots, r$, let $M_i\subset \mathcal H$ be a nontrivial closed and linear subspace, and let $P_{M_i}$ denote the corresponding orthogonal projection. Moreover, let $M:=\bigcap_{i=1}^r M_i$ with the corresponding orthogonal projection $P_M$. In the next three theorems, the operator $T_r$ can be either the cyclic projection operator $T_r:=P_{M_r}\ldots P_{M_1}$ or the simultaneous projection operator $T_r:=\frac 1 r \sum_{i=1}^r P_{M_i}$. In particular, $T_2=P_{M_2}P_{M_1}$ is the alternating projection operator. We begin with a well-known result. \begin{theorem}[Norm Convergence]\label{int:th:norm} For each $x\in\mathcal H$, we have \begin{equation} \lim_{k\rightarrow\infty}\left\|T_r^k (x)-P_M(x)\right\| = 0. \end{equation} \end{theorem} Theorem \ref{int:th:norm} goes back to von Neumann \cite{Neumann1949}, when $T_2=P_{M_2}P_{M_1}$, and to Halperin \cite{Halperin1962}, when $T_r=P_{M_r}\ldots P_{M_1}$. Lapidus \cite{Lapidus1981} and Reich \cite{Reich1983} proved Theorem \ref{int:th:norm} for $T_r=\frac 1 r \sum_{i=1}^r P_{M_i}$. It turns out that in the infinite dimensional case, the convergence properties can indeed differ from their finite dimensional counterparts. \begin{theorem}[Dichotomy]\label{int:th:dichotomy} Exactly one of the following two statements holds: \begin{enumerate}[(i)] \item $\sum_{i=1}^r M_i^\perp$ is closed. Then the sequence $\{T_r^k\}_{k=1}^\infty$ converges linearly to $P_M$. \item $\sum_{i=1}^r M_i^\perp$ is not closed. Then the sequence $\{T_r^k\}_{k=1}^\infty$ converges arbitrarily slowly to $P_M$. \end{enumerate} \end{theorem} We recall that the linear convergence in (i) means that there are constants $c>0$ and $q\in (0,1)$ such that the inequality $\|T_r^k(x)-P_M(x)\|\leq c q^k \|x\|$ holds for all $k=1,2,\ldots$ and all $x \in \mathcal H$. The arbitrarily slow convergence in (ii) means that for any sequence of scalars $\{a_k\}_{k=1}^\infty$ with $0\leq a_k$ and $a_k \to 0$, there is a point $x \in \mathcal H$ such that the inequality $\|T_r^k(x) - P_{M}(x)\| \geq a_k$ holds for all $k=1,2,\ldots$. The first instance of Theorem \ref{int:th:dichotomy} (ii) is due to Bauschke, Deutsch and Hundal \cite{BauschkeDeutschHundal2009}, who proved it for the alternating projection method ($T_2=P_{M_2} P_{M_1}$) with decreasing null sequences $\{a_k\}_{k=1}^\infty$.
These authors commented that their result is also valid for $T_r=\frac 1 r \sum_{i=1}^r P_{M_i}$ with $r\geq 2$ because of the connection between the method of simultaneous projections and the method of alternating projections in the product space. We return to this connection below. The statement of Theorem \ref{int:th:dichotomy} with $T_r=P_{M_r}\ldots P_{M_1}$, allowing $r\geq 2$ and any null nonnegative sequence, has been established by Deutsch and Hundal in \cite{DeutschHundal2010}. Similar results can be found, for example, in \cite{BadeaGrivauxMuller2011, BadeaSeifert2016, BadeaSeifert2017, DeutschHundal2011, DeutschHundal2015}. Despite the arbitrarily slow convergence presented in alternative (ii), there do exist sets of starting points in $\mathcal H$ for which there are relatively good error upper bounds. We comment on this matter in Theorems \ref{int:th:superPoly} and \ref{int:th:poly}. \begin{theorem}[Super-polynomial Rate] \label{int:th:superPoly} If $\sum_{i=1}^r M_i^\perp$ is not closed, then the sequence $\{T_r^k\}_{k=1}^\infty$ converges super-polynomially fast to $P_M$ on some dense linear subspace $X\subset\mathcal H$. \end{theorem} The super-polynomially fast convergence means that $k^n \|T_r^k(x)-P_M(x)\|\to 0$ as $k\to\infty$ for each $x\in X$ and for all $n=1,2,\ldots$. Theorem \ref{int:th:superPoly} is due to Badea and Seifert \cite{BadeaSeifert2016}, who established it for $T_r:=P_{M_r}\ldots P_{M_1}$ in a complex Hilbert space. By using a complexification argument, we see that this result is also valid in a real Hilbert space. Similarly to the case of Theorem \ref{int:th:dichotomy}, the result holds for $T_r=\frac 1 r \sum_{i=1}^r P_{M_i}$ as can be seen by using the product space approach. The details can be found in \cite{ReichZalas2017}. The following theorem has recently been established by Borodin and Kopeck\'{a} in \cite{BorodinKopecka2020}. \begin{theorem}[Polynomial Rate]\label{int:th:poly} Assume that $M_1 \cap M_2 = \{0\}$. Then for any $x \in M_1^\perp + M_2^\perp$ there is $C(x)>0$ such that \begin{equation}\label{int:th:poly:ineq} \|(P_{M_2}P_{M_1})^k(x)\| \leq \frac{C(x)}{\sqrt k}, \quad k=1,2,\ldots. \end{equation} Moreover, when $\mathcal H$ is infinite dimensional, the denominator $\sqrt k$ cannot be replaced by $k^{1/2+\varepsilon}$ for any $\varepsilon>0$ (that is, for each $\varepsilon >0$ there are two closed linear subspaces $M_1$, $M_2$ and $x \in M_1^\perp + M_2^\perp$ such that $\|(P_{M_2}P_{M_1})^k(x)\| \geq C(x) k^{-(1/2+\varepsilon)}$ for some $C(x) > 0$ and all $k = 1,2,\ldots$). \end{theorem} It is not difficult to see that estimate \eqref{int:th:poly:ineq} also holds when $M_1^\perp \cap M_2^\perp\neq \{0\}$. We comment on this in the proof of Theorem \ref{th:polyRate} below. We now return to the case where $\sum_{i=1}^{r}M_i^\perp$ is closed. In this case one may be interested in finding the optimal error bound, that is, the smallest possible estimate for the relative error $e_k(x):=\|T_r^k(x)-P_M(x)\|/\|x\|$, which is independent of $x$. The answer to this question leads to the computation of the operator norm since $\sup_{x\neq 0} e_k(x) = \| T_r^k -P_M\|$. The first result of this type for the alternating projections method (APM) is due to Aronszajn (inequality) \cite{Aronszajn1950}, and Kayalar and Weinert (equality) \cite{KayalarWeinert1988}, who expressed the optimal error bound in terms of the cosine of the Friedrichs angle between the subspaces $M_1$ and $M_2$, which we denote by $\cos(M_1,M_2)$. 
Recall that \begin{equation}\label{int:def:cosM1M2} \cos(M_1,M_2) := \sup \left\{ \langle x, y \rangle \colon \begin{array}{l} x \in M_1 \cap (M_1 \cap M_2)^\perp \cap B,\\ y \in M_2 \cap (M_1 \cap M_2)^\perp \cap B \end{array} \right\} \in [0,1], \end{equation} where $B:=\{x \in \mathcal H \colon \|x\|\leq 1\}$. Their result reads as follows: \begin{theorem}[Optimal Error Bound] \label{int:th:APM} For each $k=1,2,\ldots,$ we have \begin{equation}\label{int:th:APM:eq} \|(P_{M_2}P_{M_1})^k-P_M\|=\cos(M_1,M_2)^{2k-1}. \end{equation} \end{theorem} Only estimates are known for $r>2$; see, for example, \cite{KayalarWeinert1988, PustylnikReichZaslavski2012} for those which involve angles measured between $M_1\cap\ldots\cap M_i$ and $M_{i+1}$, $i=1,\ldots,r-1$, and \cite{BadeaGrivauxMuller2011, PustylnikReichZaslavski2013} for those which are expressed using the so-called inclination number. At this point, recall that \begin{equation}\label{int:equivalence} \cos(M_1,M_2)<1 \quad \Longleftrightarrow \quad M_1^\perp+M_2^\perp \text{ is closed}; \end{equation} see, for example, \cite{Deutsch1985, BauschkeBorwein1993} and \cite[Theorem 9.35 and p. 235]{Deutsch2001} for detailed historical notes. Equivalence \eqref{int:equivalence} may also be deduced from Theorems \ref{int:th:dichotomy} and \ref{int:th:APM}. A result analogous to Theorem \ref{int:th:APM} has been established in \cite{ReichZalas2017} for the simultaneous projection method. Indeed, let the product space $\mathbf H_r := \mathcal H^r = \mathcal H \times \ldots \times \mathcal H$ be equipped with the inner product $\langle \mathbf x, \mathbf y \rangle_r :=\frac 1 r \sum_{i=1}^{r} \langle x_i,y_i\rangle$ and the induced norm $\|\mathbf x\|_r:=\sqrt{\langle \mathbf x, \mathbf x\rangle_r}$, where $\mathbf x = \{x_1,\ldots,x_r\}$, $\mathbf y=\{y_1,\ldots,y_r\}$. Moreover, let $\mathbf C_r:=M_1\times\cdots\times M_r \subset \mathbf H_r$ and $\mathbf D_r:=\{\{x,\ldots,x\}\colon x\in \mathcal H\} \subset \mathbf H_r$, and denote by $\cos_r(\mathbf C_r, \mathbf D_r)$ the corresponding cosine of the Friedrichs angle in $\mathbf H_r$; see \eqref{def:cosM1N2inHr} and Remark \ref{rem:notation}. \begin{theorem}[Optimal Error Bound] \label{int:th:SPM} For each $k=1,2,\ldots,$ we have \begin{equation}\label{int:th:SPM:eq} \left\|\left(\frac 1 r \sum_{i=1}^r P_{M_i}\right)^k - P_{M}\right\|=\cos_r(\mathbf C_r, \mathbf D_r)^{2k}. \end{equation} In particular, when $r = 2$, we get $\cos_2(\mathbf C_2, \mathbf D_2)^2 = \frac 1 2 + \frac 1 2 \cos(M_1,M_2)$. \end{theorem} Recall that the alternating projection formalization introduced above is due to Pierra \cite{Pierra1984}, who observed that for each $x\in \mathcal H$, $\mathbf x = (x,\ldots,x)$ and $k=1,2,\ldots$, we have $\|( \frac 1 r \sum_{i=1}^r P_{M_i})^k(x) - P_{M}(x)\| = \|(P_{\mathbf D_r} P_{\mathbf C_r})^k(\mathbf x) - P_{\mathbf C_r \cap \mathbf D_r}(\mathbf x)\|_r.$ Note here that by simply combining this with Theorem \ref{int:th:APM}, we only obtain an upper bound given by $\cos_r(\mathbf C_r, \mathbf D_r)^{2k-1}$. The properties of the cosine $\cos_r(\mathbf C_r, \mathbf D_r)$ were studied in \cite{BadeaGrivauxMuller2011}, where equality \eqref{int:th:SPM:eq} was shown for $k = 1$. It turns out that when $r = 2$, the cosine of the Friedrichs angle appears in the optimal rate estimates for many other well-known projection methods.
It turns out that when $r = 2$, the cosine of the Friedrichs angle appears in the optimal rate estimates for many other well-known projection methods. See, for example, \cite{BauschkeCruzNghiaPhanWang2016} for the relaxed alternating projection method, \cite{BauschkeCruzNghiaPhanWang2014} for the Douglas-Rachford method or \cite{ArtachoCampoy2019} for the method of averaged alternating modified reflections. We refer the interested reader to \cite[Table 1]{ArtachoCampoy2019}, where one can find an elegant comparison of rates. \subsection{Contribution and Organization of the Paper} The purpose of the present paper is to investigate the asymptotic properties of the simultaneous projection operator, analogous to those mentioned above, in the case where the number of subspaces $M_i$ is possibly countably infinite, that is, when $r \in \mathbb Z_+\cup\{\infty\}$. The operator in question is defined by $T_\omega:=\sum_{i=1}^r \omega_i P_{M_i}$, where $\omega = \{\omega_i\}_{i=1}^r$ is a vector/sequence of weights $\omega_i \in (0,1)$ whose sum equals one. We carry out our study by adjusting the product space formalization of Pierra. For this purpose, for each operator $T_\omega$, we define the weighted product space $(\mathbf H_\omega, \langle \cdot, \cdot \rangle_\omega)$, which is the analogue of $(\mathbf H_r, \langle \cdot, \cdot \rangle_r)$; the subspaces $\mathbf C_\omega$ and $\mathbf D_\omega$ in $\mathbf H_\omega$, which correspond to $\mathbf C_r$ and $\mathbf D_r$ in $\mathbf H_r$; and finally, the cosine of the Friedrichs angle $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega)$, which is an analogue of $\cos_r(\mathbf C_r, \mathbf D_r)$; see Section \ref{sec:preliminaries} for fully-fledged definitions, notation and basic properties. We note here only that for $r = \infty$ the product space $\mathbf H_\omega$ coincides with $\ell^2_\omega(\mathcal H):=\{\mathbf x = \{x_i\}_{i=1}^\infty \colon x_i\in\mathcal H,\ i=1,2,\ldots,\ \sum_{i=1}^{\infty}\omega_i\|x_i\|^2 <\infty\}$. We begin this study in Section \ref{sec:Pierra} by showing the explicit connection between the operator $T_\omega$, the projection onto $M$ and the projections onto $\mathbf C_\omega$, $\mathbf D_\omega$ and $\mathbf C_\omega \cap \mathbf D_\omega$. Within this framework, we establish that the iterates of the simultaneous projection method $\{T_\omega^k(x)\}_{k=1}^\infty$ converge in norm to $P_M(x)$ for each starting point $x \in \mathcal H$, even when $r=\infty$. Moreover, using the powers of $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega)^2$, we find an expression for the norm $\|T_\omega^k - P_M\|$, which, when smaller than $1$, becomes the optimal error bound for linear convergence. In Section \ref{sec:cosCD} we present a detailed study of the cosine $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega)$. In particular, we provide an alternative formula for it, where the supremum is taken over a possibly smaller set (Lemma \ref{lem:cosCD1}). Furthermore, we find a new estimate which, depending on the weights $\omega$, may hold as a strict inequality or as an equality (see Lemma \ref{lem:cosCD2} and Example \ref{ex:orthogonal}). The important property of this estimate is that it must hold as an equality whenever the subspaces $\mathbf C_\omega$ and $\mathbf D_\omega$ are parallel, and in this case the equality holds for all weights $\omega$ (Theorem \ref{thm:cosCDequal1}). On the other hand, we show that the cosine can be easily evaluated when the subspaces $M_i$ are pairwise orthogonal (Proposition \ref{prop:orthogonal}).
In addition, we point out that by reducing the multiple copies of the subspaces $M_i$, the cosine $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega)$ can be computed in a simpler manner. For this reason we introduce a rearrangement lemma (see the \hyperref[sec:Appendix]{Appendix}). On the other hand, when $r = \infty$, we can approximate the cosine $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega)$ by a limit process of cosines between $\mathbf C_q$ and $\mathbf D_q$ defined in smaller product spaces, where $q\in\mathbb Z_+$ and $q\to \infty$. In Section \ref{sec:AsymptoticProp} we return to the asymptotic properties of the simultaneous projection method. Building on the idea of $\ell^2$-summability, we replace the subspace $\sum_{i=1}^{r}M_i^\perp$, which plays a central role in Theorems \ref{int:th:dichotomy}--\ref{int:th:poly}, by another $\omega$-dependent subspace, which for $r = \infty$ becomes $\{\sum_{i=1}^{\infty} \omega_i x_i \colon x_i \in M_i^\perp,\ \sum_{i=1}^{\infty}\omega_i\|x_i\|^2<\infty\}$; see \eqref{prop:Minkowski:Aw}. We show that this subspace, which we denote by $A_\omega (\mathbf C_\omega^\perp)$, plays a similar role to that of $\sum_{i=1}^{r}M_i^\perp$. In particular, the closedness of this subspace, or lack thereof, determines the dichotomy between linear and arbitrarily slow convergence. The latter case also implies the super-polynomially fast convergence on some dense linear subspace of $\mathcal H$. Moreover, $A_\omega (\mathbf C_\omega^\perp)$ becomes the set of ``good'' starting points on which we always have at least a polynomial rate of convergence. It is not difficult to see that, when $r = \infty$, the sets $A_\omega (\mathbf C_\omega^\perp)$ may differ for different sequences of weights $\omega$; see Example \ref{ex:AwCwAreDifferent}. In spite of this, we find that the closedness of $A_\omega (\mathbf C_\omega^\perp)$ in $\mathcal H$ does not depend on the weights $\omega$, but only on the subspaces $M_i$ themselves. To be more precise, we prove that if the set $A_\omega (\mathbf C_\omega^\perp)$ is closed for one sequence of weights $\omega$, then it must be closed for all sequences of weights and be equal to $M^\perp$ (see Theorem \ref{thm:cosCDequal1} in Section \ref{sec:cosCD} phrased in the language of cosines and Proposition \ref{prop:Minkowski}). Hence it cannot happen that the rate of convergence is linear for one sequence $\omega$ but arbitrarily slow for another one. This observation slightly strengthens the aforementioned dichotomy theorem. \section{Preliminaries} \label{sec:preliminaries} From now on, let \begin{equation}\label{} r \in \mathbb Z_+ \cup \{\infty\} \end{equation} and let \begin{equation} \Omega_r := \left\{ \{\omega_i\}_{i=1}^r \colon \ \omega_i>0,\ i=1,\ldots,r,\ \sum_{i=1}^{r} \omega_i = 1 \right\}. \end{equation} In this section we extend the notation used in the introduction for the particular vector $\omega = \{1/r,\ldots, 1/r\}\in \Omega_r$ to an arbitrary vector $\omega \in \Omega_r$, when $r\in \mathbb Z_+$, and to an arbitrary sequence $\omega \in \Omega_\infty$, when $r=\infty$.
For each $\omega \in \Omega_r$, we define a weighted product $\ell^2$-space and an associated weighted inner product by \begin{equation}\label{def:Hw} \mathbf H_\omega := \begin{cases} \mathcal H^r, & \text{if } r\in\mathbb Z_+, \\ \ell^2_\omega(\mathcal H), & \text{if } r = \infty \end{cases} \qquad \text{and} \qquad \langle \mathbf x, \mathbf y \rangle_{\omega} := \sum_{i=1}^{r} \omega_i\langle x_i, y_i\rangle, \end{equation} where $\ell^2_\omega(\mathcal H):=\{\mathbf x = \{x_i\}_{i=1}^\infty \colon x_i\in\mathcal H ,\ i=1,2,\ldots,\ \sum_{i=1}^{\infty}\omega_i\|x_i\|^2 <\infty\}$ and where $\mathbf x = \{x_i\}_{i=1}^r, \mathbf y = \{y_i\}_{i=1}^r \in \mathbf H_\omega$. One can verify that the pair $(\mathbf H_\omega, \langle \cdot, \cdot \rangle_{\omega})$ is a Hilbert space. The induced norm on $\mathbf H_\omega$ and the operator norm on $\mathcal B(\mathbf H_\omega)$, the Banach space of all bounded linear operators on $\mathbf H_\omega$, are both denoted by $\|\cdot\|_{\omega}$. \begin{proposition}[$r=\infty$]\label{prop:abs} If $\mathbf x = \{x_i\}_{i=1}^\infty\in \mathbf H_\omega$ for some $\omega \in \Omega_\infty$, then the series $\sum_{i=1}^{\infty}\omega_i x_i$ is absolutely convergent, hence unconditionally (compare with the \hyperref[sec:Appendix]{Appendix}). \end{proposition} \begin{proof} Observe that $\omega_i \|x_i\| < \omega_i$ for all $i \in I:=\{i\colon \|x_i\|<1\}$ and $\omega_j \|x_j\| \leq \omega_j \|x_j\|^2$ for all $j \in J:=\{j\colon \|x_j\|\geq 1\}$. Consequently, for each $n\geq 1$, we get \begin{equation}\label{} \sum_{i=1}^{n} \|\omega_i x_i\| = \sum_{\substack{1 \leq i\leq n \\ i\in I}} \omega_i \|x_i\| + \sum_{\substack{1 \leq j \leq n\\ j\in J}} \omega_j \|x_j\| \leq \sum_{\substack{1 \leq i\leq n \\ i\in I}} \omega_i + \sum_{\substack{1 \leq j \leq n\\ j\in J}} \omega_j \|x_j\|^2 \leq 1 + \|\mathbf x\|_\omega^2 <\infty, \end{equation} with the convention that the summation over the empty set is zero. \end{proof} Consequently, for each $\omega \in \Omega_r$, we can define the \emph{averaging operator} $ A_\omega \colon \mathbf H_\omega \to \mathcal H$ by \begin{equation}\label{def:A} A_\omega(\mathbf x) := \sum_{i=1}^{r}\omega_i x_i, \end{equation} where $\mathbf x = \{x_i\}_{i=1}^r \in \mathbf H_\omega$. In particular, the unconditional convergence of $A_\omega(\mathbf x)$ for $r=\infty$ gives us a lot of freedom in rearranging the summands in \eqref{def:A}; see Lemma \ref{lem:rearrangement} in the \hyperref[sec:Appendix]{Appendix}. Moreover, $A_\omega$ is a norm one linear operator which for all $\mathbf x \in \mathbf H_\omega$ and $z\in\mathcal H$ satisfies \begin{equation}\label{def:Aweak} \langle A_\omega(\mathbf x), z \rangle = \left\langle \sum_{i=1}^r \omega_i x_i, z \right\rangle = \sum_{i=1}^r \omega_i \langle x_i, z \rangle. \end{equation} Let $M_i$ be a nontrivial ($M_i\neq \{0\}$) closed and linear subspace of $\mathcal H$, $i=1,\ldots,r$, and let \begin{equation}\label{def:MinH} M := \bigcap_{i=1}^{r} M_i. \end{equation} For each $\omega \in \Omega_r$, the \emph{simultaneous projection operator} $T_\omega\colon \mathcal H \to \mathcal H$ is defined by \begin{equation}\label{def:T} T_\omega(x) := \sum_{i=1}^{r} \omega_i P_{M_i}(x), \end{equation} where $x\in \mathcal H$. Note that $T_\omega (x) = A_\omega(\{P_{M_i}(x)\}_{i=1}^{r})$ and $\sum_{i=1}^{r}\omega_i\|P_{M_i}(x)\|^2 \leq \sum_{i=1}^{r}\omega_i\|x\|^2 = \|x\|^2<\infty$ due to the equalities $\|P_{M_i}\| = 1$, $i=1,\ldots,r$; see, for example, \cite[Theorem 5.13]{Deutsch2001}. 
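To make definition \eqref{def:T} more concrete, we record a minimal numerical sketch in Python (for finite $r$; the ambient space $\mathbb R^6$, the three subspaces through a common line and the weights are arbitrary choices made only for this illustration). The convergence visible in its last line is established in Theorem \ref{thm:normConvergence} below.
\begin{verbatim}
# Simultaneous projections in R^6 (illustration only): three subspaces
# sharing the line M = span{m}, so generically their intersection is M.
import numpy as np

rng = np.random.default_rng(1)
m = rng.standard_normal((6, 1))
bases = [np.linalg.qr(np.hstack([m, rng.standard_normal((6, 2))]))[0]
         for _ in range(3)]                 # orthonormal bases of M_1, M_2, M_3
w = [0.5, 0.3, 0.2]                         # weights w_i > 0 summing to one
T = sum(wi * Q @ Q.T for wi, Q in zip(w, bases))  # T_w = sum_i w_i P_{M_i}
Pm = (m @ m.T) / (m.T @ m)                  # projection onto M = span{m}

x0 = rng.standard_normal(6)
xk = x0.copy()
for _ in range(200):
    xk = T @ xk                             # x_{k+1} = T_w(x_k)
print(np.linalg.norm(xk - Pm @ x0))         # approaches 0 as k grows
\end{verbatim}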
Returning to the bound above, we see that for $r=\infty$ the series defining $T_\omega(x)$ is absolutely convergent by Proposition \ref{prop:abs}. Moreover, since each projection $P_{M_i}$ is self-adjoint (see again \cite[Theorem 5.13]{Deutsch2001}), so is the simultaneous projection $T_\omega$. Furthermore, by the convexity of $\|\cdot\|$, we have $\|T_\omega\|\leq 1$. We also have $\fix T_\omega = M$. Indeed, the inclusion $M \subset \fix T_\omega$ is obvious, and if there is $x\in \fix T_\omega$ such that $x \notin M_j$ for some $j\in \{1,\ldots,r\}$, then $\|P_{M_j}(x)\| < \|x\|$ and thus we arrive at a contradiction as $\|x\|=\|T_\omega(x)\| \leq \sum_{i=1}^{r}\omega_i \|P_{M_i}(x)\| < \|x\|$. We now extend the definitions of the product set $\mathbf C_r$ and the diagonal set $\mathbf D_r$ to \begin{equation}\label{def:CDinHr} \mathbf C_{\omega} := \mathbf H_\omega \cap \prod_{i=1}^{r}M_i \qquad \text{and} \qquad \mathbf D_{\omega} := \big\{\{x\}_{i=1}^{r} \colon x \in \mathcal H\big\}, \end{equation} respectively, where $\omega \in \Omega_r$. It is not difficult to see that both $\mathbf C_\omega$ and $\mathbf D_\omega$ are closed and linear subspaces of $\mathbf H_\omega$. Obviously, when $r <\infty$, the intersection with $\mathbf H_\omega$ can be omitted. However, when $r=\infty$, the weighted $\ell^2$-spaces $\mathbf H_\omega$ (and hence $\mathbf C_\omega$) may be different for different $\omega \in \Omega_\infty$, as the following example shows. \begin{example}[$r=\infty$]\label{ex:HwAreDifferent} Let $\alpha >0$ and $\beta > 1$. Consider the weighted $\ell^2$-space $\mathbf H_{\omega,\beta}=\ell^2_{\omega,\beta}(\mathcal H)$ with weights $\omega_{i,\beta}:=1/(i^\beta s_\beta) $, where $s_\beta:=\sum_{i=1}^{\infty}1/i^\beta$. Moreover, let $\mathbf x_\alpha=\{x_i\}_{i=1}^\infty$ be any sequence with $\|x_i\|=i^{\alpha/2}$. Then $\mathbf x_\alpha \in \mathbf H_{\omega,\beta}$ if and only if $\beta > 1+\alpha$. Consequently, for any fixed $\alpha>0$, we can find $\beta \neq \beta'$ (for example, $1<\beta'<1+\alpha<\beta$) such that $\mathbf x_\alpha \in \mathbf H_{\omega,\beta}$, but $\mathbf x_\alpha \notin \mathbf H_{\omega,\beta'}$. \end{example} Following \eqref{int:def:cosM1M2}, for each $\omega \in \Omega_r$, we define the \emph{cosine of the Friedrichs angle} between two nontrivial closed and linear subspaces $\mathbf M_1, \mathbf M_2$ of $\mathbf H_\omega$ by \begin{equation}\label{def:cosM1N2inHr} \cos_\omega(\mathbf M_1, \mathbf M_2) := \sup \left\{ \langle \mathbf x, \mathbf y \rangle_\omega \colon \begin{array}{l} \mathbf x \in \mathbf M_1 \cap (\mathbf M_1 \cap \mathbf M_2)^\perp \cap \mathbf B_\omega,\\ \mathbf y \in \mathbf M_2 \cap (\mathbf M_1 \cap \mathbf M_2)^\perp \cap \mathbf B_\omega \end{array} \right\} \in [0,1], \end{equation} where $\mathbf B_\omega := \{\mathbf x \in \mathbf H_\omega \colon \|\mathbf x\|_\omega \leq 1\}$. We use the same symbol ``$\perp$'' for the orthogonal complement both in $\mathcal H$ and in $\mathbf H_\omega$. In this paper we are interested in the cosine between $\mathbf C_\omega$ and $\mathbf D_\omega$ and its connection to \begin{equation}\label{def:MDeltainHr} \mathbf M_{\omega} := \mathbf H_\omega \cap \prod_{i=1}^{r}M \qquad \text{and} \qquad \mathbf \Delta_{\omega} := \prod_{i=1}^{r} B, \end{equation} where $B:=\{x\in\mathcal H \colon \|x\|\leq 1\}$.
For this reason, we introduce the following \emph{configuration constant}: \begin{equation}\label{def:cM1N2inHr} c_\omega(\mathbf M_1, \mathbf M_2) := \sup \left\{ \langle \mathbf x, \mathbf y \rangle_\omega \colon \begin{array}{l} \mathbf x \in \mathbf M_1 \cap (\mathbf M_1 \cap \mathbf M_2)^\perp \cap \mathbf \Delta_\omega,\\ \mathbf y \in \mathbf M_2 \cap (\mathbf M_1 \cap \mathbf M_2)^\perp \cap \mathbf \Delta_\omega \end{array} \right\} \in [0,1]. \end{equation} We proceed with the following proposition. \begin{proposition} \label{prop:inclusion} Let $\omega \in \Omega_r$. We have $\mathbf M_{\omega}^\perp = \mathbf H_\omega \cap \prod_{i=1}^{r}M^\perp$ and $\mathbf D_{\omega} \cap \mathbf M_{\omega}^\perp = \mathbf D_{\omega} \cap (\mathbf C_{\omega} \cap \mathbf D_{\omega})^\perp.$ However, the inclusion \begin{equation} \label{prop:inclusion:eq} \mathbf C_{\omega} \cap \mathbf M_{\omega}^\perp \subset \mathbf C_{\omega} \cap (\mathbf C_{\omega} \cap \mathbf D_{\omega})^\perp \end{equation} is strict if $M\neq \{0\}$ and $M_j \neq M$ for some $j \in \{1,\ldots,r\}$. \end{proposition} \begin{proof} We begin by showing the first equality. It is not difficult to see that, by the definition of $\mathbf M_\omega$, we have $\mathbf M_{\omega}^\perp \supset \mathbf H_\omega \cap \prod_{i=1}^{r}M^\perp$. We now demonstrate the opposite inclusion ``$\subset$''. Indeed, let $\mathbf x = \{x_i\}_{i=1}^r \in \mathbf M_\omega^\perp$. Then, $\langle \mathbf x, \mathbf y \rangle_\omega = 0$ for all $\mathbf y = \{y_i\}_{i=1}^r \in \mathbf M_\omega$, where $y_i \in M$. By choosing $j\in \{1,\ldots,r\}$ and $y_i:=0$ for all $i\neq j$, we obtain $\langle x_j, y_j\rangle = 0$ for all $y_j \in M$, and thus we must have $x_j \in M^\perp$. This argument, when repeated for all $j\in\{1,\ldots,r\}$, shows that $\mathbf x \in \mathbf H_\omega \cap \prod_{i=1}^{r}M^\perp$, as asserted. Next we show the second equality and inclusion \eqref{prop:inclusion:eq}. To this end, observe that, by the definition of $\mathbf C_\omega$ and $\mathbf D_\omega$, we get \begin{equation}\label{pr:inclusion:CD} \mathbf C_{\omega} \cap \mathbf D_{\omega} = \left\{\{x\}_{i=1}^r \colon x \in M \right\}. \end{equation} Consequently, by using the first equality, we see that \begin{align}\label{} \nonumber \mathbf D_\omega \cap \mathbf M_\omega^\perp & = \left\{ \{x\}_{i=1}^r \colon x \in M^\perp \right\} \\ \nonumber & = \left\{ \{x\}_{i=1}^r \colon \langle x, y \rangle = 0 \text{ for all } y \in M \right\} \\ \nonumber & = \left\{ \mathbf x \in \mathbf D_\omega \colon \langle \mathbf x, \mathbf y \rangle_\omega = 0 \text{ for all } \mathbf y \in \mathbf C_{\omega} \cap \mathbf D_{\omega} \right\} \\ & = \mathbf D_\omega \cap (\mathbf C_{\omega} \cap \mathbf D_{\omega})^\perp \end{align} and \begin{align}\label{} \nonumber \mathbf C_{\omega} \cap \mathbf M_{\omega}^\perp & = \left\{\{x_i\}_{i=1}^r \in \mathbf H_\omega \colon x_i \in M_i\cap M^\perp,\ i=1,\ldots,r \right\} \\ \nonumber & \subset \left\{\{x_i\}_{i=1}^r \in \mathbf H_\omega \colon x_i \in M_i,\ i=1,\ldots,r,\ \sum\nolimits_{i=1}^r \omega_i x_i \in M^\perp \right\} \\ & = \mathbf C_{\omega} \cap (\mathbf C_{\omega} \cap \mathbf D_{\omega})^\perp. \end{align} Finally, we show that inclusion \eqref{prop:inclusion:eq} is strict. The assumption $M_j \neq M$ guarantees that the subspace $M_j \cap M^\perp$ is nontrivial. 
Indeed, by the orthogonal decomposition theorem (that is, $I = P_{M} + P_{M^\perp}$), for any $x\in M_j \setminus M$, the point $x_j:= P_{M^\perp}(x) = x - P_{M}(x)$ is nonzero and belongs to $M_j \cap M^\perp$. Let $m\in M$ be nonzero. Define $\mathbf y = \{y_i\}_{i=1}^r$ by $y_j:=\frac 1 {\omega_j} (m+x_j)$, $y_{j+1}:= - \frac 1 {\omega_{j+1}} m$ and $y_i:=0$ for all $i\neq j,j+1$ (when $j=r<\infty$, use any other index in place of $j+1$; note that the assumption $M_j\neq M$ forces $r\geq 2$). Note that $y_j \notin M^\perp$. Thus $\mathbf y \notin \mathbf C_{\omega} \cap \mathbf M_{\omega}^\perp$. On the other hand, $y_i \in M_i$ for all $i=1,\ldots,r,$ and, moreover, for any $m'\in M$, we have \begin{equation} \sum_{i=1}^{r} \omega_i \langle y_i, m'\rangle = \langle (m+x_j) - m, m'\rangle = \langle x_j, m'\rangle = 0, \end{equation} which proves that $\mathbf y \in \mathbf C_{\omega} \cap (\mathbf C_{\omega} \cap \mathbf D_{\omega})^\perp$. \end{proof} In the next proposition we show the connections among the sets $\mathbf C_{\omega}, \mathbf D_{\omega}$ and the averaging operator $A_\omega$. When $r \in \mathbb Z_+$, equality \eqref{prop:Minkowski:M} can be found in \cite[Theorem 4.6 (5)]{Deutsch2001}. \begin{proposition}\label{prop:Minkowski} Let $\omega \in \Omega_r$. We have $\mathbf C_{\omega}^\perp = \mathbf H_\omega \cap \prod_{i=1}^{r} M_i^\perp $ and $\mathbf D_\omega^\perp = \mathcal N (A_\omega)$ -- the null space of $A_\omega$. Consequently, the subspace $\mathbf C_\omega^\perp + \mathbf D_\omega^\perp$ is closed in $\mathbf H_\omega$ if and only if the subspace \begin{equation}\label{prop:Minkowski:Aw} A_\omega (\mathbf C_\omega^\perp) = \begin{cases} \sum_{i=1}^{r} M_i^\perp, & \text{if } r\in \mathbb Z_+,\\ \big\{\sum_{i=1}^{\infty} \omega_i x_i \colon x_i \in M_i^\perp,\ \sum_{i=1}^{\infty}\omega_i\|x_i\|^2<\infty \big\}, & \text{if } r= \infty \end{cases} \end{equation} is closed in $\mathcal H$. Moreover, \begin{equation}\label{prop:Minkowski:M} \overline{A_\omega (\mathbf C_\omega^\perp)} = M^\perp. \end{equation} \end{proposition} \begin{proof} The first equality can be established by using an argument similar to the one presented in the proof of Proposition \ref{prop:inclusion} with $M$ replaced by $M_j$. In order to show the second equality, take $\mathbf x \in \mathbf D_\omega^\perp$. Then, by \eqref{def:Aweak}, for all $\mathbf y = \{y\}_{i=1}^r \in \mathbf D_\omega$, we have $\langle \mathbf x, \mathbf y\rangle_\omega = \langle A_\omega (\mathbf x), y\rangle = 0$. In particular, by taking $y:=A_\omega(\mathbf x)$, we see that $A_\omega(\mathbf x) = 0$, that is, $\mathbf x \in \mathcal N(A_\omega)$. On the other hand, it is easy to see that when $\mathbf x \in \mathcal N(A_\omega)$, then for all $\mathbf y = \{y\}_{i=1}^r \in \mathbf D_\omega$, we have $0 = \langle A_\omega(\mathbf x), y\rangle = \langle \mathbf x, \mathbf y \rangle_\omega$, that is, $\mathbf x \in \mathbf D_\omega^\perp$. This shows the second equality. Recall that $ A_\omega$ is linear and bounded. In view of \cite[section 17H, p. 142]{Holmes1975}, the set $A_\omega(\mathbf C_\omega^\perp)$ is closed in $\mathcal H$ if and only if $\mathbf C_\omega^\perp + \mathcal N(A_\omega)$ is closed in $\mathbf H_\omega$. The equalities in \eqref{prop:Minkowski:Aw} follow from the above discussion. We now focus on \eqref{prop:Minkowski:M}, where we first show that \begin{equation}\label{pr:Minkowski:M} \big(A_\omega (\mathbf C_\omega^\perp)\big)^\perp = M. \end{equation} It is not difficult to see that $M \subset \big(A_\omega (\mathbf C_\omega^\perp)\big)^\perp$.
In order to demonstrate the opposite inclusion ``$\supset$'', take $x \in \big(A_\omega (\mathbf C_\omega^\perp)\big)^\perp$. Consequently, for all $\mathbf y = \{y_i\}_{i=1}^r \in \mathbf C_\omega^\perp$ (hence $y_i \in M_i^\perp$), we have $\langle x, A_\omega(\mathbf y)\rangle =0$. In particular, by choosing $j \in \{1,\ldots,r\}$ and setting $y_i:=0$ for all $i\neq j$, we obtain that $\langle x, y_j\rangle = 0$ for all $y_j \in M_j^\perp$. This, when combined with the fact that $M_j$ is a closed linear subspace, implies that $x \in M_j^{\perp \perp} = M_j$; see, for example, \cite[Theorem 4.5 (8)]{Deutsch2001}. The arbitrariness of $j\in\{1,\ldots,r\}$ yields that $x \in M$. We now return to \eqref{prop:Minkowski:M}. Recall that, when $L$ is a linear subspace of $\mathcal H$ which is not necessarily closed, then $L^\perp = (\overline L)^\perp$; see \cite[Theorem 4.5 (2)]{Deutsch2001}. Consequently, by \eqref{pr:Minkowski:M}, \begin{equation}\label{} M^\perp = \big(A_\omega (\mathbf C_\omega^\perp)\big)^{\perp \perp} = \big(\overline{A_\omega (\mathbf C_\omega^\perp)}\big)^{\perp \perp} = \overline{A_\omega (\mathbf C_\omega^\perp)}, \end{equation} which completes the proof. \end{proof} Note that, similarly to Example \ref{ex:HwAreDifferent}, when $r = \infty$, the sets $A_\omega(\mathbf C_\omega^\perp)$ may be different for different $\omega \in \Omega_\infty$. \begin{example}[$r=\infty$]\label{ex:AwCwAreDifferent} Assume that $\mathcal H$ is separable and let $\{e_i\}_{i=1}^\infty$ be a norm-one Schauder basis of it. Let $M_i^\perp:=\operatorname{span}\{e_i\}$. Using the notation of Example \ref{ex:HwAreDifferent}, consider two spaces, $\mathbf H_{\omega,\beta}$ and $\mathbf H_{\omega,\beta'}$, with \begin{equation}\label{ex:AwCwAreDifferent:alhpaBeta} \varepsilon >0,\quad \alpha >0, \quad \beta:=1+\alpha+\varepsilon \quad \text{and} \quad \alpha':=\alpha+2\varepsilon, \quad \beta':= 1+\alpha'. \end{equation} By using Example \ref{ex:HwAreDifferent}, we see that \begin{equation}\label{ex:AwCwAreDifferent:xx} \mathbf x_\alpha :=\{i^{\alpha/2}e_i\}_{i=1}^\infty \in \mathbf H_{\omega,\beta} \quad \text{and} \quad \mathbf x_{\alpha'} :=\{i^{\alpha'/2}e_i\}_{i=1}^\infty \notin \mathbf H_{\omega,\beta'}. \end{equation} Consequently, $y := \frac{s_{\beta}}{s_{\beta'}} A_{\omega,\beta}(\mathbf x_\alpha) \in A_{\omega,\beta}(\mathbf C_{\omega,\beta}^\perp)$. Note that, by the choice of the $e_i$'s, the representation of \begin{equation}\label{} y = \frac{1}{s_{\beta'}} \sum_{i=1}^{\infty} \frac{1}{i^{\beta'}} \left( i^{[\alpha+2(\beta'-\beta)]/2} e_i\right) = \frac{1}{s_{\beta'}} \sum_{i=1}^{\infty} \frac{1}{i^{\beta'}} \left( i^{\alpha'/2} e_i\right) \end{equation} is unique. Hence, by \eqref{ex:AwCwAreDifferent:xx}, we see that $y \notin A_{\omega,\beta'}(\mathbf C_{\omega,\beta'}^\perp)$. \end{example} \begin{remark}[Notation]\label{rem:notation} To emphasize that we refer to a particular vector $\omega = \{1/r,\ldots,1/r\} \in \Omega_r$ for some $r \in \mathbb Z_+$, we may replace the subscript ``$\omega$'' by the subscript ``$r$'' in all the above-mentioned definitions.
For example, we write \begin{equation} \mathbf H_r,\ \langle \cdot, \cdot \rangle_r,\ \|\cdot\|_r, \quad \mathbf C_r,\ \mathbf D_r,\ \mathbf M_r,\ \mathbf \Delta_r, \quad \cos_r(\cdot, \cdot),\ c_r(\cdot, \cdot), \quad T_r \text{ and } A_r \end{equation} instead of \begin{equation} \mathbf H_\omega,\ \langle \cdot, \cdot \rangle_\omega,\ \|\cdot\|_\omega, \quad \mathbf C_\omega,\ \mathbf D_\omega,\ \mathbf M_\omega,\ \mathbf \Delta_\omega, \quad \cos_\omega(\cdot, \cdot),\ c_\omega(\cdot, \cdot), \quad T_\omega \text{ and } A_\omega, \end{equation} respectively. This coincides with the notation used in the introduction. \end{remark} \section{Alternating Projection Formalization of Pierra} \label{sec:Pierra} In the next two results we bring out the connections among the operators $A_\omega$, $P_M$ and $T_\omega$, and the projections $P_{\mathbf C_\omega}$, $P_{\mathbf D_\omega}$ and $P_{\mathbf C_\omega \cap \mathbf D_\omega}$. \begin{lemma}\label{lem:ProjCD} Let $\omega \in \Omega_r$. For each $\mathbf x := \{x_i\}_{i=1}^r \in \mathbf H_\omega$, we have \begin{equation}\label{lem:ProjCD:C} P_{\mathbf C_{\omega}}(\mathbf x) = \left\{P_{M_i}(x_i) \right\}_{i=1}^r, \end{equation} \begin{equation}\label{lem:ProjCD:D} P_{\mathbf D_\omega}(\mathbf x) = \left\{A_\omega(\mathbf x) \right\}_{i=1}^r \end{equation} and \begin{equation}\label{lem:ProjCD:CD} P_{\mathbf C_{\omega} \cap \mathbf D_\omega}(\mathbf x) = \left\{ P_{M}(A_\omega(\mathbf x)) \right\}_{i=1}^r. \end{equation} \end{lemma} \begin{proof} Recall that for a closed and linear subspace $L$ of $\mathcal H$, we have \begin{equation}\label{pr:ProjCD:PL1} y = P_{L}( x) \quad \Longleftrightarrow \quad y \in L \quad\text{and}\quad \langle x - y, z\rangle = 0 \quad \forall z\in L; \end{equation} see, for example, \cite[Theorem 4.9]{Deutsch2001}. Analogously, for a closed and linear subspace $\mathbf L$ of $\mathbf H_\omega$, we have \begin{equation}\label{pr:ProjCD:PL2} \mathbf y = P_{\mathbf L}(\mathbf x) \quad \Longleftrightarrow \quad \mathbf y \in \mathbf L \quad\text{and}\quad \langle \mathbf x - \mathbf y, \mathbf z\rangle_\omega = 0 \quad \forall \mathbf z\in \mathbf L. \end{equation} We now consider each asserted equality separately. By definition, $\mathbf y:= \{P_{M_i}(x_i)\}_{i=1}^r \in \prod_{i=1}^{r}M_i$. Moreover, using the equality $\|P_{M_i}\|=1$, we see that \begin{equation} \sum_{i=1}^{r}\omega_i \|P_{M_i}(x_i)\|^2 \leq \sum_{i=1}^{r}\omega_i \|x_i\|^2 = \|\mathbf x\|^2_\omega <\infty. \end{equation} This shows that $\mathbf y \in \mathbf H_\omega$ and thus $\mathbf y \in \mathbf C_\omega$. Moreover, by \eqref{pr:ProjCD:PL1} applied to $L := M_i$, $i=1,\ldots,r$, for each $\mathbf z = \{z_i\}_{i=1}^r \in \mathbf C_\omega$, we have \begin{equation}\label{} \langle \mathbf x - \mathbf y, \mathbf z\rangle_{\omega} = \sum_{i=1}^{r}\omega_i \langle x_i - P_{M_i}(x_i), z_i\rangle = 0. \end{equation} By \eqref{pr:ProjCD:PL2}, this shows \eqref{lem:ProjCD:C}. By definition, $\mathbf y:= \{A_\omega(\mathbf x)\}_{i=1}^r \in \mathbf D_\omega$. Moreover, by \eqref{def:Aweak}, for each $\mathbf z = \{z\}_{i=1}^r \in \mathbf D_\omega$, we have \begin{equation}\label{} \langle \mathbf x - \mathbf y, \mathbf z\rangle_{\omega} = \sum_{i=1}^{r}\omega_i \langle x_i - A_\omega(\mathbf x), z\rangle = \left\langle \sum_{i=1}^{r}\omega_i( x_i - A_\omega(\mathbf x)), z \right\rangle = \langle A_\omega(\mathbf x) - A_\omega(\mathbf x), z\rangle = 0. \end{equation} Again, by \eqref{pr:ProjCD:PL2}, this proves \eqref{lem:ProjCD:D}. 
Finally, let now $\mathbf y:= \{P_{M}(A_\omega(\mathbf x))\}_{i=1}^r$. It is clear that, by definition, $\mathbf y \in \mathbf C_\omega \cap \mathbf D_\omega = \{\mathbf x = \{x\}_{i=1}^r \colon x \in M\}.$ By \eqref{def:Aweak} and \eqref{pr:ProjCD:PL1}, for any $\mathbf z = \{z\}_{i=1}^r \in \mathbf C_\omega \cap \mathbf D_\omega$, we have \begin{align}\label{} \nonumber \langle \mathbf x - \mathbf y, \mathbf z\rangle_{\omega} & = \sum_{i=1}^{r} \omega_i \langle x_i - P_{M}(A_\omega(\mathbf x)), z\rangle = \left\langle \sum_{i=1}^{r} \omega_i \Big(x_i - P_{M}(A_\omega(\mathbf x)) \Big), z \right\rangle \\ & = \langle A_\omega(\mathbf x) - P_{M}(A_\omega(\mathbf x)), z\rangle = 0. \end{align} This, in view of \eqref{pr:ProjCD:PL2}, proves the last equality. \end{proof} \begin{theorem}\label{thm:normConvergence} Let $\omega \in \Omega_r$. For each $x \in \mathcal H$ and $\mathbf x := \{x\}_{i=1}^r$, we have \begin{equation}\label{thm:normConvergence:eq} \|T_\omega^k(x) - P_{M}(x)\| =\|(P_{\mathbf D_\omega} P_{\mathbf C_\omega})^k(\mathbf x) - P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf x)\|_\omega \to 0 \text{\quad as } k \to \infty. \end{equation} \end{theorem} \begin{proof} Using \eqref{lem:ProjCD:C}, \eqref{lem:ProjCD:D} and induction with respect to $k$, we see that the equality \begin{equation}\label{} (P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k(\mathbf x) = \{T_\omega^k(x)\}_{i=1}^r \end{equation} holds for all $x \in \mathcal H$ and $\mathbf x:=\{x\}_{i=1}^r \in \mathbf H_\omega$. This, when combined with \eqref{lem:ProjCD:CD}, leads to \begin{equation}\label{} \|T_\omega^k(x)-P_M(x)\| = \|\{T_\omega^k(x)\}_{i=1}^r - \{P_M(x)\}_{i=1}^r\|_\omega = \|(P_{\mathbf D_\omega} P_{\mathbf C_\omega})^k(\mathbf x) - P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf x)\|_\omega, \end{equation} which proves the equality in \eqref{thm:normConvergence:eq}. The norm convergence of the alternating projection method follows from Theorem \ref{int:th:norm} applied to $M_1:=\mathbf C_\omega$ and $M_2:=\mathbf D_\omega$ in $\mathbf H_\omega$. \end{proof} \begin{theorem}[Exact Norm Value]\label{thm:norm} Let $\omega \in \Omega_r$. For each $k=1,2,\ldots,$ we have \begin{align}\label{thm:norm:eq} \|T_\omega^k - P_{M}\| = \|(P_{\mathbf D_{\omega}}P_{\mathbf C_{\omega}} P_{\mathbf D_{\omega}})^k - P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}\|_\omega = \cos_{\omega}(\mathbf C_{\omega}, \mathbf D_{\omega})^{2k} \leq 1. \end{align} \end{theorem} \begin{proof} The proof follows the argument in \cite[Theorem 7]{ReichZalas2017}. We give it here for the convenience of the reader. Recall that the operator $T_\omega$ is self-adjoint, $\|T_\omega\|\leq 1$ and $\fix T_\omega = M$ (compare with Section \ref{sec:preliminaries}). Moreover, for each $i=1,\ldots,r$, the projection $P_{M_i}$ commutes with $P_M$, that is, \begin{equation}\label{pr:norm:PM_PMi} P_{M} P_{M_i} = P_{M_i}P_{M} = P_{M}; \end{equation} see \cite[Lemma 9.2]{Deutsch2001}. Consequently, the operator $T_\omega$ commutes with $P_M$ too and we have \begin{equation}\label{} P_M T_\omega = T_\omega P_M = P_M. \end{equation} By using \cite[Lemma 6]{ReichZalas2017}, we get $\|T_\omega^k-P_M\| = \|T_\omega-P_M\|^k$. On the other hand, the operator $\mathbf T:=P_{\mathbf D_{\omega}}P_{\mathbf C_{\omega}} P_{\mathbf D_{\omega}}$ is also self-adjoint, $\|\mathbf T\|_\omega\leq 1$ and $\fix \mathbf T = \mathbf C_{\omega} \cap \mathbf D_{\omega}$. 
Note that similarly to \eqref{pr:norm:PM_PMi}, the projections $P_{\mathbf C_{\omega}}$ and $P_{\mathbf D_{\omega}}$ commute with $P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}$, where \begin{equation}\label{pr:norm:PCD_PC_PD} P_{\mathbf C_\omega \cap \mathbf D_\omega} = P_{\mathbf C_\omega \cap \mathbf D_\omega} P_{\mathbf C_\omega} = P_{\mathbf C_\omega} P_{\mathbf C_\omega \cap \mathbf D_\omega} = P_{\mathbf C_\omega \cap \mathbf D_\omega} P_{\mathbf D_\omega} = P_{\mathbf D_\omega} P_{\mathbf C_\omega \cap \mathbf D_\omega}. \end{equation} This leads to \begin{equation}\label{pr:norm:PM_T} P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}} \mathbf T = \mathbf T P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}} = P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}. \end{equation} Again, by using \cite[Lemma 6]{ReichZalas2017}, but this time in $\mathbf H_\omega$, we obtain $\|\mathbf T^k - P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}\|_\omega = \|\mathbf T - P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}\|_\omega^k$. In order to complete the proof, it suffices to show the equalities of \eqref{thm:norm:eq} only for $k=1$. By the properties of the adjoint operation ``$*$'' and by Theorem \ref{int:th:APM}, we obtain \begin{align}\label{pr:norm:1} \nonumber \|P_{\mathbf D_\omega}P_{\mathbf C_\omega}P_{\mathbf D_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega}\|_{ \omega} &=\|P_{\mathbf D_\omega}P_{\mathbf C_\omega}P_{\mathbf C_\omega}P_{\mathbf D_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega}\|_{ \omega} \\ \nonumber &=\|(P_{\mathbf D_\omega}P_{\mathbf C_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega}) (P_{\mathbf C_\omega}P_{\mathbf D_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega})\|_{ \omega} \\ \nonumber &=\|(P_{\mathbf D_\omega}P_{\mathbf C_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega}) (P_{\mathbf D_\omega}P_{\mathbf C_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega})^*\|_{ \omega} \\ \nonumber &=\|P_{\mathbf D_\omega}P_{\mathbf C_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega}\|^2_{ \omega} \\ &= \cos_\omega(\mathbf C_\omega, \mathbf D_\omega)^2. \end{align} Let $\mathbf B_\omega:= \{\mathbf x \colon \|\mathbf x\|_\omega \leq 1\}$. Since $P_{\mathbf D_\omega}(\mathbf B_\omega)= \mathbf D_\omega \cap \mathbf B_\omega$, we see that \begin{align}\label{pr:norm:2} \nonumber \|P_{\mathbf D_\omega}P_{\mathbf C_\omega}P_{\mathbf D_\omega}-P_{\mathbf C_\omega\cap\mathbf D_\omega}\|_{ \omega} & =\|P_{\mathbf D_\omega}P_{\mathbf C_\omega}P_{\mathbf D_\omega} - P_{\mathbf C_\omega\cap\mathbf D_\omega} P_{\mathbf D_\omega}\|_{ \omega} \\ \nonumber & =\sup\left\{ \|P_{\mathbf D_\omega}P_{\mathbf C_\omega}P_{\mathbf D_\omega}(\mathbf x)-P_{\mathbf C_\omega\cap\mathbf D_\omega}P_{\mathbf D_\omega}(\mathbf x)\|_{ \omega} \colon \mathbf x \in \mathbf B_\omega \right\} \\ \nonumber & = \sup\left\{ \|P_{\mathbf D_\omega}P_{\mathbf C_\omega}(\mathbf y)-P_{\mathbf C_\omega\cap\mathbf D_\omega}(\mathbf y)\|_{ \omega} \colon \mathbf y \in P_{\mathbf D_\omega}(\mathbf B_\omega) \right\}\\ \nonumber & = \sup\left\{ \|P_{\mathbf D_\omega}P_{\mathbf C_\omega}(\mathbf y)-P_{\mathbf C_\omega\cap\mathbf D_\omega}(\mathbf y)\|_{ \omega} \colon \mathbf y \in \mathbf D_\omega \cap \mathbf B_\omega \right\}\\ \nonumber & = \sup\left\{ \|T_\omega(y)-P_M(y)\| \colon y \in \mathcal H \text{ and } \|y\|\leq 1 \right\}\\ & = \|T_\omega-P_M\|. \end{align} This completes the proof. 
\end{proof} Note that equality \eqref{thm:norm:eq} becomes useful only when $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega) <1$, in which case it turns into the optimal error bound. We return to this inequality in Theorems \ref{thm:cosCDequal1} and \ref{thm:equiv} below. \section[Properties of the Cosine]{Properties of the Cosine $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega)$ \label{sec:cosCD}} In the next two lemmata, we show that the set $(\mathbf C_\omega \cap \mathbf D_\omega)^\perp$ can be replaced by its subset $\mathbf M_\omega^\perp$ in the definitions of $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega)$ and $c_\omega(\mathbf C_\omega, \mathbf D_\omega)$, even though, by Proposition \ref{prop:inclusion}, the corresponding inclusion \eqref{prop:inclusion:eq} may be strict. \begin{lemma} \label{lem:cosCD1} Let $\omega \in \Omega_r$. The cosine of the Friedrichs angle between $\mathbf C_\omega$ and $\mathbf D_\omega$ satisfies: \begin{align} \nonumber \label{lem:cosCD1:sup} \cos_\omega(\mathbf C_\omega, \mathbf D_\omega) & = \sup \left\{ \langle \mathbf x, \mathbf y \rangle_\omega \colon \begin{array}{l} \mathbf x \in \mathbf C_\omega \cap \mathbf M_\omega^\perp \cap \mathbf B_\omega,\\ \mathbf y \in \mathbf D_\omega \cap \mathbf M_\omega^\perp \cap \mathbf B_\omega \end{array} \right\}\\ & = \sup \left\{ \sum_{i=1}^{r} \omega_i \langle x_i, y\rangle \colon \begin{array}{l} x_i \in M_i \cap M^\perp,\ i=1,\ldots,r,\ \sum_{i=1}^{r}\omega_i\| x_i\|^2 \leq 1,\\ y \in M^\perp,\ \|y\| \leq 1 \end{array} \right\}. \end{align} \end{lemma} \begin{proof} Denote the right-hand side of \eqref{lem:cosCD1:sup} by $\alpha$ and observe that, by the inclusion $\mathbf C_{\omega} \cap \mathbf M_{\omega}^\perp \subset \mathbf C_{\omega} \cap (\mathbf C_{\omega} \cap \mathbf D_{\omega})^\perp$, we have $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega) \geq \alpha$. We now show that $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega) \leq \alpha$. Note first that, analogously to \eqref{pr:norm:PM_PMi}, for each $i=1,\ldots,r$, we have \begin{equation}\label{pr:cosCD1:PM_PMi:perp} P_{M^\perp} P_{M_i} = P_{M_i}P_{M^\perp} = P_{M_i\cap M^\perp}. \end{equation} Indeed, when $x \in M_i$, by \eqref{pr:norm:PM_PMi} and using the orthogonal decomposition theorem, we see that \begin{equation}\label{} P_{M^\perp} (x) = P_{M^\perp} P_{M_i}(x) = P_{M_i}(x) - P_{M} P_{M_i}(x) = P_{M_i}(x) - P_{M}(x) \in M_i, \end{equation} that is, $P_{M^\perp}(M_i)\subset M_i$. Hence we may again apply \cite[Lemma 9.2]{Deutsch2001} to obtain \eqref{pr:cosCD1:PM_PMi:perp}. Let $\mathbf x = \{x_i\}_{i=1}^r \in \mathbf H_\omega$ be such that $x_i \in M_i$, $i=1,\ldots,r$, and let $y\in M^\perp$. Using \eqref{pr:cosCD1:PM_PMi:perp}, we arrive at \begin{equation}\label{pr:cosCD1:reduction} \langle x_i, y\rangle = \langle P_{M}(x_i)+P_{M^\perp}(x_i), y\rangle = \langle P_{M^\perp}P_{M_i}(x_i), y\rangle = \langle P_{M_i \cap M^\perp}(x_i), y\rangle. \end{equation} Furthermore, by \eqref{pr:norm:PM_PMi} and \eqref{pr:cosCD1:PM_PMi:perp}, we obtain \begin{equation}\label{} \|x_i\|^2 = \|P_{M_i\cap M}(x_i)\|^2 + \|P_{M_i\cap M^\perp}(x_i)\|^2.
\end{equation} Therefore, \begin{align}\label{pr:cosCD1:cosCD}\nonumber \cos_\omega(\mathbf C_\omega, \mathbf D_\omega) & = \sup\left\{ \sum_{i=1}^r \omega_i \langle x_i, y\rangle \colon \begin{array}{l} x_i \in M_i,\ i=1,\ldots,r,\ \sum_{i=1}^{r}\omega_i x_i \in M^\perp,\\ \sum_{i=1}^{r}\omega_i\| x_i\|^2 \leq 1,\ y \in M^\perp,\ \|y\| \leq 1 \end{array} \right\}\\ \nonumber & \leq \sup\left\{ \sum_{i=1}^r \omega_i \langle x_i, y\rangle \colon \begin{array}{l} x_i \in M_i,\ i=1,\ldots,r,\ \sum_{i=1}^{r}\omega_i\| x_i\|^2 \leq 1,\\ y \in M^\perp,\ \|y\| \leq 1 \end{array} \right\}\\ \nonumber & \leq \sup\left\{ \sum_{i=1}^r \omega_i \langle x_i, y\rangle \colon \begin{array}{l} x_i \in M_i,\ i=1,\ldots,r,\ \sum_{i=1}^r \omega_i \|P_{M_i\cap M^\perp}(x_i)\|^2\leq 1,\\ y \in M^\perp,\ \|y\| \leq 1 \end{array} \right\}\\ \nonumber & = \sup\left\{ \sum_{i=1}^r \omega_i \langle z_i, y\rangle \colon \begin{array}{l} z_i \in M_i\cap M^\perp,\ i=1,\ldots,r,\ \sum_{i=1}^{r}\omega_i\| z_i\|^2 \leq 1,\\ y \in M^\perp,\ \|y\| \leq 1 \end{array} \right\}\\ & = \sup \left\{ \langle \mathbf z, \mathbf y \rangle_\omega \colon \begin{array}{l} \mathbf z \in \mathbf C_\omega \cap \mathbf M_\omega^\perp,\ \|\mathbf z\|_\omega\leq 1,\\ \mathbf y \in \mathbf D_\omega \cap \mathbf M_\omega^\perp,\ \|\mathbf y\|_\omega\leq 1 \end{array} \right\} = \alpha, \end{align} where, in the first two inequalities, we take the supremum over a larger set. The equality in the fourth line holds since for every $\mathbf x = \{x_i\}_{i=1}^r \in \mathbf H_\omega$ such that $x_i\in M_i$ and $\sum_{i=1}^r \omega_i \|P_{M_i\cap M^\perp}(x_i)\|^2\leq 1$, there is at least one $\mathbf z = \{z_i\}_{i=1}^r \in \mathbf H_\omega$ with $z_i\in M_i\cap M^\perp$ and $\|\mathbf z\|_\omega^2\leq 1$ for which the equality $\sum_{i=1}^r \omega_i \langle x_i, y\rangle = \sum_{i=1}^r \omega_i \langle z_i, y\rangle$ holds for all $y\in M^\perp$. For example, by \eqref{pr:cosCD1:reduction}, one can take $z_i:=P_{M_i\cap M^\perp}(x_i)$. This shows that $\cos_\omega(\mathbf C_\omega, \mathbf D_\omega) = \alpha$. \end{proof} \begin{lemma} \label{lem:cCD1} Let $\omega \in \Omega_r$. The configuration constant between $\mathbf C_\omega$ and $\mathbf D_\omega$ satisfies: \begin{align} \nonumber \label{lem:cCD1:sup} c_\omega(\mathbf C_\omega, \mathbf D_\omega) & = \sup \left\{ \langle \mathbf x, \mathbf y \rangle_\omega \colon \begin{array}{l} \mathbf x \in \mathbf C_\omega \cap \mathbf M_\omega^\perp \cap \mathbf \Delta_\omega,\\ \mathbf y \in \mathbf D_\omega \cap \mathbf M_\omega^\perp \cap \mathbf \Delta_\omega \end{array} \right\}\\ & = \sup \left\{ \sum_{i=1}^{r} \omega_i \langle x_i, y\rangle \colon \begin{array}{l} x_i \in M_i \cap M^\perp,\ \| x_i\| \leq 1,\ i=1,\ldots,r\\ y \in M^\perp,\ \|y\| \leq 1 \end{array} \right\}. \end{align} \end{lemma} \begin{proof} The argument is similar to the one presented in the proof of Lemma \ref{lem:cosCD1}, where one should write ``$\sup_{i=1,\ldots,r}\|\cdot\|\leq 1$'' instead of ``$\sum_{i=1}^{r}\omega_i\|\cdot\|^2\leq 1$'' in \eqref{pr:cosCD1:cosCD}. We leave the details to the reader. \end{proof} \begin{lemma} \label{lem:cosCD2} Let $\omega \in \Omega_r$. The following estimates hold: \begin{equation}\label{lem:cosCD2:ineq} \cos_\omega(\mathbf C_\omega, \mathbf D_\omega)^2 \leq c_\omega(\mathbf C_\omega, \mathbf D_\omega) \leq \cos_\omega(\mathbf C_\omega, \mathbf D_\omega).
\end{equation} \end{lemma} \begin{proof} By Theorem \ref{thm:norm}, we obtain \begin{equation}\label{} \cos_\omega(\mathbf C_\omega, \mathbf D_\omega)^2 = \|T_\omega - P_M\| = \sup \left\{\left\| \sum_{i=1}^{r} \omega_i(P_{M_i}(x) - P_M(x)) \right\| \colon x\in M^\perp,\ \|x\|\leq 1\right\}. \end{equation} Note that, by \eqref{pr:cosCD1:PM_PMi:perp}, for all $x\in M^\perp$, we have \begin{equation} P_{M_i}(x) - P_M(x) = P_{M_i}(x) = P_{M_i}P_{M^\perp}(x) = P_{M_i\cap M^\perp}(x) \in M_i\cap M^\perp \end{equation} and $\|P_{M_i\cap M^\perp}(x)\| \leq \|x\| \leq 1$. Consequently, \begin{equation}\label{pr:cosCD2} \cos_\omega(\mathbf C_\omega, \mathbf D_\omega)^2 \leq \sup \left\{\left\|\sum_{i=1}^{r}\omega_i x_i \right\| \colon x_i\in M_i \cap M^\perp,\ i=1,\ldots,r,\ \|x_i\|\leq 1\right\}. \end{equation} By the Riesz representation theorem, \eqref{def:Aweak}, the assumption that $x_i\in M_i \cap M^\perp$ and the fact that $\sum_{i=1}^r \omega_i x_i \in M^\perp$, we get \begin{equation}\label{} \left\|\sum_{i=1}^{r}\omega_i x_i \right\| = \sup \left\{ \sum_{i=1}^{r}\omega_i \langle x_i, y\rangle \colon y\in M^\perp, \|y\|\leq 1 \right\} \leq c_\omega(\mathbf C_\omega, \mathbf D_\omega). \end{equation} This, when combined with \eqref{pr:cosCD2}, proves the first inequality in \eqref{lem:cosCD2:ineq}. The second inequality in \eqref{lem:cosCD2:ineq} is trivial. \end{proof} \begin{remark}\label{rem:cosCD3} Let $\omega \in \Omega_r$ and assume that $J:=\{j \colon M_j \neq M\} \neq \emptyset$. Note that $M_j\cap M^\perp \neq \{0\}$ for all $j \in J$ and $M_j\cap M^\perp = \{0\}$ whenever $j\notin J$; compare with the proof of Proposition \ref{prop:inclusion}. It is not difficult to show that \begin{align}\label{rem:cosCD3:a1} \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) & = \sup \left\{ \sum_{j\in J} \omega_j \langle x_j, y\rangle \colon \begin{array}{l} x_j \in M_j \cap M^\perp,\ j \in J,\ \sum_{j\in J}\omega_j \| x_j\|^2 = 1,\\ y \in M^\perp,\ \|y\| = 1 \end{array} \right\} \\ \label{rem:cosCD3:a2} & = \sup \left\{ \left\|\sum_{j\in J}\omega_j x_j \right\| \colon x_j \in M_j \cap M^\perp,\ j \in J,\ \sum_{j\in J}\omega_j \|x_j\|^2 = 1\right\}\\ \label{rem:cosCD3:a3} & = \sup \left\{ \frac{\| \sum_{j\in J} \omega_j x_j\|}{\sqrt{ \sum_{j\in J} \omega_j \|x_j\|^{2}}} \colon x_j \in M_j\cap M^\perp,\ j \in J,\ 0 \neq \sum_{j \in J} \omega_j \|x_j\|^2 < \infty \right\} \end{align} and \begin{align} \label{rem:cosCD3:b1} c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) & = \sup \left\{ \sum_{j\in J} \omega_j \langle x_j, y\rangle \colon \begin{array}{l} x_j \in M_j \cap M^\perp,\ \| x_j\| = 1,\ j \in J, \\ y \in M^\perp,\ \|y\| = 1 \end{array} \right\} \\ \label{rem:cosCD3:b2} & = \sup \left \{ \left\|\sum_{j \in J}\omega_j x_j \right\| \colon x_j \in M_j\cap M^\perp,\ \|x_j\| = 1,\ j \in J \right\}. \end{align} \end{remark} \begin{theorem}[Reduction to Unique Subspaces]\label{th:reduction} Let $q\in \mathbb Z_+ \cup\{\infty\}$ be such that $q \leq r$, let $L_j$ be nontrivial, closed and linear subspaces of $\mathcal H$, $j=1,\ldots,q$, and let $L$ be their intersection. Moreover, let $\{I_j\}_{j=1}^q$ consist of nonempty, pairwise disjoint subsets of $\{1,\ldots,r\}$, possibly infinite, such that $\bigcup_{j=1}^q I_j = \{1,\ldots,r\}$, and assume that $M_i = L_j$ for all $i\in I_j$. 
Then, \begin{equation}\label{th:reduction:eq} \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \cos_{\lambda}(\mathbf E_\lambda, \mathbf D_\lambda) \quad \text{and} \quad c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = c_{\lambda}(\mathbf E_\lambda, \mathbf D_\lambda), \end{equation} where $\lambda := \{\lambda_j\}_{j=1}^q \in \Omega_q$ with $\lambda_j:= \sum_{i\in I_j} \omega_i$ and where (compare with \eqref{def:CDinHr}) $\mathbf E_\lambda := \mathbf H_\lambda \cap \prod_{j=1}^{q}L_j$. \end{theorem} \begin{proof} We demonstrate only the first equality in \eqref{th:reduction:eq} by using Lemma \ref{lem:cosCD1}. A similar argument, when combined with Lemma \ref{lem:cCD1}, can be used to establish the second equality in \eqref{th:reduction:eq}. In order to show the inequality $\cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) \leq \cos_{\lambda}(\mathbf E_\lambda, \mathbf D_\lambda)$, for each pair of points \begin{equation}\label{pr:reduction:xy} \mathbf x = \{x_i\}_{i=1}^r \in \mathbf C_\omega \cap \mathbf M_\omega^\perp \cap \mathbf B_\omega \quad \text{and} \quad \mathbf y = \{y\}_{i=1}^r \in \mathbf D_\omega \cap \mathbf M_\omega^\perp \cap \mathbf B_\omega \end{equation} in $\mathbf H_\omega$ we define another pair \begin{equation}\label{pr:reduction:uv} \mathbf u = \{u_j\}_{j=1}^q \in \mathbf E_\lambda \cap \mathbf L_\lambda^\perp \cap \mathbf B_\lambda \quad \text{and} \quad \mathbf v = \{v\}_{j=1}^q \in \mathbf D_\lambda \cap \mathbf L_\lambda^\perp \cap \mathbf B_\lambda \end{equation} in $\mathbf H_\lambda$ which satisfies \begin{equation}\label{pr:reduction:xyuv} \langle \mathbf x, \mathbf y \rangle_\omega = \langle \mathbf u, \mathbf v \rangle_\lambda, \end{equation} where analogously to \eqref{def:MDeltainHr}, $\mathbf L_\lambda := \mathbf H_\lambda \cap \prod_{j=1}^{q}L$. To this end, for each $j\in \{1,\ldots,q\}$, define $u_j:=\frac 1 {\lambda_j}\sum_{i\in I_j}\omega_i x_i$. Notice that $u_j$ is well defined and $u_j \in L_j$. Moreover, we have $\sum_{i=1}^{r}\omega_i x_i = \sum_{j=1}^{q}\lambda_j u_j$, where for $r = \infty$ we use Lemma \ref{lem:rearrangement}. By the convexity of $\|\cdot\|^2$, \begin{equation}\label{} \sum_{j=1}^{q}\lambda_j \|u_j\|^2 = \sum_{j=1}^{q}\lambda_j \left\|\sum_{ i\in I_j } \frac{\omega_i}{\lambda_j}x_i \right\|^2 \leq \sum_{j=1}^{q}\lambda_j \sum_{i\in I_j} \frac{\omega_i}{\lambda_j} \|x_i\|^2 = \|\mathbf x\|_\omega^2 <\infty, \end{equation} that is, $\mathbf u \in \mathbf H_\lambda$. On the other hand, since $L=M$, we can define $v:=y$. It is not difficult to see that with the above-defined $\mathbf u$ and $\mathbf v$, equality \eqref{pr:reduction:xyuv} holds. In order to prove the opposite inequality $\cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) \geq \cos_{\lambda}(\mathbf E_\lambda, \mathbf D_\lambda)$, this time for each pair in \eqref{pr:reduction:uv} we define the corresponding pair in \eqref{pr:reduction:xy}, for which again equality \eqref{pr:reduction:xyuv} holds. It suffices to take $x_i:=u_j$ for all $i \in I_j$ and $y:=v$. Indeed, by assumption, $x_i\in M_i$. Moreover, $\sum_{i\in I_j}\omega_i x_i = \lambda_j u_j$, hence $\sum_{i=1}^{r}\omega_i x_i = \sum_{j=1}^{q}\lambda_j u_j$. Furthermore, \begin{equation}\label{} \sum_{i=1}^{r} \omega_i \|x_i\|^2 = \sum_{j=1}^{q} \lambda_j \|u_j\|^2 = \|\mathbf u\|_\lambda^2 <\infty, \end{equation} which shows that $\mathbf x \in \mathbf H_\omega$. Clearly, equality \eqref{pr:reduction:xyuv} holds for the pair $(\mathbf x, \mathbf y)$ defined above. This completes the proof. \end{proof}
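For finite $r$ and finite-dimensional $\mathcal H$, the quantities appearing in Theorems \ref{thm:norm} and \ref{th:reduction} can be computed explicitly, which makes the reduction easy to test numerically. In the following minimal Python sketch (an illustration only; the subspaces, weights and seed are arbitrary choices), the cosine is computed via the isometry $\mathbf x \mapsto \{\sqrt{\omega_i}\,x_i\}_{i=1}^r$ of $\mathbf H_\omega$ onto $\mathbb R^{nr}$, under which $\mathbf C_\omega$ corresponds to the block-diagonal subspace with blocks $M_i$ and $\mathbf D_\omega$ has the orthonormal basis \texttt{S}; the subspaces are chosen so that $M=\{0\}$, hence $\mathbf C_\omega\cap\mathbf D_\omega=\{\mathbf 0\}$ and, by Theorem \ref{int:th:APM} applied in $\mathbf H_\omega$ with $k=1$, $\cos_\omega(\mathbf C_\omega,\mathbf D_\omega)=\|P_{\mathbf D_\omega}P_{\mathbf C_\omega}\|_\omega$.
\begin{verbatim}
# Product-space computation of cos_w(C_w, D_w) for finite r (illustration).
import numpy as np

def cos_w(bases, w):
    n, r = bases[0].shape[0], len(w)
    C = np.zeros((n * r, sum(Q.shape[1] for Q in bases)))
    col = 0
    for i, Q in enumerate(bases):                # block-diagonal basis of C_w
        C[i * n:(i + 1) * n, col:col + Q.shape[1]] = Q
        col += Q.shape[1]
    S = np.vstack([np.sqrt(wi) * np.eye(n) for wi in w])  # basis of D_w
    PC, PD = C @ C.T, S @ S.T
    return np.linalg.norm(PD @ PC, 2)            # = cos_w, since C_w cap D_w = {0}

rng = np.random.default_rng(2)
Q1, _ = np.linalg.qr(rng.standard_normal((6, 2)))
Q2, _ = np.linalg.qr(rng.standard_normal((6, 2)))
c = cos_w([Q1, Q2], [0.5, 0.5])
T = 0.5 * Q1 @ Q1.T + 0.5 * Q2 @ Q2.T
print(np.linalg.norm(T, 2), c ** 2)              # Theorem thm:norm (P_M = 0)
print(cos_w([Q1, Q1, Q2], [0.2, 0.3, 0.5]), c)   # Theorem th:reduction
\end{verbatim}
The first printed pair illustrates Theorem \ref{thm:norm}, while the second one illustrates Theorem \ref{th:reduction} with $M_1=M_2=L_1$, $M_3=L_2$ and merged weights $\lambda=\{1/2,1/2\}$.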
\begin{theorem}[Approximation, $r=\infty$] \label{th:approximation} Let $\omega \in \Omega_\infty$. Then \begin{equation}\label{th:approximation:eq} \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \lim_{q\to\infty} \cos_{\lambda_q}(\mathbf C_{\lambda_q}, \mathbf D_{\lambda_q}) \quad\text{and}\quad c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \lim_{q\to\infty} c_{\lambda_q}(\mathbf C_{\lambda_q}, \mathbf D_{\lambda_q}), \end{equation} where $\lambda_q := \left\{\frac{\omega_1}{s_q},\ldots,\frac{\omega_q}{s_q} \right\} \in \Omega_q$ with $s_q := \sum_{i=1}^{q} \omega_i$ and $q = 2,3,\ldots$. \end{theorem} \begin{proof} We only show the first equality in \eqref{th:approximation:eq}. A similar argument can be employed to prove the second equality. Note that for all $q=2,3,\ldots,$ we have $s_q < s_{q+1}<1$, $N_q^\perp \subset N_{q+1}^\perp \subset M^\perp,$ where $N_q:=\bigcap_{i=1}^q M_i$ and \begin{equation}\label{} \mathbf C_{\lambda_q} \cap \mathbf M_{\lambda_q}^\perp \cap \mathbf B_{\lambda_q} = \left\{ \{x_i\}_{i=1}^{q} \colon \begin{array}{l} x_i \in M_i \cap N_q^\perp,\ i=1,\ldots,q,\\ \sum_{i=1}^{q}\omega_i \|x_i\|^2 \leq s_{q} \end{array} \right\}. \end{equation} Consequently, if $\{x_1,\ldots,x_q\} \in \mathbf C_{\lambda_q} \cap \mathbf M_{\lambda_q}^\perp \cap \mathbf B_{\lambda_q}$, then $\{x_1,\ldots,x_q, 0\} \in \mathbf C_{\lambda_{q+1}} \cap \mathbf M_{\lambda_{q+1}}^\perp \cap \mathbf B_{\lambda_{q+1}}$ and analogously $\{x_1,\ldots,x_q, 0, 0, \ldots\} \in \mathbf C_\omega \cap \mathbf M_\omega^\perp \cap \mathbf B_\omega$. Hence, by Lemma \ref{lem:cosCD1}, \begin{equation}\label{} s_q \cos_{\lambda_q}(\mathbf C_{\lambda_q}, \mathbf D_{\lambda_q}) \leq s_{q+1} \cos_{\lambda_{q+1}}(\mathbf C_{\lambda_{q+1}}, \mathbf D_{\lambda_{q+1}}) \leq \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) \end{equation} and thus the sequence $\{s_q \cos_{\lambda_q}(\mathbf C_{\lambda_q}, \mathbf D_{\lambda_q})\}_{q=2}^\infty$ is monotone and bounded, and therefore converges to some number $\alpha$. Moreover, $\alpha \leq \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega)$. In order to show the opposite inequality $\alpha \geq \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega)$, we first demonstrate that for each pair \begin{equation}\label{pr:approximation:xy} \mathbf x = \{x_i\}_{i=1}^\infty \in \mathbf C_\omega \cap \mathbf M_\omega^\perp \cap \mathbf B_\omega \quad\text{and}\quad \mathbf y = \{y\}_{i=1}^\infty \in \mathbf D_\omega \cap \mathbf M_\omega^\perp \cap \mathbf B_\omega \end{equation} in $\mathbf H_\omega$ and for each $\varepsilon >0$, we can find another pair \begin{equation}\label{pr:approximation:xqyq} \mathbf x_q = \{x_{q,i}\}_{i=1}^q \in \mathbf C_{\lambda_q} \cap \mathbf M_{\lambda_q}^\perp \cap \mathbf B_{\lambda_q} \quad\text{and}\quad \mathbf y_q = \{y_q\}_{i=1}^q \in \mathbf D_{\lambda_q} \cap \mathbf M_{\lambda_q}^\perp \cap \mathbf B_{\lambda_q} \end{equation} in $\mathbf H_{\lambda_q}$ such that \begin{equation}\label{pr:approximation:xyANDxqyq} \langle \mathbf x, \mathbf y\rangle_\omega \leq s_q \langle \mathbf x_q, \mathbf y_q\rangle_{\lambda_q} + \varepsilon. \end{equation} To this end, choose $n$ so that the tail satisfies $\sum_{i=n+1}^{\infty} \omega_i \langle x_i, y\rangle < \frac \varepsilon 2$. For each $q > n$, define \begin{equation}\label{} x_{q,i}:= \begin{cases} \sqrt{s_q} \cdot P_{M_i \cap N_q^\perp}(x_i), & \mbox{if } i=1,\ldots,n \\ 0, & \mbox{otherwise} \end{cases} \quad\text{and}\quad y_q := \sqrt{s_q} \cdot P_{N_q^\perp}(y).
\end{equation} Note that $\mathbf x_q$ and $\mathbf y_q$ indeed satisfy \eqref{pr:approximation:xqyq} as $\|\mathbf x_q\|_{\lambda_q}\leq 1$ and $\|\mathbf y_q\|_{\lambda_q} \leq 1$. On the other hand, the decreasing sequence of sets $\{N_q\}_{q=2}^\infty$ converges to $M$ in the sense of Mosco; see \cite[Definition 1.2 and Lemma 3.1]{Mosco1969}. This, when combined with \cite[Theorem 3.2]{Tsukada1984}, implies that for all $x \in \mathcal H$, we get $P_{N_q}(x) \to P_M(x)$ as $q\to \infty$ and further, $ P_{N_q^\perp} (x) = x - P_{N_q}(x) \to x - P_M(x) = P_{M^\perp}(x)$ as $q\to \infty$. In this connection, see also \cite[Proposition 7]{IsraelMaximianoReich1983} and \cite[Lemma 4.2]{BauschkeBorwein1996}. In particular, using the assumptions that $x_i \in M_i\cap M^\perp$, $y\in M^\perp$ and the equality $P_{M_i\cap N_q^\perp} = P_{N_q^\perp}P_{M_i}$ (compare with \eqref{pr:cosCD1:PM_PMi:perp}), we obtain \begin{equation}\label{} x_{q,i} \to P_{M^\perp}(x_i) = x_i \quad\text{and}\quad y_q \to P_{M^\perp}(y) = y \end{equation} as $q \to \infty$. Consequently, for all large enough $q$ and for all $i=1,\ldots,n$, we obtain the inequality $\langle x_i, y\rangle \leq \langle x_{q,i}, y_q\rangle + \frac{\varepsilon}{2n}$, which leads to \begin{equation}\label{} \langle \mathbf x, \mathbf y\rangle_\omega \leq \sum_{i=1}^{n}\omega_i\langle x_i, y \rangle + \frac \varepsilon 2 \leq \sum_{i=1}^{n}\omega_i\langle x_{q,i}, y_q \rangle + \varepsilon = s_q \langle \mathbf x_q, \mathbf y_q\rangle_{\lambda_q} + \varepsilon. \end{equation} This shows \eqref{pr:approximation:xyANDxqyq}, as claimed. We are now ready to return to the inequality $\alpha \geq \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega)$. Indeed, by \eqref{pr:approximation:xyANDxqyq}, and by the monotonicity of the sequence $\{s_q \cos_{\lambda_q}(\mathbf C_{\lambda_q}, \mathbf D_{\lambda_q})\}_{q=2}^\infty$, we have \begin{equation}\label{} \langle \mathbf x, \mathbf y\rangle_\omega \leq s_q \langle \mathbf x_q, \mathbf y_q\rangle_{\lambda_q} + \varepsilon \leq s_q \cos_{\lambda_q}(\mathbf C_{\lambda_q}, \mathbf D_{\lambda_q}) + \varepsilon \leq \alpha + \varepsilon. \end{equation} By taking the supremum over all $\mathbf x$ and $\mathbf y$ satisfying \eqref{pr:approximation:xy}, we obtain that $\cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) \leq \alpha + \varepsilon$, which proves the asserted inequality and hence completes the proof. \end{proof}
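Theorem \ref{th:approximation} can be observed numerically by computing the finite-section cosines via Theorem \ref{thm:norm}. In the following minimal Python sketch (illustrative data only: lines in $\mathbb R^3$ and truncated geometric weights), generically $N_q=\{0\}$ for every $q\geq 2$, so that $\cos_{\lambda_q}(\mathbf C_{\lambda_q}, \mathbf D_{\lambda_q})=\sqrt{\|T_{\lambda_q}\|}$; the printed values stabilize as $q$ grows.
\begin{verbatim}
# Finite-section approximation of cos_w(C_w, D_w) for r = infinity.
import numpy as np

rng = np.random.default_rng(4)
v = rng.standard_normal((40, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)  # unit vectors spanning lines M_i
w = 0.5 ** np.arange(1, 41)                    # w_i = 2^{-i}, truncated at i = 40

for q in (2, 5, 10, 20, 40):
    lam = w[:q] / w[:q].sum()                  # the weights lambda_q
    T = sum(li * np.outer(vi, vi) for li, vi in zip(lam, v[:q]))
    print(q, np.sqrt(np.linalg.norm(T, 2)))    # cos_{lambda_q}, via Theorem thm:norm
\end{verbatim}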
\begin{proposition}\label{prop:orthogonal} Let $\omega \in \Omega_r$ and assume that the subspaces $M_1,\ldots,M_r$ are nontrivial and pairwise orthogonal. Then \begin{equation}\label{prop:orthogonal:eq} c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \sqrt{\sum_{i=1}^{r}\omega_i^2} \qquad \text{and} \qquad \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \sqrt{\sup_{i=1,\ldots,r}\omega_i}. \end{equation} \end{proposition} \begin{proof} Observe first that we must have $J:=\{j\colon M_j \neq M\} = \{1,\ldots,r\}$. Otherwise, there would be a pair $i$ and $j$ for which $M_i = M \subset M_j$ and $M_i \perp M_j$, which is possible only when $M_i=\{0\}$, a contradiction. Moreover, $\sup_i \omega_i = \omega_j$ for some $j \in \{1,\ldots,r\}$, even when $r = \infty$. By the assumed pairwise orthogonality, for all $\{x_i\}_{i=1}^r$ with $x_i\in M_i\cap M^\perp$ and $\sum_{i=1}^{r}\omega_i \|x_i\|^2 = 1$, we have \begin{equation}\label{pr:orthogonal:eq} \left\|\sum_{i=1}^{r}\omega_i x_i \right\|^2 = \sum_{i=1}^{r}\omega_i^2 \|x_i\|^2 \leq \omega_j \sum_{i=1}^{r}\omega_i \|x_i\|^2 = \omega_j. \end{equation} Thus, by \eqref{rem:cosCD3:a2}, $\cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) \leq \sqrt{\omega_j}$. On the other hand, when $\|x_j\| = \frac 1 {\sqrt{\omega_j}}$, the assumption $\sum_{i=1}^{r}\omega_i \|x_i\|^2 = 1$ implies that $\|x_i\| = 0$ for all $i\neq j$. Hence, in this case, $\|\sum_{i=1}^{r}\omega_i x_i\|^2 = \omega_j$ and therefore $\cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \sqrt{\omega_j}$. In view of \eqref{rem:cosCD3:b2} and \eqref{pr:orthogonal:eq}, it is not difficult to see that $c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \sqrt{\sum_{i=1}^{r}\omega_i^2}$. \end{proof} In the next example, we show that the equality or inequality between $c_{\omega}(\mathbf C_\omega, \mathbf D_\omega)$ and $\cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega)$ may depend on the weights $\omega \in \Omega_r$. \begin{example}\label{ex:orthogonal} Let $q\in \mathbb Z_+$ be such that $q\leq r$ and let $L_1,\ldots,L_q$ be a tuple of nontrivial, closed and linear subspaces of $\mathcal H$, which are pairwise orthogonal. Assume that the list $M_1, \ldots, M_r $ consists only of subspaces from $L_1,\ldots, L_q$ and let $\lambda \in \Omega_q$ be defined as in Theorem \ref{th:reduction}. Then, by Theorem \ref{th:reduction} and Proposition \ref{prop:orthogonal}, we have $c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \sqrt{\sum_{j=1}^{q}\lambda_j^2}$ and $\cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \sqrt{\sup_{j=1,\ldots,q}\lambda_j}$. We consider two cases: \begin{enumerate}[(a)] \item If $\lambda_j = 1/q$ for all $j\in\{1,\ldots,q\}$, then $c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega) = \sqrt{1/q}$. \item If $\lambda_i=\max_{j=1,\ldots,q}\lambda_j > 1/q$, then $\sum_{j=1}^{q}\lambda_j^2 < \lambda_i \sum_{j=1}^{q}\lambda_j = \lambda_i$ and consequently, $c_{\omega}(\mathbf C_\omega, \mathbf D_\omega) < \cos_{\omega}(\mathbf C_\omega, \mathbf D_\omega)$. \end{enumerate} \end{example}
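Proposition \ref{prop:orthogonal} is also easy to observe numerically through Theorem \ref{thm:norm}. A minimal Python sketch (illustrative data: three pairwise orthogonal coordinate planes in $\mathbb R^6$, so that $M=\{0\}$ and $P_M=0$, with arbitrarily chosen weights):
\begin{verbatim}
# For pairwise orthogonal subspaces, ||T_w - P_M|| = cos^2 = max_i w_i.
import numpy as np

E = np.eye(6)
planes = [E[:, 0:2], E[:, 2:4], E[:, 4:6]]  # orthonormal bases of M_1, M_2, M_3
w = [0.5, 0.3, 0.2]                         # weights summing to one
T = sum(wi * Q @ Q.T for wi, Q in zip(w, planes))
print(np.linalg.norm(T, 2))                 # 0.5 = max_i w_i
\end{verbatim}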
Otherwise, by \eqref{rem:cosCD3:b1} and by the Cauchy-Schwarz inequality (applied to each summand), we would arrive at the following contradiction: $ 1 = c_\lambda(\mathbf C_\lambda, \mathbf D_\lambda) \leq \sum_{j\in J} \lambda_j <1 $. Consequently, by the definition of the supremum in \eqref{rem:cosCD3:b1}, for each $k=1,2,\ldots$, there are \begin{equation}\label{pr:cosCDequal1:xk} \mathbf x_k = \{x_{k,i}\}_{i=1}^r, \qquad x_{k,i}\in M_i\cap M^\perp, \qquad \|x_{k,i}\| = 1 \end{equation} and \begin{equation}\label{pr:cosCDequal1:yk} \mathbf y_k = \{y_k\}_{i=1}^r, \qquad y_k\in M^\perp, \qquad \|y_k\| = 1, \end{equation} which satisfy \begin{equation}\label{pr:cosCDequal1:supLambda} 1-\frac 1 k \leq \langle \mathbf x_k, \mathbf y_k \rangle_\lambda = \sum_{i=1}^{r}\lambda_i \langle x_{k,i}, y_k \rangle \leq 1. \end{equation} Without any loss of generality, we may assume that $\langle x_{k,i}, y_k\rangle \geq 0$ for all $i=1,\ldots,r$ and all $k=1,2,\ldots$. Indeed, if $\langle x_{k,i}, y_k\rangle < 0$ for some $k$ and $i$ then, by replacing ``$x_{k,i}$'' by ``$-x_{k,i}$'', we can only increase the number $\langle \mathbf x_k, \mathbf y_k \rangle_\lambda$ in \eqref{pr:cosCDequal1:supLambda}. Obviously, the above-defined sequence of pairs $\{\mathbf x_k, \mathbf y_k\}_{k=1}^\infty$ satisfies \eqref{pr:cosCDequal1:toShowA} for all $\omega \in \Omega_r$ and \eqref{pr:cosCDequal1:toShowB} for $\omega = \lambda$. What remains to be shown is that $\{\mathbf x_k, \mathbf y_k\}_{k=1}^\infty$ satisfies \eqref{pr:cosCDequal1:toShowB} for all $\omega \in \Omega_r$. Before doing so, we investigate the properties of $\langle x_{k,i}, y_k \rangle$ in more detail. Note that by the Cauchy-Schwarz inequality, we have $\langle x_{k,i}, y_k\rangle \leq 1$. We now show that for each $i=1,\ldots,r$, we have \begin{equation}\label{pr:cosCDequal1:toShowC} \lim_{k\to\infty}\langle x_{k,i}, y_k \rangle = 1. \end{equation} Suppose to the contrary that \begin{equation} \liminf_{k\to\infty}\langle x_{k,j}, y_k \rangle = \lim_{n\to\infty} \langle x_{k_n,j}, y_{k_n} \rangle = 1 - \varepsilon \end{equation} for some $j$ and some $\varepsilon \in (0,1]$, where $\{k_n\}_{n=1}^\infty$ is a subsequence of $\{k\}_{k=1}^\infty$. By taking $n$ large enough, we may assume that $\langle x_{k_n,j}, y_{k_n} \rangle \leq 1 - \frac \varepsilon 2$. This, when combined with \eqref{pr:cosCDequal1:supLambda}, leads to \begin{equation}\label{} 1-\frac 1 {k_n} \leq \lambda_j \langle x_{k_n,j}, y_{k_n} \rangle + \sum_{i\neq j} \lambda_i \langle x_{k_n,i}, y_{k_n} \rangle \leq \lambda_j \left(1 - \frac \varepsilon 2\right) + \sum_{i\neq j} \lambda_i = 1-\lambda_j \frac \varepsilon 2 < 1, \end{equation} which is a contradiction since the left-hand side converges to one as $n\to\infty$. We are now ready to show that the above-defined sequence of pairs $\{\mathbf x_k, \mathbf y_k\}_{k=1}^\infty$ satisfies \eqref{pr:cosCDequal1:toShowB} for all $\omega \in \Omega_r$. Indeed, let $\omega \in \Omega_r$. Moreover, let $\varepsilon \in (0,1)$ and let $n \in \{1,\ldots,r\}$ be an integer such that $\sum_{i=1}^{n}\omega_i \geq \sqrt{1-\varepsilon}$. Obviously, when $r\in \mathbb Z_+$, we can take $n:=r$. By \eqref{pr:cosCDequal1:toShowC}, we may assume that $\langle x_{k,i}, y_k \rangle \geq \sqrt{1-\varepsilon}$ for all $i=1,2,\ldots,n$ and all large enough $k \geq K_n$.
Thus, for all $k\geq K_n$, we arrive at \begin{equation}\label{} 1 \geq \langle \mathbf x_k, \mathbf y_k \rangle_\omega \geq \sum_{i=1}^{n}\omega_i \langle x_{k,i}, y_k \rangle \geq \sum_{i=1}^{n}\omega_i \sqrt{1-\varepsilon} \geq 1-\varepsilon, \end{equation} which shows that $\langle \mathbf x_k, \mathbf y_k \rangle_\omega \to 1$ as $k\to \infty$. This proves \eqref{pr:cosCDequal1:toShowB} and completes the proof of the theorem. \end{proof} \begin{remark}[Erratum to \cite{ReichZalas2017}]\label{rem:erratum} As we have already observed in Proposition \ref{prop:inclusion}, $\mathbf C_{\omega} \cap \mathbf M_{\omega}^\perp$ may be a proper subset of $ \mathbf C_{\omega} \cap (\mathbf C_{\omega} \cap \mathbf D_{\omega})^\perp$, that is, equality need not hold. Consequently, the argument used in the proof of \cite[Theorem 8]{ReichZalas2017} preceding \cite[equality (19)]{ReichZalas2017} was incorrect. However, Lemma \ref{lem:cosCD1} justifies the validity of \cite[equality (19)]{ReichZalas2017} because \begin{equation}\label{} \cos_\omega(\mathbf C_\omega, \mathbf D_\omega) = \sup \left\{ \frac{\langle \mathbf x, \mathbf y \rangle_\omega}{\|\mathbf x\|_\omega \|\mathbf y\|_\omega} \colon \begin{array}{l} \mathbf x \in \mathbf C_\omega \cap \mathbf M_\omega^\perp,\ \mathbf x \neq \mathbf 0\\ \mathbf y \in \mathbf D_\omega \cap \mathbf M_\omega^\perp,\ \mathbf y \neq \mathbf 0 \end{array} \right\}. \end{equation} \end{remark} \section{Asymptotic Properties of the Simultaneous Projection Method} \label{sec:AsymptoticProp} In this section we oftentimes refer to the subspace $A_\omega(\mathbf C_\omega^\perp)$, the explicit form of which is given in Proposition \ref{prop:Minkowski}. We begin with a theorem which corresponds to equivalence \eqref{int:equivalence}. \begin{theorem} \label{thm:equiv} Let $\omega \in \Omega_r$. The following conditions are equivalent: \begin{multicols}{2} \begin{enumerate}[(i)] \item $\|T_\omega - P_{M}\|<1$; \item $\cos_{\omega}(\mathbf C_{\omega}, \mathbf D_{\omega})<1$; \item The set $A_\omega(\mathbf C_\omega^\perp)$ is closed in $\mathcal H$; \item $\|P_{\mathbf D_{\omega}} P_{\mathbf C_{\omega}} - P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}\|_{\omega} <1 $; \item $\{\mathbf C_{\omega}, \mathbf D_{\omega}\}$ is linearly regular; \item $\mathbf C_{\omega}^\perp + \mathbf D_{\omega}^\perp$ is closed in $\mathbf H_\omega$. \end{enumerate} \end{multicols} \end{theorem} \begin{proof} By \eqref{int:equivalence} applied to $\mathbf C_\omega$ and $\mathbf D_\omega$, we have the equivalence between (ii) and (vi). Similarly, by \cite[Theorem 5.19]{BauschkeBorwein1996} applied to $\mathbf C_\omega$ and $\mathbf D_\omega$, we obtain the equivalence between (v) and (vi). Recall that $\{\mathbf C_{\omega}, \mathbf D_{\omega}\}$ is said to be linearly regular if the inequality $\max\{d(\mathbf x,\mathbf C_{\omega}), d(\mathbf x, \mathbf D_{\omega})\} \leq \kappa d(\mathbf x, \mathbf C_\omega \cap \mathbf D_\omega)$ holds for all $\mathbf x \in \mathbf H_\omega$ and some $\kappa >0$. Proposition \ref{prop:Minkowski} verifies the equivalence between (iii) and (vi). Finally, by Theorems \ref{thm:norm} and \ref{int:th:APM}, we have \begin{equation}\label{} \|T_\omega - P_M\| = \cos_\omega (\mathbf C_\omega, \mathbf D_\omega)^2 = \|P_{\mathbf D_{\omega}}P_{\mathbf C_{\omega}} - P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}\|_\omega, \end{equation} which explains the equivalence between (i), (ii) and (iv).
\end{proof} \begin{theorem}[Dichotomy]\label{th:dichotomy} Exactly one of the following two statements holds: \begin{enumerate}[(i)] \item For all $\omega\in \Omega_r$, the set $A_\omega(\mathbf C_\omega^\perp)$ is closed in $\mathcal H$. Then the sequence $\{T^k_{\omega}\}_{k=1}^\infty$ converges linearly to $P_M$ as $k\to \infty$ and the optimal error bound is given by \begin{equation}\label{th:dichotomy:estimate} \|T_\omega^k(x) - P_{M}(x)\| \leq \cos_{\omega}(\mathbf C_{\omega}, \mathbf D_{\omega})^{2k} \cdot \|x\|. \end{equation} \item For all $\omega \in \Omega_r$, the set $A_\omega(\mathbf C_\omega^\perp)$ is not closed in $\mathcal H$. Then the sequence $\{T^k_{\omega}\}_{k=1}^\infty$ converges arbitrarily slowly to $P_{M}$ as $k\to \infty$. \end{enumerate} \end{theorem} \begin{proof} By combining Theorems \ref{thm:cosCDequal1} and \ref{thm:equiv}, we see that either $A_\omega(\mathbf C_\omega^\perp)$ is closed for all $\omega \in \Omega_r$ or $A_\omega(\mathbf C_\omega^\perp)$ is not closed for all $\omega \in \Omega_r$. This shows the dichotomy between (i) and (ii). If we assume as in (i) that $A_\omega(\mathbf C_\omega^\perp)$ is closed, then both the linear convergence and the optimality of the estimate \eqref{th:dichotomy:estimate} follow from Theorems \ref{thm:norm} and \ref{thm:equiv}. Assume now that $A_\omega(\mathbf C_\omega^\perp)$ is not closed, where $\omega \in \Omega_r$. We show that the sequence $\{T_\omega^k\}_{k=1}^\infty$ converges arbitrarily slowly to $P_{M}$ as $k\to \infty$. To this end, let $\{a_k\}_{k=1}^\infty \subset [0,\infty)$ be a null sequence and let $\{b_k\}_{k=1}^\infty$ be defined by $b_1 := a_1,\ b_2 := a_1$ and $b_k := a_{k-1},\ k \geq 3$. By Theorem \ref{thm:equiv}, we see that $\mathbf C_{\omega}^\perp + \mathbf D_{\omega}^\perp$ is not closed in $\mathbf H_\omega$. This, when combined with Theorem \ref{int:th:dichotomy}, implies that the sequence $\{(P_{\mathbf D_{\omega}}P_{\mathbf C_{\omega}})^k\}_{k=1}^\infty$ converges to $P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}}$ arbitrarily slowly as $k \to \infty$. In particular, there is $\mathbf x \in \mathbf H_\omega$, such that \begin{equation}\label{pr:dichotomy:b_k} \|(P_{\mathbf D_{\omega}}P_{\mathbf C_{\omega}})^k(\mathbf x) - P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}} (\mathbf x)\| \geq b_k \end{equation} for all $k=1,2,\ldots$. Note that $\mathbf y := P_{\mathbf D_{\omega}}P_{\mathbf C_{\omega}} (\mathbf x) \in \mathbf D_\omega$, hence $\mathbf y = \{y\}_{i=1}^r$ for some $y \in \mathcal H$. Moreover, $P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}} (\mathbf x) = P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}} (\mathbf y)$ (compare with \eqref{pr:norm:PCD_PC_PD}). Consequently, by rewriting \eqref{pr:dichotomy:b_k} in terms of $\mathbf y$ and $a_k$, and by Theorem \ref{thm:normConvergence}, we arrive at \begin{equation} \|T_\omega^k(y)-P_M(y) \| = \|(P_{\mathbf D_{\omega}}P_{\mathbf C_{\omega}})^k(\mathbf y) - P_{\mathbf C_{\omega} \cap \mathbf D_{\omega}} (\mathbf y)\| \geq a_k \end{equation} for all $k=1,2,\ldots$. This shows that the sequence $\{T_\omega^k\}_{k=1}^\infty$ converges arbitrarily slowly to $P_{M}$ as $k\to \infty$, as asserted. \end{proof} \begin{theorem}[Super-polynomial Rate]\label{th:superPolyRate} Let $\omega \in \Omega_r$ and assume that the set $A_\omega(\mathbf C_\omega^\perp)$ is not closed in $\mathcal H$. 
Then the sequence $\{T^k_{\omega}\}_{k=1}^\infty$ converges super-polynomially fast to $P_{M}$, as $k\to \infty$, on some dense linear subspace $Y_\omega \subset \mathcal H$. \end{theorem} \begin{proof} The argument follows the proof of \cite[Theorem 14]{ReichZalas2017}. In view of Theorem \ref{thm:equiv}, the subspace $\mathbf C_\omega^\perp + \mathbf D_\omega^\perp$ is not closed. By Theorem \ref{int:th:superPoly} applied to $\mathbf C_\omega$ and $\mathbf D_\omega$, there is a dense linear subspace $\mathbf X_\omega$ of $\mathbf H_\omega$ on which the sequence $\{(P_{\mathbf C_\omega}P_{\mathbf D_\omega})^k\}_{k=1}^\infty$ converges super-polynomially fast to $P_{\mathbf C_\omega \cap \mathbf D_\omega}$. Define \begin{equation}\label{} \mathbf Y_\omega:=P_{\mathbf D_\omega}(\mathbf X_\omega) \quad \text{and} \quad Y_\omega := \{y \in \mathcal H \colon \mathbf y = \{y\}_{i=1}^r \in \mathbf Y_\omega\}. \end{equation} Note that the linearity of $P_{\mathbf D_\omega}$ implies that $\mathbf Y_\omega$ and $Y_\omega$ are both linear subspaces of $\mathbf H_\omega$ and $\mathcal H$, respectively. Let $y \in Y_\omega$ and $\mathbf x \in \mathbf X_\omega$ be such that $\mathbf y = \{y\}_{i=1}^r = P_{\mathbf D_\omega}(\mathbf x)$. Then, by Lemma \ref{lem:ProjCD}, \eqref{pr:norm:PCD_PC_PD} and by the nonexpansivity of $P_{\mathbf D_\omega}$, for each $n=1,2,\ldots,$ we have \begin{align}\nonumber k^n \|T_\omega^k(y) - P_M(y)\| & = k^n \|(P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k(\mathbf y) - P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf y) \|_\omega \\ \nonumber & = k^n \|P_{\mathbf D_\omega}(P_{\mathbf C_\omega}P_{\mathbf D_\omega})^k(\mathbf x) - P_{\mathbf D_\omega}P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf x) \|_\omega \\ & \leq k^n \|(P_{\mathbf C_\omega}P_{\mathbf D_\omega})^k(\mathbf x) - P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf x) \|_\omega \to 0 \end{align} as $k \to \infty.$ This shows that the sequence $\{T^k_{\omega}\}_{k=1}^\infty$ converges super-polynomially fast to $P_{M}$ on $Y_\omega$. We now show that $Y_\omega$ is a dense linear subspace of $\mathcal H$. Indeed, let $x\in \mathcal H$ and let $\mathbf x := \{x\}_{i=1}^r \in \mathbf D_\omega$. Since $\mathbf X_\omega$ is dense in $\mathbf H_\omega$, there is $\{\mathbf x_k\}_{k=1}^\infty$ in $\mathbf X_\omega$ such that $\mathbf x_k \to \mathbf x$. Let $\mathbf y_k:= P_{\mathbf D_\omega}(\mathbf x_k)$. Since $\mathbf y_k \in \mathbf Y_\omega$, there is $y_k\in Y_\omega$ such that $\mathbf y_k= \{y_k\}_{i=1}^r$. Again, by Lemma \ref{lem:ProjCD} and by the nonexpansivity of $P_{\mathbf D_\omega}$, we arrive at \begin{equation}\label{} \|y_k -x\| = \|\mathbf y_k - \mathbf x\|_\omega = \|P_{\mathbf D_\omega}(\mathbf x_k) - P_{\mathbf D_\omega}(\mathbf x) \|_\omega \leq \|\mathbf x_k - \mathbf x\|_\omega \to 0 \end{equation} as $k\to \infty$. This completes the proof. \end{proof} \begin{theorem}[Polynomial Rate]\label{th:polyRate} Let $\omega \in \Omega_r$. Assume that $y \in A_\omega(\mathbf C_\omega^\perp)$. Then there is $C_\omega(y)>0$ such that for all $k=1,2,\ldots$, we have \begin{equation}\label{th:polyRate:eq} \|T_\omega^k(y) - P_{M}(y)\| \leq \frac{C_\omega(y)}{\sqrt{k}}.
\end{equation} \end{theorem} \begin{proof} We first show that in spite of the possible inequality $\mathbf C_\omega \cap \mathbf D_\omega \neq \{\mathbf 0\}$ (see Theorem \ref{int:th:poly}), for all $\mathbf x = \{x_i\}_{i=1}^r \in \mathbf C_\omega^\perp + \mathbf D_\omega^\perp$ there is $C_\omega(\mathbf x)>0$ such that \begin{equation}\label{pr:polyRate:step1} \|(P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k(\mathbf x) - P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf x)\|_\omega \leq \frac{C_\omega(\mathbf x)}{\sqrt{k}} \end{equation} for all $k=1,2,\ldots$. To this end, let $\mathbf M_1 := \mathbf C_\omega \cap (\mathbf C_\omega \cap \mathbf D_\omega)^\perp$ and $\mathbf M_2 := \mathbf D_\omega \cap (\mathbf C_\omega \cap \mathbf D_\omega)^\perp$. Recall again that the projections $P_{\mathbf C_\omega}$ and $P_{\mathbf D_\omega}$ commute with $P_{\mathbf C_\omega \cap \mathbf D_\omega}$; see \eqref{pr:norm:PCD_PC_PD}. Similarly to \eqref{pr:cosCD1:PM_PMi:perp}, they commute with $P_{(\mathbf C_\omega \cap \mathbf D_\omega)^\perp}$, that is, \begin{equation}\label{} P_{\mathbf M_1} = P_{(\mathbf C_\omega \cap \mathbf D_\omega)^\perp} P_{\mathbf C_\omega} = P_{\mathbf C_\omega} P_{(\mathbf C_\omega \cap \mathbf D_\omega)^\perp}, \end{equation} \begin{equation} P_{\mathbf M_2} = P_{(\mathbf C_\omega \cap \mathbf D_\omega)^\perp} P_{\mathbf D_\omega} = P_{\mathbf D_\omega} P_{(\mathbf C_\omega \cap \mathbf D_\omega)^\perp}. \end{equation} Using their linearity and the above-mentioned commuting properties, we obtain \begin{align}\label{} \nonumber (P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k - P_{\mathbf C_\omega \cap \mathbf D_\omega} & = (P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k - (P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k P_{\mathbf C_\omega \cap \mathbf D_\omega}\\ \nonumber & = (P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k P_{(\mathbf C_\omega \cap \mathbf D_\omega)^\perp}\\ \nonumber & = (P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k (P_{(\mathbf C_\omega \cap \mathbf D_\omega)^\perp})^{2k}\\ & = (P_{\mathbf M_2} P_{\mathbf M_1})^k. \end{align} We may now apply Theorem \ref{int:th:poly} to $\mathbf M_1$ and $\mathbf M_2$ because $\mathbf M_1 \cap \mathbf M_2 = \{\mathbf 0\}$. Thus, for every $\mathbf x \in \mathbf M_1^\perp + \mathbf M_2^\perp$ there is $C_\omega(\mathbf x)>0$ such that \begin{equation}\label{pr:polyRate:ineq} \|(P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k(\mathbf x) - P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf x)\|_\omega = \|(P_{\mathbf M_2} P_{\mathbf M_1})^k(\mathbf x)\|_\omega \leq \frac{C_\omega(\mathbf x)}{\sqrt{k}}. \end{equation} Note that, by \cite[Theorem 4.6 (5)]{Deutsch2001} (or by \eqref{prop:Minkowski:M} with $r = 2$), we obtain \begin{equation}\label{} \mathbf C_\omega^\perp \subset \overline{\mathbf C_\omega^\perp + (\mathbf C_\omega \cap \mathbf D_\omega)} = \mathbf M_1^\perp \quad \text{and} \quad \mathbf D_\omega^\perp \subset \overline{\mathbf D_\omega^\perp + (\mathbf C_\omega \cap \mathbf D_\omega)} = \mathbf M_2^\perp. \end{equation} Consequently, $\mathbf C_\omega^\perp + \mathbf D_\omega^\perp \subset \mathbf M_1^\perp + \mathbf M_2^\perp$, which, when combined with \eqref{pr:polyRate:ineq}, proves \eqref{pr:polyRate:step1}. We may now return to \eqref{th:polyRate:eq}. Let $y \in A_\omega(\mathbf C_\omega^\perp)$ and let $\mathbf y = \{y\}_{i=1}^r$. 
Then $y = A_\omega(\mathbf x)$ for some $\mathbf x \in \mathbf C_\omega^\perp$ and, by \eqref{lem:ProjCD:D}, $\mathbf y = P_{\mathbf D_\omega}(\mathbf x) = \mathbf x - P_{\mathbf D_\omega^\perp}(\mathbf x) \in \mathbf C_\omega^\perp + \mathbf D_\omega^\perp$. Consequently, by \eqref{pr:polyRate:step1}, there is $C_\omega(y):=C_\omega(\mathbf y)>0$ such that \begin{equation}\label{} \|T_\omega^k(y) - P_{M}(y)\| = \|(P_{\mathbf D_\omega}P_{\mathbf C_\omega})^k(\mathbf y) - P_{\mathbf C_\omega \cap \mathbf D_\omega}(\mathbf y)\|_\omega \leq \frac{C_\omega(y)}{\sqrt{k}} \end{equation} for all $k=1,2,\ldots$, where the equality follows from \eqref{thm:normConvergence:eq}. \end{proof} \section{Appendix} \label{sec:Appendix} It is well known that if a series $\sum_{i=1}^{\infty} y_i$ in $\mathcal H$ is \emph{absolutely convergent}, that is, when $\sum_{i=1}^{\infty}\|y_i\| <\infty$, then it is also \emph{unconditionally convergent}, that is, $\sum_{i=1}^{\infty} y_{\sigma(i)}$ exists for all bijections $\sigma$ of $\mathbb Z_+$ and equals $\sum_{i=1}^{\infty} y_i$. At this point recall that the unconditionally convergent series coincide with the absolutely convergent series if and only if the space $\mathcal H$ is of finite dimension; see \cite{DvoretzkyRogers1950}. We slightly strengthen the unconditional convergence in Lemma \ref{lem:rearrangement} below. To this end, for an absolutely convergent series $\sum_{i=1}^{\infty}y_i$ and for any subset $I = \{i_1, i_2,\ldots\}$ of $\mathbb Z_+$, we formally define $\sum_{i\in I} y_i := \sum_{l=1}^{\# I} y_{i_l}$. A result similar to Lemma \ref{lem:rearrangement} can be found, for example, in \cite[Theorem 6.3.1]{KrizPultr2013} for $\mathcal H=\mathbb R$. \begin{lemma}[Rearrangement Lemma]\label{lem:rearrangement} Let $q \in \mathbb Z_+ \cup \{\infty\}$ and let $\{I_j\}_{j=1}^q$ consist of nonempty and pairwise disjoint subsets of $\mathbb Z_+$, possibly infinite, such that $\mathbb Z_+ = \bigcup_{j=1}^q I_j$. Assume that the series $\sum_{i=1}^{\infty} y_i$ is absolutely convergent. Then \begin{equation}\label{lem:rearrangement:eq} \sum_{i=1}^{\infty} y_i = \sum_{j=1}^q \left( \sum_{i \in I_j} y_i \right), \end{equation} where the summation over $j$, as well as the summations over $i\in I_j$, do not depend on the order of summands. \end{lemma} \begin{proof} Note that the absolute convergence of the series $\sum_{i=1}^{\infty} y_i$, when combined with the triangle inequality, leads to \begin{equation}\label{} \sum_{j=1}^q \left\|\sum_{i\in I_j} y_i \right\| \leq \sum_{j=1}^{q}\left(\sum_{i\in I_j}\|y_i\|\right) = \sum_{i=1}^{\infty}\|y_i\| < \infty, \end{equation} where the equality holds by \cite[Theorem 6.3.1]{KrizPultr2013}. Consequently, the series $\sum_{i\in I_j} y_i$ converges absolutely, hence unconditionally, to some $z_j \in \mathcal H$, $j=1,\ldots,q$. Furthermore, the series $\sum_{j=1}^{q}z_j$ converges absolutely, hence unconditionally. Let now $I=\{i_1,i_2,\ldots\}$, $J=\{j_1,j_2,\ldots\}$ and $K = I \cup J = \{k_1, k_2, \ldots\}$ be countably infinite and increasingly ordered subsets of $\mathbb Z_+$ such that $I\cap J = \emptyset$. We claim that \begin{equation}\label{pr:rearrangement:2sets} \sum_{k\in K} y_k = \sum_{i\in I} y_i + \sum_{j \in J} y_j.
\end{equation} To see this, first define \begin{equation}\label{} [n]:=\min\{m \colon \{i_1,\ldots,i_n\} \cup \{j_1,\ldots,j_n\} \subset \{k_1,\ldots,k_m\}\} \end{equation} and \begin{equation}\label{} M_n := \{k_1,\ldots,k_{[n]}\} \setminus \big(\{i_1,\ldots,i_n\} \cup \{j_1,\ldots,j_n\}\big). \end{equation} Observe that $\min M_n \geq n$ whenever the set $M_n \neq \emptyset$. Since all the three series in \eqref{pr:rearrangement:2sets} converge, we have \begin{equation} \left\| \sum_{k\in K} y_k - \sum_{i\in I} y_i - \sum_{j \in J} y_j\right\| = \lim_{n\to\infty} \left\| \sum_{l=1}^{[n]} y_{k_l} - \sum_{l=1}^{n}y_{i_l} - \sum_{l=1}^{n}y_{j_l}\right\| \leq \lim_{n\to\infty} \sum_{i=n}^{\infty} \|y_i\| = 0. \end{equation} Obviously, formula \eqref{pr:rearrangement:2sets} holds when either one, or both, of $I$ and $J$ are finite. By induction, equality \eqref{pr:rearrangement:2sets} carries over to any finite number of sets. In particular, this proves \eqref{lem:rearrangement:eq} for all finite $q \in \mathbb Z_+$. We now show that \eqref{lem:rearrangement:eq} also holds for $q = \infty$. Indeed, redefine \begin{equation}\label{} [n]:= \min \{m \colon \{1,\ldots,n\} \subset I_1 \cup \ldots \cup I_m\} \end{equation} and \begin{equation}\label{} M_n := \mathbb Z_+ \setminus K_n, \quad \text{where} \quad K_n:=I_1\cup \ldots \cup I_{[n]}. \end{equation} Note here that since $q=\infty$, we get $M_n \neq \emptyset$ and thus $\min M_n \geq n$. Consequently, by \eqref{pr:rearrangement:2sets} applied to a finite number of sets, first to $K = K_n$ and then to $K = \mathbb Z_+$, we obtain \begin{equation}\label{} \left\| \sum_{i=1}^{\infty} y_i - \sum_{j=1}^{[n]} z_j\right\| = \left\| \sum_{i=1}^{\infty} y_i - \sum_{k\in K_n} y_k\right\| = \left\| \sum_{i \in \mathbb Z_+\setminus K_{n}} y_i\right\| \leq \sum_{i=n}^{\infty}\|y_i\| \to 0 \end{equation} as $n\to \infty$. \end{proof} \textbf{Acknowledgements.} We are grateful to two anonymous referees for all their comments and remarks which helped us improve our manuscript. This work was partially supported by the Israel Science Foundation (Grants 389/12 and 820/17), the Fund for the Promotion of Research at the Technion and by the Technion General Research Fund.
\section{Introduction} The LIGO/Virgo Collaboration announced recently the observation of a merger of a black hole with mass $23.2^{+1.1}_{-1.0} {\rm M}_\odot$ with a compact object with mass $2.59^{+0.08}_{-0.09}{\rm M}_\odot$ \cite{2020ApJ...896L..44A}. The mass of the secondary component lies within the so-called low mass gap \cite{Bailyn_1998,_zel_2010,Belczynski_2012}. Theoretical and observational evidence suggests that black holes of mass less than $\sim 5{\rm M}_\odot$ may not be produced by stellar evolution \cite{Bailyn_1998,_zel_2010,Farr_2011}. On the other hand, while some candidate equations of state of neutron stars allow for a neutron star maximum mass $M_{\rm max} \sim 3{\rm M}_\odot$ \cite{1996NuPhA.606..508M,PhysRevLett.32.324,_zel_2012,Kiziltan_2013,Alsing_2018}, the relatively small tidal deformabilities measured in gravitational-wave signal GW170817 do not favor such large values of $M_{\rm max}$ but rather suggest it is of the order of $2.5{\rm M}_\odot$ \cite{PhysRevLett.121.161101,2019PhRvX...9a1001A,2020ApJ...896L..44A}. The heaviest neutron star observed to date has a mass of $2.01\pm 0.04{\rm M}_\odot$ \cite{Antoniadis_2013}, while there was a recent claim that PSR J0740+6620 may host a $2.14^{+0.10}_{-0.09} {\rm M}_\odot$ neutron star \cite{Cromartie_2019}. Therefore the existence and nature of compact objects in the mass regime $\sim {[2.5,5]} {\rm M}_\odot$ are highly uncertain. These theoretical and observational uncertainties regarding the maximum mass of neutron stars and lower mass of black holes render it challenging to conclude with certainty about the nature of the secondary GW190814 component. Several analyses seem to favor a stellar black hole \cite{2020ApJ...896L..44A,2020arXiv200703799F,2020arXiv200706057T,2020arXiv200709683S}. Other proposals include a primordial black hole \cite{2020arXiv200615675V,lehmann2020modelindependent,2020arXiv200706481C,2020arXiv200703565J}, a fast pulsar \cite{2020arXiv200702513Z,2020arXiv200614601M,2020arXiv200705526T}, a heavy neutron star with stiff equation of state \cite{2020arXiv200616296T}, a neutron star with accretion disk \cite{2020arXiv200700847S}. Independent of the LIGO/Virgo results, neutron star properties have been reported recently by the Neutron Star Interior Composition Explorer (NICER) team. In particular, observations of the isolated millisecond pulsar PSR J0030+0451 indicate a stiffer equation of state \cite{2019ApJ...887L..24M,2019ApJ...887L..21R} than those mostly favored by the LIGO/Virgo collaboration \cite{PhysRevLett.121.161101,2020ApJ...896L..44A}. Besides a stiff equation of state, anisotropies in the core of a neutron star \cite{2007ASSL..326.....H} may also allow higher maximum neutron star masses \cite{1975A&A....38...51H}. The anisotropies inside the star can grow due to superfluidity \cite{1969Natur.224..673B,1970PhRvL..24..775H,2019EPJA...55..167S}, solidification \cite{1971NPhS..231..145A,1972NPhS..236...37C,1973PhRvL..30..999C,1973NPhS..243..130S}, hyperons \cite{1998PhRvC..57..409B}, quarks \cite{1984PhR...107..325B} as well as pion and kaon condensates \cite{1972PhRvL..29..382S,1995PThPh..94..457T}. In addition, nuclear matter in a magnetic field becomes anisotropic, with different pressures in directions along and transverse to the field \cite{2010PhRvC..82f5802F,2019Univ....5..104F}. The electromagnetic energy-momentum tensor is naturally anisotropic. Thus, an anisotropic core is more realistic than an ideally isotropic one.
An anisotropic compact object is subject to a tangential pressure $p_t = p_\theta = p_\phi$ in the angular directions that is different from the radial pressure, $p_r \neq p_t$ \cite{1964RSPSA.282..303B,Bowers:1974tgi}. If the anisotropy parameter is positive, $\Delta \equiv p_t-p_r>0$, an additional repulsive anisotropic force enables more compact stable configurations to appear in the anisotropic than in the isotropic case \cite{2002PhRvD..65j4011I}. The maximum mass of anisotropic compact neutron stars has been estimated to be $M_{\rm max}\sim 4 {\rm M}_\odot$ \cite{1975A&A....38...51H}. Several anisotropic solutions can be found in Refs. \cite{PhysRevD.26.1262,PhysRevD.77.027502,Thirukkanesh_2008,2016Ap&SS.361..339S,Maurya_2016,Maurya_2017,2018EPJC...78..673E,2019EPJC...79..885T,2019EPJC...79..853D,2019EPJP..134..600E,2019EPJC...79..138B,Ivanov_2002}. Here, we will investigate the possibility that the GW190814 secondary component is an anisotropic neutron star. We will work with a metric ansatz introduced by Krori \& Barua (KB) \cite{1975JPhA....8..508K}. Anisotropic compact star models in the KB-spacetime have also been studied in Refs. \cite{PhysRevD.82.044052,2012EPJC...72.2071R,Kalam_2013,Bhar:2014mta,2015Ap&SS.356..309B,Bhar_etaL_2015,2015Ap&SS.359...57A,2016Ap&SS.361....8Z,2016Ap&SS.361..342Z,2016CaJPh..94.1024S,2017Ap&SS.362..237I,2018EPJC...78..307Y,2018IJGMM..1550093S,2019CoTPh..71..599F,2019EPJC...79..919S,2020IJMPA..3550013S,2020MPLA...3550354S,2020arXiv200709797R}. With the total mass as input, we will calculate the boundary density, radius, and equation of state compatible with LIGO's constraints imposed by gravitational-wave signals GW170817 and GW190814 \cite{2020ApJ...896L..44A}. In the next Section~\ref{sec:GW190814} we impose the LIGO constraints on the neutron star equation of state on an anisotropic neutron star with mass $2.6{\rm M}_\odot$, equal to that of the secondary GW190814 component. In the final section we discuss our conclusions. \section{Anisotropic star subject to LIGO constraints}\label{sec:GW190814} Let us first review briefly the KB-ansatz. We write the spherically symmetric metric in General Relativity in spherical coordinates $(t,r,\theta, \phi)$ as \begin{equation} ds^2=-e^{\alpha(r)} \, c^2 dt^2+e^{\beta(r)}dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)\,,\label{eq:met1} \end{equation} for some functions of $r$, $\alpha(r)$ and $\beta(r)$. If we denote by $\rho$, $p_r$, $p_t$ the mass density, the radial pressure and the tangential pressure, respectively, the spherically symmetric, anisotropic energy momentum tensor is \begin{equation}\label{eq:T_enmom} T^\mu{}_\nu =\left(\frac{p_t}{c^2}+\rho \right)u^\mu u_\nu + p_t\,\delta^\mu{}_\nu + (p_r - p_t)\, \xi^\mu \xi_\nu, \end{equation} where $u^\mu$ is the four-velocity and $\xi^\mu$ the unit space-like vector in the radial direction. Krori \& Barua \cite{1975JPhA....8..508K} introduced the following ansatz for the metric potentials \begin{align}\label{eq:pot} \alpha(x) =a_0 x^2+a_1\,,\quad \beta(x) =a_2 x^2\, , \end{align} where however we use the dimensionless variable \begin{equation}\label{eq:x_dless} x\equiv \frac{r}{R} \in [0,1]. \end{equation} We will use the KB-ansatz to model the core of an anisotropic neutron star. We denote the radius of the core as $r=R$.
Using the characteristic density \begin{equation}\label{eq:rho_star} \rho_\star \equiv \frac{c^2}{8\pi G R^2} \end{equation} we get the dimensionless variables \begin{equation}\label{eq:rho+p_dless} \tilde{\rho} = \frac{\rho}{\rho_\star}\,,\; \tilde{p}_r = \frac{p_r}{\rho_\star c^2}\,,\; \tilde{p}_t = \frac{p_t}{\rho_\star c^2}\, . \end{equation} Einstein's equations give for the dimensionless variables \begin{align} \label{eq:rho_a} \tilde{\rho} &= \frac{ e^{-a_2 x^2}}{x^2} \left( e^{a_2 x^2}-1 + 2a_2 x^2 \right) \,, \\ \label{eq:p_r_a} \tilde{p}_r &= \frac{ e^{-a_2 x^2}}{x^2} \left( 1-e^{a_2 x^2}+2a_0 x^2 \right)\,, \\ \label{eq:p_t_a} \tilde{p}_t &= e^{-a_2x^2} \left( 2a_0 -a_2 + a_0 (a_0 - a_2) x^2 \right)\, . \end{align} Integrating $\mathcal{M}^\prime = 4\pi \rho r^2$ we get for the mass $\mathcal{M}(r)$ contained within $r$ that \begin{equation} \label{eq:mas_a} \mathcal{M}(x)=M\, C^{-1} x\left( 1-e^{-a_2 x^2}\right) \, , \end{equation} where $M$ denotes the total mass of the core and $C$ is the compactness \begin{equation} C = \frac{2GM}{Rc^2}. \end{equation} We match the KB-solution with the Tolman-Oppenheimer-Volkoff metric \cite{2020CQGra..37i7001R} at the boundary of the anisotropic core \begin{align} ds^2_{\rm TOV} &= -e^{\alpha_{\rm TOV}(r)} \, c^2 dt^2 + \left( 1 - \frac{2G\mathcal{M}(r)}{rc^2}\right)^{-1} dr^2 + r^2(d\theta^2+\sin^2\theta d\phi^2)\,,\label{eq:TOV} \\ \alpha_{\rm TOV}(r) &= \frac{2}{c^2}\int_r^\infty d\xi\, \left(\frac{G \mathcal{M}(\xi)}{\xi^2} + \frac{4\pi G}{c^2} P(\xi)\, \xi\right)\left( 1 - \frac{2G \mathcal{M}(\xi) }{\xi c^2} \right)^{-1}, \; r\geq R, \end{align} that is assumed to describe the outer layers with a different equation of state $P = P(\rho)$. The boundary pressure $p_R\equiv p_r(R) = P(R)$ is a free parameter. We have \begin{equation}\label{eq:boundary_cond} \beta(r=R)= \ln \left(1- C \right)^{-1}, \quad \tilde{p}_r(r=R) = \tilde{p}_R. \end{equation} For our purposes we only need $a_0$, $a_2$ as is evidenced from Eqs. (\ref{eq:rho_a})-(\ref{eq:p_t_a}). By use of Eqs. (\ref{eq:pot}), (\ref{eq:p_r_a}), (\ref{eq:boundary_cond}) we get \begin{equation} \label{eq:a_param} a_0(C) = \frac{1}{2}\frac{C +\tilde{p}_R}{1-C}, \quad a_2(C)= \ln\left(1- C\right)^{-1}\,. \end{equation} These equations along with (\ref{eq:rho_a})-(\ref{eq:p_t_a}) allow us to parametrize density and pressure solely with respect to compactness, $\tilde{\rho} = \tilde{\rho}(x;C)$, $\tilde{p} = \tilde{p}(x;C)$, as was first shown in \cite{2020arXiv200709797R}. Stars with the same compactness acquire the same core profile and properties. The parameter $a_1$, not required in our analysis, can be determined from the boundary condition $\alpha_{\rm TOV}(r=R) = a_0 + a_1$. \begin{figure}[!tb] \begin{center} \includegraphics[scale = 0.5]{./p_r_rho_LIGO.eps} \caption{The radial pressure with respect to the density for the two marginal anisotropic cores with $M = 2.6 {\rm M}_\odot$ and $R = 13.2\,{\rm km}$ (dash-dotted line), $R=14.0\,{\rm km}$ (continuous line) compatible with LIGO's constraints (green shaded region) from GW170817, GW190814. } \label{fig:eos_LIGO} \end{center} \end{figure} The boundary density $\rho_R$ and boundary radial pressure $p_R$ are free parameters which we constrain by LIGO's observations. These are summarized in Figure 8 of \cite{2020ApJ...896L..44A} and reproduced here as the green shaded region of Figure \ref{fig:eos_LIGO}.
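As a sanity check on this parametrization, the short \python sketch below (ours, for illustration only; the chosen dimensionless boundary pressure $\tilde p_R \approx 0.03$ is an assumption matching the boundary values quoted below for $R=14.0\,{\rm km}$) evaluates the coefficients (\ref{eq:a_param}) and the profiles (\ref{eq:rho_a})-(\ref{eq:p_t_a}) for a core with $M = 2.6\,{\rm M}_\odot$: \begin{verbatim}
import math

# Physical constants in CGS units
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33

def kb_core(M, R, p_R_tilde):
    """Dimensionless KB-core profiles for mass M [g] and radius R [cm]."""
    C = 2 * G * M / (R * c**2)                  # compactness
    a0 = 0.5 * (C + p_R_tilde) / (1.0 - C)      # coefficients a_0(C), a_2(C)
    a2 = math.log(1.0 / (1.0 - C))
    rho_star = c**2 / (8 * math.pi * G * R**2)  # characteristic density
    rho = lambda x: math.exp(-a2*x**2) * (math.exp(a2*x**2) - 1 + 2*a2*x**2) / x**2
    p_r = lambda x: math.exp(-a2*x**2) * (1 - math.exp(a2*x**2) + 2*a0*x**2) / x**2
    p_t = lambda x: math.exp(-a2*x**2) * (2*a0 - a2 + a0*(a0 - a2)*x**2)
    return C, rho_star, rho, p_r, p_t

C, rho_star, rho, p_r, p_t = kb_core(2.6 * M_sun, 14.0e5, 0.03)
print(C, rho(1.0) * rho_star)   # ~0.55 and ~3.5e14 g/cm^3
\end{verbatim} For $R=14.0\,{\rm km}$ this gives $C \approx 0.55$ and a boundary density $\tilde\rho(1)\,\rho_\star \approx 3.5\cdot 10^{14}\,{\rm g}/{\rm cm}^3$; sampling the profiles on a grid of $x$ and fitting $\tilde p_r$ against $\tilde\rho$ then yields the approximately linear equations of state shown in Figure \ref{fig:eos_LIGO}.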
We find that an anisotropic neutron core with mass equal to the secondary GW190814 component, $M=2.6{\rm M}_\odot$, satisfies the constraints on the equation of state compatible with signals GW170817 and GW190814 given by LIGO only for radii in the range $(13.2-14.0)\,{\rm km}$, given in detail in Table \ref{tab:density}. The boundary density is $(3.5-4.0)\cdot 10^{14}\,{\rm g}/{\rm cm}^3$, with the lower value corresponding to the bigger radius. In Figure \ref{fig:eos_LIGO} we show that the two marginal solutions $R=13.2\,{\rm km}$ and $R = 14.0\,{\rm km}$ satisfy LIGO's constraints on the equation of state. It seems plausible that the bigger star, $R=14.0\,{\rm km}$, is the more probable candidate, because of its more realistic core boundary density $\rho_R=3.5\cdot 10^{14}\,{\rm g}/{\rm cm}^3$. \begin{small} \begin{table}[tbp] \begin{center} \begin{tabular}{c | c c c c } \toprule $R {[{\rm km}]}$ & $\rho_R{[10^{14}{\rm g}/{\rm cm}^3]}$ & $p_R/c^2{[10^{14}{\rm g}/{\rm cm}^3]}$ & $\frac{dp_r}{d\rho}{[c^2]}$ & $\frac{dp_t}{d\rho}{[c^2]}$ \\ \midrule $13.2$ & $4.03$ & $(0.24-0.26)$ & $0.49$ & $0.34$ \\ $13.3$ & $3.95$ & $(0.18-0.23)$ & $(0.46-0.48)$ & $(0.32-0.34)$ \\ $13.4$ & $3.88$ & $(0.14-0.21)$ & $(0.45-0.47)$ & $(0.31-0.33)$ \\ $13.5$ & $3.81$ & $(0.13-0.18)$ & $(0.44-0.46)$ & $(0.30-0.32)$ \\ $13.6$ & $3.73$ & $(0.12-0.16)$ & $(0.44-0.45)$ & $(0.30-0.31)$ \\ $13.7$ & $3.66$ & $(0.11-0.14)$ & $(0.43-0.44)$ & $0.30$ \\ $13.8$ & $3.59$ & $(0.10-0.12)$ & $(0.42-0.43)$ & $(0.29-0.30)$ \\ $13.9$ & $3.53$ & $(0.09-0.10)$ & $0.42$ & $0.29$ \\ $14.0$ & $3.46$ & $0.08$ & $0.41$ & $0.28$ \\ \bottomrule \end{tabular} \end{center} \caption{The radius $R$, boundary density $\rho_R$, boundary radial pressure $p_R$ and slopes of $p_r(\rho)$, $p_t(\rho)$ linear fits, compatible with LIGO's constraints for an anisotropic core with mass $2.6{\rm M}_\odot$.} \label{tab:density} \end{table} \end{small} The acceptable radii correspond to compactness values $C=(0.55-0.58)$. These are not only realistic for a neutron star, but also lie below the limit $C<0.71$. This is the stability and physical-requirements condition for an anisotropic star in KB-spacetime calculated by Roupas \& Nashed \cite{2020arXiv200709797R}. Note that this limit was calculated assuming zero boundary pressure. Nevertheless, it is straightforward to verify that the solutions of Table \ref{tab:density} satisfy the stricter condition that gave the limit $0.71$, that is, the strong energy condition \cite{1988CQGra...5.1329K,2017EPJC...77..738I} $ \rho c^2 - p_r - 2p_t > 0\,. $ It is also straightforward to verify that all other conditions of Roupas \& Nashed \cite{2020arXiv200709797R} are satisfied, such as the stability condition for the adiabatic index \cite{1994MNRAS.267..637C,1997PhR...286...53H} $ \Gamma \equiv \frac{\rho+ p_r}{p_r}\frac{dp_r}{d \rho} > \frac{4}{3} $, the causality conditions $v_r < c$, $v_t <c$ and also the condition of stability against cracking \cite{HERRERA1992206,2007CQGra..24.4631A} $0 < v_r{}^2-v_t{}^2 < c^2$. \section{Conclusions} We have used the constraints on a neutron star's core equation of state given by LIGO as derived from the gravitational-wave signals GW170817, GW190814 \cite{2020ApJ...896L..44A} in order to investigate if an anisotropic neutron core is compatible with the secondary GW190814 component. We found, intriguingly, that not only is it compatible, but also that the parameters imposed by these constraints give physical, stable solutions.
There is no a priori reason why this would be the case, especially since LIGO's analysis \cite{2020ApJ...896L..44A} favours a black hole as a candidate for the secondary GW190814 component. In particular, we conclude that an anisotropic neutron star core with $M=2.6{\rm M}_\odot$ in the KB-ansatz is compatible with LIGO constraints for a radius $R=(13.2-14.0)\,{\rm km}$. The corresponding boundary density for the $R=14.0\,{\rm km}$ solution is $\rho_R=3.5\cdot 10^{14}\,{\rm g}/{\rm cm}^3$, which is very close to the nuclear saturation density. For this solution the linear fit of the equation of state gives $p_r \simeq 0.41\, \rho c^2$ and $p_t \simeq 0.28\, \rho c^2$.
\section{Introduction} Due to the availability of vast amounts of data and corresponding tremendous advances in machine learning, computer software is nowadays an ever increasing presence in every aspect of our society. As we rely more and more on machine-learned software, we become increasingly vulnerable not only to programming errors but also (in contrast to traditional software) to errors in the data used for training. In general, before software training, the data goes through long pre-processing pipelines\footnote{\url{https://www.nytimes.com/2014/08/18/technology/for-big-data-scientists-hurdle-to-insights-is-janitor-work.html}}. Errors can be missed, or even introduced, at any stage of these pipelines. This is even more true when data pre-processing stages are disregarded as single-use glue code and, for this reason, are poorly tested, let alone statically analyzed or verified. Moreover, this kind of code is often written in a rush and is highly dependent on the data (e.g., the use of magic constants is not uncommon). All this together greatly increases the likelihood for errors to be noticed extremely late in the pipeline (which entails a more or less serious waste of time) or, more dangerously, to remain completely unnoticed. \subsubsection{Motivating Example.} \begin{figure}[t] \lstinputlisting[language=Python]{gpa.py} \caption{Simple GPA calculator for multiple students.} \label{fig:gpa} \end{figure} As an example, let us consider the data processing \python code shown in Figure~\ref{fig:gpa}, which calculates the simple GPA for a given number of students (cf. Line $2$). For each class taken by a student (cf. Line $7$), their (A-F) grade is converted into a numeric (4-0) grade, and all numeric grades are added together (cf. Line $9$). The GPA is obtained by dividing this by the number of classes taken by the student (cf. Line $10$). Even this small program makes several assumptions on its input data. For instance, it assumes that the very first input read by the program (cf. Line $2$) is a string representation of an integer number that indicates how many student records follow in the data file (cf. Line $3$). A similar assumption holds for the second input read for each student record (cf. Line $5$), which should indicate how many student grades follow in the data file (cf. Line $7$). This number should be different from zero (or the division at Line $10$ would raise a \lstinline[language=Python]{ZeroDivisionError}). Finally, the program assumes that each grade read at Line $8$ is a string in the set $\set{\text{\lstinline[language=Python]{'A'}}, \text{\lstinline[language=Python]{'B'}}, \text{\lstinline[language=Python]{'C'}}, \text{\lstinline[language=Python]{'D'}}, \text{\lstinline[language=Python]{'F'}}}$ (or the dictionary access at Line $9$ would raise a \lstinline[language=Python]{KeyError}). Note that not all assumptions necessarily lead to a program error if violated. For instance, consider the following data stream: \begin{center} \begin{tabular}{ccccc} 1 & Emma & 1 & A & F \\ && $\shortuparrow$ && \end{tabular} \end{center} A mistake is indicated by the arrow: the number of classes taken by the student Emma is off by one (i.e., it should be $2$ instead of $1$). In this case the program in Figure~\ref{fig:gpa} will not raise any error but will instead compute a wrong (but plausible!) GPA for Emma (i.e., $4.0$ instead of $2.0$).
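For concreteness, the following sketch gives a plausible \python rendering of such a program (the actual code of Figure~\ref{fig:gpa} may differ in its exact line numbering and variable names; the comments map this sketch to the line references used above): \begin{lstlisting}[language=Python]
grade2num = {'A': 4.0, 'B': 3.0, 'C': 2.0, 'D': 1.0, 'F': 0.0}
students = int(input())          # number of student records (cf. Line 2)
for _ in range(students):        # one iteration per student (cf. Line 3)
    name = input()
    classes = int(input())       # number of grades that follow (cf. Line 5)
    gpa = 0.0
    for _ in range(classes):     # (cf. Line 7)
        grade = input()          # (cf. Line 8)
        gpa = gpa + grade2num[grade]  # KeyError if grade is not A-F (cf. Line 9)
    gpa = gpa / classes          # ZeroDivisionError if classes == 0 (cf. Line 10)
    print(name, gpa)
\end{lstlisting} On the erroneous data stream shown above, such a program silently prints \lstinline[language=Python]{Emma 4.0}: only the grade \lstinline[language=Python]{'A'} is consumed, and the trailing \lstinline[language=Python]{'F'} is never read.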
\subsubsection{Our Approach.} \looseness=-1 To address these issues, we propose an abstract interpretation-based \emph{shape analysis} framework \emph{for input data} of data-processing programs. The analysis automatically infers implicit assumptions on the input data that are embedded in the source code of a program. Specifically, we infer assumptions on the structure of the data as well as on the values and the relations between the data. We propose a \emph{new} data shape \emph{abstract domain}, capable of reasoning about the input data in addition to the program variables. The domain builds on a family of underlying over-approximating abstract domains, which collect constraints on the program variables and, indirectly, on the input data of a program. The abstract domain is parametric in the choice of the underlying domains. Thus, our analysis infers \emph{necessary conditions} on the data read by the program, i.e., conditions that, if violated, guarantee that the program will execute unsuccessfully or incorrectly. This approach suffers from false negatives. However, we argue that this is preferable in practice to overwhelming data scientists with possibly many false positives (as with sufficient conditions). Back to our motivating example, the analysis (parameterized by the sign abstract domain \cite{CousotC-92b} and the finite string set domain \cite{Christensen-03}) infers that data files read by the program in Figure~\ref{fig:gpa} have the following shape: \begin{equation*} \begin{array}{rcrclc} &&&& \scriptstyle{\textcolor{gray}{1}} & \textsc{int} \geq 0 \\ \multirow{5}{*}{$d_1$} &\multirow{5}{*}{$\begin{cases} & \\ & \\ & \\ & \\ & \end{cases}$} & && \scriptstyle{\textcolor{gray}{2}} & \textsc{string} \\ &&&& \scriptstyle{\textcolor{gray}{3}} & \textsc{int} \geq 0 \\ &&\multirow{2}{*}{$d_3$} &\multirow{2}{*}{$\begin{cases} & \\ & \end{cases}$}& \scriptstyle{\textcolor{gray}{4}} & \textsc{string}\in \set{\text{'A', 'B', 'C', 'D', 'F'}} \\ &&&& \scriptstyle{\textcolor{gray}{\vdots}} & \dots \\ &&&& \scriptstyle{\textcolor{gray}{\vdots}} & \end{array} \end{equation*} where $d_i$ denotes the data at line $i$ of the data file. Thus, the analysis would detect the mistake discussed above, since a data file containing the erroneous data does not match this inferred condition. Note that, in general, a mismatch between a data file and a data-processing program indicates a mistake either in the data or in the source code of the program. Our analysis does not aim to address this question. More generally, the result of our analysis can be used for a wide range of applications: from code specifications \cite{Cousot13}, to grammar-based testing \cite{Hennessy05}, to automatically checking and guiding the cleaning of the data \cite{Radwa18,Madelin17}. \subsubsection{Outline.} Section~\ref{sec:semantics} introduces the syntax and concrete semantics of our data-processing programs. In Section~\ref{sec:constraining}, we define and present instances of the underlying abstract domains. We describe the rest of our data shape abstract domain in Section~\ref{sec:stack} and define the abstract semantics in Section~\ref{sec:abstract}. Our prototype static analyzer is presented in Section~\ref{sec:implementation}. Finally, Section~\ref{sec:related} discusses related work and Section~\ref{sec:conclusion} concludes and envisions future work. \section{Input Data-Aware Program Semantics} \label{sec:semantics} \subsubsection{Input Data.} We consider \emph{tabular data} stored, e.g., in CSV files.
We note, however, that what we present easily generalizes to other file formats, e.g., spreadsheets. Let $\svals$ be a set of string values. Furthermore, let $\isvals \subseteq \svals$ and $\fsvals \subseteq \svals$ be the sets of string values that can be interpreted as integer and float values, respectively. We formalize a data file as a possibly empty $(r \times c)$-matrix of string values, where $r \in \nat$ and $c \in \nat$ denote the number of matrix rows (i.e., data records) and columns (i.e., data fields), respectively. We write $\epsilon$ to denote an empty data file. Let \begin{equation}\label{eq:files} \files \defined \bigcup_{r \in \nat}\bigcup_{c \in \nat} \svals^{r \times c} \end{equation} be the set of all data files. Without loss of generality, to simplify our formalization, we assume that data records contain only one field, i.e., $c = 1$. We lift this assumption and consider multiple data fields in Section~\ref{subsec:other}. \subsubsection{Data-Processing Language.} \begin{figure}[t] \begin{center} \begin{tabular}{lclr} A &$\Coloneqq$& $X$ & $X\in \vars$ \\ &$\vert$& $v$ & $v \in \vals$ \\ &$\vert$& $\ipt~~\vert~~\mathsf{\mathbf{int}}(A)~~\vert~~\mathsf{\mathbf{float}}(A)~$ & \\ &$\vert$& $A_1 \diamond A_2 $ & \hspace{3em}$\diamond \in \set{+,-,*,/}$ \\ \\ B & $\Coloneqq$& $A_1 \bowtie A_2$ & $\bowtie~\in \{<, \leq, =, \not=, >, \geq\}$ \\ &$\vert$ & $B_1 \lor B_2~~\vert~~B_1 \land B_2$ \\ \\ S & $\Coloneqq$& $^lX := A$ & $l \in \labels, X \in \vars$ \\ &$\vert$& $\mathsf{\mathbf{if}}~^lB~\mathsf{\mathbf{then}}~S_1~\mathsf{\mathbf{else}}~S_2~\mathsf{\mathbf{fi}}$ & $l \in \labels$ \\ &$\vert$& $\mathsf{\mathbf{for}}~^lA~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}~~\vert~~\mathsf{\mathbf{while}}~^lB~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}$ & $l \in \labels$ \\ &$\vert$& $S_1 ; S_2$ & \\ \\ P & $\Coloneqq$& $S^l$ & $l \in \labels$ \end{tabular} \end{center} \vspace{-1em} \caption{Syntax}\label{fig:syntax} \end{figure} We consider a toy \python-like programming language for data manipulation, which we use for illustration throughout the rest of the paper. Let \vars be a finite set of program variables, and let $\vals\defined\ivals \cup \fvals \cup \svals$ be a set of values partitioned in sets of integer (\ivals), float (\fvals), and string (\svals) values. The syntax of programs is defined inductively in Figure~\ref{fig:syntax}. A program $P$ consists of an instruction $S$ followed by a unique label $l \in \labels$. Another unique label appears within each instruction. Programs can read data from an input data file: the $\ipt$ expression consumes a record from the input data file. Without loss of generality, to simplify our formalization, we assume that only the right-hand sides of assignments can contain $\ipt$ sub-expressions. (Programs can always be rewritten to satisfy this assumption.) The $\mathsf{\mathbf{for}}~A~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}$ instruction repeats an instruction $S$ for $A$ times. The rest of the language syntax is standard. \subsubsection{Input-Aware Semantics.} We can now define the (concrete) semantics of the data-processing programs. This semantics differs from the usual semantics in that it is \emph{input data-aware}, that is, it explicitly considers the data read by programs. An environment $\rho\colon \vars \rightarrow \vals$ maps each program variable $X \in \vars$ to its value $\rho(X) \in \vals$. Let \envs denote the set of all environments.
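As a concrete illustration of the data-file formalization above (this explicit rendering is ours): under the single-field assumption, the erroneous data stream for the student Emma from the Introduction corresponds to the data file $\left[ \begin{matrix} \text{\lstinline[language=Python]{1}} \\ \text{\lstinline[language=Python]{Emma}} \\ \text{\lstinline[language=Python]{1}} \\ \text{\lstinline[language=Python]{A}} \\ \text{\lstinline[language=Python]{F}} \end{matrix} \right] \in \svals^{5 \times 1} \subseteq \files$, that is, a matrix with five data records of one field each.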
The semantics of an arithmetic expression $A$ is a function $\function{\arith{A}}{\envs\times\files}{\vals\times\files}$ mapping an environment and a data file to the value (in $\vals$) of the expression in the given environment and given the data read from the file (if any), and the (rest of) the data file (in $\files$) after the data is consumed. \begin{example}\label{ex:arith} Let $\rho$ be an environment that maps the variable $gpa$ to the value \text{\lstinline[language=Python]{3.0}}, and let $D = \left[ \begin{matrix} \text{\lstinline[language=Python]{4.0}} \\ \text{\lstinline[language=Python]{1.0}} \\ \text{\lstinline[language=Python]{3.0}} \end{matrix} \right]$ be a data file containing three data records. We consider the expression $gpa + \ipt$, which simplifies the right-hand side of the assignment at line 9 in Figure~\ref{fig:gpa}. Its semantics is $\arith{ gpa + \ipt } = \left( \text{\lstinline[language=Python]{7.0}}, \left[ \begin{matrix} \text{\lstinline[language=Python]{1.0}} \\ \text{\lstinline[language=Python]{3.0}} \end{matrix} \right] \right)$. \qee \end{example} We also define the standard input-agnostic semantics $\function{\aritha{A}}{\envs}{\powerset{\vals}}$ mapping an environment to the set of all possible values of the expression in the environment: $\aritha{A}\rho \defined \set{v \in \vals \mid \exists D \in \files\colon \tuple{v}{\_} = \arith{A}\tuple{\rho}{D}}$. Similarly, the semantics of a boolean expression $\function{\bool{B}}{\envs}{\set{\textsf{tt}, \textsf{ff}}}$ maps an environment to the truth value of the expression in the given environment. \begin{figure}[t] \begin{align*} &\stmt{^lX := A}W \defined \set{ \tuple{\rho}{D} \in \envs\times\files \mid \tuple{v}{R} = \arith{A}\tuple{\rho}{D}, \tuple{ \rho\lbrack X \mapsto v \rbrack }{ R } \in W } \\ &\stmt{\mathsf{\mathbf{if}}~^lB~\mathsf{\mathbf{then}}~S_1~\mathsf{\mathbf{else}}~S_2~\mathsf{\mathbf{fi}}}W \defined W1 \cup W2 \\ &\qquad W1 \defined \set{ \tuple{\rho}{D} \in \envs\times\files \mid \textsf{tt} = \bool{B}\rho, \tuple{ \rho }{ D } \in \stmt{S_1}W } \\ &\qquad W2 \defined \set{ \tuple{\rho}{D} \in \envs\times\files \mid \textsf{ff} = \bool{B}\rho, \tuple{ \rho }{ D } \in \stmt{S_2}W } \\ &\stmt{\mathsf{\mathbf{for}}~^lA~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}}W \defined \set{ \tuple{\rho}{D} \in \envs\times\files \mid \tuple{0}{D} = \arith{A}\tuple{\rho}{D}, \tuple{ \rho }{ D } \in W } \cup W' \\ &\qquad W' \defined \set{ \tuple{\rho}{D} \in \envs\times\files ~\left\vert~ \begin{matrix} \tuple{v}{D} = \arith{A}\tuple{\rho}{D}, v > 0, \\ \tuple{ \rho }{ D } \in \stmt{S; \mathsf{\mathbf{for}}~^lA-1~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}}W \end{matrix} \right. 
} \\ &\stmt{\mathsf{\mathbf{while}}~^lB~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}}W \defined \lfp~F \\ &\qquad F(Y) \defined \set{ \tuple{\rho}{D} \in \envs\times\files \mid \textsf{ff} = \bool{B}\rho, \tuple{ \rho }{ D } \in W } \cup W' \\ &\qquad W' \defined \set{ \tuple{\rho}{D} \in \envs\times\files \mid \textsf{tt} = \bool{B}\rho, \tuple{ \rho }{ D } \in \stmt{S}Y } \\ &\stmt{S_1; S_2}W \defined \stmt{S_1} \circ \stmt{S_2}W \end{align*} \caption{Input-Aware Concrete Semantics of Instructions}\label{fig:concrete} \end{figure} The semantics of programs $\function{\Delta\semantics{P}}{\labels}{\powerset{\envs\times\files}}$ maps each program label $l \in \labels$ to the set of all pairs of environments that are possible when the program execution is at that label, and input data files that the program can \emph{fully read without errors} starting \emph{from} that label. We define this semantics \emph{backwards}, starting from the final program label where all environments in \envs are possible but only the empty data file $\epsilon$ can be read from that program label: \begin{equation} \Delta\semantics{P} = \Delta\semantics{S^l} \defined \Delta\semantics{S}\left(\lambda p. \begin{cases} \envs\times\set{\epsilon} & p = l \\ \text{undefined} & \text{otherwise} \end{cases} \right) \end{equation} In Figure~\ref{fig:concrete}, we (equivalently) define the semantics $\function{\Delta\semantics{S}}{(\labels\rightarrow\powerset{\envs\times\files})} {(\labels\rightarrow\powerset{\envs\times\files})}$ of each instruction pointwise within $\powerset{\envs\times\files}$: each function $\function{\stmt{S}}{\powerset{\envs\times\files}}{\powerset{\envs\times\files}}$ takes as input a set $W$ of pairs of environments and data files and outputs the pairs of possible environments and data files that can be read from the program label within the instruction $S$. \begin{example}\label{ex:stmt} Let $\rho'$ be an environment that maps the variable $gpa$ to the value \lstinline[language=Python]{7.0}, and let $R = \left[ \begin{matrix} \text{\lstinline[language=Python]{1.0}} \\ \text{\lstinline[language=Python]{3.0}} \end{matrix} \right]$ be a data file. We consider the assignment $gpa := gpa + \ipt$ which simplifies the assignment at line 9 in Figure~\ref{fig:gpa}. Its semantics, given $W = \set{ \tuple{\rho'}{R} }$, is $\stmt{gpa := gpa + \ipt}W = \set{\tuple{\rho}{D}}$ where $\rho$ maps the variable $gpa$ to the value \lstinline[language=Python]{3.0} and $D = \left[ \begin{matrix} \text{\lstinline[language=Python]{4.0}} \\ \text{\lstinline[language=Python]{1.0}} \\ \text{\lstinline[language=Python]{3.0}} \end{matrix} \right]$ (see Example~\ref{ex:arith}). \qee \end{example} \subsubsection{Data Shape Abstraction.} In the following sections, we design a decidable abstraction of $\Delta\semantics{P}$ which \emph{over-approximates} the concrete semantics of $P$ at each program label $l \in \labels$. As a consequence, this abstraction yields \emph{necessary preconditions} for a program to execute successfully and correctly. In particular, if a data file is not in the abstraction, the program will definitely eventually run into an error or compute a wrong result if it tries to read data from it. On the other hand, if a data file is in the abstraction there is no guarantee that the program will execute successfully and correctly when reading data from it. We derive the abstraction $\function{\Delta^\natural\semantics{P}}{\labels}{\aelm}$ by \emph{abstract interpretation} \cite{CousotC-POPL77}. 
No approximation is made on \labels. On the other hand, each program label $l \in \labels$ is associated with an element $Q \in \aelm$ of the \emph{data shape abstract domain} \adom. $Q$ over-approximates the possible environments and data files read starting from $l$. \begin{figure}[t] \center \includegraphics[width=0.75\textwidth]{domain.pdf} \caption{Data Shape Abstract Domain.}\label{fig:domain} \end{figure} An overview of the data shape abstract domain is given in Figure~\ref{fig:domain}. It is parameterized by a family $\kdom_1, \dots, \kdom_k$ of \emph{constraining abstract domains}, which collect constraints on the program variables, and an \emph{input abstract domain} $\hdom$, which collects constraints on the input data read by the program. We now present and describe instances of these abstract domains, before defining $\Delta^\natural\semantics{P}$. \section{Constraining Abstract Domains}\label{sec:constraining} The constraining abstract domains abstract the possible environments at each program label. Thus, they constrain the values of the variables of the analyzed program and also \emph{indirectly} constrain the input data read by the program. Any constraining domain $\kdom$ that we present is characterized by a choice of: \begin{itemize} \item[$\bullet$] a set $\kelm$ of computer-representable abstract domain elements; \item[$\bullet$] a partial order $\sqsubseteq_\kdom$ between domain elements; \item[$\bullet$] a concretization function $\function{\gamma_\kdom}{\kelm}{\powerset{\envs}}$ mapping abstract domain elements to sets of possible environments, or, when possible, a Galois connection $\tuple{\powerset{\envs}}{\subseteq} \galois{\alpha_\kdom}{\gamma_\kdom} \tuple{\kelm}{\sqsubseteq_\kdom}$; \item[$\bullet$] a least element $\bot_\kdom \in \kelm$ such that $\gamma_\kdom(\bot_\kdom) = \emptyset$; \item[$\bullet$] a greatest element $\top_\kdom \in \kelm$ such that $\gamma_\kdom(\top_\kdom) = \envs$; \item[$\bullet$] a sound join operator $\sqcup_\kdom$ such that $\gamma_\kdom(K_1) \cup \gamma_\kdom(K_2) \subseteq \gamma_\kdom(K_1 \sqcup_\kdom K_2)$; \item[$\bullet$] a sound widening $\triangledown_\kdom$ if $\kdom$ does not satisfy the ascending chain condition; \item[$\bullet$] a sound backward assignment operator $\assign[\kdom]{X := A}$ such that \\ $\set{ \rho \in \envs \mid \exists v \in \aritha{A}\rho\colon \rho[X \mapsto v] \in \gamma_\kdom(K)} \subseteq \gamma_\kdom(\assign[\kdom]{X := A}K)$; and \item[$\bullet$] a sound filter operator $\filter[\kdom]{B}$ such that \\ $\set{ \rho \in \gamma_\kdom(K) \mid \textsf{tt} = \bool{B}\rho } \subseteq \gamma_\kdom(\filter[\kdom]{B}K)$. \end{itemize} Essentially any of the existing classical abstract domains \cite[etc.]{Costantini-15,CousotC-76,Mine-06} can be a constraining domain. Some of their operators just need to be augmented with certain operations to ensure the communication with the input domain $\hdom$, which (directly) constrains the input data. Specifically, the backward assignment operation $\assign[\kdom]{X := A}$ needs to be preceded by a $\replace{A, \ivars}$ operation, which replaces each $\ipt$ sub-expression of $A$ with a fresh special input variable $I \in \ivars$. The input variables are added to the constraining domain on the fly to track the value of the input data as well as the \emph{order} in which the data is read by the program. \begin{example}\label{ex:replace} Let us consider again the assignment $gpa := gpa + \ipt$ which simplifies line 9 in Figure~\ref{fig:gpa}.
One way to track the order in which input data is read by the program is to parameterize the fresh input variables by the program label at which the corresponding \ipt expression occurs. If we use line numbers as labels, in this case we only need one fresh input variable $I_9$ (for multiple \ipt expressions at the same program label $9$ we can add superscripts: $I^1_9, I^2_9, \dots$). Thus, $\replace{gpa + \ipt, \set{I_9}} = gpa + I_9$. \qee \end{example} Once the assignment or filter operation has been performed, the operation $\record{I}$ extracts from the domain the constraints on each newly added input variable $I$ so that they can be directly recorded in the input domain $\hdom$. The input variables can then be removed from the constraining domain $\kdom$. \subsection{Non-Relational Constraining Abstract Domains}\label{subsec:nonrel} In the following, we present a few instances of \emph{non-relational} constraining domains. These domains abstract each program variable independently. Thus, each constraining domain element $K \in \kelm_\uelm$ of $\kdom_\udom$ is a map $\function{K}{\vars}{\uelm}$ from program variables to elements of a \emph{basis} abstract domain $\udom$. In the following, we write $\uelm\semantics{A}K$ to denote the value (in $\uelm$) of an arithmetic expression $A$ given the abstract domain element $K \in \kelm_\uelm$. In particular, for a binary expression $A_1 \diamond A_2$, we define $\uelm\semantics{A_1 \diamond A_2}K \defined \uelm\semantics{A_1}K \diamond_\udom \uelm\semantics{A_2}K$ and thus we assume that the basis $\udom$ is equipped with the operator $\diamond_\udom$. The concretization function $\function{\gamma_{\kdom_\udom}}{\kelm_\uelm}{\powerset{\envs}}$ is: \begin{equation}\label{eq:gamma} \gamma_{\kdom_\udom}(K) \defined \set{ \rho \in \envs \mid \forall X \in \vars\colon \mathsf{str}(\rho(X)) \in \gamma_\udom(K(X)) } \end{equation} where $\function{\gamma_\udom}{\uelm}{\powerset{\svals}}$, and $\function{\mathsf{str}}{\vals}{\svals}$ converts float and integer values to strings such that $\mathsf{str}(\fvals) = \fsvals$ and $\mathsf{str}(\ivals) = \isvals$. The partial order $\sqsubseteq_{\kdom}$, join $\sqcup_{\kdom}$, and widening $\triangledown_{\kdom}$ are straightforwardly defined pointwise. For these constraining domains, the $\replace{A, \ivars}$ operation \emph{temporarily} enlarges the domain of the current abstract element $K \in \kelm_\uelm$ to also include input variables, i.e., $\function{K}{\vars\cup\ivars}{\uelm}$. The $\record{I}$ operation simply returns the value $K(I) \in \uelm$. All input variables are then removed from the domain of $K$. \subsubsection{Type Constraining Abstract Domain.} The first instance that we consider is very simple but already useful for catching exceptions that would be raised when casting inputs to integers or floats, as at lines 2 and 5 in Figure~\ref{fig:gpa}. \begin{wrapfigure}{r}{0.4\textwidth} \vspace{-25pt} \begin{center} \begin{tikzpicture} \node (A) {\textsc{string}}; \node (B) [below of=A] {\textsc{float}}; \node (E) [below of=B] {\textsc{int}}; \node (H) [below of=E] {$\bot_\tdom$}; \draw (A) -- (B); \draw (B) -- (E); \draw (E) -- (H); \end{tikzpicture} \end{center} \vspace{-10pt} \caption{The $\telm$ type lattice.}\label{fig:type} \vspace{-15pt} \end{wrapfigure} We define the basis type domain $\tdom$ to track \emph{the type of input data that can be stored in the program variables}. Its elements belong to the type lattice $\telm$ represented by the Hasse diagram in Figure~\ref{fig:type}.
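Since $\telm$ is a finite chain, its lattice operations are immediate to implement. The following \python sketch (with hypothetical names, not the actual code of our implementation) encodes each element by its height in the chain:
\begin{lstlisting}[language=Python]
# The type lattice as a chain: BOT < INT < FLOAT < STRING.
BOT, INT, FLOAT, STRING = 0, 1, 2, 3

def leq(t1, t2):   # partial order: comparison of chain heights
    return t1 <= t2

def join(t1, t2):  # least upper bound of a chain: the maximum
    return max(t1, t2)

def meet(t1, t2):  # greatest lower bound of a chain: the minimum
    return min(t1, t2)
\end{lstlisting}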
$\telm$ defines the type hierarchy (reminiscent of that of \python) that we use for our analysis. Data is always read as a string (cf. Section~\ref{sec:semantics}). Thus, \textsc{string} is the highest type in the hierarchy. Some (but not all) strings can be cast to float or integer, thus the \textsc{float} and \textsc{int} types follow in the hierarchy. Finally, $\bot_\tdom$ indicates an exception. We define the concretization function $\function{\gamma_\tdom}{\telm}{\powerset{\svals}}$ as follows: \begin{equation}\label{eq:gammaT} \begin{array}{cccc} \gamma_\tdom(\textsc{string}) \defined \svals & \hspace{1em} \gamma_\tdom(\textsc{float}) \defined \fsvals & \hspace{1em} \gamma_\tdom(\textsc{int}) \defined \isvals & \hspace{1em} \gamma_\tdom(\bot_\tdom) \defined \emptyset \end{array} \end{equation} The partial order $\sqsubseteq_\tdom$, join $\sqcup_\tdom$, and meet $\sqcap_\tdom$ are defined by Figure~\ref{fig:type}. No widening $\triangledown_\tdom$ is necessary since the basis type domain $\tdom$ is finite. Each element $K \in \kelm_\telm$ of the type constraining abstract domain $\kdom_\tdom$ is thus a map $\function{K}{\vars}{\telm}$ from program variables to type elements. The bottom element is the constant map $\lambda X. \bot_\tdom$ which represents a program exception. The top element is $\lambda X. \textsc{string}$ or, better, $\lambda X. \textsf{type}(X)$, where $\textsf{type}(X)$ is the type inferred for $X$ by a static type inference previously run on the program (e.g., \cite{Hassan-18,Monat-20} for \python). In the latter case, the analysis with $\kdom_\tdom$ might refine the inferred type (e.g., $\textsf{type}(X) = \textsc{float}$ but the analysis finds $K(X) = \textsc{int}$). In particular, such a refinement is done by the $\assign[\kdom_\tdom]{X := A}$ and $\filter[\kdom_\tdom]{B}$ operators. The $\assign[\kdom_\tdom]{X := A}$ operator refines the types of the variables and input variables that appear in the assigned expression $A$. Specifically, $\assign[\kdom_\tdom]{X := A}K \defined \refine{\replace{A, \ivars}}{K[X \mapsto \textsf{type}(X)]}{K(X)}$, where the $\function{\textsc{refine}_A}{\kelm \times \telm}{\kelm}$ function is defined as follows: \begin{equation*} \begin{array}{rclr} \refine{X}{K}{T} & \defined & K[X \mapsto K(X) \sqcap_\tdom T] & X \in \vars \\ \refine{v}{K}{T} & \defined & K & v \in \vals \\ \refine{I}{K}{T} & \defined & K[I \mapsto T] & I \in \ivars \\ \refine{\mathsf{\mathbf{int}}(A)}{K}{T} & \defined & \refine{A}{K}{T \sqcap_\tdom \textsc{int}} & \\ \refine{\mathsf{\mathbf{float}}(A)}{K}{T} & \defined & \refine{A}{K}{T \sqcap_\tdom \textsc{float}} & \\ \refine{A_1 \diamond A_2}{K}{T} & \defined & \refine{A_1}{\refine{A_2}{K}{T'}}{T'} & \hspace{1em} T' = T \sqcap_\tdom \textsc{float} \end{array} \end{equation*} Note that, for soundness, the current value $K(X)$ of the assigned variable $X$ must be forgotten before the refinement (i.e., $K[X \mapsto \textsf{type}(X)]$). We refine variables within an arithmetic operation $A_1 \diamond A_2$ to contain data of at most type $\textsc{float}$. \begin{example}[continued from Example~\ref{ex:replace}]\label{ex:type} Let us consider again the assignment $gpa := gpa + \ipt$ which simplifies line 9 in Figure~\ref{fig:gpa} and let $K$ be an abstract domain element which maps the variable $gpa$ to the type value $\textsc{int}$, while a previously run type inference has determined that $\textsf{type}(gpa) = \textsc{float}$.
We have: \begin{equation*} \begin{array}{l} \assign[\kdom_\tdom]{gpa := gpa + \ipt}K \defined \refine{gpa + I_9}{K[gpa \mapsto \textsc{float}]}{\textsc{int}} \\ = \refine{gpa}{\refine{I_9}{K[gpa \mapsto \textsc{float}]}{\textsc{int}}}{\textsc{int}} \\ = \refine{gpa}{K[gpa \mapsto \textsc{float}][I_9 \mapsto \textsc{int}]}{\textsc{int}} = K[I_9 \mapsto \textsc{int}][gpa \mapsto \textsc{int}] \end{array} \end{equation*} which indicates that the program expects to read an integer at line $9$. Note that this is a result of our choice of $K$. Indeed, with $K$ mapping $gpa$ to $\textsc{float}$, we have $\assign[\kdom_\tdom]{gpa := gpa + \ipt}K = K[I_9 \mapsto \textsc{float}][gpa \mapsto \textsc{float}]$ (which is what the program in Figure~\ref{fig:gpa} actually expects). \qee \end{example} Similarly, the filter operator $\filter[\kdom_\tdom]{B}$ is defined as follows: \begin{equation*} \begin{array}{rclr} \filter[\kdom_\tdom]{A_1 = A_2}K & \defined & \refine{A_1}{\refine{A_2}{K}{\telm\semantics{A_1}K}}{\telm\semantics{A_2}K} & \\ \filter[\kdom_\tdom]{A_1 \not= A_2}K & \defined & K & \\ \filter[\kdom_\tdom]{A_1 \mathrel{\overline{\mbox{$\bowtie$}\raisebox{1.9mm}{}}} A_2}K & \defined & \refine{A_1}{\refine{A_2}{K}{\textsc{float}}}{\textsc{float}} & \\ \filter[\kdom_\tdom]{B_1 \lor B_2}K & \defined & \filter[\kdom_\tdom]{B_1}K \sqcup_{\kdom_\tdom} \filter[\kdom_\tdom]{B_2}K & \\ \filter[\kdom_\tdom]{B_1 \land B_2}K & \defined & \filter[\kdom_\tdom]{B_2}K \circ \filter[\kdom_\tdom]{B_1}K & \end{array} \end{equation*} where $\overline{\mbox{$\bowtie$}\raisebox{1.9mm}{}} \in \set{<, \leq, >, \geq}$. The soundness of the domain operators is straightforward: \begin{lemma}\label{lm:type} The operators of the type constraining domain $\kdom_\tdom$ are sound. \end{lemma} \subsubsection{Value Constraining Abstract Domains.} Numerical abstract domains such as the interval domain \cite{CousotC-76} or the sign domain \cite{CousotC-92b} can be used to track \emph{the input data values that can be stored in the program variables}. In particular, the latter is useful to catch exceptions raised when dividing by zero, as at line 10 in Figure~\ref{fig:gpa}. \begin{wrapfigure}{l}{0.4\textwidth} \vspace{-30pt} \begin{center} \begin{tikzpicture}[node distance=1cm] \node (A) {$\top_\ndom$}; \node (B) [below of=A] {$\not= 0$}; \node (C) [left of=B] {$\leq 0$}; \node (D) [right of=B] {$\geq 0$}; \node (E) [below of=B] {$= 0$}; \node (F) [left of=E] {$< 0$}; \node (G) [right of=E] {$> 0$}; \node (H) [below of=E] {$\bot_\ndom$}; \draw (A) -- (B); \draw (A) -- (C); \draw (A) -- (D); \draw (B) -- (F); \draw (B) -- (G); \draw (C) -- (E); \draw (C) -- (F); \draw (D) -- (E); \draw (D) -- (G); \draw (E) -- (H); \draw (F) -- (H); \draw (G) -- (H); \end{tikzpicture} \end{center} \vspace{-15pt} \caption{The $\nelm$ sign lattice.}\label{fig:sign} \vspace{-45pt} \end{wrapfigure} The sign lattice $\nelm$ shown in Figure~\ref{fig:sign} represents the elements of the basis sign domain $\ndom$. We define the concretization function $\function{\gamma_\ndom}{\nelm}{\powerset{\svals}}$ as follows: \begin{equation}\label{eq:gammaN} \begin{array}{rcl} \gamma_\ndom(\top_\ndom) &\defined& \svals \\ \gamma_\ndom(\lhd 0) &\defined& \fsvals^{\lhd 0} \\ \gamma_\ndom(\bot_\ndom) &\defined& \emptyset \\ \end{array} \end{equation} where $\lhd \in \set{<, \leq, =, \not=, >, \geq}$ and $\fsvals^{\lhd 0}$ denotes the set of string values that can be interpreted as float values that satisfy $\lhd 0$.
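For intuition, testing whether a string value belongs to $\gamma_\ndom(\lhd 0)$ amounts to parsing the string as a float and checking the comparison against zero. A minimal \python sketch (hypothetical code, not our actual implementation):
\begin{lstlisting}[language=Python]
import operator

# comparison operators for the non-trivial sign elements
SIGNS = {'<0': operator.lt, '<=0': operator.le, '=0': operator.eq,
         '!=0': operator.ne, '>0': operator.gt, '>=0': operator.ge}

def in_gamma(sign, s):
    """Test whether the string s belongs to the concretization of sign."""
    if sign == 'TOP':   # every string value is allowed
        return True
    if sign == 'BOT':   # empty concretization
        return False
    try:
        v = float(s)    # only strings that parse as floats qualify
    except ValueError:
        return False
    return SIGNS[sign](v, 0.0)
\end{lstlisting}
For instance, \lstinline[language=Python]{in_gamma('>=0', '3.5')} yields \lstinline[language=Python]{True}.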
The partial order $\sqsubseteq_\ndom$, join $\sqcup_\ndom$, and meet $\sqcap_\ndom$ are defined by the Hasse diagram in Figure~\ref{fig:sign}. Again, no widening $\triangledown_\ndom$ is necessary since the basis domain $\ndom$ is finite. Each element $K \in \kelm_\nelm$ of the sign constraining abstract domain $\kdom_\ndom$ is thus a map $\function{K}{\vars}{\nelm}$ from program variables to sign elements. For this domain, the backward assignment operator is $\assign[\kdom_\ndom]{X := A}K \defined \refine{\replace{A, \ivars}}{K[X \mapsto \top_\ndom]}{K(X)}$, where $\function{\textsc{refine}_A}{\kelm \times \nelm}{\kelm}$ is: \begin{equation*} \begin{array}{rclr} \refine{X}{K}{N} & \defined & K[X \mapsto K(X) \sqcap_\ndom N] & \hspace{-35em}X \in \vars \\ \refine{v}{K}{N} & \defined & K & \hspace{-35em}v \in \vals \\ \refine{I}{K}{N} & \defined & K[I \mapsto N] & \hspace{-35em}I \in \ivars \\ \refine{\mathsf{\mathbf{int}}(A)}{K}{N} & = & \refine{\mathsf{\mathbf{float}}(A)}{K}{N} \defined \refine{A}{K}{N} & \\ \refine{A_1 + A_2}{K}{N} & \defined & \refine{A_1}{ \refine{A_2}{K}{ N -_\ndom \nelm\semantics{A_1}K } }{N -_\ndom \nelm\semantics{A_2}K} & \\ \refine{A_1 - A_2}{K}{N} & \defined & \refine{A_1}{ \refine{A_2}{K}{ \nelm\semantics{A_1}K -_\ndom N } }{N +_\ndom \nelm\semantics{A_2}K} & \\ \refine{A_1 * A_2}{K}{N} & \defined & \refine{A_1}{ \refine{A_2}{K}{ N \mathrel{/_\ndom} \nelm\semantics{A_1}K} }{N \mathrel{/_\ndom} \nelm\semantics{A_2}K} & \\ \refine{A_1 / A_2}{K}{N} & \defined & \refine{A_1}{K'}{N *_\ndom \nelm\semantics{A_2}K} & \\ && K' = \refine{A_2}{K}{ \not= 0 \sqcap_\ndom (\nelm\semantics{A_1}K \mathrel{/_\ndom} N) } & \end{array} \end{equation*} Note that we refine variables in the denominator $A_2$ of a division expression $A_1 / A_2$ to have values different from zero. \begin{example}\label{ex:sign} Let us consider the assignment $result := gpa \mathrel{/} classes$ at line 10 in Figure~\ref{fig:gpa} and let $K$ be an abstract domain element which maps the variables $gpa$ and $result$ to the sign value $\geq 0$ and the variable $classes$ to $\top_\ndom$. We have: \begin{equation*} \begin{array}{l} \assign[\kdom_\ndom]{result := gpa \mathrel{/} classes}K \defined \refine{gpa / classes}{K[result \mapsto \top_\ndom]}{\geq 0} \\ = \refine{gpa}{\refine{classes}{K[result \mapsto \top_\ndom]}{\not= 0}}{\geq 0} \\ = \refine{gpa}{K[result \mapsto \top_\ndom][classes \mapsto \not= 0]}{\geq 0} \\ = K[result \mapsto \top_\ndom][classes \mapsto \not= 0][gpa \mapsto \geq 0] \end{array} \end{equation*} which, in particular, indicates that the program expects the variable $classes$ (read at line 5 in Figure~\ref{fig:gpa}) to have a value different from zero. \qee \end{example} Instead, the filter operator $\filter[\kdom_\ndom]{B}$ is defined as follows: \begin{equation*} \begin{array}{rclr} \filter[\kdom_\ndom]{A \lhd 0}K & \defined & \refine{A}{K}{\lhd 0} & \hspace{-5em}\lhd \in \set{<, \leq, =, \not=, >, \geq} \\ \filter[\kdom_\ndom]{A_1 \bowtie A_2}K & \defined & \filter[\kdom_\ndom]{A_1 - A_2 \bowtie 0}K & A_2 \not= 0 \\ \filter[\kdom_\ndom]{B_1 \lor B_2}K & \defined & \filter[\kdom_\ndom]{B_1}K \sqcup_{\kdom_\ndom} \filter[\kdom_\ndom]{B_2}K & \\ \filter[\kdom_\ndom]{B_1 \land B_2}K & \defined & \filter[\kdom_\ndom]{B_2}K \circ \filter[\kdom_\ndom]{B_1}K & \end{array} \end{equation*} The soundness of the sign constraining domain operators follows directly from the soundness of the sign abstract domain \cite{CousotC-92b}.
\begin{lemma}\label{lm:sign} The operators of the sign constraining domain $\kdom_\ndom$ are sound. \end{lemma} \subsubsection{String Constraining Abstract Domains.} Finally, we build a last instance of non-relational constraining domain on the finite string set domain \cite{Christensen-03}, to track \emph{the string data values that can be stored in the program variables}. Other more sophisticated string domains exist \cite[etc.]{Arceri19,Costantini-15}. However, even this simple domain suffices to catch \lstinline[language=Python]{KeyError} exceptions that might occur, e.g., at line 9 in Figure~\ref{fig:gpa}. Each abstract domain element $K \in \kelm_\welm$ of the string domain $\kdom_\wdom$ is a map $\function{K}{\vars}{\welm}$ from program variables to an element $W \in \welm$ of the basis domain $\wdom$. Elements of $\wdom$ are finite sets of at most $m$ strings, or the top element $\top_\wdom$ which abstracts larger sets of strings, i.e., $\welm \defined \powerset{\svals} \cup \set{\top_\wdom}$. In the following, we write $\bot_\wdom$ to denote the empty string set. The concretization function $\function{\gamma_\wdom}{\welm}{\powerset{\svals}}$ is: \begin{equation}\label{eq:gammaW} \begin{array}{cc} \gamma_\wdom(\top_\wdom) \defined \svals & \hspace{1em} \gamma_\wdom(W) \defined W \end{array} \end{equation} The partial order $\sqsubseteq_\wdom$, join $\sqcup_\wdom$, and meet $\sqcap_\wdom$ are the set operations $\subseteq$, $\cup$, and $\cap$ extended to also handle $\top_\wdom$: \begin{equation*} \begin{array}{rclr} W_1 \sqsubseteq_\wdom W_2 & \Leftrightarrow & W_2 = \top_\wdom \lor (W_1 \not= \top_\wdom \land W_1 \subseteq W_2) & \\ W_1 \sqcup_\wdom W_2 & \defined & \begin{cases} \top_\wdom & W_1 = \top_\wdom \lor W_2 = \top_\wdom \lor |W_1 \cup W_2| > m \\ W_1 \cup W_2 & \text{otherwise} \end{cases} \\ W_1 \sqcap_\wdom W_2 & \defined & \begin{cases} W_1 & W_2 = \top_\wdom \\ W_2 & W_1 = \top_\wdom \\ W_1 \cap W_2 & \text{otherwise} \end{cases} \end{array} \end{equation*} The widening $W_1 \triangledown_\wdom W_2$ yields $\top_\wdom$ unless $W_2 \sqsubseteq_\wdom W_1$ (in which case it yields $W_1$). We can now define the backward assignment operator $\assign[\kdom_\wdom]{X := A}K \defined \refine{\replace{A, \ivars}}{K[X \mapsto \top_\wdom]}{K(X)}$, where $\function{\textsc{refine}_A}{\kelm \times \welm}{\kelm}$ is: \begin{equation*} \begin{array}{rclr} \refine{X}{K}{W} & \defined & K[X \mapsto K(X) \sqcap_\wdom W] & X \in \vars \\ \refine{v}{K}{W} & \defined & K & v \in \vals \\ \refine{I}{K}{W} & \defined & K[I \mapsto W] & I \in \ivars \\ \refine{\mathsf{\mathbf{int}}(A)}{K}{W} & = & \refine{\mathsf{\mathbf{float}}(A)}{K}{W} \defined \refine{A}{K}{W} & W = \top_\wdom \\ \refine{\mathsf{\mathbf{int}}(A)}{K}{W} & = & \refine{\mathsf{\mathbf{float}}(A)}{K}{W} \defined \refine{A}{K}{\bot_\wdom} & W \not= \top_\wdom \\ \refine{A_1 \diamond A_2}{K}{W} & \defined & \refine{A_1}{\refine{A_2}{K}{W}}{W} & W = \top_\wdom \\ \refine{A_1 \diamond A_2}{K}{W} & \defined & \refine{A_1}{\refine{A_2}{K}{\bot_\wdom}}{\bot_\wdom} & W \not= \top_\wdom \end{array} \end{equation*} Note that variables in numerical expressions (such as $\mathsf{\mathbf{int}}(A)$, $\mathsf{\mathbf{float}}(A)$ or $A_1 \diamond A_2$) should not have a specific string value (i.e., a value different from $\top_\wdom$). \begin{example}\label{ex:string} Let us consider a small extension of our toy language with dictionaries.
In particular, we extend the grammar of arithmetic expressions with dictionary display (in \python terminology) expressions $\set{v_0: v_1, v_2: v_3, \dots}$, $v_0, v_1, v_2, v_3, \ldots \in \vals$, for dictionary creation (cf. line 1 in Figure~\ref{fig:gpa}) and dictionary access expressions $X[A]$ (such as $grade2gpa[grade]$ at line 9 in Figure~\ref{fig:gpa}). For each dictionary, we assume that abstract domains only keep track of two summary variables \cite{Gopan04}, one representing the dictionary keys and one representing its values. For instance, let us consider the $grade2gpa$ dictionary in Figure~\ref{fig:gpa} and let the string domain element $K$ map the variable $keys(grade2gpa)$ to the set of strings $\set{\text{'A', 'B', 'C', 'D', 'F'}}$ and $values(grade2gpa)$ to $\top_\wdom$. We can extend $\textsc{refine}_A$ defined above to handle dictionary access expressions as follows: $ \refine{X[A]}{K}{W} \defined \refine{A}{K}{K(keys(X))} $. No refinement can be made on $X$ since, for soundness, only weak updates are allowed on summary variables \cite{Chase90}. For the assignment $gpa := gpa + grade2gpa[grade]$ at line 9 in Figure~\ref{fig:gpa} we thus have $\assign[\kdom_\wdom]{gpa := gpa + grade2gpa[grade]}K = K[grade \mapsto \set{\text{'A', 'B', 'C', 'D', 'F'}}]$, which indicates the string values expected by the program for the variable $grade$ (read at line 8 in Figure~\ref{fig:gpa}). \qee \end{example} The filter operator $\filter[\kdom_\wdom]{B}$ is defined as follows: \begin{equation*} \begin{array}{rclr} \filter[\kdom_\wdom]{A_1 = A_2}K & \defined & \refine{A_1}{\refine{A_2}{K}{\welm\semantics{A_1}K}}{\welm\semantics{A_2}K} & \\ \filter[\kdom_\wdom]{A_1 \not= A_2}K & \defined & K & \\ \filter[\kdom_\wdom]{A_1 \mathrel{\overline{\mbox{$\bowtie$}\raisebox{1.9mm}{}}} A_2}K & \defined & \refine{A_1}{\refine{A_2}{K}{\welm\semantics{A_1}K}}{\welm\semantics{A_2}K} & \\ &&& \hspace{-50em}\welm\semantics{A_2}K = \top_\wdom \land \welm\semantics{A_1}K = \top_\wdom \\ \filter[\kdom_\wdom]{A_1 \mathrel{\overline{\mbox{$\bowtie$}\raisebox{1.9mm}{}}} A_2}K & \defined & \refine{A_1}{\refine{A_2}{K}{\bot_\wdom}}{\bot_\wdom} & \\ &&& \hspace{-15em}\welm\semantics{A_2}K \not= \top_\wdom \lor \welm\semantics{A_1}K \not= \top_\wdom \\ \filter[\kdom_\wdom]{B_1 \lor B_2}K & \defined & \filter[\kdom_\wdom]{B_1}K \sqcup_{\kdom_\wdom} \filter[\kdom_\wdom]{B_2}K & \\ \filter[\kdom_\wdom]{B_1 \land B_2}K & \defined & \filter[\kdom_\wdom]{B_2}K \circ \filter[\kdom_\wdom]{B_1}K & \end{array} \end{equation*} where $\overline{\mbox{$\bowtie$}\raisebox{1.9mm}{}} \in \set{<, \leq, >, \geq}$. The soundness of the string constraining domain operators follows directly from the soundness of the finite string set abstract domain \cite{Christensen-03}. \begin{lemma}\label{lm:string} The operators of the string constraining domain $\kdom_\wdom$ are sound. \end{lemma} \subsection{Other Constraining Abstract Domains}\label{subsec:other} We now briefly discuss other instances of constraining domains. \subsubsection{Relational Constraining Abstract Domains.} Other constraining domains can be built on \emph{relational} abstract domains. Popular such domains are octagons \cite{Mine-06} or polyhedra \cite{CousotH-POPL78}, which track linear relations between program variables. We refer to the literature for the formal definition of these abstract domains and only discuss here the implementation of the additional operations needed to communicate with the input domain \hdom.
In particular, similarly to non-relational domains, the $\replace{E, \ivars}$ operation temporarily adds the input variables in $\ivars$ to the current abstract element $K \in \kelm$. These are unconstrained at first and might become subject to constraints after an assignment or filter operation. The implementation of the $\record{I}$ operation is more complex for relational domains: $\record{I}$ extracts from the current abstract element $K$ an abstract domain element $\overline{K}$ containing all and only the constraints in $K$ that involve the input variable $I$. The domain $\dom(\overline{K})$ of $\overline{K}$ is the subset of $\dom(K)$ containing only the variables appearing in these constraints. The input variables can then be projected away from $K$. \begin{example} Let us consider again the assignment $gpa := gpa + \ipt$ which simplifies line 9 in Figure~\ref{fig:gpa} and let $K = \set{gpa \geq 0, grades > 0}$ be a polyhedron defined over the variables $gpa$ and $grades$, i.e., $\dom(K) = \set{gpa, grades}$. After $\replace{gpa + \ipt, \set{I_9}}$ (cf. Example~\ref{ex:replace}), $K$ is unchanged but its domain is enlarged to also include the input variable $I_9$, i.e., $\dom(K) = \set{gpa, grades, I_9}$. The result of the (replaced) assignment $gpa := gpa + I_9$ is then the polyhedron $K' = \set{gpa + I_9 \geq 0, grades > 0}$. Finally, the $\record{I_9}$ operation returns the polyhedron $\overline{K} = \set{gpa + I_9 \geq 0}$, where $\dom(\overline{K}) = \set{gpa, I_9}$. \qee \end{example} In the following, we assume that input variables are parameterized by the program label at which their corresponding \ipt expressions occur, as in Example~\ref{ex:replace}. Note that there is not necessarily a one-to-one correspondence between $\ipt$ expressions in a program and data records in a data file. Indeed, multiple records can be read by the same $\ipt$ expression (e.g., in a for loop as in Figure~\ref{fig:gpa}) or, vice versa, the same data record could be read by multiple $\ipt$ expressions (e.g., in different if branches). In particular, the latter case implies that two abstract domain elements $K_1$ and $K_2$ might be defined over different input variables. Thus, relational constraining domains need to be equipped with a unification operation $\unify{K_1}{K_2}$ to match different input variables that correspond to the same data record. One simple option to deal with this problem is to keep track of the order in which the input variables are added to a domain element by each $\replace{E, \ivars}$ operation. The $\unify{K_1}{K_2}$ operation then simply consists in matching input variables according to this order. \subsubsection{Container Constraining Abstract Domains.} We now lift the assumption that data records only have one field (cf. Section~\ref{sec:semantics}). We extend the grammar of expressions to also include data access expressions $X[A]$, $X \in \vars$ (similarly to what we did in Example~\ref{ex:string} for dictionaries). Similarly, we extend the grammar of statements to also include assignments of the form $X[A_1] := A_2$. We call variables like the $X$ used in these expressions \emph{array variables}. In this case, abstract domains should be able to also handle reads and updates of array variables in addition to the numerical and string variables considered so far. The most basic option to do so is to use summarization \cite{Gopan04} (as in Example~\ref{ex:string}) and only perform weak updates \cite{Chase90}.
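Recall that a summary variable stands for several concrete cells at once, so a sound update may only \emph{join} the new abstract value with the old one. The following \python sketch (hypothetical names) contrasts the two kinds of updates:
\begin{lstlisting}[language=Python]
def strong_update(state, var, value):
    # A regular variable denotes a single cell: overwrite its value.
    state[var] = value

def weak_update(state, summary_var, value, join):
    # A summary variable abstracts several cells; any one of them may
    # be the updated one, so the old value is kept as a possibility.
    state[summary_var] = join(state[summary_var], value)
\end{lstlisting}
A strong update on a summary variable would be unsound, as it would discard values still held by the other abstracted cells.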
It is sometimes possible to fully expand array variables to improve precision \cite{Blanchet02}, or use a combination of expansion and summarization (i.e., expand part of the array up to a certain size and summarize the rest). Many other abstract domains exist that are specifically designed to analyze arrays \cite[etc.]{Cousot11,Gopan05,Halbwachs08,Liu17} or, more generally, containers (e.g., sets, dictionaries) \cite[etc.]{Cox13,Cox14,Dillig10,Dillig11,Fulara12}. Any of these can be instantiated as a constraining domain (as we showed in this section) and used within our framework. \section{Input Abstract Domain}\label{sec:stack} The input abstract domain $\hdom$, as mentioned, \emph{directly} constrains the input data read by a program. An element $H \in \helm$ of $\hdom$ is a \emph{stack} of variable length $h$: \begin{equation*} \begin{array}{cr} R_0 \mid R_1 \mid \dots \mid R_{h-1} \mid R_{h} & \hspace{7em} R_i \in \relm \end{array} \end{equation*} of assumptions on (part of) the input data, or the special element $\bot_\hdom$ or $\top_\hdom$. The top element $\top_\hdom$ denotes unconstrained input data, while $\bot_\hdom$ indicates a program exception. A stack element grows or shrinks based on the level of nesting of the currently analyzed \ipt expression. Each layer $R_i \in \relm$ is a list of $r$ assumptions repeated $M$ times: $\relm \defined \set{ M \cdot (J_i)^r_{i=1} \mid J_i \in \celm \cup \set{\bigstar} \cup \relm}. $ The \emph{multiplier} $M$ follows this grammar: \begin{equation*} \begin{array}{lr} M \Coloneqq X \in \vars \mid I \in \ivars \mid v \in \ivals \mid M_1 \diamond M_2 & \hspace{5em} \diamond \in \set{+,-,*,/} \end{array} \end{equation*} while an assumption $J_i$ can be a \emph{basic assumption} in $\celm$, the \emph{dummy assumption} $\bigstar$, or another list of repeated assumptions in $\relm$. A basic assumption $C \in \celm$ is a family of constraints, one for each constraining domain $\kdom_1, \dots, \kdom_k$ in $\adom$, associated to a particular program label $l \in \labels$: $\celm \defined \set{ \tuple{l}{(Y_i)^k_{i=1}} \mid l \in \labels, Y_i \in \overline{\kelm_i}}$, where $\overline{\kelm_i} = \uelm_i$ if $\kdom_i$ is a non-relational domain, or $\kelm_i$ otherwise (cf. Section~\ref{sec:constraining}). \begin{example}\label{ex:basic} Let us consider the assignment $grade := I_8$ where $I_8$ is the result of $\replace{\ipt, \set{I_8}}$ at line 8 in Figure~\ref{fig:gpa}. Moreover, let $K_T \in \kelm_\telm$ and $K_W \in \kelm_\welm$ map the variable $grade$ to $\textsc{string}$ and $\set{\text{'A', 'B', 'C', 'D', 'F'}}$, respectively. After the analysis of the assignment, we have $K_T(I_8) = \textsc{string}$ and $K_W(I_8) = \set{\text{'A', 'B', 'C', 'D', 'F'}}$. The call to the function $\record{I_8}$ in the two constraining domains $\kdom_\tdom$ and $\kdom_\wdom$ effectively creates the basic assumption $\tuple{l_8}{\left[\textsc{string}, \set{\text{'A', 'B', 'C', 'D', 'F'}}\right]}$ in the input domain $\hdom$. \qee \end{example} A repeated assumption $M \cdot (J_i)^r_{i=1}$ constrains \emph{all} data read by a for loop. \begin{example}[continued from Example~\ref{ex:basic}]\label{ex:repeated} Let us consider the for loop at lines 7-9 in Figure~\ref{fig:gpa}. The \ipt expression at line 8 is constrained by the basic assumption $\tuple{l_8}{\left[\textsc{string}, \set{\text{'A', 'B', 'C', 'D', 'F'}}\right]}$.
Thus, all data read by the for loop is constrained by $classes \cdot \left[ \tuple{l_8}{\left[\textsc{string}, \set{\text{'A', 'B', 'C', 'D', 'F'}}\right]} \right]$. \qee \end{example} Finally, data read by a while loop is generally approximated by the dummy assumption $\bigstar$, which denotes an unknown number of unconstrained data records. The concretization function $\function{\gamma_\hdom}{\helm}{\powerset{\files}}$ is defined as follows: \begin{equation} \begin{array}{rcl} \gamma_\hdom(\bot_\hdom) & \defined & \emptyset \\ \gamma_\hdom(H) & \defined & \set{ D \in \files \mid D \models H } \\ \gamma_\hdom(\top_\hdom) & \defined & \files \end{array} \end{equation} In particular, the concretization of a stack element $H \in \helm$ is the set of data files that \emph{satisfy} the assumptions fixed by the stack element. We omit the formal definition of the satisfaction relation $\models$ due to space limitations. The following example should provide an intuition: \begin{example}\label{ex:gamma} Let us assume that the program in Figure~\ref{fig:gpa} is analyzed with $\adom$ instantiated with the type $\kdom_\tdom$, sign $\kdom_\ndom$, and string $\kdom_\wdom$ constraining domains. Let us consider the following stack element $H \in \helm$ at line $5$: \begin{equation*} 1 \cdot \left[ \langle l_5, \left[\textsc{int}, \not= 0, \top_\wdom \right] \rangle, I_5 \cdot \left[ \tuple{l_8}{\left[\textsc{string}, \top_\ndom, \set{\text{'A', 'B', 'C', 'D', 'F'}}\right]} \right] \right] \mid 1 \cdot \left[\right] \end{equation*} The data file $\left[ \begin{matrix} \text{\lstinline[language=Python,mathescape]!$2$!} \\ \text{\lstinline[language=Python]{A}} \\ \text{\lstinline[language=Python]{F}} \end{matrix} \right]$ satisfies $H$ since $ \text{\lstinline[language=Python,mathescape]!$2$!} \in \gamma_\tdom(\textsc{int}) \cap \gamma_\ndom(\not= 0) \cap \gamma_\wdom(\top_\wdom)$ and, similarly, $\text{\lstinline[language=Python]{A}}, \text{\lstinline[language=Python]{F}} \in \gamma_\tdom(\textsc{string}) \cap \gamma_\ndom(\top_\ndom) \cap \gamma_\wdom(\set{\text{'A', 'B', 'C', 'D', 'F'}})$. Moreover, $I_5 = \text{\lstinline[language=Python,mathescape]!$2$!}$ and, indeed, there are exactly two data records following $\text{\lstinline[language=Python,mathescape]!$2$!}$. Instead, the data file $\left[ \begin{matrix} \text{\lstinline[language=Python,mathescape]!$1$!} \\ \text{\lstinline[language=Python]{A}} \\ \text{\lstinline[language=Python]{F}} \end{matrix} \right]$ (cf. the motivating example in the Introduction) does not satisfy $H$ since $I_5 = 1$ is followed by two data records instead of one. \qee \end{example} Any data file satisfies the dummy assumption $\bigstar$. Thus, any stack element starting with the dummy assumption (e.g., $1 \cdot \left[\bigstar\right]$) is equivalent to $\top_\hdom$. We define the partial order $\sqsubseteq_\hdom$ such that $H_1 \sqsubseteq_\hdom H_2$ only if $\gamma_\hdom(H_1) \subseteq \gamma_\hdom(H_2)$. Thus, $H_1 \sqsubseteq_\hdom H_2$ is always true if $H_1 = \bot_\hdom$ or $H_2 = \top_\hdom$. Otherwise, $H_1$ and $H_2$ must have the same number of layers to be comparable and $\sqsubseteq_\hdom$ is defined layer-wise.
Specifically, for each $R_1 = M_1 \cdot \left[J^1_1, \dots, J^1_{r_1} \right] \in \relm$ and $R_2 = M_2 \cdot \left[J^2_1, \dots, J^2_{r_2} \right] \in \relm$, $R_1 \sqsubseteq_\rdom R_2$ if and only if $M_1 = M_2$ and $r_1 = r_2$ (i.e., $R_1$ and $R_2$ consist of the same number of assumptions repeated the same number of times), and $\bigwedge^{r_1}_{i=1} J^1_i \sqsubseteq_\mathbb{J} J^2_i$, i.e., $R_1$ imposes stronger constraints on the input data than $R_2$. The partial order $J_1 \sqsubseteq_\mathbb{J} J_2$ is again given by $J_1 \sqsubseteq_\rdom J_2$ when $J_1, J_2 \in \relm$. Otherwise, $J_1 \sqsubseteq_\mathbb{J} J_2$ is always true when $J_2 = \bigstar$. For basic assumptions $J_1 = \tuple{l_1}{\left[ Y^1_1, \dots, Y^1_k \right]} \in \celm$ and $J_2 = \tuple{l_1}{\left[ Y^2_1, \dots, Y^2_k \right]} \in \celm$, $J_1 \sqsubseteq_\mathbb{J} J_2$ is true if and only if $\bigwedge^k_{i=1} Y^1_i \sqsubseteq_{\overline{\kdom}_i} Y^2_i$, where $\overline{\kdom}_i = \udom_i$ if $\kdom_i$ is a non-relational domain, or $\kdom_i$ otherwise. Note that, for relational domains, a unification must be performed prior to $\sqsubseteq_\mathbb{J}$ as discussed in Section~\ref{sec:constraining}. No comparison is possible when $J_1 \in \celm$ and $J_2 \in \relm$, or vice versa. This is a rather rigid definition for $\sqsubseteq_\hdom$. Indeed, in some cases, $H_1 \not\sqsubseteq_\hdom H_2$ even though $\gamma_\hdom(H_1) \subseteq \gamma_\hdom(H_2)$, e.g., consider $H_1 = 1 \cdot \left[ \tuple{l_a}{\left[\textsc{int}\right]}, \tuple{l_b}{\left[\textsc{float}\right]} \right]$ and $H_2 = 2 \cdot \left[ \tuple{l_c}{\left[\textsc{float}\right]} \right]$. Such incomparable stack elements may result from syntactically different but semantically close programs \cite{DelmasM19} (e.g., in one program a loop has been unrolled but not in the other), but never during the analysis of a single program. Thus, for our purposes, this definition of $\sqsubseteq_\hdom$ suffices. The join $\sqcup_\hdom$ is defined analogously to $\sqsubseteq_\hdom$. We omit its formal definition due to space limitations. The join of incomparable stack layers is approximated with the dummy layer $1 \cdot \left[\bigstar\right]$. Thus, no widening $\triangledown_\hdom$ is needed. The backward assignment operator $\assign[\hdom]{X := A}$ and filter operator $\filter[\hdom]{B}$ operate on each stack layer independently. For each $R = M \cdot (J_i)^r_{i=1} \in \relm$, the assignment replaces any occurrence of $X$ in the multiplier $M$ with the expression $\replace{A, \ivars}$. The assignment (resp. filter) operation is done recursively on each assumption $J_i$. When $J_i \in \celm$, the assignment (resp. filter) is delegated to the constraining domains directly. \begin{example}[continued from Example~\ref{ex:repeated}]\label{ex:assign} Let us consider again the assumption $classes \cdot \left[ \tuple{l_8}{\left[\textsc{string}, \set{\text{'A', 'B', 'C', 'D', 'F'}}\right]} \right]$, which constrains the data read by the for loop at lines 7-9 in Figure~\ref{fig:gpa}, and the assignment $classes := \ipt$ (cf. line 5). The assignment simply replaces the multiplier $classes$ in the assumption with the input variable $I_5$: $I_5 \cdot \left[ \tuple{l_8}{\left[\textsc{string}, \set{\text{'A', 'B', 'C', 'D', 'F'}}\right]} \right]$.
\qee \end{example} During the analysis of a for loop, the $\rpt{A}$ operator modifies the multiplier of the assumption in the first stack layer: $\rpt{A}(M \cdot [J, \dots] \mid \dots \mid R_h) \defined (A * M) \cdot [J, \dots] \mid \dots \mid R_h$. The resulting multiplier expression is then simplified, whenever possible (e.g., $(X + 1) - 1$ is simplified to $X$). Finally, it remains to discuss how stack elements $H \in \helm$ grow and shrink during the analysis of a program. Whenever the analysis enters the body of an if or loop statement, the $\push(H)$ operation simply adds an extra layer to $H$ containing the empty assumption $1 \cdot \left[\right]$: $\push(H) \defined 1 \cdot \left[\right] \mid H$. When the analysis later leaves the body of the statement, the $\pop(H)$ operation inserts the assumption in the first layer into the assumption in the second layer: $\pop(R_0 \mid M \cdot [J, \dots] \mid \dots \mid R_h) = M \cdot [R_0, J, \dots] \mid \dots \mid R_h$. Instead, the $\overline{\pop}$ operation merges the assumption in the first layer with the (first) assumption in the second layer: $\overline{\pop}(R_0 \mid M \cdot [J, \dots] \mid \dots \mid R_h) = M \cdot [R_0 \sqcup_\mathbb{J} J, \dots] \mid \dots \mid R_h$. The input domain operators ultimately build on the operators of the constraining domains. Thus, their soundness directly follows from that of the constraining domain operators. \begin{lemma}\label{lm:input} The operators of the input domain $\hdom$ are sound. \end{lemma} \section{Input Data-Aware Program Abstraction}\label{sec:abstract} \begin{figure}[t] \begin{align*} &\astmt{^lX := A}Q \defined \assign[\adom]{X := A}Q \\ % &\astmt{\mathsf{\mathbf{if}}~^lB~\mathsf{\mathbf{then}}~S_1~\mathsf{\mathbf{else}}~S_2~\mathsf{\mathbf{fi}}}Q \defined Q_1 \sqcup_\adom Q_2 \\ &\qquad Q_1 \defined \pop \circ \filter[\adom]{B} \circ \astmt{S_1} \circ \push(Q) \\ &\qquad Q_2 \defined \pop \circ \filter[\adom]{\neg B} \circ \astmt{S_2} \circ \push(Q) \\ % &\astmt{\mathsf{\mathbf{for}}~^lA~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}}Q \defined \lfp^\natural_{\pop \circ \rpt{A} \circ \astmt{S} \circ \push(Q)}~G \\ &\qquad G(Y) \defined \overline{\pop} \circ \rpt{A} \circ \astmt{S} \circ \push(Y) \\ % &\astmt{\mathsf{\mathbf{while}}~^lB~\mathsf{\mathbf{do}}~S~\mathsf{\mathbf{od}}}Q \defined \lfp^\natural~F \\ &\qquad F(Y) \defined \pop \circ \filter[\adom]{\neg B} \circ \push(Q) \sqcup_\adom \pop \circ \filter[\adom]{B} \circ \astmt{S} \circ \push(Y) \\ &\astmt{S_1; S_2}Q \defined \astmt{S_1} \circ \astmt{S_2}Q \end{align*} \caption{Input-Aware Abstract Semantics of Instructions}\label{fig:abstract} \end{figure} We can now use the data shape abstract domain $\adom$ to define the abstract semantics $\Delta^\natural\semantics{P}$. We write $\tuple{\langle K_1, \dots, K_k \rangle}{H} \in \aelm$ to denote an element of $\adom$, where $K_1 \in \kelm_1, \dots, K_k \in \kelm_k$ are elements of the constraining domains $\kdom_1, \dots, \kdom_k$ and $H \in \helm$ is an element of the input domain. The abstract data shape semantics of a data-processing program $P$ is thus: \begin{equation} \Delta^\natural\semantics{P} = \Delta^\natural\semantics{S^l} \defined \Delta^\natural\semantics{S}\left(\lambda p.
\begin{cases} \tuple{\langle \top_{\kdom_1}, \dots, \top_{\kdom_k} \rangle}{1 \cdot \left[\right]} & p = l \\ \text{undefined} & \text{otherwise} \end{cases} \right) \end{equation} The semantics $\function{\Delta^\natural\semantics{S}}{(\labels\rightarrow\aelm)} {(\labels\rightarrow\aelm)}$ of each instruction is (equivalently) defined pointwise within $\aelm$ in Figure~\ref{fig:abstract}: each function $\function{\astmt{S}}{\aelm}{\aelm}$ over-approximates the possible environments and data files that can be read from the program label within the instruction $S$. The $\assign[\adom]{X := A}$ operator first invokes $\assign[\kdom_i]{X := A}$ on each constraining domain $\kdom_i$. Then, the $\record{I}$ operation is executed for each input variable $I \in \ivars$ corresponding to an $\ipt$ sub-expression of $A$. Finally, the assignment is performed on the input domain by $\assign[\hdom]{X := A}$. Similarly, the $\filter[\adom]{B}$ operation is first executed on each constraining domain $\kdom_i$ by $\filter[\kdom_i]{B}$, and then on the input domain by $\filter[\hdom]{B}$. The $\rpt{A}$, $\push$, $\pop$, and $\overline{\pop}$ operations have no effect on the constraining domains and only modify the input domain (cf. Section~\ref{sec:stack}). The abstract semantics of each instruction is sound: \begin{lemma} For each instruction $S$, $\stmt{S}\gamma_\adom(Q) \subseteq \gamma_\adom(\astmt{S}Q)$ \end{lemma} where the concretization function $\function{\gamma_\adom}{\aelm}{\powerset{\envs\times\files}}$ is $\gamma_\adom(\tuple{\langle K_1, \dots, K_k \rangle}{H}) \defined \set{ \tuple{\rho}{D} \in \envs\times\files \mid \rho \in \gamma_{\kdom_1}(K_1) \cap \dots \cap \gamma_{\kdom_k}(K_k), D \in \gamma_\hdom(H) }$. Thus, the abstract data shape semantics $\Delta^\natural\semantics{P}$ is also sound: \begin{theorem} For each data-processing program $P$, we have $\Delta\semantics{P} \subseteq \gamma_\adom(\Delta^\natural\semantics{P})$. \end{theorem} \section{Implementation}\label{sec:implementation} We have implemented our input data shape analysis in the open-source prototype static analyzer \tool\footnote{\url{https://github.com/caterinaurban/Lyra}}. The implementation is in \python and, at the time of writing, accepts data-processing programs written in a subset of \python without user-defined classes. Programs are expected to be type-annotated, either manually or by a type inference \cite{Hassan-18}. For the analysis, various constraining domains are available: in addition to the \emph{type}, \emph{sign}, and \emph{string} domains presented in Section~\ref{subsec:nonrel}, \tool is equipped with the \emph{character inclusion} domain \cite{Costantini-15}, as well as the \emph{intervals} \cite{CousotC-76}, \emph{octagons} \cite{Mine-06}, and \emph{polyhedra} domains \cite{CousotH-POPL78}, which build upon the \textsc{apron} library \cite{JeannetM-CAV09}. A native (non-\textsc{apron}-based) implementation of the intervals domain is also available. For containers (e.g., lists, sets, dictionaries, \dots), a summarization-based abstraction \cite{Gopan04} is the default. Lists, tuples, and dictionaries can be expanded up to a fixed bound beyond which they are summarized (cf. Section~\ref{subsec:other}). The data shape analysis is performed backwards on the control flow graph of the program with a standard worklist algorithm \cite{NielsonNH-99}, using widening at loop heads to enforce termination.
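For the interested reader, the following \python sketch (hypothetical names, heavily simplified with respect to the actual implementation) conveys the overall structure of this backward worklist iteration:
\begin{lstlisting}[language=Python]
from collections import deque

def backward_analysis(cfg, transfer, join, widen, bottom, final_state):
    # state[l] over-approximates the abstract element at program label l
    state = {label: bottom for label in cfg.labels}
    state[cfg.final] = final_state
    worklist = deque([cfg.final])
    while worklist:
        label = worklist.popleft()
        for pred in cfg.predecessors(label):  # walk the CFG backwards
            new = transfer(pred, label, state[label])
            old = state[pred]
            # widen at loop heads to enforce termination
            new = widen(old, new) if cfg.is_loop_head(pred) else join(old, new)
            if new != old:
                state[pred] = new
                worklist.append(pred)
    return state
\end{lstlisting}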
The precision of the analysis can be improved by running a forward pre-analysis which collects value information about the program variables (e.g., in Figure~\ref{fig:gpa}, this would allow the data shape analysis to know the values of the keys of the $grade2gpa$ dictionary already at line $9$, even though the dictionary is created at line $1$, which the backward analysis only reaches after line $9$; cf. Example~\ref{ex:string}). \tool outputs the analysis results in \textsc{json} format so that other applications (e.g., automated data checking tools \cite{Radwa18,Madelin17}) can easily interface with it. Below, we demonstrate the expressiveness of our data shape abstract domain on more examples besides the program shown in Figure~\ref{fig:gpa}. \subsubsection{Magic Trick.} Let us consider the following \python program fragment: \lstinputlisting[language=Python]{magic.py} (from a solution to the \emph{Magic Trick} problem of the Google Code Jam 2014 programming competition\footnote{\url{https://codingcompetitions.withgoogle.com/codejam/archive/2014}}). We instantiate our data shape domain $\adom$ with the type constraining domain $\kdom_\tdom$ and the interval constraining domain $\kdom_\mathbb{I}$. In this case, our data shape analysis with $\adom(\kdom_\tdom, \kdom_\mathbb{I})$ determines that correct data files for the program have the following shape: \begin{equation*} \begin{array}{rcrclccc} &&&&& \scriptstyle{\textcolor{gray}{1}} & \scriptstyle{\textcolor{gray}{2}} & \scriptstyle{\textcolor{gray}{\dots}} \\ % &&&& \scriptstyle{\textcolor{gray}{1}} & \tuple{\textsc{int}}{[0, \infty]} & & \\ % \multirow{9}{*}{$d^1_1$} &\multirow{9}{*}{$\begin{cases} & \\ & \\ & \\ & \\ & \\ & \\ & \\ & \\ & \end{cases}$} &&& \scriptstyle{\textcolor{gray}{2}} & \tuple{\textsc{int}}{[1, 4]} & & \\ % && \multirow{3}{*}{$4$} & \multirow{3}{*}{$\begin{cases} & \\ & \\ & \end{cases}$} & \scriptstyle{\textcolor{gray}{3}} & \tuple{\textsc{string}}{[-\infty, \infty]} & \tuple{\textsc{string}}{[-\infty, \infty]} & \dots \\ % &&&& \scriptstyle{\textcolor{gray}{\vdots}} & \dots & \dots & \\ &&&& \scriptstyle{\textcolor{gray}{6}} & \tuple{\textsc{string}}{[-\infty, \infty]} & \tuple{\textsc{string}}{[-\infty, \infty]} & \dots \\ &&&& \scriptstyle{\textcolor{gray}{7}} & \tuple{\textsc{int}}{[1, 4]} & & \\ % && \multirow{3}{*}{$4$} & \multirow{3}{*}{$\begin{cases} & \\ & \\ & \end{cases}$} & \scriptstyle{\textcolor{gray}{8}} & \tuple{\textsc{string}}{[-\infty, \infty]} & \tuple{\textsc{string}}{[-\infty, \infty]} & \dots \\ % &&&& \scriptstyle{\textcolor{gray}{\vdots}} & \dots & \dots & \\ &&&& \scriptstyle{\textcolor{gray}{11}} & \tuple{\textsc{string}}{[-\infty, \infty]} & \tuple{\textsc{string}}{[-\infty, \infty]} & \dots \\ &&&& \scriptstyle{\textcolor{gray}{\vdots}} & \dots & \dots & \dots \end{array} \end{equation*} where $d^1_1$ denotes the first (and in fact only) data field of the first data record in the data file. In particular, we know that $1 \leq d^1_2 \leq 4$ (resp. $1 \leq d^1_7 \leq 4$) from the for loops at lines 4-5 and 7-8 (resp. at lines 10-11 and 13-14). \qee \subsubsection{Bird Watching.} Let us consider now the following \python program fragment: \lstinputlisting[language=Python]{bird.py} (from a solution to the \emph{Bird Watching} problem of the \textsc{SWERC 2019-2020} programming competition\footnote{\url{https://swerc.eu/2019/}}). We instantiate $\adom$ with the type constraining domain $\kdom_\tdom$ and the octagon constraining domain $\kdom_\mathbb{O}$.
A forward numerical pre-analysis with the octagon domain $\mathbb{O}$ determines, in particular, that $0 \leq \mathsf{len}(pre) \leq N - 1$ (cf. line 2). Thus, our backward data shape analysis with $\adom(\kdom_\tdom, \kdom_\mathbb{O})$ determines that correct data files for the program have the following shape: \begin{equation*} \begin{array}{rclccc} &&& \scriptstyle{\textcolor{gray}{1}} & \scriptstyle{\textcolor{gray}{2}} & \scriptstyle{\textcolor{gray}{3}} \\ % && \scriptstyle{\textcolor{gray}{1}} & \tuple{\textsc{int}}{0 \leq d^1_1 } & \tuple{\textsc{int}}{0 \leq d^2_1 } & \tuple{\textsc{int}}{0 \leq d^3_1 \leq d^1_1 - 1 } \\ % \multirow{3}{*}{$d^2_1$} &\multirow{3}{*}{$\begin{cases} & \\ & \\ & \end{cases}$} & \scriptstyle{\textcolor{gray}{2}} & \tuple{\textsc{int}}{\text{true}} & \tuple{\textsc{int}}{0 \leq d^2_2 \leq d^1_1 } & \\ % && \scriptstyle{\textcolor{gray}{3}} & \tuple{\textsc{int}}{\text{true}} & \tuple{\textsc{int}}{0 \leq d^2_3 \leq d^1_1 } & \\ % && \scriptstyle{\textcolor{gray}{\vdots}} & \dots & \dots & \end{array} \end{equation*} where $d^j_i$ denotes the data field $j$ of the data record $i$. In particular, we know that $0 \leq d^3_1 \leq d^1_1 - 1$ from the list access at line 6 and, similarly, $0 \leq d^2_i \leq d^1_1$, for $2 \leq i$, from the list access at line 5. \qee \subsubsection{Adult Census Data.} Let us consider the following fragment of a pre-processing \python function for the Adult Census dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/adult}}: \lstinputlisting[language=Python]{adult.py} (taken from \cite{Urban19}) where the function argument \lstinline[language=Python]{data} has been loaded from a CSV file. Our backward shape analysis instantiated with the string set constraining domain $\kdom_\wdom$ determines that correct CSV files have the following shape: \begin{equation*} \begin{array}{lccc} & \hspace{3em} \scriptstyle{\textcolor{gray}{1}} & \hspace{3em} \scriptstyle{\textcolor{gray}{2}} & \hspace{3em} \scriptstyle{\textcolor{gray}{\dots}} \\ % \scriptstyle{\textcolor{gray}{1}} & \hspace{3em} \top_\wdom & \hspace{3em} W & \hspace{3em} \dots \\ \scriptstyle{\textcolor{gray}{2}} & \hspace{3em} \top_\wdom & \hspace{3em} W & \hspace{3em} \dots \\ % \scriptstyle{\textcolor{gray}{3}} & \hspace{3em} \top_\wdom & \hspace{3em} W & \hspace{3em} \dots \\ % \scriptstyle{\textcolor{gray}{\vdots}} & \hspace{3em} \dots & \hspace{3em} \dots & \hspace{3em} \dots \end{array} \end{equation*} where $W$ is the set of strings $\{$ 'Private', 'Self-emp-not-inc', 'Self-emp-inc', 'Federal-gov', 'Local-gov', 'State-gov', 'Without-pay', 'Never-worked' $\}$. \qee \section{Related Work}\label{sec:related} Learning the input format of a given program is not a new research area but it has recently seen increased interest, especially in the context of grammar-based automated test generation and fuzzing applications \cite[etc.]{Godefroid08,Holler12,Majumdar07}. Many of the approaches in the literature are \emph{black-box}, e.g., \textsc{glade} \cite{Bastani17} and \textsc{Learn\&Fuzz} \cite{Godefroid17}. These generally generate input grammars or grammar-like structures that are strictly meant as intermediate representations to be fed to a test generation engine and are not meant to be readable by a human. On the other hand, the result of our analysis is human-readable and can be used for other purposes than test generation, e.g., code specification and data cleaning.
Moreover, these approaches have to rely on samples of valid inputs, while our approach only needs the program to be analyzed. Another sample-free approach is \textsc{autogram} \cite{Hoschele16}, which uses dynamic data flow analysis to generate readable and usable grammars. One disadvantage of this approach is that it will skip parts of the input if these are not stored in some program variables (e.g., if a program scans over a comment). In contrast, in such a case, our approach will not know any value information about the skipped data but will still know that this data should be present in the data file (see the \emph{Magic Trick} example in Section~\ref{sec:implementation} for instance). To the best of our knowledge, ours is the first approach that uses static analysis to infer the input format of a given program. Moreover, contrary to the above grammar synthesis approaches, our approach infers \emph{semantic} (and not just syntactic) information on the input data of a program. Closest to ours is the work of Cheng and Rival \cite{Cheng15} on the static analysis of spreadsheet applications. They, however, only focused on type-related properties. Finally, the main difference compared to the inference of necessary preconditions proposed by Cousot et al. \cite{Cousot13} or the (bi-)abduction \cite{Calcagno11} used in tools like \textsc{Infer} \cite{Calcagno15} is that our analysis can also deal with inputs read at any point during the execution of the program (thus notably also inside loops whose execution may depend on other inputs --- this is where the need for the stack comes from, cf. Section~\ref{sec:stack}). \section{Conclusion and Future Work}\label{sec:conclusion} In this paper, we have proposed a parametric static shape analysis framework based on abstract interpretation for inferring semantic properties of the input data of data-processing programs. Specifically, our analysis automatically infers necessary conditions on the structure and values of the input data for the data-processing program to run successfully and correctly. It remains for future work to explore possible applications of the results of our analysis. In particular, we are interested in developing better grammar-based testing approaches. We are also interested in developing tools for assisting, guiding, or even automating the checking and cleaning of data. \bibliographystyle{abbrv}
\section{Introduction} Let us first give some notation. For a Banach space $X$, $B_X$, $S_X$ and $X^*$ will stand for its unit ball, its unit sphere and its dual space, respectively. All spaces are over the real field. A slice is a subset of $B_X$ of the form \begin{align*} S(x^*,\alpha)=\{x\in B_X: x^*(x)>1-\alpha\}, \end{align*} where $x^*\in S_{X^*}$ and $0<\alpha<1$. A topic now known as {\it Tingley's problem} or {\it the isometric extension problem} was first raised by D. Tingley \cite{3}. It is described as follows: let $T$ be a surjective isometry between $S_X$ and $S_Y$. Is it true that $T$ extends to a linear isometry $U : X\rightarrow Y$ of the corresponding spaces? Although this problem for general spaces remains unsolved even in dimension two, there is a number of publications devoted to Tingley's problem (say, Zentralblatt Math. shows 57 related papers published from 2002 to 2019). The positive answers for many classical Banach spaces were given in \cite{KM,THL} and the references therein. It is well worth mentioning that there is a fruitful series of recent papers dealing with Tingley's problem and related questions for operator algebras, for example, see \cite{BF,F1,F2}. The interested reader is referred to the survey \cite{Pe18} for more information on operator algebras, and for other recent contributions not considered in the survey, please see \cite{C,CA,KP,WH}. The notion of the Mazur-Ulam property was introduced by Cheng and Dong in \cite{CD}: a (real) Banach space $X$ is said to have {\it the Mazur-Ulam property} (MUP) if for every Banach space $Y$ every surjective isometry between $S_X$ and $S_Y$ extends to a real linear isometry from $X$ onto $Y$. Kadets and Mart\'{i}n \cite{KM} proved that all finite-dimensional polyhedral spaces (i.e., those spaces whose unit ball is a polyhedron) have the MUP. In order to show that a large class of Banach spaces enjoy the MUP, Tan, Huang and Liu introduced in \cite{THL} the notion of generalized-lushness. \begin{definition} A Banach space $X$ is said to be generalized-lush (GL) if for every $x\in S_{X}$ and every $\varepsilon>0$, there exists a slice $S_{x^*}:=S(x^{*},\varepsilon)$ with $x^{*}\in S_{X^{*}}$ such that \[x\in S_{x^*} \quad \mbox{and} \quad \mbox{dist}(y,S_{x^*})+\mbox{dist}(y,-S_{x^*})<2+\varepsilon \,\quad \mbox{for all}\,\ y\in S_{X}.\] \end{definition} This definition, at least for separable spaces, is a generalisation of the concept of lushness introduced in \cite{KVM}, which has a connection with the numerical index of operators. For more spaces with the MUP, the authors of \cite{THL} further introduced the concept of local-generalized-lushness. \begin{definition} A Banach space $X$ is said to be a local-GL-space if for every separable subspace $E\subset X$, there is a GL-subspace $F\subset X$ such that $E\subset F\subset X$. \end{definition} In \cite{THL}, it is shown that all local-GL-spaces (and consequently all GL-spaces, all lush spaces) possess the MUP. Moreover, many stability properties of GL-spaces are established in \cite{THL}; for example, it is established that the class of GL-spaces is stable under $c_0$, $l_1$ and $l^\infty$-sums (\cite[Theorem 2.11 and Proposition 2.12]{THL}) and that if $X$ is a GL-space then so is the space $C(K,X)$ of all continuous functions from any compact Hausdorff space $K$ into $X$ (\cite[Theorem 2.10]{THL}).
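As a trivial first illustration of generalized-lushness, consider $X=\mathbb{R}$ (a routine verification which we include only for intuition). For $x=1\in S_X$ and $0<\varepsilon<1$, the functional $x^*=\mathrm{id}_{\mathbb{R}}$ gives the slice $S_{x^*}=(1-\varepsilon,1]$, hence $-S_{x^*}=[-1,-1+\varepsilon)$, and for every $y\in S_X=\{-1,1\}$ we get \begin{equation*} \mbox{dist}(y,S_{x^*})+\mbox{dist}(y,-S_{x^*})=2-\varepsilon<2+\varepsilon. \end{equation*} The case $x=-1$ is symmetric.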
Later, Jan-David Hardtke \cite{H} proved that a large class of GL-spaces is stable under ultraproducts and under passing to a large class of $F$-ideals, in particular to $M$-ideals. Moreover, he introduced in \cite{H} (with the help of an anonymous referee, as is mentioned in \cite[2.4 Lush spaces]{H2}) the following (at least formally) weaker version of GL-spaces: \begin{definition}\label{def1} A Banach space $X$ is said to have the property $(**)$ if for all $x_1,x_2\in S_X$ and every $\varepsilon>0$, there exists a slice $S_{x^*}:=S(x^{*},\varepsilon)$ with $x^{*}\in S_{X^{*}}$ such that \begin{equation}\label{equ:24} x_1\in S_{x^*}\,\,\, \mbox{and}\,\, \mbox{dist}(x_2,S_{x^*})+\mbox{dist}(x_2,-S_{x^*})<2+\varepsilon. \end{equation} \end{definition} Throughout what follows, we shall freely use without explicit mention the elementary fact that Definition \ref{def1} is equivalent to the one where the assumption $x_1,x_2\in S_X$ is replaced by $x_1\in S_X$ and $x_2\in B_X$. It should be remarked that the following observations were made in \cite{H}. \begin{enumerate} \item Every lush space has the property ($**$). \item For separable spaces, ($**$) is equivalent to GL. \item Every space with the property ($**$) has the MUP. \end{enumerate} Very recently, a stability result stating that if $X$ has the property ($**$), then $L_1(\mu, X)$ and $L_\infty(\mu,X)$ also have the property ($**$), where $(\Omega,\Sigma,\mu)$ is a $\sigma$-finite measure space, has been proved in \cite[Theorem 4.8]{H2} by a reduction theorem. In fact, this reduction theorem is shown in \cite{H2} for a large class of spaces that enjoy a certain type of geometric properties, such as octahedrality, almost squareness, lushness, the Daugavet property and so on. Earlier, stronger stability results for lushness had already been stated in the recent monograph \cite{MRK}: $C(K,X)$ is lush if and only if $X$ is, and the same results hold for $L_1(\mu, X)$ and $L_\infty(\mu,X)$. The aim of this paper is to demonstrate that these results remain true for the property ($**$) in the same spaces. Let us make a comment here on vector-valued function spaces for GL-spaces. We only know that if $X$ is a GL-space, then so are $C(K,X)$ (\cite[Theorem 2.10]{THL}) and $L_1(\mu,X)$ (\cite[Theorem 5.1]{H2}). It is not known whether this is true for $L_\infty(\mu, X)$, nor whether $X$ is a GL-space whenever $C(K,X)$, $L_1(\mu,X)$ or $L_\infty(\mu,X)$ is a GL-space, where $X$ is non-separable. Throughout the paper, given a compact Hausdorff topological space $K$ and a Banach space $X$, $C(K, X)$ is the Banach space of all continuous functions from $K$ into $X$ endowed with the supremum norm. Given a $\sigma$-finite measure space $(\Omega,\Sigma,\mu)$, for $A\in\Sigma$, $\chi_A$ is the characteristic function of $A$, and for a Banach space $X$, $L_\infty(\mu, X)$ is the Banach space of all (classes of) measurable functions $f$ from $\Omega$ into $X$ which are essentially bounded, endowed with the essential supremum norm \begin{center} $\|f\|_{\infty}=\mbox{ess sup}\{\|f(t)\|:t\in\Omega\}$. \end{center} $L_1(\mu, X)$ is the Banach space of all (classes of) Bochner-integrable functions from $\Omega$ into $X$, endowed with the integral norm $$ \|f\|_1=\int_\Omega \|f(t)\|\,d\mu(t).$$ \section{The results} Our aim is to present several results concerning the property ($**$) for vector-valued function spaces. We begin with the spaces of continuous functions.
The proof of the ``only if'' part of the following result is an easy adaptation of \cite[Theorem 2.10]{THL}. We present it here for completeness. \begin{theorem}\label{CK-theorem} Let $K$ be a compact Hausdorff topological space, and let $X$ be a Banach space. Then $X$ has the property ($**$) if and only if $C(K,X)$ has the property ($**$). \end{theorem} \begin{proof} We first show the ``only if'' part. Let $f_{1},f_2 \in S_{C(K,X)}$ and $\varepsilon>0$. It is clear that there exists a $t_{0}\in K$ such that $\|f_{1}(t_{0})\|=1$. Since $X$ has the property ($**$), there exists a slice $S_{x^*}:=S(x^{*},\frac{\varepsilon}{4})$ with $x^{*}\in S_{X^{*}}$ such that $f_{1}(t_{0})\in S_{x^*}$ and \[\mbox{dist}\left(f_{2}(t_{0}),S_{x^{*}}\right)+\mbox{dist}\left(f_{2}(t_{0}),-S_{x^{*}}\right)<2+\frac{\varepsilon}{4}.\] Namely, we can find $y_{1}\in S_{x^{*}}$ and $y_{2}\in -S_{x^{*}}$ such that \[\left\|f_{2}(t_{0})-y_1\right\|+\left\|f_{2}(t_{0})-y_{2}\right\|<2+\frac{\varepsilon}{2}.\] Define a functional $f^*\in S_{C(K,X)^{*}}$ by $f^{*}(f)=x^{*}(f(t_{0}))$ for every $f\in C(K,X)$. Obviously, $f_{1}\in S_{f^*}:=S(f^*,\varepsilon)$, and by Urysohn's lemma there is a continuous map $\phi:K\rightarrow [0,1]$ which satisfies \begin{center} $\phi(t_{0})=1$ \quad and \quad $\phi(t)=0$ \quad if $\|f_2(t)-f_{2}(t_{0})\|\geqslant\frac{\varepsilon}{4}$. \end{center} Let $g_{i}(t)=\phi(t)y_{i}+(1-\phi(t))f_{2}(t)$ for every $t\in K$ and for $i=1,2$. Then it is easily checked that $g_{1}\in S_{f^*}$ and $g_{2}\in -S_{f^*}$. Moreover, \begin{equation*} \|g_1-f_2\|+\|f_2-g_2\|<2+\varepsilon. \end{equation*} Hence $C(K,X)$ has the property ($**$). Now let us prove the ``if'' part. For any $x_{1},x_{2}\in S_{X}$, let $f_{1}=x_{1}\chi_{K}$ and $f_{2}=x_{2}\chi_{K}$. Then we have $f_{1},f_{2}\in S_{C(K,X)}$. Since $C(K,X)$ has the property ($**$), for every $\varepsilon>0$ there exists an $f^{*}\in S_{C(K,X)^{*}}$ such that $f_{1}\in S_{f^*}:=S(f^{*},\frac{\varepsilon}{8})$ and \[\mbox{dist}\left(f_{2},S_{f^*}\right)+\mbox{dist}\left(f_{2},-S_{f^*}\right)<2+\frac{\varepsilon}{8}.\] This means that there are $g_{1},-g_{2}\in S_{f^*}$ such that \begin{equation*} \|f_{2}-g_{1}\|+\|f_{2}-g_{2}\|<2+\frac{\varepsilon}{4}. \end{equation*} Note that we can find a $t_{0}\in K$ such that $\|g_{1}-g_{2}+x_{1}\chi_{K}\|=\|g_{1}(t_{0})-g_{2}(t_{0})+x_{1}\|$. By the Hahn-Banach theorem, there exists an $x^{*}\in S_{X^{*}}$ such that \begin{equation*} x^{*}(g_{1}(t_{0})-g_{2}(t_{0})+x_{1})=\|g_{1}(t_{0})-g_{2}(t_{0})+x_{1}\|. \end{equation*} Set $y_{1}:=g_{1}(t_{0})$, $y_{2}:=g_{2}(t_{0})$ and $S_{x^*}:=S(x^{*},\varepsilon)$. We deduce from \begin{equation*} \|g_{1}(t_{0})-g_{2}(t_{0})+x_{1}\|\geq f^{*}(g_{1}-g_{2}+x_{1}\chi_{K})>3-\frac{\varepsilon}{2} \end{equation*} that $x^{*}(x_{1})>1-\varepsilon/2$. Otherwise, \begin{align*} 3-\frac{\varepsilon}{2} &<\|g_{1}(t_{0})-g_{2}(t_{0})+x_{1}\|\\&=x^{*}(g_{1}(t_{0})-g_{2}(t_{0})+x_{1})\leq1+1+1-\frac{\varepsilon}{2}=3-\frac{\varepsilon}{2}, \end{align*} a contradiction. Thus $x_{1}\in S_{x^*}$. In a similar way, we can obtain $y_{1},-y_{2}\in S_{x^*}$. Moreover, it is easy to see that \begin{align*} \|x_{2}-y_{1}\|+\|x_{2}-y_{2}\|&=\|f_{2}(t_0)-g_{1}(t_{0})\|+\|f_{2}(t_0)-g_{2}(t_{0})\|\\ &\leq\|f_{2}-g_{1}\|+\|f_{2}-g_{2}\|<2+\varepsilon. \end{align*} So $X$ has the property ($**$). The proof is complete. \end{proof} We next deal with the property ($**$) for $L_\infty(\mu,X)$ and $L_1(\mu,X)$; before doing so, we pause for a toy numerical illustration of the property ($**$) itself.
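The following brute-force Python sketch (an illustration added here, not part of any proof) searches numerically for the slice required by \eqref{equ:24} in the two-dimensional space $\ell_\infty^2=C(\{1,2\})$, which is lush and hence enjoys ($**$); the grid resolutions and the value of $\varepsilon$ are arbitrary choices of this sketch.
\begin{verbatim}
import itertools
import numpy as np

EPS = 0.25                                   # the epsilon in (**)
GRID = np.linspace(-1.0, 1.0, 201)
BALL = np.array(list(itertools.product(GRID, GRID)))  # net in B_X

def sup_norm(v):
    return np.max(np.abs(v), axis=-1)

def dist(point, members):
    return sup_norm(members - point).min()

def find_slice(x1, x2, duals):
    """Search x* in S_{X*} with x1 in S(x*,EPS) and
    dist(x2, S) + dist(x2, -S) < 2 + EPS."""
    for xs in duals:
        if x1 @ xs <= 1.0 - EPS:
            continue                         # x1 not in this slice
        vals = BALL @ xs
        S, mS = BALL[vals > 1.0 - EPS], BALL[vals < EPS - 1.0]
        if len(S) and len(mS) and dist(x2, S) + dist(x2, mS) < 2 + EPS:
            return xs
    return None

# a net in the dual sphere, i.e. the l_1 unit sphere of R^2
ts = np.linspace(0.0, 1.0, 101)
duals = [np.array([s1 * t, s2 * (1.0 - t)])
         for t in ts for s1 in (1, -1) for s2 in (1, -1)]

rng = np.random.default_rng(0)
for _ in range(5):
    x1, x2 = rng.uniform(-1.0, 1.0, (2, 2))
    x1, x2 = x1 / sup_norm(x1), x2 / sup_norm(x2)  # points of S_X
    print("slice found:", find_slice(x1, x2, duals) is not None)
\end{verbatim}
A suitable slice always turns up, typically one close to a coordinate functional $\pm e_i^*$, in line with the fact that $C(K)$-spaces are lush.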
Very recently, it has been shown in \cite[Theorem 4.8]{H2} that if $X$ has the property ($**$), then $L_1(\mu, X)$ and $L_\infty(\mu,X)$ also have the property ($**$). In fact, an even more general reduction theorem is proved in \cite{H2} for a large class of spaces, such as octahedral and almost square spaces, lush spaces and so on. However, we do not think that the converse of the previous result (that is, if $L_1(\mu,X)$ or $L_\infty(\mu,X)$ has the property ($**$), then so does $X$) can be deduced from this reduction theorem. Additionally, it may be worthwhile to provide a direct proof of the fact that $L_1(\mu,X)$ and $L_\infty(\mu,X)$ enjoy the property ($**$) whenever $X$ does. To simplify matters, we will use the following notation in the proofs of the theorems: \begin{equation*} \Sigma^+:=\{A\in\Sigma: 0<\mu(A)<\infty\}. \end{equation*} \begin{theorem} \label{infty-theorem} Let $X$ be a Banach space, and let $(\Omega,\Sigma,\mu)$ be a $\sigma$-finite measure space. Then $X$ has the property ($**$) if and only if $L_{\infty}(\mu,X)$ has the property ($**$). \end{theorem} \begin{proof} Suppose first that $X$ has the property ($**$). Let $f_{1},f_{2}\in S_{L_{\infty}(\mu,X)}$ and $\varepsilon>0$. Note that every function in $L_\infty(\mu,X)$ is essentially separably valued. Thus there are $A_1\in\Sigma^+$ and $x_1\in S_X$ such that \begin{equation*} \|x_1\chi_{A_1}-f_1\chi_{A_1}\|_\infty<\frac{\varepsilon}{4}. \end{equation*} Consider the function $f_2\chi_{A_1}$. We may also find $A_{2}\in\Sigma^+$ and $x_{2}\in B_X$ such that $A_2\subset A_1$ and $$\|f_{2}\chi_{A_{2}}-x_{2}\chi_{A_{2}}\|_\infty<\frac{\varepsilon}{4}.$$ Since $X$ has the property ($**$), we can find $x^*\in S_{X^*}$ such that $x_1\in S_{x^*}:=S(x^*,\varepsilon/4)$ and $y_1,-y_2\in S_{x^*}$ satisfying \begin{equation*} \|y_1-x_2\|+\|x_2-y_2\|<2+\frac{\varepsilon}{4}. \end{equation*} With $A_2$ and $x^*$ in hand, we can define a functional $f^*\in S_{L_\infty(\mu,X)^*}$ by \begin{equation*} f^*(f)=x^*\Big(\frac{1}{\mu(A_2)}\int_{A_2}f d\mu\Big) \end{equation*} for all $f\in L_\infty(\mu,X)$. Set $g_1:=y_1\chi_{A_2}+f_2\chi_{\Omega\setminus A_2}$ and $g_2:=y_2\chi_{A_2}+f_2\chi_{\Omega\setminus A_2}$. Then it is obvious that $f_1, g_1,-g_2\in S_{f^*}:=S(f^*,\varepsilon)$ and \begin{align*} \|g_1-f_2\|_\infty+\|f_2-g_2\|_\infty&=\|y_1\chi_{A_2}-f_2\chi_{A_2}\|_\infty+\|f_2\chi_{ A_2}-y_2\chi_{A_2}\|_\infty\\ &\leq\|y_1-x_2\|+\|x_2-y_2\|+\frac{1}{2}\varepsilon<2+\varepsilon. \end{align*} This thus proves that $L_\infty(\mu,X)$ has the property ($**$). Now we deal with the converse. Fix $x_{1},x_{2}\in S_{X}$, $A\in\Sigma^+$ and $\varepsilon>0$. Set $f_{1}=x_{1}\chi_{A}$ and $f_{2}=x_{2}\chi_{A}$. That $L_{\infty}(\mu,X)$ has the property ($**$) produces $f^{*}\in S_{L_{\infty}(\mu,X)^{*}}$ such that $f_{1}\in S_{f^*}:=S(f^{*},\frac{\varepsilon}{8})$ and $g_1,-g_2\in S_{f^*}$ such that \begin{equation*} \|g_1-f_{2}\|_\infty+\|f_{2}-g_{2}\|_\infty<2+\frac{\varepsilon}{4}. \end{equation*} Observe that $\|g_1-g_2+f_1\|_\infty\geq f^*(g_1-g_2+f_1)>3-\varepsilon/2$. Therefore, there exists $B\subset A$ with $B\in\Sigma^+$ such that \begin{equation*} \|g_{1}(t)-g_{2}(t)+x_1\|>3-\frac{\varepsilon}{2} \end{equation*} for all $t\in B$. Similar arguments as above show that there are $y_1,y_2\in B_X$ and $C\in \Sigma^+$ such that $C\subset B$ and \begin{align*} \|y_1\chi_C-g_1\chi_C\|_\infty<\frac{\varepsilon}{8} \,\, \mbox{and} \,\, \|y_2\chi_C-g_2\chi_C\|_\infty<\frac{\varepsilon}{8}. \end{align*} It follows that \begin{equation*} \|y_1-y_2+x_1\|>3-\varepsilon.
\end{equation*} The Hahn-Banach theorem ensures that there is a functional $x^*\in S_{X^{*}}$ such that \begin{equation*} x^*(y_1-y_2+x_1)>3-\varepsilon. \end{equation*} It follows that $y_1,-y_2,x_1\in S(x^*,\varepsilon)$, and moreover, using that $f_2=x_2\chi_A$ and that $y_1,y_2$ approximate $g_1,g_2$ on $C\subset A$ up to $\varepsilon/8$, \begin{equation*} \|y_1-x_2\|+\|x_2-y_2\|<2+\varepsilon. \end{equation*} Thus $X$ has the property ($**$). \end{proof} In fact, a minor modification of the proof of \cite[Proposition 2.2]{THL} can provide a stronger conclusion. This conclusion yields the equivalence of generalized-lushness and the property ($**$) for separable spaces, which was previously noted in \cite{H}. We also apply it to show that $X$ has the property ($**$) whenever $L_1(\mu,X)$ does. Thus, for our particular use, we include its proof here. Given a Banach space $X$, a subset $G \subset X^*$ is called norming if $\|x\|=\sup\{ |x^*(x)| : x^*\in G \}$ for every $x\in X$. \begin{proposition}\label{propsition:1} Let $X$ be a Banach space having the property ($**$), and let $X_0\subset X$ be a separable subspace. Suppose that $G\subset S_{X^*}$ is norming and symmetric. Then for every $\varepsilon>0$, the set \begin{align*} \{x^*\in G: \mbox{dist}(y, S)+\mbox{dist}(y,-S)<2+\varepsilon \ \mbox{ for all}\ y\in S_{X_0} , \mbox{ where } S=S(x^*, \varepsilon )\} \end{align*} is a weak$^*$ $G_\delta$-dense subset of the weak$^*$ closure of $G$. In particular, if $X$ is separable, then $X$ is a GL-space. \end{proposition} \begin{proof} Let $\{y_n\}\subset S_{X_0}$ be a sequence dense in $S_{X_0}$. Fix $0<\varepsilon<1$. Given $n\geq1$, set \begin{align*} K_n=\{x^*\in G: \ \ \mbox{dist}(y_n, S)+\mbox{dist}(y_n,-S)<2+\varepsilon\ \ \mbox{where} \ \ S=S(x^*, \varepsilon )\}. \end{align*} Then $K_n$ is weak$^*$-open and $\overline{K_n}^{\omega^*}=\overline{G}^{\omega^*}$. Indeed, if $x^*\in K_n$, there exist $x_n\in S(x^*,\varepsilon)$ and $z_n\in -S(x^*,\varepsilon)$ such that \begin{align*} \|x_n-y_n\|+\|y_n-z_n\|<2+\varepsilon. \end{align*} Let $$U=\{y^*\in G: y^*(x_n)>1-\varepsilon \ \ \mbox {and} \ \ y^*(-z_n)>1-\varepsilon\}.$$ Then it is easily checked that $U$ is a weak$^*$-neighborhood of $x^*$ in $G$ satisfying $U\subset K_n$. Thus $K_n$ is weak$^*$-open. To prove $\overline{K_n}^{\omega^*}=\overline{G}^{\omega^*}$, it is enough to show that $G\subset \overline{K_n}^{\omega^*}$. Since \cite[Lemma 3.40]{F} states that for every $x^*\in G$, the weak$^*$-slices containing $x^*$ form a neighborhood base of $x^*$, it suffices to prove that for every $x\in S_X$ and every $\varepsilon_1\in (0, \varepsilon)$, the weak$^*$-slice $S(x,\varepsilon_1)$ satisfies $S(x,\varepsilon_1)\cap K_n\neq\emptyset$. Since $X$ has the property ($**$), there is a slice $S_{y^*}:=S(y^*,\varepsilon_1/3)$ with $y^*\in S_{X^*}$ such that \begin{align*} x\in S_{y^*}\ \ \mbox{and}\ \ \mbox{dist}(y_n,S_{y^*})+\mbox{dist}(y_n,-S_{y^*})<2+\varepsilon_1. \end{align*} Thus we may find $x_n'\in S_{y^*}$ and $z_n' \in -S_{y^*}$ such that \begin{align*} \|x_n'-y_n\|+\|y_n-z_n'\|<2+\varepsilon_1 \ \ \mbox{and}\ \ \|x+x_n'-z_n'\|>3-\varepsilon_1. \end{align*} Note that $G$ is norming and symmetric. Thus there is a $z^*\in G$ such that \begin{align*} z^*(x+x_n'-z_n')>3-\varepsilon_1. \end{align*} This implies that $z^*\in S(x, \varepsilon_1)\cap K_n$. Now set $K=\bigcap_{n\in\mathbb{N}} K_n$. Then by the Baire theorem, $K$ is a weak$^*$ $G_\delta$-dense subset of $\overline{G}^{\omega^*}$. This together with the density of $(y_n)$ in $S_{X_0}$ gives the first conclusion, and the second conclusion is clear. \end{proof} Let us make a remark here.
Proposition \ref{propsition:1} combined with Theorem \ref{CK-theorem} establishes that if $C(K,X)$ is a GL-space, then so is $X$, under the assumption that $X$ is separable. The same result holds for the space $L_\infty(\mu,X)$. We do not know whether this is true in general. Throughout what follows, we will use the notation \begin{equation*} \mathcal{S}(x^*,\alpha):=\{x\in X: x^*(x)>\|x\|-\alpha\}, \end{equation*} where $x^*\in S_{X^*}$ and $0<\alpha<1$. In this notation, it is obvious that $\mathcal{S}(x^*,\alpha)$ contains the slice $S(x^*,\alpha)$, since $x^*(x)>1-\alpha\geq\|x\|-\alpha$ for every $x\in B_X$. To show that $X$ has the property ($**$) whenever $L_1(\mu,X)$ does, we need some more lemmas. \begin{lemma}\label{lem:2} Let $X$ be a Banach space, and let $y$ be in $S_X$. For every $0<\varepsilon<1$, if there are $x^*\in S_{X^*}$, $x_1\in \mathcal{S}(x^*,\varepsilon/3)$, $x_2\in -\mathcal{S}(x^*,\varepsilon/3)$ such that \begin{equation*} \|x_1-y\|+\|y-x_2\|<\|x_1\|+\|x_2\|+\frac{\varepsilon}{3}, \end{equation*} then we have $x_1-y, y-x_2\in \mathcal{S}(x^*, \varepsilon)$. \end{lemma} \begin{proof} The proofs of the two assertions $x_1-y\in \mathcal{S}(x^*, \varepsilon)$ and $y-x_2\in \mathcal{S}(x^*, \varepsilon)$ are completely analogous, so it is enough to prove the first one. Assume, on the contrary, that $x^*(x_1-y)\leq \|x_1-y\|-\varepsilon$. Then \begin{align*} \|x_1-y\|+\|y-x_2\|&\geq x^*(x_1-y)+\varepsilon+x^*(y-x_2)\\&=x^*(x_1-x_2)+\varepsilon>\|x_1\|+\|x_2\|+\frac{\varepsilon}{3}. \end{align*} This contradiction completes the proof. \end{proof} \begin{remark} One can easily check that a converse version of the previous lemma remains true. To be precise, if $x_1-y, y-x_2\in \mathcal{S}(x^*, \varepsilon)$, then \begin{equation*} \|x_1-y\|+\|y-x_2\|\leq x^*(x_1-y)+x^*(y-x_2)+2\varepsilon\leq \|x_1\|+\|x_2\|+2\varepsilon. \end{equation*} This observation actually provides an approach to finding a slice which satisfies \eqref{equ:24}. \end{remark} A simple but very useful numerical result appears in \cite[Lemma 8.13]{MRK}. We will also apply it to deal with the property ($**$) in the space $L_1(\mu,X)$. We give the proof for the sake of completeness. \begin{lemma}\label{lem:1} Let $\varepsilon>0$, $\delta>0$, and let $\lambda_i\geq 0$ for all $i=1,\cdots,n$. Suppose that $\alpha_i, \beta_i\in\mathbb{R}$ are such that $\alpha_i\leq\beta_i$ for all $i=1,\cdots,n$ and satisfy $(\sum_{i=1}^n\lambda_i\beta_i)-\varepsilon \delta<\sum_{i=1}^n\lambda_i\alpha_i$. Then \begin{equation*} \sum\{\lambda_i: \beta_i-\alpha_i\geq\varepsilon\}<\delta. \end{equation*} In particular, if $\sum_{i=1}^n\lambda_i=1$, then \begin{equation*} \sum\{\lambda_i: \beta_i-\alpha_i<\varepsilon\}>1-\delta. \end{equation*} \end{lemma} \begin{proof} Set $I=\{1\leq i\leq n:\beta_i-\alpha_i\geq\varepsilon\}$. Then it is easily seen that \begin{align*} \sum_{i=1}^{n}\lambda_i\beta_i=\sum_{i\in I} \lambda_i\beta_i+\sum_{i\notin I} \lambda_i\beta_i&\geq\sum_{i\in I} \lambda_i(\alpha_i+\varepsilon)+\sum_{i\notin I} \lambda_i\alpha_i\\&=\sum_{i=1}^{n}\lambda_i\alpha_i+\varepsilon\sum_{i\in I} \lambda_i. \end{align*} It follows immediately from this and the hypothesis that $\sum_{i\in I} \lambda_i<\delta$. The second conclusion is obvious. \end{proof} The same results as \cite[Theorem 2.11]{THL} also hold for the property ($**$). Although the proofs are actually analogous to those of \cite[Theorem 2.11]{THL}, we give the proof of the $l_1$-sum case since this result is necessary in what follows; a quick numerical sanity check of Lemma \ref{lem:1} is sketched first below.
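This Python check (added for illustration only; the random ranges are arbitrary) exercises Lemma \ref{lem:1} on random instances satisfying its hypothesis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
tested = 0
for _ in range(100000):
    n = rng.integers(1, 20)
    eps, delta = rng.uniform(0.01, 1.0, 2)
    lam = rng.uniform(0.0, 1.0, n)           # lambda_i >= 0
    alpha = rng.uniform(-1.0, 1.0, n)
    beta = alpha + rng.uniform(0.0, 2.0, n)  # alpha_i <= beta_i
    if lam @ beta - eps * delta >= lam @ alpha:
        continue                             # hypothesis not met
    tested += 1
    assert lam[beta - alpha >= eps].sum() < delta
print("hypothesis met in", tested, "cases; no counterexample")
\end{verbatim}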
\begin{proposition}\label{proposition2} Let $\{E_\lambda : \lambda\in\Lambda\}$ be a family of Banach spaces, and let $E = [\bigoplus_{\lambda\in\Lambda} E_\lambda]_{F}$ where $F=c_0, \, l_\infty \ \mbox{or } \, l_1$. Then $E$ has the property ($**$) if and only if each $E_\lambda$ has the property ($**$). \end{proposition} \begin{proof} In the $l_1$-sum case, we first show the ``if'' part. Given $x=(x_\lambda), y=(y_\lambda)\in S_E$ and $\varepsilon>0$, for each $\lambda$ with $x_\lambda\neq 0$, there is a corresponding slice $S_\lambda:=S(x_\lambda^*, \varepsilon)$ with $x_\lambda^*\in S_{E_\lambda^*}$ such that \begin{align*} x_\lambda^*(x_\lambda)>(1-\varepsilon )\|x_\lambda\| \ \mbox{and} \ \ \mbox{dist}(\frac{y_\lambda}{\|y_\lambda\|}, S_\lambda)+\mbox{dist}(\frac{y_\lambda}{\|y_\lambda\|},-S_\lambda)<2+\varepsilon, \end{align*} whenever $y_\lambda\neq0$. Then $x^*=(x_\lambda^*)\in S_{E^*}$ with $x_\lambda^*=0$ whenever $x_\lambda=0$, and the required slice satisfying (\ref{equ:24}) is $S(x^*,\varepsilon)$. Therefore $E$ has the property ($**$). For the ``only if'' part, fix $x_\lambda,y_\lambda \in S_{E_\lambda}$ and $0<\varepsilon<1/16$. Then $x=(x_\delta),y=(y_\delta)\in S_E$ where $x_\delta=y_\delta=0$ for all $\delta\neq \lambda$. Since $E$ has the property ($**$), there is an $x^*=(x_\delta^*)\in S_{E^*}$ with $S:=S(x^*,\varepsilon^2/4)$ such that $$ x\in S\ \ \mbox{and} \ \ \mbox{dist}(y, S)+\mbox{dist}(y,-S)<2+\frac{\varepsilon^2}{4}.$$ We will prove that the slice $S_\lambda:=S(x_\lambda^*/\|x_\lambda^*\| , \varepsilon)$ is the desired one. It is easily checked that $x_\lambda\in S_\lambda$ and there are $u=(u_\delta)\in S$ and $v=(v_\delta)\in -S$ such that \begin{align}\label{equ:30} \|y-u\|+\|y-v\|<2+\frac{\varepsilon^2}{4}. \end{align} It follows from the definition of $E$ that \begin{align} \|y-u\|+\|y-v\|&=\|y_\lambda-u_\lambda\|+\sum_{\delta\neq \lambda}\|u_\delta\|+\|y_\lambda-v_\lambda\|+\sum_{\delta\neq \lambda}\|v_\delta\| \nonumber\\ &>\|y_\lambda-u_\lambda\|+1-\varepsilon^2/4-\|u_\lambda\|+\|y_\lambda-v_\lambda\|+1-\varepsilon^2/4-\|v_\lambda\|\nonumber \\ &=\|y_\lambda-u_\lambda\|-\|u_\lambda\|+\|y_\lambda-v_\lambda\|-\|v_\lambda\|+2-\varepsilon^2/2.\label{equ:31} \end{align} We deduce from (\ref{equ:30}) and (\ref{equ:31}) that \begin{align}\label{equ:33} \|y_\lambda-u_\lambda\|+\|y_\lambda-v_\lambda\|<\|u_\lambda\|+\|v_\lambda\|+\varepsilon^2. \end{align} On the other hand, \begin{align}\label{equ:34} x_\lambda^*(u_\lambda)>1-\varepsilon^2/4-\sum_{\delta\neq \lambda}\|u_\delta\|\geq1-\varepsilon^2/4-1+\|u_\lambda\|=\|u_\lambda\|-\varepsilon^2/4, \end{align} and similarly, \begin{align}\label{equ:35} x_\lambda^*(-v_\lambda)>\|v_\lambda\|-\frac{\varepsilon^2}{4}. \end{align} We apply \eqref{equ:33}, \eqref{equ:34}, \eqref{equ:35} and Lemma \ref{lem:2} to get that \begin{equation}\label{equ:36} x_\lambda^*(u_\lambda-y_\lambda)\geq \|u_\lambda-y_\lambda\|-3\varepsilon^2 \end{equation} and \begin{equation*} x_\lambda^*(y_\lambda-v_\lambda)\geq \|y_\lambda-v_\lambda\|-3\varepsilon^2. \end{equation*} Therefore, if $\|u_\lambda\|\leq \varepsilon/4$, \eqref{equ:36} yields \begin{equation*} x^*_\lambda(-y_\lambda)>1-\varepsilon/4-3\varepsilon^2-\varepsilon/4>1-\varepsilon. \end{equation*} This means that $-y_\lambda\in S_\lambda$. Clearly it satisfies $$\mbox{dist}(y_\lambda, S_\lambda)+\mbox{dist}(y_\lambda, -S_\lambda)\leq2<2+\varepsilon.$$ A similar result holds in the case of $\|v_\lambda\|\leq \varepsilon/4$.
It remains to consider the case that $\|u_\lambda\|>\varepsilon/4$ and $\|v_\lambda\|>\varepsilon/4$. Put $w_\lambda:=u_\lambda/\|u_\lambda\|$ and $t_\lambda:=v_\lambda/\|v_\lambda\|$. Then $w_\lambda, -t_\lambda \in S_\lambda$, following from \eqref{equ:34} and \eqref{equ:35} respectively. The desired estimate \begin{align*} \|y_\lambda-w_\lambda\|+\|y_\lambda-t_\lambda\|<2+\varepsilon \end{align*} follows directly from (\ref{equ:33}). The proof is complete. \end{proof} Now we are ready to work with the property ($**$) for the space $L_1(\mu, X)$. \begin{theorem} Let $X$ be a Banach space, and let $(\Omega,\Sigma,\mu)$ be a $\sigma$-finite measure space. Then $X$ has the property ($**$) if and only if $L_{1}(\mu,X)$ has the property ($**$). \end{theorem} \begin{proof} Since $L_1(\mu,X)$ is isometrically isomorphic to an $l_1$-sum of spaces $L_1(\mu_i,X)$ for some finite measures $\mu_i$, we deduce from Proposition \ref{proposition2} that it is enough to deal with finite measures, and by normalizing the measure, we may assume that $\mu(\Omega)=1$. Assume that $X$ has the property ($**$). To prove that so does $L_1(\mu, X)$, we will check that \eqref{equ:24} is satisfied. Given $f,g\in S_{L_1(\mu,X)}$ and $\varepsilon>0$, we apply \cite[Lemma III.2.1]{Die} to obtain a partition $\pi$ of $\Omega$ into a finite family of disjoint members of $\Sigma^+$ such that \begin{align}\label{equ:60} \|E_{\pi}(f)-f\|_1<\frac{\varepsilon}{8} \end{align} and \begin{align*} \|E_{\pi}(g)-g\|_1<\frac{\varepsilon}{8}, \end{align*} where $E_{\pi}:L_1(\mu,X)\rightarrow L_1(\mu,X)$ is a contractive projection given by \[E_{\pi}(h)=\sum_{A\in\pi}(\frac{1}{\mu(A)}\int_Ah\,d\mu)\chi_A\] for all $h\in L_1(\mu,X)$. We set $x_A:=\int_A f \,d\mu$ and $y_A:=\int_A g \,d\mu$. Since $X$ has the property ($**$), there exists an $x_A^*\in S_{X^*}$ with $S_{x_A^*}=S(x_A^*,\varepsilon)$ such that \begin{align}\label{x_A} x_A^*(x_A)\geq (1-\frac{\varepsilon}{2})\|x_A\| \end{align} and \begin{align*} \|y_A-\|y_A\|z_A^+\|+\|y_A-\|y_A\|z_A^-\|\leq(2+\frac{\varepsilon}{2})\|y_A\|, \end{align*} where $z_A^+,-z_A^-\in S_{x_A^*}$. Now we can define a functional $f^*\in L_1(\mu,X)^*$ by $$f^*(h)=\sum_{A\in\pi}x_A^*(\int_A h\,d\mu)$$ for all $h\in L_1(\mu,X)$. Then clearly $f^*\in S_{L_1(\mu,X)^*}$. We will check that the slice $S_{f^*}=S(f^*,\varepsilon)$ is the desired one. Observe that $f\in S_{f^*}$ is an immediate consequence of (\ref{x_A}) and \eqref{equ:60}. Consider the functions $h^+, h^-\in L_1(\mu,X)$ defined by \[h^+=\sum_{A\in\pi}(\frac{\|y_A\|}{\mu(A)}z^+_A)\chi_A \ \ \mbox{and}\ \ h^-=\sum_{A\in\pi}(\frac{\|y_A\|}{\mu(A)}z^-_A)\chi_A.\] By the definition of $f^*$ and the partition $\pi$, we see that $$h^{+}\in S_{f^*} \,\ \mbox{and}\ \, h^{-}\in -S_{f^*}.$$ Furthermore, an easy computation shows that \begin{align*} &\|g-h^+\|_1+\|g-h^-\|_1\\ \leq&\|E_{\pi}(g)-g\|_1+\|E_{\pi}(g)-h^+\|_1+\|E_{\pi}(g)-g\|_1+\|E_{\pi}(g)-h^-\|_1\\ \leq&2+\varepsilon/2+\varepsilon/4<2+\varepsilon. \end{align*} This thus proves that $L_1(\mu,X)$ has the property ($**$). For the converse, we will draw an idea from \cite[Theorem 8.10.(b)]{MRK} where Lemma \ref{lem:1} is applied. Fix $x,y\in S_X$, and for every $0<\varepsilon<1/4$, choose $\eta\in(0,1)$ such that $\eta<(\varepsilon/4)^6$. It suffices to show that there is an $x^*\in S_{X^*}$ such that \eqref{equ:24} holds.
The hypothesis provides a $g^*\in S_{{L_{1}(\mu,X)}^*}$ such that $x\chi_{\Omega}\in S_{g^*}:=S(g^*, \eta^9/3)$ and \begin{equation*} \mbox{dist}(y\chi_{\Omega}, S_{g^*})+\mbox{dist}(y\chi_{\Omega}, -S_{g^*})<2+\frac{\eta^9}{3}. \end{equation*} This and the density of the simple functions in $L_{1}(\mu,X)$ imply that there exist simple functions $g_1\in S_{g^*}$ and $g_2\in -S_{g^*}$ such that \begin{equation}\label{equ:2} \|y\chi_{\Omega}-g_1\|_1+\|y\chi_{\Omega}-g_2\|_1<2+\frac{\eta^9}{3}. \end{equation} We may write $g_1=\sum_{i=1}^{n}x_i\chi_{A_i}\in S_{g^*}$ and $g_2=\sum_{i=1}^{n}y_i\chi_{A_i}\in-S_{g^*}$, where $x_i,y_i\in X$ and $\{A_i\}_{i=1}^{n}\subset \Sigma^+$ is a finite partition of $\Omega$. For each $i=1,\cdots,n$, define a functional $y^*_i\in X^*$ by \begin{equation*} y_i^*(z)=g^*\Big(\frac{z\chi_{A_i}}{\mu(A_i)}\Big) \quad ( z\in X). \end{equation*} Then it is clear that $\|y_i^*\|\leq 1$ for $i=1,\cdots,n$, and \begin{align} &\sum_{i=1}^{n}y_i^*(x)\mu(A_i)=g^*(x\chi_{\Omega})>1-\frac{\eta^9}{3},\label{equ:44} \\ &\sum_{i=1}^{n}y_i^*(x_i)\mu(A_i)=g^*(g_1)>1-\frac{\eta^9}{3}\label{equ:45} \end{align} and \begin{equation}\label{equ:46} \sum_{i=1}^{n}y_i^*(-y_i)\mu(A_i)=g^*(-g_2)>1-\frac{\eta^9}{3}. \end{equation} Furthermore, by \eqref{equ:2} and Lemma \ref{lem:2}, we have \begin{equation*} g^*(g_1-y\chi_\Omega)>\|g_1-y\chi_\Omega\|_1-\eta^9\,\, \mbox{and}\,\, g^*(y\chi_\Omega-g_2)>\|y\chi_\Omega-g_2\|_1-\eta^9. \end{equation*} That is, \begin{equation}\label{equ:47} \sum_{i=1}^{n} y_i^*(x_i-y)\mu(A_i)>\sum_{i=1}^{n}\|x_i-y\|\mu(A_i)-\eta^9 \end{equation} and \begin{equation}\label{equ:48} \sum_{i=1}^{n} y_i^*(y-y_i)\mu(A_i)>\sum_{i=1}^{n}\|y-y_i\|\mu(A_i)-\eta^9. \end{equation} Observe that $y^*_i(z)\leq\|z\|$ for all $z\in X$ and for each $i=1,\cdots,n$. Then applying Lemma \ref{lem:1} to the above inequalities \eqref{equ:44}-\eqref{equ:48}, we clearly get \begin{equation}\label{equ:38} \sum\{\mu(A_i): z_i\in \mathcal{S}(y_i^*, \eta^3)\}>1-\eta^6, \end{equation} for $\{z_i\}_{i=1}^{n}\in\Big\{\{x\}_{i=1}^n,\{x_i\}_{i=1}^n,\{-y_i\}_{i=1}^{n},\{x_i-y\}_{i=1}^{n},\{y-y_i\}_{i=1}^{n}\Big\}$. On the other hand, note that \begin{equation*} \sum_{i=1}^{n}\|x_i\|\mu(A_i)=\|g_1\|_1\leq 1. \end{equation*} Thus \begin{equation*} \sum\{\mu(A_i):\|x_i\|>1+ \eta^3\}<\frac{1}{1+\eta^3}. \end{equation*} So \begin{equation}\label{equ:11} \sum\{\mu(A_i):\|x_i\|\leq 1+ \eta^3\}>\frac{\eta^3}{1+\eta^3}. \end{equation} Since $\eta<\varepsilon^3/64<1/64$, we deduce from \eqref{equ:38} and \eqref{equ:11} that there is some $1\leq i_0\leq n$ such that \begin{equation}\label{equ:12} \|x_{i_0}\|\leq 1+ \eta^3 \end{equation} and \begin{equation}\label{equ:39} \{x,x_{i_0},-y_{i_0},x_{i_0}-y,y-y_{i_0}\}\subset \mathcal{S}(y_{i_0}^*,\eta^3). \end{equation} For our conclusion, the argument will be divided into three cases. If $\|x_{i_0}\|\leq \eta$, using that $\|y-x_{i_0}\|\geq 1-\eta$, we apply \eqref{equ:39} to conclude that \begin{align*} y^*_{i_0}(-y)&\geq\|x_{i_0}-y\|-\eta^3-y^*_{i_0}(x_{i_0})\\ &\geq 1-2\eta-\eta^3>1-\varepsilon. \end{align*} Then $x,-y\in S_{y_{i_0}^*}:=S(y_{i_0}^*,\varepsilon)$, and thus $\mbox{dist}(y,-S_{y_{i_0}^*})=0$. So \eqref{equ:24} is already verified. A similar proof shows that if $\|y_{i_0}\|\leq \eta$, then \begin{align*} y^*_{i_0}(y)\geq 1-2\eta-\eta^3>1-\varepsilon. \end{align*} It follows that $S_{y_{i_0}^*}$ is just the desired slice.
Note that $\|x_{i_0}\|\leq \eta$ and $\|y_{i_0}\|\leq \eta$ cannot hold simultaneously, so at most one of the two cases above can occur. Thus the remaining case that needs to be dealt with is that $\|x_{i_0}\|>\eta$ and $\|y_{i_0}\|>\eta$. This and \eqref{equ:39} guarantee that \begin{equation}\label{equ:50} y^*_{i_0}\Big(\frac{x_{i_0}}{\|x_{i_0}\|}\Big)>1-\eta^2 \end{equation} and \begin{equation*} y^*_{i_0}\Big(-\frac{y_{i_0}}{\|y_{i_0}\|}\Big)>1-\eta^2. \end{equation*} Moreover, \eqref{equ:39} combined with \eqref{equ:12} establishes that \begin{align} y^*_{i_0}\Big(\frac{x_{i_0}}{\|x_{i_0}\|}-y\Big)&\geq\|x_{i_0}-y\|-\eta^3-y^*_{i_0}\Big(x_{i_0}-\frac{x_{i_0}}{\|x_{i_0}\|}\Big)\nonumber\\ &\geq\|x_{i_0}-y\|-\eta^3-(\|x_{i_0}\|-1+\eta^2) \nonumber \\ &\geq\Big\|\frac{x_{i_0}}{\|x_{i_0}\|}-y\Big\|-\Big|1-\|x_{i_0}\|\Big|+1-\|x_{i_0}\|-\eta^3-\eta^2 \nonumber \\ &\geq\Big\|\frac{x_{i_0}}{\|x_{i_0}\|}-y\Big\|-2\eta^3-\eta^3-\eta^2 \nonumber \\ &>\Big\|\frac{x_{i_0}}{\|x_{i_0}\|}-y\Big\|-\varepsilon^9. \label{equ:15} \end{align} In fact, the proof will be done provided that \eqref{equ:12} also holds for $y_{i_0}$. However, this cannot be obtained directly. For this reason, we still need to consider the vector $y\chi_{A_{i_0}}/\mu(A_{i_0})\in S_{L_1(\mu,X)}$. Note that for each finite partition $\{A_1,\cdots, A_n\}$ of $\Omega$ and finitely many vectors $\{x_1,\cdots,x_n\}\subset S_X$, $X_0=\mbox{span}\{x_i\chi_{A_i}: 1\leq i\leq n\}$ is an $n$-dimensional Banach space. By Proposition \ref{propsition:1}, we may assume that there are simple functions $f^{+}, -f^{-} \in S(g^*,\eta^9/3)$ such that \begin{equation*} \|f^+-\frac{y\chi_{A_{i_0}}}{\mu(A_{i_0})}\|_1+\|\frac{y\chi_{A_{i_0}}}{\mu(A_{i_0})}-f^{-}\|_1<2+\frac{\eta^9}{3}. \end{equation*} We may write \begin{equation*} f^{+}=\sum_{j=1}^m x_{i_0,j}^{+}\chi_{A_{i_0,j}}+\sum_{j=m+1}^{k}x^+_{j}\chi_{B_j} \end{equation*} and \begin{equation*} f^{-}=\sum_{j=1}^m x_{i_0,j}^{-}\chi_{A_{i_0,j}}+\sum_{j=m+1}^{k}x^-_{j}\chi_{B_j}, \end{equation*} where $\{A_{i_0,1},\cdots, A_{i_0,m}, B_{m+1},\cdots,B_k\}\subset\Sigma^+$ is a finite partition of $\Omega$ such that $\cup_{j=1}^{m} A_{i_0,j}=A_{i_0}$. Similarly as above, define $y_{i_0,j}^*, x^*_j\in B_{X^*}$ respectively by \begin{equation*} y_{i_0,j}^*(z)=g^*\Big(\frac{z\chi_{A_{i_0,j}}}{\mu(A_{i_0,j})}\Big)\, \quad (z\in X, j=1,\cdots,m) \end{equation*} and \begin{equation*} x_{j}^*(z)=g^*\Big(\frac{z\chi_{B_{j}}}{\mu(B_{j})}\Big) \,\quad (z\in X, j=m+1,\cdots,k). \end{equation*} Since $g^*(f^+)>1-\eta^9$, this together with the observation that \begin{equation*} \sum_{j=m+1}^{k}x_j^*(x_{j}^+)\mu(B_j)\leq\sum_{j=m+1}^{k}\|x_{j}^+\|\mu(B_j) \end{equation*} yields \begin{align}\label{equ:17} \sum_{j=1}^{m}y_{i_0,j}^*\big(x_{i_0,j}^+\mu(A_{i_0})\big)\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})} =&\sum_{j=1}^{m}y_{i_0,j}^*(x_{i_0,j}^+)\mu(A_{i_0,j})\nonumber \\ >&\sum_{j=1}^{m}\|x_{i_0,j}^+\|\mu(A_{i_0,j})-\eta^9\nonumber\\ >&\sum_{j=1}^{m}\big\| x_{i_0,j}^+\mu(A_{i_0})\big\|\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})}-\eta^3 \nonumber\\ >&\sum_{j=1}^{m}\big\|x_{i_0,j}^+\mu(A_{i_0})\big\|\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})}-\varepsilon^9.
\end{align} Following similar lines as above, we conclude that \eqref{equ:17} also holds for $\{-x_{i_0,j}^-\}_{j=1}^{m}$, $\{x_{i_0,j}^+-y/\mu(A_{i_0})\}_{j=1}^{m}$ and $\{y/\mu(A_{i_0})-x_{i_0,j}^-\}_{j=1}^{m}$. Note that for every $z\in X$, we have \begin{equation*} y_{i_0}^*(z)=\sum_{j=1}^{m}y_{i_0,j}^*(z)\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})}. \end{equation*} Combining this with \eqref{equ:39}, \eqref{equ:50} and \eqref{equ:15} and noting $\eta<\varepsilon^6$, we obtain \begin{equation*} \sum_{j=1}^{m}y_{i_0,j}^*(z)\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})}>\|z\|-\varepsilon^9 \end{equation*} for all $z\in \{x,\frac{x_{i_0}}{\|x_{i_0}\|}, \frac{x_{i_0}}{\|x_{i_0}\|}-y\}$. Thus an application of Lemma \ref{lem:1} again guarantees that \begin{equation}\label{equ:40} \sum\{\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})}: z_j\in \mathcal{S}(y^*_{i_0,j}, \varepsilon^3)\}>1-\varepsilon^6, \end{equation} for \begin{equation*} \{z_j\}\in\Big\{\{x\}, \{\frac{x_{i_0}}{\|x_{i_0}\|}\}, \{\frac{x_{i_0}}{\|x_{i_0}\|}-y\}\Big\} \end{equation*} and \begin{equation*} \{z_j\}\in\Big\{\{x_{i_0,j}^+\mu(A_{i_0})\}, \{-x_{i_0,j}^-\mu(A_{i_0})\}, \{x_{i_0,j}^+\mu(A_{i_0})-y\},\{y-x_{i_0,j}^-\mu(A_{i_0})\}\Big\}. \end{equation*} (Here, we omit superscripts and subscripts when confusion is unlikely.) Observe from $\sum_{j=1}^{m}\|x_{i_0,j}\|\mu({A_{i_0,j}})\leq1$ that \begin{equation*} \sum\{\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})}: \|x_{i_0,j}\|>\frac{1+\varepsilon^3}{\mu(A_{i_0})}\}<\frac{1}{1+\varepsilon^3}. \end{equation*} Consequently, \begin{equation*} \sum\{\frac{\mu(A_{i_0,j})}{\mu(A_{i_0})}: \|x_{i_0,j}\|\leq\frac{1+\varepsilon^3}{\mu(A_{i_0})}\}>\frac{\varepsilon^3}{1+\varepsilon^3}. \end{equation*} This together with \eqref{equ:40} allows us to conclude that there is a $j_0\in\{1,\cdots,m\}$ such that \begin{equation}\label{equ:43} \|x_{i_0,j_0}^-\mu(A_{i_0})\|\leq1+\varepsilon^3 \end{equation} and \begin{equation}\label{equ:41} \Big\{x, \frac{x_{i_0}}{\|x_{i_0}\|}, \frac{x_{i_0}}{\|x_{i_0}\|}-y, z_{i_0,j_0}^+, -z_{i_0,j_0}^-, z_{i_0,j_0}^+-y, y-z_{i_0,j_0}^-\Big\}\subset \mathcal{S}(y_{i_0,j_0}^*,\varepsilon^3), \end{equation} where $z_{i_0,j_0}^+=x_{i_0,j_0}^+\mu(A_{i_0})$ and $z_{i_0,j_0}^-=x_{i_0,j_0}^-\mu(A_{i_0})$. Proceeding exactly as in the case where $y\chi_\Omega$ was considered, we have \begin{equation*} y_{i_0,j_0}^*(-y)>1-\varepsilon/2-\varepsilon^3>1-\varepsilon \quad \mbox{or} \quad y_{i_0,j_0}^*(y)>1-\varepsilon/2-\varepsilon^3>1-\varepsilon \end{equation*} under the condition that $\|z_{i_0,j_0}^+\|\leq \varepsilon/4$ or $\|z_{i_0,j_0}^-\|\leq\varepsilon/4$, respectively. We only need to settle the case where $\|z_{i_0,j_0}^+\|>\varepsilon/4$ and $\|z_{i_0,j_0}^-\|>\varepsilon/4$. An argument similar to the one leading to \eqref{equ:15}, now using \eqref{equ:43}, shows that \begin{equation*} y_{i_0,j_0}^*\Big(y-\frac{x_{i_0,j_0}^-}{\|x_{i_0,j_0}^-\|}\Big)>\Big\|y-\frac{x_{i_0,j_0}^-}{\|x_{i_0,j_0}^-\|}\Big\|-3\varepsilon^3-4\varepsilon^2. \end{equation*} On combining this with \eqref{equ:41}, we deduce that \begin{align*} &\Big\|\frac{x_{i_0}}{\|x_{i_0}\|}-y\Big\|+\Big\|y-\frac{x_{i_0,j_0}^-}{\|x_{i_0,j_0}^-\|}\Big\|\\ <& y_{i_0,j_0}^*\Big(\frac{x_{i_0}}{\|x_{i_0}\|}-y\Big)+y_{i_0,j_0}^*\Big(y-\frac{x_{i_0,j_0}^-}{\|x_{i_0,j_0}^-\|}\Big)+4(\varepsilon^2+\varepsilon^3)<2+\varepsilon. \end{align*} Finally, \eqref{equ:41} proves that the required slice is precisely $S(y^*_{i_0,j_0},\varepsilon)$. This completes the proof.
\end{proof} Let us stress an open question on vector-valued function spaces for GL-spaces. It is only known that if $X$ is a GL-space, then so are $C(K,X)$ (\cite[Theorem 2.10]{THL}) and $L_1(\mu,X)$ (\cite[Theorem 5.1]{H2}). We do not know whether the same holds for $L_\infty(\mu, X)$, nor whether $X$ is a GL-space whenever $C(K,X)$, $L_1(\mu,X)$ or $L_\infty(\mu,X)$ is.
\section{Introduction} The crater chronology expresses the crater production rate (number of craters per unit time per surface area) as a function of time. It encapsulates our understanding of the observed crater record on surfaces of different bodies. If known, it can be used to estimate the surface age, identify the dominant populations of impactors, and infer interesting things about the dynamical and collisional evolution of the solar system. Unfortunately, it is quite difficult to determine an accurate crater chronology from data alone. This is because the ages of different craters are often unknown and must be inferred by indirect means. The only crater chronology that is directly derived from observational data is the lunar chronology (e.g. \citealp{2001SSRv...96...55N}; \citealt{2009AJ....137.4936M}; \citealp{2014E&PSL.403..188R}). The Moon has a well preserved crater record and the soil samples returned by lunar missions can be used to infer accurate absolute ages of at least some lunar craters and basins. This provides time anchors from which the lunar chronology can be reconstructed. For most other solar system bodies, for which the crater record is not well preserved and/or the absolute crater ages are unknown, the crater chronology must be inferred by different means (e.g. \citealp{2010P&SS...58.1116M}; \citealp{2012P&SS...66...87M}; \citealp{2014P&SS..103..131O}; \citealp{2016NatCo...712257M}). For example, some researchers have re-scaled the lunar chronology to other bodies (\citealp{2009AJ....137.4936M}; \citealp{2014P&SS..103..104S}), including the main belt asteroids, even if this method may be difficult to justify (\citealp{2014P&SS..103..131O}). Another approach, which we pursue here, is to model the evolution of impactors and their impacts on target bodies, and use the scaling laws (\citealp{2007Icar..187..345H}; \citealp{2016JGRE..121.1695M}; \citealp{2016Icar..271..350J}) to determine the expected crater distributions. The results are then compared to observations. Before 2011 our knowledge of the asteroid crater records was based on spacecraft images of $\sim$10 of these bodies, all of them smaller than 100 km in diameter. The arrival of the \textit{Dawn} spacecraft at Vesta in 2011 and Ceres in 2015 opened a new window into studies of impact cratering in the asteroid belt. A large basin on Vesta's surface had been suggested to explain Vesta's collisional family \citep{1993Sci...260..186B}. It was later imaged by the Hubble Space Telescope \citep{1997Icar..128...88T} and \textit{Dawn}, and found to be $\simeq 500$ km in diameter (it is named Rheasilvia). \textit{Dawn} also discovered another basin on Vesta, now called Veneneia, roughly 400 km in diameter \citep{2012Sci...336..690M}. In contrast, Ceres's surface does not show any obvious basins and the largest craters, Kerwan and Yalode, have diameters of only 280 km and 260 km, respectively \citep{2016NatCo...712257M}. This is puzzling because Ceres has a collisional cross-section that is $\sim 4$ times larger than that of Vesta. Scaling from the two basins on Vesta, there should thus be $\sim 8$ basins on Ceres, but there are none. Previous attempts to derive a crater chronology for Vesta were carried out by \citet{2014P&SS..103..104S} and \citet{2014P&SS..103..131O}. The former work rescaled the lunar chronology to Vesta by simply multiplying the crater production rate by a fixed factor. They estimated the Rheasilvia and Veneneia ages to be $\sim3.5$ Gy.
This is a significantly older age of Rheasilvia than the one ($\sim1$ Gy) suggested in \citet{2012Sci...336..690M}. At least part of this difference is due to different crater counting strategies adopted by different research teams. The young age of Rheasilvia would be more in line with the age of the Vesta family, thought to have formed in the aftermath of the Rheasilvia impact, which was estimated from arguments based on the collisional grinding of family members \citep{1996A&A...316..248M}. Dynamical modeling of the Vesta family does not constrain the family age well and admits ages $\geq1$ Gy (\citealt{2005A&A...441..819C}; \citealp{2008Icar..193...85N}), which are compatible with either age estimate mentioned above. \citet{2014P&SS..103..131O} developed a new chronology for Vesta based on a synthesis of previous results. Their chronology accounts for the long-term dynamical depletion of the asteroid belt \citep{2010Icar..207..744M}, and for the effects of planetary migration/instability and of scattering by planetary embryos that may have resided in the belt during the earliest stages (\citealp{2001Icar..153..338P}; \citealp{2007Icar..191..434O}; \citealp{2010AJ....140.1391M}). Their chronology implies the Rheasilvia age to be $\sim1$ Gy and creates some tension with the low probability of forming Rheasilvia this late ($\sim4$\% according to \citealp{2014P&SS..103..131O}). They also pointed out a significant difference between the lunar and Vesta chronologies, suggesting that the flux of impactors on Vesta was {\it not} orders of magnitude higher during the lunar Late Heavy Bombardment (LHB). A similar analysis was published for Ceres in \citet{2016Sci...353.4759H} and \citet{2016NatCo...712257M}. The former work applied both the lunar and O'Brien chronologies to Ceres and determined a relatively young age for the Kerwan crater (550-720 My). The absence of (obvious) large basins on Ceres is puzzling. \citet{2016NatCo...712257M} proposed that some large depressions observed on Ceres's surface, referred to as \textit{planitiae}, could be strongly relaxed basins. They identified at least two of these topographic features, Vendimia planitia with a $\sim830$ km diameter and another planitia with a $\sim570$ km diameter. Various geological mechanisms related to crustal relaxation, including potentially recent geologic activity, could be responsible for nearly complete basin erasure. Here we determine the crater chronologies of Ceres, Vesta and the Moon using a dynamical model of the asteroid belt from \citet{2017AJ....153..103N}. See that work for a complete description of the model. In brief, the model accounts for the early dynamical evolution of the belt due to the migration/instability of the outer planets and tracks asteroid orbits to the present epoch. The main asteroid belt, well characterized by modern surveys, is then used to calibrate the number and orbits of asteroids at any given time throughout the solar system history. The model does not account for other effects, such as scattering by planetary embryos, or for other impactor populations, such as comets, leftovers of the terrestrial planet accretion, etc. In Sect. \ref{sec:The-model}, we describe the model in more detail and explain the method that we used to determine the crater chronology and size distribution. The results for Vesta and Ceres are discussed in Sect. \ref{sec:Results}. Sect. \ref{sec:Conclusions} summarizes our main conclusions.
\section{Model\label{sec:The-model}} \subsection{Dynamical model\label{sec:Dynamical-model}} We use the dynamical model of \citet{2017AJ....153..103N} to determine the crater chronologies of Ceres and Vesta. In that work, we performed a numerical simulation --labeled as CASE1B-- of 50,000 test asteroids over the age of the solar system. The simulation starts at the time of the solar nebula dispersal (it does not account for gas drag). The adopted physical model takes into account gravitational perturbations of all planets from Venus to Neptune (Mercury is included for $t \leq t_{\mathrm{inst}}$, where $t_{\mathrm{inst}}$ is the time of dynamical instability; see below). During the early stages, the giant planets are assumed to evolve by planetesimal-driven migration and dynamical instability (the so-called jumping-Jupiter model; \citealp{2009A&A...507.1041M}; \citealp{2012Natur.485...78B}; \citealp{2012AJ....144..117N}). See \citet{2018ARA&A..56..137N} for a review. The simulations span 4.56 Gy and the instability time $t_{\mathrm{inst}}$ is considered a free parameter. The Yarkovsky effect and the collisional evolution of the main belt are not modeled in \citet{2017AJ....153..103N}. This limits the reliability of the model to large asteroids for which these effects are not overly significant \citep{2018AJ....155...42N}. Comets and other impactor populations are not considered. This is equivalent to assuming that the crater records of Ceres and Vesta are dominated by asteroid impactors. The dynamical model of \citet{2017AJ....153..103N} employed a flexible scheme to test any initial orbital distribution of asteroids. By propagating this distribution to the present time and comparing it with the observed distribution of main belt asteroids, we were able to reject models with too little or too much initial excitation (also see \citealp{2015AJ....150..186R}). From the models that passed this test we select the one that has Gaussian distributions in $e$ and $i$ with $\sigma_e=0.1$ and $\sigma_i=10^\circ$, and a power-law radial surface density $\Sigma(a)=1/a$. We also tested other initial distributions, such as the one produced by the Grand Tack model \citep{2011Natur.475..206W}, and will briefly comment on them in Sect. 3. The Grand Tack distribution is wider in eccentricity (approximately Gaussian with $\sigma_e \simeq 0.2$ and Rayleigh in $i$ with $\sigma_i \simeq 10^\circ$; see \citealp{2015AJ....150..186R} for explicit definitions of these distributions). The impact probability and velocity of a test asteroid on a target world are computed with the \"Opik algorithm \citep{1994Icar..107..255B}. This allows us to account for impacts on bodies that were not explicitly included in the simulation, such as Ceres, Vesta or the Moon. Ceres and Vesta are placed on their current orbits since time zero (corresponding to the dispersal of the protosolar nebula). This is only an approximation because in reality both these asteroids must have experienced orbital changes during the planetary migration/instability. Establishing how these changes may have affected their crater records is left for future work. See \citet{2017AJ....153..103N} for the method used for the Moon. The impact probabilities are initially normalized \textit{to 1 test particle surviving at the end of the simulation}. In other words, the impact probabilities directly provided by a given simulation are divided by the total number of test particles that survived at the end of that simulation.
This normalization is necessary because the final state of the simulation resembles the present asteroid belt well in terms of orbital distribution, but not in absolute numbers. The actual impact flux is obtained by multiplying these normalized impact probabilities by the number of asteroids larger than a given size in the present asteroid belt (see Eq. (\ref{eq:ntd}) below). \subsection{Crater chronology\label{subsec:chrono}} The usual approach to modeling crater records of planetary and minor bodies consists of two steps. In the first step, one defines the chronology function, $f(t)$, which gives the crater production rate as a function of time $t$. In the second step, the model production function (MPF), $n(D_{\mathrm{crat}})$, is synthesized from available constraints to compute the crater production rate as a function of crater diameter, $D_{\mathrm{crat}}$. The number of craters is then computed as $n(t,D_{\mathrm{crat}})=f(t)\,n(D_{\mathrm{crat}})\,{\rm d}t\,{\rm d}D_{\mathrm{crat}}$. Integrating this relationship over $t$ and/or $D_{\mathrm{crat}}$ leads to cumulative distributions (e.g., the number of craters larger than diameter $D_{\mathrm{crat}}$ produced since time $t$). This approach implicitly assumes that the MPF is unchanging with time, which may not be accurate if size-dependent processes such as the Yarkovsky effect \citep{2015aste.book..509V} influence the impactor population. We do not investigate such processes here. Here we use a notation where $t$ measures time from time zero, corresponding to the dispersal of the protosolar gas nebula, to the present epoch ($t=0$ to 4.56 Gy) and $T$ measures time backward from the present epoch to time zero; thus $T=4.56\,{\rm Gy}-t$. We first define the chronology function and MPF in terms of the {\it impactor} flux and diameters (the conversion method from impactor properties to craters is described in Sect. \ref{subsec:mpf}). The cumulative number of impacts, $n(T,D_{\mathrm{ast}})$, of asteroids larger than the diameter $D_{\mathrm{ast}}$ over the last time interval $T$ is \begin{equation} n(T,D_{\mathrm{ast}})=F(T) \, \mathcal{N}(>\!\! D_{\mathrm{ast}})\label{eq:ntd} \end{equation} where $\mathcal{N}(>\!\! D_{\mathrm{ast}})$ is the current number of main belt asteroids larger than $D_{\mathrm{ast}}$ and $F(T)$ is the cumulative chronology function obtained from the dynamical model (here normalized to one asteroid larger than $D_{\mathrm{ast}}$ at $T=0$). Eq. (\ref{eq:ntd}) represents a forward-modeling approach that is independent of any crater data; instead, it relies on the accuracy of numerical simulations to reproduce the main belt evolution and our understanding of the main belt size distribution (see Sect. \ref{subsec:mpf}). Having the chronology function, the intrinsic impact probability (actually, the expected value of a Poisson distribution) on the target world, $P_{i}$, can be obtained as \begin{equation} P_{i}(T)=\frac{4\pi}{S}\frac{{\rm d}F(T)}{{\rm d}T}\label{eq:pit} \end{equation} where $S$ is the surface area of the target and the factor $4\pi$ accounts for the difference between the surface area and the cross section. With this definition of $P_{i}$, the total number of impacts is given as $P_{i}\,R^{2}\,n\, \Delta t$, where $R$ is the target radius, $n$ is the number of impactors and $\Delta t$ is the time interval. The model gives $P_{i}(0) \simeq 4.1\times10^{-18}\:\mathrm{km}^{-2}\mathrm{y}^{-1}$ for both Ceres and Vesta.
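To make the bookkeeping of Eqs. (\ref{eq:ntd}) and (\ref{eq:pit}) explicit, the short Python sketch below wires the two equations together. The constant-rate chronology is a toy stand-in tuned to the quoted $P_i(0)$ for Vesta (the actual $F(T)$ is tabulated from our simulations and includes an early enhancement that a flat rate omits), and the Vesta radius is an assumed round value.
\begin{verbatim}
import numpy as np

R = 262.7                            # Vesta mean radius [km] (assumed)
S = 4.0 * np.pi * R**2               # surface area [km^2]

T = np.linspace(0.0, 4.56e9, 4561)   # lookback time [y]
F = 2.8e-13 * T                      # toy cumulative chronology, F(0)=0

# Eq. (2): intrinsic impact probability from the slope of F(T)
P_i = (4.0 * np.pi / S) * np.gradient(F, T)   # [km^-2 y^-1]
print("P_i(0) = %.1e km^-2 y^-1" % P_i[0])    # ~4.1e-18 by construction

# Eq. (1): expected D_ast > 9 km impacts in the last T, using the
# whole-belt extrapolation N(>D) = 10^6.5 D^-2.6 (D in km)
N_gt_9 = 10**6.5 * 9.0**(-2.6)
print("impacts over 4.56 Gy: %.1f" % (F[-1] * N_gt_9))
# ~13 here; the model's ~16 for Vesta includes the early enhancement
\end{verbatim}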
This is somewhat higher than the mean value $P_i = 2.85\times10^{-18}\:\mathrm{km}^{-2}\mathrm{y}^{-1}$ usually considered for the whole asteroid belt \citep{1992Icar...97..111F}. For Ceres, \citet{2016NatCo...712257M} found $P_i=3.55\times10^{-18}\:\mathrm{km}^{-2}\mathrm{y}^{-1}$, which is more consistent with our $P_{i}(0)$. The small difference can be related to the fact that our model distribution of main belt asteroids is more concentrated towards smaller semimajor axes, because the model does not account for the presence of large collisional families at $a\gtrsim3$ au (mainly the Themis, Hygiea and Eos families). The mean impact velocities computed from our model are in the range of 4.6 to 7 km~$\mathrm{s}^{-1}$ for the whole simulated time interval. They show a slightly decreasing trend with $t$ during the earliest stages as asteroid impactors on high-$e$ orbits are removed. The mean velocity at $T=0$ is in good agreement with the current value \citep{1994Icar..107..255B}. \subsection{Size distribution\label{subsec:mpf}} A general procedure to analytically estimate the MPF has been outlined in \citet{2009AJ....137.4936M}. A limitation of this procedure arises from uncertainties in modeling the processes of crater erasure such as, in particular, the obliteration of older and smaller craters by newer and larger ones. The crater erasure can be included in the MPF through a weight function, as explained in \citet{2006Icar..183...79O} and \citet{2009AJ....137.4936M}. Here we instead develop a Monte Carlo approach to forward model the crater size distribution \citep[also see][]{2016NatCo...712257M}. To simulate the formation of craters we combine the observed size distribution of the main belt asteroids with the chronology functions obtained from our dynamical model. The size distribution is constructed from the WISE/NEOWISE observations \citep{2011ApJ...741...68M, 2016PDSS..247.....M}\footnote{Available at the NASA PDS Small Bodies Node, \url{https://sbn.psi.edu/pds/resource/neowisediam.html}}, which is practically complete down to $D_{\mathrm{ast}}\simeq9$-10 km. For diameters slightly smaller than that, we adopt an extrapolation $\mathcal{N}=10^{\,\alpha}D_{\mathrm{ast}}^{\,\,\gamma}$, where $\alpha=6.5,\,\gamma=-2.6$ for the distribution of the whole main belt, and $\alpha=6.23,\,\gamma=-2.54$ for the main belt background, i.e. subtracting the members of known asteroid families. These extrapolations were obtained by fitting the size distribution of asteroids slightly larger than 10 km by a power law and extending the power law below 10 km. Our model consists of the following steps: \begin{enumerate} \item We define the minimum impactor diameter, $D_{\mathrm{ast},0}$, that needs to be accounted for to match the smallest craters that we want to model. \item We use Eq. (\ref{eq:ntd}) to determine the average number of impacts $\overline{n}_{\mathrm{imp}} =n(T,D_{\mathrm{ast},0})$ over the desired time span $T$. \item We draw the actual number of impacts $n_{\mathrm{imp}}$ over the desired time span from a Poisson distribution with mean $\overline{n}_{\mathrm{imp}}$. \item We generate $n_{\mathrm{imp}}$ craters from main belt impactors larger than $D_{\mathrm{ast},0}$ using the following procedure: \begin{enumerate} \item From the main belt size distribution, we draw the size $D_{\mathrm{ast}}$ of the impactor (in m). \item From the chronology function, we draw the time $T$ that will represent the crater age. \item We obtain the velocity $v$ of the impact (in $\mathrm{m\,s}{}^{-1}$) at the time $T$.
Note that this is more accurate than just drawing a value from the overall impact velocity distribution, because velocities are slightly higher at earlier times. \item We set the impact angle $\theta=45^{\circ}$ \citep{1962pam..book.....K}. \item We compute the crater diameter $D_{\mathrm{crat}}$ (in m) using the scaling law from \citet{2016Icar..271..350J} for non-porous targets: \begin{equation} D_{\mathrm{crat}}=1.52\,D_{\mathrm{ast}}^{\,\,\,0.88}\,v^{0.5}\,\left(\sin\theta\right)^{0.38}\left(\frac{\delta}{\rho}\right)^{0.38}g^{-0.25}D_{\mathrm{sc}}^{\,\,-0.13}\ .\label{eq:scal} \end{equation} Here, $\delta$ is the impactor's density, $\rho$ is the target's density, $g$ is the target's surface gravity (in $\mathrm{m\,s}{}^{-2}$), and $D_{\mathrm{sc}}$ is the simple-to-complex transition diameter (i.e., the diameter for which the crater starts to develop complex structures, such as multiple ridges, concentric rings, etc.). The values of these parameters adopted here for Ceres and Vesta are given in Table \ref{params}. \end{enumerate} \item We assign to each crater the initial weight $W=1$. \item To account for crater erasure, we consider, one by one, the model-generated craters with size $D_{\mathrm{crat}}$ and age $T$. We then select all craters with sizes $<D_{\mathrm{crat}}$ and ages $>T$, and subtract from their weights an amount $\pi D_{\mathrm{crat}}^{2}/(4S)$, which is the ratio of the crater surface area to the body surface area. When $W$ becomes 0, the corresponding crater is assumed to be totally obliterated. This recipe is designed to model the crater overlap only, i.e. a ``cookie cutting'' approach. \item The final size distribution of craters is obtained by adding all weights of craters with diameter $D_{\mathrm{crat}}$ together. \item The steps (3) to (7) are repeated 1000 times to build up statistics. We compute the mean and the $1\sigma$ uncertainty of crater size distributions and compare them to observations. \item Optionally, we can include the formation of a large basin at a specific time. This is done, for example, to test the erasure of older and smaller craters by the Rheasilvia basin formation. \end{enumerate} The crater erasure algorithm in step (6) is a simple method that only accounts for crater overlap. It does not take into account, for example, that the material ejected from large craters may degrade/bury small craters at considerable distance from the impact site. It is therefore expected that our method could underestimate the erasure of small craters. Here, however, we restrict our analysis to $D_{\mathrm{crat}}>50$ km craters for which this effect may not be important. \subsection{Caveats\label{sec:Caveats}} As a by-product of the procedure outlined above we obtain a set of $D_{\mathrm{crat}}$ \textsl{vs.} $D_{\mathrm{ast}}$ values indicating that the scaling law in Eq. (\ref{eq:scal}) approximately follows a linear dependence $D_{\mathrm{crat}}\simeq f_{\mathrm{sl}}\times D_{\mathrm{ast}}$, where $f_{\mathrm{sl}}$ is a constant factor, at least in the size range considered here. The typical values of $f_{\mathrm{sl}}$ are in the range $\sim11$-13 for Ceres and $\sim8$-10 for Vesta. Therefore, if we want to fit the size distribution of craters with $D_{\mathrm{crat}} > 60$ km, we have to set $D_{\mathrm{ast},0}\sim6$ km in the case of Vesta and $D_{\mathrm{ast},0}\sim4$ km in the case of Ceres.
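For concreteness, the following Python fragment sketches steps (1)-(7) above, including the scaling law of Eq. (\ref{eq:scal}) and the ``cookie cutting'' erasure. The densities, gravity, simple-to-complex transition diameter and mean impact number are illustrative, roughly Vesta-like placeholders, not the values of Table \ref{params} or of the actual chronology function.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

delta, rho = 2600.0, 3400.0         # impactor/target density [kg m^-3]
g, D_sc = 0.25, 38e3                # gravity [m s^-2], s-to-c diam. [m]
S_t = 4.0 * np.pi * (262.7e3)**2    # target surface area [m^2]

def crater_diameter(D_ast, v, theta=np.pi / 4.0):
    """Eq. (3), the Johnson et al. (2016) scaling law (SI units)."""
    return (1.52 * D_ast**0.88 * v**0.5 * np.sin(theta)**0.38
            * (delta / rho)**0.38 * g**(-0.25) * D_sc**(-0.13))

def draw_impactors(n, D0=6e3, q=2.6):
    """Inverse-CDF draws from a cumulative power law N(>D) ~ D^-q."""
    return D0 * rng.uniform(size=n)**(-1.0 / q)

n_imp = rng.poisson(16.0)                # step 3 (toy mean number)
D_ast = draw_impactors(n_imp)            # step 4a
T = rng.uniform(0.0, 4.56, n_imp)        # step 4b (flat stand-in F)
v = rng.uniform(4.6e3, 7.0e3, n_imp)     # step 4c (velocity range)
D_crat = crater_diameter(D_ast, v)       # steps 4d-4e
W = np.ones(n_imp)                       # step 5

# step 6: each crater removes its fractional area from all smaller
# and older craters ("cookie cutting")
for i in range(n_imp):
    hit = (D_crat < D_crat[i]) & (T > T[i])
    W[hit] -= np.pi * D_crat[i]**2 / (4.0 * S_t)
W = np.clip(W, 0.0, None)                # obliterated craters -> 0

print("impacts: %d, surviving weighted craters: %.2f" % (n_imp, W.sum()))
\end{verbatim}
In the full model this loop is repeated 1000 times (step 8) and the weighted counts are binned in $D_{\mathrm{crat}}$ before being compared to the observed distributions in Sect. \ref{sec:Results}.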
The need for such small $D_{\mathrm{ast},0}$ creates a problem because the dynamical model used here is strictly reliable only for $D_{\mathrm{ast}}\gtrsim 10$ km (because it does not account for size-dependent processes such as the Yarkovsky effect or collisional fragmentation). The Yarkovsky drift of a $D_{\mathrm{ast}} = 4$ km asteroid is expected to be $\sim0.04$ au~Gy$^{-1}$. The drift may be directed inward or outward, depending on the asteroid's spin axis orientation. The intrinsic collision probability of the target is not expected to be significantly affected by this, because the inward and outward drifts would average out (assuming a random orientation of spin axes). The main effect of the Yarkovsky drift should be a larger depletion of small main belt asteroids relative to our size-independent model, where asteroid orbits are expected to be more stable. This could potentially mean that the chronology function would have a somewhat steeper dependence on time for $D_{\mathrm{ast}} < 10$~km than for $D_{\mathrm{ast}}>10$~km impactors. The investigation of this effect is left for future work. The effects of collisional grinding are difficult to estimate. The collisional grinding removes mass over time and thus reduces the population of small asteroids. This happens on top of the dynamical depletion. The general expectation is that the belt should evolve faster initially when it is still massive \citep{2005Icar..175..111B}. Recall that we anchor the results of our dynamical model to the {\it current} population of small asteroids. Thus, running the clock back in time, our model must underestimate the actual number of impacts (because it does not account for impactors that were collisionally eliminated). The formation of asteroid families over the age of the solar system enhances the two effects discussed above, but it also has another consequence. There are several large collisional families in the outer asteroid belt (the Themis, Hygiea and Eos families) and these families have many $D_{\mathrm{ast}} \sim 10$ km members (\citealp{2015aste.book..297N}; \citealp{2015PDSS..234.....N}). Including these bodies in our calibration effectively means that we assume that all these families existed for the whole duration of our simulation (i.e., formed 4.56 Ga), which is clearly not the case because, for example, the Eos family formed only $\sim1.3$ Ga \citep{2006Icar..182...92V}. To test how this approximation affects our results, we can remove asteroid families from the main belt and calibrate our chronology on the current main belt background. These tests show a variation in the number of impacts by a factor of $\sim2$. The uncertainty of our results, described below, cannot be better than that. Finally, another possible source of uncertainty is the contribution of the Hungaria asteroids to the collisional rates in the main belt; the Hungarias may have constituted a significant early population depending on the eccentricity history of Mars \citep{2018Icar..304....9C}. Model CASE1B from \citet{2017AJ....153..103N} does account for a primordial population of asteroids in the range $1.6<a<2.1$ au, the so-called E-belt \citep{2012Natur.485...78B}. Therefore, the derived production functions and chronologies used here include the effects of this population. However, model CASE1B did not reproduce well the currently observed population of Hungarias, because the E-belt became more depleted than it should, especially at later times \citep{2015AJ....150..186R}.
In any case, the uncertainty introduced by this effect is small and would be within the factor of 2 discussed above. \section{Results\label{sec:Results}} \subsection{Comparison of lunar and asteroid chronologies} The chronology functions obtained in our model for Vesta, Ceres and the Moon are compared in Fig. \ref{crono}. The lunar chronology shows a vast number of impacts during the early epochs, when the impactor flux is at least $\sim2$ orders of magnitude higher than at the present time \citep{2017AJ....153..103N}. This happens because many main belt asteroids become destabilized during the planetary migration/instability and evolve into the terrestrial planet region, which leads to a strong asteroid bombardment of the Moon and terrestrial planets. In contrast, the impact flux on Vesta and Ceres is much more uniform in time. This happens because Vesta and Ceres orbit within the main belt and are continuously impacted by asteroids. For them, the early bombardment is not as dramatic as for the Moon. This means that the lunar chronology does not apply to Vesta or Ceres. These considerations also imply that Vesta's and Ceres's craters should be on average younger than the lunar craters. \citet{2014P&SS..103..131O} reached similar conclusions. To illustrate this, we show the Vesta chronology from \citet{2014P&SS..103..131O} in Fig. \ref{crono}b. We used Eqs. (16) and (18) in their paper and scaled their MPF (their figure 1) assuming a linear scaling law with $8\leq f_{\mathrm{sl}}\leq20$. Note that $f_{\mathrm{sl}}\sim9$ reproduces well the scaling law of \citet{2016Icar..271..350J} for Vesta. We would therefore expect that our results for Vesta should plot near the upper limit of their chronology function range, and this is indeed the case. In \citet{2014P&SS..103..131O}, Vesta's chronology was pieced together from several publications and was compared with the lunar chronology of \citet{2001SSRv...96...55N} (which was obtained by yet another method). The advantage of our approach is that all chronologies are derived from a single, self-consistent physical model. \subsection{Impact flux for early and late instabilities} The time of planetary migration/instability is a crucial parameter for the Moon as it substantially changes the lunar impact flux during early stages and the overall number of impacts (Fig. \ref{crono}a). Vesta's and Ceres's impact records are much less sensitive to this parameter. Indeed, Fig. \ref{crono}a shows that the records are nearly identical for $T_{\mathrm{inst}}=4.5$ Ga and $T_{\mathrm{inst}}=3.9$ Ga. We therefore do not expect to find many clues about the LHB or the early evolution of the giant planets by analyzing the crater records of these asteroids. Given that other available constraints indicate that the instability happened early \citep{2018NatAs...2..878N}, we give preference to the early instability case in the rest of the paper. We find no appreciable difference between the Gaussian and Grand Tack initial distributions. The Gaussian initial distribution, as described in Sect. \ref{sec:Dynamical-model}, is used in the following analysis. The early instability model suggests that the Moon should have registered $\sim27$ impacts from $D_{\mathrm{ast}}>9$ km asteroids over the age of the solar system (see also \citealp{2017AJ....153..103N}), while Ceres and Vesta registered $\sim51$ and $\sim16$ such impacts, respectively (Fig. \ref{crono}b).
According to \citet{2014P&SS..103..131O}, Vesta would have registered between 10 and 75 impacts of $D_{\mathrm{ast}}>9$ km asteroids, but $\sim70$\% of these impacts would have occurred during the first 50 My of evolution. In general, O'Brien et al.'s chronology produces $\sim1.5$ times fewer impacts per Gy during the last $\sim4$ Gy than our chronology (assuming $f_{\mathrm{sl}}\sim9$). This discrepancy is, at least in part, related to the fact that O'Brien et al.'s chronology shows a drop at the very beginning, reflecting their attempt to account for a strong depletion of the main asteroid belt by processes not modeled here (e.g., planetary embryos, Grand Tack).\footnote{The strong depletion of the asteroid belt was thought to be needed because the formation models based on the minimum mass solar nebula suggested that the primordial mass of the main belt was 100-1000 times larger than the present one \citep{1977Ap&SS..51..153W}. Also, the classical model of asteroid accretion by collisional coagulation required a large initial mass to produce 100-km class objects. The formation paradigm has shifted, however, with more recent models favoring a low initial mass \citep{2015aste.book..493M}.} \citet{2016NatCo...712257M} derived a chronology function for Ceres that has a very similar shape to O'Brien et al.'s chronology for Vesta. It also shows a drop during the first 50 My of evolution due to a presumably strong primordial depletion of the main belt. Using this chronology, they predicted 180 and 90 impacts from impactors with $D_{\mathrm{ast}}>10$ km and $D_{\mathrm{ast}}>13$ km, respectively. According to their scaling laws, these impactors produce craters with $D_{\mathrm{crat}}\sim100$ km. About 70\% of these impacts happen during the first 400 My of evolution (i.e., before the dynamical instability that they place at 4.1 Ga). Compared to that, our model implies $\sim$4 times fewer impacts, and we do not find any significant difference between the number of impacts for the early and late instability cases. The number of craters of various sizes expected from our model is reported in Table \ref{tab-counts}. For Vesta, these numbers are in good agreement with observations, especially if we account for modest crater erasure (see Sect. \ref{subsec:mpf}). For Ceres, strong crater erasure by viscous relaxation may be required (Sect. \ref{sec:Ceres-craters}). \subsection{Vesta's craters} Figure \ref{distvesta} compares our model size distributions of Vesta's craters to observations. To introduce this comparison, recall that we have blindly taken a dynamical model of the asteroid belt evolution (i.e., without any a priori knowledge of what implications the model would have for Vesta's crater record) and used a standard scaling law to produce the crater record. There is not much freedom in this procedure. If the dynamical model were not accurate, for example, we could have obtained orders of magnitude more or fewer craters than what the {\it Dawn} mission found. But this is not the case. In fact, there is a very good general agreement between the model results and observations. This also shows that the caveats discussed in Sect. \ref{sec:Caveats} do not (strongly) influence the results. In more detail, in a model where no crater erasure is taken into account (left panel of Fig. \ref{distvesta}), the agreement is excellent for craters with $D_{\mathrm{crat}}>100$ km.
There is a small difference for $D_{\mathrm{crat}}\lesssim100$ km, where the model distribution rises steeply and slightly overestimates the number of craters. A similar problem was identified in \citet{2014P&SS..103..131O}. We tested whether this issue may be a result of crater erasure. Indeed, when crater erasure is included in the model (the middle panel of Fig. \ref{distvesta}), the size distribution shifts down and becomes slightly shallower. It now better fits the data in the whole range modeled here. The results do not change much when we include the presumed Rheasilvia basin formation $\sim1$ Gy ago (right panel of Fig. \ref{distvesta}).\footnote{If the dynamical model is calibrated on the main belt background (i.e., asteroid families removed; Sect. \ref{sec:Caveats}), we obtain $\sim2$ times fewer craters. This does not make much of a difference on the logarithmic scale in Fig. \ref{distvesta}, but the overall fit without crater erasure becomes slightly better.} In summary, our model reproduces Vesta's crater record very well, and a modest amount of crater erasure may be needed to better fit the number of $D_{\mathrm{crat}} \lesssim 100$ km craters. \subsection{Ceres's craters\label{sec:Ceres-craters}} Figure \ref{distceres} shows a similar comparison for Ceres. In this case, the model without crater erasure predicts nearly an order of magnitude more craters on Ceres's surface than are actually observed. A similar problem was noted in \citet{2016NatCo...712257M}. The situation improves when crater erasure is included in the model (middle panel of Fig. \ref{distceres}), but the problem is not entirely resolved. We could have tried to erase craters more aggressively, for example, by assuming that small craters are degraded by distal ejecta from large craters \citep{2019Icar..326...63M}. But this would create problems for Vesta, where the model with our conservative erasure method (craters must overlap to be erased) worked quite well. Actually, \citet{2019Icar..326...63M} showed that crater degradation by energetic deposition of ejecta (e.g., secondary cratering/ballistic sedimentation) on the Moon works differently for larger craters, comparable to the crater sizes considered here \citep{2019EPSC...13.1065M}, so that mechanism would probably not be applicable in the cases of Ceres and Vesta. Following \citet{2016NatCo...712257M}, we therefore investigate the effects of viscous relaxation (which are specific to ice-rich Ceres). To empirically incorporate the effects of viscous relaxation in our model, we assume that the model weight of each crater diminishes according to the following prescription: \begin{equation} W=\exp\left(-T/\tau\right)\label{eq:ww} \end{equation} where the e-folding timescale is a function of the crater diameter, \begin{equation} \tau = C/ D_{\mathrm{crat}} \label{eq:tau} \end{equation} as supported by classical models of relaxation on icy surfaces \citep[e.g.][]{1973Icar...18..612J,2012GeoRL..3917204B,2013Icar..226..510B}. Here, $C = 4\pi\eta/(\rho g)$ is a constant depending on the viscosity $\eta$ of the surface layer, its density $\rho$ and the surface gravity $g$. The right panel of Fig. \ref{distceres} shows the model results for Ceres considering crater erasure together with viscous relaxation. In this case, we are able to fit the observed crater record assuming a value of $C\simeq 200$~km~Gy, which would imply a surface viscosity of $\sim 3\times 10^{23}$~Pa~s.
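For orientation, Eq.~(\ref{eq:tau}) with $C \simeq 200$~km~Gy gives $\tau = C/D_{\mathrm{crat}} \simeq 2$~Gy for $D_{\mathrm{crat}} = 100$ km craters. The quoted viscosity can be recovered by inverting $C = 4\pi\eta/(\rho g)$; with representative values that we assume here for Ceres (density $\rho \simeq 2160$~kg~m$^{-3}$ and surface gravity $g \simeq 0.28$~m~s$^{-2}$), and noting that $200$~km~Gy~$\simeq 6.3\times10^{21}$~m~s, \begin{equation} \eta = \frac{C\rho g}{4\pi} \simeq \frac{(6.3\times10^{21}~\mathrm{m~s})\,(2160~\mathrm{kg~m^{-3}})\,(0.28~\mathrm{m~s^{-2}})}{4\pi} \simeq 3\times10^{23}~\mathrm{Pa~s}. \end{equation}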
This is about three orders of magnitude larger than the viscosity of pure ice at 180 K (the approximate temperature of Ceres surface), meaning that the volume fraction of particulates in the icy surface layer needs to be significant. In fact, viscous relaxation of a purely icy surface is expected to be an aggressive process, with a typical e-folding timescale of only 1 My for the erasure of topographic wavelengths as short as 100 km. Our result is in line with more rigorous studies of the Ceres internal structure \citep{2017E&PSL.476..153F}, which infer a mechanically strong crust, with maximum effective viscosity $\sim 10^{25}$~Pa~s. This gives some support to the viscous relaxation prescription discussed above. We caution, however, that the results are likely not unique and different combinations of crater erasure and viscous relaxation prescriptions (e.g., more aggressive crater erasure and a longer viscous relaxation timescale) could produce similarly good fits. In summary, we find that both erasure processes should be important for Ceres and that $D_{\mathrm{crat}}\sim100$ km Ceres's craters should viscously relax on an e-folding timescale of $\sim 1$-2 Gy. This represents an interesting constraint on geophysical models of viscous relaxation and Ceres's composition. \subsection{Basin formation} Here, we discuss the probability of forming large craters or basins ($D_{\mathrm{crat}}>400$~km) on Vesta and Ceres at different times in the past. One possible approach to this problem consists of computing the so-called isochrones for each body, i.e., the crater production function at different times $T$. For a given diameter $D_{\mathrm{crat}}$, each isochrone gives the expected number of craters $\mu(>D_{\mathrm{crat}},\,<T)$, and the probability of forming exactly $N$ craters $>D_{\mathrm{crat}}$ in a time $<T$ is obtained from a Poisson distribution: \begin{equation} p_{\mu}(N)=\frac{\mu^N\,e^{-\mu}}{N!}\label{eq:pn} \end{equation} Figure \ref{isocro} shows the isochrones for Ceres and Vesta, as determined from our model, without considering any crater erasure. If we take the case of a 500 km basin on Vesta, we find that the expected value for the $T=1$~Ga isochrone is $\mu =0.10$, and from Eq. (\ref{eq:pn}) the probability of forming one basin is 9\%, while the probability of forming two basins is much smaller, 0.5\%. However, if we consider the $T=4.56$~Ga isochrone, the probability of forming two basins increases to 4.6\%. We recall that the probability of forming at least one 500 km basin in the last 1 Gy can be obtained as $1-p_{\mu}(0)$, which in this case gives a value of 9.5\%. Table \ref{tab-poison} summarizes the results for $D_{\mathrm{crat}}>400$ km. Another possible approach consists of using our model to directly determine the probability of producing at least $N$ craters larger than a given size over a certain time span. This approach differs from the previous one in that it does not rely on Poisson statistics, but on the output of the Monte Carlo simulations. Figure \ref{probab} shows the probability of creating at least one crater (panel a) and at least two craters (panel b) larger than a cutoff diameter on Vesta. Again, no crater erasure is considered here. We find that the probability of creating the Rheasilvia basin with $D_{\mathrm{crat}}\simeq500$~km (the cyan line in panel a) in the last 1 Gy (or 2 Gy) is 10\% (or 18\%). This is about 2.5 times larger than the probability reported in \citet{2014P&SS..103..131O}.
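As a consistency check between the two approaches (our own arithmetic, using only Eq.~(\ref{eq:pn}) and the isochrone value quoted above), for $\mu = 0.10$ we have \begin{equation} p_{\mu}(1) = 0.10\,e^{-0.10} \simeq 0.090,\qquad p_{\mu}(2) = \frac{(0.10)^2}{2}\,e^{-0.10} \simeq 0.005,\qquad 1-p_{\mu}(0) = 1-e^{-0.10} \simeq 0.095, \end{equation} reproducing the 9\%, 0.5\% and 9.5\% values given earlier; the Monte Carlo estimate of 10\% for at least one Rheasilvia-size basin in the last 1 Gy is thus consistent with the Poisson value of 9.5\%.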
The higher probability relative to \citet{2014P&SS..103..131O} arises because our chronology function leaves more space for a relatively late formation of craters/basins. \citet{2014P&SS..103..131O}, instead, adopted a strong primordial depletion and had more basins forming early on (e.g., \citealp{2007Icar..191..434O}). If we consider $D_{\mathrm{crat}}>400$~km (blue line in panel a), the probabilities of forming at least one crater become 14\% in the last 1 Gy, and 25\% in the last 2 Gy. These values are slightly larger than those reported in Table \ref{tab-poison}, because the Poisson statistics constrains the formation of exactly $N$ craters. The probability of forming both the Rheasilvia and Veneneia basins (the blue line in Fig. \ref{probab}b corresponding to $D_{\mathrm{crat}}=400$ km) is 15\% over the age of the solar system, and 6\% in the last 3 Ga. Again, these values are slightly larger than those reported in Table \ref{tab-poison}. Table \ref{tab-bene} reports different probabilities assuming that the age of Rheasilvia is $\leq1$ Gy (note that we do not claim that this is an accurate age of the Rheasilvia basin; we merely test this assumption) and that Veneneia is $>1$ Gy, for the models with early and late instabilities. The probabilities are slightly higher in the early instability model simply because, in this model, the rate of impacts is slightly higher in the past 1 Gy. Thus, a young age for Rheasilvia could potentially be more consistent with an early instability model. In any case, our chronology still implies that most of Vesta's craters/basins should have preferentially formed early in the solar system history. Figure \ref{probceres} shows the results for Ceres. In this case, the probability of {\it not} creating any basin with $D_{\mathrm{crat}}>400$ km over the age of the solar system is only 1\% (the red line in Fig. \ref{probceres}). Combining this result with the one for Vesta (see above), we estimate that the joint probability of creating two $D_{\mathrm{crat}} > 400$~km basins on Vesta younger than 3 Gy and no $D_{\mathrm{crat}} > 400$ km basin on Ceres is less than 0.1\%. Figure \ref{proyect} shows, at the top, the one exceptional case we found among 1000 realizations that fulfills the above condition. For comparison, an example of the typical outcome of our Monte Carlo model is shown at the bottom. This result emphasizes the need for efficient removal of Ceres's basins by viscous relaxation (or some other process). \section{Conclusions\label{sec:Conclusions}} Our findings can be summarized as follows: \begin{itemize} \item The crater chronologies of Ceres and Vesta are very different from that of the Moon. This is a consequence of the fact that both Vesta and Ceres spent their whole lifetimes in the asteroid belt and are impacted all the time, whereas the Moon experienced a more intense bombardment during the first $\sim1$ Gy. This means that using the lunar chronology for Ceres and Vesta is incorrect. The scaled lunar chronology would imply that Vesta's basins must have formed very early in the solar system history, which may not necessarily be the case. \item Our crater chronologies of Ceres and Vesta are similar to those obtained in some previous studies (\citealp{2014P&SS..103..131O}; \citealp{2016NatCo...712257M}). In our chronology, however, the crater ages are not as concentrated toward the early times as in these works, allowing more impacts in the past 3~Gy. \item The model crater record of Vesta matches observations (e.g., 10 known craters with $D_{\mathrm{crat}}>90$ km).
The model with crater erasure overpredicts, by a factor of $\sim3$, the number of $D_{\mathrm{crat}}>90$ km craters observed on Ceres's surface. An additional erasure process such as, for example, the size-dependent viscous relaxation of craters (with a $\sim 2$ Gy timescale for $D_{\mathrm{crat}}=100$ km craters) may be needed to resolve this discrepancy. \item We estimate that the probability of creating the Rheasilvia and Veneneia basins ($D_{\mathrm{crat}} >400$ km) on Vesta during the last 3 Gy is $\simeq 6$\%, somewhat larger than found in previous studies. A recent formation of the Rheasilvia basin can be more easily accepted in a dynamical model with an early instability, where the impact probabilities in the last 1 Gy are higher. \item The probability of producing two large basins ($D_{\mathrm{crat}} > 400$ km) on Vesta and simultaneously not producing any basin on Ceres is intriguingly small ($<0.1$\%). The relative paucity of large craters/basins on Ceres may be explained in a model with crater erasure and viscous relaxation. \end{itemize} \acknowledgments The authors wish to thank David Minton for helpful comments and suggestions during the revision of this paper. FR's work was supported by the Brazilian National Council of Research (CNPq). DN's work was supported by the NASA SSERVI and SSW programs. The simulations were performed on the SDumont computer cluster of the Brazilian System of High Performance Processing (SINAPAD).
\section{Introduction} The identity of dark matter is widely regarded as one of the most central and important questions in science today. A related question is: what is the phase-space distribution of dark matter in galactic halos, and in particular in the halo of our own Milky Way galaxy? This latter question is important for at least two reasons. First, the Milky Way halo is the environment in which the Galactic Disk forms and lights up with stars such as our Sun. Second, knowledge of the phase-space distribution helps direct and indirect dark matter searches on Earth. Finally, there is the tantalizing possibility that the phase-space distribution depends on the identity of the dark matter particle, and hence that observations of the phase-space distribution may reveal or constrain the dark matter's identity. Dark matter is generally thought to be cold and collisionless. ``Cold'' means that the primordial velocity dispersion of the dark matter particles is small, less than of order $10^{-8}$ today. Throughout, we use units in which $\hbar = c = k_B = 1.$ ``Collisionless'' means that, to excellent approximation, gravity is the only relevant interaction among the dark matter particles as far as large scale structure formation is concerned. Particle candidates with these properties include weakly interacting massive particles (WIMPs), QCD axions or axion-like particles (ALPs), and sterile neutrinos \cite{PDM}. Cold collisionless dark matter lies on a thin 3-dimensional hypersurface in 6-dimensional phase-space \cite{Ipser}. We refer to this 3-dimensional hypersurface as the phase-space sheet. Its qualitative behavior is described in Fig. 1. The thickness of the phase-space sheet is the primordial velocity dispersion. In the non-linear regime of structure formation, the phase-space sheet wraps around massive objects, such as galaxies, in phase-space. As a result, at any physical point inside a galactic halo there is an odd number of flows. Furthermore, the dark matter density diverges in the limit of zero velocity dispersion on the surfaces in physical space across which the number of flows changes by two. Such surfaces are called caustics. The number of flows expected on Earth is of order one hundred \cite{Ipser}. Present N-body simulations have inadequate resolution to reveal any but a small number of the expected flows. Arguments for the robustness of cold flows and caustics in galactic halos are given in ref.~\cite{robust}. The Caustic Ring Model is a proposal for the phase-space distribution of the dark matter halo of the Milky Way, and those of other disk galaxies \cite{Duffy}. The model is characterized by axial symmetry, self-similar time evolution, and large scale vorticity. Although real galaxies are not exactly axially symmetric, they are sufficiently so that the model may apply to them as a first approximation. According to the model, the inner caustics in the halos of disk galaxies are rings. Caustic rings are closed tubes whose cross-section is described by the elliptic umbilic ($D_{-4}$) catastrophe \cite{sing}. The cross-section has the shape of a tricusp; see Fig. 2. The rings lie in the galactic plane. Their radii in any given disk galaxy are predicted in terms of a single adjustable parameter $j_{\rm max}$; see Eq.~(\ref{crr}). Observational evidence is claimed in support of the model. Large scale vorticity is not possible if the dark matter is WIMPs or sterile neutrinos \cite{Natarajan}.
QCD axions \cite{PQWW,invisible} and ALPs \cite{Arias} produced by the vacuum realignment mechanism \cite{axdm}, including ultra-light ALPs, differ from WIMPs and sterile neutrinos in that they form a Bose-Einstein condensate \cite{CABEC,Erken, Christopherson}. Thermalization is a necessary condition for Bose-Einstein condensation. The interaction through which cold dark matter axions thermalize is self-gravity. When falling onto galactic halos, cold dark matter axions thermalize sufficiently quickly that they almost all go to the lowest energy state available to them consistent with the total angular momentum they have acquired through tidal torquing \cite{Erken,angmom}. That state is one of rigid rotation in the angular variables. Thus axion (or ALP) dark matter does acquire vorticity in galactic halos, and accounts for the existence of caustic rings. It was shown that axion Bose-Einstein condensation in fact accounts for all properties of the Caustic Ring Model \cite{case}. The observational evidence for caustic rings and the Caustic Ring Model is therefore evidence that the dark matter is a Bose-Einstein condensate, of axions or ALPs, at least in part. The idea that the cold axion fluid thermalizes by gravitational self-interactions, and as a result forms a Bose-Einstein condensate, was criticized in refs. \cite{Davidson1,Davidson2,Guth}. These criticisms were addressed in ref. \cite{SC}. One piece of evidence that has been claimed in support of caustic rings is a triangular feature in the IRAS map of the Milky Way plane; see \url{https://www.phys.ufl.edu/~sikivie/triangle/} . The triangular feature is interpreted as the imprint of the 5th caustic ring of dark matter in our galaxy upon baryonic matter in the Galactic Disk, seen in a direction tangent to the ring from our perspective \cite{MWcr}. The IRAS triangle is in the direction of Galactic Coordinates $(l,b) = (80^\circ, 0^\circ)$. The IRAS map does not have a matching triangular feature on the other side, near $(l,b) = (-80^\circ, 0^\circ)$. However, the recently released Gaia skymap \cite{Gaia1,Gaia2} has two triangular features. One of these coincides with the IRAS triangle on the left side. The other is on the right side, at $(l,b) = (-91^\circ, 0^\circ)$; see \url{https://www.phys.ufl.edu/~sikivie/Gaiamap/} . Our paper assumes the two features are effects of the 5th caustic ring on baryonic matter and explores the implications of this for the Caustic Ring Model. The fact that the two triangles do not match perfectly in direction is attributed to a displacement of the caustic ring center by $5.8^\circ$ towards the right relative to the Galactic Center. Section III of our paper is concerned with the mechanism by which the caustic ring produces triangular features in the IRAS and Gaia maps. It was proposed in ref.~\cite{MWcr} that the features are produced by gas and dust in thermal equilibrium in the gravitational field of the caustic ring. However, the features produced in this way have been found to be much less sharp than the observed features \cite{starsgas}. We propose here instead that dust is entrained by the cold axion flows forming the nearby caustic. Because the dust particles follow the same trajectories as the axions, they form the same caustics. This proposal accounts for the sharpness of the triangular features. We derive a formula for the drag on a dust particle moving with respect to a cold axion fluid and compare this to the drag on the dust particle as a result of collisions with gas in the disk.
A major source of uncertainty is the temperature of the axions. We discuss the present axion temperature considering primordial heat in the axions themselves and heat absorbed later by cooling baryons. When only the IRAS triangle was known, it was thought that the Sun lies close to, but outside of, the tricusp volume of the nearby caustic ring, because it was assumed that the caustic ring center coincides with the Galactic Center. Now, taking account of the Gaia right triangle, the Sun is much closer to the caustic than previously thought. In fact, we believe the Sun is almost certainly inside the tricusp volume of the 5th caustic ring. There are then four prominent flows on Earth associated with that caustic. Our goal in Section IV is to derive the properties of the four flows from the observed triangular features. The outline of our paper is as follows. Section II describes caustic rings, the Caustic Ring Model and the observational evidence in support of it. The rotation curve of M31 and the Gaia triangles are presented as additional evidence. Section III discusses the entrainment of dust by cold axion flows, and the effect of dust-gas and dust-dust collisions on the flow of dust. In Section IV, we use the IRAS and Gaia triangular features to determine the velocity vectors on Earth of the four flows associated with the nearby caustic ring. We also give rough estimates of their densities. Section V provides a summary. \section{The Caustic Ring Model} The Caustic Ring Model is a proposal for the phase-space distribution of the halo of the Milky Way, and the halos of other disk galaxies. The model is axially symmetric, and self-similar in its time evolution. One of its distinguishing features is the presence of caustic rings of dark matter in the galactic plane. In this section we give background information on dark matter caustics, a detailed description of caustic rings, the model predictions for the caustic ring radii, and the previously claimed observational evidence in support of the model. Finally, we present additional evidence from the rotation curve of M31 and from two triangular features in the Gaia skymap. \subsection{Caustic rings} \subsubsection{Dark matter caustics} Caustics appear in a flow of energy or matter when two conditions are satisfied: 1) the flow is collisionless and 2) the velocity dispersion of the flow is small. Cold dark matter particles are collisionless - i.e. they have only gravitational interactions in excellent approximation - and have very small primordial velocity dispersion $\delta v$ \cite{sing,Natarajan}. Hence, caustics are expected in the distribution of dark matter. They appear at the very moment multi-streaming begins. Indeed, as mentioned already in Section I, cold dark matter particles lie on a thin three-dimensional hypersurface embedded in six-dimensional phase-space. This phase-space sheet wraps around inhomogeneities such as galaxies. This results in a number of discrete flows through any point in physical space. Caustics lie at the boundaries of regions with differing numbers of flows, one region having $K$ flows and the other $K+2$ flows. Each particle in the phase-space sheet can be labeled by three parameters $(\alpha_1, \alpha_2, \alpha_3) \equiv \vec{\alpha}$. Let $\vec{x}(\vec{\alpha},t)$ be the position of particle $\vec{\alpha}$ at time $t$. The number $K$ of discrete flows through a physical point $\vec{r}$ is the number of solutions $\vec{\alpha}_j(\vec{r},t)$, $j=1,2,\ldots,K$, of the equation $\vec{r} = \vec{x}(\vec{\alpha},t)$.
In physical space, the number density is given by \begin{equation} n(\vec{r},t) = \sum_{j=1}^K \frac{1}{|D(\vec{\alpha}_j(\vec{r},t),t)|}\; \frac{d^3 N}{d\alpha^3}(\vec{\alpha}_j(\vec{r},t)) \label{density} \end{equation} where $\frac{d^3 N}{d\alpha^3}(\vec{\alpha})$ is the particle number density in parameter space, and $D(\vec{\alpha},t) = \text{det}\Big(\frac{\partial\vec{x}}{\partial\vec{\alpha}}\Big)$ is the Jacobian of the transformation $\vec{\alpha} \rightarrow \vec{x}(\vec{\alpha},t)$. Eq.~(\ref{density}) shows that the physical space density diverges where the map $\vec{\alpha} \rightarrow \vec{x}(\vec{\alpha},t)$ is singular. \subsubsection{Outer and inner caustics} Cold dark matter particles fall in and out of the gravitational potential well of a galaxy numerous times over the galaxy's history. The in and out flows of the particles necessarily form inner and outer caustics \cite{sing,Natarajan}. The outer caustics are topological spheres. They appear near where the particles in an outflow are at their maximum distance from the galactic center before falling back in. The inner caustics appear near where the particles with the most angular momentum in an inflow are closest to the galactic center before falling back out. The structure of the inner caustics is determined by the velocity field of the particles at their last turnaround, i.e. when their radially outward flow was last stopped by the gravitational attraction of the galaxy. A shell containing such particles is called a turnaround sphere. When the velocity field is irrotational ($\vec{\nabla} \times \vec v =0$), the inner caustics have a tent-like structure, described in ref. \cite{Natarajan}. When the velocity field is dominated by net overall rotation ($\vec{\nabla} \times \vec v \neq 0$), the inner caustics are closed tubes with a \textit{tricusp} cross-section, called caustic rings. The structure of caustics is stable under small perturbations in the velocity field \cite{Natarajan}. The inner caustics provide a tool to differentiate between a rethermalizing Bose-Einstein condensate of axions, or ALPs, and the other dark matter candidates. Ordinary CDM, including WIMPs and sterile neutrinos, never acquires a velocity field with large scale rotation. Indeed primordial rotational modes die out as a result of the Hubble expansion, setting $\vec{\nabla} \times \vec{v} = 0$ as an initial condition for all dark matter candidates. Galactic halos acquire angular momentum through tidal torquing by nearby protogalaxies in the early phases of structure formation \cite{Peebles}. Even though galactic halos acquire angular momentum, the velocity field of the dark matter particles remains irrotational if the dark matter is ordinary CDM \cite{Natarajan}. The inner caustics have tent-like structure in that case. Axions, as well as any axion-like particles (ALPs) produced by the vacuum realignment mechanism \cite{axdm}, behave differently from ordinary CDM because they thermalize as a result of their gravitational self-interactions and form a Bose-Einstein condensate (BEC) \cite{CABEC,Erken,Christopherson}. Dark matter axions rethermalize sufficiently fast by gravitational self-interactions while they fall in and out of a galactic gravitational potential well that almost all go to the lowest energy available state consistent with the angular momentum acquired by tidal torquing \cite{Erken,angmom}. That state is one of rigid rotation on the turnaround sphere. 
As a result dark matter axions fall in with a rotational velocity field and make caustic rings. The observational evidence for caustic rings, discussed below, suggests therefore that the dark matter is axions, or ALPs, at least in part. If the dark matter is a mixture of axions and ordinary cold dark matter, caustic rings still form provided the fraction of axions is large enough. Ref. \cite{angmom} places a lower limit of $\sim$ 35\% on the axion fraction based on the prominence of features in galactic rotation curves associated with caustic rings. \subsection{Caustic ring structure} Here we briefly describe the properties of an axially and reflection symmetric caustic ring and the flows associated with it \cite{sing}. The axial coordinate being irrelevant, the dark matter particles are conveniently labeled by two parameters $(\alpha , \eta)$ where $\eta$ is the time when a given particle crosses the $z=0$ plane and $\alpha$ the angle from the $z=0$ plane at the time of the particle's most recent outer turnaround. The coordinates of the particles near the caustic are given by \begin{eqnarray} \rho &=& a + \frac{1}{2} u (\eta - \eta_{0})^{2} - \frac{1}{2} s \alpha^{2} \nonumber \\ z &=& b \alpha \eta \label{GAIA_rhoz} \end{eqnarray} where $a, \; u, \; \eta_{0}, \; s$ and $b$ are constants characterizing the caustic ring. Since actual caustic rings are only approximately axially symmetric, the five constants vary to some extent along the ring. The caustic occurs where the Jacobian $|D_{2} (\alpha, \eta)| \equiv |\text{det} \frac{\partial(\rho, z)}{\partial(\alpha, \eta)}|$ is zero. In the $\rho$-$z$ plane, it takes the shape of a tricusp, shown in Fig. 2. The tricusp is the cross-section of a caustic ring. Its sizes, $p$ in the $\rho$-direction and $q$ in the $z$-direction, are given by \cite{sing} \begin{eqnarray} p &=& \frac{1}{2} u \eta_{0}^{2} \nonumber \\ q &=& \frac{\sqrt{27}}{4} \ \frac{1}{\sqrt{\zeta}} \ p \ , \label{GAIA_pq_defn} \end{eqnarray} where \begin{equation} \zeta = \frac{su}{b^2} = \frac{27}{16} \ \frac{p^2}{q^2} \ . \label{GAIA_zeta_defn} \end{equation} If $\zeta = 1$, the tricusp is invariant under rotations by $120^\circ$ in the $\rho$-$z$ plane. It is convenient to write Eqs.~(\ref{GAIA_rhoz}) in terms of the rescaled coordinates $X = \frac{\rho - a}{p}$, $Z = \frac{z}{p}$: \begin{eqnarray} X &=& (T-1)^2 - A^2 \nonumber \\ Z &=& \frac{2}{\sqrt{\zeta}} \ A T \label{GAIA_XZAT} \end{eqnarray} where $A = \alpha \sqrt{\frac{s}{u \eta_0 ^2}}$ and $T = \frac{\eta}{\eta_0}$. For a given point $(X, Z)$ near the caustic, the parameters $(T,A)$ of particles at that point are found by solving the quartic equation: \begin{equation} X = (T-1)^2 - \frac{\zeta Z^2}{4 T^2} \ , \label{GAIA_XT} \end{equation} and setting $A = {\sqrt{\zeta} Z \over 2 T}$. Each real solution corresponds to a flow of dark matter particles through that point. There are two flows through each point outside the tricusp, and four flows through each point inside. The density of the flow corresponding to a real solution $(A_j,T_j)$, $j=1,..,K$ with $K$ = 2 or 4, is \begin{equation} d_j (\rho,z) = \frac{1}{2 p b \rho} \; \frac{dM}{d\Omega d\eta} \; {\cos(\alpha_j) \over |(T_j-1)T_j + A_j^2|} \label{density_formula} \end{equation} where $ \frac{dM}{d\Omega d\eta} $ is the infall rate, i.e. the mass of dark matter particles falling in per unit time per unit solid angle. 
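To make the flow counting concrete, Eq.~(\ref{GAIA_XT}) can be solved numerically. The following minimal sketch (our own illustration in Python with numpy, not code from the cited references) multiplies Eq.~(\ref{GAIA_XT}) by $T^2$ and finds its real roots; it assumes $Z \neq 0$, since the $T=0$ flows in the $z=0$ plane require separate treatment. \begin{verbatim}
import numpy as np

def flows(X, Z, zeta=1.0, tol=1e-9):
    # The quartic X = (T-1)^2 - zeta Z^2/(4 T^2), multiplied by T^2:
    #   T^4 - 2 T^3 + (1 - X) T^2 - zeta Z^2 / 4 = 0
    roots = np.roots([1.0, -2.0, 1.0 - X, 0.0, -zeta * Z**2 / 4.0])
    # keep the real, nonzero roots; each one labels a flow
    T = np.array([r.real for r in roots
                  if abs(r.imag) < tol and abs(r.real) > tol])
    A = np.sqrt(zeta) * Z / (2.0 * T)   # A = sqrt(zeta) Z / (2 T)
    return T, A

T, A = flows(X=0.2, Z=0.1)  # a point inside the tricusp
print(len(T))               # 4 flows inside the tricusp, 2 outside
\end{verbatim} Each root $(T_j, A_j)$ can then be inserted in Eq.~(\ref{density_formula}) and in the velocity formulas that follow.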
The velocity vector of the $j$th flow has components \begin{eqnarray} v_{j\rho} &=& u \eta_0 (1-T_j) \nonumber \\ v_{j z} &=& - u \eta_0 \ \frac{Z}{2T_j} \nonumber \\ v_{j \phi} &=& \sqrt{v^2 - v_{j\rho} ^2 - v_{j z} ^2} \label{GAIA_vrhozphi} \end{eqnarray} where $v$ is the speed of the flow, and the $\phi$-direction is $\hat{\phi} = \hat{\rho}\times\hat{z}$. For an axially symmetric caustic ring whose center coincides with the Galactic Center, $\hat{\rho}$ points radially outward, $\hat{z}$ points toward the Galactic North Pole and $\hat{\phi}$ in the direction of Galactic rotation. \subsection{Model properties} The Caustic Ring Model is described in detail in ref. \cite{Duffy}. Here we only list the properties directly relevant to our discussion of the Gaia triangles and the M31 rotation curve. The model assumes that the flow of dark matter particles in a galactic halo is self-similar in time \cite{FGB,STW}, and axially symmetric. It gives the overall phase-space distribution of a disk galaxy's halo in terms of the galaxy's rotation velocity $v_{\rm rot}$ and a parameter $j_{\rm max}$, which is a dimensionless measure of the galaxy's angular momentum. The model predicts a disk galaxy to have caustic rings in the galactic plane at radii given approximately by \cite{crdm}: \begin{equation} a_n\simeq \frac{40\text{ kpc}}{n} \left(\frac{v_{\text{rot}}}{220~\text{km/s}}\right) \left(\frac{j_{\text{max}}}{0.18}\right) ~~\ , \label{crr} \end{equation} where $n = 1,2,3,\ldots$. More precise predictions for the caustic ring radii are given in ref. \cite{Duffy}. The $n$th caustic ring forms in the flows of particles experiencing the $n$th inner turnaround in their history. The nominal values $v_{\rm rot} \simeq 220$ km/s and $j_{\rm max} \simeq 0.18$ apply to our Milky Way galaxy. The length scale in the $a_n \approx \frac{40~\text{kpc}}{n}$ prediction for the Milky Way is set by the assumption that the radius of the solar orbit is 8.5 kpc. Four rings are therefore predicted to lie outside the solar orbit and the fifth just inside. The model is not so detailed as to predict the sizes $p$ and $q$ of the caustic ring cross-sections. Values of $p/a$ estimated from rises in the Milky Way rotation curve range from 0.016 to 0.1 \cite{MWcr}. When the Caustic Ring Model was proposed, the observational evidence claimed for caustic rings raised a puzzle, because caustic rings require the velocity field to have net overall rotation whereas the velocity field of ordinary CDM is irrotational. As mentioned already, axion Bose-Einstein condensation resolves this puzzle because cold dark matter axions thermalize sufficiently fast by gravitational self-interactions that almost all go to the lowest energy available state consistent with the angular momentum acquired by tidal torquing, and that state is one of rigid rotation on the turnaround sphere. It was shown in ref. \cite{case} that axion Bose-Einstein condensation justifies the Caustic Ring Model in all its aspects, including the fact that the caustic rings lie in the galactic plane, the magnitude of the parameter $j_{\rm max}$ and the pattern, Eq.~(\ref{crr}), of caustic ring radii. Ref. \cite{angmom} analyzed the behavior of the vortices that form in the axion BEC constituting galactic halos. Unlike the vortices in superfluid $^4$He, the vortices in axion BEC are mutually attractive and combine into one Big Vortex along the rotation axis of a disk galaxy.
This results in a modification of the Caustic Ring Model, since the infall was assumed to be isotropic \cite{Duffy} in the original formulation. The presence of a Big Vortex causes a depletion along the rotation axis and a compensating enhancement along the equatorial plane. The enhancement along the equatorial plane serves to explain why the bumps in galactic rotation curves attributed to caustic rings are so prominent. It had been noted in ref. \cite{MWcr} that the rises in the Milky Way rotation curve attributed to caustic rings are approximately a factor 4 larger than expected. The presence of a Big Vortex resolves this puzzle because caustic rings are formed in the flows of particles falling in and out close to the Galactic Plane, where the density is enhanced. When estimating the densities of the flows associated with the nearby 5th caustic ring in Section IV, we will assume that there is a Big Vortex and that the densities are enhanced by a factor 4 compared to the predictions of ref. \cite{Duffy}. \subsection{Summary of previous evidence} Several observations have been previously claimed as supporting evidence for the Caustic Ring Model. We briefly review these observations here. \subsubsection{Combined rotation curve} Caustic rings produce bumps at their locations in the rotation curves of galaxies \cite{crdm,sing}. The extended and well measured rotation curves of 32 galaxies, published in refs.~\cite{BeSa}, were analyzed in ref.~\cite{Kinney}. The radial coordinate $r$ in each rotation curve was rescaled according to \begin{equation} r \rightarrow \tilde{r} = r \left(\frac{220\text{ km/s}}{v_{\text{rot}}}\right) \; , \label{resc} \end{equation} to remove the dependence of the caustic ring radii on $v_{\rm rot}$; see Eq.~(\ref{crr}). The rescaled rotation curves were then added to each other. The combined rotation curve, shown in Fig. 3, has two peaks, one near $40$ kpc and one near $20$ kpc, with significances of $3.0 \sigma$ and $2.6\sigma$, respectively. The presence and locations of the two peaks are explained if they are caused by the $n=1$ and $n=2$ caustic rings of dark matter in those galaxies. \subsubsection{Milky Way rotation curve \cite{MWcr}} The inner ($r<r_\odot$) Milky Way rotation curve derived \cite{Clemens} from the Massachusetts-Stony Brook North Galactic Plane CO Survey \cite{CO} shows a series of sharp rises between $3$ and $8.5$ kpc. Each rise starts with an upward kink and ends with a downward kink, as expected for rises caused by caustic rings of dark matter \cite{sing}. The Caustic Ring Model predicts ten rises between 3 and 8.5 kpc, assuming the value of $j_{\rm max} = 0.18$ derived earlier from the ratio of baryonic to dark matter contributions to the Milky Way rotation curve at the solar radius \cite{STW,Duffy}. Allowing for ambiguities in identifying rises, the number of rises in the rotation curve between 3 and 8.5 kpc is in fact ten plus or minus one. When the predicted caustic ring radii are fitted to the radii where the rises start in the rotation curve, with $j_{\rm max}$ the only adjustable parameter, the remaining root mean square relative discrepancy is 3.1\% \cite{MWcr}. The outer rotation curve ($r>r_{\odot}$) is less well measured, but it does have a prominent rise between 12.7 and 13.7 kpc \cite{Olling,Binney}, which is the predicted location of the third caustic ring. Furthermore, a ring of stars, named the ``Monoceros Ring'', was discovered in the Galactic Plane at $r \sim 20$ kpc \cite{Newberg}.
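We note that Eq.~(\ref{crr}) indeed gives $a_2 \simeq 40~\mathrm{kpc}/2 = 20$ kpc for the nominal Milky Way parameters, coinciding with the radius of the Monoceros Ring.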
It is shown in refs.~\cite{Monoceros,starsgas} that this ring of stars is a plausible outcome of the second caustic ring's gravitational field. \subsubsection{IRAS triangle \cite{MWcr}} Looking in the direction tangent to a caustic ring, one may expect to observe the imprint of the ring's tricusp cross section on ordinary matter. A triangular feature seen in the IRAS map in the direction of Galactic Coordinates $(l,b)=(80^\circ,0^\circ)$ has been interpreted as the imprint of the $5$th caustic ring. Relevant parts of the IRAS map can be found at \url{https://www.phys.ufl.edu/~sikivie/triangle/}. The orientation of the IRAS triangle with respect to the Galactic Center and the Galactic Plane is consistent with the Caustic Ring Model. Also, the position of the triangle on the sky matches the position of the sharp rise (between 8.28 and 8.43 kpc) in the Milky Way rotation curve attributed to the $5$th caustic ring. However, no triangular feature along the other tangent direction to the $5$th caustic ring was found in the IRAS map. \subsection{M31 rotation curve} Our nearest major galactic neighbor, M31, is viewed almost edge-on from our location. Fig. 4 shows the rotation curve of M31 derived by Chemin et al. \cite{Chemin} from their HI survey of that galaxy. There are several bumps in this rotation curve on both the receding and approaching sides. The first or outermost bumps are approximately at $30$ kpc, the second ones at $15$ kpc, and the third at $10$ kpc. These numbers are consistent with the relation $a_n \sim \frac{1}{n}$ predicted by the Caustic Ring Model. The first bumps near $30$ kpc are well outside the last observed spiral arm (at $r \sim 25$ kpc) and they match perfectly on both sides. This strongly indicates the presence of a ring-like structure in the dark halo. We also note that, although the first bumps match perfectly, the second ones appear at slightly different radii on one side than the other, and the third bump is missing on the receding side. These discrepancies give an indication of how closely we may expect the idealizations of the model to be matched by reality. The fact that the third caustic ring does not show on the receding side may mean that it lies somewhat outside of the galactic plane there. It is worth mentioning that there are several sources for the rotation curve of M31 at smaller radii, summarized and compared in Ref.~\cite{Sofue}, and they do not all agree with each other. \subsection{Gaia triangles} We mentioned that the triangular feature seen in the IRAS map in the direction $(l,b)=(80^\circ,0^\circ)$ is not matched by a similar feature on the other side, i.e. at $(-80^{\circ},0^{\circ})$. However, the sky map recently released by the Gaia Collaboration \cite{Gaia1,Gaia2} has a triangular feature at $(l,b) = (-91^{\circ},0^{\circ})$ as well as a feature at $(80^{\circ},0^{\circ})$ matching the triangular feature in the IRAS map. See: \url{https://www.phys.ufl.edu/~sikivie/Gaiamap/} . We refer to the features as the right and left triangles, respectively. Relevant parts of the Gaia skymap are reproduced in Figs. 5 and 6. The shape and the orientation of the right triangle are consistent with the expected shape and orientation of a caustic ring imprint. The right triangle is farther from the Galactic Center and approximately 40\% smaller than the triangle on the left, but we believe both are imprints of the $5$th caustic ring.
We attribute the asymmetry in location to a displacement, by $5.8^\circ$ to the right, of the center of the caustic ring relative to the Galactic Center (note that the two tangent directions, at $l=80^\circ$ and $l=-91^\circ$, are centered on $l \simeq -5.5^\circ$); see Fig. 7 for an illustration. The possibility that the ring is not a perfect circle between the two tangent points is discussed in Section IV.B. The triangular features in the Gaia map are darker than their surroundings. Obscuration is generally due to dust. The obscuration is more uniform over the Gaia right triangle than over the Gaia left triangle. Assuming the background of star light to be uniform, we estimate that the optical depth for light absorption $\tau_d$ is of order 1.7 over the area of the right triangle, and varies approximately from 0.9 to 3.0 over the area of the left triangle. The 3D dust map constructed in ref.~\cite{Green1} on the basis of Gaia, Pan-STARRS 1 and 2MASS data also shows a feature in the direction of the left triangle; see Fig. 8. Unfortunately, the direction of the right triangle is not covered by this map. For a better understanding of the distribution of dust in the direction of the left triangle, Fig. 9 shows the accumulated reddening in successive distance segments. Panels (b) and (c) of this figure have the most accumulated dust in the direction of interest. It seems that most of the dust making up the feature in Fig. 8 is situated between $0.8$ kpc and $1.6$ kpc. This is consistent with the location of the caustic ring in the left tangent direction based on the calculations in Section IV. We performed an analysis of the statistical significance of the left and right triangular features in the Gaia skymap. The analysis is described in Appendix A. When the IRAS triangle was first discussed as a candidate imprint of the 5th caustic ring \cite{MWcr}, it was proposed that this feature is produced by gas and dust in thermal equilibrium in the gravitational field of the 5th caustic ring. However, in ref. \cite{starsgas} it is shown that the features produced in this way are not as sharp as the observed features. Here we propose instead that the features are produced by dust that is entrained by the axion flows forming the caustic ring. The entrained dust particles have the same trajectories as the axions and thus make the same caustics. This would explain why the triangular features are sharp. Section III discusses the entrainment of dust particles by cold axion flows. We also propose here an explanation of why the right triangle does not show in the IRAS map whereas the left triangle does. The caustic ring in the left tangent direction lies in the midst of the stellar activity associated with the Orion Spur of the Sagittarius Spiral Arm. The dust is heated there by stars and reradiates the heat in the infrared, where IRAS is sensitive. The caustic ring in the right tangent direction lies in a relatively quiet place. Its dust does not get heated sufficiently to show up in the IRAS map. \section{Dust entrainment} In this section we discuss physical processes that may be responsible for the formation of the triangles observed in the IRAS and Gaia maps of the Milky Way. As mentioned in the previous section, we interpret the triangles as due to the entrainment of dust by the axion flows forming the nearby caustic ring. The dust particles follow the same trajectories as the axions and hence form the same caustics. We propose this as an explanation for the sharpness of the triangles.
For the explanation to be plausible, it must be shown not only that dust is entrained by cold axion flows but also that the entrainment occurs in spite of the drag on the dust due to gas in the Galactic Disk. We first derive a formula for the friction on a dust particle moving with respect to a highly degenerate Bose-Einstein condensed axion fluid. The frictional force is inversely proportional to the temperature $T$ of the axions. We next discuss the temperature that axion dark matter has today, and compare the friction of the axion fluid with the drag due to gas. Finally we consider dust-dust collisions and find that they plausibly explain why the features seen in the IRAS and Gaia maps are triangular rather than tricuspy. \subsection{Friction on a cold axion fluid} Cold dark matter axions thermalize by gravitational self-interactions in a regime, called the ``condensed regime'', where their energy dispersion is much less than their thermalization rate. Their thermalization rate is of order \cite{CABEC,Erken} \begin{equation} \Gamma \sim 4 \pi G n m^2 \ell^2 \label{therm} \end{equation} where $G$ is Newton's gravitational constant, $m$ is the axion mass, $n$ is the number density of axions, and $\ell = {1 \over \delta p}$ their correlation length. $\delta p$ is the momentum dispersion of the axion fluid. By definition, $\Gamma = 1/\tau$ where $\tau$ is the time scale over which each axion may change its momentum by order $\delta p$. \subsubsection{Scale of inhomogeneity vs. correlation length} The correlation length $\ell$ is unrelated to the scale of inhomogeneity and should not be confused with it \cite{SC}. To make this distinction precise, consider a generic state of the cold axion dark matter fluid. Almost all axions are in a small number of particle states with almost identical wavefunctions. Let $\Psi_0(\vec{x}, t)$ be the wavefunction of one of those highly occupied states. It defines a flow of density $n(\vec{x}, t) = N_0 |\Psi_0(\vec{x}, t)|^2$, where $N_0$ is the number of particles in the state, and velocity field $\vec{v}(\vec{x}, t) = {1 \over m} \vec{\nabla}\, {\rm Im} \ln \Psi_0(\vec{x}, t)$. Starting with an arbitrary $\Psi_0$, one may construct a complete orthonormal set of wavefunctions \cite{SC} \begin{equation} \Psi_{\vec{k}}(\vec{x}, t) = \Psi_0(\vec{x}, t) e^{i \vec{k} \cdot \vec{\chi}(\vec{x}, t)} \label{ONC} \end{equation} where the $\vec{\chi}(\vec{x}, t)$ are the co-moving coordinates implied by the velocity field $\vec{v}(\vec{x}, t)$ and the requirement that the particle density is constant in co-moving coordinate space. For small $\vec{k}$ the particle states $\Psi_{\vec{k}}$ are very close to $\Psi_0$, having the same density field and almost the same velocity field. We may write the axion field in the non-relativistic limit as \begin{equation} \phi(\vec{x}, t) = \sum_{\vec{k}} {1 \over \sqrt{2m}} [e^{- i m t} \Psi_{\vec{k}}(\vec{x}, t) a_{\vec{k}}(t) + e^{ i m t} \Psi_{\vec{k}}^*(\vec{x}, t) a_{\vec{k}}^\dagger(t)] \label{expand} \end{equation} where $a_{\vec{k}}(t)$ and $a_{\vec{k}}^\dagger(t)$ are annihilation and creation operators satisfying canonical equal-time commutation relations. Let $N_{\vec{k}} = \langle \Phi| a_{\vec{k}}^\dagger a_{\vec{k}}|\Phi \rangle$ be the particle state occupation numbers in a state $|\Phi\rangle$ of the axion fluid.
The correlation length in comoving coordinates is $\ell = {1 \over \delta k}$ where \begin{equation} \delta k = \sqrt{{1 \over N} \sum_{\vec{k}} \vec{k}\cdot\vec{k} N_{\vec{k}}} \label{momdis} \end{equation} and $N = \sum_{\vec{k}} N_{\vec{k}}$ is the total number of particles. If a Bose-Einstein condensate (BEC) forms, a fraction of order one of the total number $N$ of particles goes to the same particle state, e.g. the state of wavefunction $\Psi_0$. In that case the two-point axion field equal-time correlation function is \cite{SC} \begin{equation} \langle \Phi| \phi(\vec{x}, t) \phi(\vec{y}, t) | \Phi \rangle = {N_0 \over 2 m}(\Psi_0^*(\vec{x}, t) \Psi_0(\vec{y}, t) + c.c.) + ... \label{corrf} \end{equation} where $N_0 \sim N$ is the number of particles in the condensate, and the dots represent the contributions, which fall off exponentially or as a power law, of particles that are not in the condensate. Eq. (\ref{corrf}) shows that a BEC is `perfectly' correlated over its full extent, i.e. its correlation length $\ell$ is the size of the region over which $\Psi_0(\vec{x}, t)$ has support. In contrast, the scale of inhomogeneity $s$ of the condensate is the distance scale over which the density $n(\vec{r}, t)$ varies by order one. $\ell$ can be arbitrarily large compared to $s$. In the axion case, $s$ may be the size of mini-clusters or of galaxies whereas $\ell$ may be the size of the horizon. \subsubsection{Thermal relaxation} In estimating the frictional force of the axion fluid on a dust particle, we will assume that the axion fluid has thermalized completely. However, it is not clear to us to what extent the axion fluid has actually thermalized; the assumption of complete thermalization is made to allow an estimate of the aforementioned drag. If fully thermalized, the axion fluid consists of a BEC of $N_0$ axions in a single state, of wavefunction $\Psi_0$, plus a thermal distribution of axions with chemical potential $\mu = m$ and temperature $T$. The wavefunction $\Psi_0$ is time-dependent on the Hubble time scale since this is the time scale over which large scale structure grows by gravitational instability. The thermal relaxation time scale is much shorter than that, as we now show for the example of the cold axion flow forming the nearby caustic ring. The flow extends over a region of order 100 kpc in size. Its correlation length is that size, $\ell \sim$ 100 kpc, regardless of the inhomogeneities in it. Its average energy density is $nm \sim 10^{-26}$ gr/cc over that length scale \cite{Duffy}. Eq.~(\ref{therm}) implies that its relaxation rate is \begin{equation} \Gamma \sim {10^4 \over {\rm sec}} \left({m \over 10^{-5}~{\rm eV}}\right)~~\ . \label{rate} \end{equation} The relaxation rate exceeds the Hubble rate by some 21 orders of magnitude. The relaxation process can be described heuristically as follows. The axion fluid consists of a huge number of axions occupying a small number of states with the wavefunctions given in Eq.~(\ref{ONC}). The average gravitational field produced by the axion fluid is $g(\vec{x},t)$, sourced by the density $m n(\vec{x},t) = m N_0 |\Psi_0(\vec{x},t)|^2$. However, the actual gravitational field fluctuates around this average because the actual density is modulated over a distance scale $\ell = 1/\delta k$. The root mean square deviation of the gravitational field from its average value $g(\vec{x},t)$ is \begin{equation} \sigma_g \sim 4 \pi G n m \ell~~\ . \label{grav} \end{equation} As discussed in ref.
\cite{Erken}, $\sigma_g$ is the outcome of a random walk, the sum of many terms with random phases. The average gravitational field $g(\vec{x},t)$ determines the dynamical evolution of the axion fluid. Since we are interested in the relaxation of the axion fluid, as opposed to its overall dynamical evolution, we ignore $g(\vec{x},t)$ henceforth. We set it equal to zero by adopting a reference frame in which the fluid is freely falling. The gravitational field fluctuations change all the particle momenta by amounts of order $\delta p$ in a time $\tau \sim {\delta p \over m \sigma_g}$. Substituting Eq.~(\ref{grav}) yields Eq.~(\ref{therm}) as an estimate of the relaxation rate. Relaxation results in momentum distributions of ever increasing likelihood. Although far from thermal equilibrium to start with, the axion fluid may reach near-thermal equilibrium by gravitational self-interactions before the present. It may also absorb heat from other species, e.g. from baryons \cite{Erken}. There is, however, a maximum temperature at which the axion fluid may become fully thermalized by gravitational self-interactions. Since the axions may change their momenta by order $\delta p$ in a time of order $\tau$, they can at most reach velocities of order \begin{equation} v_m \sim \sigma_g t_0 \sim 4 \pi G n m t_0 \ell \sim 0.4~\left({\Omega_a \over 0.27}\right) \left({\ell \over t_0}\right)~~\ . \label{vm} \end{equation} Here we have set $nm = \Omega_a \rho_{\rm crit}(t_0)$, where $\rho_{\rm crit}(t_0)$ is the present critical energy density for closing the universe and $\Omega_a$ is the fraction thereof in axions. This implies a maximum temperature \begin{equation} T_m \sim {1 \over 3} m v_m^2 \sim 6 \cdot 10^{-2}~m \left({\ell \over t_0}\right)^2 \left({\Omega_a \over 0.27}\right)^2 \label{Tm} \end{equation} at which axions may become fully thermalized before today. \subsubsection{Frictional force} Since the gravitational field produced by the axion fluid at any space-time point is a sum of many terms with random phases, with $\sigma_g$ the average outcome, the probability that the gravitational field has a value between $g$ and $g + dg$ is \begin{equation} {\cal P}(g) dg = {1 \over \sqrt{2 \pi} \sigma_g} e^{-{1 \over 2}({g \over \sigma_g})^2}~dg \label{prob} \end{equation} by the central limit theorem. Eq.~(\ref{prob}) holds in the absence of the dust particle, whose presence we have ignored so far. In the presence of the dust particle, the random walk becomes biased toward gravitational fields that slow down the dust particle with respect to the axion fluid, because such a slowdown is accompanied by an increase in the energy, and therefore the entropy, of the axion fluid. Let $M$ be the mass of the dust particle and $\vec{v} = v \hat{n}$ its velocity with respect to the axion fluid. A gravitational field of strength $-g \hat{n}$ for the duration $\tau$ slows down the dust particle by $\Delta v = - g \tau$ and therefore increases the energy of the axion fluid by $\Delta E = M v |\Delta v| = M g \tau v$. Its entropy increases by $\Delta S = {1 \over T} \Delta E$. As Boltzmann pointed out, a $\Delta S$ increase in entropy signifies an increase by the factor $e^{\Delta S}$ in the number of available microstates, and therefore an increase by that factor in the relative probability of the gravitational field that causes it.
Thus in the presence of the dust particle, the probability distribution of the gravitational field at the dust particle's location is modified from Eq.~(\ref{prob}) to \begin{equation} {\cal P}^\prime(g) dg = C e^{-{1 \over 2}({g \over \sigma_g})^2 + \Delta S}~dg \label{mprob} \end{equation} where \begin{equation} \Delta S = {1 \over T} M g \tau v = {M v \over m T \ell}{g \over \sigma_g} \label{DS} \end{equation} and $C$ is a normalization constant determined by the requirement that the total probability is one. The deceleration $d$ of the dust particle is the average of the $g$-distribution in Eq.~(\ref{mprob}). One readily finds \begin{equation} d = {M v \over m T \ell} \sigma_g \sim 4 \pi G n M v {1 \over T}~~\ . \label{dec} \end{equation} This formula for friction differs from the standard formula for dynamical friction \cite{Chandra,BT}, and applies in different circumstances. Both formulae, Eq.~(\ref{dec}) and the standard formula whose RHS is $4 \pi G^2 n m M v^{-2} \ln(\Lambda)$ where $\Lambda$ is the ratio of maximum to minimum impact parameters, describe the drag on a heavy mass $M$ moving through a fluid, as a result of the gravitational interactions of the heavy mass with the particles in the fluid. The standard formula assumes that the particles in the fluid do not interact among themselves. Only their gravitational interaction with the heavy mass is taken into account. They are scattered by the heavy mass plowing through and as a result remove kinetic energy from it, slowing it down. Eq.~(\ref{dec}) assumes instead that the particles in the fluid interact with one another sufficiently strongly that they thermalize while interacting gravitationally with the mass $M$. Eq.~(\ref{dec}) is not expected to be valid when $M$ is large, such as the mass of a star or even a small planet, because the Gaussian distribution in Eq.~(\ref{mprob}) does not extend to arbitrarily large values of $g$. For example, for $M = 10^{25}$ gr, $m = 10^{-5}$ eV, $T = 10^{-9}$ eV, and $\ell = 100$ kpc, Eq.~(\ref{dec}) would need the Gaussian to extend to $g \sim 10^{44} v \sigma_g$, which it cannot possibly reach by the aforementioned random walk. \subsection{Axion temperature} The correlation length $\ell$ of the cold axion fluid, just after it was produced by vacuum realignment during the QCD phase transition, is of order the horizon at that time. Subsequently $\ell$ is stretched by the expansion of the universe. The relaxation rate $\Gamma$, decreasing as $a(t)^{-1}$ where $a(t)$ is the scale factor, exceeds the Hubble expansion rate $H(t) = {1 \over 2 t}$ when the photon temperature is of order 1 keV \cite{CABEC,Erken}. The cold axions form a BEC then and $\ell$ grows to be of order the horizon at that time. Whereas Bose-Einstein condensation occurs on the $\tau$ time scale, full thermalization takes much longer \cite{Erken,BJ}. Nonetheless, as was indicated above, nearly full thermalization may occur before the present and temperatures as high as $6 \cdot 10^{-2}~m$ may possibly be reached. The final temperature depends on the amount of heat that the axion fluid absorbs and thermalizes. \subsubsection{Heat from axions} The first and most obvious source of heat is the kinetic energy the cold axions themselves have because the axion field is inhomogeneous on the horizon scale at the QCD phase transition. We assume here that the axion field was not homogenized by inflation, so that the kinetic energy of cold axions is the highest possible.
We will see that even then the heat associated with the initial kinetic energy of cold axions is negligible compared to the heat the axion fluid absorbs by cooling baryons (see below). The axion kinetic energy density is \begin{equation} \rho_{a,{\rm kin}}(t) = {(\delta p (t))^2 \over 2 m} n(t) \sim \Omega_a \rho_{\rm crit}(t_0) {1 \over 2 t_1^2 m^2} {a(t_1)^2 \over a(t)^5} \label{akin} \end{equation} where $a(t)$ is the cosmological scale factor normalized such that $a(t_0) = 1$, and \cite{axdm} \begin{equation} t_1 \simeq 1.7 \cdot 10^{-7}~{\rm sec} \left({10^{-5}~{\rm eV} \over m}\right)^{1 \over 3} \label{t1} \end{equation} is the time at which the axion mass effectively turns on during the QCD phase transition. The last step in Eq.~(\ref{akin}) follows from $\delta p(t) \sim {1 \over t_1} {a(t_1) \over a(t)}$, as is the case if inflation does not homogenize the axion field. Let us call $t_*$ the time when cold axions fully thermalize among themselves. They have at that time temperature $T_*$: \begin{equation} \rho_{a,{\rm kin}}(t_*) = 0.128 (m T_*)^{3 \over 2} T_* \label{encon} \end{equation} provided $T_* \ll m$, as will be the case. Combining Eqs.~(\ref{akin}) and (\ref{encon}) yields \begin{equation} T = T_* a(t_*)^2 \sim 10^{-14}~{\rm eV}~\Omega_a^{2 \over 5} \left({10^{-5}~{\rm eV} \over m}\right)^{19 \over 15} \label{aT} \end{equation} for the axion temperature today. Note that $T$ does not depend on $t_*$ because the axions are non-relativistic both before and after they thermalize. \subsubsection{Heat from baryons} The cold axion fluid may absorb heat from other species. Ref.~\cite{Li} considered the possibility that the axions cool the photons and offered this as an explanation for the Li anomaly in primordial nucleosynthesis. However, photon cooling can only occur marginally because it requires the correlation length to be as large as the horizon whereas by causality the correlation length must be at least somewhat shorter. The observations of the cosmic microwave background anisotropies by the Planck Collaboration indicate an effective number of neutrinos \cite{Planck} consistent with the absence of photon cooling. So we ignore heat from photon cooling. On the other hand, cooling of baryons by axion BEC occurs robustly according to the arguments of ref. \cite{Erken}. The reported observation by the EDGES Collaboration \cite{EDGES} of the trough in the cosmic microwave radiation spectrum due to its absorption by neutral hydrogen at cosmic dawn indicates that baryons are significantly colder at that time than expected under standard assumptions. The EDGES observation may be viewed as confirmation that axion BEC did indeed cool baryons \cite{Tmat,Houston}. If axions and baryons reach full kinetic equilibrium before today, a lower limit on the heat transfer from baryons to axions is the kinetic energy that baryons would have today in the absence of axion cooling. Keeping in mind that baryons and photons decouple from each other at a redshift $z_{\rm dec} \sim$ 160, the baryon kinetic energy density today, in the absence of axion cooling, is \begin{equation} \rho_{b,{\rm kin}}(t_0) \simeq {3 \over 2} T_\gamma(t_0) \Omega_b \rho_{\rm crit}(t_0) {1 \over m_b} {1 \over 1 + z_{\rm dec}}~~\ , \label{bkin} \end{equation} where $T_\gamma(t_0) \simeq 2.7$ K is the present photon temperature, $\Omega_b \simeq 0.05$ is the present energy density fraction in baryons, and $m_b \simeq$ GeV is an average baryon mass.
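As a quick numerical cross-check of Eq.~(\ref{bkin}), and of the temperature bound derived from it in the next paragraph, one may evaluate both in a few lines (a sketch; the value $\rho_{\rm crit}(t_0) \simeq 9.2 \cdot 10^{-30}$ gr/cc for $h \simeq 0.7$ and the unit conversions are our assumptions, not quantities quoted above):
\begin{verbatim}
# Cross-check of Eq. (bkin) and of the bound Eq. (Tb); h ~ 0.7 assumed.
hbar_c = 1.9732e-5                # eV cm
T_gamma = 2.7 * 8.617e-5          # present photon temperature in eV
Omega_b, m_b, z_dec = 0.05, 1.6726e-24, 160.0   # m_b in grams
rho_crit = 9.2e-30                # g/cc for h = 0.7 (our assumption)
n_b = Omega_b * rho_crit / m_b    # baryon number density, cm^-3
rho_b_kin = 1.5 * T_gamma * n_b / (1.0 + z_dec) # eV per cm^3
# setting 0.128 (m T)^{3/2} T = rho_b_kin in natural units (eV^4):
m_axion = 1.0e-5                  # axion mass in eV
T = (rho_b_kin * hbar_c**3 / (0.128 * m_axion**1.5))**0.4
print(rho_b_kin, T)               # T ~ 0.7e-7 eV, as in Eq. (Tb)
\end{verbatim}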
Using Eq.~(\ref{encon}), baryon cooling by axions implies \begin{equation} T > 0.7 \cdot 10^{-7}~{\rm eV} \left({10^{-5}~{\rm eV} \over m}\right)^{3 \over 5} ~~\ , \label{Tb} \end{equation} if axions and baryons reach full kinetic equilibrium. Depending on the axion mass, the RHS of Eq.~(\ref{Tb}) exceeds or saturates our earlier estimate Eq.~(\ref{Tm}) of the highest temperature at which the axion fluid may become fully thermalized. \subsection{Dust entrainment} As mentioned in Section II, dark matter axions rethermalize sufficiently fast by gravitational self-interactions while they fall in and out of a galactic gravitational potential well that they almost all go to the lowest energy state consistent with the angular momentum they have acquired by tidal torquing interactions with neighboring galaxies. That state is one of rigid rotation on the turnaround sphere. As a result dark matter axions fall in with a rotational velocity field and make caustic rings. We will assume here, for simplicity, that the dark matter is entirely axions, or axion-like particles (as opposed to a mixture of axions and ordinary CDM). The infalling axions entrain gas and dust, but not stars, for the reason stated at the end of subsection III.A.3. The gas is not collisionless, of course. Gas falling in with the axions collides with gas already in the galaxy and soon leaves the phase-space sheet on which the axions lie. Whereas the angular momentum of the gas is approximately conserved, the kinetic energy associated with its radial motion is dissipated into radiation. As a result the gas settles in a rotating disk, where it participates in star formation. The stars formed rotate along with the gas. Dust is produced in the late stages of stellar evolution. We consider dust particles of typical size $D \sim 5 \cdot 10^{-5}$ cm \cite{Draine}, and mass $M \sim 3 \cdot 10^{-13}$ gr. Eq.~(\ref{dec}) implies that the speed of a dust particle relative to the axion flow decreases according to $v(t) = v(0) e^{- \gamma t}$ with \begin{equation} \gamma \sim 4 \pi G \rho {M \over m}{1 \over T} \sim {4 \cdot 10^4 \over t_0} \left({\rho \over 10^{-26}~{\rm gr/cc}}\right) \left({10^{-5}~{\rm eV} \over m}\right) \left({M \over 3 \cdot 10^{-13}~{\rm gr}}\right) \left({10^{-9}~{\rm eV} \over T}\right)~~\ , \end{equation} implying that the dust particle is entrained in the absence of any other forces acting on it, even if the axion temperature is as high as we believe it can be. We now consider the effect of dust-gas and dust-dust collisions. \subsubsection{Dust-gas collisions} The density of gas in the solar neighborhood is approximately $\rho_g \sim 3 \cdot 10^{-24}$ gr/cc, corresponding to of order one atom or molecule per cm$^3$ \cite{BT}. The cross-section for hard-scattering on a dust particle is of order $\sigma \sim D^2 \sim 2.5 \cdot 10^{-9}$ cm$^2$. A dust particle moving with velocity $v_g$ relative to the gas experiences a deceleration \begin{eqnarray} d_g &\sim& {\rho_g \sigma (v_g)^2 \over M} = 2.2 \cdot 10^{-5}~{{\rm cm} \over {\rm sec}^2} \left({\rho_g \over 3 \cdot 10^{-24}~{\rm gr/cc}}\right)\cdot \nonumber\\ &\cdot& \left({\sigma \over 2.5 \cdot 10^{-9}~{\rm cm}^2}\right) \left({3 \cdot 10^{-13}~{\rm gr} \over M}\right) \left({v_g \over 300~{\rm km/s}}\right)^2~~\ . \label{dg} \end{eqnarray} At the caustic the axions flow relative to the gas with velocity 300 km/s in the direction of Galactic Rotation. The speed of the dust particle with respect to the axion flow is $v = 300~{\rm km/s} - v_g$.
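The numerical coefficients in the entrainment rate $\gamma$ above and in Eq.~(\ref{dg}) can be checked directly. In the following sketch the restoration of the factors of $\hbar$ and $c$ in $\gamma = 4\pi G n M \hbar/T$, and the value $t_0 \simeq 4.35 \cdot 10^{17}$~s, are our bookkeeping assumptions:
\begin{verbatim}
import math
G, hbar, eV, c = 6.674e-8, 1.055e-27, 1.602e-12, 3.0e10   # cgs units
# entrainment rate gamma = 4 pi G n M (hbar/T), with n = rho/m:
rho, m, M, T = 1e-26, 1e-5 * eV / c**2, 3e-13, 1e-9 * eV  # grams, erg
gamma = 4 * math.pi * G * (rho / m) * M * hbar / T
print(gamma * 4.35e17)            # ~4e4 in units of 1/t0, as quoted
# dust-gas deceleration, Eq. (dg), at v_g = 300 km/s:
rho_g, sigma, v_g = 3e-24, 2.5e-9, 3.0e7                  # cgs
print(rho_g * sigma * v_g**2 / M) # ~2.2e-5 cm/s^2
\end{verbatim}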
The acceleration of the dust particle in the direction of Galactic Rotation due to its friction on the axion flow is \begin{equation} d \sim 2.7 \cdot 10^{-6}~{{\rm cm} \over {\rm sec}^2} \left({\rho \over 10^{-26}~{\rm gr/cc}}\right) \left({10^{-5}~{\rm eV} \over m}\right) \left({M \over 3 \cdot 10^{-13}~{\rm gr}}\right) \left({10^{-9}~{\rm eV} \over T}\right) \left({v \over 300~{\rm km/s}}\right) \label{da} \end{equation} according to Eq.~(\ref{dec}). Setting $d_g = d$ yields a quadratic equation for $v_g$ whose relevant solution is \begin{equation} v_g = [\sqrt{\xi^2 + 2 \xi} - \xi]~300~{\rm km/s} \label{vg} \end{equation} with \begin{eqnarray} \xi &\sim& 6 \cdot 10^{-2} \left({\rho \over 10^{-26}~{\rm gr/cc}}\right) \left({10^{-5}~{\rm eV} \over m}\right) \left({M \over 3 \cdot 10^{-13}~{\rm gr}}\right)^2\cdot \nonumber\\ &\cdot& \left({10^{-9}~{\rm eV} \over T}\right) \left({2.5 \cdot 10^{-9}~{\rm cm}^2 \over \sigma}\right) \left({3 \cdot 10^{-24}~{\rm gr/cc} \over \rho_g}\right)~~\ . \label{xi} \end{eqnarray} Several factors on the RHS of Eq.~(\ref{xi}) are poorly known, including the axion mass and the temperature of the axion fluid. We consider two specific cases for illustrative purposes. If $m \sim 10^{-5}$ eV and the axion fluid is not heated by the baryons, or anything else, so that $T \sim 10^{-14}$ eV, then $\xi \sim 6 \cdot 10^3$ and therefore $v \sim $ 26 m/s. The dust particle follows the axion fluid very closely in this case. If on the other hand $m \sim 10^{-6}$ eV and $T \sim 6 \cdot 10^{-8}$ eV, the latter being our estimate Eq.~(\ref{Tm}) of the highest temperature the axions may become thermalized at when $m \sim 10^{-6}$ eV, then $\xi \sim 0.01$ and $v_g \sim 39$ km/s. The dust particles move more slowly than the axion fluid in this case; their velocity is 259 km/s in the Galactic Rest Frame vs. 520 km/s for the axion fluid. However, even though they move more slowly, we may still expect the dust particles to follow the same trajectories as the axion fluid and hence form the same caustics. Collisions with gas diffuse the dust flow by imparting random transverse velocities to the dust particles, but not so much as to prevent caustic formation. Indeed, the number of dust-gas collisions during one pass of a dust particle through the Galactic Disk is of order $10^{14}$, each collision producing a random transverse velocity of order $v_g m_g/M$, which is less than $3 \cdot 10^{-10}$ km/s. So the rms transverse velocity acquired is less than 3 m/s. \subsubsection{Dust-dust collisions} The density $n_d$ of the dust flows forming the triangular features in the Gaia map may be estimated from the optical depth $\tau_d \sim 2$ for the absorption of light over the area of the triangles. The absorption length $\lambda \sim {1 \over n_d \sigma} \sim L/\tau_d$ where $L \sim$ 1 kpc is the depth over which light travels through the tricusp volume of the caustic ring in the tangent directions and $\sigma \sim D^2 \sim 2.5 \cdot 10^{-9}~{\rm cm}^2$ is the absorption cross-section. This implies $n_d \sim 270/({\rm km})^3$. We take the cross-section for hard scattering of a dust particle on a dust particle to be of order $\sigma_d \sim 3~D^2 \sim 0.75 \cdot 10^{-8}~{\rm cm}^2$. Hence the mean free path between dust-dust scatterings is $\lambda_d \sim {1 \over \sigma_d n_d} \sim$ 160 pc whereas the distance traveled transversely within the tricusp, where each flow is one of four distinct flows, is of order 100 pc, implying an optical depth for hard scattering of order 0.6.
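These dust-dust estimates follow from elementary arithmetic and may be reproduced as follows (a minimal sketch of the order-of-magnitude chain just described):
\begin{verbatim}
kpc = 3.086e21                      # cm
tau_d, L, sigma = 2.0, 1.0 * kpc, 2.5e-9  # optical depth, path, cm^2
n_d = tau_d / (L * sigma)           # dust density from lambda ~ L/tau_d
print(n_d * 1e15)                   # ~260 per km^3 (quoted: ~270)
sigma_d = 3.0 * sigma               # hard-scattering cross-section ~ 3 D^2
lambda_d = 1.0 / (n_d * sigma_d)    # mean free path between scatterings
print(lambda_d / kpc)               # ~0.17 kpc (quoted: ~160 pc)
print(0.1 * kpc / lambda_d)         # optical depth over ~100 pc: ~0.6
\end{verbatim}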
That the optical depth is of order one provides a plausible explanation why the caustic imprints seen in the IRAS and Gaia maps are triangular rather than tricuspy. Indeed, the trajectories forming the cusps encounter very high dust densities there. All trajectories through the tricusp pass through caustic folds, some trajectories participate in the formation of caustic folds, and among those some also participate in the formation of caustic cusps. Passing through a fold, or participating in its formation, does not increase the optical depth for scattering on other dust by much, because the density at a fold increases as $n \propto {1 \over \sqrt{h}}$ where $h$ is the distance to the fold surface, and the ${1 \over \sqrt{h}}$ singularity is integrable. On the other hand, the density diverges as ${1 \over h}$ when a cusp is approached in the plane of the cusp \cite{sing}, leading to a logarithmic divergence in the optical depth for the flows forming the cusp. Moreover, three flows participate in the formation of a cusp whereas only two flows participate in the formation of a fold. Since the optical depth is of order one, it is plausible that dust-dust collisions mess up the formation of cusps without messing up the formation of folds. The sides of a triangular feature associated with a caustic ring seen in a tangent direction indicate then the location of folds whereas the cusps are effectively erased. A prediction of this interpretation is that the appearance of a tricusp imprint becomes more cuspy when the optical depth for obscuration by the dust in the feature is less, i.e. fainter caustic ring imprints will look more tricuspy. \section{Big Flow properties} We use the Gaia triangles to determine the properties of the four prominent flows on Earth associated with the nearby caustic. Knowledge of their densities and velocities helps axion dark matter searches, particularly those using the cavity method \cite{cavity} and the echo method \cite{Arza}. We call the four flows Big, Little, Up and Down. We estimate the Sun's position relative to the nearby caustic ring by interpolating between the triangular features observed in the left and right tangent directions to the ring. Then, from the Sun's position relative to the nearby caustic, we derive the flow velocities with respect to the Local Standard of Rest (LSR). We also estimate the flow densities, and our errors on the various flow properties. From the triangles, we find that the caustic ring center is displaced from the Galactic Center by $5.8^\circ$ to the right. However, our distance from the caustic ring center $r_{\odot C}$ is not known. If our distance to the inner tangent point is between 0.8 and 1.2 kpc (based on panel b of Fig.~9), then $r_{\odot C}$ has a value between 7.2 and 10.8 kpc. We assume $r_{\odot C}$ = 8.5 kpc in our estimates below. We also assume the velocity of the LSR to be $v_{\rm rot} = 220$ km/s in the direction of Galactic Rotation. All our distances scale as $r_{\odot C}$, and all our velocities as $v_{\rm rot}$. For this reason, we do not quote any errors associated with imperfect knowledge of $r_{\odot C}$ and $v_{\rm rot}$. If $r_{\odot C}$ is found to differ from 8.5 kpc, one should simply multiply all our distances ($a$, $p$, $q$, \ldots) by $r_{\odot C}/(8.5~{\rm kpc})$. Likewise for $v_{\rm rot}$. Our estimates of the flow velocities are independent of $r_{\odot C}$.
Our estimates of the flow directions are independent of both $r_{\odot C}$ and $v_{\rm rot}$ since they are determined by ratios of velocity components and each component scales as $v_{\rm rot}$. There is however an error in the flow directions associated with the uncertainty of the ratio $v/v_{\rm rot}$, where $v$ is the speed of the flows forming the 5th caustic ring. According to the model \cite{Duffy}, $v$ = 520 km/s when $v_{\rm rot}$ = 220 km/s. $v$ = 520 km/s is the central value we use here. From the success of the model in describing the pattern of caustic ring radii in the Milky Way \cite{MWcr}, we estimate that the error on $v/v_{\rm rot}$ is less than 3\%. \subsection{Previous estimates} When only the left triangular feature in the IRAS sky map was known, the radius $a$ and width $p$ of the $5$th caustic ring were estimated \cite{MWcr} as: \begin{eqnarray} a &\simeq& 8.31 \ \text{kpc} \nonumber \\ p &\simeq& 130 \ \text{pc} \ , \label{ap_old} \end{eqnarray} by assuming the ring to be axially symmetric and centered at the Galactic Center. The Sun would then be outside the tricusp, implying that there are two flows through our location, called Big and Little. Their densities and velocities were estimated to be~\cite{Duffy}: \begin{eqnarray} d_+ &\simeq& 1.5 \times 10^{-24} \ \frac{\text{g}}{\text{cm}^3} \nonumber \\ d_- &\simeq& 0.15 \times 10^{-24} \ \frac{\text{g}}{\text{cm}^3} \nonumber \\ \vec{v} _\pm &\simeq& (\pm 120 \ \hat{\rho} + 505 \ \hat{\phi}) \ \frac{\text{km}}{\text{s}} \label{dv_old2} \end{eqnarray} where $\hat{\rho}$ points radially outward from the Galactic Center and $\hat{\phi}$ points in the direction of Galactic Rotation. Because of an ambiguity in the sign of $\eta_0$, it is not clear which flow has the larger density. If $\eta_0 < 0$, the flow of velocity $\vec{v}_\pm$ has density $d_\mp$. If $\eta_0 > 0$, the flow of velocity $\vec{v}_\pm$ has density $d_\pm$. In Ref.~\cite{angmom} it was argued that the formation of a Big Vortex in the axion dark matter fluid results in an enhanced dark matter density in the caustic rings. Any rotating BEC must have vortices. Whereas the vortices in superfluid $^4 \text{He}$ are repulsive, those in axion BEC are attractive. Most of them merge to form a Big Vortex along the symmetry axis of the galaxy, enhancing the dark matter density in the Galactic Plane. We assume this enhancement to be approximately a factor of 4 because this accounts for the prominence of the bumps in the Milky Way rotation curve attributed to caustic rings \cite{MWcr,angmom}. The density estimates in Eq.~(\ref{dv_old2}) assume isotropic infall \cite{Duffy}. Assuming the presence of a Big Vortex, they are modified to $d_+ \sim 6 \times 10^{-24}$ and $d_- \sim 0.6 \times 10^{-24}$ g/cc. Let us emphasize that densities near caustics have in any case large uncertainties because they are sensitive to position. \subsection{The Sun's position relative to the nearby caustic} The Galactic Coordinates $(l,b)$ of the vertices of the left triangle as observed in both the IRAS and Gaia sky maps are: $(77.86^\circ \pm 0.04 ^\circ , 3.3^\circ \pm 0.5 ^\circ)$, $(83.1^\circ \pm 0.4^\circ , 0.25^\circ \pm 0.15^\circ)$, $(77.85^\circ \pm 0.04 ^\circ , -2.4^\circ \pm 0.2 ^\circ)$. Those of the right triangle observed in the Gaia sky map are: $(-89.35^\circ \pm 0.05^\circ, 1.25^\circ \pm 0.05^\circ)$, $(-92.95^\circ \pm 0.05^\circ , -0.65^\circ \pm 0.05^\circ)$, $(-89.55^\circ \pm 0.05^\circ , -2.27^\circ \pm 0.03^\circ)$.
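The displacement of the ring center and the ring radius quoted in the next paragraph can be reproduced from these coordinates under the circularity assumption: for a circle, the direction to the center bisects the two inner tangent directions, and $a = r_{\odot C} \sin\psi$ where $2\psi$ is the angle between them. The following sketch is our reconstruction of this geometry (not spelled out above), using the mean longitudes of the inner vertices:
\begin{verbatim}
import math
l_left  = 0.5 * (77.86 + 77.85)     # left inner tangent longitude (deg)
l_right = 0.5 * (-89.35 - 89.55)    # right inner tangent longitude (deg)
l_center = 0.5 * (l_left + l_right) # bisector of the two tangents
psi = math.radians(0.5 * (l_left - l_right))  # half-opening angle
a = 8.5 * math.sin(psi)             # ring radius for r_sunC = 8.5 kpc
print(l_center, a)                  # -5.80 deg and a = 8.448 kpc
\end{verbatim}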
The right triangle is located farther from the Galactic Center compared to the left triangle. We interpret this to mean that the center of the $5$th caustic ring is displaced from the Galactic Center to the right as shown in Fig. 7. Assuming that the inner side of the caustic ring is an exact circle between the left and right tangent points, its center is located in the direction of Galactic Longitude $l = -5.80^\circ \ ^{+ 0.05^\circ} _{- 0.04^\circ}$. For the nominal value of our distance from the caustic ring center, $r_{\odot C}=8.5$ kpc, and assuming the caustic ring to be circular, its radius is $a = (8.448 \ ^{+ 0.001} _{-0.001})$ kpc. Assuming that the caustic ring between the two tangent points lies in the plane determined by its center and the two midpoints of the inner edges of the triangles, we find that the caustic ring plane is tilted to the right by $\theta_t = 0.48^\circ \ ^{+ 0.20^\circ} _{-0.20^\circ}$ relative to the Galactic Plane. We discuss below the errors due to failure of the assumption that the caustic ring is exactly circular and planar between the two tangent points, and include them in the error budget of the flow velocity vectors. The location of the Sun relative to the caustic is specified by the horizontal and vertical sizes of the tricusp near the Sun ($p_\odot$ and $q_\odot$), the horizontal distance of the Sun ($x_\odot = r_{\odot C} - a$) from the inner side of the tricusp and its perpendicular distance ($z_\odot$) from the plane. We calculate the central values of these quantities based on the following assumptions: 1) the observed features are triangles inscribed in the tricusp; see Fig. 10, 2) the caustic ring is planar with its center displaced from the Galactic Center, 3) it has constant $a$, whereas $p$ and $q$ vary linearly between the left and right tangent directions. The resulting central values and errors are: \begin{eqnarray} p_\odot &=& (78.5 \ ^{+ 23.7} _{- 20.3}) \ {\rm pc} \ , \nonumber \\ q_\odot &=& (113.5 \ ^{+ 10.5} _{- 10.4}) \ {\rm pc} \ , \nonumber \\ x_\odot &=& (52.1 \ ^{+ 0.7} _{- 8.6}) \ {\rm pc} \ , \nonumber \\ z_\odot &=& (0.8 \ ^{+ 0.5} _{- 0.4}) \ {\rm pc} \ . \label{values_err} \end{eqnarray} We now briefly discuss the various sources of uncertainty. The largest source of uncertainty derives from ambiguity in estimating the horizontal size $p$ of the tricusp. If the observed triangles represent the cross-sections of the caustic ring near the inner tangent points, the linearly interpolated value of $p$ near the Sun is $p_\odot = ( 95.6 \ ^{+ 6.6} _{- 6.5} )$ pc where the uncertainties are associated with reading the vertices. On the other hand, if the outer vertices of the observed triangles tell us the directions of the outer tangent points, then $p_\odot = ( 61.4 \ ^{+ 1.9} _{- 2.1} )$ pc. We take the central value of $p_{\odot}$ to be $78.5$ pc. The error on $p_\odot$ stated in Eq.~(\ref{values_err}) includes an additional contribution from the possible failure of the assumption of circularity of the ring, as discussed below. The uncertainty on $q_\odot$ is much less than that on $p_\odot$ because it is determined by the inner side of the triangles; it is due only to the errors associated with reading the vertices. For a planar caustic ring, the value of $z_\odot$ is $(0.8 \ ^{+ 0.5} _{-0.4})$ pc, including uncertainties from the vertices.
We consider the case of a non-planar caustic ring where its outer cusp has height $z(\phi) = A \cos \phi$ above a plane, where $\phi$ is the azimuthal angle about the caustic ring center and $\phi = 0$ is the azimuth of the Sun. The error in $z_\odot$ associated with non-planarity is less than $0.3$ pc if $A <$ 200 pc. $A >$ 200 pc seems unlikely since the caustic ring is seen close to the Galactic Plane in both tangent directions. From a practical point of view, the errors in the flow velocities are dominated by the error in $p_\odot$ unless the error in $z_\odot$ is larger than $3$ pc. Such a large error in $z_\odot$ seems unlikely. For a constant ring radius $a$, we find $x_\odot = (52.1 \ ^{+0.7} _{-0.7})$ pc if only errors from reading the vertices are included. Let us consider the possibility that the ring radius $a$ changes between the left and right tangent points as $a(\phi) = a_0 + a_1 \phi + a_2 \phi ^2$ where $\phi$ varies from $-6.3^\circ$ to $+6.3^\circ$ between the two inner tangent directions, and $a_0$ = 8.448 kpc. When $|a_1|$ is increased, one tangent point comes closer to the Sun whereas the other moves away. We require $|a_1|$ to be less than 314.5 pc so that the distance to the nearest tangent point remains larger than half the distance in the constant radius case. Furthermore we assume that the second order coefficient $|a_2| \lesssim |a_1|/2$. Then, $x_\odot$ is found to range between $43.5$ and $51.6$ pc, and the value of $p_\odot$ between $58.2$ and $94.9$ pc, including the uncertainty discussed in the paragraph before last. We find that the Sun is almost certainly within the tricusp volume of the caustic ring. Given $x_\odot < p_\odot$, whether the Sun is inside or outside the tricusp is determined by its vertical distance $z_{\odot}$ from the caustic ring plane. For the central values of $x_\odot , p_\odot$ and $q_\odot$, the Sun is outside the tricusp if $z_\odot \geq 7.0$ pc which is very unlikely according to our estimates. However, the Sun is outside the tricusp for some extreme values of the parameters, e.g. $p_\odot = 58.2$ pc, $x_\odot = 52.8$ pc, $z_\odot = 1.3$ pc and all plausible values of $q_\odot$. We estimate the probability that the Sun is outside the tricusp to be less than 1\%. Assuming the Sun is indeed inside the tricusp, there are four prominent flows on Earth associated with the nearby caustic ring. In Fig. 11, for various values of $z_\odot$, we show the directions of the flows with respect to the LSR in the $\eta_0 < 0$ case and indicate their densities by the sizes of circles. As the Sun moves closer to the boundary of the tricusp, two flows approach each other in velocity space while their densities increase. They disappear the moment the Sun passes outside the tricusp. \subsection{Big Flow velocity vector and density estimates} The flow velocities in the frame of the caustic ring are calculated using Eqs.~(\ref{GAIA_XT}) and (\ref{GAIA_vrhozphi}). The caustic ring parameters near the location of the Sun are derived from the central values of $a$, $p_\odot$ and $q_\odot$ and setting $v = 520$ km/s as the speed of the flow \cite{Duffy}: \begin{eqnarray} u &=& \frac{v^2 - v_{\text{rot}}^2 }{a} = 26.3 \times 10^3 \ {\rm kpc}^{-1} {\rm (km/s)}^2 \nonumber \\ \eta_0 &=& \pm \sqrt{\frac{2p}{u}} = \pm 2.44 \times 10^{-3} \ {\rm kpc} {\rm (km/s)}^{-1} \nonumber \\ \zeta &=& \frac{27}{16} \frac{p^2}{q^2} = 0.807~~\ .
\label{parameters_est} \end{eqnarray} Eqs.~(\ref{GAIA_vrhozphi}) give the components of the flow velocities in cylindrical coordinates attached to the caustic ring. Since the caustic ring center is displaced from the Galactic Center by $\theta = (5.80^\circ \ ^{+ 0.05^\circ} _{- 0.04^\circ} )$ to the right, and the caustic ring plane is tilted relative to the Galactic Plane by $\theta_t = (0.48^\circ \ ^{+ 0.20^\circ} _{-0.20^\circ} )$ also to the right, the velocity components $(v_{j \rho}^{\rm G}, v_{j \phi}^{\rm G}, v_{jz}^{\rm G})$ of the $j$th flow in Galaxy-centered cylindrical coordinates are obtained from the components $(v_{j \rho}, v_{j \phi}, v_{jz})$ in caustic-centered cylindrical coordinates using \begin{eqnarray} v_{j \rho} ^{\rm G} &=& \cos \theta \ v_{j\rho} - \sin \theta \ \cos \theta_{\rm t} \ v_{j\phi} + \sin \theta \ \sin \theta_{\rm t} \ v_{j z} \nonumber \\ v_{j \phi} ^{\rm G} &=& \sin \theta \ v_{j\rho} + \cos \theta \ \cos \theta_{\rm t} \ v_{j\phi} - \cos \theta \ \sin \theta_{\rm t} \ v_{j z} \nonumber \\ v_{j z} ^{\rm G} &=& \sin \theta_{\rm t} \ v_{j\phi} + \cos \theta_{\rm t} \ v_{j z} \label{GAIA_vcomps} \end{eqnarray} where $\hat{\rho} _{\rm G}$ points away from the Galactic Center, $\hat{\phi} _{\rm G}$ points in the direction of Galactic Rotation and $\hat{z} _{\rm G}$ points to the Galactic North Pole. The flow velocity with respect to the LSR is $\vec{v}_{j {\rm LSR}} = \vec{v}_{j} ^{\rm G} - v_{\rm rot} \ \hat{\phi} _{\rm G}$ with $v_{\rm rot} = 220$ km/s. The error associated with the uncertainty on $v/v_{\rm rot}$ enters here. It contributes of order $0.25^\circ$ to the errors on the velocity directions, given in Eqs.~(\ref{veldir}) and (\ref{veldir2}) below. Table \ref{GAIA_table_1} gives the densities $d_j$ and velocities $\vec{v} _j ^{\rm G}$ of the four flows corresponding to the central values of $x_\odot$, $z_\odot$, $p_\odot$ and $q_\odot$ when $\eta_0 < 0$. Table \ref{GAIA_table_2} gives the same information in case $\eta_0 > 0$. The densities are calculated using Eqs.~(\ref{density_formula}). We set $b = v = 520$ km/s. We do not have enough information to determine $b$ precisely but it is expected to be of order $v$ \cite{sing}. The difference between $b$ and $v$ is relatively unimportant in view of the other uncertainties affecting the densities. For the infall rate, in view of the Big Vortex, we multiplied by 4 the estimate given in ref. \cite{Duffy}, i.e. ${dM \over d\Omega d\eta} = 4 \times {7.8~M_\odot \over {\rm sterad}~{\rm yr}}$. Estimates of the densities are very uncertain because the densities vary rapidly with position. Over the range of plausible parameter values, Eqs.~(\ref{values_err}), the densities of the Big and Up flows range from 1/2 their central values to infinity (when the Sun approaches the tricusp boundary), the Down flow ranges from 1/2 to 4 times its central value, and the Little flow changes by 20\%. All entries in Tables I and II are highly correlated since they are functions of a small number of parameters, mainly $x_\odot$, $p_\odot$, $q_\odot$ and $z_\odot$.
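The flow directions listed next follow from the Table I components by subtracting the LSR velocity. As a consistency check, the following sketch recovers them (the identification of $\hat{\rho}_{\rm G}$ with $l = 180^\circ$ and $\hat{\phi}_{\rm G}$ with $l = 90^\circ$ is our reading of the conventions above):
\begin{verbatim}
import math
flows = {'Big':    (-104.4, 509.4,   6.1),
         'Little': (  -0.2, 520.0,   4.5),
         'Up':     (-115.3, 505.1,  44.8),
         'Down':   (-116.4, 505.4, -38.1)}   # Table I, eta_0 < 0
for name, (v_rho, v_phi, v_z) in flows.items():
    v_phi -= 220.0                  # subtract the LSR rotation velocity
    v = math.sqrt(v_rho**2 + v_phi**2 + v_z**2)
    l = math.degrees(math.atan2(v_phi, -v_rho))  # rho_G points to l=180
    b = math.degrees(math.asin(v_z / v))
    print(name, round(l, 2), round(b, 2))
# reproduces Eq. (veldir): Big (70.17, 1.14), Little (89.96, 0.86), ...
\end{verbatim}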
The directions of the flow velocities with respect to the LSR are: \begin{eqnarray} (l,b)|_{\rm Big} &=& (70.17^\circ \ ^{+0.84^\circ} _{-0.19^\circ} , 1.14^\circ \ ^{+2.09^\circ} _{-0.59^\circ}) \nonumber\\ (l,b)|_{\rm Little} &=& (89.96^\circ \ ^{+0.07^\circ}_{-0.87^\circ} , 0.86^\circ \ ^{+0.38^\circ} _{-0.36^\circ}) \nonumber\\ (l,b)|_{\rm Up} &=& (67.97^\circ \ ^{+1.91^\circ} _{-1.81^\circ} , 8.28^\circ \ ^{+2.44^\circ} _{-5.01^\circ}) \nonumber\\ (l,b)|_{\rm Down} &=& (67.81^\circ \ ^{+1.72^\circ} _{-1.76^\circ} , -7.05^\circ \ ^{+3.58^\circ} _{-2.35^\circ})~~\ . \label{veldir} \end{eqnarray} in case $\eta_0 < 0$ and \begin{eqnarray} (l,b)|_{\rm Big} &=& (89.96^\circ \ ^{+0.19^\circ} _{-0.87^\circ} , 0.49^\circ \ ^{+0.61^\circ} _{-2.15^\circ}) \nonumber\\ (l,b)|_{\rm Little} &=& (70.17^\circ \ ^{+0.84^\circ}_{-0.06^\circ} , 0.77^\circ \ ^{+0.36^\circ} _{-0.36^\circ}) \nonumber\\ (l,b)|_{\rm Up} &=& (92.40^\circ \ ^{+1.82^\circ} _{-1.78^\circ} , 8.91^\circ \ ^{+2.43^\circ} _{-3.69^\circ}) \nonumber\\ (l,b)|_{\rm Down} &=& (92.18^\circ \ ^{+1.85^\circ} _{-1.93^\circ} , -6.90^\circ \ ^{+5.20^\circ} _{-2.52^\circ}) . \label{veldir2} \end{eqnarray} in case $\eta_0 > 0$. The directions and errors are displayed in Figs. 12 and 13 for $\eta_0 < 0$ and $\eta_0 > 0$ respectively. To obtain the flow velocities with respect to an observer on Earth, one needs to subtract from the velocities given in Tables I and II the velocity of the Sun with respect to the LSR and the velocity of the observer with respect to the Sun due to the orbital and rotational motions of the Earth. \begin{table*} \caption{\label{GAIA_table_1}Central values of the densities and velocities of the flows through the Sun associated with the nearby caustic in the Galactic Rest Frame in case $\eta_0<0$. The densities are uncertain by a factor of 2 or more, as discussed in the text. The error on the velocity components is dominated by the uncertainty in the rotation speed of the LSR, taken to be 220 km/s but known only within approximately 10\%. The velocity directions and their errors are given explicitly in Eqs.~(\ref{veldir}) and Fig. 12.} \begin{ruledtabular} \begin{tabular}{ccccc} Flow & $d$ [$10^{-24}$ g/cm$^3$] & $v_{\rho} ^{\rm G}$ [km/s] & $v_{\phi} ^{\rm G}$ [km/s] & $v_{z} ^{\rm G}$ [km/s] \\ \hline Big & $20.0$ & $-104.4$ & $509.4$ & $6.1$ \\ Little & $2.0$ & $-0.2$ & $520.0$ & $4.5$ \\ Up & $9.6$ & $-115.3$ & $505.1$ & $44.8$ \\ Down & $8.4$ & $-116.4$ & $505.4$ & $-38.1$ \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{GAIA_table_2} Same as Table \ref{GAIA_table_1} but for $\eta_0 > 0$. The velocity directions and their errors are given explicitly in Eqs.~(\ref{veldir2}) and Fig. 13.} \begin{ruledtabular} \begin{tabular}{ccccc} Flow & $d$ [$10^{-24}$ g/cm$^3$] & $v_{\rho} ^{\rm G}$ [km/s] & $v_{\phi} ^{\rm G}$ [km/s] & $v_{z} ^{\rm G}$ [km/s] \\ \hline Big & $20.0$ & $-0.2$ & $520.0$ & $2.7$ \\ Little & $2.0$ & $-104.4$ & $509.4$ & $4.4$ \\ Up & $8.4$ & $12.7$ & $517.7$ & $47.1$ \\ Down & $9.6$ & $11.5$ & $518.6$ & $-35.8$ \\ \end{tabular} \end{ruledtabular} \end{table*} \section{Summary} In this paper we added to the observational evidence in support of caustic rings and the Caustic Ring Model. The additional evidence is found in the rotation curve of our closest large Galactic Neighbor M31 \cite{Chemin} and in a triangular feature in the Gaia map \cite{Gaia1,Gaia2} of the Milky Way in the direction of Galactic Coordinates $(l,b) = (-91^\circ, 0^\circ)$. 
The M31 rotation curve has bumps whose locations are in rough agreement with the model predictions for the radii of the first three ($n=1,2,3$) caustic rings. The bumps attributed to the $n=1$ ring are particularly striking as they appear at the same location on both the receding and approaching sides of the rotation curve, strongly suggesting the existence of a ring structure in the M31 halo with a diameter of order 60 kpc. The Gaia triangular feature at $(l,b) = (-91^\circ, 0^\circ)$ resolves a question raised by a previous claim \cite{MWcr} that a triangular feature in the IRAS map, in the direction $(l,b) = (80^\circ,0^\circ)$, is the imprint of the 5th caustic ring ($n=5$) on baryonic matter in the Galactic Disk, seen in a tangent direction to the caustic ring from our viewpoint. The 5th caustic ring has two tangent directions from our viewpoint, and there is no triangular feature in the IRAS map on the opposite side, near $(l,b) = ( - 80^\circ, 0^\circ)$. In contrast, the Gaia map has two triangular features at nearly symmetrical locations, one which coincides with the IRAS feature on the left side, and the new feature on the right side. The triangular feature on the right side, like the one on the left, is to a high level of accuracy an isosceles triangle with axis parallel to the Galactic Plane. It has approximately the same aspect ratio as the left triangle but is 40\% smaller. It was emphasized earlier \cite{Natarajan} that the transverse size of a caustic ring cross-section may vary along the ring. The Gaia triangles are features in the distribution of dust. Like the IRAS triangle, they are much sharper than they would be if produced by gas and dust in thermal equilibrium in the gravitational field of the caustic ring \cite{starsgas}. We propose here instead that dust is entrained by the axion flows forming the 5th caustic ring. By following the same trajectories as the axions, the dust particles form the same caustics. This would explain the sharpness of the features. Section III discusses the entrainment of a dust particle by a cold axion flow. The axions in the flow are assumed to be a highly degenerate Bose gas at temperature $T$, with almost all axions therefore in a single state forming a Bose-Einstein condensate. We derived a formula, Eq.~(\ref{dec}), for the frictional deceleration of a dust particle moving with respect to the cold axion flow. The deceleration is proportional to $T^{-1}$. We estimate the temperature cold dark matter axions have today, taking into account the heat in the axions themselves and the heat they absorb by cooling baryons. Our formula for friction indicates that the dust particle is efficiently entrained by the axion flow. However, in the Galactic Disk collisions with gas may slow down the dust particle considerably. On the other hand, collisions with gas do not significantly diffuse the flow of dust particles. We conjecture that the dust particles, although slowed down by collisions with gas in the disk, follow the same trajectories as the axions and form the same caustics. We also proposed an explanation why the features seen in the IRAS and Gaia maps are triangular even though the cross-section of a caustic ring is tricuspy. For accepted values of the dust density and dust grain size, the flow of dust inside the caustic ring is collisionless, but only barely so. The density in the three flows forming the cusps of a tricusp is much higher than elsewhere within the tricusp. Dust-dust collisions within the cusps are likely to fuzz them up.
The observed features are then qualitatively triangles inscribed in the tricusp, as indicated in Fig. 10. Finally, we proposed an explanation why no right triangle is seen in the IRAS map. IRAS observes in the infra-red. Dust particles emit in the infrared when heated by stellar radiation. In the left tangent direction, the 5th caustic ring lies in the midst of a spiral arm with abundant stellar activity whereas in the right tangent direction, the ring lies in a quiet region. The dust on the right side receives less heat and for this reason fails to show up in the IRAS map. The Gaia and IRAS triangular features imply that the caustic ring center is displaced from the Galactic Center to the right by $5.8^\circ$, and that the plane of the ring is tilted by $0.48^\circ$ to the right relative to the Galactic Plane. In all likelihood, we on Earth are inside the tricusp volume of the nearby caustic ring. As a result there are four flows on Earth associated with the nearby caustic, called Big, Little, Up and Down. In contrast, on the basis of pre-Gaia observations, when only the IRAS triangle was known and the caustic ring center was assumed to coincide with the Galactic Center, it was thought that we are outside the tricusp and hence that there are two flows on Earth associated with the nearby caustic, Big and Little. Being inside the tricusp implies two additional flows, Up and Down. The flows that form the 5th caustic ring are prominent on Earth. They produce narrow peaks in the axion energy spectrum which are observable with high resolution in cavity haloscopes \cite{cavity}. These detectors are made more sensitive by searching for such narrow peaks \cite{Hires}. Knowing the velocity vectors of the flows on Earth, one can calculate how the peaks move as a function of time of day and time of year \cite{Ling}, so that observations made at different times can be related to one another. Generally speaking, axion dark matter searches are helped by knowing the velocity distribution of the axions on Earth. In particular, a recently proposed axion dark matter detection scheme \cite{Arza}, called the axion echo method, is largely predicated on the existence of one or more cold flows and on knowledge of their velocity vectors. So there is strong motivation to determine the velocity vectors on Earth of the flows forming the nearby caustic ring. This is our purpose in Section IV. Even accepting that the IRAS and Gaia triangles show the imprints of the 5th caustic ring in the two tangent directions, uncertainties arise because of the need to interpolate between the two tangent points, which are 940 pc away from us on either side. To find the central values of the flow velocity vectors we assume that the ring between the two tangent points is planar and circular. We also assume that the caustic ring transverse sizes at our location, $p$ and $q$, can be obtained from estimates at the tangent points by linear interpolation. We estimate the likely errors associated with these assumptions. The results are given in Eqs.~(\ref{veldir}) and (\ref{veldir2}), Tables I and II, and Figs. 12 and 13. These predictions are testable as soon as axion dark matter is detected in the laboratory, and perhaps by other means. \begin{acknowledgments} We gratefully acknowledge stimulating discussions with Peter Barnes, David Tanner, Shriram Sadashivajois, Ariel Arza and Richard Bradley. This work was supported in part by the U.S. Department of Energy under grant DE-SC0010296 at the University of Florida.
SSC is supported by the grant ``The Milky Way and Dwarf Weights with Space Scales'' funded by the University of Torino and Compagnia di S. Paolo (UniTO-CSP). SSC also acknowledges partial support from the INFN grant InDark and the Italian Ministry of Education, University and Research (MIUR) under the Departments of Excellence grant L.232/2016. \end{acknowledgments}
\section{Introduction} Panoramic segmentation is an image segmentation task proposed in recent years. Its goal is to segment all objects with regular attributes in an image. Because it segments every regular object in the scene, it can be applied in industrial vision, autonomous driving, retail and other fields. Existing panoramic segmentation algorithms evolved from fully convolutional networks\cite{long2015fully}. With the emergence of DeepLab v1, v2 and v3 \cite{chen2017deeplab} and of attention mechanisms\cite{fu2019dual}, more attention has been paid to the context of the image. HRNet\cite{wang2020deep} retains high-resolution features from the upper layers of the network, and OCRNet\cite{yuan2019object} focuses on the relationship between the objects in an image and their pixels. However, these algorithms do not fully capture the relationship between each object and the pixels it owns, and the boundary pixels of objects are not handled well. \\ Most existing methods either build larger receptive fields and richer texture information by increasing the resolution of the feature map, such as HRNet\cite{wang2020deep} and PSPNet\cite{zhao2017pyramid}, or improve the relationships between the pixels of the feature map by applying an attention mechanism to decide the category of each pixel, such as DANet\cite{fu2019dual}. Other methods use the relationship between pixels and objects to determine the type of each pixel, such as OCRNet\cite{yuan2019object}. These methods improve segmentation quality only to a certain extent. None of them solves the problems of boundary pixel segmentation for large objects and of small object segmentation in the panoramic segmentation task.\\ In this work, in order to give the model better segmentation performance on the boundary pixels of large objects and the ability to segment small objects, we modify the basic network so that it pays more attention to the classification of the boundary pixels of large objects. At the same time, we use several useful tricks to improve the segmentation of small objects. Our main contributions can be summarized as four folds. 1) We change the basic model so that it pays more attention to the boundary pixels of large objects. 2) Data augmentation methods are used to improve small object segmentation. 3) Semi-supervised methods are applied to create coarse-grained labels, which we feed back into training to continuously improve the model performance. 4) We use a multi-scale training and inference strategy to obtain state-of-the-art performance on the well-known dataset ADE20K. \section{Related Work} \subsection{Traditional Method} Threshold segmentation\cite{otsu1979threshold} is the simplest image segmentation method and one of the most common parallel segmentation methods. It directly partitions the gray-scale values of the image according to the gray levels of the different targets. Edge detection segmentation\cite{yuheng2017image} uses the Sobel and Laplacian operators, and the clustering K-means algorithm\cite{kanungo2002efficient} is also applied. The advantages of these traditional algorithms are their simplicity and speed. The disadvantage is that they cannot achieve good segmentation performance.
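As an illustration of the simplest of these methods, the following sketch implements Otsu's global threshold \cite{otsu1979threshold} in plain NumPy (a minimal sketch for illustration, not the implementation used in any of the cited works):
\begin{verbatim}
import numpy as np

def otsu_threshold(gray):
    """Threshold maximizing the between-class variance
    for a uint8 grayscale image (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# usage: mask = gray >= otsu_threshold(gray)
\end{verbatim}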
\subsection{Deep Learning Method} \textbf{The Method Of Increasing The Receptive Field:} Starting from the early FCN\cite{long2015fully}, upsampling was used to increase the receptive field of the feature map; later, PFPN\cite{kirillov2019panoptic} increased the receptive field in the form of a feature pyramid. Dense-ASPP\cite{yang2018denseaspp} uses atrous convolutions to build Atrous Spatial Pyramid Pooling, which enlarges the receptive field of the image. SegNet\cite{badrinarayanan2017segnet} uses an encoder-decoder model and retains high-quality feature maps. The recently proposed HRNet\cite{wang2020deep} keeps the specific features of the low-level image by connecting all levels of the network, while preserving high-level semantic features and expanding the receptive field of the segmentation model. Although these methods alleviate some of the problems of segmentation accuracy, they cannot resolve the relationship between boundary pixels and the objects in the image, and they do not focus on the segmentation of small objects. \textbf{Pixel Association Method:} A major difficulty of panoramic segmentation is that all regular objects in the image must be classified, and there are many categories of objects with a wide variety of shapes and complex poses. To address this, some algorithms use an attention mechanism: by associating each pixel of the feature map with the global pixel values, the classification of each pixel is improved. The typical algorithm is DANet\cite{fu2019dual}. Conditional random fields\cite{sutton2006introduction} relate each pixel to the pixels at its boundary through a voting mechanism over the surrounding pixels. ACNet\cite{ding2019acnet} and OCRNet\cite{yuan2019object} use the internal pixel values of each object in the image to determine the classification of boundary pixels. These methods help to obtain better segmentation performance on boundary pixels, but they are still not effective enough when the boundary of an object is unclear or the object is occluded. \section{Method} Our improvements are as follows: we use HRNet\cite{wang2020deep} as the backbone and add the Non-local\cite{wang2018non} and DANet\cite{fu2019dual} attention mechanisms so that the model pays more attention to the relationship between boundary pixels and their objects. We then use data augmentation methods such as Mosaic\cite{bochkovskiy2020yolov4} to make the model pay more attention to the segmentation of small objects. In addition, we use a semi-supervised\cite{chen2020leveraging} scheme to create coarse-grained data and labels, and incorporate them into the model for auxiliary training. Finally, multi-scale training and inference are applied to achieve state-of-the-art performance on the well-known panoramic segmentation dataset ADE20K\cite{zhou2017scene}. \subsection{Network} HRNet\cite{wang2020deep} is a well-known recently proposed network structure. In order to maintain high-resolution image features, it uses the connection structure shown in Figure \ref{Hrnet}. The advantage of this structure is that it can segment whole large objects while retaining the low-level features and details of the image. Following the network structure of DANet in Figure \ref{Danet}, we attach the dual attention module after the fourth stage of HRNet\cite{sun2019deep}.
Finally, the OCRNet\cite{yuan2019object} head shown in Figure \ref{OCRnet} enables the model to rematch the pixels of each object with every pixel in the image. \begin{figure}[t] \includegraphics[width=1\linewidth]{hrnet_arc.pdf} \caption{{\small{HRNet image segmentation network architecture.}}} \label{Hrnet} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[width=1\linewidth]{danet.pdf} \par\end{centering} \caption{{\small{An overview of the Dual Attention Network.}}} \vspace{-15pt} \label{Danet} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[width=1\linewidth]{ocrnet.jpg} \par\end{centering} \caption{{\small{OCRNet. We use the OCRNet structure to determine the relationship between each object and the pixels on this object, and assign the remaining pixels based on the overall pixel attributes of the object.}}} \vspace{-15pt} \label{OCRnet} \end{figure} \subsection{Auxiliary loss} We use two auxiliary losses to help the final model produce better segmentation results. The first auxiliary loss, named $L_{da}$, takes the form of the dual attention loss shown in Figure \ref{Danet}. We use this auxiliary loss to ensure that the output of the dual attention module pays attention both to the overall result and to the boundary pixel values of large objects. The second auxiliary loss, named $L_{ocr}$, comes from OCRNet; it makes the object regions estimated by OCRNet focus on the objects themselves. Finally, we use three loss functions for training. The total loss is a weighted sum of three parts, defined as follows: \begin{equation} L_{final}=\alpha{L_{da}}+\beta{L_{ocr}}+\gamma{L_{re}} \end{equation} Here $\alpha$, $\beta$ and $\gamma$ are loss weights; their values are given in detail in Section \ref{experiment detail}. \subsection{Data Augmentation} \label{data augmentation section} There are 150 categories in the ADE20K dataset, and the shapes, sizes and poses of objects differ from scene to scene, so it is important to use data augmentation to enrich the data at different scales. The main method is to use data of different scales during training: four images are superimposed into one image, and at the same time the corresponding labels are superimposed in the same way. This method makes the model pay more attention to the segmentation of small objects. The specific procedure is shown in Figure \ref{augmentation}. \begin{figure}[t] \begin{centering} \includegraphics[width=1\linewidth]{augmetation.pdf} \par\end{centering} \caption{{\small{First choose four images and resize them to a proper size, then stitch them together according to their positions on the map to generate a training image and the corresponding label.}}} \vspace{-15pt} \label{augmentation} \end{figure} The purpose of this augmentation is to increase the diversity of the images and to reduce the relative size of the objects they contain, so that the segmentation model becomes more robust at segmenting small objects. \subsection{Semi-supervised training} \label{semi-supervised training section} We use a semi-supervised training method\cite{chen2020leveraging} to train the model. It consists of the following steps. First, train a basic model as a teacher according to the methods above. Second, run the teacher on the test dataset, thus forming a test dataset composed of image and coarse-grained label pairs.
Then, use the image and coarse-grained label pairs of this test dataset as a training set to train the student model. Finally, use all the datasets, together with the test dataset pairs, to finetune the student model. The student model trained in this way can be used as a teacher model to carry out a second round of semi-supervised learning. We used two rounds of semi-supervised learning here to achieve state-of-the-art performance on the test dataset. The whole process is shown in Figure \ref{semi_traing}. \begin{figure}[t] \begin{centering} \includegraphics[width=1\linewidth]{semi_training.pdf} \par\end{centering} \caption{{\small{First, train a teacher model on the training dataset, then use the teacher model to run inference on the test dataset and generate coarse-grained labels. The coarse-grained labels paired with the test images form the training set used to train the student model. Then the training dataset is used to finetune the student model, which becomes the new teacher model; the new teacher runs inference on the test dataset and regenerates the coarse-grained labels. Finally, the new test dataset pairs replace those of the first stage and the loop continues.}}} \vspace{-15pt} \label{semi_traing} \end{figure} \subsection{Multi-scale training and inference} The model is sensitive to the scale of the data. In order to train a robust model, the training data needs to be transformed to multiple scales. Here we take (520, 640) as the basic scale, and the largest scale is (1024, 1280), so data at multiple scales can be used to train the model. The random crop method is also applied to increase the multi-scale information of the data. Similarly, when predicting, we run inference at multiples of the base scale; the multiples are chosen as parameters such as 0.5, 1.0, 1.5, 2.0, etc. Finally all inferences are fused to obtain the final result. \section{Experiments} The most well-known image segmentation datasets are Pascal VOC\cite{everingham2010pascal}, MSCOCO\cite{lin2014microsoft}, Cityscapes\cite{cordts2016cityscapes}, Kitti\cite{geiger2013vision}, ADE20K\cite{zhou2017scene}, etc. We choose the ADE20K dataset to demonstrate the strength of our algorithm in the field of panoramic segmentation. The ADE20K\cite{zhou2017scene} dataset, created by MIT, is a very well-known panoramic segmentation dataset; it contains 20,000 images for training, 2,000 images for validation, and 3,000 images for test, with 150 categories in total. The sizes of different objects vary a lot, and the occlusion between different objects is severe. At the same time, all the regular objects in the image must be segmented, so the segmentation is very difficult, as shown in Figure \ref{ADE20K}. The final score is composed of two parts: one is the pixel accuracy score and the other is the mIoU (Mean IoU) score. The formulas are defined as follows: \begin{equation} S_{s}=(S_{acc}+S_{Miou})/2.0 \end{equation} \begin{equation} Miou=\frac{Intersection(B_{p},B_{gt})}{Union(B_{p},B_{gt})} \end{equation} Here $B_{p}$ represents the pixel region predicted by the algorithm, and $B_{gt}$ represents the ground truth. Intersection() and Union() indicate the intersection and union areas of $B_{p}$ and $B_{gt}$. According to this formula, pixel-level accuracy and the pixel classification of small objects are particularly important to the overall result.
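For concreteness, the score can be computed as in the following sketch (our own implementation of the standard definitions; per-image accumulation is shown for brevity, whereas the benchmark accumulates intersections and unions over the whole evaluation set):
\begin{verbatim}
import numpy as np

def score(pred, gt, num_classes=150):
    """Mean of pixel accuracy and mean IoU over all classes.
    pred, gt: integer arrays of class indices in [0, num_classes)."""
    acc = (pred == gt).mean()
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:              # skip classes absent from both maps
            ious.append(inter / union)
    miou = float(np.mean(ious))
    return 0.5 * (acc + miou)
\end{verbatim}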
\begin{figure}[t] \begin{centering} \includegraphics[width=1\linewidth]{adek.pdf} \par\end{centering} \caption{{\small{Images in the ADE20K dataset are densely annotated in detail with objects and parts. The first row shows the sample images, the second row shows the annotation of objects and stuff, and the third row shows the annotation of object parts.}}} \vspace{-15pt} \label{ADE20K} \end{figure} \subsection{Experiment detail} \label{experiment detail} First, we created a new dataset based on the data augmentation method described in Section \ref{data augmentation section}, with augmented data amounting to 0.3 of the original data volume: about 6000 stitched images, constructed from the training plus validation datasets, are added to the training data. Second, we assemble the model in the way described above. The loss weights selected here are 0.1, 0.3, and 1: the weight $\alpha = 0.1$ applies to the result after dual attention, $\beta = 0.3$ is the auxiliary loss weight of OCRNet, and $\gamma = 1$ is the weight of the loss on the final output. Third, we train the model in the semi-supervised way described in Section \ref{semi-supervised training section}. First, we train the teacher model on the ADE20K training set. Second, we run the teacher model on the test dataset and obtain its rough labels. Then, we use the test dataset and the rough labels as a training set and retrain a new model, called the student model. Finally we use the training dataset to finetune the student model. This forms a training loop. After multiple training loops, we obtained a model which achieves state-of-the-art performance. Finally, we use multi-scale training and inference. Specifically, we use three base scales, 520, 640 and 800, and then use 7 multiples of the base scale at inference: 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2.0. All the results are fused together to get the final output. Our results compared with other algorithms can be seen in Figure \ref{results}. \begin{figure}[t] \begin{centering} \includegraphics[width=1\linewidth]{results.pdf} \par\end{centering} \caption{{\small{The above figure illustrates that neither HRNet nor DANet can effectively segment the boundary pixels of objects and small objects, but our algorithm deals with these well.}}} \vspace{-15pt} \label{results} \end{figure} \section{Conclusion} In this paper, we use several recent useful tricks to build a high-performance algorithm for the well-known panoramic segmentation dataset ADE20K. These tricks include: (A) changing the basic network structure to use a multi-stage attention mechanism, which enables the algorithm to take the pixels of large objects into account and, together with multiple auxiliary loss functions, to classify the boundary pixels of objects well; (B) adopting data augmentation, so that the algorithm segments small objects better; (C) proposing a semi-supervised training strategy to make the network more effective; (D) using multi-scale training and inference to obtain state-of-the-art performance on the dataset ADE20K. \bibliographystyle{abbrv}
\section{Introduction} Databases or tables of all elliptic curves subject to various constraints have been published since the 1970s, including in the well-known Antwerp IV conference proceedings~\cite{BirchKuyk}. Such tables are useful both for identifying a given curve appearing in nature and for proving that a curve with certain properties does not exist. Tables can also be used to answer distributional questions about properties of elliptic curves when ordered in different ways. The most well-known such tables are those of elliptic curves over \(\QQ\) with bounded conductor due to Cremona~\cite{Cremona97book,CremonaData}, which now form part of the LMFDB~\cite{lmfdb}. One may, however, instead construct tables of elliptic curves with bad reduction only at primes in a specified set of rational primes $S$. These are exactly the primes dividing the conductor. Organising curves by their primes of bad reduction can be quite useful in practice; it is often possible to prove that a particular curve has good reduction outside certain places, and then conclude that the curve is contained in such a table for some $S$. In particular many classical diophantine equations can be phrased in terms of the existence of elliptic curves with specified places of bad reduction, see Sections~\ref{secSUE_connection} and~\ref{secReductionsOfDiphantineEquations}. \begin{samepage} In this paper we compute and study what is conjecturally the complete set of isomorphism classes of elliptic curves over \(\QQ\) with good reduction away from the first six primes \(\{2,3,5,7,11,13\}\). This set and the code and auxiliary data used to compute it (including Mordell--Weil bases for almost 100,000 Mordell curves) are available at\begin{center} \href{https://github.com/elliptic-curve-data/ec-data-S6}{\textit{https://github.com/elliptic-curve-data/ec-data-S6}}. \end{center} \end{samepage} Many of the curves in this set have quite large conductor, but nevertheless, by virtue of having bad reduction at only a few small primes, can be arithmetically simpler than other curves with smaller conductor. \paragraph{History.} We now give a non-exhaustive overview of previous work computing databases of elliptic curves over~$\QQ$. In the late 1980s Brumer and McGuinness~\cite{brumermcguinness} computed rational elliptic curves of prime discriminant bounded by $|\Delta|\leq 10^8$. Stein and Watkins~\cite{SteinWatkins} then extended this database to include almost all curves up to $|\Delta|\leq 10^{12}$ with either conductor $N\leq 10^8$ or prime conductor less than~$10^{10}$. To compute the set of elliptic curves with bounded conductor, Tingley~\cite{Tingley75thesis} used modular symbols to find all elliptic curves with $N\leq 200$. This was greatly extended and improved by Cremona~\cite{Cremona97book,CremonaData}, who has currently computed all of these curves up to $N\leq 500000$. Initially this approach was only known to compute modular elliptic curves, and it was only when modularity was proved that it was confirmed~\cite{BCDT01modularityOverQ} that over~$\QQ$ being modular is not a restriction. A third natural basis on which to construct a database of elliptic curves is by restricting the set of places of bad reduction, i.e.\ the primes that divide~$N$ (or equivalently, primes that divide the minimal discriminant). For any finite set of rational primes~$S$, let $M(S)$ denote the finite set of elliptic curves over $\QQ$ with good reduction outside of~$S$, up to $\QQ$-isomorphism, and let \[ N_S:=\prod_{p\in S} p\text.
\] We may then hope to compute the set $M(S)$ for various sets $S$. The set $M(\{2,3\})$ was computed by Coghlan~\cite{Coghlan67ellipticCurves23} and Stephens~\cite{Stephens65thesis}, and Coghlan's data was republished as Table 4 in~\cite{BirchKuyk}. Agrawal, Coates, Hunt and van der Poorten~\cite{AgrawalCoatsHuntVDPoorten80} computed $M(\{11\})$ via a reduction to Thue--Mahler equations. Cremona and Lingham~\cite{CremonaLingham07ellipticCurves} computed $M(\{2,p\})$ for $p\leq 23$ via a reduction to the computation of $S$-integral points on Mordell curves. Koutsianas~\cite{Koutsianas19ellipticCurvesOverNFs} used a reduction to $S$-unit equations over number fields to compute $M(\{2,3,23\})$, as well as curves $E\in M(S)$ for various other $S$ satisfying certain restrictions on the $2$-division field of~$E$. Von K\"anel and the second author~\cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama} computed $M(\{2,3,5,7,11\})$ as well as all $M(S)$ with $N_S \leq 1000$ using an elliptic logarithm sieve to compute $S$-integral points on elliptic curves. Bennett and Rechnitzer~\cite{BennettRechnitzer} and Bennett, Gherga and Rechnitzer~\cite{BennettGhergaRechnitzer19ellipticCurvesOverQ} computed $M(\{p\})$ for all $p\leq 50000$ using a refinement of the reduction to Thue--Mahler equations and Thue equations. The latter paper also recomputes $M(\{2,3,5,7,11\})$ using this approach. Moreover, using a heuristic they computed all curves in $M(\{p\})$ for $p\leq 10^{10}$, without guaranteeing completeness. Finally we mention that there are various extensions of the above methods to compute elliptic curves over number fields with good reduction outside of a given set of places. In particular the aforementioned approaches of Cremona and Lingham~\cite{CremonaLingham07ellipticCurves} and of Koutsianas~\cite{Koutsianas19ellipticCurvesOverNFs} generalise to the number field setting. \medskip \paragraph{Outline.} The aim of this paper is to compute the set $M(\{2,3,5,7,11,13\})$. We have computed a subset of this set which is heuristically the full set, but which is not proved to be complete by our method at present.\footnote{However work in progress by the second author gives the same set of curves using a different method.} In Sections~\ref{secSummary} and~\ref{secDistribution} we give a summary of our data and discuss some statistics of the data. We compare our data to Cremona's database in Section~\ref{secComparisionToCremonaDB}. Our computation relies on a reduction to solving Mordell equations in $S$-integers; this is discussed in Section~\ref{secComputationMethod}. The main computational bottleneck is to compute the Mordell--Weil bases of a large set of Mordell curves; this is elaborated upon in Section~\ref{secMW}. In Sections~\ref{secCompletenessHeuristic} and~\ref{secSHallandABC} we discuss a heuristic that our database should be complete, and the possibility of proving completeness via additional computation. In Section~\ref{secMaximalConductor} we show some results suggested by the data regarding the question of for which sets $S$ there are elliptic curves with good reduction outside $S$ of maximal possible conductor. In Section~\ref{secApplications} we discuss connections and applications to solving other classical diophantine equations including $S$-unit, Thue--Mahler and Ramanujan--Nagell equations.
\paragraph{Acknowledgement.} It is our pleasure to thank Edgar Costa for various useful comments and for computing the analytic ranks of all curves in our database, as well as the leading coefficients and root numbers of the associated $L$-series. These are available from the same GitHub repository. \subsection{Summary of the database} \label{secSummary} Let $S(n)$ denote the set of the first $n$ rational primes. According to our computation, the set $M(S(6))$ contains $4576128$ curves in total; see Table~\ref{table:counts}. Here, $j(M(S(n)))$ is the set of distinct $j$-invariants of curves in $M(S(n))$; the cardinality of this set is therefore the number of $\overline\QQ$-isomorphism classes of curves in $M(S(n))$. \begin{table}[!ht] \centering \begin{tabular}{lllr} \toprule $n$ & $\# M(S(n))$ & $\#j(M(S(n)))$ & reference \tabularnewline \midrule $0$ & $0$ & $0$ & Tate (cf.\ Ogg~\cite{Ogg66_2powerConductor}) \tabularnewline $1$ & $24$ & $5$ & \cite{Coghlan67ellipticCurves23,Stephens65thesis,Ogg66_2powerConductor} \tabularnewline $2$ & $752$ & $83$ & \cite{Coghlan67ellipticCurves23,Stephens65thesis} \tabularnewline $3$ & $7600$ & $442$ & \cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama} \tabularnewline $4$ & $71520$ & $2140$ & \cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama} \tabularnewline $5$ & $592192$ & $8980$ & \cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama, BennettGhergaRechnitzer19ellipticCurvesOverQ} \tabularnewline $6$ & $4576128^*$ & $34960^*$ & this paper \tabularnewline \bottomrule \end{tabular} \caption{Numbers of elliptic curves with good reduction outside $S(n)$, up to $\QQ$-isomorphism and up to $\overline\QQ$-isomorphism. The asterisk refers to the possible incompleteness of this paper's table. The case $n=0$ is the classical result that there is no elliptic curve over $\QQ$ with everywhere good reduction. } \label{table:counts} \end{table}% When $n \ge 2$ we can obtain all of $M(S(n))$ by taking a representative of each $\overline\QQ$-isomorphism class of curves in $M(S(n))$ and twisting this representative by all integers divisible only by primes in $S(n)$. For $j \ne 0,1728$ we only have quadratic twists, when $j = 1728$ we have quartic twists, and for $j= 0$ sextic twists (our assumption that $n\geq 2$ implies that $0,1728 \in j(M(S(n)))$), giving the equation \[ \#M(S(n)) = 2^{n+1}(\#j(M(S(n))) - 2) + 2\cdot 4^{n} + 2\cdot 6^n\text. \] This holds in all cases above, and provides a quick check that nothing that obviously should have been in the database has been missed. Each curve in $M(S(6))$ has conductor $N\,|\, 2^8 3^5 5^2 7^2 11^2 13^2$, which gives, a priori, $4374$ possibilities for~$N$. It turns out that exactly $4344$ of them are indeed attained by curves in our set. The $30$ exceptions for which there is no curve with that conductor are \[ \begin{split} \{& 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 16, 18, 22, 25, 28, 60, 81, 165, \\ & 169, 351, 945, 1280, 1820, 2673, 2816, 9984, 13365, 362880\}. \end{split} \] These exceptions factor as follows: \[ \begin{split} \{& 1, 2, 3, 2^{2}, 5, 2 \cdot 3, 7, 2^{3}, 3^{2}, 2 \cdot 5, 2^{2} \cdot 3, 13, 2^{4}, 2 \cdot 3^{2}, 2 \cdot 11, 5^{2}, 2^{2} \cdot 7, 2^{2} \cdot 3 \cdot 5, 3^{4}, 3 \cdot 5 \cdot 11, \\ & 13^{2}, 3^{3} \cdot 13, 3^{3} \cdot 5 \cdot 7, 2^{8} \cdot 5, 2^{2} \cdot 5 \cdot 7 \cdot 13, 3^{5} \cdot 11, 2^{8} \cdot 11, 2^{8} \cdot 3 \cdot 13, 3^{5} \cdot 5 \cdot 11, 2^{7} \cdot 3^{4} \cdot 5 \cdot 7 \}.
\end{split} \] These (non-)conductors are all within the range of Cremona's database, and we can therefore check that there are indeed no elliptic curves with any of these numbers as their conductor. We note that the largest conductor for which no elliptic curve of that conductor exists is less than the square root of the largest possible conductor of a curve in~$M(S(6))$. \medskip Next we consider isogeny classes in~$M(S(6))$. This is also a natural partition of curves in the database, as $M(S(n))$ is closed under taking isogenies (any two isogenous curves have the same conductor). Our data contains $3688192$ disjoint isogeny classes in total: $2966912$ classes of cardinality~$1$, $646784$ of cardinality~$2$, $4608$ of cardinality~$3$, $60928$ of cardinality~$4$, $6784$ of cardinality~$6$, $2176$ of cardinality~$8$, and no others. An example of a curve in $M(S(6))$ with isogeny class of cardinality~$8$ is \[ y^2 = x^3 + 827614112325\,x + 276113445805174250. \] Edgar Costa has computed the analytic ranks of all curves in our table, as well as the leading coefficients and root numbers of the associated $L$-series. His computations use interval arithmetic and hence the leading coefficients are given with exact error bounds. The standard problem that remains is that it is impossible to verify numerically that the lower derivatives vanish exactly, and thus the computed analytic rank is actually only an upper bound once the rank is large enough. According to his computations, there are $1884428$ curves of analytic rank~$0$ in our data, $2267261$ of analytic rank~$1$, $406309$ of analytic rank~$2$, $18003$ curves of analytic rank~$3$, and the remaining $127$ curves are of analytic rank~$4$. We can compare this to the number of rational elliptic curves with conductor bound $N\leq 500000$ with each rank using Cremona's database: for these curves, Cremona computed analytic and algebraic ranks (and checked that they coincide), and found that there are $1632686$ curves of rank~$0$, $2124004$ of rank~$1$, $461670$ of rank~$2$, $11243$ of rank~$3$, and $1$ of rank~$4$. In both tables, we observe a similarly larger number of rank~$1$ curves than rank~$0$ curves. An intriguing difference is the larger number of rank~$4$ curves in our data, compared to a similar total number of curves when ordered by conductor. \subsection{Distribution of quantities} \label{secDistribution} In this section we study the distribution of various arithmetical quantities associated to curves in our dataset. As these curves have bad reduction at only the first six primes, they are quite structured, and it is interesting to compare answers to distributional questions to the case when curves are ordered with respect to conductor or discriminant. \begin{figure}[thb] \centering \begin{minipage}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{cond.pdf} \subcaption{Elliptic curves in our data set.} \end{minipage} \hfill \begin{minipage}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{cond_cremona.pdf} \subcaption{Elliptic curves with $N \leq 500000$.} \end{minipage} \caption{Histograms of logarithms of conductors: (a) shows the curves we computed within~$M(S(6))$. For a comparison, (b) shows all rational elliptic curves with $N\leq 500000$ according to Cremona's database. The bar at $\log(500000)\approx 13.1$ signifies the end of the overlap of both tables.} \label{figLogConductors} \end{figure} One fundamental quantity is the conductor.
We plot the distribution of the logarithm of the conductor for the curves in our database as a histogram in Figure~\ref{figLogConductors}(a). We take the logarithm of $N$ due to the multiplicative nature of the conductor. Indeed, if the conductor exponents $f_p$ in $N=\prod_{p\in S}p^{f_p}$ were uniformly and independently distributed (which they are not), then in Figure~\ref{figLogConductors}(a) we would see an approximately normal distribution with mean~$14.037$ and standard deviation $4.382$. The observed distribution of $\log(N)$ is comparatively lopsided: it appears denser in the larger conductor range. This could be explained by the fact that one can turn good reduction into additive reduction at $p\geq 3$ via twisting by~$p$ (as the reduction of $E$ at $p$ will have Kodaira symbol~$\mathrm{I}_0^*$ by Tate's algorithm), without leaving $M(S(6))$. \begin{figure}[thb] \centering \begin{minipage}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{szpiro_ratio.pdf} \subcaption{Elliptic curves in $M(S(6))$.} \end{minipage} \hfill \begin{minipage}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{szpiro_ratio_cremona.pdf} \subcaption{Elliptic curves with $N \leq 500000$.} \end{minipage} \caption{Histograms of Szpiro ratios $\sigma=\log|\Delta_E|/\log(N)$. (a) shows the curves we computed within~$M(S(6))$. For a comparison, (b) shows all rational elliptic curves with $N\leq 500000$ according to Cremona's database. We observe three differences: the larger maximal value for~$\sigma$ in~(b) (namely $8.903700$), the larger mean for~$\sigma$ in~(a), and that (b) contains a significant number of curves with~$\sigma=1$ (namely $602$). } \label{figSzpiroRatios} \end{figure} The \emph{Szpiro ratio} of an elliptic curve over $\QQ$ is defined to be the ratio \[\sigma= \frac{\log|\Delta_{E}|}{\log N}\] of the logarithms of the minimal discriminant of the curve and its conductor. Figure~\ref{figSzpiroRatios}(a) sketches the distribution of Szpiro ratios for the curves in our database. Szpiro's conjecture states that $| \Delta_E| = O_\varepsilon (N^{6 + \varepsilon})$, or equivalently that for any $\delta > 0 $ there are only finitely many elliptic curves over $\QQ$ with $ \sigma > 6 + \delta$. Indeed the largest Szpiro ratios occurring among all curves in our dataset are approximately \[8.757316,\, 8.371586,\, 8.11481\text{ and } 8.034917\text{;}\] the curves for which these ratios occur are all in the LMFDB and have labels \href{https://lmfdb.org/EllipticCurve/Q/858.k2}{\textit{858.k2}}, \href{https://lmfdb.org/EllipticCurve/Q/2574.j2}{\textit{2574.j2}}, \href{https://lmfdb.org/EllipticCurve/Q/910.e1}{\textit{910.e1}}, and \href{https://lmfdb.org/EllipticCurve/Q/9438.m2}{\textit{9438.m2}} respectively. The second and fourth of these are both quadratic twists of the first. It seems that these three have a large Szpiro ratio due to a factor of $3^{21}$ in each of their discriminants. The third has a factor of $2^{63}$ in its discriminant. These are the only four curves in our set with $ \sigma \ge 8$. There are $123$ curves in the database with $\sigma \ge 7$, only $15$ of which have conductor larger than~$500000$. The largest $\sigma$ from our data with $N > 500000$ has $N = 532350$ and $\sigma \approx 7.161459$. \subsection{Comparison with Cremona's database} \label{secComparisionToCremonaDB} Cremona~\cite{Cremona97book,CremonaData} has computed the set of all rational elliptic curves with conductor less than various bounds, currently up to $N\leq 500000$.
If $S$ is the set of primes of bad reduction of an elliptic curve $E$ of conductor $N$, then \[ N_S\leq N \leq 1728 N_S^2. \] Thus in principle, the problems of computing $M(S)$ and all curves of bounded conductor are equivalent. Both parameters~$S$ and~$N$ stratify the infinite set of rational elliptic curves. In practice, however, these stratifications differ considerably: for example, $M(S(6))$ contains $14216$ curves of conductor $2^8 3^5 5^2 7^2 11^2 13^2 \approx 10^{12}$, which is considerably larger than $500000$; and on the other hand, $M(S(6))$ does not contain the four curves with conductor~$17$. Cremona's database contains at present $1238682$ distinct $j$-invariants, whereas the computation we performed resulted in $34960$, because for each $j$-invariant, our set contains at least $128$ distinct twists. On the other hand, Cremona's database contains $3064705$ $\QQ$-isomorphism classes of curves, whereas ours contains $4576128$. Despite the fact that the two databases contain more or less the same number of curves, there are $4376070$ curves in our set not contained in the Cremona database, that is, less than 5\% of our set overlaps with his. We observe significant differences in the distributions of $\log(N)$ and of $\sigma$ for both data sets, see Figures~\ref{figLogConductors} and~\ref{figSzpiroRatios}. Cremona's tables contain a lot more information about each curve present there than our tables currently do, including Manin constants, generators for the Mordell--Weil group, BSD invariants, modular degrees, optimality data, sets of integral points and image types of Galois representations. Much of this data would be prohibitively difficult to compute for every curve in our set, due in part to the size of the conductors of some of the curves in our table. \section{Computation} In this section we discuss the reduction of computing $M(S)$ to the problem of solving Mordell equations, the computation of the requisite Mordell--Weil bases, which is then the dominant computational task to be undertaken, and the heuristic completeness of the obtained data. The code implementing the methods described here and the computed data are available online. The repository \href{https://github.com/elliptic-curve-data/ec-data-S6}{\textit{https://github.com/elliptic-curve-data/ec-data-S6}} contains the majority of the code, and the file \texttt{mordell.sage} of \href{https://github.com/bmatschke/solving-classical-diophantine-equations/blob/master/mordell.sage}{\textit{https://github.com/bmatschke/solving-classical-diophantine-equations/}} contains an implementation of the algorithm of von K\"anel and the second author~\cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama}. \subsection{Computation method} \label{secComputationMethod} Let $S$ denote a finite set of rational primes, let $M(S)$ denote the set of elliptic curves over $\QQ$ with good reduction outside of~$S$, up to $\QQ$-isomorphism, and let \[ N_S:=\prod_{p\in S} p\text. \] For this section we assume that $2,3\in S$, which can be achieved by enlarging~$S$ if necessary. Let $\mathcal{O}_S=\ZZ[1/N_S]$ denote the ring of $S$-integers and $\mathcal{O}_S^*$ the group of $S$-units. A theorem of Shafarevich~\cite{shafarevich1962algebraic} states that for any~$S$ the set of curves $M(S)$ is finite. This can be seen as follows: For any $E\in M(S)$ choose a minimal Weierstrass model for $E$ and consider the $c_4$ and $c_6$ invariants and discriminant~$\Delta_E$ of this model.
These invariants satisfy the equation $c_6^2 = c_4^3 - 1728\Delta_E$ and $\Delta_E\in \ZZ\cap \mathcal{O}_S^*$. If necessary we may divide this equation by a power of $p^6$ for each~$p\in S$ to obtain an equality of the form $Y^2 = X^3 + a$, where $X,Y\in \mathcal{O}_S$ and $a=\pm \prod_{p\in S} p^{e_p}$ with $0\leq e_p \leq 5$ ($p\in S$). The pair $(X,Y)$ can then be regarded as an $S$-integral point on the Mordell curve $E_a\colon y^2 = x^3 + a$. By a theorem of Siegel~\cite{siegel29anwendungenDiophantApprox,Silverman86arithmeticBook}, $E_a(\mathcal{O}_S)$ is finite. From any point in $E_a(\mathcal{O}_S)$ we can recover potential invariants $c_4$ and $c_6$ that produce the point, up to any factors of $p^6$ in $c_4^3$ and $c_6^2$ for $p\in S$. This recovers $E$ up to a quadratic twist by a positive $S$-unit. Moreover there are exactly $2^{|S|}$ such twists. We deduce that $M(S)$ is finite and its computation reduces to the computation of~$E_a(\mathcal{O}_S)$ for finitely many values of~$a$. To determine $E_a(\mathcal{O}_S)$ we use the algorithm of von K\"anel and the second author~\cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama}, who gave a method to compute $S$-integral points on rational elliptic curves $E$ provided that generators of the free part of~$E(\QQ)$ are known. Their implementation uses an elliptic logarithm sieve, which can compute~$E_a(\mathcal{O}_S)$ in quite an efficient manner. Thus to compute $M(S)$ it turns out that computing the necessary Mordell--Weil bases of $2\cdot 6^{|S|}$ Mordell curves is the computational bottleneck. In Section~\ref{secMW} we discuss this in detail. \subsection{Computing Mordell--Weil bases} \label{secMW} We have carried out the approach outlined above for $S = S(6)$. We now discuss the most computationally intensive part of the process, which is finding the generators of the free part of the Mordell--Weil group for a number of Mordell curves, many of which have large discriminant. We will use the term Mordell--Weil basis to refer to these generators. Note that finding the generators of the torsion subgroup is computationally easier, and the torsion is completely classified for Mordell curves~\cite{fueter}, so we assume it is known from now on. The curves we consider are those with \begin{equation} \label{eqAs_for_S6} a \in \{\pm 2^{e_2}3^{e_3}5^{e_5}7^{e_7}11^{e_{11}}13^{e_{13}} \colon 0\le e_p \le 5\}, \end{equation} giving us $93312$ curves to find the Mordell--Weil bases of. We can reduce the number of curves that we need to consider using the following fact. \begin{lemma}\label{lem:threeisog} All Mordell curves have a 3-isogeny given by \begin{align} y^2 = x^3 +a &\to y^2 = x^3 - 27a\\ (x,y) &\mapsto \left(\frac{y^2 + 3a}{x^2}, y\frac{y^2 - 9 a}{x^3}\right) \end{align} \end{lemma} As applying $a \mapsto -27a$ twice yields $729a = 3^6 a$, which gives another model of the same curve, these $3$-isogenies partition our set of Mordell curves into pairs. The upshot is that if we can find generators of the Mordell--Weil group of one of each pair we can easily find generators for the other by pushing the basis forward along the isogeny and saturating, if necessary. Using this we need only compute the Mordell--Weil bases of half the curves, and we may choose which of each pair to consider. \subsubsection{Standard techniques} Out of the $93312$ Mordell curves, we have computed what should be the analytic rank of those with positive $a$ using Pari/GP's \texttt{ellanalyticrank}~\cite{pari2}, via Sage~\cite{sage}.
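For a single Mordell curve this computation takes only a few lines; the following is a minimal sketch in Sage (which uses Python syntax), with an arbitrary illustrative value of $a$:
\begin{verbatim}
# Sage sketch: analytic rank of the Mordell curve y^2 = x^3 + a via Pari/GP
a = 2^3 * 3 * 5^2 * 7 * 11 * 13        # arbitrary illustrative exponents
E = EllipticCurve([0, a])              # short Weierstrass model [a4, a6]
ar = E.pari_curve().ellanalyticrank()  # returns [r, L^(r)(E, 1)]
print(ar[0])
\end{verbatim}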
Using the above isogeny, the other half of the curves will have the same Mordell--Weil rank. Of these curves, $20215$ have analytic rank~$0$, $23186$ have analytic rank~$1$, $3112$ have analytic rank~$2$, $142$ have analytic rank~$3$, and only $1$ curve has analytic rank~$4$, which is \[y^2 = x^3 + 82063881900\text.\] We assume that the output of \texttt{ellanalyticrank} is correct and that the analytic ranks are as stated above. As Pari does not use interval arithmetic, it is not clear to what extent these computations are guaranteed to be correct (especially for the high rank cases). As we shall see below, we have found as many generators as there should be for almost all curves. For many of the curves, once a set of generators is found, descent techniques can be used to prove that the algebraic rank equals what is implied by BSD, and that the set of generators is complete. By the work of Gross--Zagier~\cite{GrossZagier1986heegner} and Kolyvagin~\cite{kolyvagin2007euler} it is known that analytic rank~$\le 1$ implies that the rank equals the analytic rank. Therefore no further computation is required for the analytic rank~$0$ curves above. For the analytic rank~$1$ curves we need only find a single non-torsion point, which we can then saturate to find a basis. For many rank~$1$ and~$2$ curves in the set, and for all curves of rank at least~$3$, a combination of the built-in Magma and Sage functions and a few other techniques summarized below sufficed to compute the Mordell--Weil bases. These included two- and four-descent methods, and point searching with Stoll's \texttt{ratpoints} program~\cite{Stoll14ratpoints} and, in some instances, Simon's \texttt{ellQ}~\cite{Simon02ellQ}. For the curves of rank at least~$2$, sometimes it was only possible to find a subset of a set of generators on each curve of each three-isogenous pair. However, in this case it was often possible to map one set of generators via the isogeny to the other curve, and combine the generators to give a basis for the Mordell--Weil group of one (and therefore both) curves. This happened mostly when the height of the found generators grew when mapped to the isogenous curve. In rank~$1$, Heegner points are available in addition to the other machinery of point searching and descent \cite{cohen_number_2007}. In theory, computing a Heegner point is guaranteed to terminate, and if the found point is non-torsion then it is known that the curve has algebraic rank~$1$. However, in order to compute Heegner points we need to find the images of points under the modular parameterization, and hence we may need to compute a large number of Frobenius traces to find the image to a large enough precision to recover an algebraic point. Using a combination of all of these techniques we found bases for all curves except $16481$ of the rank~$1$ curves, and we found a single generator (but not the full basis) for all but~$33$ of the rank~$2$ curves. There was one additional rank~$2$ curve for which we did not find any infinite order points with these methods ($E_a$ for $a = 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11^4 \cdot 13^5$). It is likely that a part of the rank~$1$ cases would be amenable to the techniques mentioned, by using larger search bounds or more time or memory. However it seemed a different approach was needed to find bases on the hard rank~$1$ curves as well as all remaining rank~$2$ curves.
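As a sanity check, the isogeny of Lemma~\ref{lem:threeisog} underlying this pairing can be verified symbolically; a short sketch in Python using sympy:
\begin{verbatim}
# Verify that (x, y) -> ((y^2 + 3a)/x^2, y(y^2 - 9a)/x^3) maps points
# of y^2 = x^3 + a to points of Y^2 = X^3 - 27a.
from sympy import symbols, expand, simplify

x, y, a = symbols('x y a')
X = (y**2 + 3*a) / x**2
Y = y * (y**2 - 9*a) / x**3
rel = expand(Y**2 - X**3 + 27*a)
print(simplify(rel.subs(y**2, x**3 + a)))  # prints 0 on the curve
\end{verbatim}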
\subsubsection{$12$-descent} To determine the generators on these harder curves we used the 12-descent routine in Magma designed and implemented by Fisher~\cite{fisher_finding_2007}. This works by combining a $3$-cover obtained from a $3$-descent procedure with a $4$-cover obtained by doing $2$-descent and then $4$-descent. In our setting the presence of a $3$-isogeny for all of our target curves allows us to use $3$-descent by isogeny to obtain the $3$-cover; this is more efficient, as the number fields involved are smaller than for a general $3$-descent. The implementation for this in Magma is due to Creutz. Fisher's algorithm then determines a $12$-cover and a map to the original curve from each pair of one $3$-cover and one $4$-cover coming from these lower descents. Therefore to find a generator of the Mordell curve we loop over all $4$-covers and $3$-covers of the curve coming from descent and search for points on the corresponding $12$-cover. It is expected that if an $n$-cover has small enough coefficients, then the height of a preimage of a point of height $h$ is roughly $h/2n$. Therefore, given an estimate of the canonical height of a generator of the Mordell curve (coming from the regulator estimated via BSD) and a bound for the difference of the na{\"i}ve and canonical heights on an elliptic curve (such as \cite{mullerstoll}), we can search for points on the cover which should map to a generator. Because this point should have smaller height, this should substantially reduce the time needed to search for points, compared with simply searching on the original curve. Using this we reduce the height to be searched up to by a factor of up to~$24$ if the coefficients of the $12$-cover are not too large. To search for points on the $12$-covers we use the Magma method \texttt{PointSearch}, implemented by Watkins~\cite{watkinspadic}, see also~\cite{womack_explicit_2003}. This approach has been used previously to find generators of large height on single Mordell curves \cite{weigandt}. Due to the fact that we do not know $|{\mbox{\textcyr{Sh}}}|$ for our curves, the regulator may give an overestimate for the height of a generator, as BSD will only allow us to determine $\sqrt{|{\mbox{\textcyr{Sh}}}|} \cdot R$ from readily available information. This procedure was carried out with increasing timeout, up to a maximum of~$12$ hours, and was broadly successful in finding a generator on the rank~$1$ and~$2$ curves for which more standard methods failed. \subsubsection{Remaining curves} The combination of these methods has been broadly successful. However there are~$306$ rank~$1$ curves remaining (up to the $3$-isogeny above) for which we have so far been unable to find the Mordell--Weil bases. A combination of large conductor and large regulator (and hence either large generator height or large $|{\mbox{\textcyr{Sh}}}|$) has prevented any of the above methods from working in a reasonable time frame. The Mordell curve with smallest regulator for which we do not know a generator is \[ y^2 = x^3 + 730033053750 \] with regulator approximately $167.305352$. The largest regulators occurring for the remaining curves arise for \[ y^2 = x^3 \pm 904509009004500900000, \] which interestingly are quadratic twists of each other (by $-1$). Their regulators are $17550.10$ in the $+$ case and $17628.52$ in the $-$ case. However these curves are somewhat exceptional; not all curves are quite so large. The mean of the remaining regulators is $2622.49$.
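The regulator values quoted here come from the BSD leading-term formula, which for a rank-$1$ curve determines the product of the regulator and $|{\mbox{\textcyr{Sh}}}|$. The following Sage sketch indicates the kind of computation involved; it is our illustration only, with the real period normalized as in Sage, and the output is a heuristic size estimate rather than a proven value:
\begin{verbatim}
# Sage sketch: BSD estimate of Reg * |Sha| for a rank-1 Mordell curve
E = EllipticCurve([0, 730033053750])     # smallest missing regulator
ar = E.pari_curve().ellanalyticrank()    # [r, L^(r)(E, 1)]; here r = 1
L1 = ar[1]                               # leading coefficient (r! = 1)
omega = E.period_lattice().omega()       # real period
reg_sha = L1 * E.torsion_order()**2 / (omega * E.tamagawa_product())
print(reg_sha)                           # = Reg * |Sha| if BSD holds
\end{verbatim}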
To attack the remaining curves, several options exist to compute a generator. We have trialled these for a few of the remaining curves that we expect to be ``easier''. We attempted to make use of Magma's \texttt{HeegnerPoint} method. As described in Watkins~\cite{watkins_remarks_2006}, this allows the user to use $4$-descent to construct a $4$-cover of the target elliptic curve and then find a Heegner point on the cover, reducing the precision needed and hence the number of required Frobenius traces. Unfortunately the Magma method fails on many of our difficult examples, presumably because the conductor of the curve and the height of the Heegner point are both large enough that the number of Frobenius traces needed becomes unwieldy for Magma. Due to the closed source nature of Magma (and the \texttt{HPInternal2} and \texttt{FrobeniusTracesDirect} methods in particular) we have been unable to rectify these problems. It is also unclear whether or not Magma's algorithm for computing all traces of Frobenius for primes below a given bound for one of our curves is optimal. As our curves are Mordell curves they have CM (by $\QQ(\sqrt{-3})$), therefore to compute the Frobenius traces we may make use of Cornacchia's algorithm \cite[p.~597]{cohen_number_2007}. The highly optimised \texttt{smalljac} package \cite{kedlaya_computing_2008} (available from Sutherland's webpage) includes an implementation of this algorithm in the case of $j$-invariant~$0$, and we expect that using this will be the most effective way to compute enough Frobenius traces to find a Heegner point on the remaining curves. Happily, Pari/GP's \texttt{ellheegner} method is more reliable on our examples, though it does appear to use the covering method; thus we expect that it will conclude on several of the remaining curves given enough time. It is not clear that Cornacchia's algorithm is used to compute all of the Frobenius traces. For instance this function has returned successfully for one of the ``missing'' curves. We have found a generator of \[ y^2 = x^3 + 4259854045547100000 \] of height $956.2822$, and it is possible this case was more tractable due to the fact that this point is really the double of a generator, which has height $239.07055$ instead. As we are not actually missing a generator on this curve, we may check that it indeed does not give any extra elements of $M(S(6))$. However, as it took far longer to find this point, and it required more interactive experimentation with parameters than the descent methods that we used for the vast majority of the curves, we prefer to present it separately from the main data. In theory, with an increased height bound for point searching on 12-covers and with enough time, a point should be found on such a cover in the same way as we found the above. There are two potential issues with this. Firstly, a lattice reduction algorithm is used in the point search procedure. It often happens that this method gets stuck if these lattices happen to be ill-conditioned for Magma's algorithm. This can stall the point search, and we are not aware of the true cause or of ways of avoiding this other than restarting and hoping to get lucky. The second is that the coefficients of the $12$-cover can be quite large, which can reduce the effectiveness of the height saving of the algorithm. Thus it is very important to minimise the $12$-cover, as described by Fisher, as much as is possible in order to get the most use out of the method.
It is plausible that with more work minimizing the $12$-covers the runtime of point searching can be made more feasible. We have checked another ``missing'' example where $12$-descent succeeds with more individual care than we were able to take at scale. This was the curve $E_a$ for $a= 139413405126996000$, which has regulator $1504.24027$; with a height bound of~$10^{21}$ on the $12$-covers, the descent finds a point which gives us a generator of height $1504.24027$ on $E_a$. It is interesting that this point is not a multiple of any point of smaller height, suggesting that ${\mbox{\textcyr{Sh}}}$ is trivial here. Higher descents are also a potential avenue to complete the process of finding generators for the remaining curves. The work of Fisher allows one to combine covers of coprime degrees subject to a numerical condition on the degrees. This includes the case of combining an $n$-cover with an $(n+1)$-cover to obtain an $n(n+1)$-cover. This could conceivably be used to compute $8\cdot 9 = 72$-covers on Mordell curves by combining 8-descent and 9-descent (as a second $p$-isogeny descent), both of which have been implemented in Magma. It is unknown at present, however, how to make describing and combining such covers practical. \subsection{Completeness of the data} \label{secCompletenessHeuristic} First, for many Mordell curves we have computed what should be their rank by computing the analytic rank. This is easier to compute than the algebraic rank in general. According to BSD these ranks are equal, but this is not known in general. Computing the algebraic rank is more computationally intensive and can be obstructed by non-trivial ${\mbox{\textcyr{Sh}}}$. However the analogous computation was performed in \cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama} for~$S(5)$. We have in some cases allowed Magma to assume GRH, which speeds up the computation of class groups and hence the descent machinery. This does not invalidate searching for points on the corresponding covers; any rational points found are then verified unconditionally to be independent elements of the Mordell--Weil group. But when proving that the algebraic ranks agree with the analytic ones, either GRH or a longer computation time is required. Secondly, and more seriously, we are missing any $S$-integral points on 612 Mordell curves $E_a$ of rank $1$, because so far we were not able to find the Mordell--Weil generator for 306 curves (as once a basis for one curve of each isogenous pair is found, the other may be computed relatively easily). Assuming BSD we may estimate the regulators of these curves up to a factor of $\sqrt{|{\mbox{\textcyr{Sh}}}|}$. In the rank~$1$ case the regulator is simply the height of a Mordell--Weil generator. So we have that in the missing cases either the generators are of large height or $\sqrt{|{\mbox{\textcyr{Sh}}}|}$ is large, as their product is at least~$150$. To relate this to the $S$-integral points on these curves, we recall that the \(abc\) conjecture can be used to prove the weak Hall conjecture, which states that integral points $(x,y)$ on the Mordell curve $E_a\colon y^2 = x^3 + a$ satisfy $x = O(a^{2+\varepsilon})$ for any $\varepsilon>0$, see Schmidt~\cite{Schmidt91diophantineApproximationsBook}. The same proof can be used to show (asymptotic) upper height bounds for $S$-integral points on~$E_a$. These make it seem unlikely that an $E_a$ of rank~$1$ with a very large Mordell--Weil generator has an $S$-integral point.
These estimates could be made explicit if we assume for example Baker's explicit \(abc\) conjecture~\cite{BakerABC}. We give more details on this heuristic in Section~\ref{secSHallandABC}. These missing Mordell--Weil generators of curves of rank $1$ could be computed via the Heegner point method, which is for example implemented in Pari/GP~\cite{pari2}, and whose complexity to find $P\in E_a(\QQ)$ is proportional to $\sqrt{N}h(P)$. Thus together with BSD we estimate that we can prove completeness of our database in about $50$ CPU years. This is probably less than the (quote) ``many thousand machine hours on 80 cores'' that Bennett, Gherga and Rechnitzer~\cite{BennettGhergaRechnitzer19ellipticCurvesOverQ} used to recompute the database of \cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama} for~$S(5)$. The original computation of $M(S(5))$~\cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama} was not timed, but recalling from memory it took on the order of one CPU year. \subsection{An $S$-integral weak Hall conjecture and the $abc$ conjecture} \label{secSHallandABC} In this section we will discuss an $S$-integral analogue of the classical Hall conjecture and how it adds to our heuristic for why our database should be complete. As for the classical Hall conjecture, we will show that it is implied by the $abc$ conjecture. For this section we will use the following terminology. For any finite set of rational primes~$S$, we call a pair of integers $(x,y)$ \textdef{$S$-primitive} if there is no $p\in S$ such that $p^6$ divides both $x^3$ and~$y^2$. We formulate an $S$-integral generalization of the weak Hall conjecture. \begin{conjecture}[An $S$-integral weak Hall conjecture] \label{conjSHall} Let $S$ be a finite set of rational primes. Let $D\neq 0$ be an integer. For any $\varepsilon>0$, any $S$-primitive solution $(x,y)$ of the equation \begin{equation} \label{eqSintegralHallEquation} y^2 = x^3 + aD,\qquad x,y\in \ZZ, \quad a\in\ZZ\cap\mathcal{O}_S^\times, \end{equation} satisfies \begin{equation} \label{eqSintegralWeakHallInequality} \max(|x|^{1/2},|y|^{1/3}) = O_\varepsilon((N_SD)^{1+\varepsilon}). \end{equation} \end{conjecture} Recall that the $abc$ conjecture states that for any $\varepsilon>0$ the following holds. If $a,b,c$ are coprime integers with $a+b+c=0$, then \begin{equation} \label{eqABC} \max(|a|,|b|,|c|)\leq O_\varepsilon(\textnormal{rad}(abc)^{1+\varepsilon}), \end{equation} where $\textnormal{rad}(abc) = \prod_{p\,|\, abc} p$. More explicitly, \eqref{eqABC} states that $\max(|a|,|b|,|c|)\leq K_\varepsilon\,\textnormal{rad}(abc)^{1+\varepsilon}$, where $K_\varepsilon$ is a constant that depends only on~$\varepsilon$. \begin{theorem} The $abc$ conjecture implies the $S$-integral weak Hall conjecture. More explicitly, if the $abc$ conjecture holds for some $0<\varepsilon\leq 0.1$ with constant $K_\varepsilon$, then any $S$-primitive solution $(x,y)$ of~\eqref{eqSintegralHallEquation} satisfies \begin{equation} \label{eqWeakSintegralHallBound_from_ABC} \max(|x|^{1/2},|y|^{1/3}) \leq K_\varepsilon^{1+10\varepsilon}(N_SD)^{1+12\varepsilon}. \end{equation} \end{theorem} Our proof largely follows Schmidt's proof~\cite{Schmidt91diophantineApproximationsBook} that $abc$ implies the classical weak Hall conjecture, although the proof below avoids some technicalities by choosing $s$ and $t$ (see proof) in an efficient way. \begin{proof} Suppose $(x,y)$ is an $S$-primitive solution of~\eqref{eqSintegralHallEquation}. Let $g=\gcd(x^3,y^2)$.
Let $A = x^3/g$, $B = -y^2/g$ and $C=aD/g$, which are coprime integers. As $A+B+C=0$, the $abc$ conjecture implies that \begin{equation} \label{eqabcForABCinProof} \max(|x|^3/g,|y|^2/g) \leq K_\varepsilon\, \textnormal{rad}(ABC)^{1+\varepsilon}. \end{equation} We claim that \begin{equation} \label{eqRadABCdividesXYNSDoverG} \textnormal{rad}(ABC) \,|\, \frac{xyN_SD}{g}. \end{equation} To see this we consider two cases. Case 1.) If some $p\in S$ divides $ABC$, then by $S$-primitivity of $(x,y)$ we have $\textnormal{ord}_p(y)\leq 2$ or $\textnormal{ord}_p(x)\leq 1$. In either case, $\textnormal{ord}_p(g)\leq 4$. If $\textnormal{ord}_p(g)=4$, then $p^2\,|\, x$, $p^2\,|\, y$, $p\,|\, N_S$, and thus $p\,|\, xyN_SD/g$. The cases $\textnormal{ord}_p(g)\in\{0,2,3\}$ are similar, and $\textnormal{ord}_p(g)=1$ is a priori not possible. Case 2.) Suppose some $p\not\in S$ divides $ABC$. If $p\nmid g$ then the obvious $p\,|\, xyD$ suffices. If $p\,|\, g$, then $\textnormal{ord}_p(xyD/g)\geq \textnormal{ord}_p(g)(1/3+1/2+1-1) >0$ and so $p\,|\, xyD/g$. This finishes the proof of~\eqref{eqRadABCdividesXYNSDoverG}. Plugging \eqref{eqRadABCdividesXYNSDoverG} into~\eqref{eqabcForABCinProof} implies that $\max(|x|^3,|y|^2)\leq K_\varepsilon (xyN_SD)^{1+\varepsilon}$ and hence \[ |x|^{3s}|y|^{2t}\leq K_\varepsilon^{s+t}(xyN_SD)^{(s+t)(1+\varepsilon)}. \] For $s = (1-\varepsilon)/(1-5\varepsilon)$ and $t = (1+\varepsilon)/(1-5\varepsilon)$ we obtain \[ |x| \leq K_\varepsilon^{2/(1-5\varepsilon)} (N_SD)^{(2+2\varepsilon)/(1-5\varepsilon)}. \] Similarly for $s = (1+\varepsilon)/(1-5\varepsilon)$ and $t = (2-\varepsilon)/(1-5\varepsilon)$ we obtain \[ |y| \leq K_\varepsilon^{3/(1-5\varepsilon)} (N_SD)^{(3+3\varepsilon)/(1-5\varepsilon)}. \] This yields \[ \max(|x|^{1/2},|y|^{1/3}) \leq K_\varepsilon^{1/(1-5\varepsilon)} (N_SD)^{(1+\varepsilon)/(1-5\varepsilon)}. \] For $\varepsilon\leq 0.1$ this reduces to the claimed bounds. \end{proof} Let us relate this to $S$-integral points on the above Mordell curves~$E_a\colon y^2 = x^3 + a$, where $a$ is, as in~\eqref{eqAs_for_S6}, an $S$-unit with bounded exponents. Suppose $P=(X,Y)\in E_a(\mathcal{O}_S)$. We can clear denominators of $X$ and $Y$ by multiplying $X^3$ and $Y^2$ by suitable powers of $p^6$ for each $p\in S$, and call the resulting integers $\wt X$ and $\wt Y$. This yields a relation $\wt Y^2 = \wt X^3 + \wt a$, to which we can apply the $S$-integral weak Hall conjecture~\ref{conjSHall} (with $D=1$), or alternatively~\eqref{eqWeakSintegralHallBound_from_ABC} as implied by the $abc$ conjecture. We obtain conjectural asymptotic height bounds for $|\wt X|^3$ and $|\wt Y|^2$, which imply, up to a small explicit constant (depending on~$S$), the same bound on the na\"i{}ve height of $P$, which in turn is, up to an explicitly bounded error, the N\'eron--Tate height~$\hat h(P)$. In the case $S=S(6)$ we can thus make the following heuristic argument. First, assume that the $abc$ conjecture holds for $\varepsilon=0.1$ with a constant $K_\varepsilon\leq 1.1\cdot 10^8$. We checked that this bound indeed holds for all $abc$-triples of the ABC@Home project by de~Smit~\cite{deSmitABCatHome} for which we could compute the radical. Using this $\varepsilon$ and $K_\varepsilon$ and the above reasoning, we would obtain a bound for $\hat h(P)$ of approximately $2(2\log K_\varepsilon + 2.2\log N_S)\leq 120$.
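The closing numerical bound can be reproduced directly; a few lines of Python evaluate the quoted expression with the assumed $K_\varepsilon$:
\begin{verbatim}
import math

K_eps = 1.1e8            # assumed abc constant for eps = 0.1 (see text)
N_S = 2*3*5*7*11*13      # = 30030 for S = S(6)
bound = 2 * (2*math.log(K_eps) + 2.2*math.log(N_S))
print(round(bound, 1))   # 119.4, consistent with the quoted bound of 120
\end{verbatim}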
\section{Attainability of maximal conductor by curves in $M(S)$} \label{secMaximalConductor} In this section, we prove some results suggested by empirical observations of our data. Specifically we ask the following: given a set of rational primes~$S$, what is the highest possible conductor of an elliptic curve over $\QQ$ with good reduction outside~$S$? An immediate upper bound is $M_S:= \prod_{p\in S} p^{f_p}$ where $f_2=8$, $f_3=5$, and $f_p = 2$ for $p\geq 5$. More specifically we may then ask: \begin{question} \label{quMaximalConductor} Does there exist a curve of conductor $M_S$ for any set $S$? \end{question} The answer to this question is no without further conditions on~$S$: for example, there does not exist an elliptic curve with good reduction away from~$5$. However, the answer is positive for a large class of~$S$. Motivated by our data we have the following sufficient criterion. \begin{theorem} \label{thmMaximalConductor} Let $S$ be a finite set of rational primes that contains either $2$ or $3$ (or both). Then there exists an elliptic curve over $\QQ$ with conductor $N=M_S$. \end{theorem} In order to prove the theorem we recall the notion of quadratic twists of elliptic curves. For any rational elliptic curve $E\colon y^2 = x^3 + ax + b$ and an integer $d$, we denote by $E^d\colon y^2 = x^3 + d^2ax + d^3b$ its quadratic twist by~$d$. The theorem now follows immediately from the following lemma. The proof is constructive. \begin{lemma} \label{lemMaximalConductor} Let $d$ be a square-free product of primes $p\geq 5$. \begin{enumerate} \item Let $E_{\{2,3\}}\colon y^2 = x^3 - 18x + 24$. Then $E_{\{2,3\}}^d$ has conductor $N = 2^8 3^5 d^2$ and Kodaira type $\mathrm{III}$ at $2$, $\mathrm{II}$ at $3$, and $\mathrm{I}_0^*$ at $p\geq 5$ with $p\,|\, d$. \item Let $E_{\{2\}}\colon y^2 = x^3 + 8x$. Then $E_{\{2\}}^d$ has conductor $N = 2^8 d^2$ and Kodaira type $\mathrm{III}^*$ at $2$ and $\mathrm{I}_0^*$ at $p\geq 5$ with $p\,|\, d$. \item Let $E_{\{3\}}\colon y^2 + y = x^3 - 1$. Then $E_{\{3\}}^d$ has conductor $N = 3^5 d^2$ and Kodaira type $\mathrm{II}$ at $3$ and $\mathrm{I}_0^*$ at $p\geq 5$ with $p\,|\, d$. \end{enumerate} \end{lemma} \begin{proof} This is a straightforward computation with Tate's algorithm, which we omit here. For the convenience of the reader it is available as an appendix of the GitHub and arXiv versions of this paper, which can be found at: \href{https://github.com/elliptic-curve-data/ec-data-S6/blob/master/docs/paper.pdf}{\textit{https://github.com/elliptic-curve-data/ec-data-S6/blob/master/docs/paper.pdf}} \end{proof} We remark that in general, twisting an elliptic curve~$E$ by a prime $p\geq 5$ may change the reduction type of~$E$ at~$2$ and~$3$, but this does not happen for the three curves listed in the lemma. Silverman~\cite[Exercises 4.52, 4.53]{Silverman94advancedTopicsBook} gives two families of elliptic curves defined over~$\QQ$, which have maximal possible conductor exponent at~$3$ and at~$2$, respectively, and which also have this property after base changing to a number field. The above curve $E_{\{2\}}$ belongs to Silverman's latter family. \section{Applications} \label{secApplications} In this section we will briefly discuss some applications of the dataset. \subsection{Solving $S$-unit equations} \label{secSUE_connection} Let $S$ be a finite set of rational primes. As above denote by $\mathcal{O}_S$ and $\mathcal{O}_S^*$ the $S$-integers and $S$-units, respectively.
The $S$-unit equation is the equation \begin{equation} \label{eqSunit} x+y=1, \qquad x,y\in\mathcal{O}_S^*. \end{equation} This classical diophantine equation is intimately related to the \(abc\) conjecture, as can be seen by clearing denominators to obtain an $abc$ equation. More generally, $S$-unit equations over number fields are known to have only finitely many solutions, as was first shown by Siegel~\cite{siegel29anwendungenDiophantApprox} and Mahler~\cite{mahler1933approximation}. Siegel~\cite{siegel29anwendungenDiophantApprox,Silverman86arithmeticBook} used this to prove that any hyperelliptic curve of genus at least one has only finitely many $S$-integral points. It turns out that solving $S$-unit equations can be reduced to the computation of $M(S\cup\{2\})$ via Frey--Hellegouarch curves: if $(x,y)$ is a solution of the $S$-unit equation, then $E_x\colon Y^2 = X(X-1)(X-x)$ lies in $M(S\cup\{2\})$. Moreover any curve $E\in M(S\cup\{2\})$ can be obtained in this way from at most six different solutions of~\eqref{eqSunit}, and these can be computed from the six possible modular $\lambda$-invariants of~$E$. In our case, \eqref{eqSunit} for $S=S(6)$ is exactly the case that has been considered by de Weger~\cite{deWeger87solvingExponentialDiophantineEquations}. He proved that, up to symmetry, it has exactly 545 solutions. We checked that the curves associated to all of these can be found in our database, which means that our database certainly contains all Frey--Hellegouarch curves with good reduction outside~$S(6)$. We remark that \eqref{eqSunit} has been solved for~$S=S(16)$, as well as for all $S$ with $N_S\leq 10^7$~\cite{vKMa14sUnitAndMordellEquationsUsingShimuraTaniyama}. This is far out of reach for the above method of reducing~\eqref{eqSunit} to computing~$M(S)$. In the other direction, the computation of~$M(S)$ can be reduced to solving $S'$-unit equations over finitely many number fields, where the number fields are all possible number fields $K$ of degree at most six that are unramified outside $S\cup\{2\}$, and $S'$ is the set of primes of~$K$ above $S\cup\{2\}$. This link was made into an algorithm by Koutsianas~\cite{Koutsianas19ellipticCurvesOverNFs}. \subsection{Other diophantine problems} \label{secReductionsOfDiphantineEquations} Many other diophantine problems reduce to the computation of~$M(S)$, notably cubic Thue--Mahler equations \[ ax^3 + bx^2y + cxy^2 + dy^3 = m\prod_{p\in S}p^{e_p}, \qquad x,y\in\ZZ,\ \ e_p\in\ZZ_{\geq 0} \ (p\in S), \] where $a,b,c,d,m \in\ZZ$ and $m\neq 0$ are given such that the left-hand side has non-vanishing discriminant. Likewise generalized Ramanujan--Nagell equations \[ x^2+b = y, \qquad x\in\mathcal{O}_S,\ y\in\mathcal{O}_S^*, \] where $b\neq 0$ is a given integer, can be reduced to computing~$M(S)$. In particular we can find solutions of these equations for $S=S(6)$ via our computation of curves in $M(S)$; subject to the hypothesis that we have in fact found the whole set $M(S)$, these should be the complete sets of solutions of these equations, see the discussion on completeness in Section~\ref{secCompletenessHeuristic}. \subsection{$n$-congruences between elliptic curves} Given $n\in \NN$, a pair of elliptic curves $E_1,E_2/\QQ$ for which $E_1[n]\simeq E_2[n]$ as Galois modules is called $n$-congruent. The Frey--Mazur conjecture implies that there should be an absolute bound $C$ such that if $p\ge C$ and $E_1,E_2$ are $p$-congruent, then $E_1$ and $E_2$ must be isogenous.
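A cheap necessary condition for an $n$-congruence is that the traces of Frobenius of the two curves agree modulo~$n$ at primes of good reduction. A sketch of such a first filter in Sage (our illustration, with a hypothetical prime bound):
\begin{verbatim}
# Sage sketch: necessary (not sufficient) test for E1[n] ~ E2[n]
def trace_congruent(E1, E2, n, bound=1000):
    for p in primes(bound):
        if E1.has_good_reduction(p) and E2.has_good_reduction(p):
            if (E1.ap(p) - E2.ap(p)) % n != 0:
                return False   # certainly not n-congruent
    return True                # candidate pair only; not a proof
\end{verbatim}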
The only known example of a pair of non-isogenous 17-congruent elliptic curves, found by Cremona and then Billerey \cite{billerey2016remarkable}, occurs for a pair of curves with good reduction outside of $\{3, 5, 7, 13\}$. Using our database we searched for similar examples of $n$-congruences between curves for primes $13\le n\le 47$. We found several instances of 13-congruences that were outside the range of existing databases. Fisher has recently found an infinite family of 13-congruent curves \cite{fisher13}, of which the examples in our database are all members. We did not find any further examples of 17- (or higher) congruences between curves in our database, other than quadratic twists of the example of Cremona--Billerey mentioned above.
\section{Introduction} The problem of the seven bridges of K\"onigsberg considered by Leonhard Euler in 1736 \cite{Euler1736} was one of the most notable mathematical achievements which laid the foundations of graph theory and topology. In 1936 this seeding idea was used by Linus Pauling in physics \cite{Pauling36} in order to describe a quantum particle moving in a physical network, the model known today as a quantum graph. The idea of quantum graphs was further extensively developed in Refs. \cite{Exner88,Kottos1997, Blumel2002, BK, Pluhar2014}. In the considered model a metric graph $\Gamma=(V,E)$ is formed by the edges $e \in E$ connected together at the vertices $v \in V$. Each edge is seen as an interval on (a separate copy of) the real line $\mathbb{R}$ having the length $l_e$; the vertices can then be defined as disjoint unions of edge endpoints. Let us consider the Laplace operator $L(\Gamma) = -\frac{d^2}{dx^2}$ acting in the Hilbert space of square integrable functions on $\Gamma$ satisfying in addition the standard vertex conditions (also called natural, Neumann or Kirchhoff): the function is continuous at the vertex $v$ and the sum of oriented derivatives at the vertex $v$ is equal to zero. Such a Laplacian is uniquely determined by the metric graph, is self-adjoint, and its spectrum is purely discrete \cite{BK}. Moreover, the operator is non-negative, with zero being a simple eigenvalue (provided the graph is connected) whose eigenfunction is given by the constant. For more details on quantum graphs, we refer the reader to the book \cite{BK} and the references therein. Quantum graphs were used to simulate, e.g., mesoscopic quantum systems \cite{Kowal1990,Imry1996}, quantum wires \cite{Sanchez1988}, and optical waveguides \cite{Mittra1971}. In this letter we report breakthrough results on the topology of quantum graphs and microwave networks. We show that by measuring several dozen eigenvalues of the system one may recover its Euler characteristic without seeing the graph, i.e., without knowing the number of the graph's vertices and edges. In particular one may even determine structural properties of the network, e.g., whether the graph is planar or fully connected. The original formula for $\chi $ \cite{Ku05} requires knowledge of all eigenenergies of the system and plays a very important role in the study of inverse problems for quantum graphs, but its applicability to laboratory measurements is limited, since only a finite number of eigenenergies can be obtained in any real world experiment. From the experimental point of view it is important to point out that quantum graphs can be modeled by microwave networks \cite{Hul2004,Lawniczak2008, Hul2012,Sirko2016,Dietz2017,Lawniczak2019b}. This is possible because both systems are described by the same equations: the one-dimensional Schr\"odinger equation appearing in quantum graphs is formally equivalent to the telegrapher's equation for microwave networks \cite{Hul2004,Sirko2016}. Microwave networks are the only systems that allow for the experimental simulation of quantum systems corresponding to all three classical ensembles of random-matrix theory (RMT): systems with $T$ invariance belonging to the Gaussian orthogonal ensemble (GOE) \cite{Hul2004,Lawniczak2008,Hul2012,Dietz2017,Lawniczak2019} and the Gaussian symplectic ensemble (GSE) \cite{Stockmann2016}, and systems without $T$ invariance belonging to the Gaussian unitary ensemble (GUE) \cite{Hul2004,Lawniczak2010,Allgaier2014,Bialous2016,Lawniczak2017,Lawniczak2019b}.
Microwave networks were successfully used, e.g., to demonstrate the usefulness of missing level statistics in a variety of applications \cite{Bialous2016} and to show that there exist graphs which do not obey the standard Weyl's law, called non-Weyl graphs \cite{Lawniczak2019}. The most important characteristics of a metric graph $\Gamma=(V,E)$ are the Euler characteristic $\chi =|V|-|E|$ and the total length $\mathcal{L} =\sum_{e\in E} l_e$. The Euler characteristic $\chi$ determines the number $\beta$ of independent cycles in a graph \begin{equation} \beta = |E|-|V|+1 \equiv 1-\chi \,,\label{eq:cycles} \end{equation} while the total length $\mathcal{L}$ determines the asymptotics of a graph's eigenvalues $\lambda_n$ via the Weyl's formula \begin{equation} \lambda_n = \Big(\frac{\pi}{\mathcal{L}}\Big)^2n^2 + \mathcal{O}(n) \,,\label{eq:Weyl} \end{equation} where $\mathcal{O}(n)$ denotes a function which for $n\rightarrow +\infty $ is bounded by a constant times $n$. The number of independent cycles measures how different a graph is from a tree and is equal to the number of edges that have to be deleted to turn the graph into a tree. It might seem that the determination of both characteristics would require the knowledge of the whole sequence of eigenvalues. Such an assumption is natural in mathematics and allows one to derive the precise formulas $ \mathcal{L} = \pi\lim_{n\rightarrow +\infty }\frac{n}{k_n} $ and $\chi = X(t) \vert_{t \geq t_0}$ \cite{Ku05,Ku06}, where \begin{equation} X(t) := 2 + 2\pi \sum_{n=1}^{ \infty} \cos(k_n/2t)\Big(\frac{\sin(k_n/4t)}{k_n/4t}\Big)^2 , \label{eq:chi} \end{equation} and $k_n$ are the square roots of the eigenenergies $\lambda_n$ and $t_0=\frac{1}{2l_{min}}$ with $l_{min}$ being the length of the shortest edge of a simple graph. While the derivation of the formula for $ \mathcal{L} $ is elementary, formula (\ref{eq:chi}) can be obtained either from the trace formula \cite{GuSm3,KuNo8,Ro11} connecting the spectrum to the set of periodic orbits on $\Gamma$ \cite{Ku05} or by analyzing heat kernels \cite{Ro11}. The knowledge of the whole spectrum allows one to reconstruct the metric graph, provided the edge lengths are rationally independent (see e.g. \cite{vBe1,GuSm3,KuNo8}), thus providing an affirmative answer to the classical question asked by Mark Kac \cite{Kac4} adapted to quantum graphs as ``Can one hear the shape of a graph?" \cite{Hul2012}. However, in real-world experiments there is no chance to determine the entire spectrum. For example, in microwave networks, because of the openness of the systems and the existence of internal absorption, one can measure only up to several hundred eigenfrequencies. Moreover, one cannot guarantee that the edge lengths are rationally independent; therefore it is natural to investigate the question whether the total length $\mathcal{L}$ and the Euler characteristic $\chi$ can be reconstructed directly from the spectrum without determining a precise form of the graph. The formulas for $\mathcal{L}$ and $X(t)$ provide such a possibility, but their characters are completely different. The total length $\mathcal{L}$ is a positive real number, hence to determine it with a high precision one needs to know high energy eigenvalues $\lambda_n$: the more eigenvalues are determined, the better the approximation of $\mathcal{L}$ obtained. The Euler characteristic $\chi$ is an integer (often negative), hence to determine it precisely it is enough to know the right hand side of (\ref{eq:chi}) with an error less than 1/2.
Therefore, knowing that in the experiment only a limited number of the eigenvalues can be measured, we shall concentrate in this letter on determining the Euler characteristic $\chi$. \section{A new formula for the Euler characteristic} The series in formula (\ref{eq:chi}) for the Euler characteristic is slowly converging. Its application requires the measurement of several hundred or even more eigenenergies, which in most cases is not achievable. Therefore, we derived a new function \begin{equation} X(t) := 2 + 8\pi^2\sum_{k_n\neq 0}\frac{\sin(k_n/t)}{(k_n/t)[(2\pi)^2-(k_n/t)^2]} , \label{eq:chi_2} \end{equation} which gives the Euler characteristic $\chi = X(t) \vert_{t \geq t_0}$ and is characterized by a much better convergence. The details of the derivation are given in the Appendix. \section{Experimental implementation} Let us assume that in the experiment the $K$ lowest resonances (eigenvalues) are measured. We shall calculate the Euler characteristic $ \chi $ by evaluating the function $ X(t)$, substituting a finite sum for the infinite series and assuming that $t \geq t_0$. Let us introduce the function $X_K(t)$ corresponding to the new formula (\ref{eq:chi_2}), \begin{equation} X_K(t) = 2 + 8\pi^2\sum_{n=1}^K\frac{\sin(k_n/t)}{(k_n/t)[(2\pi)^2-(k_n/t)^2]} \,\,. \label{eq:c2} \end{equation} We are going to analyze whether this function gives a good approximation for the Euler characteristic $\chi$ when $t = t_0 =\frac{1}{2l_{min}}$. Comparing (\ref{eq:chi_2}) with (\ref{eq:c2}) we obtain the truncation error $\epsilon = |X(t_0) -X_K(t_0)|$. In order to guarantee that the error $\epsilon$ is less than 1/2, e.g. $\epsilon = 1/4$, it is enough to take the first $K$ eigenvalues, with $K$ given by the following formula \begin{equation} K \simeq |V|-1 + 2\mathcal L t_0 \left [1-\exp\left (\frac{-\epsilon \pi}{\mathcal L t_0}\right )\right ]^{-1/2} \,. \label{eq:k2} \end{equation} The details of the proof are given in the Appendix. The new formula for the Euler characteristic (\ref{eq:chi_2}) was tested experimentally using planar and non-planar microwave networks for which the counting function of the number of resonances satisfies Weyl's law \cite{Lawniczak2019}. For such networks the Euler characteristic is the same as for the corresponding closed quantum graphs. In Fig.~1(a) and (b) we present the schemes of a planar quantum graph $\Gamma$ with $|V|=4$ vertices and $|E|=6$ edges and a planar microwave network with the same topology as $\Gamma$. The total optical length of the microwave network is $\mathcal L=1.494\pm0.006$ m and the optical length of the shortest edge is $l_{min}=0.155\pm0.001$ m. The optical lengths $l^{opt}_i$ of the edges of the network are connected with their physical lengths $l^{ph}_i$ through the relationship $l^{opt}_i = \sqrt{\varepsilon }l^{ph}_i$, where $\varepsilon=2.06$ is the dielectric constant of the Teflon used for the construction of the microwave cables. The quantum graph is a closed dissipationless system for which, according to the definition, the Euler characteristic is $\chi=|V|-|E|=-2$. One should point out that the lack of dissipation is a standard assumption in the mathematical analysis of graphs. In Fig.~2(a) we show that the formula (\ref{eq:c2}) can be easily used to reconstruct the Euler characteristic of the microwave network in Fig.~1(b) and obtain the correct result $ \chi = -2$. Like all real-life systems, this system is open and is characterized by small dissipation \cite{Lawniczak2010}.
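For concreteness, a minimal Python sketch of (\ref{eq:c2}) and (\ref{eq:k2}) (our illustration only; the parameter values are those of the planar network $\Gamma$ described above):
\begin{verbatim}
import numpy as np

def X_K(t, k):
    """Truncated series (eq. for X_K) approximating chi."""
    x = np.asarray(k) / t   # beware the removable singularity at x = 2*pi
    return 2 + 8*np.pi**2 * np.sum(np.sin(x)/(x*((2*np.pi)**2 - x**2)))

def K_needed(V, L, t0, eps=0.25):
    """Sufficient number of resonances for truncation error eps."""
    return V - 1 + 2*L*t0 * (1 - np.exp(-eps*np.pi/(L*t0)))**(-0.5)

t0 = 1 / (2 * 0.155)                         # l_min = 0.155 m
print(round(K_needed(V=4, L=1.494, t0=t0)))  # -> 28, as used below
\end{verbatim}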
The resonances $\nu_1, \ldots, \nu_N$ of the microwave network required for the evaluation of the Euler characteristic were determined from the measurements of a one-port scattering matrix $S(\nu)$ of the network using the vector network analyzer (VNA) Agilent E8364B. One should note that it is customary for microwave systems to make measurements of the scattering matrices as a function of microwave frequency $\nu$. The real parts of the wave numbers $k_n$ are then directly related to the positions $\nu_n$ of the resonances, $\mathrm{Re\,}k_n=\frac{2\pi }{c}\nu_n$. The VNA was connected to the microwave network with the flexible HP 85133-616 microwave cable, which is equivalent to attaching an infinite lead to a quantum graph \cite{Lawniczak2019}. Before each measurement the VNA was calibrated using the Agilent 4691-60004 electronic calibration module to eliminate the errors in the measurements. In order to avoid missing resonances we analyzed the fluctuating part of the integrated spectral counting function $N_{fl}(\nu_i)=N(\nu_i)-N_{av}(\nu_i)$ \cite{Dietz2017}, that is, the difference of the number of identified eigenfrequencies $N(\nu_i) = i$ for ordered frequencies $\nu_1 \leq \nu_2 \leq \ldots$ and the average number of eigenfrequencies $N_{av}(\nu_i)$ calculated in the considered frequency range. Using this well-known method \cite{Dietz2017} we were able to identify the first $N=106$ resonances in the frequency range of $0.31-10.76$ GHz. The problem with the resolution of the resonances begins for $N \simeq 100-150$, but then the sensitivity of the Euler characteristic (\ref{eq:chi_2}) to missing resonances is very weak. In Fig.~2(a) we show the approximation function for the Euler characteristic $X_K(t)$ (\ref{eq:c2}) calculated using the first $K=28$ (green full line) and $K=106$ (red dash-dotted line) experimentally measured resonances of the system, respectively. The value $K=28$ was estimated from the formula (\ref{eq:k2}) assuming that $\epsilon = 1/4$ and taking into account the optical size of the network $\mathcal{L}t_0=4.82\pm0.05$. In Fig.~1(f) we show, as an example, the modulus of the scattering matrix $|S(\nu)|$ of the experimentally studied microwave network $\Gamma$ with $|V|=4$ measured in the frequency range $\nu=3.0-4.5$ GHz. Fig.~2(a) demonstrates that it is enough to use the first $K=28$ resonances (green full line) to identify a clear plateau close to the expected value $\chi=-2$. This plateau extends over $3\textrm{ m}^{-1}<t<6\textrm{ m}^{-1}$ and includes the parameter $t_0 \simeq 3.23$ $\textrm{ m}^{-1}$ which was used for the evaluation of the required number of resonances $K=28$ (see the formula (\ref{eq:k2})). The Euler characteristic calculated for $K=N=106$ resonances (red dash-dotted line) displays a very long plateau along the expected value $\chi=-2$. The plateau extends over $3~\textrm{m}^{-1}<t<17\textrm{ m}^{-1}$, showing that we actually deal with an excessive number of resonances for the practical evaluation of the Euler characteristic. For comparison we also show in Fig.~2(a) the Euler characteristic calculated from Eq.~(\ref{eq:chi}) using the first $K=28$ resonances (brown dotted line). As expected, the formula (\ref{eq:chi}) shows much worse convergence to the predicted value of $\chi=-2$. Although for the analysis of the convergence of the formula (\ref{eq:chi_2}) (see Eq.~(\ref{eq:k2})) we used the graph's parameters $\mathcal L$ and $t_0$, in real applications we do not need them.
The power of the formula (\ref{eq:chi_2}) stems from the fact that the sequence of the lowest resonances allows for the determination of the Euler characteristic without knowing the physical parameters of the graph. In practice, if a plateau in $X_K(t)$ along a given integer is not observed, it means that the number of resonances used in the calculations is insufficient and ought to be increased. It is important to point out that the formula (\ref{eq:chi_2}) also allows one to determine whether a system is planar. In the analyzed cases of the graph $\Gamma$ and the microwave network, the number of cycles yielded by the formula (\ref{eq:cycles}) is $\beta=1-\chi=3$. In accordance with Kuratowski's theorem \cite{Kuratowski1930}, every non-planar graph contains a subgraph that is a subdivision of $K_5$ (the complete graph on $5$ vertices) or of $K_{3,3}$ (the complete bipartite graph on $3$ and $3$ vertices). These graphs have $6$ and $4$ independent cycles, respectively; therefore, without even seeing the graph or having complete information about the number of vertices and edges, we found out that the graph is planar and the microwave network simulates a planar graph. Let us now analyze the situation of non-planar fully connected (complete in the mathematical terminology) graphs and networks. In Fig.~1(c)~and~(d) we present the non-planar fully connected quantum graph $K_5$, the complete graph on $ |V|=5 $ vertices, characterized by the Euler characteristic $\chi=-5$, and the microwave network with the same topology. The total optical length of the microwave network is $\mathcal L=3.949\pm0.010$ m and the optical length of the shortest edge is $l_{min}=0.202\pm0.001$ m. To perform the measurements of the first $N=132$ eigenresonances the network was connected to the VNA with the flexible microwave cable (see Fig.~1(e)). In Fig.~1(f) we show the modulus of the scattering matrix $|S(\nu)|$ of this network ($|V|=5$) measured in the frequency range $\nu=3.0-4.5$ GHz. The approximation function for the Euler characteristic (\ref{eq:c2}) calculated for the first $K=74$ (green full line) and $K=132$ (red dash-dotted line) experimentally measured resonances of the system, respectively, is shown in Fig.~2(b). The value $K=74$ was estimated from the formula (\ref{eq:k2}) assuming again that $\epsilon = 1/4$ and taking into account the optical size of the $ K_5 $ network $\mathcal{L}t_0 = 9.74 \pm 0.10$. Fig.~2(b) shows that using the $K=74$ resonances measured for the non-planar microwave network in Fig.~1(d) the correct Euler characteristic $\chi=-5$ can be easily evaluated (full green line). In this case a long plateau close to the expected value $\chi=-5$ is seen in the parameter range $2.5 \textrm{ m}^{-1}<t<4 \textrm{ m}^{-1}$. The situation improves even further for the Euler characteristic calculated for $K=N=132$ resonances measured in the frequency range of $0.19-5.12$ GHz (red dash-dotted line). In this case the plateau extends over the range $2.5\textrm{ m}^{-1}<t<7.5\textrm{ m}^{-1}$, clearly indicating that the Euler characteristic can also be properly evaluated using far fewer resonances. In Fig.~2(b) we also show the approximation function for the Euler characteristic $X_K(t)$ calculated from Eq.~(\ref{eq:chi}) using the first $K=74$ resonances (brown dotted line). It is visible that the formula (\ref{eq:chi}) yields results which are far from the predicted value of $\chi=-5$ and can only be used for a much higher number of resonances, $K=1243$ (see the formula \eqref{est4} in the Appendix).
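These topological inferences are simple enough to automate. The following Python sketch (our illustration only; the helper names are ours) encodes the cycle-count planarity test used above, together with the completeness test based on the vertex-count formula \eqref{eq:fully} derived in the next paragraph.
\begin{verbatim}
import math

def cycles(chi):
    """Number of independent cycles, beta = 1 - chi."""
    return 1 - chi

def certainly_planar(chi):
    # A non-planar graph contains a subdivision of K_5 (6 cycles)
    # or K_{3,3} (4 cycles), so beta <= 3 excludes both; False
    # only means the test is inconclusive.
    return cycles(chi) <= 3

def complete_graph_vertices(chi):
    """If chi fits a complete graph, return |V|, else None."""
    v = (3 + math.sqrt(9 - 8*chi)) / 2
    return int(v) if v.is_integer() else None

print(certainly_planar(-2), complete_graph_vertices(-2))  # True 4
print(certainly_planar(-5), complete_graph_vertices(-5))  # False 5
\end{verbatim}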
For the analyzed graph $K_5 $ and the microwave network the number of cycles calculated from the formula (\ref{eq:cycles}) is $\beta=1-\chi=6$. Since the number of cycles is higher than $3$ we cannot directly assess whether the system is planar, since the application of Kuratowski's theorem requires full information about the topology of the investigated graph, which in principle is not available. In such a situation, in general, we can only test whether the graphs and networks analyzed by us are fully connected. Fully connected simple networks and graphs are especially interesting because for them there is an explicit link between the number of vertices $|V|$ of a graph and the Euler characteristic: a complete graph has $|E|=|V|(|V|-1)/2$ edges, so $\chi = |V|-|V|(|V|-1)/2$, which inverts to \begin{equation} |V|= \frac{3+\sqrt{9-8\chi}}{2} \,. \label{eq:fully} \end{equation} This formula holds for both planar and non-planar graphs. Applying the formula (\ref{eq:fully}) in the case of the microwave network $\Gamma$ with the measured Euler characteristic $\chi=-2$ we get $|V|=4$. Since the number of vertices yielded by the formula (\ref{eq:fully}) is an integer, it means that our planar network is also fully connected. In the case of the network $K_5$ with the measured Euler characteristic $\chi=-5$ we directly find that the number of vertices of the network is $|V|=5$, in obvious agreement with the number of vertices of the network. Therefore, in this case the experimental evaluation of the Euler characteristic $\chi$ allowed us to find out that we deal with the fully connected non-planar $K_5$ network. In summary, we showed that the Euler characteristic $\chi$ can be determined (heard) from a finite sequence of the lowest resonances $\nu_1, \ldots, \nu_N$ of a microwave network. We also demonstrated that the spectrum of a simple microwave network can be used to find the number $\beta=1-\chi $ of independent cycles. If $\beta \leq 3$ then the studied system is planar. Moreover, the Euler characteristic $\chi$ allows one to identify whether the networks and graphs are fully connected. In such cases it is possible to determine the number of vertices and edges of the systems. Thus, we clearly showed that the Euler characteristic $\chi$ is a powerful measure of graph and network properties, including topology, complementing in an important way the inverse spectral methods that require an infinite number of eigenenergies or resonances for their application. \section{Acknowledgements} This work was supported in part by the National Science Centre, Poland, Grant No. 2016/23/B/ST2/03979, the Swedish Research Council (Grant D0497301) and the Center for Interdisciplinary Research (ZiF) in Bielefeld in the framework of the cooperation group on {\it Discrete and continuous models in the theory of networks}. \section{Appendix} \subsection{A new formula for the Euler characteristic} The formula \eqref{eq:chi} for the Euler characteristic, derived in \cite{Ku05,Ku06} using the trace formula coming from \cite{Ro11,GuSm01,KuNo8}, is not effective when the number of known eigenvalues is limited. Therefore, we derived a new formula for the Euler characteristic $\chi $ with a better convergence of the series.
The new formula is obtained by applying the distribution $ u(k) $ \cite{Ku05,Ku06} \begin{equation} u(k) := 2\delta(k) +\sum_{k_n>0}(\delta(k-k_n)+\delta(k+k_n)) = \chi \delta(k)+\frac{\mathcal{L}}{\pi} + \frac{1}{\pi}\sum_{p \in P}\ell(\mbox{prim}(p))S(p)\cos(k\ell(p)) \,,\label{eq:uk} \end{equation} where the sum is taken over all periodic orbits $P$ on $\Gamma$, $ \ell(p) $ is the length of the orbit $ p $, $\mbox{prim}(p)$ is the primitive orbit corresponding to $p$, and the coefficients $S(p)$ are products of scattering coefficients along the orbit $p$, to the test function \begin{equation} \varphi (x) = \left\{ \begin{array}{ll} 1 - \cos (2 \pi x), & 0 \leq x \leq 1; \\ 0, & \mbox{otherwise}, \end{array} \right. \end{equation} which is continuous and has a continuous first derivative. The formula (\ref{eq:uk}) alone shows that knowing the spectrum, equivalently, the distribution on the left-hand side of the formula (\ref{eq:uk}), allows one to reconstruct the Euler characteristic $\chi$. The Fourier transform of the test function $\varphi (x)$ is \begin{equation} \sqrt{2\pi} \hat{\varphi} (k) = \int_0^1 (1- \cos (2 \pi x)) e^{-ikx} dx = - i (e^{-ik} -1) \frac{4 \pi^2}{k[k^2 - (2\pi)^2]}, \end{equation} and its real part is given by \begin{equation} \Re \sqrt{2\pi} \hat{\varphi} (k) = - \frac{\sin (k)}{k} \frac{4 \pi^2}{k^2- (2\pi)^2 }. \end{equation} The key point of the proof is to use the relation between the Fourier transforms of the distributions and the test functions \begin{equation} u [ \hat{\varphi}_t (k) ] = \hat{u} [ \varphi_t (x)], \end{equation} where $\varphi_t(x) := \varphi(tx)$ denotes the scaled test function. Applying $ \hat{u} $ to $ \varphi_t (x) $ for \begin{equation} \label{ineqt} 2t \ell_{\rm min} \geq 1 \Leftrightarrow t \geq t_0 = \frac{1}{2\ell_{\rm min}}, \end{equation} where $ \ell_{\rm min} $ is the length of the shortest edge of the graph and therefore $ 2 \ell_{\rm min} $ is the length of the shortest periodic orbit, we get \begin{equation} \hat{u} [\varphi_t(x)] = \chi, \quad t \geq t_0. \end{equation} Calculating $ u[ \hat{\varphi}_t(k)] $ we obtain the new formula \eqref{eq:chi_2} for the Euler characteristic \begin{equation} \label{Euler2} \chi = X(t) \vert_{t \geq t_0}, \quad \; X(t) = 2 + 8 \pi^2 \sum_{k_n \neq 0} \frac{\sin (k_n/t)}{(k_n/t) [(2\pi)^2-(k_n/t)^2] }, \end{equation} improving the formula \eqref{eq:chi}. The possible zeros in the denominator are not dangerous since they cancel with the zeros in the numerator. \subsection{The error estimate for the new formula} We are interested in estimating how many resonances are needed to determine the Euler characteristic $ \chi $, in other words how many terms in the series are enough to evaluate $ X(t) $. Since the Euler characteristic $ \chi $ takes integer values it is enough to require that the error $\epsilon$ is less than $ 1/2$: \begin{equation} \epsilon = \left . \left| X(t) -X_K(t)\right| \right|_{t=t_0} = \left|8\pi^2\sum_{n=K+1}^{\infty}\frac{\sin(k_n/t_0)}{(k_n/t_0)[(2\pi)^2-(k_n/t_0)^2]}\right| , \label{eq:c2chi} \end{equation} where \begin{equation} X_K(t) = 2 + 8\pi^2\sum_{n=1}^K\frac{\sin(k_n/t)}{(k_n/t)[(2\pi)^2-(k_n/t)^2]} . \end{equation} Our claim is that it is enough to take \begin{equation} \label{eq:k2a} K > |V|-1 + 2\mathcal L t_0 \left [1-\exp\left (\frac{-\epsilon \pi}{\mathcal L t_0}\right )\right ]^{-1/2} , \end{equation} where $\mathcal L t_0 = \frac{\mathcal L}{2l_{min}}$. For $\frac{\mathcal L}{2l_{min}}\gg 1$ the condition (\ref{eq:k2a}) can be approximated by \begin{equation} \label{eq:k2app} K> |V|-1 + \frac{2}{\sqrt{\epsilon \pi}} \left ( \frac{\mathcal L}{2l_{min}} \right )^{3/2} .
\end{equation} To prove (\ref{eq:k2a}) we first assume that $ K $ is sufficiently large to guarantee that the denominator in \eqref{eq:c2chi} is negative, i.e., $k_{K+1} > 2 \pi t_0$. Taking into account the elementary lower estimate for the eigenvalues \begin{equation} \label{estk} k_n^2 \geq \big( \frac{\pi}{\mathcal L} \big)^2 (n+1-|V|)^2, \end{equation} where $ |V| $ is the number of vertices, we arrive at the following sufficient condition for the denominator to be negative: \begin{equation} \label{estk2} K > |V|-1 + \frac{\mathcal L}{\ell_{\rm min}}. \end{equation} Then the series can be estimated as \begin{equation} \label{estk3} \begin{array}{ccl} \displaystyle \left\vert X(t_0)- X_K (t_0) \right\vert & \leq & \displaystyle 8\pi^2\sum_{n=K+1}^{\infty}\frac{\left|\sin(k_n/t_0)\right|}{(k_n/t_0)[(k_n/t_0)^2-(2\pi)^2]} \\[5mm] & \leq & \displaystyle 8 \frac{(\mathcal L t_0)^3}{\pi} \sum_{n = K+1}^\infty \frac{1}{(n+1-|V|)[(n+1-|V|)^2 - 4 \mathcal L^2 t_0^2]} \\[5mm] & \leq & \displaystyle 8 \frac{(\mathcal L t_0)^3}{\pi} \int_K^\infty \frac{dx}{(x+1-|V|)[(x+1-|V|)^2 - 4 \mathcal L^2 t_0^2]} \\[5mm] & = & \displaystyle \frac{\mathcal L t_0}{\pi} \log\frac{(K+1-|V|)^2 }{(K+1-|V|)^2 - 4 \mathcal L^2 t_0^2} , \end{array} \end{equation} where we again used \eqref{estk} and replaced the series by an integral in the last step. Requiring that the error is less than $\epsilon$ leads to \eqref{eq:k2a}. \subsection{The error estimate for the original formula} Using similar arguments we may derive a rigorous estimate for the number of resonances $K$ required in the case of the formula \eqref{eq:chi}: \begin{equation} \label{est4} K > |V| -1 + \frac{32 \mathcal L^2 }{\epsilon \pi^2} t_0^2 \equiv |V| -1 + \frac{32}{\epsilon \pi^2} \Big( \frac{\mathcal L }{2\ell_{\rm min}} \Big)^2 . \end{equation} Since the ratio $\frac{\mathcal L }{2\ell_{\rm min}}$ in the formula (\ref{est4}) is raised to the second power, the above estimate for $\frac{\mathcal L}{2l_{min}}\gg 1$ is much worse than \eqref{eq:k2app}, which clearly explains why the old formula \eqref{eq:chi} for the Euler characteristic is ineffective in real-world applications.
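The difference in convergence can also be checked numerically. The following Python sketch (a toy verification of ours, not part of the experiment) truncates both series for the simplest graph, a single edge of length $L=1$ with $|V|=2$, $|E|=1$, and hence $\chi=1$, whose Laplace spectrum is $k_n=\pi n/L$:
\begin{verbatim}
import numpy as np

L, chi = 1.0, 1                     # single edge: |V| = 2, |E| = 1
k = np.pi * np.arange(1, 5001) / L  # its spectrum, k_n = pi n / L
t = 0.6                             # any t >= t_0 = 1/(2L) = 0.5

def X_old(t, k):                    # truncation of the original series
    x = k / t
    return 2 + 2*np.pi * np.sum(np.cos(x/2) * (np.sin(x/4)/(x/4))**2)

def X_new(t, k):                    # truncation of the improved series
    x = k / t
    return 2 + 8*np.pi**2 * np.sum(np.sin(x)/(x*((2*np.pi)**2 - x**2)))

for K in (50, 500, 5000):           # both columns should approach chi = 1,
    print(K, X_old(t, k[:K]), X_new(t, k[:K]))   # the new one much faster
\end{verbatim}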
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} We study deletion-correcting codes over the binary alphabet. Specifically, we are interested in codes $C \subset \{0,1\}^n$ such that if a codeword $x \in C$ is corrupted by deleting up to $k$ bits to obtain a subsequence $y \in \{ 0,1\}^{n-k}$, then one can reconstruct $x$ from $y$. Crucially, the locations of the deleted bits are unknown. The parameter $k$ bounds the maximum number of deletions the code is designed to correct. The $k$-deletion correcting property is equivalent to the property that the length of the longest common subsequence between any two distinct codewords is less than $n-k$. The goal is to find codes of as large a size as possible that can correct up to $k$ deletions. For the case of fixed $k$ and growing $n$, which is the regime of interest in this paper, the size of the optimal $k$-deletion correcting code, say $D(n,k)$, satisfies \[ \Omega_k\Bigl(\frac{2^n}{n^{2k}} \Bigr) \le D(n,k) \le O_k\Bigl(\frac{2^n}{n^k}\Bigr) \ , \] where $O_k(\cdot)$ and $\Omega_k(\cdot)$ suppress factors that depend only on $k$. The lower bound follows by a simple (but inefficient) greedy construction of picking codewords no two of which share a common subsequence of length $n-k$. The upper bound follows from a packing argument since the length $n-k$ subsequences of various codewords have to be distinct, and a typical string has $\Omega_k(n^k)$ such subsequences. Defining the redundancy of a code $C$ to be $n - \log_2 |C|$ (since $n$ bits are transmitted to communicate one of $|C|$ possible messages), the optimal redundancy of $k$-deletion codes is between $k \log_2 n$ and $2k \log_2 n$ (ignoring additive constants depending on $k$). For the single deletion case, the Varshamov-Tenengolts (VT) construction~\cite{VarshamovTenengolts65} is an explicit code of asymptotically optimal size $\Theta(2^n/n)$ as shown by Levenshtein~\cite{Levenshtein66} over 50 years ago. This code consists of all strings $x \in \{0,1\}^n$ for which $f_1(x) := \sum_{i=1}^n i x_i \equiv 0 \pmod{n+1}$. The next simplest case of two deletions, however, already turns out to be much more challenging, and attempts to recover from two deletions by augmenting the VT construction with various natural additional check equations have not met with success. For $k \ge 2$, closing the gap between the lower and upper bounds on redundancy, and finding explicit constructions that come close to the existential bound (i.e., with redundancy $\approx 2k\log_2 n$), are two central challenges that still remain open. This work considers the latter question for the case $k=2$. By an explicit construction, we mean a code of length $n$ that has a deterministic $\text{poly}(n)$ time encoding algorithm. Until recently, the known constructions of $k$-deletion codes even for $k=2$ had redundancy $\Omega(n)$~\cite{HF02,PGFC12}. A construction with redundancy about $O(\sqrt{n})$ is implicit in the work~\cite{GW-ieeetit}, which considered high-rate codes for correcting a small \emph{fraction} of deletions. Explicit constructions of size $2^n/n^{O(1)}$, i.e., with $O(\log n)$ redundancy, were obtained only recently. Specifically, $k$-deletion codes of redundancy $O(k^2 \log k \log n)$ were constructed in \cite{BGZ-soda}. Following this, a sequence of works starting with \cite{Belazzougui15} studied $k$-deletion codes in the framework of deterministic document exchange protocols, leading to codes with redundancy $O(k \log^2 (n/k))$~\cite{CJLW18,Haeupler19} and even $O(k \log n)$ for small $k$~\cite{CJLW18}.
(In the document exchange problem, Alice holds $x \in \{0,1\}^n$ and Bob holds a subsequence $y$ of $x$ with $k$ deletions, and Alice must send a \emph{short} ``sketch" $s(x)$ to Bob that will enable Bob to recover $x$. When $s(x)$ is a deterministic function of $x$, such a protocol is closely connected to $k$-deletion codes with redundancy roughly equal to the length of the sketch; see Section~\ref{subsec:reduce-to-sketch} for more on this connection.) However, these constructions use hashing-based recursive approaches and other ideas, and even for $k=2$ will have redundancy $C \log n$ for a rather large constant $C$. For the case of two deletions specifically, two recent works constructed codes with redundancy $\approx 8 \log_2 n$~\cite{GS19} and $\approx 7 \log_2 n$~\cite{SB19}. The construction in \cite{GS19} followed the rough approach in \cite{BGZ-soda} based on hashing the pattern of occurrences of certain substrings in the string, and also considered several cases based on the identity and location within runs of the two deleted bits. The construction in \cite{SB19} is more explicit and can be viewed as a higher-dimensional version of the VT construction with certain modular check sums constraining indicator vectors of the string. \medskip \noindent \textbf{Our results.} In this work, we present an explicit construction of $2$-deletion codes in the mold of VT codes with redundancy close to the existential $4 \log_2 n$ bound. In addition to the position-weighted VT-sketch $f_1(x)$ of the string $x \in \{0,1\}^n$ to be recovered, we also include a quadratically-weighted VT-like sketch as well as a sketch based on the run-number sequence of the string. These are the functions $f_1(x), f_2(x)$ and $f_1^r(x)$ defined in Equations \eqref{eq:f1}-\eqref{eq:f1r}. The goal is to recover $x$ from any subsequence $y$ formed by deleting two bits and the knowledge of these functions. If the two deleted bits are both \textsf{0}'s or both \textsf{1}'s, $f_1(x)$ and $f_2(x)$ together with $y$ suffice to reconstruct $x$. When one \textsf{0}\ and one \textsf{1}\ are deleted, we bring the run-number sequence into the picture. The two deletions alter the number of runs by $0$, $2$ or $4$. When the run count changes by $0$ or $4$, the values $f_1(x)$ and $f_1^r(x)$ together with $y$ suffice to reconstruct $x$. The remaining case, when one \textsf{0}\ and one \textsf{1}\ are deleted and the number of runs decreases by $2$, turns out to be a lot harder. In this situation, we prove that $f_1(x), f_2(x)$ and $f_1^r(x)$ together localize the deletions to a short stretch of $x$ of length $O(\log n)$, assuming that $x$ has a certain regularity property (namely that $x$ contains the substrings $00$ and $11$ often enough). To finish the recovery, we employ a less efficient sketch of $O(\log \log n)$ bits that enables recovery from two deletions in strings of length $O(\log n)$. To satisfy the regularity assumption, we encode messages into regular strings with negligible rate loss. Our final construction combining these ingredients gives $2$-deletion correcting codes with a redundancy matching the best known existential bound of $4 \log_2 n$ up to lower order terms. \begin{theorem} \label{thm:unique-decoding-intro} There is an explicit (efficiently encodable) $2$-deletion correcting code $C \subseteq \{0,1\}^n$ with redundancy $4 \log_2 n + O(\log \log n)$.
\end{theorem} As a warm-up to the above construction, we also present a new code to tackle the single deletion case based on the run-length sequence (specifically the sketch $f_1^r(x)$ defined in \eqref{eq:f1r}). Though this code is slightly more redundant than the VT code, by including a quadratic version of the run-based sketch (namely $f_2^r(x)$ defined in \eqref{eq:f12r}) we also obtain a $2$-deletion code with redundancy smaller than the existential $4 \log_2 n$ bound, at the expense of pinning down the codeword to one of two possibilities. \begin{theorem} \label{thm:list-decoding-intro} There is an explicit (efficiently encodable) code $C \subseteq \{0,1\}^n$ with redundancy $3 \log_2 n + O(\log \log n)$ that can be list decoded from two deletions with a list of size $2$. \end{theorem} For the decoding, we can of course recover the two missing bits in quadratic time by trying all possible placements, only one of which (or at most two of which, in the case of Theorem~\ref{thm:list-decoding-intro}) will match the sketches. However, we can in fact perform the decoding in linear time. Once we find a single placement of the bits that yields the correct value for the VT sketch $f_1(x)$, the algorithm consists of sweeping each of the bits either left or right across the string just once, and the updates to the sketches can be maintained online in $O(1)$ time per move (on the RAM model where operations on $O(\log n)$-bit integers take constant time). For simplicity, we do not elaborate on the linear complexity decoding further, but an interested reader can verify this based on the details of our (algorithmic) proof of the $2$-deletion correction property. It is well known that a code capable of correcting $k$ deletions is capable of correcting any combination of up to a total of $k$ insertions and deletions~\cite{Levenshtein66}. However, this doesn't necessarily preserve the efficiency of the decoding algorithm. We have not explored decoding strategies from two insertions for our codes (of course the naive quadratic time approach still applies). The case of insertion/deletion combinations is more subtle for list decoding (see for instance the recent work~\cite{GHS-stoc20}), and we did not investigate how our list-decodable codes behave under insertions. Our work raises the intriguing possibility that it might be possible to achieve a redundancy smaller than $4 \log_2 n$ for $2$-deletion codes, which would be quite exciting (and perhaps surprising) progress on this classical question. Another natural question is whether our methods can be extended to the case of more deletions. This appears rather difficult already for three deletions due to the many more combinations in which bits can be inserted. \medskip\noindent\textbf{Outline.} In Section~\ref{sec:prelims}, we reduce the task of constructing deletion-correcting codes to finding good short sketch functions that together with any subsequence enable recovery of the original string, and also describe the sketch functions we will use in our constructions. As a warm-up, in Section~\ref{sec:single-deletion} we present our run-sequence based construction of a single-deletion code, which also lets us establish the framework of moving the to-be-inserted bit(s) that we use to analyze all our constructions. We then present our construction of $2$-deletion codes for list decoding with size-two lists in Section~\ref{sec:list-decode}.
Finally, we give the $2$-deletion code establishing our main result, Theorem~\ref{thm:unique-decoding-intro}, in Section~\ref{sec:unique-decode}. \vspace{-1ex} \section{Preliminaries} \label{sec:prelims} In our basic setup, we have an unknown string $x \in \{ 0,1\}^n$ which is corrupted by deleting up to $k$ bits to obtain a subsequence $y \in \{ 0,1\}^{n-k}$. The goal is to reconstruct $x$ from $y$. The locations of the deleted bits are unknown. Our focus in this work is on the case $k=2$, though we will consider the single deletion case as a warm-up to our construction for tackling two deletions. \vspace{-1ex} \subsection{Reduction to recovery from known sketches} \label{subsec:reduce-to-sketch} If an arbitrary $x$ is allowed, this is clearly an impossible task, and we are interested in \emph{codes} $C$, which are carefully constructed subsets of $\{0,1\}^n$, such that under the guarantee that $x \in C$, the reconstruction is always possible. The goal is to maximize the size of $C$. There are many possible ways to construct a set $C$, but in this paper we are interested in the case when there are one or more integer-valued functions $(f_i)_{i=1}^t$ such that knowing the value of $f_i(x)$ for $1 \leq i \leq t$ (which we can think of as \emph{sketches} or deterministic hashes of $x$) and the subsequence $y$, it is possible to reconstruct $x$. If there are only $T$ possible values of $(f_i(x))_{i=1}^t$ then this implies the existence of a code $C$ of size at least $2^n/T$ for which reconstruction is possible. Indeed, one can take $C$ to be the pre-image of the most common value for these outputs. However, an explicit description of the strings attaining this most frequent value is necessary in order to construct and efficiently encode into the code $C$. Even for modestly complex functions $f_i(\cdot)$, this can be difficult. Instead, below we give an alternate (standard) reduction of the code construction problem to recovering the string $x$ from its (known) sketches and the subsequence $y$. The idea is simply to encode the sketches, which are much shorter, by a known but less efficient $k$-deletion correcting code. We can then encode a message $x$ by appending these encoded sketches to $x$ itself. The formal proof is omitted as it implicitly appears in several previous works, including \cite{BGZ-soda}. \begin{lemma} \label{lem:reduce-to-sketch} Fix an integer $k\ge 1$. Let $s : \{0,1\}^n \to \{0,1\}^{\lceil c \log n \rceil+O(1)}$ be an efficiently computable function. Suppose that $x \in \{0,1\}^n$ can be recovered from $s(x)$ and $y$ for any subsequence $y \in\{0,1\}^{n-k}$ of $x$. Then there is an efficiently encodable map $E$ mapping strings of length $n$ to strings of length $N \le n + c\log n + O_k(\log \log n)$ such that the image of $E$ is a $k$-deletion correcting code.
In other words, we have a length-$N$ code $C \subset \{0,1\}^N$ of size $2^{N}/N^{c+o(1)}$ that can correct $k$ deletions and into which we can efficiently encode. \end{lemma} Given the above lemma, we turn to the definition of suitable sketches of total length $c \log n$, for as small a $c$ as possible, that help with recovery from (two) deletions. For our construction, the recovery will be guaranteed only for certain ``regular'' $x$, which constitute most of the strings but not all of them. So in order to obtain deletion codes out of our construction we will also need to encode into regular strings, which we will handle separately on top of Lemma~\ref{lem:reduce-to-sketch}. \vspace{-1ex} \subsection{Position and run based sketches} For a binary string $x \in \{ 0,1\}^n$, with $x_1$ as the first bit, we define the {\em run string} $r$ associated with it as follows. To make the arguments slightly more uniform, avoiding special cases at the beginning and end of $x$, we artificially insert a zero before $x$ and add a one at the end of $x$. Thus we have $x_0=0$, $x_{n+1}=1$, $r_0=0$, and set $r_{i+1}=r_i$ if $x_{i+1}=x_i$ and $r_{i+1}=r_i+1$ otherwise, for $0 \leq i\leq n$. The quantity $r_i$ is referred to as the \emph{rank} (or run number) of the $i$'th bit of $x$. Note that with the inclusion of $x_0$ and $x_{n+1}$ at either end of a subsequence $y$ of $x$, the insertion of a bit into $y$ creates either no run or exactly two runs, even if the insertion happens at either end (just to the right of $x_0$ or just to the left of $x_{n+1}$). As an example, if $x=001000111010$ then we first add the extra bits, obtaining $(0)001000111010(1)$, which produces the run string $(0)001222333456(7)$. Clearly there is a one-to-one correspondence between run strings and binary strings. Given a string $x$ we define some ``sketch'' functions of interest. \begin{align} f_1(x)& = \sum_{i=1}^{n} i \cdot x_i \label{eq:f1} \\ f_2(x) &= \sum_{i=1}^{n} {i \choose 2} \cdot x_i \label{eq:f2} \\ f^r_1(x)& = \sum_{i=1}^{n+1} r_i \label{eq:f1r} \\ f^r_2(x) & = \sum_{i=1}^{n+1} {r_i \choose 2} \label{eq:f12r} \end{align} Note that we include $r_{n+1}$ in the sums but not $x_{n+1}$. This is of no great consequence but simply convenient. It is easy to see that $ 0 \leq f_1(x) \leq n(n+1)/2$ and thus it seems like we would need $\Omega(n^2)$ values for $f_1(x)$, but a moment's reflection indicates that we can do significantly better. Suppose we are in the one-deletion case and we are given $y$ and we try to reconstruct $x$. It is easy to see that $f_1(y) \leq f_1(x) \leq f_1(y)+n$. Thus it is sufficient to give the value of $f_1(x)$ modulo $n+1$ and then use $y$ to reconstruct $f_1(x)$ over the integers. In the case of two deletions it is sufficient to specify the same number modulo $2n+1$. We will not be particularly careful with constant factors in the \emph{size} of the code. Let us simply note that it is sufficient to specify $f_2(x)$ and $f_2^r(x)$ modulo a number that is $O(n^2)$. For $f_1^r(x)$ the corresponding number is $O(n)$. This information, together with $y$, makes it possible to reconstruct these numbers over the integers. \medskip\noindent {\bf Constant-sized sketches.} Finally, it is at several points convenient to know the total number of runs in $x$ as well as $\sum_{i=1}^n x_i$, the number of ones in $x$. It is sufficient to specify these quantities modulo a number that is $O(1)$. These two numbers are not needed in all the reconstruction algorithms but we assume they are available whenever needed.
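For concreteness, here is a short Python sketch (our illustration only, not part of the construction) computing the run string and the four sketches on the example string above:
\begin{verbatim}
def run_string(x):
    """Ranks r_0,...,r_{n+1} of x, with padding x_0 = 0, x_{n+1} = 1."""
    padded = [0] + list(x) + [1]
    r = [0]
    for i in range(1, len(padded)):
        r.append(r[-1] + (padded[i] != padded[i-1]))
    return r

def sketches(x):
    n, r = len(x), run_string(x)
    f1  = sum(i * x[i-1] for i in range(1, n+1))
    f2  = sum(i*(i-1)//2 * x[i-1] for i in range(1, n+1))
    f1r = sum(r[1:])                      # includes r_{n+1}, not r_0
    f2r = sum(v*(v-1)//2 for v in r[1:])
    return f1, f2, f1r, f2r

x = [0,0,1,0,0,0,1,1,1,0,1,0]   # the example string from the text
print(run_string(x))            # [0,0,0,1,2,2,2,3,3,3,4,5,6,7]
print(sketches(x))              # the four sketch values for x
\end{verbatim}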
\vspace{-1ex} \section{The single deletion case and moving bits} \label{sec:single-deletion} \vspace{-1ex} We begin by developing our ideas in the simpler context of recovery from one deleted bit. Given $y$, it is often convenient to insert the missing bit(s) in some position(s) in $y$, possibly giving the correct value for one of the functions, and then see what changes can be made while maintaining the already established equality. In particular, if we can make the output of another function monotone under these changes it follows that we have a unique placement of the missing bits. As a simple example let us analyze the (well-known) single deletion case. We think of the string $x$ written from left to right starting with $x_1$. If $x$ is formed by inserting a 0 in position $i$ then $x_j=y_j$ for $1 \leq j < i$, $x_i=0$ while $x_j=y_{j-1}$ for $i< j \leq n$. Moving the inserted bit one step to the left means forming a new string $x'$ with $x'_j=x_j$ for $j \not\in \{ i-1,i\}$ while $x'_{i-1}=0$ and $x'_{i}=x_{i-1}$. Moving the bit to the right is defined analogously. We find it easier to use $x$ to denote a dynamic string and hence we mostly abstain from using $x'$. The following is the basis of the single deletion correcting property of the VT code, which is defined as the set of strings $x$ with $f_1(x) \equiv 0 \pmod{n+1}$. \begin{theorem}\label{thm:1sum} In the case of one deletion, the value of $f_1(x)$ modulo $n+1$ jointly with $y$ determines $x$ uniquely. \end{theorem} \begin{proof} Let us first insert a bit with the value \textsf{0}\ at the very end of $y$, i.e., setting $x_i=y_i$ for $1 \leq i \leq n-1$ and $x_n=0$. Clearly in this case we have $f_1(x)=f_1(y)$. Now keeping this bit as a \textsf{0}\ and moving it left, the value of $f_1(x)$ increases by one each time the moving \textsf{0}\ passes a \textsf{1}. When the moving \textsf{0}\ passes another \textsf{0}, the value of $f_1(x)$ does not change, but this is natural as $x$ does not change; only the information which of its bits come from $y$ and where the inserted zero is placed changes. Once the moving \textsf{0}\ has moved all the way to become $x_1$ we change its value to \textsf{1}. Also this increases the value of $f_1(x)$ by one. Finally moving this \textsf{1}\ to the right, each time the \textsf{1}\ passes a \textsf{0}, $f_1(x)$ increases by one and finally when the \textsf{1}\ is inserted as $x_n$ the value of $f_1(x)$ is $f_1(y)+n$. As each value of $f_1(x)$ gives a unique string $x$ and we have considered all possibilities of inserting a bit, the theorem follows. \end{proof} We now give a different, almost as good, construction using the run-based function, to develop some ideas that will be useful later. \begin{theorem}\label{thm:1sumr} In the case of one deletion the value of $f_1^r(x)$ modulo $2n+2$ jointly with $y$ determines $x$ uniquely. \end{theorem} \begin{proof} Let $y$ be the string obtained by deleting from $x_0x_1 \dots x_{n+1}$ a bit other than the artificial bits $x_0=0$ and $x_{n+1}=1$ at either end. When we insert a bit into $y$ we either create a new run or not. If the bit is inserted without creating a new run we have $f_1^r(x)=f_1^r(y)+r$ if and only if the bit is inserted in the $r$'th run of $y$. On the other hand, if the inserted bit creates a run, the smallest increase in $f_1^r(y)$ is obtained by inserting this bit just to the left of $x_{n+1}$ (the last bit of $y$).
Note that since $x_{n+1}=1$, for this to happen we must have $y_{n-1}=1$ and the inserted bit must be a \textsf{0}. In this case we have $f_1^r(x)=f_1^r(y)+r_s+3$ where $r_s$ is the rank of $x_{n+1}$ in $y$ (the rank of the inserted bit is $r_s+1$ and the rank of $x_{n+1}$ increases from $r_s$ to $r_s+2$). Note that even this minimum increase in the run-creating case is strictly larger than the maximum increase possible when we do not create a new run, so there can be no clash in the value of $f_1^r(x)$ between these two cases. When we move this inserted bit left to the next position where it can be placed between two equal bits, we get the next possible placement for a run-creating bit. Suppose we need to move the bit past $t$ bits (which must be alternating); then each of these elements increases its rank by 2 while the rank of the moving element decreases by $t-1$. Thus there is a net increase of $f_1^r(x)$ by $t+1$ and in particular there is a strict increase. The maximal total increase over $f_1^r(y)$ is achieved when the moving bit is placed between the first two equal bits. If this happens between $y_i$ and $y_{i+1}$ then $y_i$ has rank $i$, the inserted bit gets rank $i+1$ and we have $n-i$ bits that have increased their ranks by 2. We thus have $f_1^r(x)=f_1^r(y)+i+1+2(n-i) \leq f_1^r(y)+1+2n$. Thus knowing $y$ and $f_1^r(x) \bmod (2n+2)$ suffices to reconstruct the integer value $f_1^r(x)$. As we have considered all possibilities of inserting a bit, each of which gives a different value of $f_1^r(x)$, the theorem follows. \end{proof} \begin{obs}\label{obs:increase} When we move a run-creating bit left, the value of $r_i$ is non-decreasing for each $i$. In fact, the rank increases by one for each position passed by the moving bit (the rank of the bit we pass over actually increases by two, but as that bit also moves one position to the right and the bits are alternating, the increase is one compared to the element previously in that position). Therefore the moving bit gets a rank that is one more than the rank of the element previously in the same position. \end{obs} \vspace{-1ex} \section{Correcting two deletions with size-$2$ lists} \label{sec:list-decode} \vspace{-1ex} In this section, we move to the case of two deletions, so $y$ is a subsequence with two bits from $x$ deleted. We will prove that knowing $f_1^r(x)$ and $f_2^r(x)$ allows us to list decode the subsequence $y$ with list size 2, i.e., pin down $x$ to one of two possible strings. We start with a very simple lemma that will spare us some calculations. We skip the proof, which is a simple consequence of convexity. \begin{lemma} \label{lemma:convex} Let $a_i$ and $a_i'$ be two sequences of non-negative integers such that $\sum a_i = \sum a_i'$ and there is a value $t$ such that for all $i$ such that $a_i < a_i'$ we have $a_i' \leq t$ and for all $i$ such that $a_i > a_i'$ we have $a_i' \geq t$.
Then, unless the two sequences are equal, $\sum a_i(a_i-1) > \sum a_i'(a_i'-1)$. \end{lemma} Returning to the main theme of the paper, we first give some situations where we have unique decodability. Note that when we insert two bits, we create either zero, two, or four runs. In two of these cases, it is easy to identify $x$ uniquely. \begin{lemma}\label{lemma:04run} Suppose we add zero or four new runs when inserting the two bits. Then $f^r_1(x)$ and $f^r_2(x)$ jointly with $y$ determine $x$ uniquely. \end{lemma} Note that as we assume that we know the total number of runs, we can tell when the condition of the lemma holds. \begin{proof} Suppose first that neither of the two inserted bits creates a new run. If the two bits are inserted into runs $r_1$ and $r_2$ respectively, with $r_1 \leq r_2$, then given the value of $r_1+r_2$ and $r_1(r_1-1)+r_2(r_2-1)$ it is easy to reconstruct $r_1$ and $r_2$. In the case when four runs are created, place the bits as close to each other as possible in a way that results in the correct value of $f_1^r(x)$. As two adjacent bits cannot both create two new runs, there is some separation between the bits. As the value of $f_1^r$ is strictly increasing when run-creating bits move left, all other possible insertion locations yielding the same value for $f_1^r(x)$ are obtained by moving the leftmost bit left and the rightmost bit right. It follows from Lemma~\ref{lemma:convex} and Observation~\ref{obs:increase} that the value of $f_2^r(\cdot)$ is strictly decreasing during such moves. This implies that the configuration obtaining the correct value for $f_2^r(x)$ is unique and Lemma~\ref{lemma:04run} follows. \end{proof} In the case when only two new runs are created by the re-insertion of the deleted bits, we do sometimes get some ambiguity. However, we can pin down the string $x$ to one of two possibilities. \begin{lemma}\label{lemma:2run} Suppose the insertion of two bits in $y$ to recover $x$ adds two new runs. Then given $y$, there are at most two values of $x$ that can have the same values of $f^r_1(x)$ and $f^r_2(x)$. \end{lemma} \begin{proof} We claim that we can view the process of inserting the two missing bits as first adding one bit causing two new runs and then adding the second bit causing no new run. It is easy to see that this is possible unless the two bits are inserted next to each other and surrounded by two unequal bits. From now on we use bold font to indicate inserted bits, and thus the situation we describe is given, for example, by 00{\bf 10}1. Independently of the order in which we insert the bits, it is the second bit that creates the extra runs. In this case, however, we can create the same string as 0{\bf 01}01 or 001{\bf 01}, by only changing the identity of the inserted bits. In either of these cases it is possible to first insert a bit creating two new runs. This works in general --- whenever the two added bits are part of a sequence of alternating bits we make sure that the two added bits are at the beginning or end of this sequence. The process is thus that we start with the string $y$ and add a bit $b_0$ causing two new runs. We then compute the rank at which the bit $b_1$ must be added to get the correct value of $f_1^r(x)$. If this rank is smaller than 0 or larger than the maximal rank of the created string, then the placement of $b_0$ was impossible. Otherwise, we have a unique run into which to insert $b_1$. Let us first place $b_0$ as far right as possible, giving a possible placement of $b_1$.
Let us see what happens when we move $b_0$ from any position to the first position to its left where it also creates two runs. We call this an {\em elementary move}. The bit $b_0$ starts between two equal bits and after the elementary move, it shifts to the first place to its left where it can again be placed between two adjacent equal bits. We note that the value of the bit $b_0$ might change from $0$ to $1$ after the elementary move. Suppose $b_0$ passes $t$ alternating bits in such an elementary move. This decreases its rank by $t-1$ and increases the rank of each of the $t$ passed elements by $2$. To compensate for this net increase in total rank of $t+1$, $b_1$ must decrease its rank by $t+1$ to maintain the value of $f_1^r(x)$. To achieve this, $b_1$ must move past at least $t$ bits from $y$ (it might pass $b_0$), so it always passes at least as many bits in $y$ as does $b_0$. Thus if $b_1$ starts to the left of $b_0$ it remains to the left. If it starts to the right, it might overtake $b_0$ once, but after this it remains to the left of $b_0$. Suppose the string before an elementary move is $x$ and after the same move it is $x'$. We have three cases. \begin{enumerate} \vspace{-1ex} \itemsep=0ex \item $b_1$ is to the left of $b_0$ in both $x$ and $x'$. \item $b_1$ is to the right of $b_0$ in both $x$ and $x'$. \item $b_1$ is to the right of $b_0$ in $x$ but to the left in $x'$. \end{enumerate} We have the following claim. \begin{claim}\label{claim:monotone} In Case 1, $f_2^r(x) >f_2^r(x')$, and in Case 2, $f_2^r(x) <f_2^r(x')$. \end{claim} Before we establish this claim let us see that it finishes the proof of Lemma~\ref{lemma:2run}. The claim says that $f_2^r$ is strictly monotone before and after the take-over. This implies that each fixed value of $f_2^r(x)$ can be achieved at most once before and once after the take-over, for a total of at most two times. We now establish Claim~\ref{claim:monotone}. In the two cases under consideration $b_1$ does not pass $b_0$. There can, however, be some positions passed by both bits, but if we move the leftmost bit first, the bits do not really interact. Let us first see what happens to the ranks of old elements. Clearly the ranks and positions of elements not passed by either moving element remain the same, and we focus on the more interesting elements. \begin{itemize} \itemsep=0ex \vspace{-1ex} \item Elements passed by only $b_0$ move one position to the right and increase their rank by two. Thus if one compares their rank to that of the element previously in the same position, it increases by at least $1$. \item Elements passed by only $b_1$ move one position to the right and keep the same rank. Thus compared to the element previously in the same position, their rank remains the same or decreases by $1$. \item Elements passed by both moving bits move two positions to the right and increase their rank by two. As $b_0$ moves past alternating elements, their new rank is equal to the rank of the old element in the same position. \item $b_0$ gets a rank that is one larger than the old element in the same position. \item $b_1$ gets a rank equal to that of the old element in the same position, unless it is in a position passed by $b_0$, in which case its rank is greater than the rank of the element previously in the same position. \end{itemize} We see that all positions with decreased ranks are passed only by $b_1$.
From this it follows that, in Case 1, all positions with decreased rank are to the left of all positions with increased rank. The claim in this case now follows from Lemma~\ref{lemma:convex}. Similarly, in Case 2, all positions with decreased rank are to the right of all positions with increased rank, and we conclude that the claim is once again true. Appealing to Lemma~\ref{lemma:convex}, this completes the proof of Lemma~\ref{lemma:2run}. \end{proof} We summarize the two lemmas into a theorem. \begin{theorem} Let $y \in \{0,1\}^{n-2}$. There can be at most two strings $x \in \{0,1\}^n$ that have $y$ as a subsequence and which share a common value of $f_1^r(x)$, $f_2^r(x)$, and total number of runs. \end{theorem} Using the connection outlined in Lemma~\ref{lem:reduce-to-sketch} between recovery from known sketches $f_1^r(x)$ and $f_2^r(x)$ and deletion-correcting encodings, and the fact that $f_1^r$ can be specified modulo $O(n)$, $f_2^r$ modulo $O(n^2)$, and the number of runs modulo $O(1)$, we have our result on list-decodable codes for two deletions. We remind the reader that the existence of such list-decodable codes of size asymptotically bigger than $2^n/n^4$ was not known prior to our work. \begin{theorem} \label{thm:list2} There is a 2-deletion code of size $\Omega ( 2^n/n^{3} )$ that is list-decodable with list size 2. \end{theorem} Since our sketches $f_1^r(x)$ and $f_2^r(x)$ are simple explicit functions, by Lemma~\ref{lem:reduce-to-sketch} we can get explicit codes with $O(\log \log n)$ extra redundant bits. \begin{corollary} \label{cor:list2} There is an explicit (efficiently encodable) 2-deletion code of size $\Omega ( 2^n n^{-3} (\log n)^{-O(1)})$ that is list-decodable with list size 2. \end{corollary} \medskip \noindent {\bf Remark.} Let us give an example to show that we do have list size two in many situations. Take any numbers $t_0$ and $t$ and consider the following two ways of inserting two bits. \begin{itemize} \vspace{-1ex} \itemsep=0ex \item Insert a bit not creating a run in position $t_0-3t$ and one creating a run in position $t_0-t$. \item Insert a bit not creating a run in position $t_0+3t$ and one creating a run in position $t_0+t$. \end{itemize} We make an approximate calculation of the difference in $f_1^r$ and $f_2^r$ between these two ways of inserting the bits. Let us for simplicity assume that an element in position $i$ has rank exactly $i/2$ and ignore the difference between $r_i (r_i-1)/2$ and $r_i^2/2$ in the definition of $f_2^r$. As the ranks of all existing elements to the left of position $t_0-t$ and to the right of position $t_0+t$ are the same after the insertions, we ignore them when we calculate the values of $f_1^r(x)$ and $f_2^r(x)$, and we sum up only terms that are different in the two sums. \begin{itemize} \vspace{-1ex} \itemsep=0ex \item In the first case the inserted elements get ranks $(t_0-3t)/2$ and $(t_0-t)/2$, respectively. All existing elements between positions $t_0-t$ and $t_0+t$ get their ranks increased by 2. This implies that the increase in $f_1^r(x)$ is roughly $$ (t_0-3t)/2+(t_0-t)/2+4t= t_0+2t \ . $$ As the average rank of the elements increasing their ranks by two is $t_0/2$, the increase in $f_2^r(x)$ is roughly $$ \frac 12 ((t_0-3t)/2)^2+ \frac 12 ((t_0-t)/2)^2+2t \cdot 2 \cdot t_0/2 = \frac 14 (t_0^2+4t t_0+5t^2) $$ \item In the second case the inserted elements get ranks about $(t_0+t)/2$ and $(t_0+3t)/2$, respectively, while no existing elements before position $t_0+t$ change their ranks.
This implies that the increase in $f_1^r(x)$ is roughly $$ (t_0+t)/2+(t_0+3t)/2= t_0+2t, $$ and the increase in $f_2^r(x)$ is roughly $$ \frac 12 ((t_0+t)/2)^2+\frac 12((t_0+3t)/2)^2= \frac 14 (t_0^2+4tt_0+5t^2), $$ both matching the first case. \end{itemize} It is easy to construct situations where we get an exact match. Looking at the example we see that the non-run-creating bit will pass the run-creating bit around position $t_0$, and it is natural that at equal times before and after this event we get about the same value for $f_2^r$. We believe that the given family of examples giving the same values for $f_1^r(x)$ and $f_2^r(x)$ are essentially all such examples. We do not see any fundamental objection to the existence of a third function $f^?$ that would be able to distinguish all such pairs, but we have been unable to construct such a function with a small range. In the next section we achieve unique decodability, and that analysis is heavily based on $f_1$. We invite the reader to check that $f_1$ is not in general sufficient to distinguish the two cases in the example above. If, in the two situations described, the two leftmost bits are equal and the two rightmost bits are also equal but different from the first pair, $f_1$ is also approximately preserved. \section{Uniquely decodable codes for two deletions} \label{sec:unique-decode} We now return to our main goal, namely the construction of a $2$-deletion code with sketches of size totaling about $4 \log_2 n$ bits. We return to studying $f_1(x)$ and our analysis focuses on all ways of inserting the two bits to get the correct value of $f_1$. We start with some possible configuration and obtain all other configurations by moving the bits in a way that preserves $f_1$, and analyze the impact on $f_2$ and $f_1^r$ in this process. In this section, we will not require the function $f_2^r$. An inserted \textsf{1}\ decreases the value of $f_1$ if it moves left and passes a \textsf{0}\ and increases the value if it moves right passing a \textsf{0}. For an inserted \textsf{0}, the two cases are reversed. Remember that, by the proof of Theorem~\ref{thm:1sum}, once we have placed one of the bits, the location of the other bit is uniquely determined. \vspace{-1ex} \subsection{When two \textsf{0}'s or two \textsf{1}'s go missing} We start with the easy case, analogous to Lemma~\ref{lemma:04run}, when the two deleted bits have the same value. \begin{lemma} \label{lem:same-bits-deleted} If we have deleted two \textsf{1}'s or two \textsf{0}'s in forming the subsequence $y$ from $x$, then $f_1(x)$ and $f_2(x)$ together with $y$ identify $x$ uniquely. \end{lemma} \begin{proof} Suppose we insert two \textsf{0}'s as close to each other as possible giving the correct value of $f_1(x)$. Now all other insertions of the two bits giving the correct value of $f_1(x)$ are obtained by moving the \textsf{0}\ on the left further left and the \textsf{0}\ on the right further right. Each bit moves past one bit of the opposite type and we again call such a move an elementary move. At each such step two terms in the sum (\ref{eq:f1}) defining $f_1(\cdot)$ change. The left-moving \textsf{0}\ causes one term to increase by one and the right-moving one causes one term to decrease by one. It follows by Lemma~\ref{lemma:convex} that $f_2(x)$ is monotonically and strictly decreasing in this process. This implies that the location of the two bits giving the correct value for $f_2(x)$ is unique.
The case of inserting two \textsf{1}'s is completely analogous, except that the sign reverses and $f_2(x)$ is monotonically and strictly \emph{increasing} as the two \textsf{1}'s move apart. \end{proof} Note that we once again assume we know the total number of \textsf{1}'s in $x$ (modulo $3$, say), so we can detect that we are in the case of Lemma~\ref{lem:same-bits-deleted}. It now remains to consider the case when we have to insert a \textsf{0}\ and a \textsf{1}\ in $y$ to recover $x$. It is good to remember that once the moving \textsf{0}\ has passed a \textsf{1}, if it runs into one or more \textsf{0}'s, it moves past these ``effortlessly'' and stops next to the first \textsf{1}\ it encounters. Similarly, a \textsf{1}\ moves effortlessly past a run of \textsf{1}'s. The existence of these effortless moves makes the bits move at (slightly) different speeds. They move, on average, two steps to get past the next bit of the opposite type, but there are some random fluctuations. \vspace{-1ex} \subsection{Elementary moves and overtaking} Suppose we insert a \textsf{0}\ and a \textsf{1}\ as far right as possible and then need to move both to the left by elementary moves. The following easy-to-check observation will be handy multiple times. \begin{obs} \label{obs:overtake} When we move the inserted \textsf{0}\ and \textsf{1}\ to the left, the value of $f_2(x)$ decreases if the \textsf{0}\ is to the left of the \textsf{1}\ and increases otherwise. Thus, to obtain several possibilities where one can place the moving bits with the correct value of $f_2(x)$, the lead must change between \textsf{0}\ and \textsf{1}. \end{obs} As the bits move at the same average speed but with different random fluctuations, if the bits start close to each other they can overtake each other many times. This is in contrast to the case when analyzing $f_1^r$ and $f_2^r$ in Lemma~\ref{lemma:2run}, where the bit not causing any new runs moved at a strictly greater speed than the bit causing new runs. This fact was the key to the proof of Lemma~\ref{lemma:2run}, and the analogous lemma (with $f_1$ and $f_2$ instead of $f_1^r$ and $f_2^r$) is not true in the current situation. As the moving bits can overtake each other, we need to be slightly careful when defining an elementary move. \begin{definition}[Elementary move] We first move the leftmost moving bit left past one bit of the opposite value and then past a run of bits of its own value until it is adjacent to a bit of the opposite value. We then repeat this procedure with the second moving bit. \end{definition} Note that both the moving \textsf{0}\ and the moving \textsf{1}\ have a bit of the opposite value to their immediate left before and after an elementary move. These two moves together may not change the string, as can be seen from the following example (again with moving bits in bold). Suppose the current string is $100{\bf 10}$. Moving the first bit produces $10{\bf 1} 0{\bf 0}$, and moving the second bit we get $1{\bf 0} 0{\bf 1} 0$, the same string as we started with, but the identities of the moving bits have changed. It is easy to see that each of the moving bits always moves past at least one old bit (so there is a notion of progress in the position of the moving bits even if the string itself does not change in an elementary move). When the moving bits are adjacent to each other, after the left bit moves, the moving bit on the right has a bit of the same value to its left, which it will also jump over during the elementary move (as happens in the above example); a small simulation of this procedure is sketched below.
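To make the procedure concrete, here is a minimal Python sketch of an elementary move; it reproduces the example above. The representation of a string as (bit, moving) pairs and the function names are illustrative choices of ours, not notation used elsewhere in the paper, and the sketch assumes exactly two moving bits.

\begin{verbatim}
# A string is a list of (bit, moving) pairs; 'moving' marks an inserted bit.
def slide_left(s, i):
    """The moving bit at index i glides 'effortlessly' past bits of its own
    value, passes one bit of the opposite value, and glides again until a
    bit of the opposite value is immediately to its left."""
    bit = s[i][0]
    while i > 0 and s[i - 1][0] == bit:          # effortless glide
        s[i - 1], s[i] = s[i], s[i - 1]; i -= 1
    if i > 0:                                    # pass one opposite bit
        s[i - 1], s[i] = s[i], s[i - 1]; i -= 1
    while i > 0 and s[i - 1][0] == bit:          # effortless glide
        s[i - 1], s[i] = s[i], s[i - 1]; i -= 1

def elementary_move(s):
    """Move the leftmost moving bit first, then the other one (whose index
    is unaffected by the first slide, since everything happens to its left)."""
    left, right = sorted(i for i, (_, m) in enumerate(s) if m)
    slide_left(s, left)
    slide_left(s, right)

def show(s):
    return "".join("[%d]" % b if m else str(b) for b, m in s)

s = [(1, False), (0, False), (0, False), (1, True), (0, True)]  # 100[1][0]
elementary_move(s)
print(show(s))  # -> 1[0]0[1]0: same bits, moving-bit identities changed
\end{verbatim}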
This example also shows the mechanism of overtaking. When the bits are close and move in an area of mostly \textsf{0}'s, the moving \textsf{0}\ moves faster. It is easy to see that for an elementary move to cause one moving bit to overtake the other, the bits must have started next to each other. \medskip \noindent {\bf Remark.} If the two erased bits are far from each other, we do get unique decodability. We claim that the values of $f_1(x)$ and $f_2(x)$ are, with high probability, sufficient to reconstruct $x$ if this string is random and two random bits are deleted. This follows as two random bits are likely to be far apart and the fluctuations in the speeds are small for a random $x$. \vspace{-1ex} \subsection{When the run count changes by 0 or 4} While the values $f_1(x)$ and $f_2(x)$ might guarantee unique decoding in many cases, we are interested in a worst-case result and thus we now bring $f_1^r(x)$ into the picture (which incurs an additional $\log_2 n +O(1)$ bits of redundancy, since we can specify $f_1^r$ modulo $O(n)$). It turns out this information is sufficient for unique decodability in half of the remaining cases. \begin{lemma}\label{lemma:04run01} Suppose we add 0 or 4 new runs when inserting a \textsf{0}\ and a \textsf{1}. Then the information $f_1(x)$ and $f_1^r(x)$ jointly with $y$ is sufficient to identify $x$ uniquely. \end{lemma} \begin{proof} Let us start with the case of no new runs. Let us insert the two bits as far right as possible (as allowed by $f_1(x)$) and then move the bits to the left, keeping the correct value of $f_1(x)$. It is easy to see that $f_1^r(x)$ is strictly decreasing in this process, as moving a bit that does not create runs to the left makes $f_1^r$ strictly decrease. The case of four new runs (i.e., both inserted bits giving two new runs) is similar. We again insert the bits as far right as possible based on $f_1(x)$. This time, when moving both bits to the left, $f_1^r(x)$ is strictly increasing. \end{proof} \subsection{When the run count changes by 2} \label{subsec:run-2-0-and-1} We need to analyze the final case, when we insert a \textsf{1}\ and a \textsf{0}\ and exactly one of the two bits creates two new runs. This takes the bulk of the work, given that Lemmas \ref{lem:same-bits-deleted} and \ref{lemma:04run01} had short and easy proofs. Since we have to get the value of $f_1(x)$ right, the insertion of a \textsf{1}\ in position $i$ determines the position at which the \textsf{0}\ must be inserted. For the analysis, we track certain \emph{pseudorank} functions with the property that if the \textsf{1}\ and \textsf{0}\ are inserted into positions (with the correct value of $f_1(x)$) where one of them creates two runs and the other creates no runs, the pseudorank equals $f_1^r(x)$. We can then track how the pseudorank changes as we move the inserted bits, to understand the positions where the correct value of $f_1^r(x)$ can also be obtained. \begin{definition}[Pseudorank] \label{def:pseudorank} The \emph{1-pseudorank}, denoted $A_1(i)$ and indexed by the position $i$ where the moving \textsf{1}\ is inserted (into $y$), is defined by the following process: \begin{enumerate} \itemsep=0ex \vspace{-1ex} \item Insert a \textsf{1}\ in position $i$. \item Insert a \textsf{0}\ in the position that ensures that $f_1(x)$ takes the correct value. \item For each bit of $y$, its 1-pseudorank is its rank in $y$ unless it is to the right of the inserted \textsf{1}, in which case its 1-pseudorank is two more than its rank in $y$.
\item The 1-pseudoranks of the inserted \textsf{1}\ and \textsf{0}\ equal their actual ranks. \item Finally, $A_1(i)$ is defined to be the sum of the 1-pseudoranks of the individual bits. \end{enumerate} The \emph{0-pseudorank} function $A_0(\cdot)$ is defined analogously, reversing the roles of \textsf{1}\ and \textsf{0}\ (so bits of $y$ to the right of where the \textsf{0}\ is inserted have pseudoranks equal to their rank in $y$ plus $2$). However, we index $A_0$ also by the position of the inserted \textsf{1}\ (rather than the inserted \textsf{0}), enabling us to reason about and compare $A_0$ and $A_1$ at the same location where we insert the \textsf{1}. \end{definition} Note that whenever the described process makes the inserted \textsf{1}\ create two new runs while the inserted \textsf{0}\ does not, $A_1(i)$ agrees with $f_1^r(x)$. A similar claim holds for $A_0(i)$ when the inserted \textsf{0}\ creates two runs and the inserted \textsf{1}\ creates no runs. The following lemma establishes a crucial monotonicity of the pseudorank functions. \begin{lemma}\label{lemma:weakincrease} $A_1(i)$ never decreases by an elementary move. Whenever at least one of the moving bits encounters a run of at least two adjacent \textsf{1}'s, $A_1(i)$ strictly increases. Also, if the moving \textsf{1}\ overtakes the moving \textsf{0}, then $A_1(i)$ strictly increases. Similarly, $A_0(i)$ never decreases by an elementary move. Whenever at least one of the moving bits encounters a run of at least two adjacent \textsf{0}'s, $A_0(i)$ strictly increases. Also, if the moving \textsf{0}\ overtakes the moving \textsf{1}, then $A_0(i)$ strictly increases. \end{lemma} \begin{proof} As \textsf{0}'s and \textsf{1}'s are symmetric, it is enough to prove the first part of the lemma. We move the leftmost bit first, and let us first assume that the second moving bit does not overtake the first, in which case we can analyze the two moving bits independently. The moving \textsf{1}\ either moves past a single \textsf{0}, encountering another \textsf{0}, or it moves past a \textsf{0}\ and then passes at least one additional \textsf{1}\ until it hits the next \textsf{0}. In the first case, the single \textsf{0}\ increases its 1-pseudorank by 2 and no other rank (including that of the moving \textsf{1}) changes. In the second case, the rank of the moving bit decreases by 2, but there are at least two bits whose 1-pseudoranks increase by 2. In either case, the total 1-pseudorank increase of all bits around the moving \textsf{1}\ (including itself) is at least 2. Also, if the moving \textsf{1}\ moves effortlessly past at least two \textsf{1}'s, then the 1-pseudorank increases by at least $4$. The 1-pseudorank of the moving \textsf{0}, which equals its actual rank, stays the same if there is a run of at least two \textsf{1}'s immediately to its left, and it drops by $2$ if there is a run with a single \textsf{1}\ to its left. This is illustrated respectively by the two cases where 11\textbf{0}000 changes to 1\textbf{0}1000 (the 1-pseudorank stays the same) and where 101\textbf{0}00 changes to 1\textbf{0}0100 (the 1-pseudorank decreases by 2). No other change of 1-pseudorank is caused by the moving \textsf{0}. Adding together the two contributions caused by the moving \textsf{0}\ and the moving \textsf{1}, we see that $A_1$ cannot decrease. Further, if at least one of the moving bits encounters a run of two or more \textsf{1}'s during the elementary move, we get a strict increase in $A_1$.
We now consider the situation when the second moving bit overtakes the first. There are two cases, based on which bit is to the left. Suppose that the moving \textsf{1}\ is to the left and moves first. It moves exactly one step to its left (otherwise the moving \textsf{0}\ could not overtake it). Its rank remains the same and it passes exactly one bit of $y$, whose 1-pseudorank increases by two. After this, the moving \textsf{0}\ decreases its rank by two and does not change any other 1-pseudorank. In this case $A_1(i)$ does not change. This case is captured by the transformation $w 0^b \textbf{1} \textbf{0} z \to w \textbf{0} 0^{b-1} \textbf{1} 0 z$ for some $b \ge 2$. Note that neither moving bit encounters a run of two or more \textsf{1}'s in such an elementary move. If the moving \textsf{0}\ moves first, it takes one step to its left and does not change its rank. After this, the moving \textsf{1}\ moves past the \textsf{1}\ just passed by the moving \textsf{0}, passes the moving \textsf{0}\ itself, and additionally passes at least one \textsf{1}. The moving \textsf{1}\ decreases its rank (which equals its 1-pseudorank) by two, but at least two bits increase their 1-pseudoranks by two. Thus in this case there is a strict increase in $A_1$. \end{proof} The above lemma implies that for $A_1$ (resp. $A_0$) not to increase, both moving bits must be moving in a region without $11$ (resp. $00$) as a sub-string. In view of this, the following definition is natural. Below, $d$ is an absolute constant to be fixed later (the choice $d=7$ will suffice). \begin{definition}[Regularity] \label{def:regular} We say that a string $x \in \{0,1\}^n$ is {\em regular} if each (contiguous) sub-string of $x$ of length at least $d\log_2 n$ contains both 00 and 11. \end{definition} In other words, there is no sub-string of length at least $d\log_2 n$ all of whose $1$-runs are of length one, or all of whose $0$-runs are of length one. As we establish in Lemmas \ref{lem:fibonacci} and \ref{lem:regenc}, when $d$ is a large enough absolute constant, most $n$-bit strings are regular, and furthermore one can efficiently encode into a large subset of the regular strings. So we can focus on deletion recovery assuming that $x$ is regular. \smallskip Denote by $P_1$ (resp. $P_0$) the set of positions $i$ for which $A_1(i)$ (resp. $A_0(i)$) equals the desired value of $f_1^r(x)$. By Lemma~\ref{lemma:weakincrease}, $P_1$ is contained in an interval, say $I_1$, that does not contain two adjacent $1$'s. Similarly, $P_0$ is contained in an interval, say $I_0$, that does not contain $00$. If we are guaranteed that $x$ is regular, then the lengths of $I_0$ and $I_1$ are at most $d\log n$. Thus, to get the value of $f_1^r(x)$ correct, the possible locations where the \textsf{1}\ can be inserted are contained in $I_0 \cup I_1$. We now prove that if we also have to get the correct value of $f_2(x)$, then the positions where the \textsf{1}\ can be inserted must be contained in one of the two intervals $I_0$ or $I_1$. \begin{lemma} \label{lem:only-one-interval} There cannot be $p_1 \in I_1 \setminus I_0$ and $p_0 \in I_0 \setminus I_1$ such that inserting the \textsf{1}\ at $p_1$ or $p_0$ (and the \textsf{0}\ at the corresponding position implied by $f_1(x)$) both lead to the correct values of $f_2(x)$ and $f_1^r(x)$.
\end{lemma} \begin{proof} Suppose, for contradiction, that there exist positions $p_1 \in I_1 \setminus I_0$ and $p_0 \in I_0 \setminus I_1$ where the \textsf{1}\ can be inserted, both of which lead to correct values of $f_2(x)$ and $f_1^r(x)$. We have \begin{equation} \label{eq:a0-a1-fr} A_0(p_0)=A_1(p_1)=f_1^r(x) \ . \end{equation} Suppose, without loss of generality, that $p_0$ is to the right of $p_1$. First note that since $p_0$ is strictly to the right of $I_1$ and $A_1$ is monotone, we have that $A_1(p_0) \le A_1(p_1) = f_1^r(x)$. But since $p_0 \notin I_1$, $A_1(p_0) \neq f_1^r(x)$ and thus $A_1(p_0) < A_1(p_1)=A_0(p_0)$. Similarly, as $p_1$ is strictly to the left of $I_0$, we have $A_0(p_1) > A_0(p_0)=A_1(p_1)$. We claim that at any position $p$ such that $A_0 (p) > A_1(p)$, the inserted \textsf{1}\ is at least two positions to the right of the inserted \textsf{0}. This follows from the definition, as a \textsf{0}-pseudorank is larger than the corresponding \textsf{1}-pseudorank only for elements that are to the right of the inserted \textsf{0}\ but to the left of the inserted \textsf{1}. Thus if $A_0(p) > A_1(p)$ we have at least one such element, and the claim follows. We conclude that the inserted \textsf{1}\ is to the right of the inserted \textsf{0}\ both at $p_0$ and at $p_1$. Recall from Observation~\ref{obs:overtake} that for $f_2(x)$ to return to a previous value while $f_1(x)$ is preserved, the lead must change in the race between the moving \textsf{1}\ and the moving \textsf{0}. We conclude that this must happen between the positions $p_0$ and $p_1$. In fact, since the moving \textsf{0}\ is to the left of the moving \textsf{1}\ at both $p_0$ and $p_1$, there must be at least two such overtaking events, and in particular there must be an elementary move at which the moving \textsf{1}\ overtakes the moving \textsf{0}. Call this position $p^\ast$. We first observe that $A_1(p^\ast)= A_0(p^\ast)$. This is obvious by definition, as each bit has the same $0$-pseudorank and $1$-pseudorank when the inserted \textsf{0}\ and \textsf{1}\ are next to each other. By Lemma~\ref{lemma:weakincrease}, whenever the moving \textsf{1}\ overtakes the moving \textsf{0}\ we have a strict increase in $A_1$. Combining this with the monotonicity, we have $$ A_1(p_1) > A_1(p^\ast)=A_0(p^\ast) \geq A_0(p_0) $$ which contradicts (\ref{eq:a0-a1-fr}). Thus such $p_1, p_0$ cannot exist. \end{proof} Therefore, we conclude that all possible alternatives for inserting the \textsf{1}\ must fall within either $I_0$ or $I_1$. A similar claim shows that the possible positions to insert the \textsf{0}\ must be confined to a single interval that does not contain two adjacent $0$'s. If $x$ is regular, we can conclude that the positions where the \textsf{1}\ may be inserted are confined to an interval, say $I$, of width $d \log n$, and similarly the possibilities to insert the \textsf{0}\ are confined to a width-$d \log n$ interval $J$. By Observation~\ref{obs:overtake}, these intervals $I$ and $J$ must intersect so that the lead can change between the moving bits. Thus, both insertions must be confined to the interval $I \cup J$, which has width at most $2d \log n$.\footnote{We can in fact claim that both insertions happen in an interval of size $d \log n$, namely either $I$ or $J$, but this factor-$2$ savings is inconsequential.} We have thus established the following lemma. \begin{lemma}\label{lemma:bothclose} Assume that $x$ is regular in the sense of Definition~\ref{def:regular}.
Suppose we add exactly two new runs when inserting a \textsf{0}\ and a \textsf{1}\ into $y$ to obtain $x$. Then, given $f_1(x)$, $f_1^r(x)$, $f_2(x)$, and $y$, either \begin{itemize} \itemsep=0ex \item the information is sufficient to identify $x$ uniquely, or \item there is an interval $I$ of length at most $2d\log n$ such that both insertions in $y$ are located in this interval. \end{itemize} \end{lemma} \noindent {\bf Remark.} (Insufficiency of $f_1(x)$, $f_1^r(x)$ and $f_2(x)$ to uniquely pin down $x$.) Let us give an example showing that the information $f_1(x)$, $f_1^r(x)$ and $f_2(x)$ is not sufficient to uniquely determine $x$. The strings 1101111{\bf 0}10{\bf 1}1 and 11{\bf 1}01{\bf 0}111101, which can both become 1101111101, have the same values for these three functions. Both insertions are located in the interval $I_0$, which has no two adjacent $0$'s in this case. There are two positions $i,j \in I_0$, $i > j$, with $A_0(i) = A_0(j)=f_1^r(x)$, and the moving \textsf{1}\ overtakes the moving \textsf{0}\ between positions $i$ and $j$. \subsection{Handling ambiguity within small intervals} Combining Lemmas~\ref{lem:same-bits-deleted}, \ref{lemma:04run01}, and \ref{lemma:bothclose}, the original string $x$ is either uniquely determined, or we have found an interval of size $\Delta \le O(\log n)$ such that both deletions happened in that interval, provided $x$ is regular. To recover from this last case, we can encode each of those intervals by a two-deletion code. Since this code is for short lengths, it can use either sketches of length $\approx 4 \log_2 \Delta \le O(\log \log n)$, matching the existential bound that is found by brute force (cf. \cite[Lemma 1]{BGZ-soda}), or an explicit sub-optimal sketch of length $c \log \Delta \le O(\log \log n)$ for a larger constant $c$, for instance the construction with $c = 7$ from \cite{SRB20}. Of course, we cannot include this sketch for each interval, as that would make the overall sketch far too long; but since we know the deletions are confined to one of the intervals, we can simply XOR all these sketches. There is one small catch, in that the two deletions might occur in two adjacent intervals if we pick fixed interval boundaries. This is easily handled by computing the sketches also for another set of intervals which straddle the first set. Below we execute this idea by introducing explicit sketches for these intervals based on the ranks of the elements, using also the fact that we only need this in the case of Section~\ref{subsec:run-2-0-and-1}, so we have a self-contained solution that also fits the mold of our other sketches. Suppose, without loss of generality, that we know that the two bits are inserted in the interval $I_1$ and that we have at least two alternatives. As the moving \textsf{0}\ moves at least as fast as the moving \textsf{1}\ in $I_1$, there is a single take-over point in the interval. Similarly to the proof of Lemma~\ref{lemma:2run}, we can conclude that $f_2(x)$ is monotone before and after this take-over, and thus there are exactly two alternative positions at which to insert the moving \textsf{1}. Suppose the corresponding strings are $x$ and $x'$, where $x$ has the moving bits further to the right. When moving the bits left, going from $x$ to $x'$, the moving \textsf{0}\ has overtaken the moving \textsf{1}, and the moving \textsf{1}\ causes two new runs in both positions while the moving \textsf{0}\ does not cause a new run.
Thus if we look at the corresponding rank strings $r$ and $r'$, we have: \begin{itemize} \item $r_i=r_i'$ to the left of the moving \textsf{0}\ in $x'$, not including this last bit. \item $r_i\geq r_i'$ between the moving \textsf{0}\ of $x'$ (inclusive) and the moving \textsf{1}\ of $x'$, not including this last bit. Let us call this interval $J_1$. \item $r_i \leq r_i'$ between the moving \textsf{1}\ of $x'$ (non-inclusive) and the moving \textsf{1}\ of $x$, including this last bit. Let us call this interval $J_2$. \item $r_i\geq r_i'$ between the moving \textsf{1}\ of $x$ (non-inclusive) and the moving \textsf{0}\ of $x$, including this last bit. Let us call this interval $J_3$. \item $r_i=r_i'$ to the right of the moving \textsf{0}\ in $x$. \end{itemize} Looking at the values $r_i$ and $r_i'$, we have integers $a$, $b$, $c$ and $d$ such that $r_i, r_i' \in [a,b]$ when $i \in J_1$, $r_i, r_i' \in [b,c]$ when $i \in J_2$, and $r_i, r_i' \in [c,d]$ when $i \in J_3$. Now suppose we have a function $P$ such that $P(i) > P(i+1)$ when $i \in [a, b-1]$ or $i \in [c, d-1]$, while $P(i) < P(i+1)$ when $i \in [b, c-1]$. Then by the above reasoning we have \begin{equation} \label{eq:diff} \sum_{i=1}^{n+1} P (r_i) < \sum_{i=1}^{n+1} P(r_i'). \end{equation} This follows as whenever $r_i \not= r_i'$ we have $P(r_i) < P (r_i')$ by the properties of $P$. Now we claim that it is possible to find a polynomial of degree three with the required properties. Indeed, we can take the quadratic polynomial $Q(x)=(x-b)(c-x)$, which is positive exactly in the interval $[b,c]$, and demand that the derivative of $P$ equals $Q(x)$; explicitly, $P(x) = -x^3/3 + (b+c)x^2/2 - bcx$ works. We conclude that if we define $f_3^r = \sum_{i=1}^{n+1} r_i(r_i-1)(r_i-2)/6$, then we must have $f_j^r(x) \not= f_j^r(x')$ for at least one $j\in \{1,2,3\}$. Indeed, otherwise the sum of any cubic polynomial in $r_i$ would be the same at $x$ and $x'$, contradicting (\ref{eq:diff}). Thus, if we specify these three numbers, then we can distinguish the two remaining cases. This seems very expensive, but it is sufficient to specify these numbers locally, as follows. \begin{itemize} \item Divide $x$ into blocks of length $2d\log n$, $x^{(1)}$, $x^{(2)}, \ldots, x^{(m)}$, where $m= \lceil n/ (2d \log n) \rceil$ and we pad the last block with zeroes to make it full length. \item Output $F_2^r(x)=\oplus_{i=1}^m f_2^r(x^{(i)})$ and $F_3^r(x) =\oplus_{i=1}^m f_3^r(x^{(i)})$, where $\oplus$ is bitwise exclusive-or. \end{itemize} If the interval $I$ in which the two bits are to be inserted, as specified by Lemma~\ref{lemma:bothclose}, falls completely within one block $x^{(i)}$, then $F_2^r(x)$ and $F_3^r(x)$ make it possible to reconstruct $x$ uniquely. This follows as, if both bits are inserted in $x^{(i)}$, then we know $x^{(j)}$ for $j\not =i$, so we can compute $f_2^r(x^{(j)})$ and $f_3^r(x^{(j)})$ and hence deduce $f_2^r(x^{(i)})$ and $f_3^r(x^{(i)})$. It is not difficult to see that we can also deduce $f_1^r(x^{(i)})$ from $f_1^r(x)$. As discussed above, this information makes it possible to distinguish the two alternatives for $x$ and give a unique reconstruction. This does not work if the interval where the two bits are to be inserted intersects two blocks. We remedy this by making a different division into blocks of size $2d \log n$, but shifted by $d \log n$ positions, and computing quantities analogous to $F_2^r(x)$ and $F_3^r(x)$ with this block division. The interval of uncertainty is fully contained in a single block in one of the two block divisions; a sketch of this construction is given below.
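To fix ideas, the following minimal Python sketch implements these local sketches, assuming that the rank $r_i$ of position $i$ is the index of the run containing it and that $f_2^r$ and $f_3^r$ sum $r_i(r_i-1)/2$ and $r_i(r_i-1)(r_i-2)/6$ respectively, in line with the definitions above. The boundary convention behind the sums to $n+1$, the padding of partial blocks, and the function names are simplified or chosen freely here.

\begin{verbatim}
import math

def run_ranks(x):
    """r[i] = index (starting at 1) of the run of x containing position i."""
    r, run = [], 0
    for i, b in enumerate(x):
        if i == 0 or b != x[i - 1]:
            run += 1
        r.append(run)
    return r

def f2r(x):
    return sum(r * (r - 1) // 2 for r in run_ranks(x))

def f3r(x):
    return sum(r * (r - 1) * (r - 2) // 6 for r in run_ranks(x))

def local_sketch(x, f, d=7, shift=False):
    """XOR of f over blocks of length 2*d*log2(n); with shift=True the
    division is displaced by d*log2(n) positions, so a short interval
    lies inside a single block of at least one of the two divisions."""
    n = len(x)
    L = 2 * d * max(1, int(math.log2(n)))
    out, i = 0, (-L // 2 if shift else 0)
    while i < n:
        block = x[max(i, 0):i + L]
        block = block + [0] * (L - len(block))   # pad to full length
        out ^= f(block)
        i += L
    return out

x = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1] * 20          # a toy input string
F2, F2s = local_sketch(x, f2r), local_sketch(x, f2r, shift=True)
F3, F3s = local_sketch(x, f3r), local_sketch(x, f3r, shift=True)
\end{verbatim}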
As the lengths of the blocks are $O(\log n)$, it is not difficult to see that it is sufficient to specify $O( (\log n)^2)$ different values for $f_2^r(x^{(i)})$ and $O( (\log n)^3)$ different values for $f_3^r(x^{(i)})$ to identify the correct value over the integers. This gives a total of $O( (\log n)^5)$ different values for each block division, and we have finally proved the following theorem. The sketch referred to includes $f_1(x)$, $f_2(x)$, $f_1^r(x)$, the local sketches $F_2^r(x)$ and $F_3^r(x)$ for the two divisions of the positions into intervals of size $2d \log n$, and any constant-sized sketches needed to determine which case we fall in (regarding the identity of the bits deleted and their effect on the number of runs). \begin{theorem} \label{thm:main-0} There is an explicitly computable sketch function $s$ mapping $n$ bits to $4 \log n + 10 \log \log n + O(1)$ bits such that for any \emph{regular} $x \in \{0,1\}^n$, given $s(x)$ and any subsequence $y$ of $x$ obtained by deleting two bits, one can uniquely recover $x$. \end{theorem} Note that the above only works for regular strings. Appealing to Lemma~\ref{lem:reduce-to-sketch}, we will have our desired two-deletion code if we can encode messages into regular strings. We show how to do this in Section~\ref{subsec:encode-regular}, leading finally to our main theorem giving explicit two-deletion codes of size matching the best known existential bound up to lower-order terms. \begin{theorem}[Main] \label{thm:main-body} There is an explicit (efficiently encodable) binary code $C \subseteq \{0,1\}^n$ of size $\Omega (2^n n^{-4} (\log n)^{-10})$ that can be uniquely decoded from two deletions. \end{theorem} \subsection{Encoding into regular strings} \label{subsec:encode-regular} All that remains to complete the proof of Theorem~\ref{thm:main-body} is a way to encode messages into regular strings in $\{0,1\}^n$ in a rate-efficient manner. First let us show that the number of regular strings is large. The following lemma shows that non-regular strings form a negligible fraction of all strings, since the $m$'th Fibonacci number $F_m$ is at most $(1.62)^m$. \begin{lemma} \label{lem:fibonacci} The number of $m$-bit strings not containing two adjacent \textsf{0}'s is $F_{m+2}$, where $F_i$ is the $i$'th Fibonacci number. \end{lemma} \begin{proof} Let $S_m$ be the number of strings of the type described in the lemma. It is immediate to check that $S_1=2$ and $S_2=3$. For general $m$, a string of $m$ bits without 00 is either a string starting with 1 followed by an arbitrary string with the property of length $m-1$, or a string starting with 01 followed by such a string of length $m-2$. We conclude that $S_m=S_{m-1}+S_{m-2}$, and the lemma follows. \end{proof} We can now encode into a large subset of regular strings by enumerating strings of length $O(\log n)$ that contain both $00$ and $11$ and using these locally to piece together an $n$-bit string. \begin{lemma} \label{lem:regenc} There is a one-to-one map $\text{RegEnc}: \{1,2,\dots,M\} \to \{0,1\}^n$ for some $M \ge 2^{n-1}$ such that $\text{RegEnc}$ is computable in $\text{poly}(n)$ time and its image is contained in the set of regular strings (per Definition~\ref{def:regular} with the choice $d=7$). \end{lemma} \begin{proof} Let $Q$ be the set of binary strings of length $\Delta := \lfloor \frac{d}{2} \log_2 n \rfloor$ which contain both $00$ and $11$ as a substring. Denote $m =\lfloor n/\Delta \rfloor$, and set $M = |Q|^m 2^{n-m\Delta}$.
By Lemma~\ref{lem:fibonacci}, we have \[ M \ge 2^n \Bigl(1 - m (1.62)^2 (1.62/2)^{\Delta} \Bigr) \ge 2^n (1 - 3 \Delta^{-1} n^{1-0.15 d}) \ge 2^{n-1} \] for $d \ge 7$ and $n$ big enough. Fix any efficiently computable bijection $\phi$ from $[M] := \{1,2,\dots,M \}$ to $Q^m \times \{0,1\}^{n-m\Delta}$. Consider the map $\psi : Q^m \times \{0,1\}^{n-m\Delta} \to \{0,1\}^n$ that concatenates the strings in $Q$ corresponding to the first $m$ components and then appends the last $n-m\Delta$ bits to form an $n$-bit string. The composition $\psi \circ \phi$ is our desired map $\text{RegEnc}$. The regularity of the output string follows because any contiguous substring of length $d \log_2 n$ must fully contain one of the blocks, which is a string from $Q$, and thus have both a $00$ and a $11$ occurring within it. \end{proof} \bibliographystyle{alpha}
\section{Tension between the distance anchors} \label{sec:anchors} R11 and R16 use the LMC, NGC 4258 and Milky Way (MW) Cepheid parallaxes as the primary distance anchors. In Sect. \ref{subsec:LMCanchor}, we discuss the LMC and NGC 4258 anchors and show that their geometrical distances are in tension with the R16 photometry. The MW distance anchor is discussed in Sect. \ref{subsec:MWanchor}. \subsection{The LMC and NGC 4258 anchors} \label{subsec:LMCanchor} Recently, there have been substantial improvements in the accuracies of the distance anchors. Using 20 detached eclipsing binary (DEB) systems, \cite{Pietrzynski:2019} determine a distance modulus for the Large Magellanic Cloud (LMC) of \begin{equation} \mu_{\rm LMC} = 18.477 \pm 0.026 \ {\rm mag}. \label{equ:mu1} \end{equation} A reanalysis by \cite{Reid:2019} of the VLBI observations \cite{Humphreys:2013} of water masers in NGC 4258 gives a distance modulus of \begin{equation} \mu_{\rm N4258} = 29.397 \pm 0.033 \ {\rm mag}, \label{equ:mu2} \end{equation} substantially reducing the systematic error in the geometric distance to this galaxy compared to the distance estimate used in R16. In addition, R19 have published HST photometry for LMC Cepheids using the same photometric system as that used for the cosmological analysis of $H_0$ in R16. With these observations, calibration errors associated with ground-based photometry of LMC Cepheids are effectively eliminated as a significant source of systematic error. I use the data for the 70 LMC Cepheids listed in R19 and the data for the 139 Cepheids in NGC 4258 from R16 to perform a joint $\chi^2$ fit: \begin{subequations} \begin{eqnarray} (m^W_H)_{\rm LMC} & = & \mu_{\rm LMC} + c + b \ {\rm log}_{10} \left ({ P \over 10 \ {\rm days} } \right), \label{equ:M3a} \\ (m^W_H)_{\rm N4258} & = & \mu^P_{\rm N4258} + c + b \ {\rm log}_{10} \left ({ P \over 10 \ {\rm days} } \right), \label{equ:M3b} \end{eqnarray} \end{subequations} where $\mu_{\rm LMC}$ is fixed at the value of Eq. (\ref{equ:mu1}) and $\mu^P_{\rm N4258}$, $c$, $b$ are parameters to be determined from the fit\footnote{For the LMC Cepheids, an intrinsic scatter of $0.08$ mag is added to the error estimates given in R19, which is necessary to produce a reduced $\chi^2$ of unity.}. The results are as follows: \begin{subequations} \begin{eqnarray} \mu^P_{\rm N4258} & = & 29.220 \pm 0.029, \label{equ:M4a} \\ c & = & -5.816 \pm 0.011, \label{equ:M4b}\\ b & = & -3.29 \pm 0.04. \label{equ:M4c} \end{eqnarray} \end{subequations} The difference between (\ref{equ:M4a}) and (\ref{equ:mu2}) is \begin{equation} \Delta \mu_{\rm N4258} = (\mu_{\rm N4258} - \mu^P_{\rm N4258}) = 0.177 \pm 0.051, \label{equ:M4} \end{equation} which differs from zero by nearly $3.5 \sigma$. In other words, the DEB LMC distance together with the SH0ES Cepheids places NGC 4258 at a distance of $6.98 \ {\rm Mpc}$ if metallicity effects are ignored, whereas the maser distance is $7.58 \ {\rm Mpc}$. The best fit PL relation to the combined LMC and NGC 4258 data is shown in Fig. \ref{fig:PLanchor}. \begin{figure} \centering \includegraphics[width=150mm, angle=0]{figures/pgPLanchor.pdf} \caption{The joint PL relation for the LMC and NGC 4258 Cepheids. The line shows the best fit from Eqs. (\ref{equ:M4a}) -- (\ref{equ:M4c}).} \label{fig:PLanchor} \end{figure} There are a number of possible explanations of this result: \smallskip \noindent (i) There may be unidentified systematic errors in the geometric distance estimates of Eqs. (\ref{equ:mu1}) and (\ref{equ:mu2}).
\smallskip \noindent (ii) Part of the discrepancy may be attributable to a metallicity dependence of the PL relation. \smallskip \noindent (iii) There may be a photometric offset between the R16 and LMC Cepheid magnitudes. \smallskip Since this discrepancy involves two of the three primary distance anchors used by the SH0ES team, it needs to be considered extremely seriously. Point (i) can be tested by using independent distance indicators. For example, a recent paper \cite{Huang:2018} used the near-infrared PL relation of Mira variables in the LMC and NGC 4258 to determine a relative distance modulus of \begin{equation} \mu_{\rm N4258} - \mu_{\rm LMC} = 10.95 \pm 0.06, \qquad {\rm Miras}, \qquad \qquad \label{equ:mu3} \end{equation} (where the error is dominated by systematic errors). This estimate is in good agreement with the geometric estimates of Eqs. (\ref{equ:mu1}) and (\ref{equ:mu2}): \begin{equation} \mu_{\rm N4258} - \mu_{\rm LMC} = 10.92 \pm 0.04, \qquad {\rm DEB+maser}. \label{equ:mu4} \end{equation} \subsection{Milky Way parallaxes and NGC 4258} \label{subsec:MWanchor} \begin{figure} \centering \includegraphics[width=150mm, angle=0]{figures/pgMWPL.pdf} \caption{The PL relations for the MW Cepheids with parallaxes and the NGC 4258 Cepheids. The solid line shows the best fit to the joint data.} \label{fig:MWPL} \end{figure} Parallaxes of Milky Way (MW) Cepheids have been used by the SH0ES team as a third distance anchor. Following the publication of R16, \cite{Riess:2018} have measured parallaxes for an additional 7 MW Cepheids with periods greater than 10 days, supplementing the parallax measurements of 13 MW Cepheids from \cite{Benedict:2007, vanLeeuwen:2007}. Figure \ref{fig:MWPL} shows the equivalent of Fig. \ref{fig:PLanchor} but now using the 20 MW Cepheids as an anchor in place of the LMC\footnote{Note that I repeated the Gaia DR2 analysis of \cite{Riess:2018b}, finding identical results for the DR2 parallax zero-point offset. I therefore do not consider Gaia parallaxes any further.}. The best fit gives \begin{subequations} \begin{eqnarray} \mu^P_{\rm N4258} & = & 29.242 \pm 0.052, \label{equ:M5a} \\ c & = & -5.865 \pm 0.033, \label{equ:M5b}\\ b & = & -3.21 \pm 0.08. \label{equ:M5c} \end{eqnarray} \end{subequations} As with the LMC, this comparison disfavours the maser distance, though because the error bar is larger, this is significant at only $2.2\sigma$. However, since the metallicities of the MW and NGC 4258 Cepheids are very similar, this comparison suggests that metallicity differences of the PL relation cannot explain the shift reported in Eq. (\ref{equ:M4}). \begin{adjustwidth}{0.25in}{0.25in} \ \par \noindent{\underline{\bf Response from SH0ES team:} The full analysis of the SH0ES Cepheid data includes an empirically determined correction for Cepheid metallicity, as shown in equation 4.1 below. The difference stated here of 3.5$\sigma$ between the LMC and NGC 4258 appears so only when ignoring the metallicity term. Including it brings the LMC and NGC 4258 closer together by $\sim$ 0.06 mag and reduces the significance of the difference in these anchors to slightly over 2$\sigma$. Because the metallicity term is measured internally to the data and included as part of the $H_0$ analysis, we think the 2$\sigma$ number is the fair statement of the significance.
We note there are now 5 other geometric anchors from different methods of Milky Way parallax (HST FGS, HST spatial scan, Gaia DR2 Cepheids, Gaia DR2 Cepheid companions, and Gaia DR2 cluster hosts), which yield $H_0$ = 76.2 (2.2\%), 75.7 (3.3\%), 73.7 (3.3\%), 72.7 (3.8\%) and 73.3 (3.5\%) respectively (see Breuval et al. 2020); this makes it appear that the difference between 72.0 (NGC 4258, 1.5\%) and 74.2 (LMC, 1.3\%) is not significant. However, this point, which G.E. shared with us, motivated us to acquire new Cepheid observations of the outer regions of NGC 4258 with HST to measure the anchor distance at the same (i.e., low) metallicity, so we can revisit this issue with an improved characterization of metallicity. We appreciate G.E.'s suggestions and we expect an update on this in the next SH0ES paper.} \ \par \end{adjustwidth} \section{Object-by-object comparison of the R11 and R16 Cepheid photometry} \label{sec:appendix} \begin{table}[h] \begin{center} \begin{tabular}{llllllll} \hline & & & & \multicolumn{2}{c}{all Cepheids} & \multicolumn{2}{c}{outliers removed} \\ galaxy & $N_{\rm R11}$ & $N_{\rm R16}$ & $N_{\rm match}$ & $\qquad \langle \Delta m \rangle$& $\langle \Delta C \rangle$ & $ \qquad \langle \Delta m \rangle$ & $\langle \Delta C \rangle$ \\ \hline N4536 & 69 & 33 & 28 & $-0.114 \pm 0.057$ & $0.153$ & $-0.069 \pm 0.062$ & $0.153$\\ N4639 & 32 & 25 & 17 & $-0.071 \pm 0.100$ & $0.091$ & $-0.071 \pm 0.100$ & $0.091$ \\ N3370 & 79 & 63 & 51 & $-0.105 \pm 0.055$ & $0.146$ & $-0.090 \pm 0.055$ & $0.145$\\ N3982 & 29 & 16 & 12 & $-0.178 \pm 0.090$ & $0.092$ & $-0.081 \pm 0.094$ & $0.092$\\ N3021 & 29 & 18 & 13 & $+0.120 \pm 0.146$ & $0.196$ & $+0.120 \pm 0.146$ & $0.196$ \\ N1309 & 36 & 44 & 16 & $-0.087 \pm 0.091$ & $0.330$ & $-0.087 \pm 0.091$ & $0.330$ \\ N5584 & 95 & 83 & 65 & $-0.028 \pm 0.049$ & $0.039$ & $+0.001 \pm 0.051$ & $0.038$ \\ N4038 & 39 & 13 & 11 & $-0.239 \pm 0.153$ & $0.109$ & $-0.239 \pm 0.153$ & $0.109$ \\ N4258 & 165 & 139 & 73 & $-0.217 \pm 0.055$ & $0.145$ & $-0.020 \pm 0.062$ & $0.143$ \cr \hline \end{tabular} \caption{Offsets to the magnitudes and colours (in mag). $N_{\rm R11}$ is the number of Cepheids in Table 2 of R11. $N_{\rm R16}$ is the number of Cepheids in Table 4 of R16. $N_{\rm match}$ is the number of Cepheids common to both tables.} \label{table:fits} \end{center} \end{table} This analysis is based on matching Cepheids from Table 2 of R11 and Table 4 of R16. Note the following: \smallskip \noindent (i) The R11 table contains Cepheids with a rejection flag. Cepheids with IFLAG = 0 were accepted by R11 and those with IFLAG = 1 were rejected. \smallskip \noindent (ii) The data in the R16 table have been `pre-clipped' by the authors; the table does not list data for Cepheids that were rejected by R16. The R16 table also contains Cepheids that do not appear in the R11 table. \smallskip \noindent (iii) The R16 magnitudes have been corrected for bias and blending errors from scene reconstruction.
Each Wesenheit F160W magnitude has an error estimate: \begin{equation} \sigma_{\rm tot} = (\sigma^2_{\rm sky} + \sigma^2_{\rm ct} + \sigma^2_{\rm int} + f^2_{\rm ph} \sigma^2_{\rm ph})^{1/2}, \end{equation} where $\sigma_{\rm sky}$ is the dominant error and comes from contamination of the sky background by blended images; $\sigma_{\rm ct}$ is the error in the colour term $R(V-I)$, which is small and of order $0.07 \ {\rm mag}$; $\sigma_{\rm int}$ is the internal scatter from the width of the instability strip, which is known from the LMC and M31 to be small ($\approx 0.08$ mag); and $f_{\rm ph} \sigma_{\rm ph}$ is the error in the phase correction of the Cepheid light curves. \smallskip \noindent (iv) The positions in R11 are not listed to high enough precision to uniquely identify a Cepheid in the R16 table. There are ID numbers listed by R11 and R16, but for three galaxies (NGCs 3370, 3021, 1309) these numbers do not match. Where possible, we have matched Cepheids using their ID numbers. For the remaining three galaxies, we have used a combination of positional coincidence and agreement in periods to match Cepheids. (This procedure gives perfect agreement for the six galaxies with matching ID numbers.) Outliers can have a significant effect on fits to the magnitude and colour differences. We fit: \begin{subequations} \begin{eqnarray} (m_H^W)_{\rm R16} &= & (m_H^W)_{\rm R11} + \langle \Delta m_H^W \rangle, \\ (V-I)_{\rm R16} &= & (V-I)_{\rm R11} + \langle \Delta C \rangle, \end{eqnarray} \end{subequations} with and without outliers, where outliers are defined as having \begin{equation} \left \vert {((m_H^W)_{\rm R11} - (m_H^W)_{\rm R16}) \over (\sigma_{\rm tot})_{\rm R16}} \right \vert > 2.5. \end{equation} The results are given in Table \ref{table:fits}. The rest of this appendix shows the equivalent plots for each of the nine galaxies. \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A4536.pdf} \caption {The plot to the left shows R11 Wesenheit $H$ magnitudes plotted against R16 Wesenheit $H$ magnitudes. The central plot shows R11 $(V-I)$ colours plotted against R16 $(V-I)$ colours. The dashed line in the central panel shows the best fit offset. The plot to the right shows the difference in $H$-band magnitudes in units of the R16 error, $((m_H^W)_{\rm R11} - (m_H^W)_{\rm R16})/(\sigma_{\rm tot})_{\rm R16}$, plotted against R16 $(V-I)$ colour. Blue points show Cepheids with IFLAG=1 in R11 (i.e. these were rejected by R11 but accepted by R16). Red points show Cepheids with IFLAG=0 in R11.} \label{fig:4536} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R4536R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4536R16.pdf} \caption {The left plot shows the PL relation for R11 Cepheids. Blue points show Cepheids rejected by R11 (IFLAG = 1); red points show Cepheids accepted by R11 (IFLAG = 0). The solid line shows the best fit linear relation fitted to the red points. The dashed line shows the best fit with the slope constrained to $b=-3.3$. The right plot shows the PL relation for R16 Cepheids. The solid line shows the best fit linear relation fitted to the red points. The dashed line shows the best fit with the slope constrained to $b=-3.3$. The parameters of these fits are given in Table \ref{table:PL}.} \label{fig:R4536} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A4639.pdf} \caption {As Fig.
\ref{fig:4536}.} \label{fig:4639} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R4639R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4639R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R4639} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A3370.pdf} \caption {As Fig. \ref{fig:4536}.} \label{fig:3370} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R3370R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R3370R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R3370} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A3982.pdf} \caption {As Fig. \ref{fig:4536}.} \label{fig:3982} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R3982R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R3982R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R3982} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A3021.pdf} \caption {As Fig. \ref{fig:4536}.} \label{fig:3021} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R3021R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R3021R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R3021} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A1309.pdf} \caption {As Fig. \ref{fig:4536}.} \label{fig:1309} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R1309R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R1309R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R1309} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A5584.pdf} \caption {As Fig. \ref{fig:4536}.} \label{fig:5584} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R5584R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R5584R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R5584} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A4038.pdf} \caption {As Fig. \ref{fig:4536}. There are 39 Cepheids listed in R11 and 13 in R16, with 11 matches.} \label{fig:4038} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R4038R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4038R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R4038} \end{figure} \begin{figure} \includegraphics[width=150mm, angle=0]{figures/A4258.pdf} \caption {As Fig. \ref{fig:4536}.} \label{fig:4258} \end{figure} \begin{figure} \includegraphics[width=75mm, angle=0]{figures/R4258R11.pdf} \includegraphics[width=75mm, angle=0]{figures/R4258R16.pdf} \caption {As Fig. \ref{fig:R4536}} \label{fig:R4258} \end{figure} \newpage \section{Conclusions} \label{sec:conclusions} In the abstract, I asked the question `what would it take to make SH0ES compatible with early time measurements?'. The right hand panel of Fig. \ref{fig:H0_contour1} provides an answer. A bias of about $0.1$--$0.14$ mag in the intercept of the PL relation of the crowded-field SH0ES photometry, common to all SH0ES galaxies, which I have termed the SH0ES degeneracy, resolves the tension between late and early time measurements of $H_0$ without the need for new physics (a back-of-envelope check of the size of this bias is sketched below). Such a bias also resolves the tension between the geometric distance of the LMC, MW parallaxes and the maser distance to NGC 4258, and resolves the tension between the TRGB distance scale of F19 and the Cepheid scale of the SH0ES team (which is based on combining the LMC, MW and NGC 4258 anchors).
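As a rough check of the size needed, note that a common-mode offset $\delta a$ (photometry too bright by $\delta a$) biases every inferred distance modulus low by $\delta a$, and hence biases $H_0$ high by a factor of $10^{0.2\,\delta a}$. The following back-of-envelope Python sketch, starting from the joint-anchor value $H_0 = 73.5 \ {\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ of Eq. (\ref{equ:deg4b}), is only meant to confirm that a $0.1$--$0.14$ mag bias is of the right size:

\begin{verbatim}
# H0 scales as 10**(-0.2*da) once a common-mode magnitude bias da
# (photometry too bright by da mag) is removed from the calibration.
for da in (0.10, 0.14):
    print(da, round(73.5 * 10 ** (-0.2 * da), 1))
# -> 70.2 and 68.9 km/s/Mpc, close to the early-time value of 67-68
\end{verbatim}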
To my knowledge, there is no convincing way at present of ruling out such a common mode systematic in the SH0ES data. {\it Excluding the SH0ES degeneracy as a possibility should be a priority for future research.} Fortunately, this can be done by concentrating on {\it calibrations}, since the Hubble tension is a near 10\% effect. There is really no need to focus on acquiring data for more SN host galaxies (which may cause $\sim 2$\% changes to the value of $H_0$, see Fig. \ref{fig:H0}). Here are some possible ways forward: \smallskip \noindent (i) The CCHP and SH0ES teams disagree on the calibration of the TRGB \cite{Yuan:2019, Freedman:2020}. The main issue concerns corrections for reddening and extinction to the LMC. As discussed by \cite{Freedman:2020}, Gaia parallaxes of MW globular clusters should improve the Galactic calibration of the TRGB, and JWST should allow precision measurements of additional calibrators. Calibrating the TRGB ladder using NGC 4258 would provide an important consistency check of the LMC calibration. \smallskip \noindent (ii) The discrepancy between the LMC and NGC 4258 anchors identified in Sect. \ref{subsec:LMCanchor} needs to be resolved. Even if this turns out to be a consequence of a rare statistical fluctuation (which I doubt), one needs to establish which of these anchors is closer to the truth. In my view, the DEB LMC distance appears to be reliable, and so it would be worth rechecking the NGC 4258 maser VLBI analysis \cite{Reid:2019}. Independent checks of the maser distance, for example refining the accuracy of the TRGB distance to NGC 4258 \cite{Mager:2008}, would be particularly valuable. Another interesting test would be to obtain HST data on Cepheids in the outskirts of NGC 4258, reducing the impact of crowding corrections and metallicity differences with the LMC (Riess, private communication). If the distance to NGC 4258 can be shown to be lower than the $7.58 \ {\rm Mpc}$ found by \cite{Reid:2019}, this would strengthen the case for the Hubble tension. \smallskip \noindent (iii) I have included the Appendix on the R11 and R16 photometric comparison because I am far from convinced that the crowded field photometry is unbiased and returns realistic errors. It should be possible to apply MCMC techniques for scene reconstruction and to calculate posterior distributions of the magnitude errors (which will almost certainly be asymmetric). It would also be helpful if researchers published as much data as possible, including data on outliers, to allow independent checks. \smallskip \noindent (iv) In the longer term, other techniques such as strong lensing time delays, distant masers, and gravitational wave sirens will hopefully become competitive with the traditional distance ladder approaches\footnote{As this article was nearing completion, \cite{Birrer:2020} presented a new analysis of strong gravitational lensing time delays, analysing the mass-sheet degeneracy and sensitivity to galaxy mass profiles. These authors find $H_0=67.4^{+4.1}_{-3.2} ~\text{km}~\text{s}^{-1}~\text{Mpc}^{-1}$, lower than the value derived by \cite{Shajib:2020} but with a larger error. Also, the Atacama Cosmology Telescope collaboration released the DR4 maps and cosmological parameters \cite{Naess:2020, Aiola:2020}. Their results are in very good agreement with the results from \Planck. Although this was not a surprise to me, it surely lays to rest any remaining concerns of even the most hardened of skeptics that the \Planck\ results are affected by systematic errors.}.
\begin{adjustwidth}{0.25in}{0.25in} \ \par \noindent{\underline{\bf Response from SH0ES team: } The case that Vanilla $\Lambda$CDM calibrated in the Early Universe predicts a value of $H_0$ of 67--68 appears very sound. It is a mystery to us why the distance ladder methods consistently find a higher value, but one we feel worth paying attention to. It is worth highlighting a recent result that is independent of all of G.E.'s considerations in this talk and all other rungs, Cepheids, TRGB, SNe Ia, etc., and which has not received much attention. That is the final result from the Maser Cosmology Project, Pesce et al. 2020, which measures geometric distances to 6 masers in the Hubble flow and finds $73.9 \pm 3.0$, which corroborates prior indications that the local value of $H_0$ exceeds the predicted value with $\sim$ 98\% confidence if G.E. sees no problems with it (tongue back in cheek). } \ \par \end{adjustwidth} \medskip \medskip \section{The SH0ES degeneracy} \label{sec:degeneracy} One interpretation of the results of the previous section is that \cite{Reid:2019} have overestimated the maser distance to NGC 4258 and/or underestimated the error. The maser analysis is considerably more complicated than the DEB distance estimates, and so it is extremely important that the maser analysis is revisited and, if possible, checked against independent techniques (such as TRGB \cite{Mager:2008, Jang:2017} and Mira \cite{Huang:2018} distance measurements). In this Section, I want to discuss another possibility, which I will call the `SH0ES degeneracy'. \subsection{Global fits and values of $H_0$} \label{subsec:global_fits} I will begin by analysing global fits to determine $H_0$ in (almost) the same way as in the SH0ES papers. A metallicity dependence of the PL relation is included by adding an extra term to Eq. (\ref{equ:dataP}), so that the magnitude of Cepheid $j$ in host galaxy $i$ is \begin{equation} (m_H^W)_{i, j} = a_i + b \log_{10} \left [ { P_{i,j} \over 10 \ {\rm days}} \right ] + Z_w \Delta \log_{10} (O/H)_{i, j}, \end{equation} where $Z = 12 + \log_{10} (O/H)_{i,j}$ is the metallicity listed in Table 4 of R16 and $\Delta \log_{10} (O/H)_{i, j}$ is the difference relative to Solar metallicity, for which I adopt $Z_\odot = 8.9$. For the LMC I adopt a uniform metallicity of $Z_{\rm LMC} = 8.65$. First I list the results\footnote{The fits discussed here were computed using the {\tt MULTINEST} sampling algorithm \cite{Feroz:2009, Feroz:2011}.} using NGC 4258 as a distance anchor, adopting the distance modulus of Eq. (\ref{equ:mu2}). In these fits, I use all Cepheids with periods in the range $10-200 \ {\rm days}$. In the first solution, labelled `no priors', no slope or metallicity priors are imposed. In this solution, the best fit slope is much shallower than the slope of the M31 PL relation (as discussed in Sect. \ref{subsec:photometry}). For the solution labelled `slope prior', I impose a tight prior on the slope of $b = -3.300 \pm 0.002$ to counteract the HST photometry and force the slope to match the M31 and LMC slopes.
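Purely for illustration, the following minimal Python sketch sets up the linear part of such a global fit (host intercepts $a_i$, common slope $b$, metallicity coefficient $Z_w$) as a weighted least-squares problem, with the slope prior imposed as a Gaussian pseudo-observation. The data arrays, the function name, and the least-squares shortcut are simplifications for illustration; the fits quoted below were obtained by sampling the full posterior with {\tt MULTINEST}.

\begin{verbatim}
import numpy as np

def fit_pl(m, sigma, logP, dOH, host, slope_prior=None):
    """Least-squares fit of m = a_host + b*(logP - 1) + Z_w*dOH.
    slope_prior = (b0, sigma_b) adds a Gaussian prior row for b."""
    hosts = sorted(set(host))
    ncol = len(hosts) + 2                     # a_i ..., then b, Z_w
    A = np.zeros((len(m), ncol))
    for k in range(len(m)):
        A[k, hosts.index(host[k])] = 1.0      # intercept of host k
        A[k, -2] = logP[k] - 1.0              # b * log10(P / 10 days)
        A[k, -1] = dOH[k]                     # Z_w * Delta log10(O/H)
    w = 1.0 / np.asarray(sigma)               # weight rows by 1/sigma
    Aw, yw = A * w[:, None], np.asarray(m) * w
    if slope_prior is not None:               # e.g. (-3.300, 0.002)
        b0, sb = slope_prior
        row = np.zeros(ncol); row[-2] = 1.0 / sb
        Aw = np.vstack([Aw, row]); yw = np.append(yw, b0 / sb)
    theta, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    return dict(zip(["a_%s" % h for h in hosts] + ["b", "Z_w"], theta))
\end{verbatim}

With an anchor of known distance modulus, the fitted intercepts translate into distance moduli for the SN hosts and hence into $H_0$. The results of the full fits are as follows.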
\begin{subequations} \begin{eqnarray} {\rm NGC} \ {\rm 4258}\ {\rm anchor}, \ {\rm no} \ {\rm priors:} & & H_0 = 72.0 \pm 1.9 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \qquad \qquad \label{equ:deg2a}\\ & & b = -3.06 \pm 0.05, \nonumber \\ & & Z_w = -0.17 \pm 0.08, \nonumber \\ {\rm NGC} \ {\rm 4258}\ {\rm anchor}, \ {\rm slope} \ {\rm prior:} & & H_0 = 70.3 \pm 1.8 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \label{equ:deg2b}\\ & & b = -3.30 \pm 0.002, \nonumber \\ & & Z_w = -0.15 \pm 0.08. \nonumber \end{eqnarray} \end{subequations} These solutions are not strongly discrepant with the \Planck\ value of $H_0$; the `no prior' solution for $H_0$ is high by $2.3\sigma$ and the `slope prior' solution is high by $1.5\sigma$ (see also E14). Using the LMC as a distance anchor, with the LMC photometry from R19, and adopting the distance modulus of Eq. (\ref{equ:mu1}), I find \begin{subequations} \begin{eqnarray} {\rm LMC} \ {\rm anchor}, \ {\rm no} \ {\rm priors:} & & H_0 = 76.5 \pm 1.7 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \qquad \qquad \label{equ:deg3a}\\ & & b = -3.17 \pm 0.04, \nonumber \\ & & Z_w = -0.18 \pm 0.08, \nonumber \\ {\rm LMC} \ {\rm anchor}, \ {\rm slope} \ {\rm prior:} & & H_0 = 74.8 \pm 1.6 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \label{equ:deg3b}\\ & & b = -3.30 \pm 0.002, \nonumber \\ & & Z_w = -0.18 \pm 0.08, \nonumber \end{eqnarray} \end{subequations} and using both anchors: \begin{subequations} \begin{eqnarray} {\rm NGC}\ 4258 + {\rm LMC} \ {\rm anchors}, \ {\rm no} \ {\rm priors:} & & H_0 = 74.6 \pm 1.5 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \qquad \qquad \label{equ:deg4a}\\ & & b = -3.18 \pm 0.04, \nonumber \\ & & Z_w = -0.22 \pm 0.07, \nonumber \\ {\rm NGC} \ 4258 + {\rm LMC} \ {\rm anchors}, \ {\rm slope} \ {\rm prior:} & & H_0 = 73.5 \pm 1.3 \ {\rm km} {\rm s}^{-1} {\rm Mpc}^{-1}, \label{equ:deg4b}\\ & & b = -3.30 \pm 0.002, \nonumber \\ & & Z_w = -0.21 \pm 0.07. \nonumber \end{eqnarray} \end{subequations} These fits differ slightly from those given in R16 and R19 because the SH0ES team include M31 Cepheids in the fits. The only effect of adding the M31 data is to pull the best fit slope closer to $b=-3.3$; as a consequence, their best fits are intermediate between the `no prior' and `slope prior' results\footnote{There are other minor differences; for example, in R19 the SH0ES team include the ground-based LMC data to better constrain the slope of the PL relation; they also include the NGC 4258 photometry when using the LMC or MW parallaxes as distance anchors, which only affects the slope of the PL relation. These differences are unimportant for our purposes.}. The joint solution of Eq. (\ref{equ:deg4a}) is actually quite close to the R19 solution of Eq. (\ref{equ:H0_1}) and is higher than the \Planck\ value by $4.5 \sigma$. \begin{figure} \centering \includegraphics[width=75mm, angle=0]{figures/4258N_LMCN_joint.pdf} \includegraphics[width=75mm, angle=0]{figures/4258N_LMCN_mu.pdf} \caption{The panel to the left shows 68 and 95\% contours in the $H_0-b$ plane for the solutions of Eqs. (\ref{equ:deg2a}) (red contours) and (\ref{equ:deg3a}) (blue contours). The joint solution for the NGC 4258 and LMC anchors (Eq. (\ref{equ:deg4a})) is shown by the grey contours. The horizontal and vertical bands show the $1\sigma$ and $2\sigma$ ranges of the Planck value of $H_0$ (Eq. (\ref{equ:H0_2})) and the M31 PL slope (Eq. (\ref{equ:data4})). The plot to the right shows the 68\% and 95\% constraints on the NGC 4258 and LMC distance moduli from the joint fit. The geometric distance moduli of Eqs.
The `no prior' solutions are shown in the left hand panel of Fig. \ref{fig:H0_contour}. One might think that the blue and red contours are consistent with each other, but in fact {\it all} of the SN data and {\it almost all} of the Cepheid data are common to both analyses. The difference between these two solutions reflects the tension between the LMC and NGC 4258 anchor distances discussed in Sect. \ref{subsec:LMCanchor}. The joint fit (grey contours) tries to share this tension, but lies closer to the LMC fit because the LMC anchor carries more weight than NGC 4258. The joint fit then leads to a value of $H_0$ that is strongly in tension with \Planck. However, there is a statistical inconsistency in the joint fit. This is illustrated by the right hand plot in Fig. \ref{fig:H0_contour}, which shows constraints on the LMC and NGC 4258 distance moduli from the joint fit. These parameters are, of course, highly correlated, but one can see that the geometrical best fit values (shown by the intersection of the dotted lines) sit well outside the 95\% contours. This is the statistically more rigorous way of quantifying the discrepancy discussed in Sect. \ref{subsec:LMCanchor}, including metallicity effects.

\subsection{The SH0ES degeneracy and values of $H_0$}
\label{subsec:SH0ES degeneracy}

At this point, one might follow R16 and argue that the safest way to proceed is to average over distance anchors. However, this is extremely dangerous if one or more of the distance anchors is affected by systematic errors, or if there are systematics in the photometry that affect some distance anchors but not others.
\begin{figure}
\centering
\includegraphics[width=75mm, angle=0]{figures/4258_LMC_MW_jointDP.pdf}
\includegraphics[width=75mm, angle=0]{figures/4258_LMC_MWF_jointDP.pdf}
\caption{The plot to the left shows 68 and 95\% contours in the $H_0-b$ plane for different combinations of distance anchors. In this analysis, I have added M31 Cepheids, which shifts the contours towards $b=-3.3$ and slightly lower values of $H_0$. The plot to the right shows the effect of subtracting a constant offset $\delta a$ from the distant Cepheid magnitudes, illustrating the SH0ES degeneracy.}
\label{fig:H0_contour1}
\end{figure}
The SH0ES degeneracy is an example of the latter type of systematic. Imagine that there is a constant offset between the SH0ES PL intercepts and the intercepts of the MW and LMC distance anchors. Write the true intercept of the PL relation as
\begin{equation}
a^T = a_{\rm SH0ES} - \delta a. \label{equ:deg5}
\end{equation}
There are a number of possible sources of error that might produce such a shift; for example, systematic errors in crowding corrections (though see \cite{Riess:2020}), asymmetries in the distribution of outliers, asymmetries in the magnitude errors, scene reconstruction errors, or selection biases such as the short period incompleteness discussed in Sect. \ref{subsec:slopes}. The assumption here is that the shift $\delta a$ is the same for all R16 galaxies, whereas for well resolved nearby bright Cepheids, such as those in the LMC and MW, there is no such shift. This model should not be taken literally because the data quality in the SH0ES sample varies (for example, although NGC 4258 is nearer than most of the SN host galaxies, the Cepheids are more crowded in this galaxy). We will approximate the model of Eq. (\ref{equ:deg5}) by subtracting a constant offset from all SH0ES H-band magnitudes,
\begin{equation}
m_H = m_{H,{\rm SH0ES}} - \delta a. \label{equ:deg6}
\end{equation}
Since $\delta a$ is assumed to be a constant, it will cancel if we use NGC 4258 as a distance anchor. However, a constant offset will lead to a bias in the value of $H_0$ if the LMC, MW or M31 are used as distance anchors. This is the SH0ES degeneracy\footnote{The possible cancellation of systematic errors if NGC 4258 is used as a distance anchor was recognised in the early SH0ES papers \cite{Riess:2009, Riess:2011}.}.
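To make the sign and size of this degeneracy concrete, the following toy calculation (a sketch for illustration only; the value of $\delta a$ is hypothetical and this is not part of the SH0ES pipeline) propagates a constant magnitude offset into the anchored value of $H_0$:
\begin{verbatim}
# Toy illustration of the SH0ES degeneracy (Eqs. equ:deg5-equ:deg6).
# A constant offset delta_a applied to all SH0ES magnitudes cancels
# when NGC 4258 (whose Cepheids share the SH0ES photometry) anchors
# the ladder, but biases H0 when the LMC or MW anchor it.
delta_a = -0.10                    # hypothetical offset in mag
bias = 10 ** (-0.2 * delta_a)      # multiplicative bias on H0
print(f"LMC/MW-anchored H0 biased high by {100 * (bias - 1):.1f}%")
# -> ~4.7%, i.e. roughly 3.3 km/s/Mpc at H0 ~ 70
\end{verbatim}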
\begin{wrapfigure}[]{l}{3.0in}
\vspace{-0.14in}
\includegraphics[width=0.4\textwidth, angle=0]{figures/4258_LMCF_1d.pdf}
\caption{Posterior distributions of the offsets $\delta a$ for the 4258+LMC and 4258+LMC+MW solutions shown in the right hand panel of Fig. \ref{fig:H0_contour1}.}
\label{fig:offset}
\end{wrapfigure}
The effect of this degeneracy is illustrated in Fig. \ref{fig:H0_contour1}. The panel to the left shows the progressive combination of the NGC 4258, LMC and MW anchors in the usual way. Note that for this comparison, I have added Cepheids from M31, which pushes the contours closer to $b=-3.3$. The panel to the right shows what happens if we add a constant offset $\delta a$ as a free parameter. Clearly, this has no impact on the results using NGC 4258 as an anchor. However, if we combine the 4258 anchor with any combination of the LMC and MW anchors, all that happens is that $\delta a$ adjusts to bring the combined constraints close to those from the NGC 4258 anchor. All of these contours have substantial overlap with the \Planck\ value of $H_0$. The posterior distributions of $\delta a$ for these solutions are shown in Fig. \ref{fig:offset}. The offsets are as follows:
\begin{subequations}
\begin{eqnarray}
{\rm NGC}\ 4258 + {\rm LMC} \ {\rm anchors}, & & \delta a = -0.14 \pm 0.05, \\
{\rm NGC}\ 4258 + {\rm LMC} + {\rm MW}\ {\rm anchors}, & & \delta a = -0.10 \pm 0.05.
\end{eqnarray}
\end{subequations}
The first of these constraints is another way of expressing the nearly $3\sigma$ tension between the LMC and NGC 4258 anchors. As expected from the discussion in Sect. \ref{sec:SH0ES_data}, we see that an offset of $\approx 0.1 \ {\rm mag}$ (in the sense that the R16 photometry is too bright) can largely resolve the Hubble tension. {\it Increasing the precision of the MW and LMC distance anchors (as has been done in R19 and R20) does not strengthen the case for the Hubble tension, unless one can rule out the SH0ES degeneracy convincingly}. This can be done: (a) by independently determining the distance modulus of NGC 4258, and/or (b) by comparing Cepheid distance moduli for SN hosts with distance moduli determined from other techniques, to which we turn next.
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES team: } We agree that use of NGC 4258 as the only anchor, excluding the LMC and the 5 sets of Milky Way parallaxes, reduces the tension. Further fixing the slope to be steeper than these fits provide by what is more than a few $\sigma$, as G.E. does in this example, reduces the tension further. We don't think this approach is reasonable (not to mention the danger of CMB-confirmation bias) but we have made all the photometry for these exercises available to the Community for their own analyses and the Community has reanalyzed the data, consistently concluding that the data are not easily moved to a place of diminished tension. See Follin and Knox (2018), Cardona et al. (2017) or Burns et al. (2018) for examples. On the second point G.E. hypothesizes a ``common mode'' error where measurements of nearby, bright Cepheids, such as those in the LMC or MW, are measured differently than other, extragalactic Cepheids in SN hosts or NGC 4258. This is a concern that keeps us up at night! The full test of this, as G.E. notes, is comparing $H_0$ using only the anchor NGC 4258, where we get $H_0=72.0$, which, as discussed above, is only $2\sigma$ different from the LMC, hence no real evidence of an issue. We expect this test to improve with the new NGC 4258 Cepheid data now in hand. The most likely way such a common mode error would arise is by ``crowding''. We just published a new paper, Riess et al. (2020), that used the amplitudes of Cepheid light curves to measure such an unrecognized, crowding-induced, common mode error and constrained it to 0.029 $\pm$ 0.037 mag, ruling out crowding as the source of such an error. Count-rate non-linearity is another potential source of common-mode error but was recently calibrated to 0.3\% precision in $H_0$, making it an unsuitable source. We welcome any specific hypotheses for another mechanism for such an error so we can test for it. We also note that we neglect the possibility that Cepheids in nearby hosts (Milky Way, LMC, NGC 4258 and M31) are actually different than those in SN hosts as it would seem to violate the principle that we should not live in a special region of space where Cepheids are fainter.}
\ \par
\end{adjustwidth}

\section{Introduction}
\label{sec:introduction}

We are experiencing very strange times. One consequence of prolonged enforced isolation has been to fuel my obsession with `cosmic tensions'. Without the normal constraints of a `day job', and the restraining influences of close colleagues, this obsession has led to the work described here.

By now, the `Hubble tension' has become well known to the astronomical community and beyond, and so needs very little introduction. The latest analysis of the Cepheid-based distance scale measurement of $H_0$ from the SH0ES collaboration \citep{Riess:2011, Riess:2016, Riess:2019} gives
\begin{equation}
H_0 = 74.03 \pm 1.42 ~\text{km}~\text{s}^{-1} \Mpc^{-1}, \label{equ:H0_1}
\end{equation}
whereas the value inferred from \Planck\ assuming the standard six parameter \LCDM\ model\footnote{I will refer to this model as base \LCDM.} is
\begin{equation}
H_0 = 67.44 \pm 0.58 ~\text{km}~\text{s}^{-1} \Mpc^{-1}, \label{equ:H0_2}
\end{equation}
\cite{PCP18, Efstathiou:2019}. The difference of $6.6 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$ between (\ref{equ:H0_1}) and (\ref{equ:H0_2}) is, apparently, a $4.3\sigma$ discrepancy. Another way of expressing the tension is to note that the difference between (\ref{equ:H0_1}) and (\ref{equ:H0_2}) is nearly $10\%$ and is much larger than the $2\%$ error of the SH0ES estimate, {\it which includes estimates of systematic errors}. This is, therefore, an intriguing tension which has stimulated a large (almost daily) literature on extensions to the base \LCDM\ model, focussing mainly on mechanisms to reduce the sound horizon \citep[for a review see][]{Knox:2020}. Although the Hubble tension first became apparent following the publication of the first cosmological results from \Planck\ \cite{PCP13}, the low value of $H_0$ in the base \LCDM\ cosmology can be inferred in many other ways.
For example, applications of the inverse distance ladder assuming either the \Planck\ or WMAP \cite{Bennett:2013} values of the sound horizon, $r_d$, or a CMB-free value of $r_d$ inferred from primordial nucleosynthesis, consistently yield a low value of $H_0$ \citep[e.g.][]{Aubourg:2015, Verde:2017, Addison:2018, Abbott:2018, Macaulay:2019}. Furthermore, using BAO and SN data, it is possible to reconstruct $H(z)$, independently of the late time behaviour of dark matter and dark energy \cite{Bernal:2016, Lemos:2019}. The resulting low value of $H_0$ strongly suggests that any departure from the base \LCDM\ model must involve changes to the physics at early times. This new physics must mimic the exquisite agreement of the base \LCDM\ model with the \Planck\ temperature and polarization power spectra and hopefully preserve the consistency of observed light element abundances and primordial nucleosynthesis. Models invoking a transient scalar field contributing to the energy density just prior to recombination (early dark energy) have been suggested as a possible solution to the Hubble tension \cite{Poulin:2019, Agrawal:2019, Smith:2020}, but these models fail to match the shape of the galaxy power spectrum \cite{Hill:2020, Ivanov:2020, D'Amico:2020}. The shape of the galaxy power spectrum itself provides tight constraints on the Hubble constant assuming the base \LCDM\ cosmology, consistent with Eq. (\ref{equ:H0_2}) \cite{D'Amico:2020b, Philcox:2020}. I think it is fair to say that despite many papers, no compelling theoretical solution to the Hubble tension has yet emerged.

An alternative explanation\footnote{With all due respect to the SH0ES team.} is that the SH0ES result is biased by systematic errors that are not included in the error estimate of Eq. (\ref{equ:H0_1}). As is well known, distance ladder measurements of $H_0$ are extremely challenging and have a long and chequered history. However, before progressing further it is important to note that at the time of writing the balance of evidence seems to be tilted in favour of the SH0ES result. Gravitational lensing time delays \citep{Bonvin:2017, Wong:2019, Shajib:2020}, distant maser galaxies \citep{Pesce:2020} and Mira variables \cite{Huang:2020} all yield high values of the Hubble constant compared to the base \LCDM\ value of (\ref{equ:H0_2}), though with larger errors than the SH0ES estimate. However, the recent TRGB measurements \citep{Freedman:2019, Freedman:2020} give
\begin{equation}
H_0 = 69.6 \pm 1.9 ~\text{km}~\text{s}^{-1} \Mpc^{-1}, \label{equ:H0_3}
\end{equation}
apparently intermediate between the \Planck\ and SH0ES values (but see Sect. \ref{sec:TRGB}).

In this article, I address the following questions:
\smallskip

\noindent (i) Can we imagine systematics that would bias the SH0ES analysis?
\smallskip

\noindent (ii) Is there any evidence for such systematics?
\smallskip

\noindent (iii) Are the TRGB results of \citep{Freedman:2019, Freedman:2020} compatible with SH0ES?
\smallskip

\noindent (iv) What new observations are needed to resolve the discrepancies identified in this article?

\section{Comments on the SH0ES data}
\label{sec:SH0ES_data}

I will refer to the sequence of SH0ES papers \cite{Riess:2011, Riess:2016, Riess:2019} as R11, R16, and R19 respectively, and I will begin with some representative numbers. Suppose we wanted to explain the Hubble tension by invoking an additive error $\delta m$ in the magnitudes of SH0ES Cepheids. The shift in the value of $H_0$ would be
\begin{equation}
{\delta H_0 \over H_0} = 0.2\ln 10 \ \delta m. \label{equ:data1}
\end{equation}
The $6.6 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$ difference between (\ref{equ:H0_1}) and (\ref{equ:H0_2}) requires $\delta m = 0.2 \ {\rm mag}$ (in the sense that the SH0ES Cepheids would have to be too bright)\footnote{Note that a shift of $\delta m = 0.1 \ {\rm mag}$ would reduce the Hubble tension to about $2\sigma$.}. However, the errors on individual distance moduli of SN host galaxies (e.g. from Table 5 of R16) are typically quoted as $\delta \mu \sim 0.06 \ {\rm mag}$. Averaging over $19$ SN host galaxies, the error in the Cepheid magnitude scale is of order $0.06/\sqrt{19} \sim 0.014 \ {\rm mag}$, {\it i.e.} about an order of magnitude smaller than the required offset. At first sight it seems implausible that a magnitude offset could have any bearing on the Hubble tension.
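These numbers can be verified with a back-of-the-envelope sketch that uses only Eqs. (\ref{equ:H0_1}), (\ref{equ:H0_2}) and (\ref{equ:data1}):
\begin{verbatim}
import numpy as np

# Invert Eq. (equ:data1), dH0/H0 = 0.2 ln(10) dm, to get the magnitude
# offset needed to reconcile the SH0ES and Planck values of H0.
H0_shoes, H0_planck = 74.03, 67.44            # km/s/Mpc
dm = 5.0 * np.log10(H0_shoes / H0_planck)     # exact inversion
print(f"required offset: {dm:.2f} mag")       # ~0.20 mag

# Naive error on the mean Cepheid magnitude scale from 19 SN hosts:
print(f"averaged error: {0.06 / np.sqrt(19):.3f} mag")   # ~0.014 mag
\end{verbatim}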
\begin{figure}
\centering
\includegraphics[width=150mm,angle=0]{figures/HR11residual.pdf}
\caption{R11 PL magnitude residuals relative to the global fit 5 of Table 2 in E14. Red points show residuals for Cepheids that are accepted by R11, while blue points are rejected by R11.}
\label{fig:HR11residual}
\end{figure}

\subsection{Outliers}
\label{subsec:outliers}

However, consider Fig. \ref{fig:HR11residual}. This is a combination of the two panels of Fig. 2 from \cite{Efstathiou:2014} (hereafter E14). It shows residuals of the Wesenheit H-band ($m^W_H$) period luminosity (PL) relation from a global fit to the 9 galaxies in the R11 sample. As in R11,
\begin{equation}
m_H^W = m_H - R(V-I), \label{equ:data2}
\end{equation}
where $H=F160W$, $V=F555W$ and $I=F814W$ in the HST photometric system. In Fig. \ref{fig:HR11residual}, I used $R=0.41$ (consistent with R11), though for the rest of this article I use $R = 0.386$ to be consistent with R16 and later SH0ES papers. One can see from Fig. \ref{fig:HR11residual} that there are many `outliers', with residuals that differ by 3 mag or more from the best fit. R11 identified outliers by fitting the PL relation for each galaxy separately (rather than identifying outliers with respect to the global fit). The R11 rejection algorithm works as follows:

\noindent $\bullet$ The $m_H$ PL relations are fitted galaxy-by-galaxy, weighted by the magnitude errors in Table 2 of R11, to a power law with slope fixed at $b=-3.1$ in the first iteration. Cepheids with periods $> 205$ days are excluded.

\noindent $\bullet$ Cepheids are rejected if they deviate from the best-fit relation by $\ge 0.75 \ {\rm mag}$, or by more than 2.5 times the magnitude error.

\noindent $\bullet$ The fitting and rejection is repeated iteratively 6 times.
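A minimal sketch of this style of iterative clipping is given below (illustrative only: it keeps the slope fixed throughout, whereas R11 refit the slope after the first iteration):
\begin{verbatim}
import numpy as np

def r11_style_clip(logP, m, m_err, b=-3.1, n_iter=6):
    # Fit the PL intercept with the slope held at b, reject Cepheids
    # deviating by >= 0.75 mag or by more than 2.5 times their
    # magnitude error, and iterate.
    keep = np.ones(len(m), dtype=bool)
    for _ in range(n_iter):
        w = 1.0 / m_err[keep] ** 2
        a = np.sum(w * (m[keep] - b * logP[keep])) / np.sum(w)
        resid = m - (a + b * logP)
        keep = (np.abs(resid) < 0.75) & (np.abs(resid) <= 2.5 * m_err)
    return a, keep
\end{verbatim}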
\smallskip
The Cepheids rejected by R11 are shown by the blue points in Fig. \ref{fig:HR11residual} and those accepted by R11 are shown by the red points. For the red points, the dispersion around the mean is $0.47$ mag, and it is $0.60$ mag for all points. {\it To avoid a bias in the measurement of $H_0$, the mean must be determined to an accuracy much better than $0.1$ mag (or a quarter of a tick mark on the y-axis in Fig. \ref{fig:HR11residual})}. The finite width of the instability strip will produce an `intrinsic' scatter in the PL relation. In the H-band, the intrinsic scatter is about $0.08$ mag and can be measured accurately from photometry of LMC and M31 Cepheids. The much higher dispersion apparent in Fig. \ref{fig:HR11residual} is a consequence of photometric errors and misidentification of Cepheids (type II Cepheids misclassified as classical Cepheids) and is larger than the error estimates given in R11. In the H-band, Cepheid photometry of SN host galaxies is challenging even with HST because of crowding. Photometry requires deblending of the images, as illustrated in Fig. 5 of R16, and also crowding corrections to the local sky background. In addition, one can see from Fig. \ref{fig:HR11residual} that the blue points are distributed asymmetrically around the mean, and so the question that has worried me is whether the PL relations are really unbiased at the $0.1$ mag level, given that the underlying data are so noisy and contain so many outliers\footnote{E14 experimented with different ways of rejecting outliers relative to {\it global} fits.}. The SH0ES methodology requires the absence of any systematic biases, relying on large numbers of Cepheids to reduce the errors on distance moduli well below the $\sim 0.5$ mag dispersion seen in Fig. \ref{fig:HR11residual}.
\begin{figure}
\centering
\includegraphics[width=150mm,angle=0]{figures/HR16residual.pdf}
\caption{R16 magnitude residuals relative to a global fit of the PL relation for the 19 SN host galaxies and NGC 4258.}
\label{fig:HR16residual}
\end{figure}
If we now fast forward to R16, the equivalent plot to Fig. \ref{fig:HR11residual} is shown in Fig. \ref{fig:HR16residual}. The number of SN host galaxies with Cepheid photometry was increased from 8 in R11 to 19 in R16. The optical data (in the F350LP, F555W and F814W filters) for the R16 sample are described in detail in \cite{Hoffmann:2016}\footnote{In the optical, the images are largely the same, but with WFC3 data for three galaxies (NGC 1309, 3021, 3370) supplementing earlier ACS and WFPC2 images.}. For the 9 galaxies in common between R11 and R16, the F160W images are identical in the two analyses, but were reprocessed for R16. There are now no obvious outliers in Fig. \ref{fig:HR16residual}, and the dispersion around the mean is $0.35 \ {\rm mag}$ with $\chi^2=1114$ for 1037 Cepheids. What happened to the outliers? The outliers in R16 were removed in two stages: first, for each individual galaxy, Cepheids were rejected if their F814W-F160W colours lay outside a $1.2 \ {\rm mag}$ colour range centred on the median colour. An additional $\approx 5\%$ of the Cepheids were rejected if their magnitudes differed by more than $2.7\sigma$ from a global fit. Since colour selection removes a large fraction of the outliers, R16 argue that outlier rejection is not a serious issue. However, R16 did not publish magnitudes for the rejected Cepheids and so it is not possible to check the impact of colour selection and outlier rejection. It is not obvious to this author that applying symmetric rejection criteria to exclude outliers will produce unbiased results. The R16 photometric data have been investigated by \cite{Follin:2018} and (extensively) by myself and appear to be self consistent.
\vfill\pagebreak\newpage
\begin{adjustwidth}{0.25in}{0.25in}
\noindent{\narrower{\underline{\bf Response from SH0ES Team:} The discovery of fundamental-mode Cepheid variables from time-series imaging requires a number of winnowing steps from non-variables, which outnumber them more than 1000-fold, and other variable types which rival their frequency.
Colors are useful to distinguish Cepheids (which have spectral types F-K, a modest color range) from other variables like Miras and Semi-regular variables, which are redder (spectral type M), or RR Lyraes, which are bluer (spectral types A-F). Colors may also remove strong blends of a Cepheid with a red or blue Giant. R11 did not use $I-H$ colors in this way and subsequently rejected many ``outliers'' off of the period-luminosity relation, though these were published for further analyses, e.g., by G.E. R16 added $I-H$ color selection and at the same rejection threshold of the PL relation found very few PL relation outliers, about 2\% of the sample, and their inclusion or exclusion has little impact on $H_0$. So most of the R11 outliers were too red or blue to be Cepheids and are not outliers in R16 because they never make it to the PL relation. (We consider outliers only as objects that pass selection criteria.) We think the R16 selection is a clear and defined improvement in method and supersedes the R11 results, which are no longer relevant.}}
\ \par
\ \par
\end{adjustwidth}

Nevertheless, we can test the R16 analysis by invoking additional data. Here I describe four tests:

\noindent (i) Consistency of the slope of the SH0ES PL relation with that of nearby galaxies.

\noindent (ii) Consistency of the R11 and R16 photometry.

\noindent (iii) Consistency of the distance anchors.

\noindent (iv) Consistency of Cepheid and TRGB distance moduli.

\begin{figure}
\centering
\includegraphics[width=130mm,angle=0]{figures/slopes.pdf}
\caption{PL relations for M31 (upper plot) and M101 (lower plot). The best fit slope and its $1\sigma$ error are listed in each plot.}
\label{fig:slopes}
\end{figure}

\subsection{The slope of the PL relation}
\label{subsec:slopes}

The upper panel of Fig. \ref{fig:slopes} shows the PL relation for M31\footnote{The M31 photometry is listed in R16 and is based on the following sources: \cite{Dalcanton:2012, Riess:2012, Kodric:2015, Wagner-Kaiser:2015}.}. The solid line shows a fit
\begin{equation}
m_H^W = a + b \log_{10} \left [ { P \over 10 \ {\rm days}} \right ], \label{equ:dataP}
\end{equation}
where $P$ is the Cepheid period. For M31, the slope is\footnote{There has been some discussion in the literature, e.g. \cite{Kodric:2015}, of a break in the slope of the PL relation at $P \sim \ 10 \ {\rm days}$. There is no evidence of such a break in the H-band. Fitting the M31 PL relation to Cepheids with $P \ge 10 \ {\rm days}$ gives $b = -3.30 \pm 0.06$. Note that for the LMC, using the 70 Cepheids with HST photometry listed in R19, I find $b=-3.32 \pm 0.04$.}
\begin{equation}
b = -3.30 \pm 0.03. \label{equ:data4}
\end{equation}
The lower panel in Fig. \ref{fig:slopes} shows the PL relation for M101 using the photometry from R16. The slope for M101 is much shallower than that for M31, differing by $3.9\sigma$, strongly suggesting a bias. One finds similar results for other R19 galaxies, including NGC 4258. Global fits to the PL relation using either the R11 or R16 data therefore give a slope that is shallower than (\ref{equ:data4}). The reason for this is clear from Fig. 15 of \cite{Hoffmann:2016}, which shows that the Cepheid sample is incomplete at short periods. The short period Cepheids are faint and close to the photometric limits of the R16 observations. As a consequence, the optical selection is biased towards brighter Cepheids. \cite{Hoffmann:2016} impose lower limits on the periods for the final Cepheid sample, but it is clear from their Fig. 15 that these limits are not sufficiently conservative.
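For concreteness, a minimal sketch of the error-weighted linear fit of Eq. (\ref{equ:dataP}) used throughout this section follows; the input arrays are illustrative placeholders, not real Cepheid photometry:
\begin{verbatim}
import numpy as np

# Weighted least-squares fit of m_H^W = a + b*log10(P/10 days).
P = np.array([12.0, 18.5, 27.0, 41.0, 60.0])     # periods in days
m = np.array([25.1, 24.5, 23.9, 23.4, 22.8])     # Wesenheit mags
m_err = np.array([0.30, 0.25, 0.30, 0.35, 0.30])

x = np.log10(P / 10.0)
A = np.vstack([np.ones_like(x), x]).T
w = 1.0 / m_err**2
cov = np.linalg.inv(A.T @ (w[:, None] * A))      # parameter covariance
a, b = cov @ A.T @ (w * m)
a_err, b_err = np.sqrt(np.diag(cov))
print(f"a = {a:.2f} +/- {a_err:.2f}, b = {b:.2f} +/- {b_err:.2f}")
\end{verbatim}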
If we assume that the Cepheid PL relation is universal\footnote{Clearly a necessary assumption if we are to use Cepheids in distance ladder measurement.} then we can assess the impact of this incompleteness bias by comparing values of $H_0$ with and without imposing a prior on the slope (see E14). We will show below that imposing a strong slope prior of $b=-3.300 \pm 0.002$\footnote{The width of this prior is not a typo. To force $b=-3.3$, the prior must be tight enough to counteract the shallow slope of the R16 PL relation.} {\it lowers} $H_0$ by $1.7 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$. This is a systematic shift, which is larger than the $1\sigma$ error quoted in Eq. (\ref{equ:H0_1}).

\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES Team:} We find the statistical uncertainty in individual slopes is often an underestimate because individual slopes are strongly influenced by the presence or absence of rare very long period Cepheids ($P>70$ days). For example, in the LMC we measured the H-band slope from 43 Cepheids with $P>10$ days (Riess et al. 2019) to be $-3.38 \pm 0.07$ mag, while Persson et al. (2004) measured it to be $-3.16 \pm 0.1$ from 39 $P>10$ day Cepheids, a highly significant difference considering this is the same galaxy and mostly the same objects. The difference comes from the inclusion of 2 rare ones at $P>50$ days, $P=99$ and $P=134$ days, which were too bright for us to observe with HST. At long periods Cepheids are too rare to well populate the instability strip and may even depart somewhat in slope. We also note that the two hosts G.E. compared, M101 and M31, lie at opposite extremes among the sample of 20 plus hosts. A fair test of the uniformity of slopes should be made in a narrower period range where all hosts are well-populated.}
\ \par
\end{adjustwidth}

\subsection{Comparison of the R11 and R16 photometry}
\label{subsec:photometry}

R11 and R16 contain 9 galaxies in common. As a consistency check, it is interesting to compare the R11 and R16 photometry on an object-by-object basis. It is not possible to do this for all Cepheids within a particular galaxy, because the R16 data have been pre-clipped: data for Cepheids rejected as outliers are not presented. Also, there are Cepheids listed in R16 that are not listed in R11. The overlap between the two samples is high enough, however, that one can draw reliable conclusions. The details of this comparison are given in Appendix \ref{sec:appendix}. I summarize the results in this subsection.

\begin{figure}
\centering
\includegraphics[width=74mm,angle=0]{figures/R4536R11.pdf}
\includegraphics[width=74mm,angle=0]{figures/R4536R16.pdf}
\caption{Left plot shows the PL relation for R11 Cepheids in the SN host galaxy NGC 4536. Blue points show Cepheids rejected by R11 (IFLAG = 1); red points show Cepheids accepted by R11 (IFLAG = 0). The solid line shows the best fit linear relation fitted to the red points. The dashed line shows the best fit with the slope constrained to $b=-3.3$. Right plot shows the PL relation for R16 Cepheids.
The solid line shows the best fit linear relation and the dashed line shows the best fit with the slope constrained to $b=-3.3$.}
\label{fig:R4536a}
\end{figure}

\begin{table}[h]
\begin{center}
\begin{tabular}{lllllll}
\hline
& \multicolumn{3}{c}{R11} & \multicolumn{3}{c}{R16} \\
galaxy & \qquad $b$ & \qquad $a$ & \qquad $a_{-3.3}$ & \qquad $b$ & \qquad $a$ & \qquad $a_{-3.3}$ \\
\hline
N4536 & $-2.95 \pm 0.18$ & $24.88 \pm 0.10$ & $25.05 \pm 0.05$ & $-3.27 \pm 0.11$ & $24.99 \pm 0.11$ & $25.01 \pm 0.04$ \\
N4639 & $-2.68 \pm 0.50$ & $25.41 \pm 0.30$ & $25.78 \pm 0.08$ & $-2.33 \pm 0.47$ & $25.03 \pm 0.29$ & $25.61 \pm 0.06$ \\
N3370 & $-3.17 \pm 0.25$ & $26.18 \pm 0.17$ & $26.28 \pm 0.04$ & $-3.34 \pm 0.21$ & $26.21 \pm 0.14$ & $26.18 \pm 0.04$ \\
N3982 & $-3.77 \pm 0.56$ & $26.18 \pm 0.32$ & $25.91 \pm 0.08$ & $-2.54 \pm 0.21$ & $25.40 \pm 0.29$ & $25.85 \pm 0.06$ \\
N3021 & $-2.86 \pm 0.49$ & $26.08 \pm 0.28$ & $26.31 \pm 0.10$ & $-2.29 \pm 0.60$ & $26.04 \pm 0.34$ & $26.58 \pm 0.09$ \\
N1309 & $-2.35 \pm 0.60$ & $26.08 \pm 0.45$ & $26.78 \pm 0.07$ & $-3.09 \pm 0.38$ & $26.47 \pm 0.29$ & $26.63 \pm 0.04$ \\
N5584 & $-2.87 \pm 0.23$ & $25.57 \pm 0.16$ & $25.85 \pm 0.04$ & $-3.07 \pm 0.18$ & $25.72 \pm 0.12$ & $25.88 \pm 0.03$ \\
N4038 & $-2.84 \pm 0.29$ & $25.45 \pm 0.26$ & $25.84 \pm 0.08$ & $-3.81 \pm 0.75$ & $25.79 \pm 0.63$ & $25.37 \pm 0.11$ \\
N4258 & $-3.22 \pm 0.15$ & $23.39 \pm 0.06$ & $23.43 \pm 0.04$ & $-3.15 \pm 0.10$ & $23.35 \pm 0.04$ & $23.40 \pm 0.03$ \cr
\hline
\end{tabular}
\caption{Fits to the PL relation for data given in R11 and R16.}
\label{table:PL}
\end{center}
\end{table}

To give an idea of the differences between R11 and R16, Fig. \ref{fig:R4536a}\footnote{This figure is reproduced from Appendix \ref{sec:appendix}, which includes equivalent plots for all 9 galaxies in common between R11 and R16.} shows the PL relations for NGC 4536. The solid lines show fits to Eq. (\ref{equ:dataP}) (fitted only to the IFLAG = 0 Cepheids for the R11 data). The dashed lines show fits with the slope constrained to $b=-3.3$; the intercept of these fits is denoted as $a_{-3.3}$. The parameters for these fits are listed in Table \ref{table:PL}. The error weighted averages of the slopes are:
\begin{subequations}
\begin{eqnarray}
\langle b \rangle & = & -2.95 \pm 0.10, \quad {\rm R11 \ excluding \ NGC \ 4258}, \\
\langle b \rangle & = & -3.04 \pm 0.08, \quad {\rm R11 \ including \ NGC \ 4258}, \\
\langle b \rangle & = & -3.09 \pm 0.09, \quad {\rm R16 \ excluding \ NGC \ 4258}, \\
\langle b \rangle & = & -3.11 \pm 0.07, \quad {\rm R16 \ including \ NGC \ 4258},
\end{eqnarray}
\end{subequations}
consistent with the shallow slopes determined from global fits.

\begin{wrapfigure}[]{l}{3.0in}
\vspace{-0.00in}
\includegraphics[width=0.4\textwidth, angle=0]{figures/pg_intercept.pdf}
\caption{The intercept $(a_{-3.3})_{R11}$ plotted against $(a_{-3.3})_{R16}$. The dashed line shows the best fit relation with a slope of unity. The solid line shows $(a_{-3.3})_{R11} = (a_{-3.3})_{R16}$.}
\label{fig:intercept}
\vspace{0.09in}
\end{wrapfigure}
The uncertainties in the intercepts are very large if the slopes are allowed to vary, and so we focus on the constrained intercepts $a_{-3.3}$. These are plotted in Fig. \ref{fig:intercept} for the 8 SN host galaxies. The dashed line shows the `best fit' relation with a slope of unity and an offset,
\begin{subequations}
\begin{equation}
(a_{-3.3})_{R16} = (a_{-3.3})_{R11} + \Delta a, \label{equ:data3a}
\end{equation}
where
\begin{equation}
\Delta a = -0.062 \pm 0.027. \label{equ:data5}
\end{equation}
\end{subequations}
The fit of Eq. (\ref{equ:data5}) assumes that the errors on $(a_{-3.3})_{R11}$ and $(a_{-3.3})_{R16}$ are independent, which is clearly not true since the imaging data are largely common to both samples. In reality the offset is much more significant than the $2.3\sigma$ suggested by Eq. (\ref{equ:data5}). It is also clear from Fig. \ref{fig:intercept} that the errors are underestimated\footnote{Assuming independent errors, the dashed line gives $\chi^2= 21.27$ for 8 degrees of freedom, which is high by $\sim 3.3\sigma$.}. The offset of Eq. (\ref{equ:data5}) translates to a systematic shift in $H_0$ of about $2 ~\text{km}~\text{s}^{-1} \Mpc^{-1}$, in the sense that the R16 data give a higher $H_0$, for all distance anchors\footnote{This can easily be verified by repeating the $H_0$ analysis for the 9 galaxies in R11 using the R16 photometry.}. Again, this systematic shift is larger than the error given in Eq. (\ref{equ:H0_1}). The origin of this offset is surprising and is discussed in detail in Appendix \ref{sec:appendix}. The object-by-object comparison of the photometry between R11 and R16 yields mean offsets (after removal of a few outliers) of
\begin{subequations}
\begin{eqnarray}
(m_H^W)_{\rm R16} &= & (m_H^W)_{\rm R11} + \langle \Delta m_H^W \rangle, \qquad \langle \Delta m_H^W \rangle = -0.051 \pm 0.025, \label{equ:data6a}\\
(V-I)_{\rm R16} &= & (V-I)_{\rm R11} + \langle \Delta C \rangle, \qquad \langle \Delta C \rangle = 0.14 \pm 0.03. \label{equ:data6b}
\end{eqnarray}
\end{subequations}
The offset in Eq. (\ref{equ:data5}) comes almost entirely from the difference in colours. As discussed in Appendix \ref{sec:appendix}, the colour offset is very significant: the errors in photometric scene reconstruction are correlated between different passbands (and, in any case, are smaller in $V$ and $I$ than in the $H$ band), so the errors in $V-I$ colours are much smaller than the errors for individual passbands. This large systematic difference in colours is not discussed by R16.
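That the colour term dominates can be seen with a one-line check (using $R = 0.386$, as in Sect. \ref{subsec:outliers}):
\begin{verbatim}
# Since m_H^W = m_H - R(V-I), the colour shift of Eq. (equ:data6b)
# alone moves the Wesenheit magnitudes by -R * <Delta C>:
R = 0.386
print(-R * 0.14)   # -0.054 mag, cf. <Delta m_H^W> = -0.051 (equ:data6a)
\end{verbatim}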
\begin{adjustwidth}{0.25in}{0.25in}
\ \par
\noindent{\underline{\bf Response from SH0ES Team:} There have been many updates to the official STScI pipeline between R11 in 2011 (produced shortly after WFC3 was installed) and R16 in 2016 (when WFC3 was mature), including changes in zeropoints, geometric distortion files, flat fields, CTE correction procedures, dark frames, hot pixel maps, bias frames, count linearity calibration, and count-rate non-linearity corrections, which change photometry. We believe the sum of these improved accuracy, and the comparison to the early knowledge of WFC3 (or earlier methods of SH0ES) is not informative on present accuracy (or else we are all doomed by the past!). We note with levity better communicated as a verbal touch\'e that the Planck calibration of the overall power spectrum, indicated by the parameter $A_s e^{-2 \tau}$, changed by 3.5 $\sigma$ between the 2013 and 2015 Planck releases, as documented in Table 1 of the 2015 paper. We celebrate the improved accuracy of Planck and promise we do not see this as any indication that improved accuracy cannot be established (stated with tongue in cheek). We also note an important improvement to the SH0ES data since 2016 (e.g., Riess et al. 2018, 2019) is to have now measured Cepheids in the LMC and the Milky Way on the same WFC3 system as extragalactic Cepheids, which makes $H_0$ insensitive to future changes in the calibration of WFC3 since $H_0$ is only sensitive to differences in Cepheid measurements.}
\ \par
\end{adjustwidth}

\section{Comparison of Cepheid and TRGB distance moduli}
\label{sec:TRGB}

As discussed in Sect. \ref{sec:introduction}, recently \cite{Freedman:2019} (hereafter F19) have determined a value for $H_0$ using the TRGB as a standard candle. There are $10$ SN host galaxies in common between R16 and F19, which are listed in Table \ref{table:distance_moduli}. Five of these have TRGB distance measurements as part of the Carnegie-Chicago Hubble Program (CCHP); the remaining five galaxies labelled `JL' (which are more distant) have TRGB distances from data analysed by \cite{Jang:2017a} but reanalysed by F19. These distance moduli are listed in Table \ref{table:distance_moduli} together with the distance moduli for the solution of Eq. (\ref{equ:deg2a}) using NGC 4258 as a distance anchor and the solution of Eq. (\ref{equ:deg3a}) using the LMC as an anchor.

\begin{figure}
\centering
\includegraphics[width=120mm,angle=0]{figures/distance_mod_residual.pdf}
\caption{Differences in the TRGB and Cepheid distance moduli from Table \ref{table:distance_moduli} plotted against $\mu_{\rm TRGB}$. Filled and open symbols are for the galaxies labelled `CCHP' and `JL' respectively in Table \ref{table:distance_moduli}. The red and blue dotted lines show the best-fit offsets to the red points and blue points respectively.}
\label{fig:distance_modulus}
\end{figure}

\begin{table}[h]
\begin{center}
\begin{tabular}{lllll}
\hline
& & {\rm LMC} \ {\rm anchor} & {\rm N4258} \ {\rm anchor} & {\rm LMC} \ {\rm anchor} \\
galaxy & {\rm TRGB} & $\mu_{\rm TRGB}$ & $\mu_{\rm Cepheid}$ & $\mu_{\rm Cepheid}$ \\
\hline
N4536 & CCHP & $30.96 \pm 0.05$ & $30.92 \pm 0.05$ & $30.80 \pm 0.05$ \\
N3370 & JL & $32.27 \pm 0.05$ & $32.09 \pm 0.05$ & $31.97 \pm 0.04$ \\
N3021 & JL & $32.22 \pm 0.05$ & $32.43 \pm 0.08$ & $32.30 \pm 0.07$ \\
N1448 & CCHP & $31.32 \pm 0.06$ & $31.35 \pm 0.04$ & $31.22 \pm 0.04$ \\
N1309 & JL & $32.50 \pm 0.07$ & $32.51 \pm 0.05$ & $32.40 \pm 0.05$ \\
N5584 & JL & $31.82 \pm 0.10$ & $31.80 \pm 0.04$ & $31.68 \pm 0.04$ \\
N4038 & JL & $31.68 \pm 0.05$ & $31.39 \pm 0.09$ & $31.27 \pm 0.09$ \\
M101 & CCHP & $29.08 \pm 0.04$ & $29.22 \pm 0.04$ & $29.07 \pm 0.04$ \\
N4424 & CCHP & $31.00 \pm 0.06$ & $30.86 \pm 0.11$ & $30.73 \pm 0.11$ \\
N1365 & CCHP & $31.36 \pm 0.05$ & $31.32 \pm 0.06$ & $31.19 \pm 0.05$ \cr
\hline
\end{tabular}
\caption{Galaxies with F19 TRGB and SH0ES Cepheid distance moduli. The second column denotes the source of the TRGB data (see text). The third column lists the TRGB distance modulus and error from Table 3 of F19, calibrated with the LMC DEB distance. The fourth and fifth columns list the distance moduli for the solutions of Eqs. (\ref{equ:deg2a}) and (\ref{equ:deg3a}). The errors on these estimates reflect errors arising from the PL fits only. They do not include errors in the anchor distances, which would shift all distance moduli up or down by the same amount.}
\label{table:distance_moduli}
\end{center}
\end{table}

The dotted lines in Fig. \ref{fig:distance_modulus} show least squares fits of a constant offset. For the red points, the offset is close to zero.
However, for the blue points, there is an offset
\begin{equation}
\mu_{\rm Cepheid} - \mu_{\rm TRGB} = -0.139 \pm 0.024 \ {\rm mag}, \label{equ:dist}
\end{equation}
and since both sets of distance moduli are based on the geometric distance to the LMC, the error in the DEB distance cancels. If the calibration of the TRGB is correct (a topic of some controversy \cite{Yuan:2019, Freedman:2019}), this comparison reveals a statistically significant ($\approx 6\sigma$) offset compared to the LMC calibration of the Cepheid distances. The TRGB distance scale is, however, compatible with the NGC 4258 calibration of the Cepheid distances\footnote{Interestingly, \cite{Reid:2019} reached a similar conclusion.}. The tension between these two calibrations leads to the offset of Eq. (\ref{equ:dist}) which, of course, is almost identical to the offset $\delta a$ found in Sect. \ref{subsec:SH0ES degeneracy}. As a consequence of these offsets, the TRGB value of $H_0$ must be close to the value inferred from the NGC 4258 calibration of the Cepheid distance scale and should strongly disfavour the value derived from the LMC calibration.

\begin{figure}
\centering
\includegraphics[width=48mm,angle=0]{figures/pgH0F.pdf}
\includegraphics[width=48mm,angle=0]{figures/pgH0R_4258.pdf}
\includegraphics[width=48mm,angle=0]{figures/pgH0R_LMC.pdf}
\caption{Distance moduli from Table \ref{table:distance_moduli} plotted against {\tt Supercal} Type Ia SN B-band peak magnitudes from Table 5 of R16. The solid lines show the best fit linear relation of Eq. (\ref{equ:H0a}). The best fit values of $H_0$ from Eq. (\ref{equ:H0b}) for each relation are given in each panel. Note that the quoted errors on $H_0$ include only the photometric errors on $\mu$ and $m_{\rm B, SN} +5a_{\rm B}$.}
\label{fig:H0}
\end{figure}

This is illustrated in Fig. \ref{fig:H0}. Here the values and errors on $m_{\rm B, SN} + 5a_{\rm B}$ are from Table 5 of R16, where $m_{\rm B, SN}$ is the {\tt Supercal} B-band peak magnitude and $a_{\rm B}$ is the intercept of the SN Ia magnitude-redshift relation. These are plotted against the SN host distance moduli from Table \ref{table:distance_moduli}. We perform a least squares fit to determine the offset $\alpha$,
\begin{equation}
\mu = m_{\rm B, SN} + 5a_{\rm B} - \alpha, \label{equ:H0a}
\end{equation}
which gives a value for $H_0$,
\begin{equation}
H_0 = 10^{(0.2\alpha + 5)}, \quad \delta H_0 = 0.2 \ln 10 \ H_0 \, \delta\alpha. \label{equ:H0b}
\end{equation}
Only the errors in $m_{\rm B, SN} +5a_{\rm B}$ and the errors in the distance moduli listed in Table \ref{table:distance_moduli} are included in the fits. The best fit values of $H_0$ and their $1\sigma$ errors are listed in each panel of Fig. \ref{fig:H0}. Note that these error estimates do not include the error in the anchor distance. The resulting values of $H_0$ can be easily understood. The value of $H_0$ in panel (a) for the TRGB distance moduli is consistent with the F19 value of Eq. (\ref{equ:H0_3}), showing that the subsample of galaxies and SN data used in Fig. \ref{fig:H0} gives similar results to the full sample analysed by F19. Likewise, the fit to panel (b) agrees well with the solution of Eq. (\ref{equ:deg2a}) and the fit to panel (c) agrees with Eq. (\ref{equ:deg3a}). Since these results use the same SN data, the low value of $H_0$ derived from the TRGB distance moduli is caused almost entirely by a {\it calibration} difference. The TRGB calibration strongly disfavours the LMC calibration of the R16 Cepheids, as quantified by Eq. (\ref{equ:dist}).
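A sketch of the fit of Eqs. (\ref{equ:H0a}) and (\ref{equ:H0b}) is given below; the per-galaxy inputs are illustrative stand-ins for the values in Table \ref{table:distance_moduli} and Table 5 of R16, not the real data:
\begin{verbatim}
import numpy as np

def h0_from_offset(mu, mu_err, mB5aB, mB5aB_err):
    # Inverse-variance weighted estimate of alpha in Eq. (equ:H0a),
    # converted to H0 and its error via Eq. (equ:H0b).
    resid = mB5aB - mu                     # per-galaxy alpha estimates
    w = 1.0 / (mu_err**2 + mB5aB_err**2)   # photometric errors only
    alpha = np.sum(w * resid) / np.sum(w)
    alpha_err = np.sqrt(1.0 / np.sum(w))
    H0 = 10.0 ** (0.2 * alpha + 5.0)
    return H0, 0.2 * np.log(10.0) * H0 * alpha_err

mu = np.array([30.92, 32.09, 31.35])         # illustrative moduli
mu_err = np.array([0.05, 0.05, 0.04])
mB5aB = np.array([15.25, 16.40, 15.68])      # m_B,SN + 5 a_B
mB5aB_err = np.array([0.12, 0.10, 0.11])
print(h0_from_offset(mu, mu_err, mB5aB, mB5aB_err))   # ~ (73.2, 2.3)
\end{verbatim}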
\begin{adjustwidth}{0.25in}{0.25in} \ \par \noindent{\underline{\bf Response from SH0ES team:} In our own analyses we find that the {\it relative} distances measured by Cepheids and TRGB agree well, matching G.E. here, thus the difference in derived $H_0$ (about 1.5 $\sigma$ as stated in Freedman et al. 2020) must occur from a disparity in their absolute calibration. We think the best way to compare their calibrations is by using the same geometric distance calibration for each. Doing so using NGC 4258 we see no difference as Cepheids give $H_0$=72.0 and TRGB gives 71-72 (see Jang and Lee 2017 and Reid et al. 2019) and if we use the same SN Ia measurements we find that they are spot on. The disparity then arises only for the LMC where TRGB with the F20 calibration gives $H_0$=70 and Cepheids give 74. We think it likely that this difference may be traced to the extinction of TRGB in the LMC that is used in the F20 analysis. F20 finds $A_I=0.16 \pm 0.02$ mag and $H_0=70$ while Yuan et al. (2019) finds $A_I=0.10$, as did Jang and Lee (2017) and $H_0=72.7$. (Cepheids use NIR Wesenheit magnitudes so determination of absolute extinction is not an issue for Cepheids.) The determination of extinction of TRGB in the LMC is challenging and its resolution is required to conclude whether TRGB and Cepheid agree when using the LMC as an anchor. The comparison using NGC 4258 is cleaner for TRGB since the dust is negligible, the measurement is thus more similar to TRGB in SN hosts, and suggests excellent agreement with Cepheids. It is also worth noting that Freedman et al. (2012) recalibrated the HST KP Cepheid distance ladder with the revised LMC distance, fortuitously the same distance value used today, and found $H_0=74.2 \pm 2.2$ but using different Cepheids, a different HST instrument, optical data, software, etc indicating Cepheids scales have been consistent.} \ \par \end{adjustwidth}
\section{Introduction}

One of the main goals of scientific research is to contribute to our understanding of the world. The primary way that scientific knowledge gets disseminated in most fields is through academic papers, written by researchers. However, most researchers --- whether professors, students, or employees of industry labs --- get credit for the number of papers that are accepted to prestigious conferences or journals. For professors, the number of accepted papers determines whether they will get tenure; for students, it determines their graduation times and job prospects. Thus, the incentives of individual researchers (writing papers that get accepted to prestigious conferences, among other things) are not the same as the behavior we want from the scientific field as a whole (producing and sharing interesting and useful scientific knowledge). This mismatch leads to a variety of problems in scientific communities, from replication crises in fields like psychology \citep{john2012measuring}, to poor scholarship in machine learning (ML) papers \citep{lipton2018troubling}. This mismatch has been exacerbated in machine learning by the rapid growth of the field, as indicated by the explosion of arXiv papers and ML conference attendees, leading to the perception of increased competition.

The goal of the NeurIPS 2019 Retrospectives workshop\footnote{\href{https://ml-retrospectives.github.io/neurips2019/}{https://ml-retrospectives.github.io/neurips2019/}}~(workshop videos can be viewed at~\cite{mlretro2020}) was to experiment with a new kind of scholarship: retrospectives. A retrospective is a document, like a blog post, that describes authors' thoughts about their past paper that were not present in the original work.\footnote{For a more detailed description of the motivation for retrospectives, see: \href{https://thegradient.pub/introducing-retrospectives/}{https://thegradient.pub/introducing-retrospectives/}.} The workshop also accepted meta-analyses --- papers which reflect on the state of a sub-field as a whole, including disseminating newly-emerging consensus or conflict, or sharing practical advice for training models or tuning hyperparameters. This led to a fruitful workshop with many interesting talks and discussions. Central to the workshop was a panel and a brainstorming session. The topics discussed during these sessions were not confined to retrospectives and meta-analyses; rather, they spanned a variety of concerns with the current state of the field of machine learning. This report summarizes many of the ideas that were discussed during these sessions, with the goal of disseminating them more broadly and encouraging further discussion of these issues in the machine learning community. While we primarily present ideas discussed in the workshop, in some cases we point out limitations to the implementation of these ideas. However, the intent of this report is not to provide a thorough analysis of these ideas, nor to provide specific recommendations, nor to examine how these ideas are reflected in other scientific fields outside of ML. Many of the ideas in this report were presented by individuals who are not authors of this report, but were instead participants in the panel or brainstorming sessions; we acknowledge these individuals at the end of the report. This report is by no means an exhaustive list of the ways the field of machine learning could be improved.
Important topics not discussed in this report include: increasing the inclusivity and diversity of researchers in the field, paper reproducibility, training ML researchers in ethics to better understand the potential harms of ML technologies, increasing collaboration between ML researchers and other scientific disciplines, and more.

The report is organized as follows. In Section \ref{sec:retrospectives}, we discuss the lack of incentives for alternate forms of scholarship, such as retrospectives and meta-analyses. In Section \ref{sec:review}, we discuss ideas for re-structuring the review process. In Section \ref{sec:industry}, we discuss the interaction between academia and industry in machine learning research. In Section \ref{sec:science}, we discuss ways to better train computer scientists in the scientific process. Finally, we conclude in Section \ref{sec:conclusion}.

\section{Incentivizing openness and alternate forms of scholarship}
\label{sec:retrospectives}

\paragraph{The problem.} One of the most visible symptoms of the aforementioned misalignment between the goal of the ML field and the incentives of individual researchers is paper obfuscation. It has been noted elsewhere~\citep{lipton2018troubling} that researchers often add equations to a paper that are not necessary for understanding the content of the paper. This is a specific symptom of a broader set of behaviors, including omitting empirical results on certain datasets if they are not convincing, cherry-picking qualitative results, and setting aside hyperparameter tuning for baseline models~\citep{rendle2019difficulty}.

One idea for encouraging openness about a paper's limitations is to publish a retrospective. A retrospective of a paper is a commentary, written by one of the authors of the original paper, that reflects on the work with the benefit of hindsight. It may include some lessons learned after the paper was published, new insights inspired by follow-up work, or details that were missed in the original work due to paper length. Authors could also use retrospectives to report intuitive ideas that did not work in practice. This information is very valuable for follow-up works, but currently there isn't a framework to systematically disseminate such information. Hence, these insights remain within the author's network, mostly disseminated by personal conversations. Authors may be more willing to write retrospectives as they can be written \textit{after} the acceptance of the original paper, whereas authors might fear that including this information in the original paper would be grounds for rejection.

In a similar spirit of self-reflection, the retrospectives workshop also accepted meta-analyses. Meta-analyses are different from review papers: a review paper aims to summarize and synthesize a wide range of papers in a specific subfield, with the goal of being very thorough, as well as providing some insights as to how the papers relate to each other. On the other hand, the goal of a meta-analysis paper isn't to summarize the content of papers, but rather to discuss and analyze an interesting aspect of a set of papers (e.g. evaluation methodology, conflicting claims, etc.), or give an opinion about emerging trends. A meta-analysis paper doesn't have to be thorough (it could discuss only a few papers) or limited to a narrow subfield (it could analyze broader trends across the ML community). In the submission process, meta-analyses will be unlike most other papers, with a lack of novel results.
Retrospectives and meta-analyses, though, take time to write. Some researchers may decide to write these kinds of works in a personal blog post. However, there is a lack of formal venues that accept these kinds of works, which limits the incentive for a researcher to spend the time writing them. The workshop discussion was centered around how we can provide additional incentives for researchers to produce this kind of scholarship, or to simply be more open about the limitations of their papers.

\paragraph{The discussion: Encouraging retrospectives and meta-analyses.} One suggestion at the workshop highlighted the need for numerous smaller venues, akin to the size of some NeurIPS workshops. These smaller, closer-knit communities call for more conscientious members and might catch errors in papers that would go unseen in a larger group setting. It might be easier to discuss a retrospective in smaller communities. Having a recurring workshop across conferences that documents its proceedings could be helpful, but could create an unhealthy two-tier system. The community could force all papers to make their results open or to write retrospectives. However, in that scenario, authors may do the bare minimum to satisfy the requirement, and it could lead to resentment and be ultimately counter-productive. Several other ideas were floated: conferences could request authors of established papers (e.g.\@ those that have won a best paper award\footnote{Note though that we do not endorse the existence of conference `best paper awards' in general, as a mechanism for incentivizing high-quality papers, as the selection criteria are often arbitrary.}) to write such commentary; as their work has already gained recognition, they may be more willing to be open. Senior members of the community could encourage transparency and honest writing in papers, enforce those rules in their organizations, and emphasize the long-term benefits. Conference organizers could randomly sample accepted papers and ask their authors to write a retrospective; if this is not implemented correctly, there could be a misuse of such power. Conferences could ask about the insights and practicality of proposed methods in the submission form. To integrate the idea of retrospectives into the original paper, authors could use the discussion section more than it is currently used in machine learning papers. To support this, conferences could provide recognition for the ``best discussion of limitations''. Another idea was to either eliminate best paper awards or have lots of them for different categories. With hundreds of papers being published in a conference, the idea of a ``best paper'' may be misleading. Rather, celebrating different aspects of a paper, such as ``best discussion of limitations'', ``best comparison of results'', ``best literature review'', etc., may make the community more inclusive and open. One concern raised during the workshop was that publishing mistakes in a retrospective may hurt the career of more junior members of our community. This situation can be normalized if the more senior researchers start to write retrospectives of their work, thus encouraging junior members of our community to write retrospectives.

The suggestions for encouraging meta-analysis papers centered around changes that conferences could implement, such as adding a separate track for submissions. A new track would show support for researchers spending time analyzing trends in past papers, instead of focusing on new experiments.
The lack of novel results in a meta-analysis raises the possibility of these submissions being overlooked without a different set of criteria for evaluation. In the brainstorming discussion on meta-analyses, one participant mentioned a paper track at the IEEE Symposium on Security and Privacy, called Systematization of Knowledge (SoK). Similar to the meta-analysis track at the NeurIPS 2019 workshop, this call for papers seeks to generate discussion on existing works and subfields, and submissions do not need to contain novel research results. The SoK project has more in common with survey papers than the meta-analysis track at the workshop, because part of the objective is to summarize existing research. However, unlike some criteria for survey papers, submissions must bring a new perspective by critically examining an unspoken rule, proposing a new taxonomy, or evaluating competing approaches to a problem\footnote{https://www.ieee-security.org/TC/SP2020/cfpapers.html}. Fellow discussants agreed that the machine learning field would benefit from a project analogous to this one. While describing the motivation for the project, a Frequently Asked Questions page for SoK\footnote{http://oakland31.cs.virginia.edu/cfp.html} notes, ``our community seems to lose memory of things that have been done in the past and produces too many incremental results that don't always lead to better general understanding'', and workshop participants noted the same problems in ML research. Returning to the suggestion of a track for meta-analysis submissions at ML conferences, when the discussion expanded to include the SoK project (which only accepts a small number of papers each year), it was suggested that accepted papers be given an automatic oral presentation. Additionally, this track could have a longer paper length to be more conducive to extended discussions, and offer a second round of review to promising papers (a practice adopted by the IEEE symposium). Participants felt these strategies would encourage researchers to contribute to the discussion on the current state of the ML field.

The main consensus on this topic was that there is a lack of alternate styles of papers, such as retrospectives and meta-analyses. By encouraging these various formats of knowledge dissemination, we can cultivate a diversity of thought and nurture an ecosystem where various kinds of academic papers thrive, as all these papers help us to push the field of machine learning forward.

\paragraph{The discussion: Other ways to increase openness.} A separate brainstorming session focused on ways to increase transparency other than retrospectives. It was noted that, rather than writing a retrospective, researchers could simply update the original version of the paper. While this is often done to add new results, it is rarely done for sharing the authors' updated views about their paper. To encourage a culture of more ongoing updates to a research paper, the panel came up with an idea of citable paper versions. arXiv is one of the key platforms for the dissemination of research. While it supports paper versioning, the versions themselves are not individually citable. The panel suggested that it would be useful to have citable versions, which act like Git commits and can be referenced by other papers. This could also incentivize retrospectives, as authors writing a retrospective could add content to their original paper which could be individually cited.
However, it remains to be seen whether crediting citations to an updated version of the paper would lead to more net citations for that paper than simply updating the paper in the current paradigm. More broadly, the idea of an open-source, `Git-style version tracking' approach to machine learning research is appealing in that it could promote a more collaborative approach to conducting research. Similar ideas, such as the AI Open Network\footnote{https://ai-on.org/}, have already been tried with mixed success. Several other ideas were discussed for increasing openness and transparency in ML. An easy-to-adopt suggestion was to encourage paper authors to have a blog post accompanying their paper. A blog post can be particularly informative because it presents the point of view of the first author, who is not necessarily the only author shaping the original paper. Thus, a blog post can contain critical implementation notes which are not added to the original paper, mostly due to lack of space and time. Paper blog posts are becoming increasingly popular, as they increase the exposure of a paper. Another very relevant aspect of transparency in research is the reproducibility of papers. This has gained more attention in the last couple of years, as there has been a strong movement to improve reproducibility in machine learning papers through the promotion of the ML reproducibility checklist\footnote{\href{https://www.cs.mcgill.ca/~jpineau/ReproducibilityChecklist.pdf}{https://www.cs.mcgill.ca/~jpineau/ReproducibilityChecklist.pdf}} and the Reproducibility Challenge \citep{ICLR2018Reproducibility2018, Pineau:2019, Sinha:2020,pineau2020improving}, as well as reproducible code environments (CodeOcean,\footnote{\href{https://codeocean.com}{https://codeocean.com/}} CodaLab\footnote{\href{https://codalab.org/}{https://codalab.org/}}) which can be packaged together to ensure exact reproducibility. Not long after the panel discussion, PapersWithCode\footnote{\href{https://paperswithcode.com/}{https://paperswithcode.com/}} released the ML Code Completeness Checklist,\footnote{\href{https://github.com/paperswithcode/releasing-research-code}{https://github.com/paperswithcode/releasing-research-code}} which encourages conference submissions to adhere to a detailed repository readme file. Both code environment packages and the checklist serve the same purpose of ensuring enhanced transparency in ML experimentation. Reproducibility shares many similarities with retrospectives: both reflect on a paper with a focus on doing good science, evaluate whether the hypothesis was reasonable, validate the hypothesis, assess the proposed methodology without hiding any flaws, use sound statistics to understand results, and evaluate the paper in hindsight. The key difference is that a paper is generally reproduced by a third party. Due to the amount of effort and time needed by the third party to re-implement a paper and reproduce the results, such efforts need to be recognized and given the correct incentives. Recently, formal publications that accept replications of previous research have been created, notably ReScience;\footnote{https://rescience.github.io/} however, this is not a widespread form of scholarship, and we should seek to find ways to enhance and amplify these efforts. \section{Re-structuring the review process} \label{sec:review} \paragraph{The problem.} Most scientific communities use some form of a review process to evaluate papers on their scientific merits (like correctness, applicability, trade-offs, etc.).
Passing the process adds credibility to the work. In the ideal situation, the submitted work would be reviewed by a team of quality reviewers who would provide useful feedback for improving the work and decide if the work is ready for sharing more broadly. The authors would use the reviews to improve their work, and science would progress. However, the review process has several flaws, and ``reviewing the review process'' was frequently brought up during the workshop. With the rapid growth of the machine learning field, the rate of submission of papers has far outpaced the rate at which quality reviewers are added to the pool. As a result, the quality (and usefulness) of reviews has decreased. The fast-paced ML research culture is bringing out the inherent limitations of the review process. For instance, the culture of ``crushing the benchmark'' (requiring that papers be both ``novel'' and ``state-of-the-art'' to be accepted) leads to many of the issues in paper scholarship pointed out in \citep{lipton2018troubling}. These criteria are often not correlated with the scientific quality of the paper; anecdotally, many of the most influential papers contain some form of negative results.\footnote{This was stated, for example, by Prof. Yoshua Bengio during the panel.} The over-reliance on the ``number of published papers'' as the metric to measure the merit of a researcher's work is also affecting the mental well-being of students and practitioners. The sense of perpetual competition can make the research culture fearful and toxic. It prompts practitioners to churn out as many papers as possible; these submitted papers are often incomplete, thus straining the already overloaded review system. The famous NeurIPS 2014 experiment\footnote{See: http://blog.mrtz.org/2014/12/15/the-nips-experiment.html.} highlighted that ``most papers at N[eur]IPS would be rejected if one reran the conference review process (with a $95\%$ confidence interval of 40-75$\%$)''. The review process has shown no signs of becoming less noisy; thus authors have an incentive to resubmit rejected work in the next cycle of review, hoping to find more amenable reviewers, which further strains the system. \paragraph{The discussion.} The panel discussion noted the over-reliance on conferences (to measure the worth of research work) and the need to rethink the publication structure. Prof. Yoshua Bengio proposed an alternate arrangement where research is put up on arXiv and submitted to journals. The paper is then publicly available while it is being reviewed at a journal, without the pressure of conference deadline cycles. The conferences can then pick papers from the journals, and the role of conferences becomes to broadcast good papers rather than to filter them. This arrangement can reduce the anxiety that comes with conference deadlines. An example of this is TACL, where a paper accepted by the journal can be presented at ACL, EMNLP, or NAACL. Another option is rolling deadlines. For example, UbiComp invites papers published by the ACM journal on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), which has $4$ deadlines in a calendar year\footnote{http://ubicomp.org/ubicomp2020/cfp/papers.html}. The multiple deadlines alleviate unnecessary stress on the researchers. There was a concern that practitioners are starting to review much sooner in their careers and can easily make mistakes (even when acting in good faith) due to the lack of training.
Lack of experience makes the review process very noisy, thus reducing the value of the reviews. During the panel, Jonathan Frankle added that in some computer science fields, senior members spend a lot more time reviewing papers. Several ideas were proposed to counter this problem. First, workshops and training sessions can be organized to coach new reviewers. For example, CVPR 2018 had a workshop called ``Good Citizens of CVPR''~\citep{cvprworkshop2018}, which focused on how to be ``A good CVPR reviewer[,] A good Area Chair[, and] A good author''. Similarly, CVPR 2020 had a tutorial on ``How to write a good review''~\citep{cvprworkshop2020}. Second, professors and senior researchers could train their students to be competent reviewers by helping them write mock reviews for papers or by covering relevant training material as part of the coursework~\citep{sambowmancourse}. Another suggestion was to acknowledge good reviewers by providing Best Reviewer Awards (as already done by NeurIPS). Making reviews public for all papers could help fix the lack of accountability on the part of reviewers and is already done at some conferences (e.g., ICLR). It was also suggested that reviewers' identities be made public after the passage of time, though this could also lead to potential rivalries between authors and reviewers. Other suggested measures include: \textbf{i)} Capping the number of submissions by any author, to reduce the workload for the reviewers (AAAI experimented with this strategy\footnote{See, e.g., AAAI-21 call for papers where ``Each individual author is limited to no more than a combined limit of 15 submissions to the AAAI-21 technical track.'' \href{https://aaai.org/Conferences/AAAI-21/aaai21call/}{https://aaai.org/Conferences/AAAI-21/aaai21call/}}); \textbf{ii)} Increasing the pool of reviewers by requiring all authors of submitted papers to help as reviewers if required (EMNLP experimented with this idea\footnote{See, e.g., ``Although recent global affairs will have an effect on submissions, we still anticipate another record year in terms of numbers of submissions, and thus need an ever larger troupe of qualified, engaged and committed individuals to serve as reviewers on the PC to handle these papers... For this reason, in EMNLP 2020 we are introducing a new policy: In order to submit paper(s) to EMNLP, you must nominate at least one author to serve as a reviewer, (usually the most senior author) and for that author to take on a full load (of up to 6 reviews).'' \href{https://2020.emnlp.org/blog/2020-04-26-reviewing-policy}{https://2020.emnlp.org/blog/2020-04-26-reviewing-policy}}); \textbf{iii)} Using a single (senior) reviewer to screen the submissions and flag extremely weak submissions. This way, reviewers have more time to focus on the remaining submissions.
NeurIPS 2020 has begun experimenting with some of these techniques\footnote{https://medium.com/@NeurIPSConf/getting-started-with-neurips-2020-e350f9b39c28}, which have received mixed feedback, particularly on desk rejections of weak submissions.\footnote{https://syncedreview.com/2020/07/16/poorly-explained-neurips-2020-desk-rejects-peeve-ml-researchers/} \section{Participation from academia and industry} \label{sec:industry} \paragraph{The problem.} Academic institutions worldwide are struggling to retain their top scientists.\footnote{See, e.g., `` `We can't compete': why universities are losing their best AI scientists'' at \href{https://www.theguardian.com/science/2017/nov/01/cant-compete-universities-losing-best-ai-scientists}{https://www.theguardian.com/science/2017/nov/01/cant-compete-universities-losing-best-ai-scientists}} This portends negative ramifications not only for academic research but also for the education of future AI talent. Although the quality of basic AI research in academia is now on par with industrial AI labs, a significant portion of fundamental AI research continues to be performed in industry. The main reasons for the relatively low retention rate of AI researchers in academia include: 1) the gross disparity of salary between academia and industry, 2) the disproportionate amount of time spent on grant writing, which, in combination with a high teaching load, inhibits actual research time and inordinately shifts the scientist's role from that of `explorer/discoverer' to `manager', and 3) the fact that academic institutions do not make it easy for academics to serve concurrently as advisers or consultants to companies. \paragraph{The discussion.} A view shared among some workshop participants was that the best way forward is via governments working with industry through public/private partnerships to promote an economically sustainable, virtuous AI cycle, while ensuring a free basic research base that is independent of industry interests. There are many advantages for companies to invest in institutional AI infrastructure with `no strings attached', besides maintaining a good public relations policy. Companies already invest a lot of their time, money, and resources to vet, screen, and train scientists. Moreover, companies compete on a rolling basis for first access to top talent that has not yet entered the job market. Academic institutions continuously supply a quality AI cohort; companies that have fairly contributed to a given institution (as determined by that institution) could approach this cohort through institutionally supported career fairs. On a more individual scale, some companies already provide opportunities for thesis internships and research experience at most academic levels. However, these are relatively few and specialized.
Companies will also profit from the democratization of AI -- investing in girls' and women's programs for coding and investing in AI development in poorer countries as described, for instance, in the 2019 MIT Technology Review article entitled ``The future of AI research is in Africa''.\footnote{\href{https://www.technologyreview.com/2019/06/21/134820/ai-africa-machine-learning-ibm-google/}{https://www.technologyreview.com/2019/06/21/134820/ai-africa-machine-learning-ibm-google/}} To enable the valorization of academic research,~\citet{mendrik2019framework} advocate for a Kaggle\footnote{\href{https://www.kaggle.com/}{https://www.kaggle.com/}}-like direction: \begin{quote} [D]eployment challenges are perhaps particularly well suited to be set-up by companies, as part of their clinical trials or cohort studies. This could aid in bridging the gap between industry and academia. If data, truth criteria, and metrics are representative of the problem, direct assessment of algorithms from academia could result in more practical and accelerated use of academic algorithms. \end{quote} To facilitate this process, there needs to be an infrastructure that supports academics with funding for both benchmark and algorithm submission and their dissemination. A system\footnote{An example of such a system is Eyra Nova, the non-profit part of the Enhance-Your-Research Alliance (EYRA) company.} is envisioned where benchmarks would be submitted to a platform in direct analogy to submitting a paper to a journal, and receiving credit for them would mirror a paper citation index. The submitted benchmark would subsequently be reviewed via a rating scheme, which would contain suggestions for improvement and would provide a better understanding of the quality of the benchmark itself. The same would apply to algorithm submission: the algorithm would be submitted to such benchmarks on the platform, just like submitting a paper to a journal, and would obtain citation value in the same way. Currently, academic grants have money allocated for publishing papers. This grant coverage could be expanded to include other types of submissions. In order to have a sustainable model, a fee would need to be paid for each benchmark or algorithm submission, similar to the paper publication system, where a certain amount is paid to the publisher once a paper is accepted. However, the proposed system would be fairer than that of the journal publisher in that the money earned would be reinvested into platform maintenance and upgrades to support scientists with new features~\citep{mendrikpriv20}. \section{How do we train computer scientists to do better science?} \label{sec:science} \paragraph{The problem.} Research communities of all kinds -- from medicine to physics to computer science -- grapple with the mechanics of making sound scientific inferences and educating their students with their community's norms. Each field has its own approach to rigorous analyses and its own issues with scientific rigor (e.g.\ reproducibility and replication crises). For example, machine learning, though it conducts experiments as many other fields do, often makes less use of the rigorous statistical methods for drawing scientific conclusions that are more common in fields such as economics, psychology, and medicine. One reason is that as a community we may be failing to train computer scientists to do the kind of research we hope to see at flagship conferences like NeurIPS in the future.
Motivated by this issue, a group at the workshop's brainstorming session asked: how should we train computer scientists to do better science? A significant problem is that many machine learning researchers have never been explicitly educated in research methods or the philosophy of science. For many computer science graduate students, their advisor and fellow graduate students are their exemplars and tutors in how to perform, present, and review ML research, and how their work relates to science as a whole. In practice, advisors may be too busy to explicitly instruct their students in the machinery of science, and students may indirectly and noisily infer from their peers, the reviews they receive, and the papers they read what the purpose of scientific processes is. That is, they may learn the superficial lesson that papers can only be accepted if they beat the current state-of-the-art, or that it is okay for reviews to be dogmatic, uninformed, and/or conversational. The danger is that there may be little understanding as to why we do research. Another aspect complicating the education of students in scientific rigor is the incentive structure for faculty advisors, graduate students, and those hoping to get into the field. Machine learning conferences happen frequently throughout the year and there is pressure to publish multiple papers within the year -- several top machine learning faculty had more than $80$ papers published in 2019. Top universities and companies hire those who have a track record of ``first author publications'' at top ML conferences (NeurIPS, ICML, etc.). Even before getting into a PhD program in machine learning, there is often an expectation that applicants have published in the past, reinforcing the rush to publish quickly even before enrolling in graduate school. The implicit priority is getting work accepted rather than understanding or advancing science. As a result, proper scientific practices may be sidelined in the rush to put out more papers -- why learn how to run a suite of sensitivity analyses or ablation studies, and then actually run them, when such studies may not be necessary for publication given current standards? \paragraph{The discussion.} Discussants looked not only at the problems, but towards possible solutions, such as how incentive structures could be changed. For example, ML could learn from other fields and require for promotion or hiring a ``job market paper'' that is expected to be very robust and is held to high scientific standards. Indeed, some universities do this already to some extent, focusing on the ``best three papers'' of candidates.\footnote{See, for example, a recent job posting which requests the ``three papers that best represent their research/experience'': \href{https://www.aclweb.org/portal/content/umass-amherst-computer-science-hiring-faculty-data-science}{https://www.aclweb.org/portal/content/umass-amherst-computer-science-hiring-faculty-data-science}.} Additionally, reviewer guidelines could be changed to emphasize the robustness of the scientific inferences being made -- though the lack of quality reviews certainly contributes to current trends. Another approach would be to switch the focus of publication from conferences to journals (as is more common in other fields), where there are more iterations of peer review and more time to release polished work; however, discussants noted that the perception within ML is that this would slow down the pace of research and may therefore be unlikely to be adopted.
Another concrete suggestion was to create online courses that introduce the scientific method and scientific thinking to machine learning researchers in particular. That is, many newcomers to ML learn through popular online courses (such as through Coursera\footnote{\href{https://www.coursera.org/}{https://www.coursera.org/}} or Stanford's lectures on Convolutional Neural Networks\footnote{\href{http://cs231n.stanford.edu/}{http://cs231n.stanford.edu/}}) that introduce technical aspects of machine learning. However, they do not touch on what makes for good science in ML, which new classes could address. Such classes could additionally discuss current struggles in doing good ML science, such as how the community's rapid growth has led to low reviewer quality at large conferences. In this way, a course could introduce concepts of what makes for good reviews, and what the whole process of peer review is designed to do. The benefit of learning material that is available online is that it is scalable; however, another impactful intervention would be to have such classes (or others in methods or the philosophy of science) as part of the curriculum at universities for CS graduate students. During the discussion, participants remarked that it was strange that they graduated with a Ph.D., a degree designed to prepare them to contribute to the scientific community, without ever having taken a class on the philosophy of science or research methods. Such gaps in the curriculum likely contribute to problematic trends in the ML community. A final line of discussion related to what ML conferences could do to help contribute to the continuing education of scientists. A discussant noted how useful the ``New in ML workshop'' at NeurIPS\footnote{\href{https://nehzux.github.io/NewInML2019/}{https://nehzux.github.io/NewInML2019/}} could be towards that end. The workshop aimed to help researchers who had not previously published at NeurIPS, and could potentially include talks on research methods and current problems in the ML landscape. Conferences could include space for workshops (such as the retrospectives workshop) that encourage the community to self-reflect, or introduce new tracks that encourage papers that provide objective evidence for problematic trends within the community and how to address them. The conclusion of the session was that, as a community, we may currently not be preparing the next generation of ML researchers well to do the kind of science that we ultimately want reported in an ideal future conference. However, discussants were optimistic that there were practical ways to improve ML education if there was interest and energy to implement them.
\section{Conclusion} \label{sec:conclusion} In this report, we summarize ideas and discussion from the NeurIPS 2019 Retrospectives workshop. Our hope is that this report brings awareness to some of the ideas that are currently being discussed for improving the machine learning field as a whole, and contributes towards better aligning the field with the goal of improving scientific understanding. \section*{Acknowledgements} Thanks to Prof. Yoshua Bengio and Jonathan Frankle. We thank the panelists at the ML Retrospectives workshop at NeurIPS 2019: Yoshua Bengio, Jonathan Frankle, Melanie Mitchell, Joelle Pineau, Gael Varoquaux (alphabetically ordered by last name), and the organizers of the workshop: Ryan Lowe, Koustuv Sinha, Abhishek Gupta, Jessica Forde, Xavier Bouthillier, Peter Henderson, Michela Paganini, Shagun Sodhani, Kanika Madan, Joel Lehman, Joelle Pineau, and Yoshua Bengio. Additionally, we would like to thank the participants of the brainstorming session at the workshop: Robert Aduviri, Diogo Almeida, Emmanuel Bengio, Wonmin Byeon, Kamil Ciosek, Debajyoti Datta, Jesse Dodge, Ann-Kathrin Dombrowski, Marc Dymetman, Jonathan Frankle, Niklas Gebauer, Barb Hirsch, Rishub Jain, David Jensen, Pan Kessel, Andrey Kurenkov, Shayne Longpre, Kanika Madan, Ilja Manakov, Luke Merrick, Elliot Meyerson, Trishala Neeraj, Andre Pacheco, Michela Paganini, Vipin R. Pillai, Edward Raff, Daniel Seita, and Rupesh Srivastava (alphabetically ordered by last name).
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction}\label{sec:intro} Modern robotic agents demand increased flexibility in the way the assigned task is represented and executed. Indeed, autonomous robotic systems need to be capable of dealing with sensing and planning operations at low cost, as well as monitoring actions in a goal-oriented fashion~\cite{lopez2019formal}. Behavior Trees (BTs) constitute a powerful tool for task switching and decision making, and are receiving increasing attention in the robotics community~\cite{colledanchise2018behavior, colledanchise2016behavior, rovida2017extended}. The reason for this growing attention mostly lies in the fact that \acp{bt} are self-explanatory, modular, facilitate code reuse, and are simple to design. A \ac{bt} is built combining a limited number (six) of node types. This greatly simplifies the design of new \acp{bt}, makes them \textit{human-readable}, and eases the formal verification of the generated task plan without curbing their expressive power~\cite{colledanchise2016behavior}. {Modularity has been exploited in robotics in both hardware \cite{yim2007modular,moubarak2012modular} and software \cite{elkady2012robotics}.} \acp{bt} are modular in the sense that each subtree may be seen as a subblock that may be added or replaced by any other subblock. This makes the code reusable for different applications and further simplifies the design of new \acp{bt}. However, \ac{bt} engines are not designed to operate within the sense-plan-act paradigm, nor do they provide an optimized trade-off between reactiveness and execution cost for low-level control. Also, \acp{bt} may easily grow when the number of actions and conditions needed for closing the execution loop increases. Moreover, continuous monitoring of the task execution as well as online resolution of possible ambiguities in the task plan are typically not supported by \acp{bt}. {This limits the applicability of \acp{bt} in dynamic or uncertain environments.} In order to overcome these limitations, a robotic executive framework has to: \textit{i)} ensure low complexity in terms of cost and implementation when dealing with task execution, \textit{ii)} make a connection between low-level stimuli and high-level decision making, and \textit{iii)} perform continuous and goal-oriented planning. This paper proposes an executive framework that meets these requirements by combining the planning capabilities of \acp{bt} with attentional mechanisms for control features~\cite{kawamura2008implementation, caccavale2016flexible}. The proposed \textit{Reconfigurable Behavior Trees (RBTs)} exploit the high modularity of traditional \acp{bt} to define a tree structure that can be reconfigured at runtime, i.e. dynamically during the task execution, by adding and/or removing parts of the tree. The reconfiguration is driven by environmental stimuli corresponding to changes in the sensed information and by the successful execution of goal-directed actions. This paper presents a formal definition of the \ac{rbt} framework and evaluates its performance {in scheduling robotic tasks.} {To summarize, our contribution is two-fold: \begin{itemize} \item We extend the \ac{bt} formalism to incorporate continuous information in the decision process. This makes it possible to monitor the execution and react to unexpected changes in the environment. \item We propose an algorithm to dynamically allocate and deallocate tree branches.
Subtrees are stored as JSON (JavaScript Object Notation) schemas, a format that is supported by most existing databases. \end{itemize} } Section~\ref{sec:related} presents related work. In Sec.~\ref{sec:rbt}, we describe the proposed approach. Section~\ref{sec:evaluation} presents simulation experiments and evaluates the results. Section~\ref{sec:conclusion} states the conclusions and proposes further extensions. \section{Related Work} \label{sec:related} Finite state machines have been widely used in different areas of computer science and engineering. A finite state machine provides a basic mathematical computational model consisting of a definite set of states, transitions, and events. Finite state machines are flexible and easy to design, but as soon as the system grows in complexity, a reactivity/modularity trade-off problem arises and the approach becomes impractical. Decision trees~\cite{breiman1984classification}, the subsumption architecture~\cite{brooks1986robust}, and sequential behavior composition~\cite{burridge1999sequential} are widely used approaches to decision making and task execution in robotics. A decision tree is an analytical decision support tool consisting of control structures and predicates located in the leaves and internal nodes, respectively, which map the possible outcomes of a series of choices. One motivation for their use is their ``white-box'' nature, i.e.\ decisions can be intuitively explained, they are simple to visualize, and may be easily implemented. However, decision trees lack robustness to noisy perceptual data, and their size rapidly increases in complex scenarios. The subsumption architecture relies on having a number of controllers in parallel which are ordered with different priorities so that each one is allowed to output both actuator commands and a binary value, signaling whether the control of the robot is active or inactive. {In sequential behavior composition, the behavior is driven by local controllers. The state space is split into cells, corresponding to the basin of attraction of each controller. The task of each controller is to drive the system into the basin of attraction of another controller that is closer to the overall goal state.} Colledanchise and {\"O}gren~\cite{colledanchise2016behavior} have shown that \acp{bt} represent an elegant generalization of finite state machines, decision trees, the subsumption architecture, and sequential behavior composition. Several fixed-logic control models have emerged that extend the functionalities of \acp{bt} and attempt to overcome their limitations in highly dynamic environments. For instance, Conditional \acp{bt} \cite{giunchiglia2019conditional} extend traditional \acp{bt} to monitor the execution of an action considering logic pre- and postconditions. The work in~\cite{segura2017integration} proposes to cast an automated plan created by a hierarchical task network planner~\cite{nau2003shop2, kaelbling2011hierarchical} into executable \acp{bt}. In this way, \acp{bt} provide reactiveness and modularity, whereas the planner is responsible for the deliberative behavior of the robot. Along the same lines, other work~{\cite{rovida2017extended,neufeld2018hybrid, lan2019autonomous} explores different ways of synthesizing a \ac{bt} using a planner.} {Automatic synthesis is employed to produce \acp{bt} with guaranteed performance \cite{paxton2019representing} or safety \cite{tadewos2019automatic}}.
Other approaches extend \acp{bt} by integrating models where domain knowledge can be learned automatically, for instance using {reinforcement learning~\cite{zhang2017modeling,zhu2019behavior}}, genetic programming~\cite{zhang2018learning}, or imitation learning~\cite{french2019learning}. Learning techniques can overcome the limitations of classical planners that require significant engineering effort~\cite{ilghami2005learning}. Finally, the work in~\cite{robertson2015building} attempts to map the practical solutions developed for action sequencing in real-time strategy games to robotic applications. Characteristics like human-readability, expressivity, modularity, and reusability make \ac{bt}-based techniques popular and unquestionably attractive. However, critical aspects that these approaches do not cover are the connection to the physical executive state and the possibility of ambiguities in the decision making. Their rigidity is not desirable for systems with strong perceptual constraints, like robots that are intended to interact with the environment and must behave flexibly and react to unexpected events. To tackle this issue, some work exploits attentional mechanisms for visual task learning~\cite{borji2010online}, for cognitive control of humanoid robots~\cite{kawamura2008implementation}, and for flexible orchestration of cooperative tasks~\cite{caccavale2016flexible}. Inspired by the way humans monitor and orchestrate task execution~\cite{norman1986attention, cooper2006hierarchical}, the attentional framework in~\cite{caccavale2016flexible} loads tasks from a long-term memory and instantiates them in a working memory using a mechanism analogous to that used in Hierarchical Task Network planning~\cite{nau2003shop2}. Additionally, continuous sensory data are exploited to solve any ambiguities in the task plan and to quickly react to environmental changes. This attentional system has been effectively integrated in an imitation learning framework to learn, plan, and monitor complex robotic tasks~\cite{caccavale2017imitation, caccavale2019kinesthetic, saveriano2019symbolic, agostini2020manipulation}. In this work, we take the best of both worlds and propose \acp{rbt}, an executive framework that combines the human-readability, modularity, and reusability of \acp{bt} with the additional flexibility offered by attention-based cognitive architectures. \section{Reconfigurable Behavior Trees} \label{sec:rbt} \begin{table}[t!] \caption{Types of BT nodes and their return status} \resizebox{1\columnwidth}{!}{% \begin{tabular}{|l|c|c|c|} \hline \textbf{Type} & \textbf{Symbol} & \textbf{Success} & \textbf{Failure}\\ \hline \hline \textit{Fallback/Selector} & $?$ & One child succeeds & All children fail \\ \hline \textit{Sequence} & $\rightarrow$ & All children succeed & One child fails \\ \hline \textit{Parallel} & $\rightrightarrows$ & \textgreater$M$ children succeed & \textgreater$N-M$ children fail \\ \hline \textit{Decorator} & $\Diamond$ & Custom & Custom \\ \hline \hline \textit{Action} & $\hrectangle$ & Upon completion & Impossible to complete \\ \hline \textit{Condition} & \tikz \draw (1,1) ellipse (5pt and 2.5pt); & \texttt{True} & \texttt{False} \\ \hline \end{tabular}} \label{tab:btformulation} \end{table} \begin{figure}[t!] \centering \input{img/bt_example.tex} \vspace*{-2mm} \caption{A \ac{bt} representing a pick-use-place task. $\rightarrow$ is a Sequence node; $?$ are Fallback nodes.
Ellipses define condition nodes and rectangles define action nodes.} \label{fig:bt} \end{figure} \begin{figure*}[t] \centering \input{img/architecture_overview.tex} \caption{The generic RBT. The branch of the tree surrounded by the blue polygon allows execution to be terminated after a global goal is reached and is the same for RBT and BT. The green action node is the \textit{Emphasizer} that modifies the priority of each subtree considering the environmental stimuli. The branch of the tree surrounded by the orange polygon is dynamically allocated/deallocated by the \textit{Instantiator} every time the subtree priority order changes. The action node \texttt{execute subtree} contains small \acp{bt} like the one in Fig.~\ref{fig:bt}. The blue labels are the node names used in Listing~\ref{lst:jsonschema}.} \label{fig:rbt-overview} \end{figure*} \subsection{Behavior Trees}\label{subsec:bt} A \ac{bt} is a graphical model language to control {the actions (or behaviors)} of an autonomous agent and execute a task. A BT consists of a tree structure containing a combination of the six types of nodes shown in Table~\ref{tab:btformulation}. These types are divided into two categories: \textit{control flow} and \textit{execution} nodes. The four types of control flow nodes are Sequence, Fallback/Selector, Decorator, and Parallel nodes. The two types of execution nodes are Condition and Action nodes. Each type of node returns a \textit{running} state during the execution and \textit{success} or \textit{failure} after the execution. A \ac{bt} is executed by periodically traversing the tree from the root node to all child nodes, from left to right. The traversal mechanism is periodically activated by sending a signal called ``tick''. Each child node responds to this signal according to its own type and to the return states of the other nodes. Table~\ref{tab:btformulation} describes the behavior of each type of node for the Success and Failure cases; the Running state behaves in a similar fashion. A minimal example of a \ac{bt} applied to a pick-use-place object task is shown in Fig.~\ref{fig:bt}. The root being a Sequence node, the BT is executed sequentially from left to right. If the condition \texttt{object picked} has not been fulfilled, i.e. the node returns \texttt{False}, the action node \texttt{pick object} enters the \textit{Running} state. The action node returns success upon successful completion of the pick action. As a consequence, the first (leftmost) Fallback node also returns success and the traversal mechanism enters the second (middle) Fallback node. This procedure is repeated until all the Fallback nodes return success, indicating that the task has been successfully completed. \subsection{RBT components} The generic \ac{rbt}, depicted in Fig.~\ref{fig:rbt-overview}, is a \ac{bt} enriched with extra functionalities that permit the continuous monitoring of environmental stimuli and the dynamic reconfiguration of the tree to execute. Interestingly, those functionalities are implemented using the same six types of nodes considered in standard \acp{bt} (see Table~\ref{tab:btformulation}), leaving the design simplicity typical of \acp{bt} unaffected. Traversing the RBT from the root (Fallback) node, we first check if the end goal of the task is fulfilled. If not, we check if the blackboard is initialized and eventually run \texttt{initialize blackboard}.
The blackboard is a mechanism used in \acp{bt} to store and update runtime variables and to make them accessible by each node in the tree. In the \ac{rbt} framework, the blackboard contains the logical pre- and postconditions used to regulate the task execution and to determine when the task goal is fulfilled, the sensed information used to set the priorities of each subtree, and the current values of the subtree priorities. The blackboard is dynamically updated and greatly simplifies the communication between nodes by handling the concurrent access in a transparent and thread-safe manner. We would like to point out that this part of the \ac{rbt}, surrounded by a blue polygon in Fig.~\ref{fig:rbt-overview}, allows execution to be terminated after a global goal is reached. A similar branch has to be introduced in the standard BT to successfully terminate the task, and therefore it does not introduce extra nodes in the RBT. Once the blackboard is initialized, the right branch of the RBT is traversed and two parallel processes start. On one side, sensory data are processed to determine the priority of the $S$ available subtrees. On the other side, the most emphasized subtree is instantiated and executed asynchronously with respect to the perceptual input. The instantiation mechanism is dynamic and allows for flexible task orchestration. Compared to a BT, the RBT has the following extra components: \begin{enumerate} \item A \textit{Long-Term Memory (LTM)} and a \textit{Short-Term} or \textit{Working Memory (WM)} that are typical of attention-based control frameworks~\cite{caccavale2016flexible}. \item A priority handler, namely the \textit{Emphasizer}, that computes the highest-priority task considering the sensed information and logical pre- and postconditions. \item An \textit{Instantiator} that accesses the LTM and casts the subtask into the corresponding subtree loaded in the WM. The Instantiator enables the \ac{rbt} reconfiguration capabilities by dynamically loading and instantiating the subtree with the highest priority. \end{enumerate} The distinctive components of \acp{rbt} are detailed in the rest of this section. \subsection{LTM and WM} The LTM can be considered a database that contains all subtasks that the robot is able to execute. A typical subtask is the pick-use-place object task described in Sec.~\ref{subsec:bt}. In order to store and retrieve subtasks from the LTM, we propose a common representation of the $4$ control flow nodes in Table~\ref{tab:btformulation}. The generic control flow node $\mathcal{B}$ is represented as a quadruple \begin{equation} \mathcal{B}=(l,t,c,p) \label{eq:node_tuple} \end{equation} where $l$ is the unique name (label) of the node, $t$ is one of the types in Table~\ref{tab:btformulation}, $c$ is a list of children, and $p$ is a list of \textit{parameters} like the priority value or pre- and postconditions. In principle, it is possible to also represent the $2$ execution nodes in Table~\ref{tab:btformulation} as the quadruple defined in~\eqref{eq:node_tuple}. However, we found it more convenient to exploit the fact that execution nodes correspond to leaves in the BT. In more detail, action nodes are specified only in the children list of the parent node, while the condition nodes are used to represent the pre- and postconditions that are listed in the parameter list $p$. The successful execution of an action also changes the state of the corresponding postcondition, while the preconditions of an action are changed by other nodes in the tree.
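To make the representation in~\eqref{eq:node_tuple} concrete, Listing~\ref{lst:nodetuple} gives a minimal Python sketch of one possible encoding of a control flow node, together with an instance mirroring the root of the generic \ac{rbt}. The class and field names are purely illustrative and do not correspond to the API of a specific \ac{bt} library. \begin{lstlisting}[linewidth=\columnwidth, label=lst:nodetuple, basicstyle=\small\ttfamily, language=Python, float, caption=A minimal Python encoding of the node quadruple in~\eqref{eq:node_tuple} (illustrative names).]
from dataclasses import dataclass, field
from typing import List

@dataclass
class ControlFlowNode:
    label: str   # l: unique node name
    type: str    # t: "sequence", "fallback",
                 #    "parallel", or "decorator"
    children: List[str] = field(default_factory=list)  # c
    params: List[str] = field(default_factory=list)    # p

# Instance mirroring the root of the generic RBT
root = ControlFlowNode(
    label="rbt_root",
    type="fallback",
    children=["sequence_1"],
    params=["G_11", "goal reached"],
)
\end{lstlisting}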
Following this representation, the LTM can be conveniently organized in JSON schemas. As shown in Listing~\ref{lst:jsonschema}, the root of a tree is identified by the keyword \texttt{root} in its name (\texttt{rbt\_root}). Actions are simply listed as children of a node and are identified by the string \texttt{A(action\_name)}. For preconditions, we use the notation \texttt{C\_ij} where \texttt{i} is the child number and \texttt{j} indicates the $j$-th condition. Postconditions (or goals) are identified by \texttt{G\_ij} where \texttt{i} is the child number and \texttt{j} indicates the $j$-th condition. It is worth noting that the described representation contains all the information needed to instantiate an executable BT and that no further JSON schemas are needed to describe the leaves of the BT. \begin{lstlisting}[linewidth=\columnwidth,label=lst:jsonschema, basicstyle=\small\ttfamily, float, caption=JSON schemas representing the generic RBT in Fig.~\ref{fig:rbt-overview}.] { "name":"rbt_root", "type":"fallback", "children": ["sequence_1"], "params": ["G_11", "goal reached"] }, { "name":"sequence_1", "type":"sequence", "children": ["A(initialize blackboard)", "parallel_1"], "params": ["G_11", "blackboard initialized"] }, { "name":"parallel_1", "type":"parallel", "children": ["A(handle priority)", "fallback_1"], "params": [""] } { "name":"fallback_1", "type":"fallback", "children": ["A(load subtree)", "A(execute subtree)"], "params": ["C_11", "priority changed"] } \end{lstlisting} The \textit{Instantiator} is responsible for loading the task from the LTM and creating an instance of the BT to execute in the WM. This procedure is summarized in Algorithm~\ref{alg:allocate-tree}. Given the task name (root of the BT), the Instantiator first loads the JSON schemas describing the task (line $2$ in Algorithm~\ref{alg:allocate-tree}). Starting from the root, the BT is built by iteratively expanding the nodes until the leaves are reached (lines $4$ to $19$). The current JSON schema is converted into the BT node specified in the \texttt{type} field and attached to the current tree (line $5$). Pre- and postconditions are handled using a modified version of the Planning and Acting PA-BT approach in~\cite{colledanchise2018behavior} that allows multiple postconditions. In this approach, a postcondition is transformed into a Condition node (line $7$) that is connected to the rest of the tree via a Fallback node ($\mathcal{T}_{fal}$). In this way, execution ends when the postcondition becomes \texttt{True}. The case of a single postcondition is handled in lines $8$--$9$. In the case of multiple postconditions, a Sequence node is created with all the postconditions attached (line $11$). In this way, the postconditions are sequentially verified. The Sequence node containing all the postconditions is then attached to $\mathcal{T}_{fal}$ (line $12$). In both cases the generated Fallback node is attached to the current tree (line $18$). Lines $14$--$17$ handle Action nodes with associated preconditions. As for the postconditions, the preconditions are considered as Condition nodes (line $16$). Action and Condition nodes are then connected to a Sequence node that is attached to the current tree (lines $17$--$18$). In this way, an Action is executed only if all the preconditions are \texttt{True}.
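To illustrate the condition-handling pattern described above, Listing~\ref{lst:expansion} gives a minimal, library-agnostic Python sketch that expands the postconditions, preconditions, and action of a single schema into an executable subtree. For clarity, the sketch omits the \textit{Running} state, and all names are illustrative rather than taken from an existing \ac{bt} library. \begin{lstlisting}[linewidth=\columnwidth, label=lst:expansion, basicstyle=\small\ttfamily, language=Python, float, caption=A library-agnostic sketch of the condition handling in Algorithm~\ref{alg:allocate-tree} (Running state omitted).]
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Condition:
    def __init__(self, check):  # check: () -> bool
        self.check = check
    def tick(self):
        return SUCCESS if self.check() else FAILURE

class Action:
    def __init__(self, run):    # run: () -> bool
        self.run = run
    def tick(self):
        return SUCCESS if self.run() else FAILURE

class Sequence:  # fails as soon as one child fails
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:  # succeeds as soon as one child succeeds
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

def expand(postconds, preconds, action):
    # Postconditions guard the Fallback: if they already
    # hold, the action is skipped.
    post = (Condition(postconds[0]) if len(postconds) == 1
            else Sequence([Condition(p) for p in postconds]))
    # Preconditions guard the action inside a Sequence:
    # the action runs only if all of them hold.
    act = Sequence([Condition(p) for p in preconds] + [action])
    return Fallback([post, act])
\end{lstlisting} For example, given hypothetical predicates \texttt{picked} and \texttt{reachable} and a routine \texttt{pick}, \texttt{expand([picked], [reachable], Action(pick))} returns a subtree that skips the action once \texttt{picked} holds and executes it only while \texttt{reachable} is \texttt{True}.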
As a final note, the functions \textsc{sequenceNode} and \textsc{fallbackNode} in Algorithm~\ref{alg:allocate-tree} return an empty subtree if the input is empty, while \textsc{sequenceNode}($\{\}$, $\mathcal{A}$) returns the Action nodes $\mathcal{A}$. \begin{algorithm}[t] \begin{algorithmic}[1] \Function{instantiateSubTree}{$l$} \Comment{$l$: subtask name} \State \texttt{schemaList} $ \leftarrow$ \textsc{getTaskFromLTM}($l$) \State $\mathcal{T} \leftarrow \{\}$ \Comment{empty BT} \For{\texttt{schema} in \texttt{schemaList}} \State $\mathcal{T} \leftarrow$ \textsc{schemaToNode}($\mathcal{T}$, \texttt{schema}) \State \texttt{postC} $ \leftarrow$ \textsc{getPostConditions}(\texttt{schema}) \State $\mathcal{C} \leftarrow$ \textsc{conditionNodes}(\texttt{postC}) \If{$\mathcal{C}$\texttt{.length} $== 1$} \State $\mathcal{T}_{fal} \leftarrow$ \textsc{fallbackNode}($\mathcal{C}$) \Else \State $\mathcal{T}_{seq}$ $\leftarrow$ \textsc{sequenceNode}($\mathcal{C}$) \State $\mathcal{T}_{fal} \leftarrow$ \textsc{fallbackNode}($\mathcal{T}_{seq}$) \EndIf \State \texttt{a}, \texttt{preC} $ \leftarrow$ \textsc{getActions}(\texttt{schema}) \State $\mathcal{A} \leftarrow$ \textsc{actionNodes}(\texttt{a}) \State $\mathcal{C} \leftarrow$ \textsc{conditionNodes}(\texttt{preC}) \State $\mathcal{T}_{seq}$ $\leftarrow$ \textsc{sequenceNode}($\mathcal{C}$, $\mathcal{A}$) \State $\mathcal{T} \leftarrow $ \textsc{attachSubTree}($\mathcal{T}_{fal}$, $\mathcal{T}_{seq}$) \EndFor \State \Return{$\mathcal{T}$} \EndFunction \end{algorithmic} \caption{Load and instantiate a BT} \label{alg:allocate-tree} \end{algorithm} \subsection{Subtree priority} \label{subsec:priority} The modularity of standard \acp{bt} allows a complex task (tree) to be decomposed into subtasks (subtrees). For instance, a stacking task can be decomposed as a combination of pick and place subtasks. However, in standard \acp{bt}, the execution order of each subtask is predefined. Changing the execution order depending on discrete values is possible, but requires extra branches in the BT. Changing the execution order by monitoring continuous values like sensory data is typically not supported. In contrast, \acp{rbt} exploit logical pre- and postconditions, as well as continuous sensory data, to monitor the task execution. As already mentioned, a complex task is divided into subtrees. We assign pre- and postconditions to the root of each subtree. Therefore, a specific subtree is \emph{active} if all the preconditions are satisfied while the postconditions are not. A subtree correctly terminates by setting the postconditions. At each tick, the Emphasizer accesses the blackboard and looks for active subtrees. An execution conflict occurs every time more than one active subtree exists. In this case, we exploit a priority-based mechanism to dynamically decide which subtask to execute. We define the priority of a subtree as the runtime prominence for the execution of a specific subtask. The priority is a real value, normalized between $0$ and $1$, which tells the Instantiator which subtask to load and transform into an executable BT.
In this work, the priority $\epsilon$ is defined as \begin{equation} \label{eq:emph} \epsilon(\theta)= \begin{cases} 1 & \textrm{if } \theta \leq \theta_{\min} \\ \dfrac{\theta-\theta_{\max}}{\theta_{\min} - \theta_{\max}} & \textrm{if } \theta_{\min} < \theta < \theta_{\max} \\ 0 & \textrm{if } \theta \geq \theta_{\max} \end{cases} \end{equation} where $\theta$ is a continuous value coming from sensory data, and the thresholds $\theta_{\min}$ and $\theta_{\max}$ are tunable parameters. In this work, $\theta$ represents the distance to the objects to manipulate, so that the priority is inversely related to the distance, but other choices are possible. \subsection{Task execution and monitoring} \label{sec:task_execution} The generic \ac{rbt}, like the one depicted in Fig.~\ref{fig:rbt-overview}, is a goal-oriented tree that successfully terminates only if a certain goal is achieved (the \texttt{goal reached} condition becomes \texttt{True}). In our implementation, the goal of the RBT is achieved if the postconditions of all the subtrees are satisfied. As already mentioned, the execution of the RBT is achieved by periodically traversing the tree top to bottom and left to right (tick function). At each tick, the \texttt{goal reached} condition is tested. If \texttt{goal reached} is \texttt{False}, we check that the blackboard is initialized and then enter a Parallel node. Executing the Emphasizer and the subtree in parallel is convenient because sensory data and task execution are, in general, asynchronous processes. The Parallel node is designed to successfully terminate if both children are successfully executed. Since the Action node \texttt{handle priority} is always in the running state, the Parallel node cannot terminate. This implies that the RBT successfully terminates if and only if the goal is reached. If new sensory data arrive or a subtree postcondition changes, the Emphasizer recomputes the priority of the subtrees and sets the \texttt{priority changed} flag. This triggers the Instantiator, which preempts the current execution, deallocates the current subtree, and instantiates the subtree with highest priority. As already mentioned, this branch of the tree---the branch inside the orange polygon in Fig.~\ref{fig:rbt-overview}---is dynamically allocated at each tick. The dynamic allocation is needed for the correct execution of the task. To better understand this point, consider what happens if \texttt{execute subtree} returns \texttt{True}. In this case, the active subtask has been successfully executed and the Fallback node (\texttt{fallback\_1}) also returns \texttt{True} (see Table~\ref{tab:btformulation}). With a static branch, the tick would not enter \texttt{fallback\_1} anymore, leaving the remaining active subtasks unexecuted. With a dynamic branch, instead, the return state of \texttt{fallback\_1} is reset and the active subtask with highest priority is correctly instantiated and executed. \section{Evaluation}\label{sec:evaluation} \begin{figure}[t] \centering \subfigure[Possible initial configuration.]{\includegraphics[width=0.48\columnwidth]{img/Simulation_start.png}} \subfigure[Desired final configuration.]{\includegraphics[width=0.48\columnwidth]{img/Simulation_stop.png}} \caption{The sorting task where the robot has to pick $3$ boxes from the table and place them at a specific location in the white storage area.} \label{fig:setup} \end{figure} \begin{figure}[t!] \centering \input{img/pick_place_example.tex} \vspace*{-2mm} \caption{The BT used to pick a box from the table and place it in the storage area.
This BT is compactly represented by the \texttt{execute subtree} node in the RBT of Fig.~\ref{fig:rbt-overview}. The RBT uses the dashed nodes to impose the preconditions in case study~1, while in case study~2 they are omitted.} \label{fig:bt-pick-place} \end{figure} We evaluate \acp{rbt} in the sorting task shown in Fig.~\ref{fig:setup} where the robot has to pick $3$ colored boxes (red, blue, and green) from a table (Fig.~\ref{fig:setup}(a)) and place them at specific locations in a storage area (Fig.~\ref{fig:setup}(b)). The scenario is simulated in CoppeliaSim~\cite{coppeliaSim} using the Panda robot model provided by Gaz et al.~\cite{gaz2019dynamic}. We consider two different case studies and compare the performance of \acp{rbt} and \acp{bt} in terms of execution time and tree complexity---the number of nodes in the tree. {Note that the execution time is computed by summing up the tick times until the goal is reached and without considering the time spent by the robot to perform the commanded actions.} \acp{bt} are implemented using the open-source Python library \texttt{py\_tree}~\cite{pytrees_doc}. \acp{rbt} are also implemented in Python exploiting the basic \ac{bt} nodes provided by \texttt{py\_tree}. Figure~\ref{fig:bt-pick-place} shows the BT used to plan box-sorting subtasks. The solid nodes are common between \acp{bt} and \acp{rbt}, while the dashed nodes are exploited by \acp{rbt} to enforce a certain execution order specified by a set of preconditions \{\texttt{C\_11}, \texttt{C\_12}\}. As discussed in Sec.~\ref{sec:rbt}, \acp{rbt} dynamically attach the subtree in Fig.~\ref{fig:bt-pick-place} to the \textit{static} tree in Fig.~\ref{fig:rbt-overview}. Moreover, the standard BT is endowed with $6$ extra nodes (contained in the blue polygon in Fig.~\ref{fig:rbt-overview}) to monitor the execution of the task and successfully terminate after a goal is reached. The task is fulfilled when all the boxes are sorted in the storage area, as indicated by the task goal \texttt{G\_11 = b\_box placed $\land$ g\_box placed $\land$ r\_box placed}. The thresholds used to compute the priority in~\eqref{eq:emph} are empirically set to $\theta_{\min}=0.05\,$m (the length of the box side) and to $\theta_{\max}=1\,$m (the maximum distance that still allows grasping a box). The execution time is measured assuming that action nodes directly return \texttt{True} without entering the running state. \begin{figure}[t!] \centering \input{img/RBT_case_study_1.tex} \caption{The RBT used to plan the sorting task. In case study 1, \texttt{sort box} is subject to preconditions to constrain the execution. The blue nodes are common to \acp{bt} and \acp{rbt}.} \label{fig:rbt-sorting} \end{figure} \begin{table}[t] \caption{Comparison between RBT and standard BT.} \begin{center} \resizebox{0.8\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|} \hline Method & Case & \# Nodes & Execution Time (ms) \\ \hline \hline RBT & 1 & 19 - 22 & 507.04\\ \hline BT & 1 & 27 & 503.24 \\ \hline \hline RBT & 2 & 19 & 505.17\\ \hline BT & 2 & 151 & 819.78\\ \hline \end{tabular}} \end{center} \label{tab:tabev1} \end{table} \subsubsection{Case study 1} The goal of this experiment is to compare \acp{rbt} and \acp{bt} in a situation that favours the BT, i.e. when the task plan is rigid, no ambiguities are possible, and no external disturbances occur.
In this case, we assume that the boxes need to be sorted in a specific order: first we pick and place the blue box \texttt{b\_box}, then the green box \texttt{g\_box}, and finally the red box \texttt{r\_box}. This sorting order has been arbitrarily decided and does not affect the obtained results. The sorting task successfully terminates when the task goal \texttt{G\_11} is reached. \begin{figure}[t!] \centering \subfigure[]{\input{img/BT_case_study_1.tex}} \subfigure[]{\input{img/BT_case_study_2.tex}} \caption{The BT used to plan the sorting task of case study 1 (a) and case study 2 (b). The blue nodes are common to \acp{bt} and \acp{rbt}. The variable $d_i$, $i=b,r,g$, indicates the distance between the \texttt{i\_box} and the robot. Due to the limited space, we only show a partial \ac{bt} in (b). In particular, the stump containing the black sequence node and its children ($24$ nodes in total) handles the cases $d_b \leq d_g$ and $d_g \leq d_r$. Five similar stumps are needed to handle all possible combinations of box distances and are compactly indicated here by the symbol $\cdots$.} \label{fig:bt-pick-place-cases} \end{figure} The BT used to perform the sorting task is shown in Fig.~\ref{fig:bt-pick-place-cases}(a), where the $3$ Action nodes \texttt{sort} \texttt{\textsl{box\_name}} compactly represent the BT in Fig.~\ref{fig:bt-pick-place} (only the solid nodes are considered). As listed in Table~\ref{tab:tabev1}, the BT of Fig.~\ref{fig:bt-pick-place-cases}(a) has $27$ nodes. The RBT used to plan this task is shown in Fig.~\ref{fig:rbt-sorting} and is almost identical in the two case studies. The only difference is in the \texttt{sort box} Action node. In case study 1, we exploit preconditions (dashed nodes in Fig.~\ref{fig:bt-pick-place}) to impose a certain execution order. More specifically, the Action node \texttt{sort box} is allocated by the \textit{Instantiator} which, at runtime, dynamically instantiates a specialized version of the sorting task where the generic box is replaced by the one with the highest priority. As described in Sec.~\ref{subsec:priority}, the priority of a subtask depends on logical preconditions and continuous stimuli. In this case, external stimuli play no role and the execution order is fully determined by the preconditions. In particular, \texttt{sort b\_box} has no preconditions and is the first to be executed. \texttt{sort g\_box} has \texttt{C\_11 = b\_box placed} as precondition, while \texttt{sort r\_box} has \texttt{C\_11 = b\_box placed} and \texttt{C\_12 = g\_box placed} as preconditions. This guarantees that the sorting task is executed in the desired sequential order. As listed in Table~\ref{tab:tabev1}, the RBT has a variable number of nodes ($19$ to $22$). This is because the \texttt{sort} node has a variable number of preconditions for each box. Even if in case study 1 the \ac{rbt} does not show its full potential, we still obtain a reduction in the number of nodes in the tree ($22$ nodes in the RBT in the worst case, $27$ in the \ac{bt}). The time to execute the \ac{bt} is slightly smaller in this case, but the difference is negligible (less than $4\,$ms). \subsubsection{Case study 2} This experiment is designed to show the benefits of the priority-based task execution introduced by \acp{rbt}. The scenario is the same as in case study~1 but here the boxes are sorted without a predefined order.
In principle, there are $6$ possible plans to pick the $3$ boxes in Fig.~\ref{fig:setup}(a) and place them in the goal configuration shown in Fig.~\ref{fig:setup}(b); \texttt{sort b\_box, r\_box, g\_box} and \texttt{sort b\_box, g\_box, r\_box} are $2$ of the $6$ feasible plans. We use the robot-box distances to determine which plan the robot executes; in particular, we always sort first the box closest to the gripper. The \ac{rbt} used to plan this task is the same as in case study 1, except for the \texttt{sort box} Action node that has no preconditions. Therefore, the \ac{rbt} always has $19$ nodes (see Table~\ref{tab:tabev1}). At any time, all the boxes on the table are eligible to be sorted. These execution conflicts are managed in \acp{rbt} using the task priorities computed with~\eqref{eq:emph}. Hence, the \textit{Instantiator} is free to allocate the subtree with the highest priority, i.e., to start sorting the closest box. Once a box, for instance the blue one, is placed in the storage area, the postcondition \texttt{b\_box placed} becomes \texttt{True} and the \textit{Emphasizer} removes \texttt{sort b\_box} from the list of active nodes (see Sec.~\ref{subsec:priority}). This implies that \texttt{sort b\_box} is not instantiated in future ticks, allowing the robot to sort the other boxes and successfully complete the task. In order to reach a similar level of flexibility with standard \acp{bt}, we need many more nodes to represent the $6$ feasible plans in one tree and a more complicated control flow logic to determine the plan to execute based on robot-box distances. A possible solution that requires $151$ nodes is sketched in Fig.~\ref{fig:bt-pick-place-cases}(b). In this case, \acp{rbt} require $\approx 85\,$\% fewer nodes, which clearly makes the RBT easier to visualize, and achieve a reduction of the execution time of $\approx 38\,$\%. \section{Conclusion and Future Work} \label{sec:conclusion} We introduced Reconfigurable Behavior Trees, a novel framework that combines high-level decision making with continuous monitoring and control features. By combining the expressivity and modularity of standard behavior trees with the flexibility of attentional execution monitoring, \acp{rbt} allow the AI agent to perform actions in a robust and versatile way, while being capable of adjusting its behavior based on continuous input from the environment. The proposed framework has been tested in a sorting task and compared with standard \acp{bt} in terms of tree complexity and computation time. The evaluation shows that \acp{rbt} outperform standard \acp{bt}, especially when the task plan is not rigidly defined and ambiguities in the execution need to be solved. This paper focused on the theoretical description of \acp{rbt} and comparisons with traditional \acp{bt}. In \cite{saveriano2020combining}, we combined \acp{rbt} with a reactive motion generation approach to correct the robot trajectory on-the-fly without a computationally expensive motion re-planning step. The preliminary results shown in this paper and in \cite{saveriano2020combining} are promising and suggest that \acp{rbt} are better suited to dynamic and uncertain environments than \acp{bt}. Therefore, we expect that \acp{rbt} can significantly contribute to the deployment of social robotics applications. This clearly requires further development of the proposed framework. Among others, the \ac{rbt} framework has to be tested in human-robot interaction scenarios.
Uncertainty introduced by the human in the loop can be estimated using, for instance, neural networks \cite{kabir2018neural} and can be explicitly considered in the decision process. \balance \bibliographystyle{IEEEtran}
\section{Introduction} \label{section:1} The \emph{balance question} appears in IQ tests and serves as a good starting point for an introductory lecture on information theory. A simple balance question often involves a fair balance and a set of visually identical coins in which only a counterfeit one has a different weight from the others. For example: \emph{There is one heavier counterfeit coin among twelve coins; the remaining eleven weigh the same. Given a two-armed balance, at least how many weighings does it take to find the overweight coin?} Elements from information theory such as entropy provide an elegant bound for this question. The balance question has many variants, e.g., we might only know that one coin has a different weight (but not whether it is heavier or lighter), the number of coins could be an arbitrary integer, the balance might be biased, etc. Some later variants adopt even more far-fetched assumptions, such as more than one counterfeit coin or a multi-armed balance \cite{de1998predetermined,wen2004optimal,liu2005searching}. The traditional balance setting in a gravity field requires that the numbers of coins on the two sides of the balance be equal. We adopt an electromagnetic setting where the genuine coins are neutral while the counterfeit coin is charged, so the numbers of coins on the two sides of the balance need not be identical. This setting yields some tighter bounds than the gravity setting. In the balance question, we consistently play as the human player who tries to find the counterfeit coin. What if the player has to determine his/her strategy beforehand instead of deciding it adaptively? In the \emph{balance game} one takes the position of the balance and tries to hide the counterfeit coin from the human player. The same set of methods can be applied to the game setting in parallel. After reviewing the traditional understanding of the balance question, we study the \emph{balance game}. Finally, we introduce the \emph{dishonest balance game}, in which the balance can cheat the human player, and present some analysis of this game. The contributions of this paper are: \begin{enumerate} \item We propose the balance game in an electromagnetic setting. A coding framework, together with the probabilistic method, is adopted to analyze the balance question/game under this setting. \item Some results on the balance game are derived under this framework, such as the winning conditions for the human player and the balance. Moreover, we prove an interesting result: when the player adopts a completely random strategy, it is optimal for him/her to put each coin on the left/right side of the balance or off the balance independently and uniformly (each with probability $\frac{1}{3}$) at each round. \item The dishonest balance game is proposed and an elementary bound on the winning strategy of the balance is provided. \end{enumerate} Section \ref{section:2} reviews the traditional understanding of the balance question through information entropy. Section \ref{section:3} introduces the coding formulation and the probabilistic method for the balance question as well as the balance game. Section \ref{section:4} introduces the dishonest balance game, together with the games that motivate this proposal, and gives an analytic bound for it. Section \ref{section:5} gives the conclusion and some discussion.
\section{Balance Question and Entropy} \label{section:2} To quantify a balance question/game, we use $(n,q,\text{prior})$ to indicate the setting, where $n$ is the number of coins, $q$ the number of weighings, and the third parameter the prior information. It is known that \emph{entropy} is a good metric for measuring the quantity of information \cite{cover1999elements}. A lower bound on the number of the human player's moves in the balance question is obtained using entropy. Assume that one wants to find one overweight coin out of $n$ coins. There are altogether $n$ possibilities (each of the $n$ coins could be heavier), so the entropy of this set of coins is no higher than $\log_{2}n$ bits. Each time the balance yields a result, the information released is at most $\log_{2} 3$ bits (the balance can tilt to either side or stay level). Thus, in the worst case, the player cannot detect the correct coin with fewer than $$\frac{\log_{2}n}{\log_{2}3}=\log_{3}n$$ weighings. That is to say, if the human player is not allowed to use the balance more than $\log_{3}n-1$ times, then it is always possible (for the balance) to hide a coin such that the player cannot find it. The generalization to a special coin with either higher or lower weight is similar: one simply replaces the number of possibilities $n$ by $2n$. However, even if the player is allowed to use the balance $\log_{3}2n$ times, he/she is not guaranteed to correctly find the counterfeit coin. The reason is that the player might fail to design a configuration of coins for which the three balance outcomes are equally likely; in that case the information yielded by each weighing is less than $\log_{2} 3$ bits, and the total amount of information is insufficient. Occasionally, there are cases where the discrete nature of weighing is at odds with the theory. This is often observed when $n$ is small. For example, if the player is asked to find one coin with different weight among $n$ coins with $q=3$ weighings, then entropy only guarantees that $n\leq 13$. But in practice we must have $n \leq 12$; that is, the player can never win a $(13,3,\text{unknown})$-balance question. To see this, we write the space of all possibilities as $H=\left\{1^{-},1^{+},\cdots,13^{-},13^{+} \right\}$, where $i^{+}$ ($i^{-}$) means that the $i$-th coin is heavier (lighter). During the first weighing, the human player can do no more than put $a$ coins on each side of the balance (w.l.o.g. we assume that the player puts coins indexed 1 to $a$ on the left and those indexed $a+1$ to $2a$ on the right), which divides $H$ into three parts: $$\left\{1^{+},2^{+},\cdots,a^{+},(a+1)^{-},\cdots,(2a)^{-}\right\},$$ $$\left\{1^{-},2^{-},\cdots,a^{-},(a+1)^{+},\cdots,(2a)^{+}\right\},$$ $$\left\{(2a+1)^{+},(2a+1)^{-},\cdots,13^{+},13^{-} \right\},$$ whose sizes are $2a$, $2a$, and $26-4a$, respectively. To make sure the next two weighings can yield an answer, it is necessary that $2a\leq 9$ and $26-4a \leq 9$, resulting in $$4.25 \leq a\leq 4.5,$$ which is inconsistent with the fact that $a$ must be an integer (the short sketch below makes this obstruction explicit). But when the number of coins $n$ grows very large, it is almost always safe to expect that $\log_{3} n$ weighings yield a good result.
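As a quick sanity check, the counting above can be reproduced with a few lines of Python (a minimal sketch of the argument, not a tool used elsewhere in the paper):
\begin{verbatim}
# (13, 3, unknown)-balance question: after a first weighing with a coins
# on each pan, the three outcome branches hold 2a, 2a, and 26 - 4a
# possibilities; each must fit into 3^2 = 9 for the two remaining
# weighings to succeed.
for a in range(1, 7):  # 2a <= 13 coins available
    branches = (2 * a, 2 * a, 26 - 4 * a)
    if max(branches) <= 9:
        print(f"a = {a} keeps every branch solvable")
        break
else:
    print("no integer a works: (13, 3, unknown) is unwinnable")
\end{verbatim}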
\section{Balance Game} \label{section:3} Now we turn to the balance's position and study the \emph{balance game}. In the balance question, the human player plays with an \emph{adaptive strategy}, i.e., the configuration of coins at the $(j+1)$-th round is decided given the tilting results of the previous $j$ rounds. To make room for a strategy of the balance, we now have the human player commit to a \emph{predetermined strategy} without observing the balance: at the beginning of the game, the human player has to decide which coins are put onto the left/right side of the balance or kept off the balance at each of the $q$ rounds. The balance then yields a sequence of weighing results from which the human player tries to deduce the index of the exceptional coin. We further assume that the game takes place in an electromagnetic setting. The genuine coins are neutral while the counterfeit is charged either positively or negatively. The target of the human player is to distinguish a (positively or negatively) charged coin from the other neutral coins, but the only device available is a test charge which reacts only to the charged coin, as in Figure~\ref{figure:0}. \begin{figure}[htb] \centering \begin{tikzpicture} \draw (0,0) rectangle (2,2); \draw (3,1) [red] circle (0.45); \node at (3,1) [red] {$+Q$}; \draw (4,0) rectangle (6,2); \draw (0.3,0.4) circle (0.2); \draw (1.1,0.7) circle (0.2); \draw (0.4,1.5) circle (0.2); \draw (1.7,0.6) [red] circle (0.2); \node at (1.7,0.6) [red] {$+$}; \draw (5.5,1.3) circle (0.2); \draw (4.3,1.7) circle (0.2); \draw (3.55,1)--(3.9,1) [-latex,red]; \draw (3.55,1.3)--(3.9,1.3) [-latex,red]; \draw (3.55,0.7)--(3.9,0.7) [-latex,red]; \end{tikzpicture} \caption{An electromagnetic balance.} \label{figure:0} \end{figure} An electromagnetic balance in vacuum consists of two distant boxes that contain coins and one test charge $+Q$. After coins are put into the two boxes, the test charge shifts toward either side or stays where it was. Unlike in a gravity field, the charged coin is the only cause of the shift of the test charge, and one need not ensure that the numbers of coins in the two boxes are the same. At each round, the human player selects two subsets of coins and observes the shift of the test charge. This assumption is essential to the balance game, since it allows the balance to yield an arbitrary result given the human player's configuration, while the human player cannot claim that the balance violates physical law on the basis of \emph{one single} weighing. For historical reasons, we keep the vocabulary of \emph{light} and \emph{heavy} in the following discussion. One might intuitively assert that the balance game is harder than the balance question for the human player, since the player has to decide the strategy without the gradual exposure of information; one might even question the tightness of the bound yielded by entropy, $2n \approx 3^{q}$. However, as we are going to show, the balance game is no harder. Moreover, given the electromagnetic setting, the bound derived by entropy turns out to be tighter than in the classical balance question. For example, the human player has a must-win strategy in the $(13,3,\text{unknown})$-balance game using a delicate coding method, while a concise algorithm guarantees the victory of the balance whenever $2n>3^{q}$. Even a human player incapable of adopting a perfect scheme at $2n=3^{q}-1$ can make the balance suffer a lot by deploying a naive randomized strategy.
We begin the analysis with an easier case where we know the counterfeit coin is heavier (negatively charged). \subsection{The $(n,q,\text{heavy})$-balance game} The $(n,q,\text{heavy})$-balance game involves $n$ visually identical coins, one of which is heavier than the others; the player is allowed to use the balance for $q$ rounds. Assuming the balance knows the strategy of the human player, under which circumstances does it have a winning strategy? To comprehend this game, it is better to adopt a coding framework. The strategy of the human player is embedded into an $n\times q$ strategy matrix $S$, while the strategy of the balance is embedded into a $1\times q$ mask code $M$; the player observes $M$, transcribes his/her strategy matrix $S$ into the $n\times q$ observation matrix $O$, and tries to infer the counterfeit coin from it. Take the $(n,q,\text{heavy})$-balance game as an instance. The human player chooses $n$ codes, each of length $q$, from the alphabet $\left\{\text{L},\text{R},\text{O}\right\}$ to form an $n\times q$ matrix $S$. If $S_{i,j}=\text{L}$ then the $i$-th coin is put on the left side of the balance at the $j$-th round; if $S_{i,j}=\text{R}$ then the $i$-th coin is put on the right side of the balance at the $j$-th round; if $S_{i,j}=\text{O}$ then the $i$-th coin is not put on the balance at the $j$-th round. The balance (or the evil spirit within), knowing the strategy $S$ of the human player, puts forward a mask $M$ of length $q$ using the three characters $\left\{\hat{\text{L}},\hat{\text{R}},\hat{\text{D}}\right\}$: if $M_{j}=\hat{\text{L}}$ then the balance says that the left side is heavier at the $j$-th round, $\hat{\text{R}}$ says that the right side is heavier, and $\hat{\text{D}}$ represents a draw. After observing $M$, the human player translates the effect of the mask code onto $S$ using the transcription rules in Table~\ref{table:1}: \begin{table} \Large \caption{Transcription table for the $(n,q,\text{heavy})$-balance game.} \begin{center} \begin{tabular}{c|c|c|c} \toprule \ & $\hat{\text{L}}$ & $\hat{\text{R}}$ & $\hat{\text{D}}$\\ \midrule L & $+$ & $\times$ & $\times$ \\ R & $\times$ & $+$ & $\times$ \\ O & $\times$ & $\times$ & $+$ \\ \bottomrule \end{tabular} \label{table:1} \end{center} \end{table} Transcribing $S$ using $M$ simply assigns to each entry $O_{i,j}$ the entry at the intersection of the $S_{i,j}$-th row and the $M_{j}$-th column of Table~\ref{table:1}. The physical meaning is the following: $O_{i,j}=+$ means the $i$-th coin is possibly heavier according to the $j$-th examination (the side on which it lies is judged to be heavier, or the balance yields a draw while it is kept off the balance), whereas $O_{i,j}=\times$ means that it is firmly not heavier accordingly. As a toy illustration, we demonstrate the $(4,2,\text{heavy})$-balance game. Entropy says that the human can \emph{possibly} win a $(4,2,\text{heavy})$-balance question. What about forcing the human player to present the strategy without knowing the weighing results? Consider: \begin{equation} \label{equation:1} S= \begin{pmatrix} \text{L} & \text{L}\\ \text{L} & \text{R}\\ \text{R} & \text{L}\\ \text{R} & \text{R} \end{pmatrix}, \end{equation} and $$M=\left(\hat{\text{L}},\hat{\text{R}} \right).$$ Then the corresponding transcribed matrix is: $$O= \begin{pmatrix} + & \times\\ + & +\\ \times & \times\\ \times & + \end{pmatrix}.$$ Hence the only coin that is possibly heavier is the second one; a minimal sketch of this bookkeeping is given below.
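The following short Python sketch (an illustrative implementation; the ASCII tokens \texttt{Lh}, \texttt{Rh}, \texttt{D} stand for $\hat{\text{L}}$, $\hat{\text{R}}$, $\hat{\text{D}}$) reproduces the transcription rule of Table~\ref{table:1} and recovers the candidate coin:
\begin{verbatim}
# Transcription for the (n, q, heavy)-balance game: a coin remains a
# candidate at round j iff its placement matches the reported tilt.
MATCH = {("L", "Lh"), ("R", "Rh"), ("O", "D")}

def candidates(S, M):
    """Indices of coins that are possibly heavier, given the strategy
    matrix S (rows over L/R/O) and the mask M (over Lh/Rh/D)."""
    return [i for i, row in enumerate(S)
            if all((s, m) in MATCH for s, m in zip(row, M))]

S = [("L", "L"), ("L", "R"), ("R", "L"), ("R", "R")]  # the strategy above
print(candidates(S, ("Lh", "Rh")))  # -> [1], i.e., the second coin
\end{verbatim}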
One can run such a check to confirm that the formulation is consistent with the setting. In this case the human player wins the game and the balance loses with $M=(\hat{\text{L}},\hat{\text{R}})$. Generally, the $i$-th coin is possibly heavier iff the $i$-th row of $O$ contains only $+$. So the player wins if the transcribed matrix contains zero or exactly one row with no $\times$. If there is no row with only $+$, then the player knows that the balance must have yielded at least one incorrect result (e.g., let $S$ be defined as before and $M$ be $(\hat{\text{D}},\hat{\text{L}})$). If there are multiple rows that contain only $+$, then the player cannot distinguish between them and the balance wins. In the $(4,2,\text{heavy})$-balance game, adopting the strategy in \eqref{equation:1} guarantees the victory of the human player: one can check this by examining all nine possible masks and seeing that the number of $(+\ \ +)$ rows, as a function of $M$, is either one or zero. To delve into the game-theoretic aspect, we ask this question: \emph{Given $S$, is it possible for the balance to select a mask code such that the player cannot locate the counterfeit coin from the transcribed matrix $O$?} The case before shows that for some designs of $S$, the balance is bound to fail. In other words, once the human player constructs such an $S$, he/she can claim victory without seeing the weighing results or actually conducting the deduction. On the other hand, in order to win the $(n,q,\text{heavy})$-balance game, the balance has to design a mask code $M(S)$ for each strategy $S$ of the human player such that $O(M(S))$ contains more than one pure-$+$ row. To study this property, we resort to a probabilistic method. The idea is to let the balance encode a mask randomly (recall that the balance does not have to fix a counterfeit coin beforehand; it only tries to fool the human player without violating logic). We now place a probability measure on the space of the mask codes of the balance, $$\Omega=\left\{\hat{\text{L}},\hat{\text{R}},\hat{\text{D}} \right\}^{q}.$$ Let each position of $M$ be selected independently and uniformly from $\hat{\text{L}},\hat{\text{R}},\hat{\text{D}}$, so each $M\in\Omega$ has probability $3^{-q}$. For the $i$-th coin, consider the event $B_{i}$: \emph{the $i$-th coin is possibly the heavier coin}. The probability of this event is simply $3^{-q}$: whether $S_{i,j}$ is L, R, or O, the probability that the $j$-th weighing keeps the coin a candidate is uniformly $\frac{1}{3}$, corresponding to $M_{j}$ being $\hat{\text{L}}$, $\hat{\text{R}}$, or $\hat{\text{D}}$, respectively. In other words, $B_{i}$ is true iff the mask $M$ is the image of $S_{i}$ under the one-to-one mapping $\text{L}\rightarrow \hat{\text{L}},\text{R}\rightarrow \hat{\text{R}},\text{O}\rightarrow \hat{\text{D}}$. Let $X_{i}:\Omega\rightarrow\left\{0,1\right\}$ be the indicator random variable of this event, and let $X=\sum_{i=1}^{n}X_{i}$ be the random variable that counts the number of coins possibly heavier; now $$\mathbb{E}[X_{i}]=\text{Pr}(B_{i})=3^{-q},$$ so $$\mathbb{E}[X]=\sum_{i=1}^{n}\mathbb{E}[X_{i}]=\frac{n}{3^{q}}.$$ If $\mathbb{E}[X] > 1$ then there must exist a mask $M'\in\Omega$ such that $X(M')>1$, which means that more than one coin is possibly heavier; the human player then lacks sufficient information, and the balance wins the game. In this naive case (where we know that the counterfeit coin is heavier), the bound is the same as the one yielded by entropy.
On the other hand, if $n,q$ make $\mathbb{E}[X]\leq 1$, we only know that it is possible for the balance to lose, and we can only conclude that the balance does not have a must-win strategy; should the human player play unwisely, the balance can still choose an $M$ and win the game. Additionally, when $X(M')=0$, the human player can argue that the balance violates logic, so it loses. Some interesting and straightforward results can be read off from this setting as well: \textbf{Theorem:} If $n=2^{q}$ then there exists a must-win strategy for the human player, in which at each round all $n$ coins are put onto the balance. \textbf{Proof:} Let the $i$-th row of $S$ be the $q$-digit binary representation of $i-1$ with $0\rightarrow \text{L}$ and $1\rightarrow \text{R}$. Since no coin is ever kept off the balance, the character $\hat{\text{D}}$ is removed from the balance's alphabet (otherwise the value of $X$ becomes zero and the balance loses). Mapping $\hat{\text{L}}$ ($\hat{\text{R}}$) in $M$ to $0$ ($1$), there must exist one row in $S$ that is consistent with $M$; its transcription is $q$ consecutive $+$s, and this is the only coin that is possibly heavier, since every other transcribed row contains at least one $\times$. \qed \textbf{Corollary:} If $n\leq 2^{q}$ then there exists a must-win strategy for the human player, in which at each round all $n$ coins are put onto the balance. \textbf{Proof:} Using binary coding, the rest is the same as in the theorem before. \qed Finally we have: \textbf{Theorem:} If $n\leq 3^{q}$ then there exists a must-win strategy for the human player. Otherwise the balance has a must-win strategy. \textbf{Proof:} Use the $q$-digit ternary code of $i-1$ as $S_{i}$ ($0\rightarrow\text{L},1\rightarrow\text{R},2\rightarrow\text{O}$); since $n\leq 3^{q}$, such a configuration is always legal. Now if $M$ is the ternary code of $l-1$ for some $l\leq n$ (taking $0\rightarrow\hat{\text{L}}, 1\rightarrow \hat{\text{R}},2\rightarrow\hat{\text{D}}$), then the $l$-th coin is the only candidate according to our previous construction. If $l > n$ then $M$ blocks all candidate coins and the human player can claim that the balance is cheating. If $n > 3^{q}$ then there must be two indices $i_{1},i_{2}$ such that $S_{i_{1}}=S_{i_{2}}$ componentwise. Now let $M$ be the image of $S_{i_{1}}$ under $\text{L}\rightarrow\hat{\text{L}}, \text{R}\rightarrow \hat{\text{R}},\text{O}\rightarrow \hat{\text{D}}$; then the player can neither distinguish the $i_{1}$-th coin from the $i_{2}$-th coin nor argue that the balance is lying. \qed The bound $n=3^{q}$ is tight in the sense that if $n\leq 3^{q}$ then the human must win, and if $n>3^{q}$ then the balance must win. Therefore, when both the player and the balance play wisely, the condition $n\leq 3^{q}$ is both necessary and sufficient for the player to win, and \emph{vice versa}. This is the same as in the original $(n,q,\text{heavy})$-balance question where the player adopts an adaptive strategy in a gravity field; the ternary construction is sketched below.
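A minimal Python sketch of this ternary construction, under our $0/1/2 \rightarrow \text{L}/\text{R}/\text{O}$ convention (the mask is abbreviated with the same letters):
\begin{verbatim}
# Must-win strategy for the (n, q, heavy)-balance game with n <= 3^q:
# encode coin i by the q-digit ternary expansion of i - 1; every honest
# mask then singles out exactly one candidate coin.
DIGITS = "LRO"

def strategy(n, q):
    assert n <= 3 ** q
    return ["".join(DIGITS[(i // 3 ** j) % 3] for j in range(q))
            for i in range(n)]

def decode(S, M):
    hits = [i + 1 for i, row in enumerate(S) if row == M]
    return hits[0] if len(hits) == 1 else None  # None: balance caught lying

S = strategy(13, 3)
print(decode(S, S[6]))  # an honest balance hiding coin 7 must expose it -> 7
\end{verbatim}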
\textbf{Remark A:} Although one might eagerly move to the $(n,q,\text{light})$-balance game, we have to point out that such a generalization is not straightforward. In the $(n,q,\text{heavy})$ setting we can always assume that the heavier coin's weight is larger than the sum of all the rest, so no matter how the player puts coins on either side of the balance, the balance can always use the character $\hat{\text{L}}$ or $\hat{\text{R}}$. But the $(n,q,\text{light})$ setting forbids the balance from assigning the character $\hat{\text{L}}$ to a configuration such as one coin on the left and two coins on the right. So it is better to adopt the electromagnetic setting. \textbf{Remark B:} We have been placing a probability measure on $\Omega$, the space of mask codes. One can instead make the space of strategies $S$ into a probability space; the result is exactly the same. However, it is more intuitive to assume that the human player commits to the sequence of coin configurations, as it seems physically improper to ask the human player to devise such a sequence given the weighing results of the balance. We are nevertheless going to see how a random strategy of the human player can equally torture the balance. \subsection{The $(n,q,\text{unknown})$-balance game} What if the prior information is hidden from the human player, so that he/she also has to infer whether the counterfeit coin is heavier or lighter? We construct $S$ in the same way as before (since the human player can do no more than put coins onto the two sides of the balance or keep them off), but the transcription table is more complex, since $+$ and $\times$ are no longer the only possible characters. We have to include the character $-$ in the observation matrix to represent \emph{possibly lighter}. Moreover, if the balance yields a draw, we can no longer assign a uniform $+$ or $-$ character to the coins off the balance; instead they are assigned a $\pm$ sign to indicate that they could be heavier or lighter. So the transcription table reads as Table~\ref{table:2}: \begin{table}[htbp] \Large \caption{Transcription table for the $(n,q,\text{unknown})$-balance game.} \begin{center} \begin{tabular}{c|c|c|c} \toprule \ & $\hat{\text{L}}$ & $\hat{\text{R}}$ & $\hat{\text{D}}$\\ \midrule L & $+$ & $-$ & $\times$ \\ R & $-$ & $+$ & $\times$ \\ O & $\times$ & $\times$ & $\pm$ \\ \bottomrule \end{tabular} \end{center} \label{table:2} \end{table} The transcription rule is exactly the same as in the $(n,q,\text{heavy})$-balance game. For example, when $S_{i,j}=\text{R}$ and $M_{j}=\hat{\text{L}}$, $O_{i,j}$ is assigned the character $-$. $O_{i,j}=+$ ($-$, $\pm$) means that the $i$-th coin is possibly heavier (lighter, heavier or lighter) according to the $j$-th examination, while $O_{i,j}=\times$ means that it is firmly not counterfeit accordingly. We illustrate a toy $(8,3,\text{unknown})$-balance game to examine the accuracy of this paradigm. Let the strategy matrix of the player be: $$S= \begin{pmatrix} \text{R} & \text{L} & \text{L}\\ \text{R} & \text{L} & \text{R}\\ \text{R} & \text{L} & \text{O}\\ \text{R} & \text{R} & \text{L}\\ \text{L} & \text{R} & \text{R}\\ \text{L} & \text{R} & \text{O}\\ \text{L} & \text{O} & \text{L}\\ \text{L} & \text{O} & \text{R} \end{pmatrix},$$ and let $$M=\left(\hat{\text{L}},\hat{\text{R}},\hat{\text{D}} \right).$$ Then the transcribed matrix reads: $$O= \begin{pmatrix} - & - & \times\\ - & - & \times\\ - & - & \pm \\ - & + & \times\\ + & + & \times\\ + & + & \pm \\ + & \times & \times\\ + & \times & \times\\ \end{pmatrix}.$$ Hence it is possible that the 3rd coin is lighter or the 6th coin is heavier. In this case the balance successfully beats the human player and wins the game; the short sketch below reproduces this bookkeeping.
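A minimal extension of the earlier sketch to Table~\ref{table:2} (again with \texttt{Lh}, \texttt{Rh}, \texttt{D} as ASCII stand-ins for the mask characters):
\begin{verbatim}
# Transcription for the (n, q, unknown)-balance game: a coin is possibly
# heavier if its row avoids both 'x' and '-', and possibly lighter if it
# avoids both 'x' and '+'.
T = {("L", "Lh"): "+", ("L", "Rh"): "-", ("L", "D"): "x",
     ("R", "Lh"): "-", ("R", "Rh"): "+", ("R", "D"): "x",
     ("O", "Lh"): "x", ("O", "Rh"): "x", ("O", "D"): "pm"}

def verdicts(S, M):
    out = []
    for i, row in enumerate(S):
        o = [T[s, m] for s, m in zip(row, M)]
        if "x" not in o and "-" not in o:
            out.append((i + 1, "possibly heavier"))
        if "x" not in o and "+" not in o:
            out.append((i + 1, "possibly lighter"))
    return out

S = [("R","L","L"), ("R","L","R"), ("R","L","O"), ("R","R","L"),
     ("L","R","R"), ("L","R","O"), ("L","O","L"), ("L","O","R")]
print(verdicts(S, ("Lh", "Rh", "D")))
# -> [(3, 'possibly lighter'), (6, 'possibly heavier')]
\end{verbatim}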
Entropy tells us that the human player \emph{might} have a must-win strategy against the balance; within the current framework, this is confirmed by selecting the following delicate strategy: $$S= \begin{pmatrix} \text{R} & \text{L} & \text{R} \\ \text{R} & \text{R} & \text{O} \\ \text{R} & \text{O} & \text{L} \\ \text{L} & \text{L} & \text{R} \\ \text{L} & \text{R} & \text{O} \\ \text{L} & \text{O} & \text{L} \\ \text{O} & \text{L} & \text{L} \\ \text{O} & \text{R} & \text{L}\\ \end{pmatrix}.$$ It might be a little counter-intuitive that in the third round only two coins are put onto the right side of the balance while four coins are on the left side. One can check (although tediously) that for any of the 27 possible masks of the balance, the number of coins that are possibly heavier/lighter is either zero or one, so either the balance yields a logically impossible weighing sequence (e.g., $M=\hat{\text{D}}\hat{\text{D}}\hat{\text{L}}$) or the sequence is sufficient for the human player to determine the counterfeit coin (e.g., $M=\hat{\text{D}}\hat{\text{L}}\hat{\text{R}}$). We now turn to a statement that reveals the essence of the $(n,q,\text{unknown})$-balance game: \textbf{Theorem: } If $2n \leq 3^{q}-1$ then the human player has a must-win strategy; otherwise the balance has a must-win strategy. \textbf{Proof: } Consider two ternary codes $S_{1}, S_{2}$ (with alphabet L, R, O) of length $q$. If they satisfy the following conditions, we say that they are \emph{partially complementary to each other}: \begin{itemize} \item Whenever $S_{1,j}=\text{O}$, $S_{2,j}$ has to be O, and \emph{vice versa}. \item Whenever $S_{1,j}=\text{L}$ (R), $S_{2,j}$ has to be R (L). \end{itemize} Concisely, the non-O parts of $S_{1}$ and $S_{2}$ are complementary to each other. Now if both $S_{1}$ and $S_{2}$ appear as rows of $S$, then the balance wins the game by adopting the following mask $M$: $$ M_{j}= \left\{ \begin{aligned} &\hat{\text{L}}, \quad\ \ S_{1,j}=\text{L},S_{2,j}=\text{R},\\ &\hat{\text{R}}, \quad\ \ S_{1,j}=\text{R},S_{2,j}=\text{L},\\ &\hat{\text{D}}, \quad\ \ S_{1,j}=S_{2,j}=\text{O},\\ \end{aligned} \right. $$ for then the human player cannot distinguish whether the coin encoded by $S_{1}$ is heavier or the coin encoded by $S_{2}$ is lighter. Thus, if two ternary codes of length $q$ with alphabet L, R, O are partially complementary to each other, they cannot appear simultaneously as rows of $S$. That is to say, the size of the set of legal codes for coins is no larger than $$\frac{3^{q}-1}{2}+1.$$ If $n\geq \frac{3^{q}-1}{2}+1$ then the balance can win in either one of the following two ways: \begin{itemize} \item If there exist $i_{1}$ and $i_{2}$ such that $S_{i_{1}}=S_{i_{2}}$, or $S_{i_{1}}$ and $S_{i_{2}}$ are partially complementary to each other, then the balance wins by transforming $S_{i_{1}}$ into $M$ with $\text{L}\rightarrow\hat{\text{L}},\text{R}\rightarrow\hat{\text{R}},\text{O}\rightarrow \hat{\text{D}}$. Under this mask, the $i_{1}$-th coin and the $i_{2}$-th coin remain a mystery for the human player. \item If $n=\frac{3^{q}-1}{2}+1$ and there are neither duplications nor partial complementarities, then there must exist an $i$ such that $S_{i}=(\text{O},\text{O},\cdots,\text{O})$. Let $M=(\hat{\text{D}},\hat{\text{D}},\cdots,\hat{\text{D}})$; then the player can only learn that the $i$-th coin has a different weight, but whether or not it is heavier remains unknown. \end{itemize} On the other hand, if $n\leq \frac{3^{q}-1}{2}$ then the human player has a must-win strategy.
He/she only has to select $n$ codes from the codebook $\left\{\text{L},\text{R},\text{O} \right\}^{q}\setminus\left\{(\text{O},\text{O},\cdots,\text{O})\right\}$ modulo partial complementarity. Now if the mask code $M$ is partially complementary/identical to some $S_{i}$, then the unique $i$-th coin is lighter/heavier; otherwise the player can conclude that the balance is lying in some experiment. \qed We illustrate an example of the $(13,3,\text{unknown})$-balance game with the configuration designed in the proof of the previous theorem: $$S= \begin{pmatrix} \text{L} & \text{L} & \text{L} \\ \text{L} & \text{L} & \text{R} \\ \text{L} & \text{R} & \text{L} \\ \text{L} & \text{R} & \text{R} \\ \text{O} & \text{R} & \text{R} \\ \text{O} & \text{L} & \text{R} \\ \text{R} & \text{O} & \text{L} \\ \text{L} & \text{O} & \text{L} \\ \text{R} & \text{L} & \text{O} \\ \text{L} & \text{L} & \text{O} \\ \text{O} & \text{O} & \text{R} \\ \text{L} & \text{O} & \text{O} \\ \text{O} & \text{R} & \text{O} \\ \end{pmatrix}. $$ Then a mask, e.g., $M=(\hat{\text{D}},\hat{\text{L}},\hat{\text{R}})$, can identify only $(\text{O},\text{R},\text{L})$ or $(\text{O},\text{L},\text{R})$, of which one and only one code must have appeared as a row of $S$. \textbf{Corollary: } If $n=\frac{3^{q}-1}{2}$ then the number of different must-win strategy matrices is $$2^{n}\cdot n!q!.$$ \textbf{Proof: } Let $l$ be the number of characters from $\left\{\text{L},\text{R}\right\}$ in one row; the subspace of codes with $(q-l)$ Os then has size $\binom{q}{l}\cdot 2^{l}$. In this subspace there are $\binom{q}{l}\cdot 2^{l-1}$ pairs of partially complementary codes, from each of which one has to select one code; hence there are altogether $2^{\binom{q}{l}\cdot 2^{l-1}}$ different choices. Multiplying over different $l$ yields that the number of strategies is $$\prod_{l=1}^{q}2^{\binom{q}{l}\cdot 2^{l-1}}.$$ The binomial theorem reduces this value to $2^{n}$. Finally, one strategy can be compiled into $n!q!$ matrices, giving $2^{n}\cdot n!q!$ matrices in total. \qed \textbf{Remark:} The ratio between the number of perfect strategies and the size of the entire strategy space, i.e., the probability that a completely random strategy is optimal, is $$\text{Pr}(\text{success})=\frac{2^{n}n!q!}{3^{n\cdot q}}.$$ It is better to evaluate its logarithm, which is a function of $q$; the result is shown in Figure~\ref{figure:ratio}. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth] {./logs.jpeg} \caption{$\log \text{Pr(success)}$.} \label{figure:ratio} \end{figure} Generally, it is almost impossible for a random strategy to be optimal for large $q$; the short computation below reproduces this decay. It is nevertheless instructive to study the randomized strategy.
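A few lines of Python suffice to evaluate $\log \text{Pr}(\text{success})$ (a sketch using the natural logarithm; \texttt{lgamma} supplies $\ln n!$):
\begin{verbatim}
# log Pr(success) = log(2^n n! q!) - n q log 3, with n = (3^q - 1) / 2.
from math import lgamma, log

def log_pr_success(q):
    n = (3 ** q - 1) // 2
    return n * log(2) + lgamma(n + 1) + lgamma(q + 1) - n * q * log(3)

for q in range(2, 7):
    print(q, round(log_pr_success(q), 1))  # rapidly more negative
\end{verbatim}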
\subsection{Randomized strategy} We now take a probabilistic perspective on this game. What makes the $(n,q,\text{unknown})$-balance game harder to analyze with the probabilistic method is that the transcription table is not row-wise homogeneous (i.e., different rows may contain different sets of characters), hence the probability that $O_{i,j}$ is assigned each of the four characters $\left\{\times,+,-,\pm \right\}$ is no longer independent of $S_{i,j}$. Fortunately, the first two rows of the transcription Table~\ref{table:2} contain the same set of characters, so we let $q_{i}$ be the number of times the $i$-th coin is put onto the balance (note that $q_{i}$ is a statistic of $S$; when we compute conditioning on $q_{i}$ we actually compute conditioning on $S$, hence the factor $\binom{q}{q_{i}}$ is not needed). For the balance, at each round the characters $\hat{\text{L}}$ and $\hat{\text{R}}$ are each inserted into $M$ with probability $p$, and $\hat{\text{D}}$ with probability $(1-2p)$. Therefore the probability that the $i$-th coin turns out to be possibly heavier/lighter is $$\left\{ \begin{aligned} &2(1-2p)^{q-q_{i}}p^{q_{i}},\quad 0<q_{i}\leq q, \\ &0,\quad\quad\quad\quad\quad\quad\quad\quad q_{i}=0. \end{aligned}\right. $$ To see why this holds, note that the $i$-th coin is considered to be heavier/lighter iff the $i$-th row of $O$ contains only characters from $\left\{+,\pm \right\}$ or only characters from $\left\{-,\pm \right\}$. That is to say, it must receive a $+$ (respectively $-$) sign in all $q_{i}$ rounds in which it is on the balance, and the balance has to yield a draw in all $q-q_{i}$ rounds in which it is off the balance. If $q_{i}=0$ then there is no way of determining whether it is heavier or lighter, even if all the remaining $(n-1)$ coins weigh equally. Having this observation, we are ready to analyze and optimize a random strategy. Imagine a less sophisticated human player who hates delicate arrangements. Instead, the player can only conduct a random strategy in which, at each round, each coin is put onto the left side of the balance with probability $\frac{r}{2}$, onto the right side with probability $\frac{r}{2}$, and off the balance with probability $(1-r)$. We now conduct an analysis of this player. Let $\textbf{q}=\left\{q_{i} \right\}_{i=1}^{n}$ be the sufficient statistic of $S$, whose possible assignments form a subset of $\left\{0,1,2,\cdots,q\right\}^{n}$ chosen by the player. \textbf{Theorem:} Using the previous notations, the balance can always win the $(n,q,\text{unknown})$-balance game in which the strategy of the human player has to be predetermined if \begin{equation} \label{equation:2} \min_{\textbf{q}}\left\{\max_{p}\left\{\sum_{i=1}^{n}2(1-2p)^{q-q_{i}}p^{q_{i}} \right\} \right\} >1. \end{equation} \textbf{Proof:} Given $\textbf{q}$ and $p$, the probability that the $i$-th coin is possibly heavier/lighter is $2(1-2p)^{q-q_{i}}p^{q_{i}}$. Then the expectation of the number of possibly heavier/lighter coins is just $$f(\textbf{q},p)=\sum_{i=1}^{n}2(1-2p)^{q-q_{i}}p^{q_{i}}.$$ If for any configuration of $\textbf{q}$ there exists a $p$ such that $f(\textbf{q},p)>1$, then it is always possible to choose a mask code under which more than one coin is dubious and the coins are indistinguishable for the human player. Hence the balance wins the game.\qed Once the coding delicacy of $S$ breaks down, we can do no more than state this sufficient condition for the balance to win. However, it is possible for the human player to modify $r$, and hence $\textbf{q}$, to increase the threshold $n$ of \eqref{equation:2} for a given $q$. After all, \eqref{equation:2} is quite intimidating, so we can only try to derive some asymptotic behavior. Begin by assuming $r\approx\frac{1}{2}$, so $q_{i}\approx \frac{q}{2}$, i.e., each coin is put onto the balance for approximately half of the rounds; then $$f(\textbf{q},p)=2n(p-2p^{2})^{\frac{q}{2}}\leq 2n\, 8^{-\frac{q}{2}},$$ with equality at the balance's optimal choice $p=\frac{1}{4}$. Straightforward algebra yields that if $$2n>(2\sqrt{2})^{q},$$ then the balance has a must-win strategy. One should note that this bound is miserably worse than the one derived in the section before ($2n>3^{q}$); the short numerical scan below shows how the growth rate of the threshold varies with $r$.
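A quick numerical sketch: for a given $r$ one can check that $p=\frac{r}{2}$ maximizes $(1-2p)^{1-r}p^{r}$, so the balance surely wins once $2n > c(r)^{q}$ with $c(r)$ as below (anticipating the optimization of $g(r)$ carried out next):
\begin{verbatim}
# Growth rate c(r) of the balance's must-win threshold 2n > c(r)^q under
# the player's random strategy; p = r/2 maximizes (1-2p)^(1-r) * p^r.
def c(r):
    p = r / 2
    return 1 / ((1 - 2 * p) ** (1 - r) * p ** r)

for r in (0.5, 2 / 3, 0.9, 0.99):
    print(f"r = {r:.2f}: c(r) = {c(r):.3f}")
# 2.828 (= 2*sqrt(2)) at r = 1/2, the peak 3.000 at r = 2/3, -> 2 as r -> 1
\end{verbatim}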
We can thus conclude that if the less intelligent human player chooses a bad random strategy (specifically, a bad $r$), the game becomes much easier for the balance in multiple respects: \begin{enumerate} \item The balance can possibly win even if $2n<3^{q}$ (where a clever human player always wins), since a random player might unfortunately include duplicated or partially complementary rows in $S$. \item The must-win threshold for the balance may decline exponentially, hence the region of $n$ where the balance has a must-win strategy sharply increases. \end{enumerate} We now optimize \eqref{equation:2} w.r.t. $r$ to cope with the second privileged aspect of the balance, so that the random player behaves less lamentably. Plugging $q_{i}=rq$ into \eqref{equation:2}, we obtain $$f(q)\leq 2n\left[(1-r)\left(\frac{r}{2(1-r)} \right)^{r} \right]^{q}.$$ Hence the sufficient condition for the balance to win reduces to \begin{equation} \label{equation:3} 2n > \left[(1-r)\left(\frac{r}{2(1-r)} \right)^{r} \right]^{-q}=g(r)^{q}, \end{equation} where $g(r)=(1-r)^{-1}\left(\frac{r}{2(1-r)} \right)^{-r}$. By plotting $g(r)$ explicitly in Figure~\ref{figure:1}, we can conclude that the bound $2n > 3^{q}$ appears only at the peak of $g$, since $\max_{r}\left\{g(r) \right\}=3$; elsewhere the bound can be significantly lower, hence the balance wins much more easily than one might imagine (once $r$ moves a non-trivial distance away from $\frac{2}{3}$, the must-win threshold for the balance declines exponentially). Another particularly interesting region of $r$, besides $\frac{1}{2}$ and $\arg\max_{r}\left\{g(r) \right\}$, is $r\rightarrow 1$, in which case $g(r)\rightarrow 2$. Then $p\rightarrow \frac{1}{2}$ (since almost no coin is kept off the balance, the probability that the balance selects $\hat{\text{D}}$ into the mask code approaches zero), hence $2n > 2^{q}$ ensures the victory of the balance. In conclusion, it is unwise to eagerly put all coins onto the balance at all times; the best ratio between $q_{i}$ and $q$ is approximately $\frac{2}{3}$, since $\frac{\text{d}g}{\text{d}r}$ vanishes at $r=\frac{2}{3}$. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth] {./g.jpeg} \caption{$g(r)$.} \label{figure:1} \end{figure} \textbf{Remark:} The value $r=\frac{2}{3}$ sheds light on an \emph{optimal random strategy}: put each coin on either side of the balance or off the balance with probability $\frac{1}{3}$ each, at all rounds. Only in this way can the must-win threshold be raised to the level of the theoretically optimal one; otherwise the balance can easily invade the region $2n<3^{q}$ (given $q$) and mark extra values of $n$ as ``must-win area for the balance''. What might be an interesting observation is that the winning bound for the balance yielded by a totally random player is the same as the strict one derived in the section before, but the meaning of this bound is different: for a rational player, $2n<3^{q}$ means a victory, while for a random player it only means that the balance cannot necessarily win; a small simulation below illustrates the underlying concentration.
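A small simulation of the on-balance frequency under the $r=\frac{2}{3}$ strategy (a sketch; the seed is arbitrary):
\begin{verbatim}
# Under r = 2/3, each coin's on-balance count q_i is Binomial(q, 2/3);
# the ratio q_i / q concentrates around 2/3 as q grows.
import random

random.seed(0)
for q in (30, 300, 3000, 30000):
    q_i = sum(random.random() < 2 / 3 for _ in range(q))
    print(f"q = {q:6d}: q_i/q = {q_i / q:.4f}")
\end{verbatim}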
Although setting $r=\frac{2}{3}$ cannot ensure that $q_{i}$ equals $\frac{2}{3}q$ exactly, as $q\rightarrow\infty$ the ratio $\frac{q_{i}}{q}$ concentrates around $\frac{2}{3}$ with high probability. To see this, for $s=1,2,\cdots,q$, let $Z_{s}$ be the random variable that takes value one with probability $\frac{2}{3}$ and value zero with probability $\frac{1}{3}$, and let $Z'_{s}=Z_{s}-\frac{2}{3}$. Now let $$Z=\sum_{s=1}^{q}Z_{s},\qquad Z'=\sum_{s=1}^{q}Z'_{s};$$ according to the Chernoff bounding theory, $$\text{Pr}(|Z'|>\epsilon)<2\text{e}^{-\frac{2\epsilon^{2}}{q}}.$$ Finally, let $\epsilon=\delta\cdot q$, where $\delta$ is an arbitrarily small number: $$\text{Pr}\left(\left|\frac{Z}{q}-\frac{2}{3}\right|>\delta\right)<2\text{e}^{-2\delta^{2}q}.$$ Letting $q$ approach infinity (with $n$ naturally approaching infinity faster accordingly) yields the claimed stability. \section{The Dishonest Balance Game} \label{section:4} We now proceed to the dishonest balance game, which further reveals the power of probability-based analysis. To make the start painless, we begin with the $(n,q,\text{heavy})$-balance game in which the balance is granted $k$ lies; this makes up the $(n,q,k,\text{heavy})$-balance game. \subsection{The $(n,q,k,\text{heavy})$-balance game} \textbf{Theorem: } If $$n\sum_{j=0}^{k}\binom{q}{j}> 3^{q},$$ then the balance has a must-win strategy. \textbf{Proof: } Place a probability measure on $\Omega$ as before; each $M\in\Omega$ is assigned probability $3^{-q}$. Let $D_{i}$ be the event \emph{the $i$-th coin is possibly the heavier counterfeit coin}. Since the balance can tell at most $k$ lies, the $i$-th coin is possibly heavier iff the $i$-th row of $O$ contains no more than $k$ $\times$s. Now the probability of $D_{i}$ satisfies $$\text{Pr}(D_{i})\geq\sum_{j=0}^{k}\binom{q}{j}3^{-q},$$ where $j$ counts the number of $\times$s in $O_{i}$ (each choice of $j$ mismatching positions corresponds to at least one mask). Let $Y_{i}$ be the indicator random variable of $D_{i}$ and $Y=\sum_{i=1}^{n}Y_{i}$. Now the random variable $Y$ that counts the number of coins possibly heavier has expectation $$\mathbb{E}[Y]=\sum_{i=1}^{n}\text{Pr}(D_{i})\geq n\sum_{j=0}^{k}\binom{q}{j}3^{-q}.$$ If $\mathbb{E}[Y] >1$ then there exists a mask $M''\in\Omega$ such that $Y(M'')>1$, hence the balance is sure to win. \qed \textbf{Corollary: } If the player has to put all $n$ coins onto the balance during the $(n,q,k,\text{heavy})$-balance game, and $$n\sum_{j=0}^{k}\binom{q}{j}> 2^{q},$$ then the balance has a must-win strategy. \begin{table} \Large \caption{Transcription table for a special case of the $(n,q,k,\text{heavy})$-balance game} \begin{center} \begin{tabular}{c|c|c} \toprule \ & $\hat{\text{L}}$ & $\hat{\text{R}}$ \\ \midrule \text{L} & $+$ & $\times$ \\ \text{R} & $\times$ & $+$ \\ \bottomrule \end{tabular} \end{center} \label{table:3} \end{table} \textbf{Proof: } In this case the balance erases $\hat{\text{D}}$ from the alphabet and adopts the transcription rules of Table~\ref{table:3}. Place a probability measure on $\Omega=\left\{\hat{\text{L}},\hat{\text{R}}\right\}^{q}$ with each character selected independently and uniformly. Let $D_{i}$ be defined as in the proof of the previous theorem; then $$\text{Pr}(D_{i})=\sum_{j=0}^{k}\binom{q}{j}2^{-q}.$$ The rest is the same.\qed
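A quick numerical reading of this threshold (a sketch; \texttt{math.comb} computes $\binom{q}{j}$):
\begin{verbatim}
# Largest n that the theorem does not immediately hand to the balance in
# the (n, q, k, heavy)-dishonest game: the balance surely wins once
# n * sum_{j<=k} C(q, j) exceeds 3^q.
from math import comb

def threshold(q, k):
    return 3 ** q // sum(comb(q, j) for j in range(k + 1))

for q, k in [(3, 0), (5, 1), (8, 2)]:
    print(f"q={q}, k={k}: the balance surely wins for n > {threshold(q, k)}")
\end{verbatim}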
\subsection{A note on the liar game} The motivation and the solution of the dishonest balance game are similar to those of a \emph{liar game}. Let $N=\left\{1,2,\cdots,n\right\}$. In a liar game, Bob selects one specific $a\in N$, and Alice tries to figure out $a$ by asking a series of questions ``is $a$ in $Q$?'', where $Q$ is a subset of $N$. Alice can ask altogether $q$ questions, in which Bob can tell $k$ lies. One can easily prove the equivalence between the $(n,q,k,\text{heavy})$-balance game (with all coins on the balance) and the $(n,q,k)$-liar game; thus the sufficient condition for Bob to win is $$n\sum_{i=0}^{k}\binom{q}{i}> 2^{q}.$$ The derivation of this bound is identical to the paradigm we adopted in the previous section. Alice gives an $n\times q$ matrix $S$, then Bob gives a mask $M$, and the transcription rules read as Table~\ref{table:4}: \begin{table} \Large \caption{Transcription table for the liar game} \begin{center} \begin{tabular}{c|c|c} \toprule \ & $A$ & $B$ \\ \midrule 0 & $0$ & $1$ \\ 1 & $1$ & $0$ \\ \bottomrule \end{tabular} \end{center} \label{table:4} \end{table} Semantically, $S_{i,j}=0$ iff Alice selects $i$ into $Q$ at the $j$-th round, and $M_{j}=A$ means that Bob gives a positive answer. The entry $O_{i,j}$ of the transcribed matrix is zero iff $i$ is considered possibly equal to $a$ at the $j$-th round. Now, since Bob can lie $k$ times, the event that $i$ is possibly $a$ is true iff the $i$-th row of the transcribed matrix contains at most $k$ ones. Define the probability space by having Bob choose each position of the mask as $A$ or $B$ independently and uniformly. Then the probability of the event that $i$ is possibly $a$ is $$\sum_{i=0}^{k}\binom{q}{i}2^{-q}.$$ Hence the expectation of the number of elements possibly equal to $a$ is $$n\sum_{i=0}^{k}\binom{q}{i}2^{-q}.$$ If it is larger than unity, then there exists a mask such that more than one number is possibly $a$, and Alice cannot distinguish them. There is another game, the \emph{liar chip game}, which is isomorphic to the liar game and hence to the dishonest balance game \cite{alon2004probabilistic}. \subsection{An observation connecting the dishonest balance game and Shannon's theorem} Though implicitly, the liar game/liar chip game/dishonest balance game is essentially related to Shannon's theorem: the common topic behind both is communication under the disturbance of noise. Recall that Shannon's theorem roughly states that in a noisy channel (with $n$ signals, code length $q$, and bitwise error probability $r$), it is almost always possible to adopt a coding scheme such that the Hamming balls of radius $rq$ in the code space do not intersect; reliable communication is hence ensured and the rate of transmission approaches $1-H(r)$. In the dishonest balance game where all $n$ coins are put onto the balance at all rounds, we can adopt a similar reasoning and arrive at the same bound without mentioning a probability measure over the space of mask codes. In $\left\{0,1\right\}^{q}$, a Hamming ball with radius $k$ contains at most $$\sum_{i=0}^{k}\binom{q}{i}$$ codes. The size of the code space is exactly $2^{q}$, and there are $n$ signals to be encoded. Now if $$n\sum_{i=0}^{k}\binom{q}{i}>2^{q},$$ then there exist two Hamming balls whose intersection is not empty. Pick one code $M$ from this intersection and denote the centers of these two balls by $S_{1}$ and $S_{2}$; the Hamming distances between $M$ and $S_{1}$ and between $M$ and $S_{2}$ are both no larger than $k$. Following the construction before, if $S_{1}$ and $S_{2}$ are two rows of the player's $S$ and the balance selects the mask code $M$, then the human player cannot distinguish between these two coins and the balance wins. In fact, one can show that $$\sum_{i=0}^{pq}\binom{q}{i}= 2^{q(H(p)+o(1))},$$ thus the bound of the dishonest balance game is almost the same as the bound given by Shannon's theorem with error probability $\frac{k}{q}$. \subsection{The $(n,q,k,\text{unknown})$-dishonest balance game} Finally, we study the setting where the balance can cheat $k$ times out of altogether $q$ rounds of weighing, and the player has no extra prior information.
We denote this game as the $(n,q,k,\text{unknown})$-balance game. When cheating enters the stage, we can hardly rely on a deterministic coding scheme, so we resort to the probabilistic method for a bound on $n$ beyond which the balance has a must-win strategy. Analogously, we have the following theorem: \textbf{Theorem:} Let $\textbf{q}=(q_{1},\cdots,q_{n})$ be defined as in the previous section, and define $$ h(q_{m},p)=2(1-2p)^{q-q_{m}}p^{q_{m}}\sum_{i=0}^{q_{m}-(k+1)}\binom{q_{m}}{i}.$$ Now if $$\min_{\textbf{q}}\left\{\max_{p}\left\{\sum_{m=1}^{n}h(q_{m},p)\right\}\right\} > 1,$$ then the balance has a must-win strategy given the policy of the human player. \textbf{Proof:} As defined, $q_{m}$ is the number of times the $m$-th coin is put onto the balance. Define a probability measure on the space of the balance's mask codes, where at each round the balance declares either side to be heavier with probability $p$ each. Let $D_{q_{m},p}$ be the event that \emph{the $m$-th coin is considered possibly heavier/lighter after the game}. If $q_{m}\leq k$ then $D_{q_{m},p}$ naturally fails. The $m$-th coin is deduced to be heavier iff the number of characters from $\left\{\times,- \right\}$ in $O_{m}$ (the $m$-th row of $O$) is less than or equal to $k$, and the number of $+$ characters in that row is strictly larger than $k$; for the $m$-th coin to be considered lighter, the situation is symmetric. Hence $D_{q_{m},p}$ is true if the mask takes the character $\hat{\text{D}}$ at all positions where $S_{m}$ is O and selects no more than $q_{m}-(k+1)$ of the remaining positions to transcribe the corresponding components of $O_{m}$ into signs against the dominant decision (the converse is not strictly true, since the positions where the $m$-th coin is off the balance can also be lied about). So $$\text{Pr}(D_{q_{m},p}) \geq h(q_{m},p).$$ Finally, the random variable that counts the number of possibly heavier/lighter coins has expectation no less than $$\sum_{m=1}^{n}h(q_{m},p).$$ If for every $\textbf{q}$ there exists a $p$ such that $\sum_{m=1}^{n}h(q_{m},p) > 1$, then in all cases the balance is sure to win. \qed Finally, let the human player take a random strategy; we are now interested in the asymptotic behavior of the sufficient condition given by the theorem above. Again, let $$q_{m}\approx r\cdot q,\qquad k =r_{2}\cdot q.$$ Of course we should have $r > r_{2}$. Approximating the term $\sum_{i=0}^{q_{m}-(k+1)}\binom{q_{m}}{i}$ by $2^{rqH(\frac{r-r_{2}}{r})}$ and maximizing $h(r,r_{2},p)$ w.r.t. $p$ yields $p=\frac{r}{2}$, as in the previous analysis of the honest $(n,q,\text{unknown})$-balance game. Finally, the only thing a random player can do to shrink the must-win region of $n$ (given $r_{2}$ and $q$) for the balance is to choose $r$ so as to maximize \begin{equation} \label{equation:4} \left[(1-r)\left(\frac{r}{2(1-r)} \right)^{r} \right]^{-q}\cdot 2^{-rqH(\frac{r-r_{2}}{r})}=v(r)^{q}. \end{equation} One should note that \eqref{equation:4} is a graceful generalization of \eqref{equation:3} (taking $r_{2}=0$, so $k=0$, reduces \eqref{equation:4} to \eqref{equation:3}). However, it is hard to find an analytic solution for the optimum of \eqref{equation:4}; instead we plot the value of $v(r)$ in \eqref{equation:4} with $r_{2}=0.2,0.1,0.005$ in Figure~\ref{figure:2} as an intuitive illustration. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth] {./v.jpeg} \caption{$g(r)$ vs. $v(r)$, $r_{2}=0.2,0.1,0.005$.} \label{figure:2} \end{figure}
One can conclude from Figure~\ref{figure:2} that, after adding the perturbation caused by dishonesty, the optimal $r$ is no longer uniformly $\frac{2}{3}$ (e.g., when $r_{2}=0.2$); the figure also shows that the decline of $r_{2}$ reduces $v(r)$ to $g(r)$. Assume that the random human player always plays optimally w.r.t. $r_{2}$, so that $$r(r_{2})=\arg\max_{r}\left\{v(r) \right\}.$$ We plot $r(r_{2})$ and $v(r(r_{2}))$ in Figure~\ref{figure:3}. \begin{figure}[htbp] \centering \subfigure[$r(r_{2})$.]{ \begin{minipage}[c]{0.4\textwidth} \centering \includegraphics[width=6.8cm]{./r_r2.jpeg} \end{minipage} \label{figure:3a} } \subfigure[$v(r(r_{2}))$.]{ \begin{minipage}[c]{0.4\textwidth} \centering \includegraphics[width=6.8cm]{./v_r_r2.jpeg} \end{minipage} \label{figure:3b} } \caption{The variation of the random player's optimal strategy w.r.t. $r_{2}$.} \label{figure:3} \end{figure} Generally speaking, with the growth of $r_{2}$ the human player should adopt a smaller $r$; that is to say, the more deceptive the balance is, the fewer coins the human should put onto the balance. The rate of decrease is visualized in Figure~\ref{figure:3a}. Meanwhile, with the dishonesty mechanism it is easier for the balance to win the balance game: the threshold for the balance to have a must-win strategy declines from approximately $\frac{3^{q}}{2}$ to $\frac{a^{q}}{2}$, where $a=\max_{r}\left\{v(r) \right\}$ is a function of $r_{2}$ and is uniformly smaller than three according to Figure~\ref{figure:3b}. \section{Conclusion and Discussion} \label{section:5} In this paper we study the balance game. By adopting the extra electromagnetic assumption, the winning conditions for the human player and the balance are derived under various settings. An analysis of the random strategy shows that if the human player plays randomly, he/she can still hinder the balance by choosing good parameters. The dishonest balance game, in which the balance can cheat the player, is then studied; the behavior of a randomized human player in this game is shown to be related to the bound in noisy-channel communication. Among the infinitely many extra assumptions/generalizations that can be imposed on the naive balance-coin setting, it is hard to tell which combinations of assumptions give rise to fruitful results without actually delving into them and applying appropriate mathematical tools. \subsection{Generalization case: more counterfeit coins?} For example, let us deviate from the previous discussion and consider the generalization to two counterfeit coins, one heavier and one lighter, whose total weight equals that of two normal coins. In our setting we can have two counterfeit coins charged $+Q$ and $-Q$, respectively. Is it still safe to adopt the previous framework? One easily sees that the transcription table has to become Table~\ref{table:5}.
\begin{table}[htbp] \Large \caption{Transcription table for the balance game with two counterfeit coins charged $+Q$ and $-Q$.} \begin{center} \begin{tabular}{c|c|c|c} \toprule \ & $\hat{\text{L}}$ & $\hat{\text{R}}$ & $\hat{\text{D}}$\\ \midrule L & $+$ & $-$ & $\pm$ \\ R & $-$ & $+$ & $\pm$ \\ O & $\pm$ & $\pm$ & $\pm$ \\ \bottomrule \end{tabular} \end{center} \label{table:5} \end{table} For the $i$-th coin, which has been put onto the balance $q_{i}$ times, the probability that it is considered heavier is: $$ \begin{aligned} &\sum_{j=1}^{q_{i}}\binom{q_{i}}{j}p^{j}(1-2p)^{q_{i}-j}\\ &=(1-2p)^{q_{i}}\cdot\sum_{j=1}^{q_{i}}\binom{q_{i}}{j}\left(\frac{p}{1-2p} \right)^{j}\\ &=(1-2p)^{q_{i}}\left[\left(\frac{1-p}{1-2p}\right)^{q_{i}}-1 \right]\\ &=(1-p)^{q_{i}}-(1-2p)^{q_{i}}\\ &\approx (1-p)^{rq}-(1-2p)^{rq}=\phi(p,r). \end{aligned} $$ The problem is that the sufficient condition for the balance to win can no longer be obtained from an expectation alone. The orthodox probabilistic treatment at this stage is to let $X_{i}^{+}$ be the indicator random variable of the event that the $i$-th coin is considered heavier and approximately compute: \begin{equation} \label{equation:5} \begin{aligned} \text{Pr}(X^{+}&=\sum_{i=1}^{n}X_{i}^{+}=0)\\ &\approx \prod_{i=1}^{n}\text{Pr}(X_{i}^{+}=0)\\ &= (1-\phi(p,r))^{n}. \end{aligned} \end{equation} \begin{equation} \label{equation:6} \begin{aligned} \text{Pr}(X^{+}&=\sum_{i=1}^{n}X_{i}^{+}=1)\\ &\approx \sum_{i=1}^{n}p^{q_{i}}(1-2p)^{q-q_{i}}\\ &\times\prod_{j=1,j\neq i}^{n}\left(1-p^{q_{j}}(1-2p)^{q-q_{j}} \right)\\ &=n\phi(p,r)(1-\phi(p,r))^{n-1}. \end{aligned} \end{equation} Now if the configuration $(n,q)$ implies that $\text{Pr}(X^{+}<2)<\frac{1}{2}$, then we can symmetrically assume that $\text{Pr}(X^{-}<2)<\frac{1}{2}$ as well, hence: $$\text{Pr}(X^{+}\geq 2 \text{ and }X^{-}\geq 2)>0,$$ and the balance has a must-win strategy. Again adopting a randomized strategy to simplify the discussion, we have: $$ \text{Pr}(X^{+}<2)=(1-\phi(p,r))^{n-1}\left[1+(n-1)\phi(p,r) \right]. $$ For instance, let us take $r=p=\frac{1}{2}$; then: $$\phi=2^{-\frac{q}{2}}.$$ When $q$ and $n$ are large, we have: $$\text{Pr}(X^{+}<2)\approx 1-(n-1)^{2}\phi^{2}=1-(n-1)^{2}2^{-q}.$$ Thus $\text{Pr}(X^{+}<2)<\frac{1}{2}$ is tantamount to: $$(n-1)^{2}>2^{q-1},$$ a bound of similar form to that yielded by the entropy argument, $n(n-1)>3^{q}$. One should note that this bound is significantly weaker than the entropy-based bound, since the base of the exponential rate drops from $3$ to $2$, a consequence of the particular choice of $p$ and $r$ above. However, optimizing $\text{Pr}(X^{+}<2)$ and $\phi$ w.r.t. $p$ and $r$ is more complex, and an analytic solution is hard to find. Generally, the balance tries to minimize $\text{Pr}(X^{+}<2)$ w.r.t. $p$ while the human player tries to maximize $\text{Pr}(X^{+}<2)$ w.r.t. $r$; during this process the approximations made in \eqref{equation:5} and \eqref{equation:6} might fail, and a concrete solution is not available. In more general cases where the transcription table is different, the same difficulty appears. Consider two counterfeit coins, both charged $+Q$. One can show that if $p=\frac{1}{3}$ and $r=\frac{2}{3}$, then for $$n>2\left(\frac{3}{2} \right)^{\frac{q}{3}}$$ the balance has a must-win strategy.
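As a numerical sanity check of the approximations \eqref{equation:5} and \eqref{equation:6}, the following Python sketch (ours; the parameter values are purely illustrative) evaluates $\phi(p,r)$ and $\text{Pr}(X^{+}<2)$:
\begin{verbatim}
def phi(p, r, q):
    # phi(p, r) = (1-p)^{rq} - (1-2p)^{rq}: probability that a
    # given coin is considered heavier after the game.
    return (1 - p)**(r*q) - (1 - 2*p)**(r*q)

def pr_x_plus_lt2(n, p, r, q):
    # Pr(X+ < 2) under the independence approximation above.
    f = phi(p, r, q)
    return (1 - f)**(n - 1) * (1 + (n - 1)*f)

# With p = r = 1/2 we get phi = 2^{-q/2}; the must-win condition
# Pr(X+ < 2) < 1/2 then requires roughly (n-1)^2 > 2^{q-1}.
q = 40
for n in (2**19, 2**21, 2**23):
    print(n, pr_x_plus_lt2(n, 0.5, 0.5, q))
\end{verbatim}
Evaluating the exact product form rather than its second-order Taylor expansion also gives a feel for how crude the approximation $\text{Pr}(X^{+}<2)\approx 1-(n-1)^{2}\phi^{2}$ is near the threshold.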
\subsection{Generalization case: multi-armed electromagnetic balance} Figure~\ref{figure:0} gives an example of a two-armed electromagnetic balance. What about a multi-armed electromagnetic balance? In traditional settings it is always assumed that a multi-armed balance is capable of indicating which one of all its arms is heavier/lighter. Does such a device exist in reality? If the number of counterfeit charged coins is one, then it is straightforward to design a $t$-armed balance. Let $t'$ be the smallest odd number that is no smaller than $t$, divide a circle into $t'$ equal arcs, and put $t$ boxes at the midpoints of $t$ different arcs; this meets the need. The trick here is to make sure that the test charge does not fall onto the line connecting two boxes. What if there are two counterfeit charged coins, with $+Q$ and $-Q$? One good approach is to independently and uniformly sample $t$ positions for the boxes along the circle centered at the test charge. After all, any seemingly \emph{insignificant} assumption or generalization might result in a sharp difference in methodology and reasoning. This line of applying the probabilistic method to the balance game is merely one piece of evidence for, or a corollary of, this idea. \bibliographystyle{ieeetr}
\section{Introduction} \begin{figure*}[t!] \centering \includegraphics[width=0.75\textwidth]{introduction_modified.pdf} \caption{(a) Each angle of illumination, here labelled as angular axis, corresponds to a time step in an analogous temporal axis. (b) The raw intensity diffraction pattern $\mathbf{g}_n,\: n\!=\!1,\ldots,N\!\!=\!\!42$\ \ at the $n$-th step of the angular sequence is followed by gradient descent and moving average operations to construct a shorter Approximant sequence $\mathbf{\tilde{f}}_m{}^{[1]},\: m\!=\!1,\ldots,M\!\!=\!\!12$. The Approximants $\mathbf{\tilde{f}}_m{}^{[1]}$ are encoded to $\xi_m$ and fed to the recurrent dynamical operation, whose output sequence $\mathbf{h}_m, m\!=\!1,\ldots,12$\ \ the angular attention scheme merges into a single representation $a$, which is finally decoded to produce the 3D reconstruction $\mathbf{\hat{f}}$. Training adapts the weights of the learned operators in this architecture to minimize the training loss function $\mathcal{E}(\mathbf{f},\hat{\mathbf{f}})$ between $\mathbf{\hat{f}}$ and the ground truth object $\mathbf{f}$.} \label{fig:introduction} \end{figure*} Optical tomography reconstructs the three-dimensional (3D) internal refractive index profile of a sample by illuminating it at several angles and processing the respective raw intensity images. The reconstruction scheme depends on the scattering model that is appropriate for a given situation. If the rays through the sample can be well approximated as straight lines, then accumulation of absorption and phase delay along the rays is an adequate forward model, {\it i.e.} the projection or Radon transform approximation applies. This is often the case with hard x-rays through most materials including biological tissue; for that reason, Radon transform inversion has been widely studied\ \cite{radon1986determination,radon1917determination,bracewell1967inversion,feldkamp1984practical,dreike1976convolution,wang1993general,kudo1991helical,grangeat1991mathematical,katsevich2002analysis,choi2007tomographic}. The next level of complexity arises when diffraction and multiple scattering must be taken into account in the forward model; then, the Born or Rytov expansions and the Lippmann-Schwinger integral equation \cite{ishimaru2017electromagnetic,tatarski2016wave,wolf1969three,devaney1981inverse,pham2020three} are more appropriate. These follow from the scalar Helmholtz equation using different forms of expansion for the scattered field \cite{marks2006family}. In all these approaches, weak scattering is obtained from the first order in the series expansion. Holographic approaches to volumetric reconstruction generally rely on this first expansion term\ \cite{milgram2002computational,tian2010quantitative,hahn2008wide,park2009recent,nehmetallah2012applications,williams2013digital,brady2009compressive,choi2010compressive,rivenson2018phase,wu2019bright,rivenson2019deep,zhang2018twin}. Often, solving the Lippmann-Schwinger equation is the most robust approach to account for multiple scattering, but even then the solution is iterative and requires an excessive amount of computation, especially for complex 3D geometries. The inversion of these forward models to obtain the refractive index in 3D is referred to as inverse scattering, also a well studied topic \cite{kamilov2016recursive,kamilov2016optical,giorgi2013application,chew1990reconstruction,sun2018efficient,lu1985multidimensional,lu1986jkm,tsihrintzis2000higher}.
An alternative to the integral methods is the beam propagation method (BPM), which sections the sample along the propagation distance $z$ into slices, each slice scattering according to the thin transparency model, and propagates the field from one slice to the next through the object\ \cite{feit1980computation}. Despite some compromise in accuracy, BPM offers a comparatively light computational load and has been used as the forward model for 3D reconstructions\ \cite{pham2020three}. The analogy of the BPM computational structure with a neural network was exploited, in conjunction with gradient descent optimization, to obtain the 3D refractive index as the ``weights'' of the analogous neural network in the learning tomography approach \cite{kamilov2015learning,shoreh2017optical,lim2018learning}. BPM has also been used with more traditional sparsity-based inverse methods\ \cite{kamilov2016optical,chowdhury2019high}. Later, a machine learning approach with a convolutional neural network (CNN) replacing the iterative gradient descent algorithm exhibited even better robustness to strong scattering for layered objects, which match well with the BPM assumptions \cite{goy2019high}. Despite the great progress reported by these prior works, the problem of reconstruction through multiple scattering remains difficult due to the extreme ill-posedness and uncertainty in the forward operator; residual distortion and artifacts are not uncommon in experimental reconstructions. Inverse scattering, like inverse problems in general, may be approached in a number of different ways to regularize the ill-posedness and thus provide some immunity to noise \cite{bertero1998introduction,candes2006robust}. Recently, thanks to a ground-breaking observation from 2010 that sparsity can be learnt by a deep neural network \cite{gregor2010learning}, the idea of using machine learning to approximate solutions to inverse problems also caught on \cite{barbastathis2019use}. In the context of tomography, in particular, deep neural networks have been used to invert the Radon transform \cite{jin2017deep} and the recursive Born model \cite{kamilov2016recursive}, and were also the basis of some of the papers we cited earlier on holographic 3D reconstruction\ \cite{wu2019bright,rivenson2018phase,rivenson2019deep}, learning tomography\ \cite{kamilov2015learning,shoreh2017optical,lim2018learning}, and multi-layered strongly scattering objects\ \cite{goy2019high}. In prior work on tomography using machine learning, generally, the intensity projections are all fed as inputs to a computational architecture that includes a neural network, and the output is the 3D reconstruction of the refractive index. The role of the neural network is to learn the priors that apply to the particular class of objects being considered and the relationship of these priors to the forward operator (Born, BPM, etc.) so as to produce a reasonable estimate of the inverse. Here we propose a rather distinct approach to exploiting machine learning for 3D refractive index reconstruction under strong scattering conditions. Our motivation is that, as the angle of illumination is changed, the light goes through {\em the same scattering volume,} but the scattering events follow a different sequence. At the same time, the intensity diffraction pattern obtained from a new angle of illumination adds information to the tomographic problem, but that information is constrained by ({\it i.e.}, is not orthogonal to) the previously obtained patterns.
We interpret this as similar to a dynamical system, where as time evolves and new inputs arrive, the output is constrained by the history of earlier inputs. (The convolution integral is the simplest and best known expression of this relationship between the output of a system and the history of the system's input.) The analogy between strong scattering tomography and a dynamical system suggests the recurrent neural network (RNN) architecture as a strong candidate to process intensity diffraction patterns in sequence, as they are obtained one after the other; and process them recurrently so that each intensity diffraction pattern from a new angle improves over the reconstructions obtained from the previous angles. Thus, we treat multiple diffraction patterns under different illumination angles as a temporal sequence, as shown in Figure~\ref{fig:introduction}. The angle index $\theta$ replaces what in a dynamical system would have been the time $t$. This idea is intuitively appealing; it also leads to considerable improvement in the reconstructions, removing certain artifacts that were visible in \cite{goy2019high}, as we will show in Section~\ref{sec:results}. The way we propose to use RNNs in this problem is quite distinct from the recurrent architecture proposed first in \cite{gregor2010learning} and subsequently implemented, replacing the recurrence by a cascade of distinct neural networks, in \cite{jin2017deep,inv:mardani2017a,inv:mardani2017b}, among others. In these prior works, the input to the recurrence can be thought of as clamped to the raw measurement, as in the proximal gradient \cite{inv:daubechies04} and related methods; whereas, in our case, the input to the recurrence is itself dynamic, with the raw intensity diffraction patterns from different angles forming the input sequence. Moreover, by utilizing a modified gated recurrent unit (more on this below) rather than a standard neural network, we do not need to break the recurrence up into a cascade. Typical applications of RNNs \cite{williams1989learning,hochreiter1997long} are in temporal sequence learning and identification. In imaging and computer vision, RNNs are applied in 2D and 3D: video frame prediction \cite{xingjian2015convolutional,wang2018eidetic,wang2017predrnn,wang2018predrnn++}, depth map prediction \cite{cs2018depthnet}, shape inpainting \cite{wang2017shape}; and stereo reconstruction \cite{liu2020novel,choy20163d} or segmentation \cite{le2017multi,stollenga2015parallel} from multi-view images, respectively. Stereo, in particular, bears certain similarities to our tomographic problem here, as sequential multiple views can be treated as a temporal sequence. To establish the surface shape, the RNNs in these prior works learn to enforce consistency in the raw 2D images from each view and resolve the redundancy between adjacent views in recursive fashion through the time sequence ({\it i.e.}, the sequence of view angles). Non-RNN learning approaches have also been used in stereo, e.g. Gaussian mixture models\ \cite{hou2019multi}. In this work, we replaced the standard long short-term memory (LSTM)\ \cite{hochreiter1997long} implementation of RNNs with a modified version of the newer gated recurrent unit (GRU) \cite{cho2014learning}. The GRU has the advantage of fewer parameters but generalizes comparably to the LSTM.
Our GRU employs a split convolutional scheme to explicitly account for the asymmetry between the lateral and axial axes of propagation, and an angular attention mechanism that learns how to reward specific angles in proportion to their contribution to reconstruction quality. For isotropic (in the ensemble sense) samples as we consider here, it turns out that the attention mechanism treats all angles equally, yet we found that its presence still improves the quality of the training algorithm. For more general sample classes with spatially anisotropic structure, angular attention may be expected to treat different angles of illumination with more disparity. Experimental details are delineated in Section~\ref{sec:experiment}. The computational elements are all described in Section~\ref{sec:comput_arch}, while training and testing procedures are illustrated in Section~\ref{sec:train_and_test}. The results of our experimental study are in Section~\ref{sec:results}, showing significant improvement over static neural network-based reconstructions of the same data both visually and in terms of several quantitative metrics. We also include results from an ablation study that indicates the relative significance of the new components we introduced to the quality of the reconstructions. \iffalse Performance of our RNN architecture is qualitatively and quantitatively compared with the baseline from earlier works in Section~\ref{sec:results}.\ref{subsec:comparison_baseline}. An ablation study to quantify contribution of each element to the overall performance is given in Section~\ref{sec:results}.\ref{subsec:ablation}. Section~\ref{sec:results}.\ref{subsec:number_of_patterns} shows how quality of reconstructions is incrementally enhanced as the number of patterns that enters the trained network for testing increases. Finally, in Section~\ref{sec:conclusion} we share some concluding thoughts and suggestions for future work. \fi \section{Experiment} \label{sec:experiment} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{optical_apparatus.pdf} \caption{Optical apparatus used for experimental data acquisition\ \protect{\cite{goy2019high}}. L$1-4$: lenses, F$1$: pinhole, A$1$: aperture, EM-CCD: electron-multiplying charge coupled device. $f_{\text{L}_3}:f_{\text{L}_4} = 2:1$. The object is rotated along both $x$ and $y$ axes. The defocus distance between the conjugate plane to the exit object surface and the EM-CCD is $\Delta z = 58.2\:\text{mm}$.} \label{fig:optical_apparatus} \end{figure} The experimental data are the same as in \cite{goy2019high}, whose experimental apparatus is summarized in Figure~\ref{fig:optical_apparatus}. We repeat the description here for the readers' convenience. The He-Ne laser (Thorlabs HNL210L, power: $20\:\text{mW}$, $\lambda = 632.8\:\text{nm}$) illuminated the sample after spatial filtering and beam expansion. The illumination beam was then de-magnified by the telescope ($f_{\text{L}_3} : f_{\text{L}_4} = 2:1$), and the EM-CCD (Rolera EM-C$2$, pixel pitch: $8\:\mu\text{m}$, acquisition window dimension: $1002\:\times\:1004$) captured the experimental intensity diffraction patterns. The integration time for each frame was $2\:\text{ms}$, and the EM gain was set to $\times 1$. The optical power of the laser was strong enough for the captured intensities to be comfortably outside the shot-noise limited regime. Each layer of the sample was made of fused silica slabs ($n=1.457$ at $632.8$ nm and at $20\:^\circ$C).
Slab thickness was $0.5\:\text{mm}$, and patterns were carefully etched to a depth of $575\pm 5$ nm on the top surface of each of the four slabs. To reduce the difference between refractive indices, gaps between adjacent layers were filled with oil ($n = 1.4005\pm 0.0002$ at $632.8$ nm and at $20^\circ$C), yielding a binary phase depth of $-0.323\pm 0.006\:\text{rad}$. The diffraction patterns used for training were prepared with simulation precisely matched to the apparatus of Figure~\ref{fig:optical_apparatus}. For testing, we used a set of diffraction patterns that was acquired experimentally. Objects used for both simulation and experiment are dense-layered, transparent, \textit{i.e.} of negligible amplitude modulation, and of binary refractive index. They were drawn from a database of IC layout segments\ \cite{goy2019high}. The feature depth of $575\pm 5\:\text{nm}$ and refractive index contrast $0.0565\pm0.0002$ at $632.8$ nm and at $20\:^\circ$C were such that weak-scattering assumptions are invalid and strong scattering necessarily has to be taken into account. The Fresnel number ranged from $0.7$ to $5.5$ for the given defocus amount $\Delta z=58.2\:\text{mm}$ and the range of object feature sizes. To implement the raw image acquisition scheme, the sample was rotated from $-10$ degrees to $10$ degrees with a $1$-degree increment along both the $x$ and $y$ axes, while the illumination beam and detector remained still. This resulted in $N=42$ angles and intensity diffraction patterns in total (see Section~\ref{sec:comput_arch}.\ref{subsec:conv_enc_dec}). Note that \cite{goy2019high} only utilized $22$ patterns out of the $42$, with a $2$-degree increment along both the $x$ and $y$ axes. The comparisons we show later are still fair because we retrained all the algorithms of \cite{goy2019high} for the $42$ angles and $1^\circ$ increment. \section{Computational architecture}\label{sec:comput_arch} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{architecture.pdf} \caption{Details on implementing the dynamical scheme of Figure~\protect{\ref{fig:introduction}}. (a) Overall network architecture; (b) tensorial dimensions of each layer; (c) down-residual block (DRB); (d) up-residual block (URB); and (e) residual block (RB). $K$ and $S$ indicate the sizes of kernel and stride, respectively, and the values shown apply only to the row and column axes. For the layer axis, $K=4$ and $S=1$ always. The disparities are to implement the split convolution scheme; please see Section~\protect{\ref{sec:comput_arch}}.\protect{\ref{subsec:sc_gru}} and Figure~\protect{\ref{fig:split_convolution}}.} \label{fig:architecture} \end{figure*} The proposed RNN architecture is shown in detail in Figure~\ref{fig:architecture}. The forward model and gradient descent Approximant (pre-processing) algorithm are described in Section~\ref{subsec:approximants}. The split-convolutional GRU, convolutional encoder and decoder, and the angular attention mechanism are described in Sections~\ref{subsec:sc_gru}, \ref{subsec:conv_enc_dec}, and \ref{subsec:angular_att}, respectively. The total number of parameters in this computational architecture is $\sim 21\text{M}$ (more on this topic in Section~\ref{sec:train_and_test}.\ref{subsec:training_rnn}). \subsection{Approximant computations}\label{subsec:approximants} The dense-layered, binary-phase object is illuminated at a sequence of angles, and the corresponding diffraction intensity patterns are captured by a detector.
At the $n$-th step of the sequence, the object is illuminated by a plane wave at angles $\left(\theta_{nx},\theta_{ny}\right)$ with respect to the propagation axis $z$ on the $xz$ and $yz$ planes, respectively. Beyond the object, the scattered field propagates in free space by distance $\Delta z$ to the digital camera (the numerical value is $\Delta z=58.2$mm, as we saw in Section~\ref{sec:experiment}). Let the forward model under the $n$-th illumination angle be denoted as $H_n$, $n=1,2,\ldots, N$; that is, the $n$-th intensity diffraction pattern at the detector plane produced by the phase object $\mathbf{f}$ is $\mathbf{g}_n\equiv H_n(\mathbf{f})$. In the simulations, the forward operators $H_n$ are obtained from the non-paraxial beam propagation method (BPM) \cite{feit1980computation,goy2019high,kamilov2016optical}. Let the $j$-th cross-section of the computational window perpendicular to the $z$ axis be $f^{[j]} = \exp\left(i\varphi^{[j]}\right),\: j=1,\ldots,J$, where $J$ is the number of slices that we divide the object into, each of axial extent $\delta z$. At the $n$-th illumination angle, the BPM is initialized as $\psi_n^{[0]}=\text{exp}\left[ik\left(x\sin\theta_{nx}+y\sin\theta_{ny}\right)\right]$, where $k$ is the wavenumber. The optical field at the $(j+1)$-th slice is \begin{equation}\label{eq:BPM-iteration} \begin{split} \psi_n^{[j+1]} = \mathcal{F}^{-1}&\bigg[\mathcal{F}\left[\psi_n^{[j]}\circ f_n^{[j]}\right](k_x,k_y)\\ &\cdot\exp\left(-i\left(k-\sqrt{k^2-k_x^2-k_y^2}\right)\delta z\right)\bigg], \end{split} \end{equation} where $\delta z$ is equal to the slab thickness, \textit{i.e.} $0.5\:\text{mm}$; ${\cal F}$ and ${\cal F}^{-1}$ are the Fourier and inverse Fourier transforms, respectively; and $\chi_1\circ\chi_2$ denotes the Hadamard (element-wise) product of the functions $\chi_1$, $\chi_2$. The Hadamard product is the numerical implementation of the thin transparency approximation, which is inherent in the BPM. To obtain the intensity at the detector, we define the $(J+1)$-th slice displaced by $\Delta z$ from the $J$-th slice (the latter is the exit surface of the object) to yield \begin{equation}\label{eq:forw} \mathbf{g}_n\equiv H_n(\mathbf{f})=\left|\psi_n^{[J+1]}\right|^2. \end{equation}
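For concreteness, the following NumPy sketch implements the BPM recursion (\ref{eq:BPM-iteration}) and the intensity measurement (\ref{eq:forw}). It is our own minimal illustration, not the authors' code; the function name and argument conventions are assumptions, and evanescent spatial frequencies are simply clamped to zero.
\begin{verbatim}
import numpy as np

def bpm_forward(phi, theta_x, theta_y, wavelength, dx, dz, dz_det):
    # phi: (J, Nx, Ny) per-slice phase; dx: pixel pitch; dz: slice
    # spacing; dz_det: defocus distance to the detector plane.
    J, Nx, Ny = phi.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(Nx) - Nx // 2) * dx
    y = (np.arange(Ny) - Ny // 2) * dx
    X, Y = np.meshgrid(x, y, indexing='ij')
    # tilted plane-wave illumination at the n-th angle
    psi = np.exp(1j * k * (X * np.sin(theta_x) + Y * np.sin(theta_y)))
    kx = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(Ny, d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0))
    def prop(u, d):
        # non-paraxial free-space transfer function of the recursion
        return np.fft.ifft2(np.fft.fft2(u) * np.exp(-1j * (k - kz) * d))
    for j in range(J):
        psi = psi * np.exp(1j * phi[j])               # Hadamard step
        psi = prop(psi, dz if j < J - 1 else dz_det)  # next slice/camera
    return np.abs(psi)**2
\end{verbatim}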
The purpose of the Approximant, in general, is to produce a crude estimate of the volumetric reconstruction using the forward operator alone. This has been well established as a helpful form of preprocessing for subsequent treatment by machine learning algorithms\ \cite{goy2018low,goy2019high}. Previous works constructed the Approximant with a single-pass gradient descent algorithm \cite{kamilov2016optical,goy2019high}. Here, due to the sequential nature of our reconstruction algorithm, as each intensity diffraction pattern from a new angle of illumination $n$ is received, we instead construct a sequence of Approximants, indexed by $n$, by minimizing the functionals \begin{equation}\label{eq:new_loss_function} \mathcal{L}_n(\mathbf{f}) = \frac{1}{2}||H_n(\mathbf{f})-\mathbf{g}_n||_2^2,\quad n=1,2,\ldots,N. \end{equation} The gradient descent update rule for this functional is \begin{multline}\label{eq:new_approximants} \mathbf{f}_n^{[l+1]} = \mathbf{f}_n^{[l]} -s\left(\nabla_\mathbf{f}\mathcal{L}_n\left(\mathbf{f}_n^{[l]}\right)\right)^\dagger = \\ = \mathbf{f}_n^{[l]} -s\left(H_n^T\left(\mathbf{f}_n^{[l]}\right)\nabla_\mathbf{f} H_n\left(\mathbf{f}_n^{[l]}\right)-\mathbf{g}_n^T\nabla_\mathbf{f}H_n\left(\mathbf{f}_n^{[l]}\right)\right)^\dagger, \end{multline} where $\mathbf{f}_n^{[0]}=\mathbf{0}$; $s$ is the descent step size, set to $0.05$ in the numerical calculations; and the superscript $\dagger$ denotes the transpose. The single-pass, gradient descent-based Approximant was used for training the RNN, but with an additional pre-processing step that will be explained in (\ref{eq:moving_window}). We also implemented a Total Variation (TV) denoised Approximant, to be used only at the testing stage of the RNN. In this case, the functional to be minimized is \begin{equation}\label{eq:TV_Approx} \mathcal{L}^{\text{TV}}_n(\mathbf{f}) = \frac{1}{2} ||H_n(\mathbf{f})-\mathbf{g}_n||_2^2 + \kappa\text{TV}_{l_1}(\mathbf{f}),\quad n=1,2,\ldots,N, \end{equation} where the TV-regularization parameter was chosen as $\kappa=10^{-3}$, and for $\mathbf{x}\in \mathcal{R}^{P\times Q}$ the anisotropic $l_1$-TV operator is \begin{equation} \begin{split} \text{TV}_{l_1}(\mathbf{x}) = &\sum_{p=1}^{P-1}\sum_{q=1}^{Q-1} \Big(\left|x_{p,q} - x_{p+1,q}\right| + \left|x_{p,q} - x_{p,q+1}\right|\Big)\\ & + \sum_{p=1}^{P-1} \left|x_{p,Q}-x_{p+1,Q}\right| +\sum_{q=1}^{Q-1} \left|x_{P,q}-x_{P,q+1}\right| \end{split} \end{equation} with reflexive boundary conditions \cite{beck2009fast,chambolle2004algorithm}. To produce the Approximants for testing from this functional, we first ran $3$ iterations of gradient descent and then $2$ iterations of FGP-FISTA (Fast Gradient Projection with Fast Iterative Shrinkage Thresholding Algorithm)\ \cite{beck2009fast,beck2009fista}. The sequence of $N$ Approximants for either the training or the testing procedure is a $4$D spatiotemporal sequence $\mathbf{F}=\left(\mathbf{f}_1^{[1]},\mathbf{f}_2^{[1]},\ldots,\mathbf{f}_N^{[1]}\right)$. As an additional processing step, to suppress unwanted artifacts in the Approximants of the experimentally captured intensities $\mathbf{g}_n$, we reduce the sequence size to $M$ by applying a moving average window as \begin{equation}\label{eq:moving_window} \tilde{\mathbf{f}}_m^{[1]} = \begin{dcases} \frac{1}{N_{\text{w}}+1}\sum_{n=m}^{m+N_{\text{w}}} \mathbf{f}_n^{[1]}, & 1\leq m\leq N_{\text{h}}\\ \frac{1}{N_{\text{w}}+1}\sum_{n=m}^{m+N_{\text{w}}} \mathbf{f}_{n+N_{\text{w}}}^{[1]}, & N_{\text{h}}+1\leq m\leq M. \end{dcases} \end{equation} To be consistent, the moving average window was applied to the Approximants for both training and testing. In this study, $N_{\text{w}}=15$, $N_{\text{h}}=6$ and $M=12$. These choices follow from the following considerations. We have $N=42$ diffraction patterns for each sequence: $21$ captured along the $x$ axis ($1-21$) and the remaining ones along the $y$ axis ($22-42$). The window is first applied to the $21$ patterns from the $x$-axis rotation, which generates $6$ averaged diffraction patterns, and then to the remaining $21$ patterns from the $y$-axis rotation, resulting in the other $6$ patterns.
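The moving-average reduction (\ref{eq:moving_window}) is simple enough to state in a few lines of Python; the sketch below (ours, following the indexing conventions of the equation) maps the $N=42$ single-pass Approximants to the $M=12$ windowed ones.
\begin{verbatim}
import numpy as np

def windowed_approximants(F, Nw=15, Nh=6, M=12):
    # F: array of shape (N, ...) holding the N = 42 Approximants,
    # 1-21 from the x-rotation and 22-42 from the y-rotation.
    out = []
    for m in range(1, M + 1):               # 1-indexed, as in the text
        idx = np.arange(m, m + Nw + 1)      # f_m ... f_{m+Nw}
        if m > Nh:
            idx = idx + Nw                  # shift into the y-rotation half
        out.append(F[idx - 1].mean(axis=0)) # back to 0-indexing
    return np.stack(out)                    # shape (M, ...)
\end{verbatim}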
Therefore, the input sequence to the next step in the architecture of Figure~\ref{fig:architecture}, {\it i.e.} to the encoder (Section~\ref{subsec:conv_enc_dec}), consists of a sequence of $M=12$ averaged Approximants~$\tilde{\mathbf{f}}_m^{[1]}$. \subsection{Split-convolutional gated recurrent unit (SC-GRU)}\label{subsec:sc_gru} Recurrent neural networks involve a recurrent unit that retains memory and context based on previous inputs, in the form of latent tensors or hidden units. It is well known that the Long Short-Term Memory (LSTM) is robust to instabilities in the training process. Moreover, in the LSTM, the weights applied to past inputs are updated according to usefulness, while less useful past inputs are forgotten. This encourages the most salient aspects of the input sequence to influence the output sequence\ \cite{hochreiter1997long}. Recently, the Gated Recurrent Unit (GRU) was proposed as an alternative to the LSTM. The GRU effectively reduces the number of parameters by merging some operations inside the LSTM, without compromising the quality of reconstructions; thus, it is expected to generalize better in many cases\ \cite{cho2014learning}. For this reason, we chose to utilize the GRU in this paper as well. The governing equations of the standard GRU are as follows: \begin{equation}\label{eq:gru_equations} \begin{gathered} r_m = W_r \xi_m + U_r h_{m-1}+b_r\\ z_m = W_z \xi_m + U_zh_{m-1} + b_z\\ \Tilde{h}_m = \text{tanh}\left(W\xi_m+U\left(r_m\circ h_{m-1}\right)+b_h\right)\\ h_m = (1-z_m)\circ \Tilde{h}_m + z_m\circ h_{m-1}, \end{gathered} \end{equation} where $\xi_m$, $h_m$, $r_m$, $z_m$ are the inputs, hidden features, reset states, and update states, respectively. Multiplication operations with weight matrices are performed in a fully connected fashion. We modified this architecture so as to take into account the asymmetry between the lateral and axial dimensions of optical field propagation. This is evident even in free-space propagation, where the lateral components of the Fresnel kernel \[ \expb{i\pi\frac{x^2+y^2}{\lambda z}} \] are shift invariant and, thus, convolutional, whereas the longitudinal axis $z$ is not. The asymmetry is also evident in nonlinear propagation, as in the BPM forward model (\ref{eq:BPM-iteration}) that we used here. This does not mean that space is anisotropic --- of course space is isotropic! The asymmetry arises because propagation and the object are 3D, whereas the sensor is 2D. In other words, the orientation of the image plane breaks the symmetry in object space, so that the scattered field from a certain voxel within the object {\em apparently} influences the scattered intensity from its neighbors at the detector plane differently in the lateral direction than in the axial direction. To account for this asymmetry in a profitable way for our learning task, we first define the operators $W_r$, $U_r$, etc. as convolutional, so as to keep the number of parameters down (even though in free-space propagation the axial dimension is not convolutional, and under strong scattering neither dimension is, the problem then being nonlinear); and we constrain the convolutional kernels of the operators to be the same in the lateral dimensions $x$ and $y$, while allowing the axial $z$ dimension kernel to be different. This approach justifies the term Split-Convolutional, and we found it to be a good compromise between facilitating generalization and adhering to the physics of the problem.
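To make the split-convolution idea concrete, here is a minimal PyTorch sketch of the operator and of the SC-GRU update (\ref{eq:new_gru_equations}) given below. It is an illustration of ours rather than the authors' implementation: the kernel lengths shown (size 3 along each split axis), the channel counts, and the sigmoid gating on $r_m$ and $z_m$ (which (\ref{eq:new_gru_equations}) leaves implicit) are assumptions; the actual kernels in this paper are $3\times3\times1$ and $1\times1\times4$.
\begin{verbatim}
import torch
import torch.nn as nn

class SplitConv3d(nn.Module):
    # Separate kernels along the lateral (x, y) axes and the axial z
    # axis, summed element-wise. Input shape: (batch, C, Z, X, Y).
    def __init__(self, c_in, c_out):
        super().__init__()
        self.lateral = nn.Conv3d(c_in, c_out, (1, 3, 3), padding=(0, 1, 1))
        self.axial = nn.Conv3d(c_in, c_out, (3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):
        return self.lateral(x) + self.axial(x)

class SCGRUCell(nn.Module):
    # SC-GRU update with ReLU on the candidate state; sigmoid gates
    # are assumed, as in the standard GRU.
    def __init__(self, c):
        super().__init__()
        self.Wr, self.Ur = SplitConv3d(c, c), SplitConv3d(c, c)
        self.Wz, self.Uz = SplitConv3d(c, c), SplitConv3d(c, c)
        self.W, self.U = SplitConv3d(c, c), SplitConv3d(c, c)

    def forward(self, xi, h):
        r = torch.sigmoid(self.Wr(xi) + self.Ur(h))
        z = torch.sigmoid(self.Wz(xi) + self.Uz(h))
        h_tilde = torch.relu(self.W(xi) + self.U(r * h))
        return (1 - z) * h_tilde + z * h
\end{verbatim}
The two-branch structure reduces the parameter count relative to a full $3\times3\times3$ kernel while keeping comparable lateral and axial receptive fields.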
\begin{figure}[htbp!] \centering \includegraphics[width=\linewidth]{split_convolution_figure.pdf} \caption{Split convolution scheme: different convolution kernels are applied along the lateral $x,y$ axes {\it vs.} the longitudinal $z$ axis. In our present implementation, the kernels' respective dimensions are $3 \times 3 \times 1$ (or $1 \times 1 \times 1$) and $1 \times 1 \times 4$. The lateral and longitudinal convolutions are computed separately and the results are then added element-wise. The split convolution scheme is used in both the gated recurrent unit (Section~\protect{\ref{sec:comput_arch}}.\protect{\ref{subsec:sc_gru}}) and the encoder/decoder (Section~\protect{\ref{sec:comput_arch}}.\protect{\ref{subsec:conv_enc_dec}}).} \label{fig:split_convolution} \end{figure} We also replaced the tanh activation function of the standard GRU with a rectified linear unit (ReLU) activation \cite{dey2017gate}, as the ReLU is computationally less expensive and helps avoid local minima while suffering fewer vanishing-gradient problems \cite{nair2010rectified,glorot2011deep}. The final form of our SC-GRU dynamics is \begin{equation}\label{eq:new_gru_equations} \begin{gathered} r_m = W_r*\xi_m + U_r*h_{m-1}+b_r\\ z_m = W_z*\xi_m + U_z*h_{m-1} + b_z\\ \Tilde{h}_m = \text{ReLU}\left(W*\xi_m+U*\left(r_m\circ h_{m-1}\right)+b_h\right)\\ h_m = (1-z_m)\circ \Tilde{h}_m + z_m\circ h_{m-1}, \end{gathered} \end{equation} where $*$ denotes our split convolution operation. \subsection{Convolutional encoder and decoder}\label{subsec:conv_enc_dec} Convolutional neural networks (CNNs) are placed before and after the SC-GRU as encoder and decoder, respectively. This architectural choice was inspired by \cite{sinha2017lensless,gehring2016convolutional,hori2017advances,zhao2017learning}. The encoder and decoder also utilize split convolution, as shown in Figure~\ref{fig:split_convolution}, in conjunction with residual learning, which is known to improve generalization in deep networks\ \cite{he2016deep}. As in \cite{sinha2017lensless}, the encoder and decoder utilize down-residual blocks (DRB), up-residual blocks (URB), and residual blocks (RB); however, there are no skip connections in our case, {\it i.e.} this is not a U-net\ \cite{ronneberger2015u} architecture. The encoder learns how to map its input ({\it i.e.} the $\tilde{\mathbf{f}}_m^{[1]}$ sequence) onto a low-dimensional nonlinear manifold. The compression factor is $16$ for the lateral input dimensions, but the axial dimension is left intact, as shown in Figure~\ref{fig:architecture}. This eases the burden on the training process as the number of parameters is reduced; more importantly, encoding abstracts features out of the high-dimensional inputs, passing latent tensors over to the recurrent unit. Letting the encoder for the $m$-th angle Approximant be symbolized as $\text{Enc}_m\left(\cdot\right)$, we have $\xi_m = \text{Enc}_m\left(\tilde{\mathbf{f}}_m^{[1]}\right)$ in (\ref{eq:new_gru_equations}). The decoder restores the output of the RNN to the native dimension of the object we are reconstructing. \subsection{Angular attention mechanism}\label{subsec:angular_att} Each intensity diffraction pattern from a new angle of illumination is combined at the SC-GRU input with the hidden feature $h_{m-1}$ from the SC-GRU's previous output. After $M$ iterations, there are $M$ different hidden features resulting from the $N$ illumination angles, as seen in (\ref{eq:moving_window}). Since the forward operator $H_n(\mathbf{f})$ is object dependent, the qualitative information that each such new angle conveys will vary with the object. It then becomes interesting to consider whether some angles of illumination convey more information than others. The analogue in temporal dynamical systems, the usual domain of application for RNNs, is the {\em attention} mechanism. It decides which elements of the system's state are the most informative. In our case, of course, time has been replaced by the angle of illumination, so we refer to the same mechanism as {\em angular attention:}\ it evaluates the contents of the previously received intensity diffraction patterns from different angles of illumination and assigns to each a compatibility function $e_m$, essentially a weight that reflects that illumination's importance for the overall reconstruction. Following the summation-style attention mechanism\ \cite{bahdanau2014neural}, we compute the compatibility function $e_m$ as the output of a neural network with hidden units (layers) $V_e$, $W_e$, and the weights $\alpha_m$ from the compatibility function, as \begin{equation}\label{eq:attention-VeWe} \begin{gathered} e_m = V_e\:\text{tanh}\left(W_e h_m\right), \\ \alpha_m = \text{softmax}\left(e_m\right) = \frac{\text{exp}(e_m)}{\sum_{m'=1}^{M} \text{exp}(e_{m'})}, \\ \quad m = 1,2,\ldots, M. \end{gathered} \end{equation} The final angular attention output $a$ is then computed from a linear combination of the hidden features as \begin{equation}\label{eq:attention} a=\sum_{m=1}^{M} \alpha_m h_m. \end{equation} In the corresponding ablation study of Section~\ref{sec:results}, only the last hidden feature $h_M$ is passed on to the decoder, {\it i.e.} the angular attention mechanism is not used. There is an alternative, dot-product attention mechanism\ \cite{vaswani2017attention}, but we chose not to implement it here.
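A minimal PyTorch sketch of the angular attention computation (\ref{eq:attention-VeWe})--(\ref{eq:attention}) follows; flattening the hidden features and the attention layer width are our illustrative choices, not part of the original implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class AngularAttention(nn.Module):
    # Summation-style attention over the M hidden features.
    def __init__(self, feat_dim, att_dim=64):
        super().__init__()
        self.We = nn.Linear(feat_dim, att_dim)
        self.Ve = nn.Linear(att_dim, 1)

    def forward(self, h):                    # h: (M, feat_dim)
        e = self.Ve(torch.tanh(self.We(h)))  # compatibilities e_m
        alpha = torch.softmax(e, dim=0)      # weights over the M angles
        return (alpha * h).sum(dim=0)        # merged representation a
\end{verbatim}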
\section{Training and testing procedures}\label{sec:train_and_test} \subsection{Training the recurrent neural network}\label{subsec:training_rnn} For training and validation, $5000$ and $500$ layered objects were used, respectively. For each object, a sequence of intensity diffraction patterns from the $N=42$ angles of illumination was produced by BPM, as described earlier. The Approximants were each obtained with a single iteration of gradient descent. All of the architectures were trained for $100$ epochs with a training loss function (TLF) given by the negative Pearson correlation coefficient (NPCC) \cite{li2018imaging}, defined as \begin{equation} \sst{\mathcal{E}}{NPCC}\big(f,\hat{f}\big) \equiv -\:\frac{\displaystyle{\sum_{x,y}}\Big(f(x,y)-\big<f\big>\Big)\Big(\hat{f}(x,y)-\big<\hat{f}\big>\Big)}{\sqrt{\displaystyle{\sum_{x,y}}\Big(f(x,y)-\big<f\big>\Big)^2}\sqrt{\displaystyle{\sum_{x,y}}\Big(\hat{f}(x,y)-\big<\hat{f}\big>\Big)^2}}, \label{eq:tlf-npcc} \end{equation} where $f$ and $\hat{f}$ are a ground truth image and its corresponding reconstruction. In this article, the NPCC was computed in $3$D, over entire volumes. We used a stochastic gradient descent scheme with the \textit{Adam} optimizer \cite{kingma2014adam}. The learning rate was initially set to $10^{-3}$ and halved whenever the validation loss plateaued for $5$ consecutive epochs. The batch size was set to $10$. The desktop computer used for training has an Intel Xeon W-$2295$ CPU at $3.00$ GHz with $24.75$ MB cache, $128$ GB RAM, and dual NVIDIA Quadro RTX $8000$ GPUs with $48$ GB VRAM.
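The NPCC loss (\ref{eq:tlf-npcc}) itself is straightforward to implement; the sketch below (ours, computing the correlation over entire flattened volumes, with an assumed batch convention) illustrates it in PyTorch.
\begin{verbatim}
import torch

def npcc_loss(f, f_hat, eps=1e-12):
    # Negative Pearson correlation coefficient per sample, averaged
    # over the batch; f, f_hat: (batch, ...) tensors of equal shape.
    f = f.flatten(1).float()
    g = f_hat.flatten(1).float()
    fc = f - f.mean(dim=1, keepdim=True)
    gc = g - g.mean(dim=1, keepdim=True)
    num = (fc * gc).sum(dim=1)
    den = fc.pow(2).sum(dim=1).sqrt() * gc.pow(2).sum(dim=1).sqrt()
    return -(num / (den + eps)).mean()
\end{verbatim}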
For comparison, we also re-trained the $3$D-DenseNet architecture with skip connections of \cite{goy2019high} with the same training scheme as above, \textit{i.e.} with \textit{Adam} for $100$ epochs, batch size $10$, and the same initial learning rate and halving strategy. This serves as the baseline; however, the number of parameters in this network is $0.5\:\text{M}$, whereas in our RNN architecture the number of parameters is $21\:\text{M}$. We also trained an enhanced version of the $3$D-DenseNet by tuning the number of dense blocks, the number of layers inside each dense block, the filter size, and the growth rate to match the total number of parameters with that of the RNN, {\it i.e.} $21\:\text{M}$. In the next section, we refer to these two versions of the $3$D-DenseNet as Baseline ($0.5\:\text{M}$) and Baseline ($21\:\text{M}$), respectively. \subsection{Testing procedures and metrics} A simple affine transform is first applied to the raw experimentally obtained intensity diffraction patterns to correct slight misalignment. Then we run up to $3$ iterations of gradient descent (\ref{eq:new_approximants}) and up to $2$ iterations of FGP-FISTA to test the trained network using the TV-based Approximants (\ref{eq:TV_Approx}). Even though training used the NPCC as in (\ref{eq:tlf-npcc}), we investigated two additional metrics for testing: the probability of error (PE) and the Wasserstein distance \cite{villani2003topics,kolouri2017optimal}. We also quantified test performance using the SSIM (Structural Similarity Index Metric) \cite{wang2004image}, shown in the Supplementary material. PE is the mean absolute error between two binary objects; in the digital communication community it is instead referred to as Bit Error Rate (BER). To obtain the PE, we first threshold the reconstructions and then define \begin{equation} \text{PE} = \frac{\left(\text{\# false negatives}\right)\: + \:\left(\text{\# false positives}\right)}{\text{total \# pixels}}. \end{equation} We found that PE oftentimes accentuates the differences between a binary phase ground truth object and its binarized reconstruction: even small residual artifacts, if above the threshold, are mapped to one, and thus weigh more heavily in the probability of error than they would in other metrics. With these properties, PE is a particularly suitable error metric for the kind of objects we consider in this paper.
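Since PE is central to the comparisons that follow, we note that it reduces to a few lines of NumPy; in this sketch of ours the binarization threshold value is illustrative.
\begin{verbatim}
import numpy as np

def probability_of_error(f, f_hat, threshold=0.5):
    # PE = (# false negatives + # false positives) / (total # pixels),
    # computed after binarizing the reconstruction.
    g = (f_hat >= threshold).astype(np.uint8)
    return float(np.mean(f.astype(np.uint8) != g))
\end{verbatim}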
PE is also closely related to the two-dimensional Wasserstein distance, as we now show through an analytical derivation. The latter metric involves an optimization over a transport plan that minimizes the total cost of transport from a source distribution to a target distribution. The two-dimensional Wasserstein distance is defined as \begin{equation} \begin{gathered} W_{p=1} = \min_P \langle P,C\rangle = \min_P\sum_{ij}\sum_{kl}\gamma_{ij,kl}C_{ij,kl},\\ \text{s.t.}\:\: \sum_{kl}\gamma_{ij,kl} = f_{ij},\: \sum_{ij}\gamma_{ij,kl}=g_{kl},\:\gamma_{ij,kl}\geq 0, \end{gathered} \end{equation} where $f_{ij}$ and $g_{kl}$ are a ground truth binary object and its binary reconstruction, \textit{i.e.} $f_{ij}, g_{kl}, \gamma_{ij,kl} \in\{0,1\}$; $P=\left(\gamma_{ij,kl}\right)$ is the coupling tensor; and $C_{ij,kl}=\left|x_{ij}-x_{kl}\right|$ is the cost tensor. PE can be reduced to a similar, but not equivalent, form. For $i,j,k,l$ where $\gamma_{ij,kl}\neq 0$, \begin{equation}\label{eq:prob} \begin{split} \text{PE} &= \frac{1}{N^2}\sum_{ij}\left|f_{ij}-g_{ij}\right|\\ &= \frac{1}{N^2}\sum_{ij}\left|f_{ij}-\sum_{kl}g_{kl}\:\delta\left[i-k,j-l\right]\right|\\ &= \frac{1}{N^2}\sum_{ij}\left|\sum_{kl}\gamma_{ij,kl}\left(1-\frac{g_{kl}\:\delta\left[i-k,j-l\right]}{\gamma_{ij,kl}}\right)\right|\\ &\equiv \sum_{ij}\left|\sum_{kl}\gamma_{ij,kl}\Tilde{C}_{ij,kl}\right|\\ &= \sum_{ij,kl;\gamma_{ij,kl}\neq 0} \gamma_{ij,kl}\Tilde{C}_{ij,kl}, \qquad \text{where} \end{split} \end{equation} \begin{equation} N^2\Tilde{C}_{ij,kl} = 1\!-\!\frac{g_{kl}\:\delta\left[i-k,j-l\right]}{\gamma_{ij,kl}}= \begin{dcases} \:1, & \:\text{if}\:\: ij \neq kl\\ \:1\! -\! g_{kl}, & \:\text{if}\:\: ij = kl. \end{dcases} \end{equation} This shows that the PE is a version of the Wasserstein distance with a differently defined cost tensor. \section{Results}\label{sec:results} \begin{figure*}[htbp] \centering \includegraphics[width=0.55\textwidth]{number_of_patterns.pdf} \caption{Progress of 3D reconstruction performance as new windowed Approximants $m=1,\ldots, M\!\!=\!\!12$, computed according to (\protect{\ref{eq:moving_window}}) from experimental data, are presented to the recurrent scheme. The same progression can be found in the Online Materials as a movie.} \label{fig:number_of_patterns} \end{figure*} \begin{table*}[htbp!] \begin{center} \begin{tabular}{c||c c c c|c} \hline \textbf{Probability of error ($\%$)} ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 6.604 & 5.255 & 7.837 & 3.204 & 5.725\\ \text{Baseline (21 M)} & 6.604 & 5.725 & 5.652 & 2.856 & 5.209\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{5.408} & \textbf{4.828} & \textbf{2.332} & \textbf{1.660} & \textbf{3.557}\\ \hline\hline \textbf{Wasserstein distance} ($\times\:10^{-2}$) ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 2.854 & 1.466 & 2.783 & 0.9900 & 2.023\\ \text{Baseline (21 M)} & 2.703 & 1.171 & 2.475 & 0.8112 & 1.790\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{1.999} & \textbf{1.093} & \textbf{1.749} & \textbf{0.6403} & \textbf{1.370}\\ \hline\hline \textbf{PCC} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 0.8818 & 0.6426 & 0.8658 & 0.6191 & 0.7523\\ \text{Baseline (21 M)} & 0.8859 & 0.6430 & 0.9021 & 0.6132 & 0.7611\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{0.8943} & \textbf{0.6612} & \textbf{0.9551} & \textbf{0.7039} & \textbf{0.8036}\\ \hline \iffalse \hline \textbf{SSIM} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 0.7606 & 0.7409 & 0.7299 & 0.8046 & 0.7590\\ \text{Baseline (21 M)} & 0.7702 & 0.7557 & 0.7978 & 0.8357 & 0.7899\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{0.7987} & \textbf{0.8128} & \textbf{0.8652} & \textbf{0.9154} & \textbf{0.8480}\\ \hline \fi \end{tabular} \end{center} \caption{Quantitative comparison between the baseline (static) and dynamic reconstruction from testing on experimental data, according to PE, Wasserstein distance ($p=1$), and PCC. SSIM comparisons are in the Supplementary materials.} \label{tab:quantitative_comparison} \end{table*}
Our RNN is first trained as described in Section~\ref{sec:train_and_test}, and then tested with the TV-based Approximants (\ref{eq:TV_Approx}) applied to the experimentally obtained diffraction patterns. The evolution of the RNN output as more input patterns are presented is shown in Figure~\ref{fig:number_of_patterns}. When the recurrence starts at $m=1$, the volumetric reconstruction is quite poor; as more orientations are included, the reconstruction improves as expected. A movie version of this evolution for $m=1,\ldots, M$ is included in the online materials. \begin{figure*}[htbp!] \centering \includegraphics[width=0.72\textwidth]{qualitative_comparison.pdf} \caption{Qualitative comparison on test performance between the baseline and proposed architectures using experimental data. The baseline architectures are $3$D-DenseNet CNN architectures with $0.5$ M and $21$ M parameters. Our proposed architecture is a recurrent neural network with elements described in Section~\ref{sec:comput_arch}.} \label{fig:qualitative_comparison} \end{figure*} \begin{table*}[htbp!] \begin{center} \begin{tabular}{c||c c c c|c} \hline \textbf{Probability of error ($\%$)} ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{5.408} & 4.828 & \textbf{2.332} & \textbf{1.660} & \textbf{3.557}\\ \hdashline \text{-- ReLU activation (21 M)} & 6.262 & \textbf{4.718} & 3.241 & 1.904 & 4.031\\ \text{-- angular attention (21 M)} & 9.399 & 5.566 & 11.64 & 3.375 & 7.495\\ \text{-- split convolution (43 M)} & 9.674 & 6.342 & 14.43 & 2.405 & 8.212\\ \hline\hline \textbf{Wasserstein distance} ($\times\:10^{-2}$) ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{1.999} & \textbf{1.093} & \textbf{1.749} & \textbf{0.6403} & \textbf{1.370}\\ \hdashline \text{-- ReLU activation (21 M)} & 2.291 & 1.156 & 1.886 & 0.6692 & 1.501\\ \text{-- angular attention (21 M)} & 3.016 & 1.587 & 3.672 & 1.063 & 2.335\\ \text{-- split convolution (43 M)} & 4.005 & 2.863 & 3.651 & 2.233 & 3.188\\ \hline\hline \textbf{PCC} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{0.8943} & 0.6612 & \textbf{0.9551} & \textbf{0.7039} & \textbf{0.8036}\\ \hdashline \text{-- ReLU activation (21 M)} & 0.8832 & \textbf{0.6836} & 0.9406 & 0.6725 & 0.7950\\ \text{-- angular attention (21 M)} & 0.8281 & 0.6252 & 0.8145 & 0.4657 & 0.6834\\ \text{-- split convolution (43 M)} & 0.8005 & 0.4525 & 0.7313 & 0.4910 & 0.6188\\ \hline \iffalse \hline \textbf{SSIM} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{0.7987} & \textbf{0.8128} & \textbf{0.8652} & \textbf{0.9154} & \textbf{0.8480}\\ \hdashline \text{-- ReLU activation (21 M)} & 0.7787 & 0.8088 & 0.8459 & 0.8971 & 0.8326\\ \text{-- angular attention (21 M)} & 0.6876 & 0.7175 & 0.6612 & 0.7826 & 0.7122\\ \text{-- split convolution (43 M)} & 0.6205 & 0.4740 & 0.5953 & 0.5274 & 0.5543\\ \hline \fi \end{tabular} \end{center} \caption{Quantitative assessment of ablation effects. Values inside the parentheses in the first column indicate the number of parameters.
When split convolution is ablated, we instead use a uniform $3\times 3\times 3$ kernel; hence the number of parameters increases. SSIM comparisons are in the Supplementary materials.} \label{tab:ablation_study_quantitative} \end{table*} Visual comparisons with the baseline $3$D-DenseNets with $0.5$ M and $21$ M parameters are shown in Figure~\ref{fig:qualitative_comparison}. The RNN results show substantial visual improvement, with fewer artifacts and distortions compared to static approaches, e.g. \cite{goy2019high}. Quantitative comparisons in terms of our chosen metrics (PE, Wasserstein distance, and PCC) are in Table~\ref{tab:quantitative_comparison}. \begin{figure*}[t!] \centering \includegraphics[width=0.61\textwidth]{ablation_study_qualitative.pdf} \caption{Visual quality assessment from the ablation study on elements described in Section~\ref{sec:comput_arch}. Rows $3-5$ show reconstructions based on experimental data for each layer upon ablation of the ReLU activation (\ref{eq:new_gru_equations}), {\it i.e.}, using the more common tanh activation function instead (row 3); the angular attention mechanism (row 4); and the split convolution (row 5). The rows are ordered by increasing severity of the ablation effect.} \label{fig:ablation_study_qualitative} \end{figure*} We conducted an ablation study whose purpose is to isolate and quantitatively compare the contribution to the reconstruction of each element described in Figure~\ref{fig:architecture} and Section~\ref{sec:comput_arch}. We remove, one at a time, the split convolution, the angular attention mechanism, and the ReLU activation, and quantify performance again. Ablation in the case of the ReLU activation means that we replace it with the more usual tanh activation function. The ablated architectures are trained under the same training scheme as in Section~\ref{sec:train_and_test}.\ref{subsec:training_rnn} and tested with the same TV-based Approximants. Visually, ablating the split convolution degrades the testing performance the most, followed by ablating the angular attention mechanism and the ReLU activation. These findings are supported quantitatively in Table~\ref{tab:ablation_study_quantitative}. Note that substituting tanh for the ReLU does not bring as large a degradation as the other ablations, and is even slightly better in some cases (see the probability of error of Layer $2$ in Table~\ref{tab:ablation_study_quantitative}). Thus, we find that (1) the split convolution should be considered as a replacement for a general $3$D convolution when designing a recurrent unit and a convolutional encoder/decoder; (2) the angular attention mechanism is helpful when the inputs are formulated as temporal sequences; and (3) the choice of ReLU over tanh is still helpful but somewhat less significant and may be application-dependent. With respect to attention, in particular, even though the module's presence clearly contributes to good training quality, we found that the coefficients converge to $\alpha_m\approx 1/M$ for all $m$, consistent with the more-or-less angularly invariant class of samples---at least in the statistical sense, and for the small range of illumination angles that we used. A more detailed study of the angular attention module can be found in the Supplementary Material.
\section{Conclusions and discussion}\label{sec:conclusion} We have proposed a radically new recurrent neural network scheme for processing raw inputs from different angles of illumination dynamically, {\it i.e.} as a sequence, with each new angle improving the 3D reconstruction. We have found this scheme to offer significant qualitative and quantitative improvement over static machine learning schemes, where the raw inputs from all angles are processed at once by a neural network. Through an ablation study, we found that sandwiching the recurrent structure between a convolutional encoder and decoder helps improve the reconstructions. Even more interestingly, an angular attention mechanism, rewarding raw inputs from certain angles as more informative and penalizing others, also contributes significantly to improving reconstruction fidelity, albeit less than the encoder/decoder pair. Even though we used the dynamic machine learning approach in the most difficult case of 3D reconstruction, when strong scattering is present, there is no reason to doubt that it would be applicable to less ill-posed cases as well, e.g. optical diffraction tomography and Radon inversion. Also possible are alternative implementations of the RNN, e.g. with LSTMs or Reservoir Computing \cite{lukovsevivcius2009reservoir,lukovsevivcius2012reservoir,schrauwen2007overview}, and further exploration of split convolutional variants or DenseNet variants for the encoder/decoder and dynamical units; we leave these investigations to future work. \section{Funding} \noindent Southern University of Science and Technology (6941806); Intelligence Advanced Research Projects Activity (FA8650-17-C-9113); Korea Foundation for Advanced Studies.~\\ \section{Acknowledgments} \noindent I. Kang acknowledges partial support from a KFAS (Korea Foundation for Advanced Studies) scholarship. We are grateful to Jungmoon Ham for her assistance with drawing Figures~\ref{fig:introduction} and \ref{fig:split_convolution}, and to Subeen Pang, Mo Deng and Peter So for useful discussions and suggestions.~\\ \noindent\textbf{Disclosures.} The authors declare no conflicts of interest.
\section*{Reviews and Summary of Revision} \noindent Following are the reviews we received from our latest submission to IEEE S\&P 2020 and our summary of changes following the reviews. \section{Summary of revision} Following the reviews from IEEE S\&P 2020, we have made the following major improvements to this paper. \vspace{3pt}\noindent\textbf{Better highlighted the novelty}. We elaborated the strengths of our work (Section~\ref{sec-introduction}) and compared them with the most closely related works (Section~\ref{sec-relatedwork}). To the best of our knowledge, we are the first to propose the CAT model and to address the confidential attestation dilemma (that the service code provider is reluctant to expose the implementation details of its services for public attestation) with an unbalanced proof-carrying code framework. Hardly any related work takes into account the privacy of both code providers and data owners. Works such as Ekiden~\cite{cheng2018ekiden} provide a computing environment for running a smart contract on sensitive data privately. However, they only care about the confidentiality of the input data and the contract state, and do not need to cope with the secrecy of the contract (program), because of its transparency. Thus, the CAT model solves a disparate problem in a totally different scenario. Fortunately, the confidential attestation challenge can be addressed well using the Proof-Carrying Code (PCC) framework. According to XFI~\cite{erlingsson2006xfi}, a brilliant work co-authored by the PCC designer George C. Necula, XFI modules can be seen as an example of proof-carrying code even though they do not include logical proofs. Similar to XFI, our work uses static analysis with inline software guards that perform checks at runtime, which can be regarded as an example of PCC that trusts only the verifier, not the means of proof generation. Sandboxing is a suitable technique for performing privacy-preserving computations. Related works like Ryoan~\cite{hunt2018ryoan} leverage Google NaCl to build their prototypes, whose core library is about 19 MB including several LLVM components. In contrast, our implementation is more lightweight, adding about 2 MB of binary code to the TCB. On the other side, our approach is more practical compared with formal methods, e.g., a type-safe language. A type-safe language compiler/interpreter alone cannot complete the verification of properties such as confidentiality. A verification condition generator (VCGen) and a proof solver must work together with the compiler/interpreter, which introduces a huge TCB and computation overhead. In our design, we let the proof generator (the customized LLVM) provide rich control/data-flow information and then check it strictly inside the enclave, which removes the compiler from the TCB and further helps save memory consumption and performance overhead. We also fully considered the characteristics of policy-compliant computations before proposing our design. Real-world applications often contain hundreds of lines of code and thousands of instructions. Compared to formal verification, generating security annotations is significantly more lightweight, allowing our solution to scale to real-world software of such an amount of code. \vspace{3pt}\noindent\textbf{Added more policies}. We extended the number of policies that we need to enforce from 5 to 7, to fully protect the privacy of user data.
These policies include data leak control, control-transfer management, self-modifying code blocking, and side/covert channel mitigation. We used the implemented prototype to demonstrate how the policies can be enforced (using an unbalanced design of assembly-level instrumentation on the code provider side with lightweight verification on the code consumer side). The additional policies, including the input/output constraint (P0) and side/covert channel mitigation (P6), are described in Section~\ref{subsec-policies}; they are designed to limit the amount of information an attacker can obtain. Specifically, enforcing P0 prevents attackers from directly accessing the plaintext of input/output messages. We built OCall stubs for invoking system calls; restrictions on the length of OCall stub parameters handle the issue of the Intel SGX SDK's null-terminated strings. Enforcing P6 protects the data owner from page-fault-based controlled-channel attacks and most L1/L2 cache-based side-channel attacks. Meanwhile, Data Execution Prevention (DEP) cannot be skipped, since the loaded binary is allocated on the enclave's heap memory, leading to a W+X problem. The goal of enforcing DEP (P4) is elaborated in Section~\ref{subsec-policies}. This will not be a problem in SGXv2, which introduces dynamic EPC memory management and can protect dynamically loaded pages with permission-changing and verification instructions such as \texttt{EAUG/EMODPE/EACCEPT}. \vspace{3pt}\noindent\textbf{Enriched implementation details}. We added details on how the proof is generated and how the whole system works (Section~\ref{sec-implementation} and Appendix~\ref{appendix-instrumentation}). Specifically, we illustrate how to use CAT to implement information-release prevention policies and common SGX side/covert channel mitigation policies. We built an assembly-level PCC framework from scratch. The instrumentation module for enforcing P1 can also be used to enforce DEP (P4) with few modifications. We showed that the code generator consists of portable components, which provides great flexibility for supporting replaceable policies; further policies can be integrated by implementing additional LLVM passes. Careful engineering also went into building the bootstrap enclave (Section~\ref{subsec:bootstrap-impl}). We illustrated how the loader and the verifier can be integrated seamlessly (Section~\ref{subsec-loading}). The binary parsing module in the loader (in Figure~\ref{fg-workflow}) not only assists in obtaining offsets for relocation, but also provides the program entry point for later disassembly. Moreover, legal indirect branch labels are translated by the binary parser. The loader and the verifier thus form a whole and work in concert. Our implementation did not use two enclaves to separate the verification stage from the code loading and execution stage. The only advantage of using two enclaves is that there would be no W+X issue during target binary loading, which can instead be handled by software DEP. Meanwhile, using two enclaves may induce more communication overhead, and the attestation protocol would have to be redesigned, introducing a larger TCB and more overhead since more parties are involved. We believe both designs are reasonable. However, at this time we stick to the original ``one-enclave'' design for the following reasons. Firstly, memory store instrumentation can easily be extended to address the W+X issue.
Secondly, within the one-enclave design, the symbol-resolving code (used during loading) can easily be reused by the rewriter and the verifier, which reduces consumption of the limited enclave memory. \vspace{3pt}\noindent\textbf{Thoroughly evaluated the CAT system}. We built more benchmarks (including micro-benchmarks and real-world applications) and tested them with the prototype we implemented (Section~\ref{subsec-experiments}). A performance analysis was given, showing the differences under multiple granularities of protection (P1, P1+P2, P1\textasciitilde P5, and P1\textasciitilde P6). Suggestions on which policies should be incorporated can be made according to their performance cost and the security properties they bring in different scenarios. In addition, the TCB of our CAT system was evaluated. The policies were also analyzed for soundness and completeness (Section~\ref{subsec-securityanalysis}). The goal of enforcing policies on memory write operations is to build an information confinement, and a CFI policy guarantees that the checks for the other policies cannot be circumvented. \vspace{3pt}\noindent\textbf{Better summarized the lessons learnt}. We have to acknowledge that our current CAT-SGX system has limitations, and we discuss them in Section~\ref{sec-discussion}. We summarize those limitations as follows. Side channels have been recognized as a major threat for SGX; attacks include cache, TLB, branch prediction, page table, etc., and side-channel defense for SGX remains an important research problem. With the help of SSA monitoring to detect and prevent AEX-based channels, our work can limit data leakage via most interrupt-based covert channels. Further, covert channels based on the parameters of system calls can be eliminated by enforcing P0 -- a strong constraint on input/output messages that only allows certain syscalls (\texttt{send/recv}). System functions like setjmp/longjmp are disallowed in our system since they can break call-return semantics. The system call interval cannot be a covert channel in CDaaS cases, since we only process input/output once during the in-enclave service; an individual request can only leak a few bits. However, it can be a threat in CCaaS scenarios such as web-server-as-a-service: attackers can deliberately construct requests with different processing times and transfer secrets via the timing differences. Overall, we have to admit that our work cannot cover all kinds of side/covert channels, though policies to defend against them (e.g., fixed processing time) can be added to our framework. Multi-threading and multi-user operation are not currently supported, but can be accomplished with certain extensions. We elaborate how our work can be extended towards solving those issues in Section~\ref{sec-discussion}. Supporting the single-thread multi-user scenario is straightforward: it requires only a data-cleansing policy and session key management for multiple users. Supporting a multi-threaded scenario would take much more effort. Recent works address this topic: Occlum~\cite{shen2020occlum} and MPTEE~\cite{zhao2020mptee} both leverage Intel MPX~\cite{shen2018isolate} to isolate the address spaces of multiple threads for efficient multitasking. Although including a LibOS in the TCB may not be acceptable, we can learn from them and add memory-read auditing policies for regions reserved for different threads. Specifically, the code provider can divide the `.data' section of the target program into memory regions for $N$ threads. Then, on the bootstrap enclave side, the loader keeps the upper/lower bounds when performing a thread switch, and the verifier enforces the memory access policies for all $N$ threads, as illustrated by the sketch below.
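To make the intended check concrete, below is a minimal C sketch of such a per-thread bound check, under the assumption that the loader keeps one bounds pair per thread. The names (\verb|region_t|, \verb|thread_region|, \verb|check_store|) are hypothetical illustrations rather than identifiers from our implementation, and in CAT-SGX the equivalent logic would be emitted as inline instrumented assembly rather than a function call.

\begin{verbatim}
#include <stdint.h>
#include <stdlib.h>

typedef struct { uintptr_t lo, hi; } region_t;

/* One slice of the `.data' section per thread; maintained by the loader. */
extern region_t thread_region[];

/* Abort unless the store [addr, addr+size) stays inside thread tid's slice. */
static inline void check_store(int tid, uintptr_t addr, size_t size)
{
    region_t r = thread_region[tid];
    if (addr < r.lo || addr + size > r.hi)
        abort();
}
\end{verbatim}

Keeping the bounds in a loader-maintained table mirrors the division of labor described above: the untrusted generator emits the check, while the trusted loader fills in the actual bounds.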
\section{Reviews} \begin{markdown} ## Review A ================================================= ### Brief Paper Summary The paper proposes CAT, an approach for ensuring to a user that code meets privacy requirements without the user observing the code. CAT relies on a combination of compiler modification to generate code that meets the privacy requirements and an SGX enclave that loads and execute the code only after verifying it meets the requirements. ### Strengths Ensuring data privacy is important. ### Weaknesses There is a huge gap between what the paper promises and what it delivers. Considering what is being delivered, it's not clear what the novelty is. ### Detailed Comments for Authors Thank you for submitting your work to IEEE SP. While I appreciate the amount of work that went into CAT, I am afraid that I cannot recommend that the work be accepted. The paper starts with a promise of a framework that can validate the compliance of code with privacy policies. However, it only investigates a single privacy policy. There is absolutely no discussion of how the research can be extended to other privacy policies, and such extensions do not seem trivial. The privacy policy the paper claims to enforce is to ensure that data is not leaked from the enclave. The paper reduces this policy to five requirements (P1-P5, Page 6). While I agree that all requirements are necessary for enforcing the policy, I am not convinced that they are sufficient. In particular, I do not see that these requirements are sufficient to prevent data from one query leaking into another. In the case of a service provider that provides a service to multiple clients, this would allow leaking information between clients. Another potential information leak is via system calls or their arguments - the paper indicates that system calls are supported but does not explain how their arguments are validated. When reducing the problem to only maintaining the five requirements, it is not clear what the advantage of the suggested approach over using type-safe languages or mechanisms such as suggested for Google Chrome Native Clients (Yee et al. IEEE SP 2009). Side channel attacks are considered out-of-scope for the paper. However, given the vulnerability of SGX to such attacks, and the attempt to protect from malicious code, ignoring these seem to render CAT completely useless for protecting privacy. There is absolutely no discussion of the limitations of the approach. Can the verified code communicate with the service provider? Can it use longjmp? Is multithreading supported? etc. Overall, I think that the idea is promising, but the gap between the promise and the current implementation is too wide. I suggest that the authors work on reducing the gap e.g. by showing how the approach could apply for a wider range of security/privacy policies. ## Review B ================================================= ### Brief Paper Summary This paper proposes to solve a problem with remote attestation as used in SGX. One party provides code and another provides data; neither wants to provide their part in the clear; but the party providing data needs to be sure that their data's privacy is assured.
They propose to do this with a loader enclave that checks and runs proof-carrying code (PCC) guaranteed to comply with the privacy policy. ### Strengths The system is relatively simple. ### Weaknesses The simplicity of this system comes from its lack of expressiveness, and fixed not-very-useful security policy. The lack of covert channel protection is very problematic. There's not much point in blocking the front door if the back door is wide open. While unintentional side channels can be tricky to exploit, covert channels are really easy. The only supported policy is that no data is written outside the enclave. It is not clear how the loader will determine whether the results of the computation comply with any kind of privacy policy, which is implied in some of the suggested use cases. In fact, the paper doesn't discuss how output is to be produced at all, though I assume that the loader would have a trusted output routine of some kind. The paper is riddled with typos. This isn't proof-carrying code. It's instrumented code. I'm not clear why this would be better in practice than a LISP interpreter with some security/privacy policy annotations. If you must run binary code, why must the loader be in the same enclave as the running code? Surely verifying it from afar would be safer and would avoid some of this CFI mess? ### Detailed Comments for Authors needs certified -> needs to be certified quite larger -> significantly larger unintended redirected -> redirected, conclude with malicious OS or hypervisor -> collude with a malicious OS or hypervisor during the compiling -> during compilation would leakage user's data -> would leak users' data rewritting -> rewriting As mentioned above, to facilitate PCC framework working well -> cut this paragraph, you said it already Instrumentations -> Instrumentation Theses -> These stack pointer never point -> stack pointer never points cannot modify alter -> cannot modify i.e., one does not use external elements -> i.e., one that does not use external elements code generator firstly extract -> code generator first extracts is very small that only consists -> consists 500 lines that is made from scratch -> 500 new lines of code (in addition to ...). more small and exquisite -> smaller and more exquisite but leverage -> but leverages we still want to acclaim -> we still claim [or expect?] ## Review C ================================================= ### Brief Paper Summary The paper makes three main contributions: the CAT model where the privacy of both code and data are preserved (leakage is limited to what the code/data premptively agree to), a proof-carrying code based implementation to reduce the trusted code base size (TCB), and performance/programmability evaluation. ### Strengths The paper includes a pragmatic set of decisions to reduce the TCB of a two-way (between data owner and code owner) privacy contract. ### Weaknesses The CAT model has been proposed and studied extensively; often coupled with a blockchain based audit logs of the transaction between the code- and data-owner. Examples: - private data objects (Intel): https://arxiv.org/abs/1807.05686 - similar paper (cheng et al): https://arxiv.org/abs/1804.05141 and others. The advance of CAP over a standard attestation protocol is a direct extension -- perhaps I am missing something. ### Detailed Comments for Authors Good work! I like the problem space -- this is in an area where advances can be very impactful towards building a low TCB system where data-owners and code-owners can transact. 
The paper can be greatly improved by focusing and going deeper into what is new here. The attestation protocol might be new; but I couldn't figure out why. Allocating the tasks across untrusted and trusted parts, with one generating the proof while a simpler trusted part checks it inside the enclave seems novel and worth leading the story with. Bolting on DEP etc protections to cover for SGX v1 details seems unnecessary -- can this be skipped? If focusing on the second theme above, it would be great to discuss the properties and their impact on proof gen/verification. Security properties that are being checked: (leaving out DEP etc) - prevent explicit stores of data outside the enclave range - prevent implicit spills out of the enclave - prevent tampering of the bootstrap enclave contents Should these be enforced in hardware (if the goal is to use hardware-based enclaves like SGX)? Why should this require a proof-carrying code -- I am not sure and would love to learn more. Interestingly, I can see why one might want to verify side-channel security of code that runs inside an enclave and ensuring that side-channel defenses compiled into the code indeed check off certain properties. Overall, I am very supportive of this problem and the effort seems honest -- I couldn't pinpoint the key advances that this paper really pushes, but am happy to be convinced otherwise. \end{markdown} \ignore{ \section{Response to reviews} To Review A: Comment 1. *”...There is absolutely no discussion of how the research can be extended to other privacy policies, and such extensions do not seem trivial...The privacy policy the paper claims...While I agree that all requirements are necessary for enforcing the policy, I am not convinced that they are sufficient...”* We extend the number of policies that we need to enforce from 5 to 9, to fully protect the privacy of user data. Proofs are sufficient, which were proved in previous work. Comment 2. *”Another potential information leak is via system calls or their arguments - the paper indicates that system calls are supported but does not explain how their arguments are validated”* We made Ocall stubs for calling system call. Restrictions on all Ocall stubs output length, including string type of parameters, can handle the issue of the Intel SGX SDK's Null-terminated string. The policy we enforced on Ocalls (P1) will make sure the arguments are valid. Comment 3. *”...it is not clear what the advantage of the suggested approach over using type-safe languages or mechanisms such as suggested for Google Chrome Native Clients...”* One important downside of a type-safe intermediate language (IL) is that no current formal tools can transform a binary to IL for verification inside Intel SGX enclave. And compared with a sandbox implementation for SGX (e.g., Ryoan at OSDI `16) that adds more TCB and computations, our work only consists of less than 2 MB binary code as the TCB and has much better performance than approaches using other mechanisms. Comment 4. *”Side channel attacks are considered out-of-scope for the paper...”* Side channels are difficult to eliminate since they are close related to the hardware architecture. Nevertheless, we illustrate that we can insert proof to mitigate the AEX-based side channel leakages. Comment 5. *”...Can the verified code communicate with the service provider? Can it use longjmp? Is multithreading supported? etc.”* Setjmp/longjmp is disallowed in our system since they can break call-return semantics. 
Multi-threading/Multi-user is supported. Other discussions are made in Section~\ref{sec-discussion}. To Review B: Comment 1. *“The lack of covert channel protection is very problematic ..."* We added new policies on side channel/covert channel prevention. Comment 2. *”The only supported policy is that no data is written outside the enclave...”* Besides the policy that no data is written outside the enclave, the CAT-SGX system now supports categories of security policies, including data leak control, control-transfer management, self-modifying code block and side/covert channel mitigation (Section~\ref{subsec-policies}). Comment 3. *”The paper is riddled with typos.”* Mentioned typos are carefully rectified. Comment 4. *”This isn't proof-carrying code. It's instrumented code.”* According to XFI (OSDI `06) (a paper coauthored by George C. Necula, the designer of PCC) XFI modules can be seen as an example of proof-carrying code, even though they do not include logical proofs. Similar as XFI our work uses static analysis with inline software guards that perform checks at runtime, which could be regarded as an example of PCC that trusts only the verifier, not the means of proof generation. Comment 5. *”I'm not clear why this would be better in practice than a LISP interpreter with some security/privacy policy annotations.”* First of all, an enclave program is designed written in C/C++ Comparing to formal verification, generating security annotations is significantly less weighted, allowing our solution to scale to real world software Comment 6. *”...why must the loader be in the same enclave as the running code? Surely verifying it from afar would be safer and would avoid some of this CFI mess?”* CFI is still needed even two enclaves are applied. CFI is a meaningful confidentiality policy because it guarantees that attackers cannot hijack the control flow and bypass the check, even when the attack collude with the service code provider. The only advantage of using two enclave is that there will be No W+X issue during the binary loading, which in return can be dealt with the software DEP. However, using two enclave may induce more communication overhead and we still need a binary rewriter within the bootstrap enclave. We believe both designs are reasonable. However, at this time, we may stick to the original `one-enclave' design for the following reasons: memory store operation instrumentation can be easily extended to support W+X. With the one-enclave design, the code of resolving symbols (during loading) can be easily reused by the rewritter and the verifier which reduces the memory consumption of SGX memory. To Review C: Comment 1. *“The CAT model has been proposed and studied extensively...- private data objects (Intel): https://arxiv.org/abs/1807.05686 - similar paper (cheng et al): https://arxiv.org/abs/1804.05141 and others."* Those two papers only care about the data privacy, but not the smart contract's secrecy. Thus, the CAT model is not exactly the same with them. Comment 2. *“...Bolting on DEP etc protections to cover for SGX v1 details seems unnecessary -- can this be skipped?"* The goal of enforcing DEP was elaborated (since the bootstrap enclave built on SGXv1 ....) The DEP can not be skipped since the loaded binary will be allocated on enclave's heap memory, leading to a W+X problem. Comment 3. *“Should these be enforced in hardware (if the goal is to use hardware-based enclaves like SGX)? 
Why should this require a proof-carrying code..."* Although Intel SGX provides good security guarantees, an enclave still can write data outside of its EPC memory region arbitrarily. Thus, we need to make various ground rules which are not naturally supported in SGX to screen and prevent such memory operations and the integrity of the check itself. Allocating the tasks across untrusted and trusted parts unbalanced, with one generating the proof while a simpler trusted part checks it inside the enclave, will result in a very small TCB and computation on the trusted parts. As a PCC example, CAT-SGX can make sure the properties are checked only by verifying if the proof exists. Comment 4. *“I can see why one might want to verify side-channel security of code that runs inside an enclave and ensuring that side-channel defenses compiled into the code indeed check off certain properties.*“ We added policies and demonstrated that side channels can be checked through proof compiled into the code. } \section*{Appendix} \subsection{Instrumentation Details}\label{appendix-instrumentation} Here we illustrate the other instrumentation modules in our code generator. \vspace{3pt}\noindent\textbf{RSP modification instrumentation}. Since RSP spilling would cause illegal implicit memory writes, RSP-modifying instructions must also be checked. This module first locates all RSP-modifying instructions in the program and then instruments assembly code after them to check whether the RSP value is out of bounds. Just as for store instrumentation, the upper and lower bounds for RSP are specified by the loader and written into the assembly instructions by the rewriter, while the compiler only fills them with special placeholder immediates (0x5ffffffffffff and 0x6ffffffffffff). When the instrumentation finds that the stack pointer has been set to an illegal address, it causes the program to exit. Fig.~\ref{fg-rsp} shows the eight instructions inserted after an \texttt{ANDQ} instruction that reserves new stack space (subtracting 16 from the value in the RSP register). We leave the enforcement against implicit stack-pointer modification via \texttt{PUSH} and \texttt{POP} to the dynamic loader, which adds guard pages (pages with no permission granted). \input{figures/fg-rsp.tex} \vspace{3pt}\noindent\textbf{Indirect branch instrumentation}. For checking indirect branches, we first extract all legal target names at the assembly level and output them to a list. After that, we insert a call to an inspection function before every indirect branch instruction (Fig.~\ref{fg-indirect}) to achieve a forward-edge CFI check at runtime. Specifically, the inspection function \verb|CFICheck| is included in the target binary; it searches the list for the indirect branch target, thereby ensuring the branch conforms to the program's control flow. A C-level sketch of such a routine follows the shadow-stack description below. \input{figures/fg-indirect.tex} \vspace{3pt}\noindent\textbf{Shadow stack}. For function returns, the code generator instruments instructions to support a shadow call stack, a fully precise mechanism for protecting backward edges~\cite{burow2019sok}. The shadow stack's base address is specified by the loader and rewritten by the Imm rewriter (replacing the immediate filled in by the compiler in advance). As shown in Fig.~\ref{fg-shadowstack}, at every function entry we insert instructions (before the function's stack alignment) that adjust the shadow stack top pointer and push the function's return address onto the shadow stack. Similarly, instructions inserted before each function return adjust the shadow stack pointer and pop the saved return address; by comparing the saved return address with the actual one, every \texttt{RET} is checked. \input{figures/fg-shadowstack.tex}
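For illustration, the following is a minimal C sketch of what an inspection routine in the spirit of \verb|CFICheck| could look like, assuming the code generator emits a sorted list of legal targets; the array and counter names are hypothetical, and the actual routine is compiled into the target binary by the code generator rather than written by hand.

\begin{verbatim}
#include <stdint.h>
#include <stdlib.h>

/* Sorted list of legal indirect-branch targets, produced at compile time. */
extern const uintptr_t legal_targets[];
extern const size_t    num_targets;

void CFICheck(uintptr_t target)
{
    size_t lo = 0, hi = num_targets;
    while (lo < hi) {                      /* binary search over the list */
        size_t mid = lo + (hi - lo) / 2;
        if (legal_targets[mid] == target)
            return;                        /* legal target: continue      */
        if (legal_targets[mid] < target)
            lo = mid + 1;
        else
            hi = mid;
    }
    abort();                               /* illegal indirect branch     */
}
\end{verbatim}

A binary search keeps the per-branch overhead logarithmic in the number of legal targets, which matters because the call is inserted before every indirect branch.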
\vspace{3pt}\noindent\textbf{SSA monitoring instrumentation}. As demonstrated in previous works~\cite{gruss2017strong,chen2018racing}, an AEX can be detected by monitoring the SSA. Therefore, to enforce P6, we instrument every basic block to set a marker in the SSA and to monitor whether the marker is overwritten by an AEX within the basic block. Execution is terminated once the number of AEXes within the basic block exceeds a preset threshold. A function is also implemented to obtain the interrupt context information in the bootstrap enclave's SSA area. At the beginning of each basic block, we call this function through instrumentation to check whether too many interrupts occurred during execution; when a basic block is large, this function is also called in the middle of the block every $k$ ($k=20$) instructions. We count the number of interrupts/AEXes that occurred between the last check and the current one; when 22 or more are observed, the target program aborts. \vspace{3pt}\noindent\textbf{Alternatives}. To mitigate AEX-based side-channel risks, CAT-SGX provides an alternative enforcement mechanism through TSX, which can be chosen when compiling the target program. The TSX approach is based upon T-SGX~\cite{shih2017t}, putting transactional memory protection on each basic block and running a fallback function to keep track of the number of interrupts observed. Just like T-SGX, when more than 10 consecutive AEXes happen, the computation aborts, out of concern for an ongoing side-channel attack. The protection is instrumented by the generator and its presence is verified by the code consumer in the enclave. We implemented a function in which \texttt{XBEGIN} and \texttt{XEND} are called and the fallback is specified. Around each branch and \texttt{CALL}/\texttt{RET} instruction, and at the beginning/end of each basic block, we call this function so that the program leaves the previous transaction and enters a new one whenever a possible control-flow branch occurs and completes. Some code snippets are shown in Figure~\ref{fg-tsgx}. To deal with the compatibility problems caused by calling functions that need not be checked (e.g., system calls via OCall stubs), we implemented a non-TSX wrapper for external functions. For instance, our pass generates an alternative function \verb|wrapper_foo| to replace the original function \verb|foo|, avoiding the TSX instrumentation. \input{figures/fg-tsgx.tex} \subsection{Preparing Target Binary}\label{appendix-preparing} \vspace{3pt}\noindent\textbf{Libc}. To manage interactions with the untrusted operating system, we provide OCall stubs for system calls; an illustrative sketch of such a stub is given below. Related works~\cite{shinde2017panoply,tsai2017graphene,priebe2019sgx,shindebesfs} provide various OCall interfaces, but some of them still require additional interface sanitization. We use parts of Musl Libc~\cite{musllibc} to complete the code loading support (Section~\ref{subsec:code-loading-support}). Naturally, the Musl Libc must also be instrumented. It can then be statically linked against other necessary libraries, e.g., mbedTLS for building an HTTPS server.
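As a concrete illustration, the following C sketch shows the shape such an OCall stub could take under the P0-style input/output constraint, where only bounded \texttt{send}/\texttt{recv}-style I/O crosses the enclave boundary. The identifiers \verb|ocall_send|, \verb|checked_send|, and \verb|MAX_MSG_LEN| are hypothetical placeholders, not the exact names used in our prototype.

\begin{verbatim}
#include <stddef.h>

#define MAX_MSG_LEN 4096                /* hypothetical length bound */

/* EDL-generated transition to the untrusted side (assumed). */
int ocall_send(const void *buf, size_t len);

/* Stub invoked by in-enclave code. Enforcing an explicit length bound
   means over-long (and possibly leaky) buffers never leave the enclave,
   and avoids relying on null-terminated strings to infer lengths. The
   buffer is assumed to be already encrypted under the session key. */
int checked_send(const void *buf, size_t len)
{
    if (buf == NULL || len > MAX_MSG_LEN)
        return -1;
    return ocall_send(buf, len);
}
\end{verbatim}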
\vspace{3pt}\noindent\textbf{Stack and heap}. We also reserve customized stack and heap space for target program execution. During the above-mentioned loading phase, the CAT-SGX system initializes a 4~MB memory space for the stack, and links against a customized and instrumented \verb|malloc| function for later heap usage. In the current version of our prototype, the memory ranges of the additional stack and heap provided to the target program are fixed, for efficient boundary checking. \vspace{3pt}\noindent\textbf{Other necessary functions}. The instrumented proof consists of more than assembly instructions: some necessary functions and objects must also be compiled and linked. Since we need an algorithm to check whether the address of an indirect branch target is on the legal entry label list (for P5 enforcement), a binary search function \verb|CFICheck| is inserted into the target program. Similarly, since we need a function to enforce P6, the SSA monitoring functions must be called frequently. These objects are also disassembled and checked during proof verification, to ensure that they cannot be compromised when called. \section{Background}\label{sec-background} \noindent\textbf{Intel SGX}. Intel SGX~\cite{mckeen2013innovative} is a user-space TEE characterized by flexible process-level isolation: a program component can enter enclave mode and be protected by execution isolation, memory encryption, and data sealing against threats from the untrusted OS and from processes running in other enclaves. Such protection, however, comes with in-enclave resource constraints. Particularly, only 128~MB of encryption-protected memory (called the Enclave Page Cache, or EPC) is reserved for enclaves on each processor. Although virtual memory support is available, it incurs significant paging overheads. Another problem caused by SGX's flexible design is a large attack surface. When an enclave program contains memory vulnerabilities, attacks can compromise the enclave's privacy protection. Prior research demonstrates that a Return-Oriented Programming (ROP) attack, launched by the OS, hypervisor, or BIOS, can succeed in injecting malicious code into an enclave~\cite{lee2017hacking,biondo2018guard,schwarz2019practical}. Another security risk is side-channel leakage~\cite{schwarz2017malware,lee2017inferring,gras2018translation}, caused by the thin software stack inside an enclave (kept thin to reduce the TCB), which often has to resort to the OS for resource management (e.g., paging, I/O control). Particularly, an OS-level adversary can perform a controlled side-channel attack (e.g.,~\cite{xu2015controlled}). Also in the threat model is the physical adversary, such as a system administrator, who tries to gain unauthorized access to a TEE's computing units to compromise its integrity or confidentiality.
\vspace{3pt}\noindent\textbf{SGX remote attestation}. As mentioned earlier, attestation allows a remote user to verify that the enclave is correctly constructed and runs on a genuine SGX-enabled platform. In Intel's attestation model, three parties are involved: (1) the Independent Software Vendor (ISV), who is registered with Intel as the enclave developer; (2) the Intel Attestation Service (IAS), hosted by Intel to help with enclave verification; and (3) the SGX-enabled platform that operates SGX enclaves. The attestation begins with the ISV sending an attestation request challenge, which can be generated by an enclave user or a data owner who wants to attest the enclave to check its state. Upon receipt of the challenge, the enclave generates a verification report including the enclave measurement, which can be verified by a quoting enclave (QE) through \textit{local attestation}. The QE signs the report using the attestation key, and the generated \textit{quote} is forwarded to the Intel Attestation Service (IAS). The IAS then checks the quote and signs the verification result using Intel's private key. The ISV can then validate the attestation result based upon the signature and the enclave measurement. \vspace{3pt}\noindent\textbf{PCC}. PCC is a software mechanism that allows a host system to verify an application's properties via a formal proof accompanying the application's executable code. Using PCC, the host system is expected to quickly check the validity of the proof and compare the conclusions of the proof to its own security policies to determine whether the application is safe to run. Traditional PCC schemes tend to utilize formal verification for proof generation and validation. Techniques for this purpose include verification condition generators/proof generators~\cite{homeier1995mechanically,colby2000certifying}, theorem provers/proof assistants~\cite{paulson2000isabelle,de2008z3,bertot2013interactive}, and proof checkers/verifiers~\cite{appel2003trustworthy}, which typically work on type-safe intermediate languages (ILs) or higher-level languages. A problem here is that, to our knowledge, no formal tool today can automatically transform a binary to an IL for in-enclave verification. BAP~\cite{brumley2011bap} disassembles binaries and lifts x86 instructions to a formal format, but it does not have a runtime C/C++ library for static linking, as required for an enclave program. Moreover, today's PCC architecture relies on the correctness of the VCGen and the proof checker, so a direct application of PCC to confidential computing needs to include both in the TCB. This is problematic due to their complicated designs and implementations, which are known to be error-prone~\cite{necula2001oracle}. Particularly, today's VCGens are built on an interpreter/compiler or even a virtual machine~\cite{leroy2006formal}, and therefore lead to a huge TCB. Prior attempts~\cite{appel2001foundational} to move the VCGen out of the TCB were found to have serious performance impacts, due to the significantly increased proof size. Indeed, the proof produced by formal verification is typically large, growing exponentially with the size of the program that needs to be certified~\cite{necula1997proof}; it is common to have a proof 1000 times larger than the code~\cite{pirzadeh2010extended}.\ignore{ The proof/certificate in PCC is a formal representation that can be encoded as e.g. LF term. Such proof terms include a lot of repetition which means it includes huge certificates. } Although techniques exist to reduce the proof size~\cite{appel2001foundational,pirzadeh2010extended},\ignore{ e.g. OPCC introduces a non-deterministic proof checker that makes the proof 30 times smaller. However, }they are complicated and increase the TCB size~\cite{appel2003trustworthy}. Therefore, as far as we are aware, no existing PCC technique can be directly applied to enable the CAT model on today's TEE. \ignore{\noindent\textbf{Intel SGX}.
Today, many cloud providers (e.g., Microsoft Azure) have provided trusted computing services to customers by using techniques such as SGX~\cite{mckeen2013innovative}. Such user-space TEE is characterized by limited in-enclave resources along with convenient and flexible process-level isolation. It protects a program not only from the untrusted OS but also from other enclave processes. Although Intel SGX has various protections on the code and data inside the enclave (such as execution isolation, memory encryption, and data sealing), it has a limit regarding memory usage - only a very limited area which comes from the BIOS, up to 128M in size, can be protected by the processor at one time. To go beyond that limit, there needs to be paging support, which on the other hand, introducing additional performance overhead. Such a design is known for its thin software TCB and flexibility in isolation (per process), but suffers from a large attack surface. While SGX promises strong protection to bug-free software, memory corruption vulnerability in-enclave code still could jeopardize the enclave's confidentiality by certain exploitation techniques. Malicious privileged software like operating systems, Hypervisor, or BIOS may attempt to tamper with the execution of a program under TEE protection using ROP attacks~\cite{lee2017hacking,biondo2018guard,schwarz2019practical}. Intel SGX also suffers from malware collecting or inferring sensitive information inside the TEE. Enclave often has to resort to the OS for resource sharing (e.g., page management, I/O control), which introduces side-channel leaks~\cite{schwarz2017malware,lee2017inferring,gras2018translation}. Particularly, an OS-level adversary can launch controlled side channel attacks (e.g.,~\cite{xu2015controlled}). Also in the threat model is the physical adversary, such as a system administrator, who tries to gain unauthorized access to a TEE’s computing units to compromise its integrity or confidentiality. \vspace{3pt}\noindent\textbf{Intel's RA for SGX.} Remote attestation allows a remote user to verify that the enclave is correctly constructed and run on a genuine SGX-enabled platform. In Intel’s attestation model, three parties are involved: (1) The Independent Software Vendor (ISV) who is registered to Intel as the enclave developer; (2) The Intel Attestation Service (IAS) hosted by Intel which verifies the enclave; and (3) The SGX-enabled platform, which operates the SGX enclaves. The attestation begins with the ISV sending an attestation request challenge, which can be generated by an enclave user who would like to perform the attestation, to the enclave. The attested enclave then generates a verification report including the enclave measurement, which can be verified by an Intel-signed quoting enclave (QE) through \textit{local attestation}. The QE signs the report using the attestation key and the generated \textit{quote} is forwarded to the Intel Attestation Service (IAS). The IAS verifies the quote and signs the verification result using the Intel private key. The ISV or the user of the enclave can be convinced by the verification result by verifying the signature and comparing the enclave measurement. \vspace{3pt}\noindent\textbf{PCC}. PCC is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. 
The host system can quickly verify the validity of the proof, and compares the conclusions of the proof to its own security policy to determine whether the application is safe to execute. To achieve the mobility of both the code and the proof, PCC divides the verification process into two parts: trustworthy component and untrustworthy component. Traditional PCC schemes tend to use formal verification tools to do the steps like proof generation and proof validation. Verification condition generator/proof generator~\cite{homeier1995mechanically,colby2000certifying}, theorem prover/proof assistant~\cite{paulson2000isabelle,de2008z3,bertot2013interactive}, and proof checker/verifier~\cite{appel2003trustworthy} have been proposed and typically they work on a type-safe intermediate language (IL) or higher level language. One important downside of those work is that no current formal tools can transform a binary to IL for verification inside Intel SGX enclave automatically. BAP~\cite{brumley2011bap} can disassemble binaries and lift x86 instructions to a formal format, but it does not have a runtime C/C++ library for static linking (which is required in SGX). Moreover, PCC architecture relies on the correctness of components such as a Verification Condition Generator (VCGen) and a proof checker. In order to recover the proof on the consumer’s side in a secure manner, VCGen and proof checker should be included in the consumer’s TCB. But, these components are complex and the implementations are error-prone~\cite{necula2001oracle}. Current VCGens are usually built with an interpreter/compiler, or a virtual machine~\cite{leroy2006formal}, thus leading to a huge TCB. Although approaches such as~\cite{appel2001foundational} can move VCGen out of the consumer’s TCB, they will result in a significantly larger proof size. On the other hand, proof that generated by formal verification tools is extremely large. The proof size grows exponentially with the size of the program that needs certified~\cite{necula1997proof} and it is common to have proofs that are 1000 times larger than the associated code~\cite{pirzadeh2010extended}. The proof/certificate in PCC is a formal representation that can be encoded as e.g. LF term. Such proof terms include a lot of repetition which means it includes huge certificates. Approaches to reduce certificate size~\cite{appel2001foundational,pirzadeh2010extended} have been proposed, e.g. OPCC introduces a non-deterministic proof checker that makes the proof 30 times smaller. However, these methods in turn increase the TCB~\cite{appel2003trustworthy}. } \section{Confidential Attestation}\label{sec-CAT} Consider an organization that provides data-processing services, such as image editing (Pixlr), tax preparation (TurboTax), personal health analysis (23andMe) and deep learning inference as a service. To use the services, its customers need to upload their sensitive data, such as images, tax documents, and health data, to the hosts operated by the organization. To avoid exposing the data, the services run inside SGX enclaves and need to prove to the customers that they are only accessible to authorized service programs. However, the organization may not want to release the proprietary programs to protect its intellectual property. This problem cannot be addressed by today's TEE design. 
In this section, we present the \textit{Confidential ATtestation} (CAT) model, which allows the data owner to verify that the enclave code satisfies predefined security policy requirements without undermining the privacy of the enclave code. \input{figures/fg-cat-model} \subsection{The CAT Model} The CAT model can be described by the interactions among four parties, as follows: \vspace{3pt}\noindent\textbf{Attestation service}. The attestation service (AS) assists in the remote attestation process by helping the data owner and/or the code provider verify the quote generated by an enclave, as performed by the Intel attestation service for SGX. \vspace{3pt}\noindent\textbf{Bootstrap enclave}. The bootstrap enclave is a built-in control layer on the software stack of an enclave supporting CAT (see Figure~\ref{fg-cat}). Its code is public, and its initial state is measured by hardware to generate an attestation quote, which is later verified by the data owner and the code provider with the help of the AS. This software layer is responsible for establishing secure channels with enclave users, and for authenticating and dynamically loading the target program's binary from the code provider and the data from its owner. Further, it verifies the code to ensure its compliance with predefined security policies before bootstrapping the computation. During the computation, it also controls the data entering or exiting the enclave, e.g., through SGX ECalls and OCalls, to perform data sanitization. \vspace{3pt}\noindent\textbf{Data owner}. The data owner uploads sensitive data (e.g., personal images) to use in-enclave services (e.g., an image classifier) and intends to keep her data secret during the computation. To this end, the owner runs a remote attestation with the enclave to verify the code of the bootstrap enclave, and sends in data through a secure channel only when convinced that the enclave is in the right state, so that the expected policy compliance check will be properly performed on the target binary from the code provider. Note that more than one data owner may provide data. \vspace{3pt}\noindent\textbf{Code provider}. The code provider (owner) can be the service provider (Scenario 1 in Section~\ref{subsec-scenarios}), in which case her target binary (the service code) can be directly handed over to the bootstrap enclave for a compliance check. In general, however, the code provider is a different party and may not trust the service provider. So, similar to the data owner, she can also request a remote attestation to verify the bootstrap enclave before delivering her binary to the enclave for a compliance check. \subsection{Application Scenarios}\label{subsec-scenarios} The CAT model can be applied to the following scenarios to protect both data and code privacy in computing. \input{figures/fg-all-scenarios} \vspace{3pt}\noindent\textbf{Scenario~1: Confidential Computing as a Service}. We consider confidential computing as a service (\textit{CCaaS}) as a privacy extension of today's online data processing services, like machine-learning as a service~\cite{russinovich2017introducing}, as in the example presented at the beginning of the section. CCaaS is hosted by the party that operates its own\ignore{ or another code provider's} target binary on the data provided by its owner (e.g., an online image classifier to label uploaded user photos). \textit{The outcome of the computation will be sent back to the data owner}. Here, the target binary cannot be released for verification, so it needs to go through an in-enclave compliance check. An illustrative sketch of the data owner's side of this interaction follows.
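As a minimal illustration of this scenario, the following C sketch outlines the data owner's workflow. All function and type names here are hypothetical placeholders for the corresponding attestation and channel primitives; they belong neither to our prototype nor to the SGX SDK.

\begin{verbatim}
#include <stddef.h>
#include <stdint.h>

typedef struct quote   quote_t;    /* attestation quote (opaque, assumed) */
typedef struct channel channel_t;  /* secure channel (opaque, assumed)    */

extern quote_t   *request_quote(void);          /* challenge the enclave  */
extern int        as_verify(const quote_t *q);  /* verify quote via AS    */
extern int        matches_bootstrap_measurement(const quote_t *q);
extern channel_t *open_secure_channel(const quote_t *q);
extern int        send_encrypted(channel_t *ch, const uint8_t *d, size_t n);
extern int        recv_result(channel_t *ch, uint8_t *out, size_t n);

int use_ccaas(const uint8_t *data, size_t len, uint8_t *result, size_t rlen)
{
    quote_t *q = request_quote();
    if (q == NULL || !as_verify(q))
        return -1;                   /* attestation failed                */
    if (!matches_bootstrap_measurement(q))
        return -1;                   /* not the public bootstrap enclave  */

    channel_t *ch = open_secure_channel(q);
    if (ch == NULL)
        return -1;
    if (send_encrypted(ch, data, len) != 0)
        return -1;                   /* data enters the enclave encrypted */
    return recv_result(ch, result, rlen); /* outcome returns to the owner */
}
\end{verbatim}

The point the sketch captures is that the data owner trusts only the measured, public bootstrap enclave, never the unreleased target binary.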
\vspace{3pt}\noindent\textbf{Scenario~2: Confidential Data as a Service}. In this scenario (\textit{CDaaS}), it is the data owner who hosts the online service. The code provider dispatches her program (the target binary) to analyze the data and gets the result back, all through a secure channel. An example is a pharmaceutical company inspecting the electronic medical records on a hospital's server to find suitable candidates for a drug trial. Here, the code provider wants to ensure that her algorithm will be properly executed and will not be released, which is done through a remote attestation to verify the bootstrap enclave. The data owner also needs to put a policy in place to control the amount of information that can be given to the code provider. \vspace{3pt}\noindent\textbf{Scenario~3: Confidential Data Computing Market}. In another scenario (called \textit{CDCM}), the enclave is hosted by an untrusted third party, a market platform, to enable data sharing and analysis. In this case, both the data owner and the code provider upload their individual content (data or code) to the platform through secure channels. They all go through remote attestations to ensure the correctness of the bootstrap enclave, which could also arrange payment transactions between the data owner and the code provider through a smart contract. \subsection{Requirements for a CAT System} \label{subsec-challenges} To instantiate the CAT model on a real-world TEE such as SGX, we expect the following requirements to be met by the design: \vspace{3pt}\noindent\textbf{Minimizing TCB}.\label{challenge-tcb} In the CAT model, the bootstrap enclave is responsible for enforcing security and privacy policies and for controlling the interfaces that import and export code/data for the enclave. It is therefore critical for trust establishment and needs to be kept as compact as possible for code inspection or verification. \vspace{3pt}\noindent\textbf{Reducing resource consumption}.\label{challenge-size} Today's TEEs operate under resource constraints; SGX in particular is characterized by limited EPC. To maintain reasonable performance, we expect the software stack of the CAT model to control its resource use. \vspace{3pt}\noindent\textbf{Controlling dynamic code loading}.\label{challenge-dep} The target binary is dynamically loaded and inspected by the bootstrap enclave. However, the binary may further sideload other code at runtime. Some TEE hardware, SGX in particular, does not allow dynamic changes to enclave pages' RWX properties, so the target binary, itself loaded dynamically, is executed on the enclave's heap space. Preventing it from sideloading requires a data execution prevention (DEP) scheme to guarantee the W~$\oplus$~X privilege. \vspace{3pt}\noindent\textbf{Preventing malicious control flows}.\label{challenge-cfi} Since the target binary is not trusted, the CAT software stack should be designed to prevent the code from escaping policy enforcement by redirecting its control flow or tampering with the bootstrap enclave's critical data structures. In particular, previous work shows that special SGX instructions like ENCLU can become unique gadgets for control-flow redirection~\cite{biondo2018guard} and therefore need proper protection.
\vspace{3pt}\noindent\textbf{Minimizing performance impact}.\label{challenge-perf} In all application scenarios, the data owner and the code provider expect a quick turnaround from code verification. Moreover, the target binary's performance should not be significantly undermined by the runtime compliance check. \subsection{Threat Model} \label{subsec-threat} The CAT model is meant to establish trust between the enclave and the code provider, as well as the data owner, under the following assumptions: \vspace{2pt}\noindent$\bullet$ We do not trust the target binary (service code) or the platform hosting the enclave. In CCaaS, the platform may deliberately run a vulnerable target binary to exfiltrate sensitive data, by exploiting known vulnerabilities during the computation. The binary can also leak the data through a covert channel (e.g., page faults~\cite{xu2015controlled}). \vspace{2pt}\noindent$\bullet$ Given an untrusted service provider, our model does not guarantee the correctness of the computation, since it is not meant to inspect the functionality of the target binary. Also, although the TEE is designed to prevent information leaks to the untrusted OS, denial of service can still happen, which is outside the scope of the model. \vspace{2pt}\noindent$\bullet$ We assume that the code of the bootstrap enclave can be inspected to verify its functionality and correctness. We also consider the TEE hardware, its attestation protocol, and all underlying cryptographic primitives to be trusted. \vspace{2pt}\noindent$\bullet$ Our model is meant to protect data and code against different kinds of information leaks, not only explicit but also implicit. However, side channels in a user-land TEE (like SGX) are known to be hard to eliminate, so our design for instantiating the model on SGX (Section~\ref{subsec-policies}) can only mitigate some types of side-channel threats. \ignore{ \section{Confidential Attestation}\label{sec-CAT} \subsection{Motivating Example}\label{subsec-motivation} Consider a company providing data-processing services, such as image editing (Pixlr), tax preparation (TurboTax), personal health analysis (23andMe) and deep learning inference as a service, the users need to disclose their sensitive data, such as images, tax documents, and health data, to leverage the convenience and expertise of these services. The user can acquire the data processing performed inside an SGX enclave, and further, verify the enclave code through remote attestation to prevent unintended data disclosure. However, the service provider may not want to disclose the enclave code to the user for verification due to intellectual property reasons, in which case the SGX attestation model fails to fulfill the privacy requirement of both the user (i.e. the data owner) and the service provider. In this section, we propose the \textit{Confidential ATtestation} (CAT) model to allow the data owner to verify that the enclave code satisfies predefined privacy and security policies without undermining the privacy of the enclave code. We enumerate several application scenarios that could benefit from the proposed CAT model. We will further list the design goals in designing a practical CAT model in the context of Intel SGX, as well as the threat model considered in the paper. \subsection{The CAT model} There are 4 parties involved in the CAT model. \vspace{3pt}\noindent\textbf{The IAS}. We assume a standard IAS as in Intel's attestation model who verifies the quote and generates the attestation report.
\vspace{3pt}\noindent\textbf{The bootstrap enclave}. It is the initial code built into the enclave, which is measured by the hardware to generate the measurement report. It is responsible to authenticate and dynamically load the code and data for the computation to be conducted. It then verifies the code and data is compliant to predefined policies and bootstraps the computation. It is responsible for establishing protected communication channels, and audit the ECalls/OCalls to sanitize the data sent to or received from outside of the enclave. \vspace{3pt}\noindent\textbf{The data providers}. Depending on the applications, there can be multiple data providers who feed data into the enclave and would like to keep the data from leaking out of the enclave. For this purpose they initiate a remote attestation by sending an attestation request challenge to the enclave developer, inspect and verify the code of the bootstrap enclave by checking the remote attestation report. If they are convinced the bootstrap enclave can guarantee the predefined policies are applied to the loaded code, they can send the data (in encrypted form) to the bootstrap enclave. \vspace{3pt}\noindent\textbf{The code providers}. Similar as the data providers, code provider/owner could verify the code of the bootstrap enclave through remote attestation. Only after they believe that the bootstrap enclave guarantees the predefined policies, the code can be sent into the bootstrap enclave secretly. \subsection{Application Scenarios}\label{subsec-scenarios} In the following, we enumerate several application scenarios that could benefit from the proposed CAT model. \input{figures/fg-scenario_1.tex} \weijie{change the name of scenarios to: confidential computing as a service, confidential data as a service, etc.} \vspace{3pt}\noindent\textbf{Scenario~\ref{fg-scenario_1}: Privacy preserving online data processing service}. As the motivating example (Sec.~\ref{subsec-motivation}) shows, the data-processing service (i.e., the bootstrap enclave) is hosted on an SGX-enabled platform owned by the service provider (i.e., the code provider), while the user (i.e., the data provider) sends her encrypted sensitive data for processing. The result will be sent back to the data provider (possibly in encrypted form). As such, the user would like that the bootstrap enclave can enforce the security policy that her data would never leave the enclave. \input{figures/fg-scenario_2.tex} \vspace{3pt}\noindent\textbf{Scenario~2: Privacy preserving online data as a service}. In this scenario, the data-as-a-service (i.e., the bootstrap enclave) is hosted on an SGX-enabled platform owned by the data provider. A user (i.e., the code owner) who would like to conduct computation (such as genome analysis) on the data sends her sensitive code via a secure channel . The result will be sent back to the code provider (possibly in encrypted form). As such, the user would like that the bootstrap enclave can enforce the security policy that the data used are not faked or impure, and her code will not be unintended redirected so that the result is reliable. \input{figures/fg-scenario_3.tex} \vspace{3pt}\noindent\textbf{Scenario~3: Privacy preserving online data market}. In this scenario, a data market platform (i.e., the bootstrap enclave) is hosted on a third party. The data owner uploads her encrypted data to the platform so that she gets paid if the data is used. The data user uploads her code (e.g. 
genome analysis) also secretly to be computed on people's data satisfying some preset conditions. As such, besides enforcing the confidentiality of both the code and data, the data owner would like to ensure that she gets paid as long as the data is used, while the data user want to ensure that she's not overcharged. Besides the above scenarios, CAT may be used in more applications, such as privacy preserving smart contract, etc. \subsection{Design goals} \label{subsec-challenges} At the core of instantiating the CAT model is the design of the bootstrap enclave who is responsible for enforcing the privacy and security policies. We list the design goals to enable a secure and efficient bootstrap enclave. \vspace{3pt}\noindent\textbf{Minimizing the Trusted Computing Base (TCB)}.\label{challenge-tcb} In the CAT model the bootstrap enclave is responsible for enforcing the security and privacy policies and to control the interfaces between the loaded code/data and outside of the enclave. The trust built by the CAT model will collapse once it is compromised. It is essential to control the size of the bootstrap enclave and that it can be (formally) verified in the future. \vspace{3pt}\noindent\textbf{Reducing the memory consumption}.\label{challenge-size} There are limited EPC memory that could be used by the SGX enclave. Considering that the data-processing computation itself consume considerable memory, the design needs to reduce the memory cost adequately. \vspace{3pt}\noindent\textbf{Confining the (untrusted) loaded code}.\label{challenge-dep} On current SGX hardware, dynamically changing the RWX properties of enclave pages are not supported. So the loaded program will be executed on enclave's heap space, and we need a fine data execution prevention (DEP) scheme to guarantee W $\oplus$ X privilege \vspace{3pt}\noindent\textbf{Preventing malicious control flows}.\label{challenge-cfi} Previous work shows that special instructions in SGX like ENCLU would make unique gadgets for control flow redirecting attacks. Since the loaded code can not be trusted, the design needs to prevent code from escaping the policy enforcement by redirecting the control flow and tampering the security-sensitive data structures of the bootstrap code. \vspace{3pt}\noindent\textbf{Performance considerations}.\label{challenge-perf} It is also important in many scenarios to reduce the time for checking the policy compliance and to induce low run-time overhead for the computation. \subsection{Threat model} \label{subsec-threat} To demonstrate how to instantiate the CAT model, in the rest of the paper we consider a specific application scenario, i.e., privacy preserving online data processing (Scenario 1 in Sec.~\ref{subsec-scenarios}). As shown in Sec.~\ref{subsec-motivation}, many real world privacy preserving data processing tasks fall into this scenario. As described earlier for Scenario 1, the user (i.e., the data provider) submits her sensitive data to a service provider (i.e., the code owner) for data processing tasks. In our threat model, we make the following assumptions. \vspace{2pt}\noindent$\bullet${ The service code and the SGX-enabled host platform are not trusted.} The service provider may (intentionally) write vulnerable service code which causes the leakage of the users' data, e.g., the enclave may be compromised by another user with memory corruption attacks. 
The service code can even collude with the SGX-enabled platform owned and controlled by the attacker, e.g., through covert channels (such as page faults).

\vspace{2pt}\noindent$\bullet$ Since the service provider is untrusted, our design does not intend to guarantee the correctness of the results returned to the users. The service users can refuse to pay for the service or turn to other service providers if the results are inferior. In this regard it resembles Denial-of-Service (DoS) attacks, which are also out of scope.

\vspace{2pt}\noindent$\bullet$ Besides, we assume the bootstrap enclave code can be inspected (or formally verified) by the users through remote attestation, so that it is trusted to function as designed. We trust the SGX hardware, Intel's attestation protocol, and the underlying cryptographic algorithms. The mutual authentication between the users and the service provider is orthogonal to the design and is omitted in the paper.

\section{Conclusion}\label{sec-conclusion}

In this paper we proposed CAT, a remote attestation model that allows the user to verify the code and data provided by untrusted parties without undermining their privacy and integrity. We instantiated the model with a code generator and a code consumer (the bootstrap enclave), forming a lightweight PCC-type verification framework. Owing to the differences between ordinary binaries and SGX binaries, we retrofitted the PCC-type framework to fit SGX and, in return, kept the framework's TCB as small as possible. Our work does not use a formal certificate to validate the loaded private binary; instead it leverages data/control-flow analysis to verify whether a binary has data leakage, allowing our solution to scale to real-world software. Moreover, our method provides a new paradigm for PCC that uses a TEE (rather than the OS) as the execution environment, which offers more powerful protection.

\section{Enhancing SGX with CAT}\label{sec-design}

In this section we present our design, called \textit{CAT-SGX}, which extends the SGX platform with support for the CAT model. This is done with an in-enclave software layer (the bootstrap enclave running the code consumer) and an out-enclave auxiliary (the code generator). In what follows, we first describe the general idea behind our design and then elaborate on the policies it supports, its individual components, and potential extensions.
\subsection{CAT-SGX: Overview}\label{subsec-overview}

\noindent\textbf{Idea}. Behind the design of CAT-SGX is the idea of PCC, which enables efficient in-enclave verification of the target binary's policy compliance based on a proof generated for the code. A direct application of existing PCC techniques, however, fails to serve our purpose, as mentioned earlier, due to the huge TCB they introduce, their large proof size, and their proof-generation time, which grows exponentially with the code size. To address these issues, we design a lightweight PCC-type approach with an untrusted code producer and a trusted code consumer running inside the bootstrap enclave. The producer compiles the source code of the target program (for service providing), generates a list of its indirect jump targets, and instruments it with security annotations for runtime mediation of its control flow and key operations, in compliance with the security policies. The list and the security annotations constitute a ``proof'', which is verified by the consumer after the code is loaded into the enclave and before the target binary is activated.

\input{figures/fg-overview.tex}

\vspace{3pt}\noindent\textbf{Architecture}. The architecture of CAT-SGX is illustrated in Figure~\ref{fg-overview}. The code generator, together with the binary and proof it produces, is considered untrusted. The TCB contains only the code consumer, with two components: a dynamic loader operating a rewriter for relocating the target binary, and a proof verifier running a disassembler for checking the correct instrumentation of security annotations. These components are all made public and can therefore be measured for remote attestation (Section~\ref{subsec:ra-impl}). They are designed to minimize code size by moving most of the workload to the code producer.

\input{figures/fg-workflow.tex}

We present the workflow of CAT-SGX in Figure~\ref{fg-workflow}. The target program (the service code) is first instrumented by the code producer, which runs a customized LLVM-based compiler (step 1). Then the target binary, together with the proof (security annotations and the jump-target list), is delivered to the enclave through a secure channel. The code is first parsed (step 2) and then disassembled from the binary's entry point along its control-flow traces. The verifier then inspects the proof against the assembly (step 3); if the check passes, some immediates are rewritten (step 4), and the binary is relocated and activated by the dynamic loader (step 5). Finally, after the bootstrap enclave transfers execution to the target program, the service begins, and the policies are checked at runtime.
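To make this workflow concrete, the following minimal C++ sketch mirrors the code consumer's top-level control flow (steps 2 to 5 above). All names here are hypothetical placeholders we introduce for illustration, not the actual CAT-SGX API; each stub stands in for a component described in the text.

\begin{verbatim}
// Hypothetical sketch of the code consumer's top-level flow; the helper
// functions are placeholders for the components described in the text.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Proof {
    std::vector<uint64_t> jump_targets;  // legitimate indirect-branch targets
};

// Placeholder stubs (the real components are the parser, verifier and loader).
static bool parse_binary(const uint8_t*, size_t) { return true; }       // step 2
static bool verify_proof(const uint8_t*, size_t,
                         const Proof&) { return true; }                 // step 3
static void rewrite_immediates(uint8_t*, size_t) {}                     // step 4
static void* relocate_and_load(uint8_t* blob, size_t) { return blob; }  // step 5
static void transfer_control(void*) {}  // hand execution to the service code

bool consume(uint8_t* blob, size_t len, const Proof& proof) {
    if (!parse_binary(blob, len)) return false;
    // Reject the binary unless every annotation and jump target checks out.
    if (!verify_proof(blob, len, proof)) return false;
    rewrite_immediates(blob, len);        // patch enclave-specific immediates
    void* entry = relocate_and_load(blob, len);
    if (entry == nullptr) return false;
    transfer_control(entry);              // policies now enforced at runtime
    return true;
}
\end{verbatim}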
\subsection{Security Policies}\label{subsec-policies}

Without exposing its code for verification, the target binary needs to be inspected by the bootstrap enclave for compliance with security policies. These policies are meant to protect the privacy of sensitive data and to prevent its unauthorized disclosure. The current design of CAT-SGX supports policies in the following four categories.

\vspace{3pt}\noindent\textbf{Enclave entry and exit control}. CAT-SGX can mediate the content imported to or exported from the enclave through the ECall and OCall interfaces, for the purposes of reducing the attack surface and controlling information leaks.

\vspace{2pt}\noindent$\bullet$ \textit{P0: Input constraint, output encryption and entropy control}. We restrict the ECall interfaces to serving just the purposes of uploading data and code; they perform authentication, decryption, and optionally input sanitization (or a simple length check). Also, only some types of system calls are allowed through OCalls. In particular, all network communication through OCalls should be encrypted with proper session keys (shared with the data owner or the code provider). For CCaaS, the data owner can demand that only one OCall (for sending the results back to the owner) be allowed. For CDaaS, the data owner can further impose a constraint on the amount of information (number of bits) that can be returned to the code provider: e.g., one bit to indicate whether suitable patients for a drug trial exist, or one byte to tell their number.

\vspace{3pt}\noindent\textbf{Memory leak control}. Information leaks can happen through unauthorized writes to memory outside the enclave, which should be prohibited through code inspection.

\vspace{2pt}\noindent$\bullet$ \textit{P1: Preventing explicit out-enclave memory stores}. This policy prevents the target binary from writing outside the enclave, which could be used to expose sensitive data.
It can be enforced by security annotations that mediate the destination addresses of memory-store instructions (such as \texttt{MOV}) to ensure that they are within the enclave address range (\texttt{ELRANGE}).

\vspace{2pt}\noindent$\bullet$ \textit{P2: Preventing implicit out-enclave memory stores}. Illicit save/spill operations on the RSP register can also leak sensitive information to out-enclave memory by pushing a register value to the address specified by the stack pointer; this is prohibited by inspecting the RSP content.

\vspace{2pt}\noindent$\bullet$ \textit{P3: Preventing unauthorized changes to security-critical data within the bootstrap enclave}. This policy ensures that security-critical data can never be tampered with by the untrusted code.

\vspace{2pt}\noindent$\bullet$ \textit{P4: Preventing runtime code modification}. Since the target code is untrusted and loaded into the enclave during its operation, under SGXv1 the code can only be relocated to pages with \texttt{RWX} properties. Software-based DEP protection should therefore be in place to prevent the target binary from changing itself or uploading other code at runtime.

\vspace{3pt}\noindent\textbf{Control-flow management}. To ensure that security annotations and other protection cannot be circumvented at runtime, the control flow of the target binary must not be manipulated. For this purpose, the following policy should be enforced:

\vspace{2pt}\noindent$\bullet$ \textit{P5: Preventing manipulation of indirect branches to violate policies P1 to P4}. This policy protects the integrity of the target binary's control flow so that security annotations cannot be bypassed. To this end, we need to mediate all indirect control-transfer instructions, including indirect calls, indirect jumps, and return instructions.

\vspace{3pt}\noindent\textbf{AEX-based side/covert channel mitigation}. SGX's user-land TEE design exposes a large side-channel surface, which cannot be easily eliminated. Meanwhile, prior research shows that many side-channel attacks cause Asynchronous Enclave Exits (AEXs). Examples include the controlled side-channel attack~\cite{xu2015controlled}, which relies on triggering page faults, and the attacks on L1/L2 caches~\cite{wang2017leaky}, which require context switches to schedule between the attack thread and the enclave thread when Hyper-threading is turned off or a co-location test is performed before running the binary~\cite{chen2018racing}. CAT-SGX is capable of integrating existing solutions to mitigate the side- or covert-channel attacks in this category.

\vspace{2pt}\noindent$\bullet$ \textit{P6: Controlling the AEX frequency}. This policy requires the total number of AEX occurrences to stay below a threshold during the whole computation. Once AEXs are found to be too frequent, exceeding the threshold, the execution is terminated to prevent further information leakage.

\subsection{Policy-Compliant Code Generation}\label{subsec-producer}

As mentioned earlier, the design of CAT-SGX moves the workload from in-enclave verification to out-enclave generation of the policy-compliant binary and its proof (security annotations and the list of indirect jump targets). In this section we describe the design of the code generator, particularly how it analyzes and instruments the target program so that the security policies (P1\textasciitilde P6, see Section~\ref{subsec-policies}) can be enforced at the program's runtime.
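Before we walk through the enforcement of each policy, it helps to see what a security annotation reduces to at runtime. Below is a minimal sketch of the check behind a P1 annotation; the globals \texttt{g\_enclave\_base} and \texttt{g\_enclave\_size} are hypothetical stand-ins for the enclave bounds (the \texttt{ELRANGE}) supplied by the loader, not actual CAT-SGX symbols.

\begin{verbatim}
#include <cstdint>
#include <cstdlib>

// Hypothetical globals: set by the loader to the enclave's base address
// and size (the ELRANGE) once they are known at load time.
static uintptr_t g_enclave_base = 0;
static size_t    g_enclave_size = 0;

// The logic an inlined P1 annotation performs right before a store of
// `len` bytes to `dst`: terminate unless the write stays inside the enclave.
static inline void check_store(const void* dst, size_t len) {
    uintptr_t a = reinterpret_cast<uintptr_t>(dst);
    if (a < g_enclave_base || len > g_enclave_size ||
        a - g_enclave_base > g_enclave_size - len)   // overflow-safe bound check
        abort();  // policy violation: the write would leave the enclave
}
\end{verbatim}

In the real annotations this comparison is emitted as a short inline instruction sequence before each store rather than as a function call.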
Customized policies for purposes other than privacy can also be translated into proofs and enforced flexibly.

\vspace{3pt}\noindent\textbf{Enforcing P1}. The code generator is built on top of the LLVM compiler framework (Section~\ref{subsec-instrument}). When compiling the target program (in C) into a binary, the code generator identifies (through the LLVM API \verb|MachineInstr::mayStore()|) all memory-store instructions (e.g., \texttt{MOV} and Scale-Index-Base (SIB) instructions) and inserts annotation code before each of them to check its destination address and ensure that it does not write outside the enclave at runtime. The boundaries of the enclave address space are obtained during dynamic code loading and provided by the loader (Section~\ref{subsec:verify}). The correct instrumentation of the annotations is later verified by the code consumer inside the enclave.

\vspace{3pt}\noindent\textbf{Enforcing P2}. The generator locates in the binary all instructions that explicitly modify the stack pointer (RSP on x86), e.g., a \texttt{MOV} changing its content, and inserts annotations to check the validity of the stack pointer after them. This protection, including the content of the annotations and their placement, is verified by the code consumer (Section~\ref{subsec-loading}). Note that RSP can also be changed implicitly, e.g., by pushing oversized objects onto the stack. This violation is prevented by the loader (Section~\ref{subsec-loading}), which adds guard pages (pages without permissions) around the stack.

\vspace{3pt}\noindent\textbf{Enforcing P3}. Similar to the enforcement of P1 and P2, the code generator inserts security annotations to prevent (both explicit and implicit) memory writes to security-critical enclave data (e.g., SSA/TLS/TCS) once the untrusted code is loaded and verified. These annotation instructions are later checked by the verifier.

\vspace{3pt}\noindent\textbf{Enforcing P4}. To prevent the target binary from changing its own code at runtime, the code generator instruments all its write operations (as identified by the APIs \verb|readsWritesVirtualRegister()| and \verb|mayStore()|) with annotations that disallow alteration of code pages. Note that the code of the target binary has to be placed on \texttt{RWX} pages by the loader under SGXv1, while its stack and heap are assigned to \texttt{RW} pages (see Sec.~\ref{subsec-loading}); runtime code modification therefore cannot be stopped by page-level protection alone (though code execution from the data region is defeated by the page permissions).

\vspace{3pt}\noindent\textbf{Enforcing P5}. To control indirect calls and indirect jumps in the target program, the code generator extracts all labels from its binary during compilation and instruments security annotations before the related instructions to ensure that only these labels can serve as legitimate jump targets. The locations of these labels must not allow an instrumented security annotation to be bypassed. To also prevent backward-edge control-flow manipulation (through \texttt{RET}), the generator injects annotations after entry into and before return from every function call to operate on a shadow stack (see Figure~\ref{fg-shadowstack}), which is allocated during code loading. All legitimate labels are further rewritten by the loader when relocating the target binary.
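On the generator side, the enforcement of P1 to P5 follows a common pattern: walk the machine code, match the relevant instructions, and insert an annotation. The skeleton below sketches this pattern for the store case using the \verb|mayStore()| API mentioned above; \verb|emitStoreCheck()| is a hypothetical placeholder for the annotation-emitting logic, not our actual pass.

\begin{verbatim}
// Skeleton of an LLVM MachineFunctionPass that locates store instructions
// via MachineInstr::mayStore(); emitStoreCheck() is a placeholder for the
// logic that emits the boundary-check annotation before the instruction.
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/CodeGen/MachineInstr.h"

using namespace llvm;

namespace {
struct StoreCheckPass : public MachineFunctionPass {
  static char ID;
  StoreCheckPass() : MachineFunctionPass(ID) {}

  // Hypothetical helper: build the annotation instructions before MI.
  void emitStoreCheck(MachineBasicBlock &MBB, MachineInstr &MI) { /* ... */ }

  bool runOnMachineFunction(MachineFunction &MF) override {
    bool Changed = false;
    for (MachineBasicBlock &MBB : MF)
      for (MachineInstr &MI : MBB)
        if (MI.mayStore()) {  // matches MOV, SIB-addressed stores, etc.
          emitStoreCheck(MBB, MI);
          Changed = true;
        }
    return Changed;
  }
};
} // anonymous namespace

char StoreCheckPass::ID = 0;
\end{verbatim}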
These annotations are then inspected by the verifier when disassembling the binary, to ensure that the protection cannot be circumvented by control-flow manipulation (Section~\ref{subsec-disassembling}).

\vspace{3pt}\noindent\textbf{Enforcing P6 with SSA inspection}. When an exception or interrupt takes place during enclave execution, an AEX is triggered by the hardware to save the enclave context (such as general registers) to the state saving area (SSA). This makes the occurrence of the AEX visible~\cite{gruss2017strong,chen2018racing}. Specifically, the code generator can enforce the side-channel mitigation policy by instrumenting every basic block with an annotation that sets a marker in the SSA and monitors whether the marker is overwritten, which happens when the enclave context in the area has been changed, indicating that an AEX has occurred. By counting the number of consecutive AEXs, the protected target binary can be aborted in the presence of anomalously frequent interrupts. This protection can also be verified by the code consumer before the binary is allowed to run inside the enclave.

\vspace{3pt}\noindent\textbf{Code loading support}.\label{subsec:code-loading-support} Loading the binary is a procedure that links the binary to external libraries and relocates the code. For a self-contained function (i.e., one that does not use external elements), compiling and sending the bytes of the assembled code is enough. However, if the function uses external elements that are not supported inside an enclave (e.g., a system call), a distributed code-loading mechanism is needed. In our design, the loading procedure is divided into two parts, one (linking) outside and the other (relocation) inside the enclave. Our code generator assembles all symbols of the entire code (including the necessary libraries and dependencies) into one relocatable file via static linking. While linking all object files generated by LLVM, it keeps all symbols and relocation information in relocatable entries. The relocatable file, i.e., the aforementioned target binary, is then loaded and relocated inside the enclave (Section~\ref{subsec-loading}).

\subsection{Configuration, Loading and Verification}\label{subsec:verify}

With the annotations instrumented and the legitimate jump targets identified, the in-enclave workload undertaken by the bootstrap enclave has been reduced significantly. Still, the enclave needs to be properly configured to enforce the one policy (P0) that cannot be implemented by the code generator, to load and relocate the target binary so that the instrumented protection can execute properly, and to verify the ``proof'' of policy compliance by efficiently disassembling and inspecting the binary. In what follows, we elaborate on how our design supports these critical operations.

\vspace{3pt}\noindent\textbf{Enclave configuration to enforce P0}. To enforce the input constraint, we configure the enclave by defining certain public ECalls in Enclave Definition Language (EDL) files for the secure delivery of data and code. Note that such a configuration, together with other security settings, can be attested to the remote data owner or code provider. The computation result of the in-enclave service is encrypted with a session key (shared with the data owner or code provider) after the remote attestation and is sent out through a customized OCall. For this purpose, CAT-SGX defines only the allowed system calls (e.g., \texttt{send}/\texttt{recv}) in the EDL file, together with their wrappers for security control.
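A minimal sketch of what such a wrapper could look like follows. The names \texttt{PAD\_LEN}, \texttt{encrypt\_with\_session\_key} and \texttt{ocall\_send} are hypothetical placeholders for the padding length, the session-key encryption routine and the raw OCall stub; this is an illustration of the constraints described here, not the actual CAT-SGX code.

\begin{verbatim}
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical placeholders: the fixed output length, the session-key
// encryption routine, and the raw OCall stub generated from the EDL file.
constexpr size_t PAD_LEN = 4096;
static std::vector<uint8_t> encrypt_with_session_key(const uint8_t* p,
                                                     size_t n) {
    return std::vector<uint8_t>(p, p + n);  // stub: a real cipher goes here
}
static int ocall_send(const uint8_t*, size_t) { return 0; }  // stub

static int g_send_count = 0;  // CDaaS: the send OCall may fire only once

// Wrapper enforcing P0: pad the result to a fixed length, encrypt it, and
// (in the CDaaS setting) refuse any second invocation.
int secure_send(const uint8_t* msg, size_t len, bool cdaas) {
    if (cdaas && g_send_count++ > 0) return -1;  // one-shot constraint
    if (len > PAD_LEN) return -1;                // over-long result: reject
    std::vector<uint8_t> padded(PAD_LEN, 0);
    std::memcpy(padded.data(), msg, len);
    std::vector<uint8_t> ct = encrypt_with_session_key(padded.data(), PAD_LEN);
    return ocall_send(ct.data(), ct.size());
}
\end{verbatim}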
In particular, the wrapper for \texttt{send} encrypts the message to be delivered and pads it to a fixed length. To support the CCaaS setting, only \texttt{send} and \texttt{recv} are allowed for communicating with the data owner. When necessary, the wrappers of these functions can pad the encrypted output and ensure that the inter-packet timings are constant to mitigate the side-channel risk. For CDaaS, we permit only a single invocation of the \texttt{send} OCall to deliver the computing result to the code provider, which is enforced by the function's wrapper through a counter. Further, the wrapper can constrain the length of the result to control the amount of information disclosed to the code provider: e.g., only 8 bits can be sent out.

\vspace{3pt}\noindent\textbf{Dynamic code loading and unloading}.\label{subsec-loading} The target binary is delivered into the enclave as data through an ECall and processed by the wrapper placed by CAT-SGX, which authenticates the sender and then decrypts the code before handing it over to the dynamic loader. The primary task of the loader is to rebase all symbols of the binary according to its relocation information (Section~\ref{subsec:code-loading-support}). For this purpose, the loader first parses the binary to retrieve its relocation tables, then updates the symbol offsets, and further reloads the symbols to the designated addresses. During this loading procedure, the indirect-branch label list is ``translated'' into in-enclave addresses, which are considered legitimate branch targets and are later used for policy-compliance verification.

As mentioned earlier (Section~\ref{subsec-producer}), the code section of the target binary is placed on pages with \texttt{RWX} privileges, since under SGXv1 page permissions cannot be changed during an enclave's operation, while the data sections (stack, heap) are assigned to pages with \texttt{RW} privileges. The code pages of the binary are guarded against any write operation by the annotations enforcing P4. Other enclave code, including that of the code consumer, is under \texttt{RX} protection through the enclave configuration. Further, the loader places two non-writable blank guard pages right before and after the target binary's stack to enforce P2, and also reserves pages for hosting the list of legitimate branch targets and the shadow stack used to enforce P5.

\vspace{3pt}\noindent\textbf{Just-enough disassembling and verification}.\label{subsec-disassembling} After loading and relocation, the target binary is passed to the verifier for a policy-compliance check. This verification is meant to be highly efficient, relying on a lightweight disassembler. Specifically, our disassembler is designed to leverage the assistance provided by the code generator. It starts from the program entry discovered by the parser and follows the control flow until an indirect control-flow transfer, such as an indirect jump or call, is encountered. Then it utilizes the legitimate target addresses on the list to continue the disassembly and control-flow inspection. In this way, the whole program is quickly and comprehensively examined. For each indirect branch, the verifier checks the annotation code (Section~\ref{subsec-producer}) right before the branch operation, which ensures that the target is always on the list at runtime.
Also, these target addresses, together with the direct branch targets, are compared against all guarded operations in the code to detect any attempt to evade the security annotations. With such verification, we gain confidence that no hidden control transfers will be performed by the binary, allowing further inspection of the other instrumented annotations. These annotations are expected to be well formatted and located around the critical operations, as described in Section~\ref{subsec-producer}. Figure~\ref{fg-mov} presents an example; more details are given in Section~\ref{subsec-instrument} and the Appendix.
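To recap the control-transfer checks (P5) that this verification establishes, the sketch below shows their runtime logic in C++ form. In the actual system these are short instrumented instruction sequences operating on loader-reserved pages; the names and the linear scan here are simplifications for illustration only.

\begin{verbatim}
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Hypothetical state set up by the loader: the translated list of
// legitimate indirect-branch targets and the shadow-stack pages (P5).
static const uintptr_t* g_targets  = nullptr;
static size_t           g_ntargets = 0;
static uintptr_t        g_shadow[4096];
static size_t           g_top = 0;

// Annotation before every indirect call/jump: target must be on the list.
static inline void check_indirect(uintptr_t target) {
    for (size_t i = 0; i < g_ntargets; i++)
        if (g_targets[i] == target) return;
    abort();  // P5 violation: illegitimate indirect-branch target
}

// Annotations at function entry and before RET maintain the shadow stack,
// blocking backward-edge control-flow manipulation.
static inline void shadow_push(uintptr_t ret_addr) {
    g_shadow[g_top++] = ret_addr;
}
static inline void shadow_check(uintptr_t ret_addr) {
    if (g_top == 0 || g_shadow[--g_top] != ret_addr)
        abort();  // P5 violation: corrupted return address
}
\end{verbatim}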
\section{Discussion}\label{sec-discussion}

In the previous sections we have shown that the design of CAT offers lightweight and efficient in-enclave verification of privacy-policy compliance. Here we discuss some extensions.

\vspace{3pt}\noindent\textbf{Supporting other side/covert channel defenses}. In Section~\ref{subsec-producer}, we described policy-enforcement approaches for side-channel resilience, demonstrating that our framework can adopt various side-channel mitigation approaches to generate code carried with proof. Besides the AEX-based mitigations we learned from Hyperrace~\cite{chen2018racing}, other defenses~\cite{doychev2015cacheaudit,almeida2016verifying,shih2017t,gruss2017strong,wu2018eliminating,wang2019identifying,orenbach2019cosmix} can also be transformed and incorporated into the CAT design.
Even though new attacks keep being proposed and there is perhaps no definitive, practical solution to all side/covert-channel attacks, we believe such efforts can eventually be integrated into our work.

\vspace{3pt}\noindent\textbf{Supporting SGXv2}. Our approach currently targets SGXv1, whose instructions do not allow page permissions to be changed dynamically, which is why we resort to software-based DEP. The design could be simplified with SGXv2 instructions~\cite{mckeen2016intel}, since dynamic memory management inside an enclave is allowed and protected by SGXv2 hardware. However, Intel has not yet shipped SGXv2 CPUs widely, so we implement the CAT model on SGXv1 to maximize its compatibility.

\vspace{3pt}\noindent\textbf{Supporting multiple users}. Currently we only support single-user scenarios. For multi-user scenarios, we can easily add a data-cleansing policy which ensures that once the task for one data owner ends, all her data, together with the content of the SSA and registers, is removed from the enclave before the next owner's data is loaded, without destroying the bootstrap enclave after use. Further, to fully support multi-user in-enclave services, we need to ensure that each user's session key remains secret and to conduct a remote attestation for every user switch. Hardware features like Intel MPX~\cite{shen2018isolate} can be applied to enforce memory permission integrity~\cite{zhao2020mptee} as a supplementary boundary-checking mechanism.

\vspace{3pt}\noindent\textbf{Supporting multi-threading}. When multi-threading is taken into account, the proof-generation process becomes more complicated and cumbersome~\cite{guo2007certified}. Furthermore, multi-threading can introduce serious bugs~\cite{weichbrodt2016asyncshock}. Auditing memory read operations from other threads, however, could address multi-threading leakage once and for all. In fact, if we do not prevent the attacks described in CONFirm~\cite{xu2019confirm}, the proof enforcement of CFI can still be broken by a time-of-check-to-time-of-use (TOCTTOU) problem. To cope with that, we can have all CFI metadata read from a register instead of the stack and guarantee that the instrumented proof cannot be written by any thread~\cite{burow2019sok}.

\vspace{3pt}\noindent\textbf{Supporting on-demand policies}.\label{subsec:morescenario} The framework of our system is highly flexible: assembling new policies into the current design is straightforward. Different on-demand policies can be added or withdrawn to serve various goals. For example, we can attach additional instrumentation to the code, and the corresponding policy enforcement to the in-enclave verifier, upon the discovery of new side/covert channels or newly published security flaws. CAT makes such quick software-level patches possible, much like the emergency fixes used to cope with 1-day vulnerabilities. Users can also customize the policies according to their needs, e.g., to verify code logic and functionality.

\section{Evaluation}\label{sec-evaluation}

In this section we report our security analysis and performance evaluation of CAT-SGX.

\subsection{Security Analysis}\label{subsec-securityanalysis}

\noindent\textbf{TCB analysis}. The hardware TCB of CAT-SGX includes the TEE-enabled platform, i.e., the SGX hardware. The software TCB includes the following components of the bootstrap enclave.

\vspace{2pt}\noindent$\bullet$ \textit{Loader and verifier}.
The loader we implemented consists of fewer than 600 lines of code (LoCs); the verifier includes fewer than 700 LoCs and also integrates the SGX SDK and part of the Capstone library.

\vspace{2pt}\noindent$\bullet$ \textit{ECall/OCall stubs for supporting P0}. These were implemented in fewer than 500 LoCs.

\vspace{2pt}\noindent$\bullet$ \textit{Simple RA protocol realization}. The implementation (Section~\ref{subsec:ra-impl}) introduces about 200 LoCs.

\noindent Altogether, our software TCB contains fewer than 2,000 LoCs plus some dependencies, which were compiled into a self-contained binary of 1.9~MB in total.

\vspace{3pt}\noindent\textbf{Policy analysis}. Here we show how the policies on the untrusted code, once enforced, prevent information leaks from the enclave. Aside from side channels, there are two possible ways for a data operation to cross the enclave boundary: bridge functions~\cite{van2019tale} and memory writes.

\vspace{2pt}\noindent$\bullet$ \textit{Bridge functions}. With the enforcement of P0, the loaded code can only invoke our OCall stubs, which prevent the leakage of plaintext data through encryption and control the amount of information that can be sent out (to the code provider in CDaaS).

\vspace{2pt}\noindent$\bullet$ \textit{Memory write operations}. All memory writes, both direct memory stores and indirect register spills, are detected and blocked. Additionally, software DEP is deployed so that the code cannot change itself. Also, the control-flow integrity (CFI) policy P5 prevents the attacker from bypassing the checks with carefully constructed gadgets, by limiting the control flow to legitimate target addresses only. As such, the possible avenues of information leakage to the outside of the enclave are controlled.

As proven by previous work~\cite{sinha2015moat,sinha2016design}, the above-mentioned policies (P1\textasciitilde P5) guarantee the property of confidentiality. Furthermore, the policy (P5) of \textit{protecting return addresses and indirect control-flow transfers, together with preventing writes to the outside}, has been proven adequate for constructing the confinement~\cite{schuster2015vc3,sinha2016design}. Hence, the enforcement of the whole set of policies, P0 to P5, is sound and complete in preventing explicit information leaks. In the meantime, our current design is limited in side-channel protection. We can mitigate the threats of page-fault-based attacks and exploits on the L1/L2 caches once Hyper-threading is turned off or HyperRace~\cite{chen2018racing} is incorporated (P6). However, defeating attacks that do not trigger interrupts, such as inference through the LLC, is left for future research.

\subsection{Performance Evaluation}\label{subsec-experiments}

\input{tables/tb-simple-perf.tex}
\input{tables/tb-nben-perf.tex}

\noindent\textbf{Testbed setup}. In our research, we evaluated the performance of our prototype, testing both code generation and code execution. All experiments were conducted on Ubuntu 18.04 (Linux kernel version 4.4.0) with SGX SDK 2.5, on an Intel Xeon CPU E3-1280 with 64~GB of memory. We used GCC 5.4.0 to build the bootstrap enclave and the SGX application, with the flags `-fPIC', `-fno-asynchronous-unwind-tables', `-fno-addrsig', and `-mstackrealign' to generate the x86 binaries.

\vspace{3pt}\noindent\textbf{Performance on simple applications}. We used the applications provided by the SGX-Shield project~\cite{seo2017sgx} as a micro-benchmark.
In our experiment, we ran each test case 10 times, measured the resource consumption in each case, and reported the median value. Specifically, we first set the baseline as the performance of an uninstrumented program running through a pure loader (a loader that only does the dynamic loading but no policy-compliance checking). Then we compared the baseline with the performance of the instrumented programs to measure the overheads. The compilation time of each micro-benchmark varies from several seconds to tens of seconds, which is negligible compared with conventional PCC methods (2\textasciitilde 5$\times$)~\cite{necula2001oracle}. Table~\ref{tb-simple-perf} illustrates the overheads of our approach. From the table, we can see that the size of the instrumented binaries (aka the ``code + proof'') is 18.1\% larger than the original code, and their executions were delayed by 9.8\% on average when only P1\textasciitilde P5 are enforced. The overhead becomes 130\% in memory and 119\% in time when all policies, including P6, are enforced. Note that this batch of benchmarks consists mostly of `first-simple-processing-then-syscall' programs. In the worst case - `bm\_malloc\_and\_sort' - CAT-SGX showed 27.6\% overhead in execution time. \vspace{3pt}\noindent\textbf{Performance on nBench}. We instrumented all applications in SGX-nBench~\cite{sgxnbench}, and ran each testcase of the nBench suites under a few settings, each 10 times. These settings include just the explicit memory write check (P1), both the explicit memory write check and the implicit stack write check (P1+P2), all memory write and indirect branch checks (P1\textasciitilde P5), and additionally the side channel mitigation (P1\textasciitilde P6). Table~\ref{tb-nben-perf} shows the average execution time under different settings. Without side channel mitigation (P1\textasciitilde P5), CAT-SGX introduces overheads from 0.3\% (on FP-emulation) to 25\%. Apparently, the store instruction instrumentation alone (P1) does not cause a large performance overhead, with the largest being 6.7\%. Also, when P1 and P2 are applied together, the overhead becomes only slightly higher than when P1 is enforced alone. Besides, almost all benchmarks in nBench perform well under the CFI check P5 (less than 3\% overhead), except for Bitfield (whose overhead is about 4\%) and Assignment (about 10\%, due to its frequent memory access pattern). \vspace{3pt}\noindent\textbf{Performance on real-world applications}. We further evaluated our prototype on various real-world applications, including personal health data analysis, personal financial data analysis, and Web servers. We implemented these macro-benchmarks and measured the differences between their baseline performance (without instrumentation) in the enclave and the performance under our prototype. \vspace{2pt}\noindent$\bullet$\textit{ Sensitive health data analysis}. We studied the following two applications: \noindent 1) Sequence Alignment. We implemented the Needleman–Wunsch algorithm~\cite{needleman1970general}, which aligns two human genomic sequences in the FASTA format~\cite{fasta-format} taken from the 1000 Genomes project~\cite{1000genomes}. The algorithm uses dynamic programming to recursively compute a two-dimensional matrix of similarity scores between subsequences; as a result, it takes $O(N^2)$ memory space, where $N$ is the length of the input sequences. Again, we measured the program execution time under the aforementioned settings. Figure~\ref{fg-nw-perf} shows the performance of the algorithm with different input lengths (x-axis).
The overall overhead (including all kinds of instrumentation) is no more than 20\% (with P1 alone no more than 10\%) when the input size is small (less than 200 Bytes). When the input size is greater than 500 Bytes, the overhead of P1+P2 is about 19.7\%, while P1\textasciitilde P5 takes 22.2\% more time than the baseline. \input{figures/fg-nw-perf.tex} \input{figures/fg-fasta-perf.tex} \noindent 2) Sequence Generation. We re-implemented the FASTA benchmark~\cite{fasta}, which is designed to simulate DNA sequences based on pre-defined nucleotide frequencies. The program can output nucleotide sequences of length 2$N$, 3$N$ and 5$N$, where $N$ is used to measure the output size. Figure~\ref{fg-fasta-perf} shows the performance when the output size (x-axis) varies from 1K to 500K nucleotides. Enforcing P1 alone results in 5.1\% and 6.9\% overheads when 1K and 100K are set as the output lengths, respectively. When the output size is 200K, our prototype yields less than 20\% overhead. Even when the side channel mitigation is applied, the overhead is just 25\%. As the size of the processed data increases, the overhead of the system also grows; however, the overall performance remains acceptable. \vspace{2pt}\noindent$\bullet$\textit{ Personal credit score analysis}. We further looked into another realistic and more complicated workload. Credit scoring is a typical application that needs to be protected especially carefully - both the credit card holder's cost calendar and the card issuer's scoring algorithm need to be kept secret. In our study, we implemented a BP neural network-based credit scoring algorithm~\cite{jensen1992using} that calculates users' credit scores. The input file contains users' history records, and the output is a prediction of whether the bank should approve the next transaction. The model was trained on 10000 records and then used to make predictions (i.e., output a confidence probability) on different test cases. \input{figures/fg-credit-score.tex} As shown in Figure~\ref{fg-credit-score}, on 1000 and 10000 records, enforcement of P1\textasciitilde P5 yields around 15\% overhead. When processing more than 50000 records, the overhead of the full check does not exceed 20\%; the overhead of P1\textasciitilde P6 does not exceed 10\% when processing 100K records. \input{figures/fg-https-all} \vspace{2pt}\noindent$\bullet$\textit{ HTTPS server}. We also built an HTTPS server to run in the enclave using the mbed TLS library~\cite{mbedtls}. Our protection only allows two system calls (\texttt{send/recv}) to be executed via the OCall stubs for client/server communication. A client executes a stress test tool - Siege~\cite{siege} - on another host in an isolated LAN. Siege was configured to send continuous HTTPS requests (with no delay between two consecutive ones) to the web server for 10 minutes. We measured the performance under different numbers of concurrent connections to understand how our instrumented HTTPS server would perform. Figure~\ref{fg-https-all} shows the response times and throughput when all policies are applied to the HTTPS server benchmark. When there are fewer than 75 concurrent connections, the instrumented HTTPS server has performance similar to that of the in-enclave HTTPS server without instrumentation. When the concurrency increases to 100, the performance goes down to some extent, and after the concurrency reaches 150, the response time of the instrumented server goes up significantly.
On average, enforcing P1\textasciitilde P6 results in 14.1\% overhead in the response time. As for throughput, when the number of concurrent connections is between 75 and 200, the overhead is less than 10\%. These experiments on realistic workloads show that all policies, including side-channel mitigation, can be enforced at reasonable cost.
\section{Implementation}\label{sec-implementation} We implemented the prototype on the Linux/X86 architecture. Specifically, we implemented the code generator with LLVM 9.0.0 and built the other parts in an SGX environment. We implemented one LLVM back-end pass for the code generator, consisting of several types of instrumentation, about 1200 lines of C++ code in total. In addition, we implemented the bootstrap enclave in over 1900 lines of code, based on Capstone~\cite{capstone} as the disassembler. \subsection{Assembly-level Instrumentation}\label{subsec-instrument} \input{figures/fg-codegen.tex} The code generator we built is mainly based on LLVM (Fig.~\ref{fg-codegen}), and the assembly-level instrumentation is its core module. To address the challenge of limited computing resources described in Section~\ref{challenge-tcb}, the code generator takes over as much work as possible so as to keep the policy verifier small and simple. More specifically, we implemented modules for checking memory write instructions, RSP modifications, and indirect branches, and for building the shadow stack. We also adapted an instrumentation module to generate side-channel-resilient annotations. Note that not only can the security policies for several real-world scenarios be efficiently enforced with our framework; annotation-generation modules for customized functionalities can also be integrated into the code generator. For convenience, we provide switches to turn these modules on and off. Here is an example. The main job of the module for checking explicit memory write instructions (P1) is to insert annotations before them. Suppose there is a memory write instruction `\texttt{mov reg, [reg+imm]}' in the target program. The structured annotation first sets the upper and lower bounds as two temporary Imms (3ffffffffffff and 4ffffffffffff), and then compares the address of the destination operand with the bounds. The real upper/lower bounds for the memory write instruction are specified by the loader later. If our instrumentation finds the memory write instruction trying to write data to illegal space, it causes the program to exit at runtime. The code snippet (the structured format of the annotation) is shown in Figure~\ref{fg-mov}. More details can be found in Appendix~\ref{appendix-instrumentation}.
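To make the guard's semantics concrete, here is a minimal C++ sketch of the check such an annotation performs before an explicit memory write (it only illustrates the logic; the actual guard is an inline x86 instruction sequence emitted by our back-end pass, not a function call, and the constants simply mirror the temporary Imms above).
\begin{verbatim}
// Illustrative sketch only: the real P1 guard is emitted as inline
// x86 instructions by our LLVM back-end pass.
#include <cstdint>
#include <cstdlib>

// Placeholder immediates; the loader's Imm rewriter later replaces
// them with the real lower/upper bounds of the writable region.
static const uint64_t kLowerBoundImm = 0x3ffffffffffffULL;
static const uint64_t kUpperBoundImm = 0x4ffffffffffffULL;

// Logic performed before a memory write such as `mov reg, [reg+imm]`:
// terminate the program if the destination falls outside the bounds.
static inline void p1_guard(uint64_t dest_addr) {
    if (dest_addr < kLowerBoundImm || dest_addr >= kUpperBoundImm)
        abort();  // illegal write target: exit at runtime
}
\end{verbatim}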
\input{figures/fg-mov.tex} Although the code generator can automatically produce an instrumented object file, we still need to deal manually with some issues that may affect practical usage. In the workflow described in Figure~\ref{fg-workflow}, the first step in using the CAT system is preparing the target binary. Service-specific libraries and some dependencies should also be built and linked against the target program (detailed in Appendix~\ref{appendix-preparing}). \subsection{Building Bootstrap Enclave}\label{subsec:bootstrap-impl} Following the design in Section~\ref{subsec:verify}, we implemented a \textit{dynamic loading after RA} mechanism for the bootstrap enclave. During the whole service, the data owner only sees the attestation messages related to the bootstrap enclave's quote, and nothing about the service provider's code. \input{figures/fg-dynloader.tex} \vspace{3pt}\noindent\textbf{Remote attestation}.\label{subsec:ra-impl} Once the bootstrap enclave is initiated, it needs to be attested. We leverage the original RA routine~\cite{originalra} and adjust it to our design. The original RA routine requires that the host, which is assumed to run the enclave as the `client', initiate the attestation towards the `server', who owns the data. In the CCaaS scenario, however, the service runs in the enclave while the remote user owns the data, so we modify this routine to enable a remote CCaaS user to initiate the attestation. The RA procedures can be invoked by calling \verb|sgx_ra_init()| inside the service provider's enclave after secret provisioning between the remote user and the service provider. After obtaining an enclave quote of the bootstrap enclave, signed with the platform's EPID key, the remote data owner can submit the quote to the IAS and obtain an attestation report. \vspace{3pt}\noindent\textbf{Dynamic loader}. When the RA is finished, trust between the data owner and the bootstrap enclave is established. The user can then locally/remotely call the ECall (\verb|ecall_receive_binary|) to load the service binary instrumented with security annotations, together with the indirect branch list, without knowing the code. User data is loaded from untrusted memory into trusted enclave memory when the user remotely calls the ECall (\verb|ecall_receive_userdata|), which copies the data to the section reserved for it. Then, the dynamic loader in the bootstrap enclave loads and relocates the generated code. The indirect branch list, which comprises the symbol names checked by the indirect-branch instrumentation, is resolved at the very beginning. In our implementation, 4 MB of memory is reserved for storing indirect branch targets and another 4 MB for the shadow stack. We reserve 64 MB of memory for the received binary and its `.data' section. The heap base address is slightly above the end of the received binary, and 0x27000 Bytes (156 KB) are reserved for the loader's own heap. The detailed memory layout after relocation and some key steps are shown in Figure~\ref{fg-dynloader}. \vspace{3pt}\noindent\textbf{Policy verifier}.\label{subsec-boundarychecking} The policy-compliance verifier is composed of three components: a clipped disassembler, a verifier, and an immediate-operand rewriter. \vspace{2pt}\noindent$\bullet$\textit{ Clipped disassembler.} We enforce each policy mostly at the assembly level; thus, we incorporate a lightweight disassembler inside the enclave.
To implement the disassembler, we remove unused components of this existing widely-used framework and use Recursive Descent Disassembly to traverse the code. We use the \textit{diet} mode, in which some non-critical data is removed, making the engine at least 40\% smaller~\cite{quynh2014capstone}. \vspace{2pt}\noindent$\bullet$\textit{ Policy verifier.}\label{subsec-policyverifer} The verifier and the subsequent rewriter do their work right after the target binary is disassembled, according to the structured guard formats provided by our code generator. The verifier uses a simple scanning algorithm to ensure that the policies are applied in the assembly-level instrumentation. Specifically, the verifier scans the whole assembly recursively along with the disassembler: it follows the clipped disassembler to check that the instrumentation before/after certain instructions is in place, and that no branch target points between the instructions of those instrumentation sequences. \vspace{2pt}\noindent$\bullet$\textit{ Imm rewriter.}\label{subsec:immrewriter} One final step before executing the target binary is to resolve and replace the Imm operands in the instrumentation, including the base of the shadow stack and the addresses of indirect branch targets (i.e., legal jump addresses). For example, the genuine base address of the shadow stack is the start address \verb|__ss_start| of the memory space reserved by the bootstrap enclave for the shadow stack, and the ranges are determined using functions of the Intel SGX SDK during dynamic loading (Section~\ref{subsec:verify}). We rewrite Imm operands in the simplest way. Table~\ref{tb-rewritter} shows the specific values before and after rewriting. The first column of Table~\ref{tb-rewritter} shows the targets we need to rewrite while loading. For instance, the upper-bound address of the data section is decided during loading; it is 3ffffffffffffffff (shown in the second column) during proof generation and is modified to the real upper data bound address. The third column shows the variable name used in our prototype. \input{tables/tb-rewritter.tex} \section{Introduction}\label{sec-introduction} Recent years have witnessed the emergence of hardware trusted execution environments (TEEs) that enable efficient computation on untrusted platforms. A prominent example is Intel SGX~\cite{mckeen2013innovative}, a TEE widely deployed on commercial-off-the-shelf (COTS) desktop and server processors, providing secure memory called an \textit{enclave} to host confidential computing on sensitive data, which is protected from an adversary in control of the operating system and even with physical access to the data-processing machine. Such a computing model has already been supported by major cloud providers today, including Microsoft Azure and Google Cloud~\cite{russinovich2017introducing,asylo2019}, and its further adoption has been facilitated by the Confidential Computing Consortium~\cite{ccc2019}, a Linux Foundation project that brings together the biggest technology companies, such as Intel, Google, Microsoft, and IBM. However, before TEEs can see truly wide deployment for real-world confidential computing, key technical barriers still need to be overcome, \textit{remote attestation} in particular. \vspace{3pt}\noindent\textbf{Remote attestation}.
At the center of a TEE's trust model is remote attestation (RA), which allows the user of confidential computing to verify that the enclave code processing her sensitive data is correctly built and operates on a genuine TEE platform, so that her data is well protected. On SGX, this is done by establishing a chain of trust rooted at a platform attestation key owned by the hardware manufacturer and using the key to generate a \textit{Quote} -- a signed report that contains the measurement of the code and data in an enclave; the Quote is delivered to the data owner and checked against the signature and the expected measurement hash. This trust-building process is contingent upon the availability of the measurement, which is calculated from the enclave program either by the data owner, when the program is publicly available, or by a trusted third party working on the owner's behalf. This becomes problematic when the program itself is private and cannot be exposed; moreover, such a private program may have exploitable bugs or may easily write information out of the enclave through corrupted pointers. For example, different banks and financial agencies would like to jointly calculate a person's credit score based on each other's data, without disclosing their individual data or the proprietary algorithm processing it. As another example, pharmaceutical companies may want to search for suitable candidates for their drug trials without directly accessing plaintext patient records or exposing their algorithm (which carries sensitive genetic markers discovered with million-dollar investments) to the hospital. With applications of this kind on the rise, new techniques for protecting both data and code privacy are in great demand. \vspace{3pt}\noindent\textbf{Confidential attestation: challenges}. To address this problem, we present in this paper a novel \textit{Confidential ATtestation} (\textit{CAT}) model to enable verification of an enclave program's compliance with user-defined security policies without exposing its source or binary code to the unauthorized parties involved. Under the CAT model, a \textit{bootstrap enclave}, whose code is public and verifiable through Intel's remote attestation, is responsible for performing the compliance check on behalf of the participating parties, who, even without access to the code or data to be attested, can be convinced that the desired policies are faithfully enforced. However, building a system to support the CAT model turns out to be nontrivial, due to the complexity of statically analyzing an enclave binary for policy compliance, the need to keep the verification mechanism, which is inside the enclave's \textit{trusted computing base} (\textit{TCB}), small, the demand for a quick turnaround from the enclave user, and the limited computing resources today's SGX provides (about 96 MB of physical memory on most commercial hardware~\cite{chakrabarti2019scaling}). Simply sand-boxing the enclave code significantly increases the size of the TCB, rendering it less trustworthy, and also brings in the performance overheads incurred by confinement and checkpoint/rollback~\cite{hunt2018ryoan}.
A promising direction we envision that could lead to a practical solution is \textit{proof-carrying code} (\textit{PCC}), a technique that employs a \textit{verification condition generator} (\textit{VCGen})~\cite{colby2000certifying,leroy2006formal,pirzadeh2010extended} to analyze a program and create a proof that attests to the program's adherence to policies, and a \textit{proof checker} to verify the proof and the code. The hope is to push the heavy-lifting part of the program analysis to the VCGen outside the enclave while keeping the proof checker inside the enclave small and efficient. The problem is that this \textit{cannot} be achieved by existing approaches, which utilize formal verification (such as~\cite{necula2001oracle,pirzadeh2010extended}) and produce a proof that can be 1000$\times$ larger than the original code. Indeed, even after years of development, today's formal verification techniques, theorem proving in particular, are still not scalable enough to handle large code blocks (e.g., over 10000 instructions) when constructing a security proof. \vspace{3pt}\noindent\textbf{Our solution}. In our research, we developed a new technique to instantiate the CAT model on SGX. Our approach, called \textit{CAT-SGX}, is a PCC-inspired solution that relies on out-of-enclave targeted instrumentation for lightweight in-enclave information-flow confinement and integrity protection, instead of heavyweight theorem proving. More specifically, CAT-SGX operates an untrusted \textit{code producer} as a compiler to build the binary code of a data-processing program (called the \textit{target program}) and instrument it with a set of \textit{security annotations} for enforcing desired policies at runtime, together with a lightweight trusted \textit{code consumer} running in the bootstrap enclave to statically verify whether the target code indeed carries properly implanted security annotations. To reduce the TCB and in-enclave computation, CAT-SGX is designed to simplify the verification step by pushing most of the computational burden out to the code producer running outside the enclave. More specifically, the target binary is expected to be well formatted by the producer, with all its indirect control flows resolved and all possible jump target addresses specified on a list and enforced by security annotations. In this way, the code consumer can check the target binary's policy compliance through lightweight \textit{Recursive Descent Disassembly} to inspect its complete control flow (Section~\ref{subsec-boundarychecking}), so as to ensure the presence of correctly constructed security annotations in front of each critical operation, such as reads, stores, enclave operations like OCalls, and stack management (through a shadow stack). Any failure in such an inspection causes the rejection of the program. Also, since most code instrumentation (for injecting security annotations) is tasked to the producer, the code consumer does not need to make any change to the binary except relocating it inside the enclave. As a result, we only need a vastly simplified disassembler, instead of a full-fledged, complicated binary analysis toolkit, to support several categories of security policies, including data-leak control, control-transfer management, blocking of self-modifying code, and side/covert channel mitigation (Section~\ref{subsec-policies}). A wider spectrum of policies can also be upheld by an extension of CAT-SGX, as discussed in the paper (Section~\ref{sec-discussion}).
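To illustrate this verification step, below is a minimal sketch (not our exact implementation) of how the code consumer's scan can be organized on top of Capstone's C API. Here \texttt{guard\_precedes()} and \texttt{annotated\_branch()} are hypothetical helpers standing in for the matcher of our structured annotation byte patterns, and the scan is shown as a linear pass for brevity, while the real check proceeds by Recursive Descent Disassembly.
\begin{verbatim}
// Simplified sketch of the in-enclave compliance check; assumes
// Capstone's C API is available inside the enclave.
#include <capstone/capstone.h>
#include <cstdint>

// Hypothetical helpers: match the structured annotation bytes that
// must precede a memory write (P1/P2) or an indirect branch (P5).
static bool guard_precedes(const cs_insn *insn, size_t i)   { return i > 0; } // stub
static bool annotated_branch(const cs_insn *insn, size_t i) { return i > 0; } // stub

bool verify_binary(const uint8_t *code, size_t size, uint64_t base) {
    csh h;
    if (cs_open(CS_ARCH_X86, CS_MODE_64, &h) != CS_ERR_OK) return false;
    cs_option(h, CS_OPT_DETAIL, CS_OPT_ON);      // operand info needed below

    cs_insn *insn = nullptr;
    size_t n = cs_disasm(h, code, size, base, 0, &insn); // linear pass for
    bool ok = (n > 0);                           // brevity; the real scan is
    for (size_t i = 0; ok && i < n; i++) {       // recursive descent
        const cs_x86 &x = insn[i].detail->x86;
        for (uint8_t j = 0; j < x.op_count; j++) // P1/P2: writes need a guard
            if (x.operands[j].type == X86_OP_MEM &&
                (x.operands[j].access & CS_AC_WRITE) &&
                !guard_precedes(insn, i))
                ok = false;
        if ((insn[i].id == X86_INS_CALL || insn[i].id == X86_INS_JMP) &&
            x.op_count > 0 && x.operands[0].type != X86_OP_IMM &&
            !annotated_branch(insn, i))          // P5: indirect branches must
            ok = false;                          // carry a target check
    }
    if (insn) cs_free(insn, n);
    cs_close(&h);
    return ok;  // any missing annotation causes rejection of the binary
}
\end{verbatim}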
We implemented CAT-SGX in our research, building the code producer on top of the LLVM compiler infrastructure and the code consumer upon the Capstone disassembly framework~\cite{capstone}, retaining the core disassembling engine for the X86 architecture. With this unbalanced design, our in-enclave program has only 2000 lines of source code, and together with all the libraries involved, it is compiled into a 1.9 MB binary. This is significantly smaller than NaCl's core library used by Ryoan, whose binary is around 19 MB, and the theorem prover Z3, at 26 MB. We further evaluated our implementation on micro-benchmarks (nBench), as well as macro-benchmarks, including credit scoring, an HTTPS server, and basic biomedical analysis algorithms (sequence alignment, sequence generation, etc.) over various sizes of genomic data (1000 Genomes Project~\cite{1000genomes}), under the scenario of confidential computing as a service (Section~\ref{subsec-scenarios}). CAT-SGX incurs on average (calculated by geometric mean) 20\% performance overhead and less than 30\% storage overhead when enforcing all the proposed security policies, and around 10\% performance overhead and less than 20\% storage overhead without side/covert channel mitigation. We have released our code on Github~\cite{our-prototype}. \vspace{3pt}\noindent\textbf{Contributions}. The contributions of the paper are outlined as follows: \vspace{3pt}\noindent$\bullet$\textit{ Confidential attestation model}. We propose CAT, a new model that extends today's TEE to maintain the data owner's trust in the protection of her enclave data, without exposing the code of the data-processing program. This is achieved by enforcing a set of security policies through a publicly verifiable bootstrap enclave. This new attestation model enables a wide spectrum of applications with great real-world demand in the confidential computing era. \vspace{3pt}\noindent$\bullet$\textit{ New techniques for supporting CAT on SGX}. We present the design for instantiating CAT on SGX, following the idea of PCC. Our approach utilizes out-of-enclave code analysis and instrumentation to minimize the workload of the in-enclave policy-compliance check, which just involves a quick pass over a well-formatted target binary to inspect the correctness of its instrumentation. This simple design offers support for critical policies, ranging from memory-leak prevention to side-channel mitigation, through a much smaller TCB compared with sandbox solutions. \vspace{3pt}\noindent$\bullet$\textit{ Implementation and evaluation}. We implemented our design of CAT-SGX and extensively evaluated our prototype on micro- and macro-benchmarks, together with popular biomedical algorithms on human genomic data. Our experiments show that CAT-SGX effectively enforces various security policies at small cost, with the delay incurred by memory-leak prevention around 20\% and that of side-channel mitigation usually no more than 35\%.
\section{Related Work}\label{sec-relatedwork} \vspace{3pt}\noindent\textbf{Secure computing using SGX}. Many existing works propose using SGX to secure cloud computing systems, e.g., VC3~\cite{schuster2015vc3} and TVM~\cite{hynes2018efficient}, by using sand-boxing~\cite{hunt2018ryoan}, containers~\cite{arnautov2016scone}, and library OSes~\cite{tsai2017graphene,shen2020occlum}. These systems rely on remote attestation to verify the platform and the enclave code; as a result, they either do not protect code privacy or they consider a one-party scenario, i.e., the code and data needed for the computation come from the same participant. In contrast, we consider three real-world scenarios (CCaaS, CDaaS and CDCM) that protect code and data from multiple mutually distrustful parties. \vspace{3pt}\noindent\textbf{Data confinement with SFI}. Most related to our work are data confinement technologies, which confine untrusted code with confidentiality and integrity guarantees.
Ryoan~\cite{hunt2018ryoan} and its follow-up work~\cite{hunt2018chiron} provide an SFI-based distributed sandbox by porting NaCl to the enclave environment, confining untrusted data-processing modules to prevent leakage of the user's input data. However, the overhead of Ryoan turns out to be huge (e.g., 100\% on genes data), and it was evaluated on a software emulator for supporting SGXv2 instructions. XFI~\cite{erlingsson2006xfi} is the most representative unconventional PCC work based on SFI, which places a verifier at the OS level instead of in a lightweight TEE. Occlum~\cite{shen2020occlum} is an SGX-based library OS that enforces in-enclave task isolation with an MPX-based multi-domain SFI scheme. As the goal of SFI is not to prevent information leakage from untrusted code, none of these systems employs protection against side-channel leakage. \vspace{3pt}\noindent\textbf{Code privacy}. Code secrecy is an easily overlooked but very important issue~\cite{mazmudar2019mitigator,kuccuk2019managing}. DynSGX~\cite{silva2017dynsgx} and SGXElide~\cite{bauman2018sgxelide} both make it possible for developers to execute their code privately in public cloud environments and to better manage the scarce memory resources. However, they only address the developer's privacy and ignore the confidentiality of the data belonging to users. \vspace{3pt}\noindent\textbf{Confidentiality verification of enclave programs}. With formal verification tools, Moat~\cite{sinha2015moat} and its follow-up works~\cite{sinha2016design} can verify whether an enclave program risks leaking data. Their major focus is to verify the confidentiality of an SGX application formally and independently, outside the enclave. Although the verification could in principle be performed within a ``bootstrap enclave'', the TCB would then include the IR-level language (BoogiePL) interpreter~\cite{barnett2005boogie} and a theorem prover~\cite{de2008z3}. Moreover, neither of them can discharge the large overhead introduced by instruction modeling and assertion proving when large-scale real-world programs are verified. \vspace{3pt}\noindent\textbf{Side channel attacks and defenses}. Side channels pose serious threats to secure computing using SGX, as attackers can use them to circumvent the explicit security defenses implemented by SGX. A rich literature has focused on discovering SGX side channels~\cite{lee2017inferring,wang2017leaky,van2018foreshadow,chen2019sgxpectre} and their defenses~\cite{shinde2016preventing,shih2017t,oleksenko2018varys,sinha2017compiler,chen2018racing}. Existing SGX secure computing work often treats side channels as an orthogonal research topic~\cite{sinha2015moat,subramanyan2017formal,shen2020occlum}. Our framework is designed with side channels in mind, and we have shown that it can flexibly support the integration of instrumentation-based side-channel defenses.
\section{Introduction} \subsection{} Let $(\bfG,\bfG')$ be a reductive dual pair over a finite field $\bff_q$ of odd characteristic. Let $G,G'$ denote the groups of rational points of $\bfG,\bfG'$ respectively. By restricting the Weil character with respect to a non-trivial character $\psi$ of $\bff_q$ to $G\times G'$, we obtain a decomposition \[ \omega^\psi_{\bfG,\bfG'} =\sum_{\rho\in\cale(G),\ \rho'\in\cale(G')}m_{\rho,\rho'}\rho\otimes\rho' \] where $m_{\rho,\rho'}=0,1$, and $\cale(G)$ denotes the set of irreducible characters of $G$. Define \begin{align*} \Theta_{\bfG,\bfG'} &=\{\,(\rho,\rho')\in\cale(G)\times\cale(G')\mid m_{\rho,\rho'}\neq 0\,\};\\ \Theta_{\bfG'}(\rho) &=\{\,\rho'\in\cale(G')\mid (\rho,\rho')\in\Theta_{\bfG,\bfG'}\,\}. \end{align*} Now $\Theta_{\bfG,\bfG'}$ establishes a relation between $\cale(G)$ and $\cale(G')$ and will be called the \emph{$\Theta$-correspondence} or \emph{Howe correspondence}. This correspondence is not one-to-one in general. When $(\bfG,\bfG')$ is a dual pair of one orthogonal group and one symplectic group in stable range, a one-to-one sub-correspondence is defined and called the \emph{$\eta$-correspondence} in \cite{gurevich-howe}. In \cite{pan-eta}, we propose two one-to-one sub-correspondences $\underline\theta,\overline\theta$ of $\Theta$ for a general orthogonal/symplectic dual pair $(\bfG,\bfG')$ and show that the three correspondences $\eta,\underline\theta,\overline\theta$ coincide when $(\bfG,\bfG')$ is in stable range. Therefore, both $\underline\theta$ and $\overline\theta$ can be regarded as extensions of the $\eta$-correspondence beyond dual pairs in stable range. The main purpose of this article is to prove the analogous results for dual pairs of two unitary groups. \subsection{} Now let $(\bfG,\bfG')$ be a dual pair of two finite unitary groups. The one-to-one correspondences $\underline\theta$ and $\overline\theta$ are constructed in two steps. The first step is to define the correspondences on unipotent characters. It is well known that the irreducible unipotent characters $\rho_\lambda$ of a unitary group $\rmU_n(q)$ are parametrized by the partitions $\lambda$ of $n$. On the other hand, the unipotent characters are also parametrized by certain symbols $\Lambda_\lambda$ (\cf.~\cite{FS}). Because the unipotent characters are preserved by the $\Theta$-correspondence, the unipotent part of the correspondence can be described explicitly in terms of symbols (\cf.~\cite{amr} and \cite{pan-Lusztig-correspondence}). Both correspondences $\underline\theta$ and $\overline\theta$ on unipotent characters are then defined in terms of these symbols. It is known from \cite{amr} that the $\Theta$-correspondence commutes with the Lusztig correspondence as follows. For a semisimple element $s$ in the dual group $G^*$ of $G$, we can define two subgroups $G^{(1)},G^{(2)}$ so that $C_{G^*}(s)\simeq G^{(1)}\times G^{(2)}$, and we have a bijection $\Xi_s\colon\cale(G)_s\rightarrow\cale(G^{(1)}\times G^{(2)})_1$. Then the following diagram \[ \begin{CD} \cale(G)_s @> \Theta_{\bfG,\bfG'} >> \cale(G')_{s'} \\ @V \Xi_s VV @VV \Xi_{s'} V \\ \cale(G^{(1)}\times G^{(2)})_1 @> \id\otimes\Theta_{\bfG^{(2)},\bfG'^{(2)}} >> \cale(G'^{(1)}\times G'^{(2)})_1 \end{CD} \] is commutative. We can then extend both correspondences $\underline\theta$ and $\overline\theta$ beyond the unipotent characters by requiring them to commute with the Lusztig correspondence.
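In other words, writing $\Xi_s(\rho)=\rho^{(1)}\otimes\rho^{(2)}$ for $\rho\in\cale(G)_s$, the extension is pinned down (schematically, in the notation of the diagram above) by \[ \Xi_{s'}\bigl(\overline\theta(\rho)\bigr)=\rho^{(1)}\otimes\overline\theta(\rho^{(2)}), \] where $\overline\theta(\rho^{(2)})$ on the right-hand side is given by the correspondence on unipotent characters from the first step; the same recipe applies to $\underline\theta$.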
Then we can show that when $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$ is in stable range (i.e., $n\leq\lfloor\frac{n'}{2}\rfloor$), for $\rho\in\cale(G)$, we have \begin{itemize} \item $\underline\theta(\rho)=\overline\theta(\rho)$; \item $\overline\theta(\rho)$ is of maximal order in the set $\Theta_{\bfG'}(\rho)$. \end{itemize} This means that both $\underline\theta$ and $\overline\theta$ can be regarded as extensions of the analogous $\eta$-correspondence for a dual pair of two unitary groups. A sub-relation $\vartheta$ of $\Theta$ is called a \emph{theta-relation} if it is symmetric, semi-persistent, and compatible with the Lusztig correspondence (\cf.~Subsection~\ref{0402}). The set of theta-relations can be partially ordered by inclusion. Then our next result is to show that both $\underline\theta$ and $\overline\theta$ are maximal one-to-one theta-relations. This means that any theta-relation which properly contains $\underline\theta$ or $\overline\theta$ will not be one-to-one. That is, if we require $\underline\theta,\overline\theta$ to be one-to-one, then they cannot be extended any further.
\subsection{}
The contents of this article are as follows. In Section 2, we discuss the relation between unipotent characters of a finite unitary group and a certain type of symbols. In particular, the degree formula of a unipotent character is given in terms of the entries of its symbol. In Section 3, we define the two one-to-one correspondences $\underline\theta$ and $\overline\theta$ on unipotent characters and show that both correspondences are equal when the dual pair is in stable range. In the final section, we discuss the relation between the Lusztig correspondence and both correspondences $\underline\theta,\overline\theta$. Then we show that both $\underline\theta$ and $\overline\theta$ are maximal one-to-one theta-relations. The strategies of the proofs are similar to those given in \cite{pan-eta}, and so some of the proofs will only be sketched in this article.
\section{Dimension Formula of Unipotent Characters of a Unitary Group}
In this section we give a description of the symbol $\Lambda_\lambda$ associated to a partition $\lambda$. Then we give a formula for the degree of the unipotent character $\rho_\lambda$ in terms of the entries of $\Lambda_\lambda$. Part of the descriptions in this section is modified from \cite{FS}.
\subsection{Partitions and Unipotent Characters}
Let $\bff_q$ be a finite field of $q$ elements where $q$ is a power of an odd prime $p$. For a finite set $X$, let $|X|$ denote the cardinality of $X$, and let $|X|_{p'}$ denote the part of $|X|$ prime to $p$. It is well-known that \[ |\rmU_n(q)|_{p'}=\prod_{i=1}^n(q^i-(-1)^i) \] where $\rmU_n(q)$ is a finite unitary group defined over $\bff_q$. Let $\lambda=[\lambda_1,\ldots,\lambda_m]$ denote a \emph{partition} of $n$, i.e., each $\lambda_i\in\bbN\cup\{0\}$, $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_m\geq 0$, and $|\lambda|=\lambda_1+\cdots+\lambda_m=n$ (called the \emph{weight} of $\lambda$). Each $\lambda_i$ is called a \emph{part} of $\lambda$. The number $\ell(\lambda)=|\{\,i\mid \lambda_i>0\,\}|$ of positive parts is called the \emph{length} of $\lambda$. Two partitions are regarded as the same if one is obtained from the other by adding several $0$'s, for example, $[\lambda_1,\ldots,\lambda_m]=[\lambda_1,\ldots,\lambda_m,0]$. Let $\calp(n)$ denote the set of all partitions of $n$.
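As a computational aside (ours, and not part of the formal development), the partition combinatorics used throughout this section are easy to experiment with. The following minimal Python sketch enumerates $\calp(n)$ and computes the weight $|\lambda|$ and the length $\ell(\lambda)$; all function names are illustrative choices.
\begin{verbatim}
# Illustrative sketch: enumerate the partitions of n and compute
# the weight |lambda| and the length ell(lambda).

def partitions(n, largest=None):
    """Yield the partitions of n as weakly decreasing lists of parts."""
    if largest is None:
        largest = n
    if n == 0:
        yield []
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def length(lam):
    """ell(lambda): the number of positive parts."""
    return sum(1 for part in lam if part > 0)

if __name__ == "__main__":
    for lam in partitions(4):
        print(lam, "weight =", sum(lam), "length =", length(lam))
\end{verbatim}
For instance, the loop above lists the five partitions $[4]$, $[3,1]$, $[2,2]$, $[2,1,1]$, $[1,1,1,1]$ of $4$.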
For two partitions $\lambda,\lambda'$, let $\lambda\cup\lambda'$ denote the partition whose parts consist of the disjoint union of the parts of $\lambda$ and $\lambda'$. For each index $j$, we define $\lambda^*_j=|\{\,i\mid\lambda_i\geq j\,\}|$, and we call the partition $\lambda^\rmT=[\lambda_1^*,\lambda_2^*,\ldots]$ the \emph{dual partition} of $\lambda$. The (Young) \emph{diagram} of $\lambda$ is defined to be the set of points $(i,j)\in\bbN\times\bbN$ such that $1\leq j\leq\lambda_i$. To each point $(i,j)$ in the diagram of $\lambda$ we can associate a subset \[ \xi=\xi_{i,j}=\{\,(i,j')\mid j\leq j'\leq\lambda_i\,\}\cup\{\,(i',j)\mid i\leq i'\leq\lambda^*_j \,\}, \] which is called the \emph{hook} at $(i,j)$. The cardinality \[ |\xi|=\lambda_i+\lambda^*_j-i-j+1 \] is called the \emph{hook-length} of $\xi$. A hook of length $2$ is also called a \emph{$2$-hook}. A hook $\xi_{i',j'}$ of $\lambda$ is said to \emph{be above} $\xi_{i,j}$ if $j'=j$ and $i'<i$; a hook $\xi_{i',j'}$ of $\lambda$ is said to \emph{be left to} $\xi_{i,j}$ if $i'=i$ and $j'<j$. If $\lambda=[\lambda_1,\ldots,\lambda_k]$ ($\lambda_1\geq\cdots\geq\lambda_k\geq 0$) and $\lambda'=[\lambda'_1,\ldots,\lambda'_k]$ ($\lambda'_1\geq\cdots\geq\lambda'_k\geq 0$) are two partitions such that either \begin{itemize} \item there exists an index $i_0$ such that $\lambda'_{i_0}=\lambda_{i_0}+2$, and $\lambda'_i=\lambda_i$ for $i\neq i_0$; or \item there exists an index $i_0$ such that $\lambda'_{i_0}=\lambda_{i_0}+1$, $\lambda'_{i_0+1}=\lambda_{i_0+1}+1$, and $\lambda'_i=\lambda_i$ for $i\neq i_0,i_0+1$, \end{itemize} then we say that \emph{$\lambda'$ is obtained from $\lambda$ by adding a $2$-hook} or \emph{$\lambda$ is obtained from $\lambda'$ by removing a $2$-hook}. Let $\lambda$ be a partition of $n$. After removing all possible $2$-hooks step by step, the resulting partition is denoted by $\lambda_\infty$ and called the \emph{$2$-core} of $\lambda$. Note that $\lambda_\infty=[d,d-1,\ldots,1]$ for some non-negative integer $d$. A partition $\lambda$ is called \emph{cuspidal} if $\lambda=\lambda_\infty$. The irreducible unipotent character of $\rmU_n(q)$ associated to $\lambda\in\calp(n)$ is denoted by $\rho_\lambda$. It is known that the dimension of $\rho_\lambda$ is \[ \dim(\rho_\lambda)=q^{\kappa(\lambda)}g_\lambda(q) \] where $\kappa(\lambda)=\sum_{i=1}^{\ell(\lambda)}(i-1)\lambda_i$ and $g_\lambda$ is a polynomial given by \[ g_\lambda =g_\lambda(q) =\frac{\prod_{i=1}^n(q^i-(-1)^i)}{\prod_{\xi}(q^{|\xi|}-(-1)^{|\xi|})} \] where $\prod_{\xi}$ runs over all hooks $\xi$ of $\lambda$. It is clear that $g_\lambda=g_{\lambda^\rmT}$. Now $\dim(\rho_\lambda)$ is a polynomial in $q$ whose degree is denoted by $\ord(\rho_\lambda)$. From the above, we know that \[ \ord(\rho_\lambda)=\kappa(\lambda)+\frac{n(n+1)}{2}-\sum_\xi|\xi|. \] \begin{exam}\label{0202} Suppose that $\lambda=[d,d-1,\ldots,1]$. Then $\ell(\lambda)=d$ and \[ \kappa(\lambda) =(d-1)+2(d-2)+\cdots+(d-1)\cdot1 =\binom{d}{2}+\binom{d-1}{2}+\cdots+\binom{2}{2}. \] Now $\lambda$ has exactly $i$ hooks of length $2d-(2i-1)$ for $i=1,2,\ldots,d$, and no hooks of other lengths, so \[ \dim(\rho_\lambda) =\frac{q^{\binom{d}{2}+\binom{d-1}{2}+\cdots+\binom{2}{2}}\prod_{i=1}^n(q^i-(-1)^i)} {(q+1)^{d}(q^3+1)^{d-1}\cdots(q^{2d-3}+1)^2(q^{2d-1}+1)}. \] It is known that $\rho_\lambda$ is cuspidal, and the dimension of $\rho_\lambda$ is also given in \cite{lg}.
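As an illustrative aside (not part of the formal argument), the hook-length pattern used above is easy to check by machine for small $d$; the following Python sketch computes the multiset of hook-lengths of the staircase partition $[d,d-1,\ldots,1]$ directly from the definition and compares it with the pattern ``exactly $i$ hooks of length $2d-(2i-1)$''.
\begin{verbatim}
# Illustrative check: the staircase [d, d-1, ..., 1] has exactly i hooks
# of length 2d - (2i - 1) for i = 1, ..., d, and no hooks of other lengths.
from collections import Counter

def hook_lengths(lam):
    """Multiset of hook-lengths |xi| = lam_i + lam*_j - i - j + 1."""
    dual = [sum(1 for part in lam if part >= j)
            for j in range(1, lam[0] + 1)]
    return Counter(lam[i] + dual[j] - (i + 1) - (j + 1) + 1
                   for i in range(len(lam)) for j in range(lam[i]))

for d in range(1, 6):
    staircase = list(range(d, 0, -1))
    expected = Counter({2 * d - (2 * i - 1): i for i in range(1, d + 1)})
    assert hook_lengths(staircase) == expected
print("hook-length pattern verified for d = 1, ..., 5")
\end{verbatim}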
\end{exam}
\subsection{Partitions and $\beta$-Sets}
A \emph{$\beta$-set} $X=\{x_1,\ldots,x_m\}$ is a finite set of non-negative integers with elements written in decreasing order, i.e., $x_1>x_2>\cdots>x_m$. Define an equivalence relation (denoted by ``$\sim$'') on $\beta$-sets generated by \[ X\sim \{\,x+1\mid x\in X\}\cup\{0\}. \] Define the \emph{rank} of $X$ by \[ {\rm rk}(X)=\sum_{x\in X} x-\binom{|X|}{2}. \] It is clear that ${\rm rk}(X')={\rm rk}(X)$ if $X'\sim X$, and the mapping \begin{equation}\label{0213} \Upsilon\colon\{x_1,x_2,\ldots,x_m\}\mapsto[x_1-(m-1),x_2-(m-2),\ldots,x_{m-1}-1,x_m] \end{equation} gives a bijection from the set of equivalence classes of $\beta$-sets of rank $n$ onto the set $\calp(n)$ of partitions of $n$. For a $\beta$-set $X$, we define four $\beta$-sets: \begin{align*} X^1 &=\{\,x\in X\mid x\equiv 1\pmod 2\,\}, & \overline X^1 &=\left\{\,\frac{x-1}{2}\mid x\in X^1\,\right\};\\ X^0 &=\{\,x\in X\mid x\equiv 0\pmod 2\,\}, & \overline X^0 &=\left\{\,\frac{x}{2}\mid x\in X^0\,\right\} . \end{align*} \begin{lem} Let $X,X'$ be two $\beta$-sets. If $X'\sim X$, then the difference $|X'^1|-|X'^0|$ is equal to either $|X^1|-|X^0|$ or $|X^0|-|X^1|-1$. \end{lem} \begin{proof} Suppose that $X'=\{\,x+1\mid x\in X\,\}\cup\{0\}$. Then clearly $|X'^1|=|X^0|$ and $|X'^0|=|X^1|+1$, and hence $|X'^1|-|X'^0|=|X^0|-|X^1|-1$. Note that $|X'^0|-|X'^1|-1=|X^1|-|X^0|$ and the lemma is proved. \end{proof} \begin{lem} Let $X$ be a $\beta$-set of rank $n$, and let $\delta=|X^1|-|X^0|$. Then $n-\frac{\delta(\delta+1)}{2}$ is a non-negative even integer. \end{lem} \begin{proof} Suppose that $|X^1|=m_1$ and $|X^0|=m_0$. Then $|X|=m_0+m_1$ and $\delta=m_1-m_0$. Now \begin{align*} & n-\frac{(m_1-m_0)(m_1-m_0+1)}{2} \\ &= \sum_{x\in X}x-\frac{(m_1+m_0)(m_1+m_0-1)}{2}-\frac{(m_1-m_0)(m_1-m_0+1)}{2} \\ &= \sum_{x\in X}x-m_1^2-m_0(m_0-1). \end{align*} Because any element in $X^0$ is even and $m_0(m_0-1)$ is also even, we have \[ \sum_{x\in X}x\equiv \sum_{x\in X^1}x \equiv m_1 \equiv m_1^2\pmod 2. \] So the lemma is proved. \end{proof} For a partition $\lambda$, we define a $\beta$-set \begin{align*} X=X_\lambda &=\begin{cases} \{\lambda_1+(m-1),\lambda_2+(m-2),\ldots,\lambda_{m-1}+1,\lambda_m\}, & \text{if $\ell(\lambda)+\ell(\lambda_\infty)$ is even};\\ \{\lambda_1+m,\lambda_2+(m-1),\ldots,\lambda_{m-1}+2,\lambda_m+1,0\}, & \text{otherwise}. \end{cases} \end{align*} It is clear that the rank of $X_\lambda$ is equal to $|\lambda|$. Moreover, $X_{\lambda'}\sim X_\lambda$ if $\lambda'$ is obtained from $\lambda$ by adding several $0$'s. \begin{lem} Let $\lambda$ be a partition. Then $|X_\lambda^1|\geq |X_\lambda^0|$. \end{lem} \begin{proof} Let $X=X_\lambda$. After removing all possible $2$-hooks, we obtain a $\beta$-set $X_\infty$ of the form \[ \{2k-1,2k-3,\ldots,1\}\cup\{2l-2,2l-4,\ldots,0\} \] for some non-negative integers $k,l$, and $\lambda_\infty=[d,d-1,\ldots,1]$ where \[ d=\begin{cases} k-l, & \text{if $k\geq l$};\\ l-k-1, & \text{if $k<l$}. \end{cases} \] Now $|X^1|=k$, $|X^0|=l$ and $\ell(\lambda_\infty)=d$. \begin{enumerate} \item Suppose that $\ell(\lambda)+\ell(\lambda_\infty)$ is even. Then $\ell(\lambda)=|X|=k+l$. If $l>k$, then $\ell(\lambda_\infty)+\ell(\lambda)=(l-k-1)+(l+k)=2l-1$, which is odd, and we get a contradiction. \item Suppose that $\ell(\lambda)+\ell(\lambda_\infty)$ is odd. Then $\ell(\lambda)=|X|-1=l+k-1$. If $l>k$, then $\ell(\lambda_\infty)+\ell(\lambda)=(l-k-1)+(l+k-1)=2l-2$, which is even, and we get a contradiction again.
\end{enumerate} Therefore, we must have $|X^1|\geq |X^0|$. \end{proof} \begin{lem}\label{0211} Let $\lambda$ be a partition. Then $\ell(\lambda_\infty)=|X_\lambda^1|-|X^0_\lambda|$. \end{lem} \begin{proof} Let $X=X_\lambda$. After removing all possible $2$-hooks, we obtain a $\beta$-set $X_\infty$ of the form \[ \{2k-1,2k-3,\ldots,1\}\cup\{2l-2,2l-4,\ldots,0\} \] for some non-negative integers $k,l$. From the previous lemma, we know that $k\geq l$ and $\ell(\lambda_\infty)=k-l=|X^1|-|X^0|$. \end{proof} For a partition $\lambda$ (or for a $\beta$-set $X_\lambda$), we associate a symbol \[ \Lambda_\lambda= \begin{cases} \binom{X^1_\lambda}{X^0_\lambda}, & \text{if $\ell(\lambda_\infty)$ is even};\\ \binom{X^0_\lambda}{X^1_\lambda}, & \text{if $\ell(\lambda_\infty)$ is odd}. \end{cases} \] For a symbol $\Lambda$, let $\Lambda^*$ (resp.~$\Lambda_*$) denote the first row (resp.~second row) of $\Lambda$. Recall that the \emph{defect} of a symbol $\Lambda$ is defined to be ${\rm def}(\Lambda)=|\Lambda^*|-|\Lambda_*|$. Then by Lemma~\ref{0211}, we have \begin{equation}\label{0214} {\rm def}(\Lambda_\lambda)=\begin{cases} \phantom{-}\ell(\lambda_\infty), & \text{if $\ell(\lambda_\infty)$ is even};\\ -\ell(\lambda_\infty), & \text{if $\ell(\lambda_\infty)$ is odd}. \end{cases} \end{equation} For a non-negative even integer or a negative odd integer $\delta$, let $\cals_{n,\delta}$ denote the set of equivalence classes of symbols $\Lambda_\lambda$ such that ${\rm rk}(X_\lambda)=n$ and ${\rm def}(\Lambda_\lambda)=\delta$. Let $\calp_2(n)$ denote the set of \emph{bi-partitions} of $n$, i.e., the set of $\sqbinom{\mu}{\nu}$ where $\mu,\nu$ are partitions such that $|\sqbinom{\mu}{\nu}|=|\mu|+|\nu|=n$. Now we define a mapping \begin{equation}\label{0212} \Lambda_\lambda\mapsto\begin{cases} \sqbinom{\Upsilon(\overline X^1)}{\Upsilon(\overline X^0)}, & \text{if $\ell(\lambda_\infty)$ is even};\\ \sqbinom{\Upsilon(\overline X^0)}{\Upsilon(\overline X^1)}, & \text{if $\ell(\lambda_\infty)$ is odd}, \end{cases} \end{equation} where $\Upsilon$ is given in (\ref{0213}). By abusing the notation a little bit, the above mapping is also denoted by $\Upsilon$. It is easy to check that (\cf.~\cite{FS} p.~223) \begin{equation}\label{0516} |\lambda|=|\lambda_\infty|+2|\Upsilon(\Lambda_\lambda)|. \end{equation} Moreover, $\Upsilon$ gives a bijection from $\cals_{n,\delta}$ onto $\calp_2(\frac{1}{2}(n-\frac{|\delta|(|\delta|+1)}{2}))$. \begin{exam} Suppose that $\lambda=[d,d-1,\ldots,1]$. Then $\lambda_\infty=\lambda$ and hence $X_\lambda=\{2d-1,2d-3,\ldots,1\}$. Then $\overline X^1=\{d-1,d-2,\ldots,0\}$, $\overline X^0=\emptyset$, and hence \[ \Upsilon(\Lambda_\lambda)=\sqbinom{0}{0}\in\calp_2(0). \] \end{exam} \begin{exam} Suppose that $\lambda=[1,\ldots,1]\in\calp(n)$. \begin{enumerate} \item Suppose that $n$ is even. Then $\lambda_\infty=[0]$, and $\ell(\lambda)+\ell(\lambda_\infty)=n$. Then $X_\lambda=\{n,n-1,\ldots,1\}$, and hence $\overline X^1=\{\frac{n}{2}-1,\frac{n}{2}-2,\ldots,0\}$ and $\overline X^0=\{\frac{n}{2},\frac{n}{2}-1,\ldots,1\}$. Then \[ \Upsilon(\Lambda_\lambda)=\sqbinom{0}{1,1,\ldots,1}\in\calp_2(\tfrac{n}{2}). \] \item Suppose that $n$ is odd. Then $\lambda_\infty=[1]$, and $\ell(\lambda)+\ell(\lambda_\infty)=n+1$. Then $X_\lambda=\{n,n-1,\ldots,1\}$, and hence $\overline X^1=\{\frac{n-1}{2},\frac{n-3}{2},\ldots,0\}$ and $\overline X^0=\{\frac{n-1}{2},\frac{n-3}{2},\ldots,1\}$. Then \[ \Upsilon(\Lambda_\lambda)=\sqbinom{1,1,\ldots,1}{0}\in\calp_2(\tfrac{n-1}{2}).
\] \end{enumerate} \end{exam} Define \[ \cals_{\rmU_n}=\{\,\Lambda_\lambda\mid\lambda\in\calp(n)\,\}. \] Therefore, $\cals_{n,\delta}\subset\cals_{\rmU_n}$ if and only if \begin{itemize} \item $\delta$ is either a non-negative even integer or a negative odd integer; and \item $\frac{1}{2}(n-\frac{|\delta|(|\delta|+1)}{2})$ is a non-negative integer. \end{itemize} \begin{exam} From the above, we see that \[ \cals_{\rmU_7}=\cals_{7,-1}\cup\cals_{7,2},\qquad \cals_{\rmU_8}=\cals_{8,0}\cup\cals_{8,-3},\qquad \cals_{\rmU_{10}}=\cals_{10,0}\cup\cals_{10,-3}\cup\cals_{10,4}. \] \end{exam} \begin{rem} The definition of the symbol $\Lambda_\lambda\in\cals_{\rmU_n}$ associated to a given partition $\lambda$ of $n$ is slightly different from that given in \cite{pan-Lusztig-correspondence}. The new definition here is more convenient for us; in particular, (\ref{0214}) is simpler than lemma 5.5 in \cite{pan-Lusztig-correspondence}. \end{rem} \subsection{Order of a $\beta$-Set} For two $\beta$-sets $A,B$, we define several polynomials in $q$: \begin{align*} \Delta(A) &:= \prod_{a,a'\in A,\ a>a'}(q^a-q^{a'}), \\ \Theta(A) &:= \prod_{a\in A}\prod_{h=1}^a(q^h-(-1)^h), \\ \Xi(A,B) &:= \prod_{a\in A,\ b\in B}(q^a+q^b), \\ f_{A,B} &:=\frac{\Delta(A)\Delta(B)\Xi(A,B)}{\Theta(A)\Theta(B)q^{\binom{|A|+|B|-1}{2}+\binom{|A|+|B|-2}{2}+\cdots+\binom{2}{2}}}. \end{align*} For a $\beta$-set $X$, we define $f_X=f_{X^0,X^1}$. \begin{lem} If $X\sim X'$, then $f_X=f_{X'}$. \end{lem} \begin{proof} Without loss of generality, we may assume that \[ X'=\{\, x+1\mid x\in X\,\}\cup\{0\}. \] Then \[ X'^0=\{\,b+1\mid b\in X^1\,\}\cup\{0\}\quad\text{ and }\quad X'^1=\{\,a+1\mid a\in X^0\,\}. \] From the definitions above, it is clear that \begin{align*} \Delta(X'^0) &= \Delta(X^1)\cdot q^{\binom{|X^1|}{2}}\cdot\prod_{b\in X^1}(q^{b+1}-1) \\ \Delta(X'^1) &= \Delta(X^0)\cdot q^{\binom{|X^0|}{2}} \\ \Theta(X'^0) = \Theta(X^1)\cdot\prod_{b\in X^1}(q^{b+1}-(-1)^{b+1}) &= \Theta(X^1)\cdot\prod_{b\in X^1}(q^{b+1}-1) \\ \Theta(X'^1) = \Theta(X^0)\cdot\prod_{a\in X^0}(q^{a+1}-(-1)^{a+1}) &= \Theta(X^0)\cdot\prod_{a\in X^0}(q^{a+1}+1). \end{align*} Moreover, we have \begin{align*} \Xi(X'^0,X'^1) &= \prod_{a'\in X'^0,\ b'\in X'^1}(q^{a'}+q^{b'}) \\ &= \prod_{a\in X^0,\ b\in X^1}(q^{a+1}+q^{b+1})\cdot\prod_{a\in X^0}(q^{a+1}+1) \\ &= \Xi(X^0,X^1)\cdot q^{|X^0||X^1|}\cdot\prod_{a\in X^0}(q^{a+1}+1). \end{align*} Now $|X'^0|=|X^1|+1$, $|X'^1|=|X^0|$, and \[ \binom{|X'|-1}{2}=\binom{|X^0|+|X^1|}{2}=\binom{|X^0|}{2}+\binom{|X^1|}{2}+|X^0|\cdot|X^1|. \] Then the lemma follows. \end{proof} Now we want to express the dimension of $\rho_\lambda$ in terms of the $\beta$-set $X_\lambda$. A similar but different expression can be found in \cite{FS}. \begin{prop} Let $\lambda$ be a partition of $n$, and let $X=X_\lambda$. Then \begin{align*} \dim(\rho_\lambda) &= \frac{\Delta(X^0)\Delta(X^1)\Xi(X^0,X^1)} {\Theta(X^0)\Theta(X^1)q^{\binom{|X|-1}{2}+\binom{|X|-2}{2}+\cdots+\binom{2}{2}}}\cdot \prod_{i=1}^n(q^i-(-1)^i) \\ &= f_X\cdot |\rmU_n(q)|_{p'}. \end{align*} \end{prop} \begin{proof} Note that every partition $\lambda$ is built from a cuspidal partition $\lambda_\infty$ by adding several $2$-hooks. So we prove the proposition by induction on the number of $2$-hooks. First suppose that $\lambda=\lambda_\infty=[d,d-1,\ldots,1]$ for some $d$. Then $n=\frac{d(d+1)}{2}$, $X=X^1=\{2d-1,2d-3,\ldots,1\}$ and $X^0=\emptyset$.
Therefore $|X|=|X^1|=d$, and \begin{align*} \Delta(X^0)=\Theta(X^0)=\Xi(X^0,X^1) &=1, \\ \Delta(X^1) &=q^{\binom{d}{2}+2\left[\binom{d-1}{2}+\binom{d-2}{2}+\cdots+\binom{2}{2}\right]}\prod_{x=1}^{d-1}\prod_{h=1}^{x}(q^{2h}-1), \\ \Theta(X^1) &= (q+1)^{d}\prod_{x=1}^{d-1}\prod_{h=1}^x \left[(q^{2h}-1)(q^{2h+1}+1)\right]. \end{align*} So we see that \[ f_X\cdot|\rmU_n(q)|_{p'} =\frac{q^{\binom{d}{2}+\binom{d-1}{2}+\cdots+\binom{2}{2}}\prod_{i=1}^n(q^i-(-1)^i)} {(q+1)^{d}(q^3+1)^{d-1}\cdots(q^{2d-3}+1)^2(q^{2d-1}+1)} =\dim(\rho_\lambda) \] from Example~\ref{0202}. Therefore the proposition is true when the partition $\lambda$ is cuspidal. Next we assume that the proposition is true for some partition $\lambda$ of $n$ and suppose that $\lambda'$ is obtained from $\lambda$ by adding a $2$-hook. Then $\lambda'$ is a partition of $n+2$. We have the following two situations: \begin{enumerate} \item There is an index $i_0$ such that $\lambda'_{i_0}=\lambda_{i_0}+2$ and $\lambda_i'=\lambda_i$ for any $i\neq i_0$. Moreover, without loss of generality, we may assume that the rows below $\lambda_{i_0}$ form a cuspidal partition, i.e., \begin{equation}\label{0201} [\lambda_{i_0+1},\lambda_{i_0+2},\ldots,\lambda_{\ell(\lambda)}]=[d',d'-1,\ldots,1] \end{equation} for some non-negative integer $d'$. Then clearly $\kappa(\lambda')=\kappa(\lambda)+2(i_0-1)$, and \begin{multline*} \frac{g_{\lambda'}}{g_{\lambda}} =\frac{(q^{n+1}-(-1)^{n+1})(q^{n+2}-(-1)^{n+2})}{(q-1)(q^2+1)}\cdot \prod_{\xi'}\frac{q^{|\xi'|}-(-1)^{|\xi'|}}{q^{|\xi'|+2}-(-1)^{|\xi'|}} \\ \cdot \prod_{\xi''}\frac{q^{|\xi''|}-(-1)^{|\xi''|}}{q^{|\xi''|+1}-(-1)^{|\xi''|+1}}\cdot \prod_{\xi'''}\frac{q^{|\xi'''|}-(-1)^{|\xi'''|}}{q^{|\xi'''|+1}-(-1)^{|\xi'''|+1}} \end{multline*} where $\prod_{\xi'}$ runs over all hooks left to $(i_0,\lambda_{i_0})$, $\prod_{\xi''}$ runs over all hooks above $(i_0,\lambda_{i_0}+1)$, and $\prod_{\xi'''}$ runs over all hooks above $(i_0,\lambda_{i_0}+2)$. Note that \[ |\xi_{i,\lambda_{i_0}+1}|=|\xi_{i,\lambda_{i_0}+2}|+1 \] for $i=1,\ldots,i_0-1$, so \begin{multline}\label{0203} \frac{g_{\lambda'}}{g_{\lambda}} =\frac{(q^{n+1}-(-1)^{n+1})(q^{n+2}-(-1)^{n+2})}{(q-1)(q^2+1)}\cdot \prod_{\xi'}\frac{q^{|\xi'|}-(-1)^{|\xi'|}}{q^{|\xi'|+2}-(-1)^{|\xi'|}} \\ \cdot \prod_{\xi'''}\frac{q^{|\xi'''|}-(-1)^{|\xi'''|}}{q^{|\xi'''|+2}-(-1)^{|\xi'''|}}. \end{multline} Now we have two possibilities: \begin{enumerate} \item $X^1_{\lambda'}=X^1_\lambda$ and there is an $a_0\in X^0_\lambda$ such that $a_0+2\not\in X^0_\lambda$ and $X^0_{\lambda'}=X^0_\lambda\cup\{a_0+2\}\smallsetminus\{a_0\}$. Moreover, we know that if $b\in X^1_\lambda$, then either $b>a_0+2$ or $b<a_0$.
Then we have \begin{align}\label{0204} \begin{split} \Delta(X^1_{\lambda'}) &= \Delta(X^1_\lambda); \\ \Theta(X^1_{\lambda'}) &= \Theta(X^1_\lambda); \\ \Theta(X^0_{\lambda'}) &= \Theta(X^0_\lambda)(q^{a_0+1}+1)(q^{a_0+2}-1); \end{split} \end{align} \begin{align}\label{0205} \begin{split} \frac{\Delta(X^0_{\lambda'})}{\Delta(X^0_\lambda)} &= \prod_{a\in X^0_\lambda,\ a>a_0+2}\frac{q^a-q^{a_0+2}}{q^a-q^{a_0}} \cdot \prod_{a\in X^0_\lambda,\ a<a_0}\frac{q^{a_0+2}-q^a}{q^{a_0}-q^a} \\ &= q^{2l} \cdot \prod_{a\in X^0_\lambda,\ a>a_0+2}\frac{q^{a-a_0-2}-1}{q^{a-a_0}-1} \cdot \prod_{a\in X^0_\lambda,\ a<a_0}\frac{q^{a_0+2-a}-1}{q^{a_0-a}-1} \end{split} \end{align} where $l$ is the number of elements $a\in X^0_\lambda$ which are greater than $a_0$; \begin{align}\label{0206} \begin{split} \frac{\Xi(X^0_{\lambda'},X^1_{\lambda'})}{\Xi(X^0_\lambda,X^1_\lambda)} &= \prod_{b\in X^1_\lambda}\frac{q^{a_0+2}+q^b}{q^{a_0}+q^b} \\ &= q^{2l'} \cdot \prod_{b\in X^1_\lambda,\ b>a_0+2}\frac{q^{b-a_0-2}+1}{q^{b-a_0}+1} \cdot \prod_{b\in X^1_\lambda,\ b<a_0}\frac{q^{a_0-b+2}+1}{q^{a_0-b}+1} \end{split} \end{align} where $l'$ is the number of elements $b\in X^1_\lambda$ which are greater than $a_0$. Now $l+l'$ is the number of parts $\lambda_i$ which are greater than $\lambda_{i_0}$, i.e., $l+l'=i_0-1$. Now (\ref{0201}) implies that \[ \{\,a\in X^0_\lambda\mid a<a_0\,\}=\emptyset\quad\text{ and }\quad \{\,b\in X^1_\lambda\mid b<a_0\,\}=\{2d'-1,2d'-3,\ldots,1\}. \] Then \[ \prod_{a\in X^0_\lambda,\ a<a_0}\frac{q^{a_0+2-a}-1}{q^{a_0-a}-1} \cdot\prod_{b\in X^1_\lambda,\ b<a_0}\frac{q^{a_0-b+2}+1}{q^{a_0-b}+1} =\frac{q^{a_0+1}+1}{q^{a_0-2d'+1}+1}. \] There is a one-to-one correspondence between the set of hooks $\xi'''$ above $(i_0,\lambda_{i_0}+2)$ and the set \[ \{\,a\in X^0_\lambda\mid a>a_0+2\,\}\cup\{\,b\in X^1_\lambda\mid b>a_0+2\,\} \] and \[ |\xi'''|=a-a_0-2\quad\text{ or }\quad |\xi'''|=b-a_0-2. \] Note that $a-a_0-2$ is even and $b-a_0-2$ is odd. Therefore \begin{equation}\label{0207} \prod_{\xi'''}\frac{q^{|\xi'''|}-(-1)^{|\xi'''|}}{q^{|\xi'''|+2}-(-1)^{|\xi'''|}} =\prod_{a\in X^0_\lambda,\ a>a_0+2}\frac{q^{a-a_0-2}-1}{q^{a-a_0}-1} \cdot\prod_{b\in X^1_\lambda,\ b>a_0+2}\frac{q^{b-a_0-2}+1}{q^{b-a_0}+1}. \end{equation} Now consider the hooks left to $(i_0,\lambda_{i_0})$. From (\ref{0201}), we know that \begin{equation}\label{0208} \prod_{\xi'}\frac{q^{|\xi'|}-(-1)^{|\xi'|}}{q^{|\xi'|+2}-(-1)^{|\xi'|}} =\frac{(q-1)(q^2+1)}{(q^{a_0+2}-1)(q^{a_0-2d'+1}+1)}. \end{equation} Therefore, by (\ref{0203}), (\ref{0204}), (\ref{0205}), (\ref{0206}), (\ref{0207}), and (\ref{0208}), we conclude that \[ \frac{f_{X^0_{\lambda'},X^1_{\lambda'}}|\rmU_{n+2}(q)|_{p'}}{f_{X^0_\lambda,X^1_\lambda}|\rmU_n(q)|_{p'}} =\frac{q^{\kappa(\lambda')}g_{\lambda'}}{q^{\kappa(\lambda)}g_\lambda} =\frac{\dim(\rho_{\lambda'})}{\dim(\rho_\lambda)}. \] Thus the proposition is true for $\lambda'$ by the induction hypothesis. \item $X^0_{\lambda'}=X^0_\lambda$ and there is a $b_0\in X^1_\lambda$ such that $b_0+2\not\in X^1_\lambda$ and $X^1_{\lambda'}=X^1_\lambda\cup\{b_0+2\}\smallsetminus\{b_0\}$. The proof is similar to case (a) and is skipped. \end{enumerate} \item There exists an index $i_0$ such that $\lambda_{i_0}'=\lambda_{i_0}+1$, $\lambda'_{i_0+1}=\lambda_{i_0+1}+1$, and $\lambda_j'=\lambda_j$ for any $j\neq i_0,i_0+1$.
Again, without loss of generality, we may assume that the rows below $\lambda_{i_0+1}$ form a cuspidal partition, i.e., \begin{equation} [\lambda_{i_0+2},\lambda_{i_0+3},\ldots,\lambda_{\ell(\lambda)}]=[d',d'-1,\ldots,1] \end{equation} for some non-negative integer $d'$. Then $\kappa(\lambda')=\kappa(\lambda)+i_0+(i_0-1)=\kappa(\lambda)+(2i_0-1)$, and \begin{multline*} \frac{g_{\lambda'}}{g_{\lambda}} =\frac{(q^{n+1}-(-1)^{n+1})(q^{n+2}-(-1)^{n+2})}{(q-1)(q^2+1)}\cdot \prod_{\xi'}\frac{q^{|\xi'|}-(-1)^{|\xi'|}}{q^{|\xi'|+1}-(-1)^{|\xi'|+1}} \\ \cdot \prod_{\xi''}\frac{q^{|\xi''|}-(-1)^{|\xi''|}}{q^{|\xi''|+1}-(-1)^{|\xi''|+1}}\cdot \prod_{\xi'''}\frac{q^{|\xi'''|}-(-1)^{|\xi'''|}}{q^{|\xi'''|+2}-(-1)^{|\xi'''|}} \end{multline*} where $\prod_{\xi'}$ runs over all hooks left to $(i_0,\lambda_{i_0})$, $\prod_{\xi''}$ runs over all hooks left to $(i_0+1,\lambda_{i_0+1})$, and $\prod_{\xi'''}$ runs over all hooks above $(i_0,\lambda_{i_0}+1)$. Note that \[ |\xi_{i_0,j}|=|\xi_{i_0+1,j}|+1 \] for $j=1,\ldots,\lambda_{i_0}$, so \begin{multline} \frac{g_{\lambda'}}{g_{\lambda}} =\frac{(q^{n+1}-(-1)^{n+1})(q^{n+2}-(-1)^{n+2})}{(q-1)(q^2+1)}\cdot \prod_{\xi''}\frac{q^{|\xi''|}-(-1)^{|\xi''|}}{q^{|\xi''|+2}-(-1)^{|\xi''|}} \\ \cdot \prod_{\xi'''}\frac{q^{|\xi'''|}-(-1)^{|\xi'''|}}{q^{|\xi'''|+2}-(-1)^{|\xi'''|}}. \end{multline} \begin{enumerate} \item $X^1_{\lambda'}=X^1_\lambda$ and there is an $a_0\in X^0_\lambda$ such that $a_0+2\not\in X^0_\lambda$ and $X^0_{\lambda'}=X^0_\lambda\cup\{a_0+2\}\smallsetminus\{a_0\}$. Moreover, we know that there exists a (unique) element $b_0\in X^1_\lambda$ such that $b_0=a_0+1$. Then we have \begin{align*} \Delta(X^1_{\lambda'}) &= \Delta(X^1_\lambda); \\ \Theta(X^1_{\lambda'}) &= \Theta(X^1_\lambda); \\ \Theta(X^0_{\lambda'}) &= \Theta(X^0_\lambda)(q^{a_0+1}+1)(q^{a_0+2}-1); \end{align*} \begin{align*} \frac{\Delta(X^0_{\lambda'})}{\Delta(X^0_\lambda)} &= q^{2l} \cdot \prod_{a\in X^0_\lambda,\ a>a_0+2}\frac{q^{a-a_0-2}-1}{q^{a-a_0}-1} \cdot \prod_{a\in X^0_\lambda,\ a<a_0}\frac{q^{a_0+2-a}-1}{q^{a_0-a}-1} \end{align*} where $l$ is the number of elements $a\in X^0_\lambda$ which are greater than $a_0$. Note that there is an element $b_0\in X^1_\lambda$ such that $b_0=a_0+1$, so \[ \frac{q^{a_0+2}+q^{b_0}}{q^{a_0}+q^{b_0}}=q. \] Then \begin{align*} \frac{\Xi(X^0_{\lambda'},X^1_{\lambda'})}{\Xi(X^0_\lambda,X^1_\lambda)} &= q^{2l'-1}\cdot \prod_{b\in X^1_\lambda,\ b>a_0+2}\frac{q^{b-a_0-2}+1}{q^{b-a_0}+1} \cdot \prod_{b\in X^1_\lambda,\ b<a_0}\frac{q^{a_0-b+2}+1}{q^{a_0-b}+1} \end{align*} where $l'$ is the number of elements $b\in X^1_\lambda$ which are greater than $a_0$. Now $l+l'$ is the number of parts $\lambda_i$ which are greater than $\lambda_{i_0+1}$, i.e., $l+l'=i_0$. Then $\kappa(\lambda')-\kappa(\lambda)=2i_0-1=2(l+l')-1$. Again, we have \begin{align*} \prod_{\xi'''}\frac{q^{|\xi'''|}-(-1)^{|\xi'''|}}{q^{|\xi'''|+2}-(-1)^{|\xi'''|}} &= \prod_{a\in X^0_\lambda,\ a>a_0+2}\frac{q^{a-a_0-2}-1}{q^{a-a_0}-1} \cdot\prod_{b\in X^1_\lambda,\ b>a_0+2}\frac{q^{b-a_0-2}+1}{q^{b-a_0}+1} \\ \prod_{\xi''}\frac{q^{|\xi''|}-(-1)^{|\xi''|}}{q^{|\xi''|+2}-(-1)^{|\xi''|}} &= \frac{(q-1)(q^2+1)}{(q^{a_0+2}-1)(q^{a_0-2d'+1}+1)}. \end{align*} Again, the proposition is true for $\lambda'$ by the induction hypothesis. \item $X^0_{\lambda'}=X^0_\lambda$ and there is a $b_0\in X^1_\lambda$ such that $b_0+2\not\in X^1_\lambda$ and $X^1_{\lambda'}=X^1_\lambda\cup\{b_0+2\}\smallsetminus\{b_0\}$. The proof is similar to case (a) and is skipped.
\end{enumerate} \end{enumerate} Finally, the proposition is proved in all cases. \end{proof} For $\lambda\in\calp(n)$, let $\ord(X_\lambda)$ or $\ord(\Lambda_\lambda)$ also denote the degree of the polynomial $\dim(\rho_\lambda)$ in $q$. \begin{lem}\label{0209} Let $\lambda\in\calp(n)$. Suppose that $X_\lambda=\{x_1,\ldots,x_m\}$ where $x_1>x_2>\cdots>x_m$. Then \[ \ord(X_\lambda)=\sum_{i=1}^m(m-i)x_i-\frac{1}{2}\sum_{i=1}^m x_i(x_i+1)+\frac{n(n+1)}{2}-\frac{m(m-1)(m-2)}{6}. \] \end{lem} \begin{proof} Clearly, we have \begin{align*} \deg(\Delta(X^0)\Delta(X^1)\Xi(X^0,X^1)) &= \sum_{i=1}^{m}(m-i)x_i \\ \deg(\Theta(X^0)\Theta(X^1)) &= \frac{1}{2}\sum_{i=1}^{m}x_i(x_i+1) \\ \deg\left(q^{\binom{m-1}{2}+\cdots+\binom{2}{2}}\right) &= \frac{m(m-1)(m-2)}{6} \\ \deg(|\rmU_n(q)|_{p'}) &= \frac{n(n+1)}{2}. \end{align*} Then the lemma follows. \end{proof} \begin{lem}\label{0210} Let $X=\{x_1,\ldots,x_m\}$ and $X'=\{x'_1,\ldots,x'_m\}$ be two $\beta$-sets. Suppose that there are two indices $k<l$ such that $x'_k=x_k+1$, $x'_l=x_l-1$, and $x'_i=x_i$ for $i\neq k,l$. Then $\ord(X)>\ord(X')$. \end{lem} \begin{proof} We have \begin{align*} (m-k)(x_k+1)-\tfrac{1}{2}(x_k+1)(x_k+2)-(m-k)x_k+\tfrac{1}{2}x_k(x_k+1) &=m-k-x_k-1, \\ (m-l)(x_l-1)-\tfrac{1}{2}(x_l-1)x_l-(m-l)x_l+\tfrac{1}{2}x_l(x_l+1) &=l-m+x_l. \end{align*} Because $x_1>x_2>\cdots>x_m$, we must have $x_k-x_l\geq l-k$, and hence by Lemma~\ref{0209} \[ \ord(X')-\ord(X)=(l-k)+(x_l-x_k)-1<0. \] \end{proof} \section{Two Correspondences on Unipotent Characters} In this section, we consider a dual pair of two unitary groups, i.e., $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$ for some non-negative integers $n,n'$. \subsection{Theta correspondence} Let $\lambda=[\lambda_1,\ldots,\lambda_m]$ and $\lambda'=[\lambda'_1,\ldots,\lambda'_{m'}]$ be two partitions. By adding some $0$'s if necessary, we may assume that $m=m'$. We say that \[ \lambda\preccurlyeq\lambda'\qquad\text{if $\lambda'_i-1\leq\lambda_i\leq\lambda'_i$ for each $i=1,\ldots,m$.} \] Let $\cals$ denote the set of equivalence classes of symbols. We define several relations on $\cals$: \begin{align*} \calb^+ &=\{\,(\Lambda,\Lambda')\in\cals\times\cals \mid\Upsilon(\Lambda_*)^\rmT\preccurlyeq\Upsilon(\Lambda'^*)^\rmT,\ \Upsilon(\Lambda'_*)^\rmT\preccurlyeq\Upsilon(\Lambda^*)^\rmT\,\};\\ \calb^- &=\{\,(\Lambda,\Lambda')\in\cals\times\cals \mid\Upsilon(\Lambda^*)^\rmT\preccurlyeq\Upsilon(\Lambda'_*)^\rmT,\ \Upsilon(\Lambda'^*)^\rmT\preccurlyeq\Upsilon(\Lambda_*)^\rmT\,\};\\ \calb^+_{\rmU,\rmU} &=\left\{\,(\Lambda,\Lambda')\in\calb^+\mid {\rm def}(\Lambda')=\begin{cases} 0, & \text{if ${\rm def}(\Lambda)=0$};\\ -{\rm def}(\Lambda)+1, & \text{if ${\rm def}(\Lambda)\neq 0$} \end{cases}\,\right\};\\ \calb^-_{\rmU,\rmU} &=\left\{\,(\Lambda,\Lambda')\in\calb^-\mid {\rm def}(\Lambda')=-{\rm def}(\Lambda)-1\,\right\};\\ \calb_{\rmU_n,\rmU_{n'}} &= \begin{cases} \{(\Lambda_\lambda,\Lambda_{\lambda'})\in\calb^+_{\rmU,\rmU}\mid |\lambda|=n,\ |\lambda'|=n'\,\}, & \text{if $n+n'$ is even};\\ \{(\Lambda_\lambda,\Lambda_{\lambda'})\in\calb^-_{\rmU,\rmU}\mid|\lambda|=n,\ |\lambda'|=n'\,\}, & \text{if $n+n'$ is odd}. \end{cases} \end{align*} From the above definitions and (\ref{0214}) we know that $(\Lambda_\lambda,\Lambda_{\lambda'})\in\calb_{\rmU_n,\rmU_{n'}}$ implies either \[ \ell(\lambda_\infty)=\ell(\lambda'_\infty)=0\quad\text{ or }\quad|\ell(\lambda_\infty)-\ell(\lambda'_\infty)|=1. \] The following proposition on the Howe correspondence of unipotent characters for a dual pair of two unitary groups is rephrased from \cite{amr} th\'eor\`eme 5.15.
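Before stating it, we record a small numerical cross-check of the order formulas above (an illustrative Python sketch under the conventions of this section; the function names are ours). For every partition $\lambda$ of weight $n\leq 8$ it verifies that the $\beta$-set expression of Lemma~\ref{0209} agrees with $\ord(\rho_\lambda)=\kappa(\lambda)+\frac{n(n+1)}{2}-\sum_\xi|\xi|$.
\begin{verbatim}
# Illustrative cross-check: the beta-set expression for ord(X_lambda)
# agrees with kappa(lambda) + n(n+1)/2 - (sum of hook-lengths).

def partitions(n, largest=None):
    if largest is None:
        largest = n
    if n == 0:
        yield []
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def two_core_length(lam):
    """ell(lambda_infty), where lambda_infty = [d, d-1, ..., 1]."""
    m = len(lam)
    Y = [lam[i] + (m - 1 - i) for i in range(m)]  # a beta-set for lambda
    k = sum(1 for y in Y if y % 2 == 1)           # |Y^1|
    l = m - k                                     # |Y^0|
    return k - l if k >= l else l - k - 1

def beta_set(lam):
    """X_lambda, padded so that ell(lambda) + ell(lambda_infty) is even."""
    m = len(lam)
    X = [lam[i] + (m - 1 - i) for i in range(m)]
    if (m + two_core_length(lam)) % 2 == 1:
        X = [x + 1 for x in X] + [0]
    return X

def ord_from_beta_set(X, n):
    m = len(X)
    return (sum((m - 1 - i) * X[i] for i in range(m))
            - sum(x * (x + 1) for x in X) // 2
            + n * (n + 1) // 2
            - m * (m - 1) * (m - 2) // 6)

def ord_from_hooks(lam, n):
    kappa = sum(i * lam[i] for i in range(len(lam)))
    dual = [sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1)]
    hooks = sum(lam[i] + dual[j] - (i + 1) - (j + 1) + 1
                for i in range(len(lam)) for j in range(lam[i]))
    return kappa + n * (n + 1) // 2 - hooks

for n in range(1, 9):
    for lam in partitions(n):
        assert ord_from_beta_set(beta_set(lam), n) == ord_from_hooks(lam, n)
print("order formula verified for all partitions of n <= 8")
\end{verbatim}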
For this proposition, note that we need only assume that the characteristic of the base field is not equal to $2$ (\cf.~\cite{pan-Lusztig-correspondence} proposition~5.13). \begin{prop}\label{0307} Let $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$ be a reductive dual pair of two unitary groups. Then the decomposition of the unipotent part of the Weil character for the dual pair $(\bfG,\bfG')$ is given by \[ \omega_{\bfG,\bfG',1}=\sum_{(\Lambda_\lambda,\Lambda_{\lambda'})\in\calb_{\bfG,\bfG'}}\rho_\lambda\otimes\rho_{\lambda'}. \] \end{prop} For a finite classical group $G$, let $\cale(G)$ (resp.~$\cale(G)_1$) denote the set of irreducible characters (resp.~unipotent characters) of $G$. The proposition establishes a relation between $\cale(G)_1$ and $\cale(G')_1$ which will be called the (unipotent part of the) \emph{$\Theta$-correspondence}. For $\rho_\lambda\in\cale(G)_1$ or $\Lambda\in\cals_\bfG$, we define \begin{align*} \Theta_{\bfG'}(\rho_\lambda) &= \{\,\rho_{\lambda'}\in\cale(G')_1\mid(\Lambda_\lambda,\Lambda_{\lambda'})\in\calb_{\bfG,\bfG'}\,\}; \\ \Theta_{\bfG'}(\Lambda) &= \{\,\Lambda'\in\cals_{\bfG'}\mid(\Lambda,\Lambda')\in\calb_{\bfG,\bfG'}\,\}. \end{align*} For $k\geq 0$, we define \[ \Theta_{\bfG'}(\Lambda)_k =\begin{cases} \{\,\Lambda'\in\Theta_{\bfG'}(\Lambda)\mid |\Upsilon(\Lambda')_*|=|\Upsilon(\Lambda)^*|-k\,\}, & \text{if $n+n'$ is even};\\ \{\,\Lambda'\in\Theta_{\bfG'}(\Lambda)\mid |\Upsilon(\Lambda')^*|=|\Upsilon(\Lambda)_*|-k\,\}, & \text{if $n+n'$ is odd}. \end{cases} \] It is known that $\Theta_{\bfG'}(\Lambda)_k=\emptyset$ if $k$ is large enough and \[ \Theta_{\bfG'}(\Lambda)=\bigsqcup_{k\geq 0}\Theta_{\bfG'}(\Lambda)_k. \] \subsection{Definition of $\theta_k$}\label{0308} Consider the dual pair $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$. Suppose that $\lambda\in\calp(n)$ and $d=\ell(\lambda_\infty)$. Then $\Lambda_\lambda\in\cals_{n,\delta}$ where $\delta=d$ if $d$ is even and $\delta=-d$ if $d$ is odd, and $\Upsilon(\Lambda_\lambda)\in\calp_2(\frac{1}{2}(n-\frac{d(d+1)}{2}))$. Similarly, if $\lambda'\in\calp(n')$ and $d'=\ell(\lambda'_\infty)$, then $\Upsilon(\Lambda_{\lambda'})\in\calp_2(\frac{1}{2}(n'-\frac{d'(d'+1)}{2}))$. From Proposition~\ref{0307}, we have the following commutative diagram \[ \begin{CD} \cals_{n,\delta} @> \Theta_{\bfG'} >> \cals_{n',\delta'} \\ @V \Upsilon VV @VV \Upsilon V \\ \calp_2(\tfrac{1}{2}(n-\tfrac{d(d+1)}{2})) @>>> \calp_2(\tfrac{1}{2}(n'-\tfrac{d'(d'+1)}{2})) \end{CD} \] where $\Upsilon$ is a bijection and $\Theta_{\bfG'}$ is a correspondence. By abusing the notation, the correspondence at the bottom of the above diagram is also denoted by $\Theta_{\bfG'}$. Now we define \begin{equation}\label{0301} \tau=\frac{1}{2}\left[\left(n'-\frac{d'(d'+1)}{2}\right)-\left(n-\frac{d(d+1)}{2}\right)\right] \end{equation} and we want to construct a mapping \[ \theta_k\colon\calp_2(\tfrac{1}{2}(n-\tfrac{d(d+1)}{2})) \longrightarrow\calp_2(\tfrac{1}{2}(n'-\tfrac{d'(d'+1)}{2})) \] as follows when $\tau\geq 0$. Note that the definition of $\theta_0$ is modified from \cite{akp} definition 5. \begin{enumerate} \item Suppose that $n+n'$ is even and $\tau\geq 0$. Moreover, suppose also that \begin{equation}\label{0602} d'= \begin{cases} d, & \text{if $d=0$};\\ d-1, & \text{if $d$ is even and $d>0$};\\ d+1, & \text{if $d$ is odd}. \end{cases} \end{equation} Let $\sqbinom{\mu_1,\ldots,\mu_{m_1}}{\nu_1,\ldots,\nu_{m_2}}\in\calp_2(\tfrac{1}{2}(n-\tfrac{d(d+1)}{2}))$.
For $0\leq k\leq\mu_1$, we define \[ \theta_k\colon\sqbinom{\mu_1,\ldots,\mu_{m_1}}{\nu_1,\ldots,\nu_{m_2}} \mapsto\sqbinom{\nu_1,\ldots,\nu_{m_2}}{\mu_2,\ldots,\mu_{m_1}}\cup\sqbinom{\tau+k}{\mu_1-k}. \] \item Suppose that $n+n'$ is odd and $\tau\geq 0$. Moreover, suppose that \[ d'= \begin{cases} d+1, & \text{if $d$ is even;}\\ d-1, & \text{if $d$ is odd}. \end{cases} \] Let $\sqbinom{\mu_1,\ldots,\mu_{m_1}}{\nu_1,\ldots,\nu_{m_2}}\in\calp_2(\tfrac{1}{2}(n-\tfrac{d(d+1)}{2}))$. For $0\leq k\leq\nu_1$, we define \[ \theta_k\colon\sqbinom{\mu_1,\ldots,\mu_{m_1}}{\nu_1,\ldots,\nu_{m_2}} \mapsto\sqbinom{\nu_2,\ldots,\nu_{m_2}}{\mu_1,\ldots,\mu_{m_1}}\cup\sqbinom{\nu_1-k}{\tau+k}. \] \end{enumerate} Via the bijection $\Upsilon$ in (\ref{0212}), we will also regard $\theta_k$ as a mapping from $\cals_{n,\delta}$ to $\cals_{n',\delta'}$ (when $\tau\geq 0$). \begin{lem}\label{0304} Let $\Lambda\in\cals_\bfG$ and $k\geq 0$. Then $\theta_k(\Lambda)$ is the unique element of maximal order in $\Theta_{\bfG'}(\Lambda)_k$. \end{lem} \begin{proof} The proof is similar to that of lemma~4.10 in \cite{pan-eta}. \end{proof} \begin{lem}\label{0303} Let $\Lambda\in\cals_\bfG$. There exists a unique index $k_0$ such that \[ \ord(\theta_0(\Lambda))<\ord(\theta_1(\Lambda))<\cdots<\ord(\theta_{k_0-1}(\Lambda)) <\ord(\theta_{k_0}(\Lambda))>\ord(\theta_{k_0+1}(\Lambda))>\cdots, \] i.e., $\theta_{k_0}(\Lambda)$ is the unique element of maximal order in the set $\{\,\theta_k(\Lambda)\mid k\geq 0\,\}$. \end{lem} \begin{proof} Write $\Upsilon(\Lambda)=\sqbinom{\mu_1,\ldots,\mu_{m_1}}{\nu_1,\ldots,\nu_{m_2}}$. First suppose that $n+n'$ is even. If $\mu_1=0$, then $\{\,\theta_k(\Lambda)\mid k=0,\ldots,\mu_1\,\}=\{\theta_0(\Lambda)\}$ and there is nothing to prove. So we may assume that $\mu_1\geq 1$. By an argument similar to the proof of lemma~4.11 in \cite{pan-eta}, we know that in passing from $\theta_k(\Lambda)$ to $\theta_{k+1}(\Lambda)$, a unique entry $\alpha_k$ in the first row of $\theta_k(\Lambda)$ is changed to $\alpha_k+2$, all other entries in the first row being unchanged; and a unique entry $\beta_k$ in the second row of $\theta_k(\Lambda)$ is changed to $\beta_k-2$, all other entries in the second row being unchanged. Moreover, we know that \begin{itemize} \item all elements in the sequence $\langle\alpha_k\rangle$ are of the same parity and the sequence is strictly increasing; \item all elements in the sequence $\langle\beta_k\rangle$ are of the same parity and the sequence is strictly decreasing; \item any two elements $\alpha_k,\beta_{k'}$ are of opposite parities. \end{itemize} Now we define the index $k_0$ according to the following situations: \begin{enumerate} \item if $\alpha_0>\beta_0$, we let $k_0=0$; \item if $\alpha_{\mu_1}<\beta_{\mu_1}$, we let $k_0=\mu_1$; or \item if there is a unique index $k_1$ such that $\alpha_{k_1-1}<\beta_{k_1-1}$ and $\alpha_{k_1}>\beta_{k_1}$, then we let \[ k_0=\begin{cases} k_1, & \text{if $\alpha_{k_1-1}+2<\beta_{k_1-1}$};\\ k_1-1, & \text{if $\alpha_{k_1-1}+2>\beta_{k_1-1}$}. \end{cases} \] \end{enumerate} Next we suppose that $n+n'$ is odd. From the above, we know that in passing from $\theta_k(\Lambda)$ to $\theta_{k+1}(\Lambda)$, a unique entry $\alpha_k$ in the first row of $\theta_k(\Lambda)$ is changed to $\alpha_k-2$, all other entries in the first row being unchanged; and a unique entry $\beta_k$ in the second row of $\theta_k(\Lambda)$ is changed to $\beta_k+2$, all other entries in the second row being unchanged.
Moreover, \begin{itemize} \item all elements in the sequence $\langle\alpha_k\rangle$ are of the same parity and the sequence is strictly decreasing; \item all elements in the sequence $\langle\beta_k\rangle$ are of the same parity and the sequence is strictly increasing; \item any two elements $\alpha_k,\beta_{k'}$ are of opposite parities. \end{itemize} Now we define the index $k_0$ according to the following situations: \begin{enumerate} \item if $\beta_0>\alpha_0$, we let $k_0=0$; \item if $\beta_{\nu_1}<\alpha_{\nu_1}$, we let $k_0=\nu_1$; or \item if there is a unique index $k_1$ such that $\beta_{k_1-1}<\alpha_{k_1-1}$ and $\beta_{k_1}>\alpha_{k_1}$, then we let \[ k_0=\begin{cases} k_1, & \text{if $\beta_{k_1-1}+2<\alpha_{k_1-1}$};\\ k_1-1, & \text{if $\beta_{k_1-1}+2>\alpha_{k_1-1}$}. \end{cases} \] \end{enumerate} Then the assertion follows from Lemma~\ref{0210} immediately. \end{proof} \begin{exam} Consider the dual pair $(\rmU_8,\rmU_{10})$. Let $\lambda=[6,2]\in\calp(8)$. Then $\lambda_\infty=[0]$, $X_\lambda=\{7,2\}$, $\Lambda=\Lambda_\lambda=\binom{7}{2}\in\cals_{8,0}$, and $\Upsilon(\Lambda)=\sqbinom{3}{1}\in\calp_2(4)$. Now the mappings $\theta_k\colon\calp_2(4)\rightarrow\calp_2(5)$ and $\theta_k\colon\cals_{8,0}\rightarrow\cals_{10,0}$ are given by \begin{align*} \textstyle\theta_0(\sqbinom{3}{1}) &=\textstyle\sqbinom{1,1}{3}, & \textstyle\theta_1(\sqbinom{3}{1}) &=\textstyle\sqbinom{2,1}{2}, & \textstyle\theta_2(\sqbinom{3}{1}) &=\textstyle\sqbinom{3,1}{1}, & \textstyle\theta_3(\sqbinom{3}{1}) &=\textstyle\sqbinom{4,1}{0}; \\ \textstyle\theta_0(\binom{7}{2}) &=\textstyle\binom{5,3}{8,0}, & \textstyle\theta_1(\binom{7}{2}) &=\textstyle\binom{7,3}{6,0}, & \textstyle\theta_2(\binom{7}{2}) &=\textstyle\binom{9,3}{4,0}, & \textstyle\theta_3(\binom{7}{2}) &=\textstyle\binom{11,3}{2,0}. \end{align*} Then the sequence $\langle\alpha_k\rangle$ is $5,7,9,11$, and the sequence $\langle\beta_k\rangle$ is $8,6,4,2$. Now $\alpha_0<\beta_0$, $\alpha_1>\beta_1$, and $\alpha_0+2<\beta_0$. So we have $k_0=1$, i.e., \[ \textstyle \ord(\theta_0(\binom{7}{2}))< \ord(\theta_1(\binom{7}{2}))> \ord(\theta_2(\binom{7}{2}))> \ord(\theta_3(\binom{7}{2})). \] \end{exam} \subsection{Definitions of correspondences $\underline\theta$ and $\overline\theta$}\label{0309} Keep the setting of the previous subsection; in particular, we assume that $\tau\geq 0$. For $\Lambda\in\cals_{n,\delta}\subset\cals_\bfG$, we define \[ \underline\theta_{\bfG'}(\Lambda)=\theta_0(\Lambda)\quad\text{ and }\quad \underline\theta_{\bfG'}(\rho_\Lambda)=\rho_{\theta_0(\Lambda)}. \] Then we define a relation between $\cale(G)_1$ and $\cale(G')_1$: \[ \underline\theta_{\bfG,\bfG'} =\{\,(\rho_\Lambda,\rho_{\Lambda'})\in\cale(G)_1\times\cale(G')_1\mid\Lambda'=\underline\theta_{\bfG'}(\Lambda)\,\}. \] \begin{lem} Let $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$, and let $\lambda\in\calp(n)$. Then $\underline\theta_{\bfG'}(\rho_\lambda)\in\Theta_{\bfG'}(\rho_\lambda)$. \end{lem} \begin{proof} We know that $\calb_{\rmU_n,\rmU_{n'}}\subset\calb^+$ if $n+n'$ is even; and $\calb_{\rmU_n,\rmU_{n'}}\subset\calb^-$ if $n+n'$ is odd. Then the lemma follows from Lemma~\ref{0304} immediately. \end{proof} To define $\overline\theta_{\bfG'}$, we need to introduce a linear order ``$<$'' on the set $\cals_{n,\delta}\subset\cals_{\bfG}$ as follows: \begin{enumerate} \item Suppose that $n+n'$ is even. Let $\Lambda,\Lambda'\in\cals_{n,\delta}$.
We define that $\Lambda<\Lambda'$ if either \begin{itemize} \item $|\Upsilon(\Lambda)^*|<|\Upsilon(\Lambda')^*|$; or \item $|\Upsilon(\Lambda)^*|=|\Upsilon(\Lambda')^*|$ and $\Upsilon(\Lambda)^*<\Upsilon(\Lambda')^*$ in lexicographic order; or \item $\Upsilon(\Lambda)^*=\Upsilon(\Lambda')^*$ and $\Upsilon(\Lambda)_*<\Upsilon(\Lambda')_*$ in lexicographic order. \end{itemize} \item Suppose that $n+n'$ is odd. Let $\Lambda,\Lambda'\in\cals_{n,\delta}$. We define that $\Lambda<\Lambda'$ if either \begin{itemize} \item $|\Upsilon(\Lambda)_*|<|\Upsilon(\Lambda')_*|$; or \item $|\Upsilon(\Lambda)_*|=|\Upsilon(\Lambda')_*|$ and $\Upsilon(\Lambda)_*<\Upsilon(\Lambda')_*$ in lexicographic order; or \item $\Upsilon(\Lambda)_*=\Upsilon(\Lambda')_*$ and $\Upsilon(\Lambda)^*<\Upsilon(\Lambda')^*$ in lexicographic order. \end{itemize} \end{enumerate} Now we define $\overline\theta_{\bfG'}(\Lambda)$ inductively as follows. Assume that $\overline\theta_{\bfG'}(\Lambda')$ is defined for all $\Lambda'<\Lambda$ and consider the set \[ \Theta_{\bfG'}^\flat(\Lambda) =\Theta_{\bfG'}(\Lambda)\smallsetminus\{\,\overline\theta_{\bfG'}(\Lambda')\mid\Lambda'<\Lambda\,\}. \] \begin{lem} Let $\Lambda\in\cals_{n,\delta}\subset\cals_\bfG$. Then the set $\Theta_{\bfG'}^\flat(\Lambda)$ is always non-empty. \end{lem} \begin{proof} First suppose that $n+n'$ is even and $\Upsilon(\Lambda)=\sqbinom{\mu_1,\ldots,\mu_{m_1}}{\nu_1,\ldots,\nu_{m_2}}$. Let $\Lambda'_0\in\Theta_{\bfG'}(\Lambda)_0$ be given such that \[ \Upsilon(\Lambda'_0)=\sqbinom{\nu_1+\tau,\nu_2,\ldots,\nu_{m_2}}{\mu_1,\ldots,\mu_{m_1}} \] where $\tau$ is given in (\ref{0301}). By the same argument as in the proof of lemma~4.18 in \cite{pan-eta}, we see that $\Lambda'_0\not\in\Theta_{\bfG'}(\Lambda')$ for any $\Lambda'<\Lambda$, and hence $\Lambda'_0$ is in $\Theta_{\bfG'}^\flat(\Lambda)$. The proof for the case that $n+n'$ is odd is similar. \end{proof} Because $\Theta^\flat_{\bfG'}(\Lambda)$ is non-empty, we define $\overline\theta_{\bfG'}(\Lambda)$ to be the smallest element in the set of elements of maximal order in $\Theta^\flat_{\bfG'}(\Lambda)$. Then we have a mapping $\overline\theta_{\bfG'}\colon\cale(G)_1\rightarrow\cale(G')_1$ by $\overline\theta_{\bfG'}(\rho_\Lambda)=\rho_{\overline\theta_{\bfG'}(\Lambda)}$, and a relation $\overline\theta_{\bfG,\bfG'}$ between $\cale(G)_1$ and $\cale(G')_1$ by \[ \overline\theta_{\bfG,\bfG'} =\{\,(\rho_\Lambda,\rho_{\Lambda'})\in\cale(G)_1\times\cale(G')_1\mid\Lambda'=\overline\theta_{\bfG'}(\Lambda)\,\}. \] \begin{exam} Consider the dual pair $(\rmU_7,\rmU_{10})$. We know that \[ \cals_{\rmU_7}=\cals_{7,-1}\cup\cals_{7,2}\quad\text{and}\quad \cals_{\rmU_{10}}=\cals_{10,0}\cup\cals_{10,-3}\cup\cals_{10,4}. \] Now $\Upsilon$ establishes the bijections $\cals_{7,-1}\simeq\calp_2(3)$, $\cals_{7,2}\simeq\calp_2(2)$, $\cals_{10,0}\simeq\calp_2(5)$, $\cals_{10,-3}\simeq\calp_2(2)$ and $\cals_{10,4}\simeq\calp_2(0)$. The correspondence $\Theta_{\bfG,\bfG'}$ between $\cals_{\rmU_7}$ and $\cals_{\rmU_{10}}$ is decomposed as the union of the correspondence between $\cals_{7,-1}$ and $\cals_{10,0}$, and the correspondence between $\cals_{7,2}$ and $\cals_{10,-3}$. Note that for the part of the correspondence $\cals_{7,-1}\rightarrow\cals_{10,0}$, $\tau=5-3>0$, so every element in $\cals_{7,-1}$ occurs in the correspondence $\Theta$. Now we have the following table of the correspondence $\cals_{7,-1}\rightarrow\cals_{10,0}$.
A symbol $\Lambda'$ of maximal order in $\Theta_{\bfG'}(\Lambda)$ is superscripted by $\natural$ (notation: ``$\Lambda'^\natural$''); $\Lambda'$ is overlined (notation: ``$\overline{\Lambda'}$'') if $\Lambda'=\overline\theta_{\bfG'}(\Lambda)$; and $\Lambda'\in\Theta_{\bfG'}(\Lambda)$ is cancelled out (notation: ``$\bcancel{\Lambda'}$'') if $\Lambda'\not\in\Theta^\flat_{\bfG'}(\Lambda)$. The first element in $\Theta_{\bfG'}(\Lambda)_k$ is $\theta_k(\Lambda)$. In particular, the first element in $\Theta_{\bfG'}(\Lambda)_0$ is $\theta_0(\Lambda)=\underline\theta_{\bfG'}(\Lambda)$. \[ \begin{tabular}{c|llll} \toprule $\cals_{7,-1}$ & $\cals_{10,0}$ \\ $\Lambda$ & $\Theta_{\bfG'}(\Lambda)_0$ & $\Theta_{\bfG'}(\Lambda)_1$ & $\Theta_{\bfG'}(\Lambda)_2$ & $\Theta_{\bfG'}(\Lambda)_3$ \\ \midrule $\binom{6,4,2}{7,5,3,1}$ & $\overline{\binom{7,5,3,1}{10,6,4,2}}^\natural,\binom{5,3,1}{10,4,2}$ & \\ $\binom{6,2}{5,3,1}$ & $\overline{\binom{5,3,1}{8,6,2}}^\natural,\binom{5,3,1}{10,4,2},\binom{3,1}{8,4},\binom{3,1}{10,2}$ & \\ $\binom{6}{3,1}$ & $\overline{\binom{3,1}{8,4}}^\natural,\binom{3,1}{10,2},\binom{1}{10}$ & \\ \midrule $\binom{4,2}{7,3,1}$ & $\overline{\binom{7,3,1}{8,4,2}}^\natural,\binom{5,1}{8,2}$ & $\binom{5,3,1}{10,4,2},\binom{3,1}{10,2}$ & \\ $\binom{4}{5,1}$ & $\overline{\binom{5,1}{6,4}}^\natural,\binom{3}{8}$ & $\bcancel{\binom{3,1}{8,4}},\binom{3,1}{10,2},\binom{1}{10}$ & \\ \midrule $\binom{2}{5,3}$ & $\overline{\binom{5,3}{6,2}}^\natural,\binom{5,3}{8,0}$ & $\binom{5,1}{8,2},\binom{3}{8}$ & \\ $\binom{2}{7,1}$ & $\overline{\binom{7,1}{6,2}}^\natural,\binom{5}{6}$ & $\binom{5,1}{8,2},\binom{3}{8}$ & $\binom{3,1}{10,2},\binom{1}{10}$ & \\ \midrule $\binom{2,0}{7,5,3}$ & $\overline{\binom{7,5,3}{8,2,0}}^\natural$ & $\binom{5,3}{8,0}$ & \\ $\binom{0}{7,3}$ & $\overline{\binom{7,3}{6,0}}^\natural$ & $\binom{5,3}{8,0},\binom{5}{6}$ & $\binom{3}{8}$ & \\ $\binom{-}{7}$ & $\binom{7}{4}$ & $\overline{\binom{5}{6}}^\natural$ & $\binom{3}{8}$ & $\binom{1}{10}$ \\ \bottomrule \end{tabular} \] Note that $\underline\theta_{\bfG'}(\binom{-}{7})=\binom{7}{4}\neq\binom{5}{6}=\overline\theta_{\bfG'}(\binom{-}{7})$, and $\underline\theta_{\bfG'}(\Lambda)=\overline\theta_{\bfG'}(\Lambda)$ for any other $\Lambda$ in $\cals_{7,-1}$. Similarly, for the part $\cals_{7,2}\rightarrow\cals_{10,-3}$, $\tau=2-2=0$, so every element in $\cals_{7,2}$ and every element in $\cals_{10,-3}$ occur in the correspondence $\Theta$. Now we have the following table of the correspondence $\cals_{7,2}\rightarrow\cals_{10,-3}$: \[ \begin{tabular}{c|llll} \toprule $\cals_{7,2}$ & $\cals_{10,-3}$ \\ $\Lambda$ & $\Theta_{\bfG'}(\Lambda)_0$ & $\Theta_{\bfG'}(\Lambda)_1$ & $\Theta_{\bfG'}(\Lambda)_2$ \\ \midrule $\binom{5,3}{-}$ & $\overline{\binom{-}{7,5,1}}^\natural$ & \\ $\binom{7,1}{-}$ & $\overline{\binom{-}{9,3,1}}^\natural$ & \\ \midrule $\binom{7,3,1}{2}$ & $\overline{\binom{2}{9,5,3,1}}^\natural$ & $\bcancel{\binom{-}{7,5,1}},\bcancel{\binom{-}{9,3,1}}$ & \\ \midrule $\binom{7,5,3,1}{4,2}$ & $\overline{\binom{4,2}{9,7,5,3,1}}^\natural$ & $\bcancel{\binom{2}{9,5,3,1}}$ & \\ $\binom{5,3,1}{4}$ & $\overline{\binom{4}{7,5,3,1}}^\natural$ & $\bcancel{\binom{2}{9,5,3,1}}$ & $\bcancel{\binom{-}{9,3,1}}$ & \\ \bottomrule \end{tabular} \] For this case we have $\underline\theta_{\bfG'}(\Lambda)=\overline\theta_{\bfG'}(\Lambda)$ for any $\Lambda\in\cals_{7,2}$. This will be seen in Proposition~\ref{0302}. \end{exam} \begin{exam} Consider the dual pair $(\rmU_8,\rmU_{10})$.
We know that \[ \cals_{\rmU_8}=\cals_{8,0}\cup\cals_{8,-3}\quad\text{and}\quad \cals_{\rmU_{10}}=\cals_{10,0}\cup\cals_{10,-3}\cup\cals_{10,4}. \] Now $\Upsilon$ establishes the bijections $\cals_{8,0}\simeq\calp_2(4)$, $\cals_{8,-3}\simeq\calp_2(1)$, $\cals_{10,0}\simeq\calp_2(5)$, $\cals_{10,-3}\simeq\calp_2(2)$ and $\cals_{10,4}\simeq\calp_2(0)$. Now the correspondence $\Theta_{\bfG,\bfG'}$ between $\cals_{\rmU_8}$ and $\cals_{\rmU_{10}}$ is decomposed as the union of the correspondence between $\cals_{8,0}$ and $\cals_{10,0}$ and the correspondence between $\cals_{8,-3}$ and $\cals_{10,4}$. Note that for the part of the correspondence $\cals_{8,0}\rightarrow\cals_{10,0}$, $\tau=\frac{1}{2}(10-8)>0$, so every element in $\cals_{8,0}$ occurs in the correspondence $\Theta$. Now we have the following table of the correspondence $\cals_{8,0}\rightarrow\cals_{10,0}$: \[ \begin{tabular}{c|lllll} \toprule $\cals_{8,0}$ & $\cals_{10,0}$ \\ $\Lambda$ & $\Theta_{\bfG'}(\Lambda)_0$ & $\Theta_{\bfG'}(\Lambda)_1$ & $\Theta_{\bfG'}(\Lambda)_2$ & $\Theta_{\bfG'}(\Lambda)_3$ & $\Theta_{\bfG'}(\Lambda)_4$ \\ \midrule $\binom{7,5,3,1}{8,6,4,2}$ & $\overline{\binom{11,9,7,5,3}{8,6,4,2,0}}^\natural,\binom{11,7,5,3}{6,4,2,0}$ & \\ $\binom{5,3,1}{8,4,2}$ & $\overline{\binom{11,7,5,3}{6,4,2,0}}^\natural,\binom{9,7,3}{4,2,0},\binom{11,5,3}{4,2,0}$ & \\ $\binom{3,1}{6,4}$ & $\overline{\binom{9,7,3}{4,2,0}}^\natural,\binom{9,5}{2,0}$ & \\ $\binom{3,1}{8,2}$ & $\overline{\binom{11,5,3}{4,2,0}}^\natural,\binom{9,5}{2,0},\binom{11,3}{2,0}$ & \\ $\binom{1}{8}$ & $\overline{\binom{11,3}{2,0}}^\natural,\binom{11}{0}$ & \\ \midrule $\binom{7,3,1}{6,4,2}$ & $\overline{\binom{9,7,5,3}{8,4,2,0}}^\natural,\binom{9,5,3}{6,2,0}$ & $\bcancel{\binom{11,7,5,3}{6,4,2,0}},\bcancel{\binom{11,5,3}{4,2,0}}$ & \\ $\binom{5,1}{6,2}$ & $\overline{\binom{9,5,3}{6,2,0}}^\natural,\binom{7,5}{4,0},\binom{9,3}{4,0}$ & $\bcancel{\binom{9,7,3}{4,2,0}},\bcancel{\binom{11,5,3}{4,2,0}},\binom{9,5}{2,0},\bcancel{\binom{11,3}{2,0}}$ & \\ $\binom{3}{6}$ & $\overline{\binom{9,3}{4,0}}^\natural,\binom{9}{2}$ & $\binom{9,5}{2,0},\bcancel{\binom{11,3}{2,0}},\binom{11}{0}$ & \\ \midrule $\binom{5,3}{4,2}$ & $\overline{\binom{7,5,3}{6,4,0}}^\natural,\binom{7,3}{4,2}$ & $\bcancel{\binom{9,5,3}{6,2,0}},\bcancel{\binom{9,3}{4,0}}$ & \\ $\binom{5,3}{6,0}$ & $\overline{\binom{7,3}{4,2}}^\natural,\binom{9,1}{4,2}$ & $\binom{7,5}{4,0},\bcancel{\binom{9,3}{4,0}},\binom{9}{2}$ & \\ $\binom{7,1}{4,2}$ & $\overline{\binom{7,5,3}{8,2,0}}^\natural,\binom{7,3}{6,0}$ & $\bcancel{\binom{9,5,3}{6,2,0}}^\natural,\bcancel{\binom{9,3}{4,0}}$ & $\bcancel{\binom{11,5,3}{4,2,0}},\bcancel{\binom{11,3}{2,0}}$ & \\ $\binom{5}{4}$ & $\binom{7,3}{6,0},\binom{7}{4}$ & $\overline{\binom{7,5}{4,0}}^\natural,\bcancel{\binom{9,3}{4,0}},\binom{9}{2}$ & $\binom{9,5}{2,0},\bcancel{\binom{11,3}{2,0}},\binom{11}{0}$ & \\ \midrule $\binom{7,5,3}{6,2,0}$ & $\overline{\binom{7,5,1}{6,4,2}}^\natural,\binom{9,3,1}{6,4,2}$ & $\bcancel{\binom{7,3}{4,2}},\binom{9,1}{4,2}$ & \\ $\binom{7,3}{4,0}$ & $\overline{\binom{5,3}{6,2}}^\natural,\binom{7,1}{6,2}$ & $\bcancel{\binom{7,3}{4,2}},\binom{9,1}{4,2},\binom{7,3}{6,0},\binom{7}{4}$ & $\bcancel{\binom{9,3}{4,0}},\binom{9}{2}$ & \\ $\binom{7}{2}$ & $\binom{5,3}{8,0},\binom{5}{6}$ & $\overline{\binom{7,3}{6,0}}^\natural,\binom{7}{4}$ & $\bcancel{\binom{9,3}{4,0}},\binom{9}{2}$ & $\bcancel{\binom{11,3}{2,0}},\binom{11}{0}$ & \\ \midrule $\binom{9,7,5,3}{6,4,2,0}$ & $\overline{\binom{9,5,3,1}{8,6,4,2}}^\natural$ & $\binom{9,3,1}{6,4,2}$ & \\
$\binom{9,5,3}{4,2,0}$ & $\overline{\binom{7,3,1}{8,4,2}}^\natural$ & $\binom{9,3,1}{6,4,2},\binom{7,1}{6,2}$ & $\binom{9,1}{4,2}$ & \\ $\binom{7,5}{2,0}$ & $\overline{\binom{5,1}{6,4}}^\natural$ & $\binom{7,1}{6,2}$ & $\binom{7}{4}$ & \\ $\binom{9,3}{2,0}$ & $\overline{\binom{5,1}{8,2}}^\natural$ & $\binom{7,1}{6,2}^\natural,\binom{5}{6}$ & $\binom{9,1}{4,2}$ & $\binom{9}{2}$ & \\ $\binom{9}{0}$ & $\binom{3}{8}$ & $\overline{\binom{5}{6}}^\natural$ & $\binom{7}{4}$ & $\binom{9}{2}$ & $\binom{11}{0}$ \\ \bottomrule \end{tabular} \] From the above table we see that for most $\Lambda\in\cals_{8,0}$, we have $\underline\theta_{\bfG'}(\Lambda)=\overline\theta_{\bfG'}(\Lambda)$. However, $\underline\theta_{\bfG'}(\Lambda)=\theta_0(\Lambda)\neq\theta_1(\Lambda)=\overline\theta_{\bfG'}(\Lambda)$ for $\Lambda=\binom{5}{4},\binom{7}{2},\binom{9}{0}$. For the part $\cals_{8,-3}\rightarrow\cals_{10,4}$, we have $\tau<0$, so not every element in $\cals_{8,-3}$ occurs in the correspondence $\Theta$. Then we have to switch the roles of $\bfG$ and $\bfG'$ and consider the correspondence $\cals_{10,4}\rightarrow\cals_{8,-3}$. Note that $\binom{7,5,3,1}{-}$ is the only element in $\cals_{10,4}$, and $\binom{-}{7,3,1},\binom{2}{7,5,3,1}$ are the two elements in $\cals_{8,-3}$. The following table is the correspondence $\cals_{10,4}\rightarrow\cals_{8,-3}$: \[ \begin{tabular}{c|lllll} \toprule $\cals_{10,4}$ & $\cals_{8,-3}$ \\ $\Lambda'$ & $\Theta_\bfG(\Lambda')_0$ \\ \midrule $\binom{7,5,3,1}{-}$ & $\overline{\binom{2}{7,5,3,1}}^\natural$ \\ \bottomrule \end{tabular} \] Note that the symbol $\binom{-}{7,3,1}$ is the only element in $\cals_{\rmU_8}$ which does not occur in the correspondence $\Theta_{\bfG,\bfG'}$. \end{exam} \subsection{Properties of correspondences $\underline\theta$ and $\overline\theta$} Let $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$, $\lambda\in\calp(n)$, and $d=\ell(\lambda_\infty)$. \begin{enumerate} \item Suppose that $n+n'$ is even and $d$ is even. Then from (\ref{0602}), we know that $\rho_\lambda$ occurs in $\underline\theta_{\bfG,\bfG'}$ if \[ n'-\frac{d(d-1)}{2}\geq n-\frac{d(d+1)}{2}, \] i.e., if $n'\geq n-d$. \item Suppose that $n+n'$ is even and $d$ is odd. Then from (\ref{0602}), we know that $\rho_\lambda$ occurs in $\underline\theta_{\bfG,\bfG'}$ if \[ n'-\frac{(d+1)(d+2)}{2}\geq n-\frac{d(d+1)}{2}, \] i.e., if $n'\geq n+d+1$. \item Suppose that $n+n'$ is odd and $d$ is even. Then $\rho_\lambda$ occurs in $\underline\theta_{\bfG,\bfG'}$ if $n'\geq n+d+1$. \item Suppose that $n+n'$ is odd and $d$ is odd. Then $\rho_\lambda$ occurs in $\underline\theta_{\bfG,\bfG'}$ if $n'\geq n-d$. \end{enumerate} \begin{prop}\label{0302} Let $\Lambda\in\cals_{n,\delta}\subset\cals_\bfG$ and suppose that $\tau=0$. Then $\underline\theta_{\bfG'}(\Lambda)=\overline\theta_{\bfG'}(\Lambda)$. \end{prop} \begin{proof} By the same argument as in the proof of lemma~5.2 in \cite{pan-eta}, we can show that $\Theta^\flat_{\bfG'}(\Lambda)=\{\theta_0(\Lambda)\}$ when $\tau=0$. This implies that $\overline\theta_{\bfG'}(\Lambda)=\theta_0(\Lambda)=\underline\theta_{\bfG'}(\Lambda)$. \end{proof} A dual pair $(\rmU_n,\rmU_{n'})$ is said to be in \emph{stable range} if $n\leq\lfloor\frac{n'}{2}\rfloor$. \begin{lem} Suppose that $(\bfG,\bfG')$ is in stable range and let $\Lambda\in\cals_\bfG$. Then $\theta_0(\Lambda)$ is the unique element of maximal order in $\Theta_{\bfG'}(\Lambda)$. \end{lem} \begin{proof} Suppose that $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$.
Write \[ \Lambda=\binom{a_1,\ldots,a_{m_1}}{b_1,\ldots,b_{m_2}}\in\cals_{n,\delta}\subset\cals_\bfG\quad\text{and}\quad \Upsilon(\Lambda)=\sqbinom{\mu_1,\ldots,\mu_{m_1}}{\nu_1,\ldots,\nu_{m_2}} \in\calp_2(\tfrac{1}{2}(n-\tfrac{d(d+1)}{2})) \] where $d=|\delta|$. Now we consider the following cases: \begin{enumerate} \item Suppose that both $n,n'$ are even and $n'\geq 2n$. Note that now \begin{align*} \tau-\nu_1 &\geq \tfrac{1}{2}\left(n'-n+\tfrac{d(d+1)}{2}-\tfrac{d'(d'+1)}{2}\right)-\tfrac{1}{2}\left(n-\tfrac{d(d+1)}{2}\right) \\ &=\tfrac{1}{2}(n'-2n)+\tfrac{1}{2}\left(d(d+1)-\tfrac{d'(d'+1)}{2}\right). \end{align*} \begin{enumerate} \item If $\delta$ is even and positive, then $d=\delta=m_1-m_2\geq 4$, $d'=d-1$, and \[ \tau-\nu_1\geq\tfrac{1}{2}(n'-2n)+\tfrac{1}{4}d(d+3)\geq 0. \] So now \begin{equation}\label{0305} \theta_0(\Upsilon(\Lambda))=\sqbinom{\tau,\nu_1,\ldots,\nu_{m_2}}{\mu_1,\ldots,\mu_{m_1}}, \end{equation} $n'$ is even, and $d'$ is odd. Hence \begin{align*} \alpha_0 &= 2(\tau+m_2)=n'-n+d+2m_2, \\ \beta_0 &\leq a_1=\mu_1+2m_1-1\leq n-\tfrac{d(d+1)}{2}+2m_1-1, \\ \alpha_0-\beta_0 &\geq n'-2n-d+\tfrac{d(d+1)}{2}+1=n'-2n+\tfrac{d(d-1)}{2}+1>0. \end{align*} \item If $\delta=0$, then $d'=d=m_1-m_2=0$, and $\tau-\nu_1=\tfrac{1}{2}(n'-2n)\geq 0$. So now $\theta_0(\Upsilon(\Lambda))$ is as in (\ref{0305}), and both $n',d'$ are even. Hence \begin{align*} \alpha_0 &= 2(\tau+m_2)+1=n'-n+2m_2+1, \\ \beta_0 &\leq a_1\leq n+2(m_1-1), \\ \alpha_0-\beta_0 &\geq n'-2n+3>0. \end{align*} \item If $\delta$ is odd, then $d=m_2-m_1=-\delta\geq 3$, $d'=d+1$, and \[ \tau-\nu_1\geq \tfrac{1}{2}(n'-2n)+\tfrac{1}{4}(d+1)(d-2)\geq 0. \] So now $\theta_0(\Upsilon(\Lambda))$ is as in (\ref{0305}), and both $n',d'$ are even. Hence \begin{align*} \alpha_0 &= 2(\tau+m_2)+1=n'-n-(d+1)+2m_2+1, \\ \beta_0 &\leq a_1=n-\tfrac{d(d+1)}{2}+2(m_1-1), \\ \alpha_0-\beta_0 &\geq n'-2n+\tfrac{d(d+3)}{2}>0. \end{align*} \end{enumerate} \item Suppose that both $n,n'$ are odd and $n'\geq 2n+1$. \begin{enumerate} \item If $\delta$ is even, then $d=\delta=m_1-m_2\geq 2$, $d'=d-1$, and \[ \tau-\nu_1\geq \tfrac{1}{2}(n'-2n)+\tfrac{1}{4}d(d+3)\geq 0. \] So now $\theta_0(\Upsilon(\Lambda))$ is as in (\ref{0305}), and both $n',d'$ are odd. Hence \begin{align*} \alpha_0 &\geq 2(\tau+m_2-1)=n'-n+d+2m_2-2, \\ \beta_0 &\leq a_1=n-\tfrac{d(d+1)}{2}+2m_1-1, \\ \alpha_0-\beta_0 &\geq n'-2n-d+\tfrac{d(d+1)}{2}-1=n'-2n+\tfrac{d(d-1)}{2}-1>0. \end{align*} \item If $\delta$ is odd, then $d=m_2-m_1=-\delta\geq 1$, $d'=d+1$, and \[ \tau-\nu_1\geq \tfrac{1}{2}(n'-2n)+\tfrac{1}{4}(d+1)(d-2)\geq 0. \] So now $\theta_0(\Upsilon(\Lambda))$ is as in (\ref{0305}), $n'$ is odd, and $d'$ is even. Hence \begin{align*} \alpha_0 &\geq 2(\tau+m_2-1)+1=n'-n-(d+1)+2m_2-1, \\ \beta_0 &\leq a_1=n-\tfrac{d(d+1)}{2}+2(m_1-1), \\ \alpha_0-\beta_0 &\geq n'-2n+\tfrac{d(d+3)}{2}>0. \end{align*} \end{enumerate} \item Suppose that $n$ is even, $n'$ is odd and $n'\geq 2n+1$. Note that now \begin{align*} \tau-\mu_1 &\geq \tfrac{1}{2}\left(n'-n+\tfrac{d(d+1)}{2}-\tfrac{d'(d'+1)}{2}\right)-\tfrac{1}{2}\left(n-\tfrac{d(d+1)}{2}\right) \\ &=\tfrac{1}{2}(n'-2n)+\tfrac{1}{2}\left(d(d+1)-\tfrac{d'(d'+1)}{2}\right). \end{align*} \begin{enumerate} \item If $\delta$ is even, then $d=\delta=m_1-m_2\geq 0$, $d'=d+1$, and \[ \tau-\mu_1\geq \tfrac{1}{2}(n'-2n)+\tfrac{1}{4}(d+1)(d-2)\geq 0. \] So now \begin{equation}\label{0306} \theta_0(\Upsilon(\Lambda))=\sqbinom{\nu_1,\ldots,\nu_{m_2}}{\tau,\mu_1,\ldots,\mu_{m_1}}, \end{equation} both $n'$ and $d'$ are odd.
Hence \begin{align*} \beta_0 &= 2(\tau+m_1)+1=n'-n-(d+1)+2m_1+1, \\ \alpha_0 &\leq a_1\leq n-\tfrac{d(d+1)}{2}+2(m_2-1), \\ \beta_0-\alpha_0 &\geq n'-2n+d+2+\tfrac{d(d+1)}{2}=n'-2n+\tfrac{d(d+3)}{2}+2>0. \end{align*}
\item If $\delta$ is odd, then $d=m_2-m_1=-\delta\geq 3$, $d'=d-1$, and $\tau-\mu_1\geq 0$. So now $\theta_0(\Upsilon(\Lambda))$ is as in (\ref{0306}), $n'$ is odd, and $d'$ is even. Hence \begin{align*} \beta_0 &= 2(\tau+m_1)=n'-n+d+2m_1, \\ \alpha_0 &\leq a_1=n-\tfrac{d(d+1)}{2}+2(m_2-1)+1, \\ \beta_0-\alpha_0 &\geq n'-2n+\tfrac{d(d-1)}{2}+1>0. \end{align*}
\end{enumerate}
\item Suppose that $n$ is odd, $n'$ is even and $n'\geq 2n$.
\begin{enumerate}
\item If $\delta$ is even, then $d=\delta=m_1-m_2\geq 2$, $d'=d+1$, and $\tau-\mu_1\geq 0$. So now $\theta_0(\Upsilon(\Lambda))$ is as in (\ref{0306}), $n'$ is even, and $d'$ is odd. Hence \begin{align*} \beta_0 &\geq 2(\tau+m_1)+1=n'-n-d+2m_1, \\ \alpha_0 &\leq a_1=n-\tfrac{d(d+1)}{2}+2(m_2-1), \\ \beta_0-\alpha_0 &\geq n'-2n+d+\tfrac{d(d+1)}{2}+2=n'-2n+\tfrac{d(d+3)}{2}+2>0. \end{align*}
\item If $\delta$ is odd, then $d=m_2-m_1=-\delta\geq 1$, $d'=d-1$, and $\tau-\mu_1\geq 0$. So now $\theta_0(\Upsilon(\Lambda))$ is as in (\ref{0306}), and both $n',d'$ are even. Hence \begin{align*} \beta_0 &\geq 2(\tau+m_1)=n'-n+d+2m_1, \\ \alpha_0 &\leq a_1=n-\tfrac{d(d+1)}{2}+2m_2-1, \\ \beta_0-\alpha_0 &\geq n'-2n-d+\tfrac{d(d+1)}{2}+1=n'-2n+\tfrac{d(d-1)}{2}+1>0. \end{align*}
\end{enumerate}
\end{enumerate}
So we conclude that $\alpha_0>\beta_0$ when $n+n'$ is even, and $\beta_0>\alpha_0$ when $n+n'$ is odd. Then from the proof of Lemma~\ref{0303}, we see that $\theta_0(\Lambda)$ is the unique element of maximal order in the set $\{\,\theta_k(\Lambda)\mid k\geq 0\,\}$. The lemma then follows from Lemma~\ref{0304} immediately. \end{proof}
\begin{prop}\label{0310} Suppose that the dual pair $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$ is in the stable range and let $\Lambda\in\cals_\bfG$. Then $\underline\theta_{\bfG'}(\Lambda)=\overline\theta_{\bfG'}(\Lambda)$. \end{prop}
\begin{proof} Suppose that $(\bfG,\bfG')$ is in the stable range and $\Lambda\in\cals_{n,\delta}\subset\cals_\bfG$ for some $\delta$. Note that the mapping $\theta_0\colon\cals_{n,\delta}\rightarrow\cals_{n',\delta'}$, where $\delta'$ is given in Subsection~\ref{0308}, is one-to-one. This implies that $\theta_0(\Lambda)\in\Theta^\flat_{\bfG'}(\Lambda)$. Then by the previous lemma, we have $\overline\theta_{\bfG'}(\Lambda)=\theta_0(\Lambda)=\underline\theta_{\bfG'}(\Lambda)$. \end{proof}
Recall that we define the mappings $\underline\theta,\overline\theta\colon\cals_{n,\delta}\rightarrow\cals_{n',\delta'}$ under the assumption $\tau\geq 0$ in Subsection~\ref{0309}. Now we extend the domain of both mappings by defining \begin{align*} \underline\theta_{\bfG'}(\rho_\lambda) &=\rho_{\lambda'}\quad\text{ if and only if }\quad\underline\theta_\bfG(\rho_{\lambda'})=\rho_\lambda; \\ \overline\theta_{\bfG'}(\rho_\lambda) &=\rho_{\lambda'}\quad\text{ if and only if }\quad\overline\theta_\bfG(\rho_{\lambda'})=\rho_\lambda. \end{align*}
So from now on, we will drop the assumption that $\tau\geq0$. Note, however, that $\underline\theta_{\bfG'}(\rho_\lambda)$ or $\overline\theta_{\bfG'}(\rho_\lambda)$ might not be defined if $\tau<0$.
\section{Maximal Theta Relations for Unitary Groups}
Let $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$ for some $n,n'\in\bbN\cup\{0\}$.
\subsection{Lusztig correspondence and $\Theta$-correspondence}
For a semisimple element $s$ in the dual group $G^*$ of $G$, let $\cale(G)_s$ denote the \emph{Lusztig series} associated to $s$. For $\rho\in\cale(G)_s$, let $G^{(1)}$, $G^{(2)}$ be defined as in \cite{pan-Lusztig-correspondence} so that $C_{G^*}(s)=G^{(1)}\times G^{(2)}$. Then we have a bijection called a \emph{Lusztig correspondence} \[ \Xi_s\colon \cale(G)_s\longrightarrow\cale(G^{(1)}\times G^{(2)})_1. \] Write $\Xi_s(\rho)=\rho^{(1)}\otimes\rho^{(2)}$ for $\rho^{(j)}\in\cale(G^{(j)})_1$ and $j=1,2$.
The following can be extracted from \cite{amr} th\'eor\`eme 2.6 (\cf.~\cite{pan-chain01} theorem 3.10). Note that from \cite{pan-finite-unipotent}, we do not need to assume that $q$ is large enough.
\begin{prop} Let $(G,G')=(\rmU_n(q),\rmU_{n'}(q))$. Let $\rho\in\cale(G)_s$ and $\rho'\in\cale(G')_{s'}$ for some $s,s'$. Then $\rho\otimes\rho'$ occurs in the Howe correspondence for $(G,G')$ if and only if the following conditions hold: \begin{itemize} \item $G^{(1)}\simeq G'^{(1)}$ and $\rho^{(1)}\simeq\rho'^{(1)}$, \item $\rho^{(2)}\otimes\rho'^{(2)}$ occurs in the correspondence for the dual pair $(G^{(2)},G'^{(2)})$, \end{itemize} i.e., the following diagram \[ \begin{CD} \rho @> \Theta_{\bfG,\bfG'} >> \rho' \\ @V \Xi_s VV @VV \Xi_{s'} V \\ \rho^{(1)}\otimes\rho^{(2)} @> {\rm id}\otimes\Theta_{\bfG^{(2)},\bfG'^{(2)}} >> \rho'^{(1)}\otimes\rho'^{(2)} \end{CD} \] commutes. \end{prop}
Then we define \begin{align}\label{0401} \begin{split} \underline\theta_{\bfG'}(\rho) &= \Xi_{s'}^{-1}(\rho^{(1)}\otimes\underline\theta_{\bfG^{(2)}}(\rho^{(2)})) \\ \overline\theta_{\bfG'}(\rho) &= \Xi_{s'}^{-1}(\rho^{(1)}\otimes\overline\theta_{\bfG^{(2)}}(\rho^{(2)})), \end{split} \end{align} i.e., we have the following two commutative diagrams: \[ \begin{CD} \rho @> \underline\theta_{\bfG,\bfG'} >> \rho' \\ @V \Xi_s VV @VV \Xi_{s'} V \\ \rho^{(1)}\otimes\rho^{(2)} @> {\rm id}\otimes\underline\theta_{\bfG^{(2)},\bfG'^{(2)}} >> \rho'^{(1)}\otimes\rho'^{(2)}, \end{CD} \qquad\qquad \begin{CD} \rho @> \overline\theta_{\bfG,\bfG'} >> \rho' \\ @V \Xi_s VV @VV \Xi_{s'} V \\ \rho^{(1)}\otimes\rho^{(2)} @> {\rm id}\otimes\overline\theta_{\bfG^{(2)},\bfG'^{(2)}} >> \rho'^{(1)}\otimes\rho'^{(2)}. \end{CD} \]
Therefore the domains of the relations $\underline\theta$ and $\overline\theta$ are extended from unipotent characters to all irreducible characters.
\begin{cor} Suppose that the dual pair $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$ is in the stable range. Then $\underline\theta=\overline\theta$. \end{cor}
\begin{proof} This follows from Proposition~\ref{0310} and (\ref{0401}) immediately. \end{proof}
For a fixed Witt tower, let $\bfG_n$ denote the group of split rank $n$, i.e., $\bfG_n=\rmU_{2n}$ or $\rmU_{2n+1}$. For $\rho\in\cale(G)$, let $n'_0(\rho)$ (resp.~$\underline n'_0(\rho)$) be the smallest $n'$ such that $\Theta_{\bfG'_{n'}}(\rho)\neq\emptyset$ (resp.~$\underline\theta_{\bfG'_{n'}}(\rho)$ is defined).
\begin{cor}\label{0603} Let $(\bfG,\bfG')$ be a dual pair of two unitary groups and suppose that $\rho'\in\cale(G')$. Then \[ n_0(\rho')=\underline n_0(\rho'). \] \end{cor}
\begin{proof} The proof is similar to that of Lemma~6.9 in \cite{pan-eta}. \end{proof}
\subsection{Maximal theta relation}\label{0402}
Let $\vartheta$ be a sub-relation of $\Theta$ (\cf.~subsection 7.1 in \cite{pan-eta}). For a dual pair $(\bfG,\bfG')$ and $\rho\in\cale(G)$, we denote \[ \vartheta_{\bfG'}(\rho)=\{\,\rho'\in\cale(G')\mid(\rho,\rho')\in\vartheta_{\bfG,\bfG'}\,\}.
\]
A partial ordering is given on the set of all sub-relations of $\Theta$ by inclusion, i.e., we say that $\vartheta_1\leq\vartheta_2$ if, for each dual pair $(\bfG,\bfG')$ and $\rho\in\cale(G)$, we have $\vartheta_{1,\bfG'}(\rho)\subseteq\vartheta_{2,\bfG'}(\rho)$.
\begin{itemize}
\item A sub-relation $\vartheta$ of $\Theta$ is called \emph{semi-persistent} (on unipotent characters) if every $\rho_\lambda\in\cale(\rmU_n(q))_1$ occurs in $\vartheta_{\rmU_n,\rmU_{n'}}$ whenever either \begin{itemize} \item $n+n'+\ell(\lambda_\infty)$ is even, and $n'\geq n-\ell(\lambda_\infty)$; or \item $n+n'+\ell(\lambda_\infty)$ is odd and $n'\geq n+\ell(\lambda_\infty)+1$. \end{itemize}
\item A sub-relation $\vartheta$ is called \emph{symmetric} if for each dual pair $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$, $\rho\in\cale(G)$ and $\rho'\in\cale(G')$, we have $\rho'\in\vartheta_{\bfG'}(\rho)$ if and only if $\rho\in\vartheta_\bfG(\rho')$.
\item A sub-relation $\vartheta$ is said to be \emph{compatible with} the Lusztig correspondence if for each dual pair $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$, $\rho\in\cale(G)$ and $\rho'\in\cale(G')$, the following diagram \[ \begin{CD} \rho @> \vartheta_{\bfG,\bfG'} >> \rho' \\ @V \Xi_s VV @VV \Xi_{s'} V \\ \rho^{(1)}\otimes\rho^{(2)} @> {\rm id}\otimes\vartheta_{\bfG^{(2)},\bfG'^{(2)}} >> \rho'^{(1)}\otimes\rho'^{(2)} \end{CD} \] commutes.
\end{itemize}
Similar to the case of symplectic/orthogonal dual pairs, a sub-relation $\vartheta$ of $\Theta$ is called a \emph{theta relation} if it is semi-persistent, symmetric and compatible with the Lusztig correspondence. We know that both $\underline\theta$ and $\overline\theta$ are one-to-one theta-relations. The following proposition says that a one-to-one theta-relation cannot be properly contained in another one-to-one theta-relation.
\begin{prop} In the set of all one-to-one theta-relations, each element is maximal. \end{prop}
\begin{proof} Let $\vartheta$ be a one-to-one theta-relation. Suppose that $\vartheta'$ is another theta-relation such that $\vartheta'_{\bfG,\bfG'}$ properly contains $\vartheta_{\bfG,\bfG'}$ for some dual pair $(\bfG,\bfG')=(\rmU_n,\rmU_{n'})$, i.e., there are $\rho\in\cale(G)$ and $\rho'\in\cale(G')$ such that $(\rho,\rho')\in\vartheta'_{\bfG,\bfG'}$ and $(\rho,\rho')\not\in\vartheta_{\bfG,\bfG'}$. If $\vartheta_{\bfG'}(\rho)$ is defined, then $\vartheta'$ is not one-to-one. So we may assume that $\vartheta_{\bfG'}(\rho)$ is not defined. Since both $\vartheta'$ and $\vartheta$ are compatible with the Lusztig correspondence, we may assume that both $\rho,\rho'$ are unipotent. So we write $\rho=\rho_\lambda$ and $\rho'=\rho_{\lambda'}$ for some $\lambda\in\calp(n)$ and $\lambda'\in\calp(n')$.
First suppose that $n+n'$ is even. Because $\vartheta_{\bfG'}(\rho_\lambda)$ is not defined, we must have \[ n'< \begin{cases} n-\ell(\lambda_\infty), & \text{if $\ell(\lambda_\infty)$ is even};\\ n+\ell(\lambda_\infty)+1, & \text{if $\ell(\lambda_\infty)$ is odd}. \end{cases} \] Because now $(\rho_\lambda,\rho_{\lambda'})\in\vartheta'_{\bfG,\bfG'}\subseteq\Theta_{\bfG,\bfG'}$, we have $(\Lambda_\lambda,\Lambda_{\lambda'})\in\calb_{\bfG,\bfG'}$.
Therefore, we have \[ \ell(\lambda'_\infty)= \begin{cases} \ell(\lambda_\infty), & \text{if $\ell(\lambda_\infty)=0$};\\ \ell(\lambda_\infty)-1, & \text{if $\ell(\lambda_\infty)$ is even and $\ell(\lambda_\infty)>0$};\\ \ell(\lambda_\infty)+1, & \text{if $\ell(\lambda_\infty)$ is odd}, \end{cases} \] and then \[ \ell(\lambda_\infty)= \begin{cases} \ell(\lambda'_\infty), & \text{if $\ell(\lambda'_\infty)=0$};\\ \ell(\lambda'_\infty)+1, & \text{if $\ell(\lambda'_\infty)$ is odd};\\ \ell(\lambda'_\infty)-1, & \text{if $\ell(\lambda'_\infty)$ is even and $\ell(\lambda'_\infty)>0$}. \end{cases} \] Hence we have \[ n\geq \begin{cases} n'-\ell(\lambda'_\infty), & \text{if $\ell(\lambda'_\infty)$ is even};\\ n'+\ell(\lambda'_\infty)+1, & \text{if $\ell(\lambda'_\infty)$ is odd}. \end{cases} \]
So $\vartheta_{\bfG}(\rho_{\lambda'})$ is defined. Let $\rho_{\lambda''}=\vartheta_{\bfG}(\rho_{\lambda'})\in\cale(G)$. Now $(\rho_{\lambda'},\rho_{\lambda''})\in\vartheta_{\bfG',\bfG}\subseteq\vartheta'_{\bfG',\bfG}$ implies that $(\rho_{\lambda''},\rho_{\lambda'})\in\vartheta'_{\bfG,\bfG'}$ since $\vartheta$ is symmetric. Moreover, $(\rho_\lambda,\rho_{\lambda'})$ is in $\vartheta'_{\bfG,\bfG'}$ by our assumption. However, $\lambda$ and $\lambda''$ are not equal, because $\vartheta_{\bfG'}(\rho_\lambda)$ is not defined while $\vartheta_{\bfG'}(\rho_{\lambda''})=\rho_{\lambda'}$ by the symmetry of $\vartheta$. So we conclude that $\vartheta'$ is not one-to-one. The proof for the case that $n+n'$ is odd is similar. \end{proof}
\begin{cor} Let $(\bfG,\bfG')$ be a dual pair of two unitary groups. Then both $\underline\theta$ and $\overline\theta$ are maximal one-to-one theta-relations. \end{cor}
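The occurrence conditions in the four parity cases listed in the subsection on properties of $\underline\theta$ and $\overline\theta$ collapse to a single rule in terms of the parity of $n+n'+d$, which is exactly the semi-persistence bound above. The following minimal sketch (in Python, with an illustrative function name; $d=\ell(\lambda_\infty)$ is supplied by the caller) makes this concrete:

```python
def occurs_in_theta(n: int, n_prime: int, d: int) -> bool:
    """Occurrence test for rho_lambda in the correspondence for (U_n, U_{n'}),
    where d = ell(lambda_infty).  Transcribes the four parity cases; the
    function name and signature are illustrative, not from the paper."""
    if (n + n_prime + d) % 2 == 0:
        # n + n' + d even: bound n' >= n - d
        return n_prime >= n - d
    # n + n' + d odd: bound n' >= n + d + 1
    return n_prime >= n + d + 1

# n + n' even and d even reduces to n' >= n - d:
assert occurs_in_theta(8, 10, 0)
# n + n' even and d odd requires n' >= n + d + 1 = 12, so this fails:
assert not occurs_in_theta(8, 8, 3)
```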
\section*{Appendix}
\subsection*{Proof for Theorem \ref{thm:calibrate1}}
\begin{definition}\label{def:cal} \cite{bartlett2006convexity} A classification-calibrated loss function $\ell$ is defined as follows: there exists a non-decreasing convex function $\Psi(\cdot)$ with $\Psi(0)=0$ that satisfies $ \Psi\left(R(f) - R^*\right) \leq R_{\ell}(f) - R^*_{\ell}. $ \end{definition}
\begin{proof} Denote the Bayes risk of a classifier $f$ as $ R(f) := \mathbb{E}_{(X,Y)} \left[\mathbbm{1} \Big(f(X) \neq Y\Big) \right], $ and its minimum as $R^* = \min_{f} R(f)$. The classifier's $\ell$-risk is defined as $ R_{\ell}(f) := \mathbb{E}_{(X,Y)} \left[\ell(f(X),Y) \right]$, with its minimum value $R^*_{\ell} := \min_{f} R_{\ell}(f)$. We prove by contradiction. Suppose that reporting a hypothesis $f$ returns a higher payoff (i.e., a smaller risk) than reporting $f^*_i$ under $\ell$. Then \begin{align*} 0 > R_{\ell}(f) - R_{\ell}(f^*_i) \underbrace{=}_{(1)} R_{\ell}(f)-R^*_{\ell} \underbrace{\geq}_{(2)} \Psi\left(R(f) - R^*\right)\underbrace{\geq}_{(3)} \Psi(0) = 0, \end{align*} which is a contradiction. In the above, equality (1) holds by definition (since $f^*_i$ attains the minimum $\ell$-risk), inequality (2) is the calibration condition, and inequality (3) holds because $R(f)\geq R^*$ by the definition of $R^*$ and $\Psi$ is non-decreasing with $\Psi(0)=0$. \end{proof}
\subsection*{Proof for Lemma \ref{lemma:delta}}
\begin{proof} The proof builds essentially on the law of total probability (note $f^*$ and $Y$ are the same): \begin{align*} &\Delta^*(1,1) = \mathbb P(f_{i}^*(X)=1,Y=1) - \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=1)\\ =&\mathbb P(f_{i}^*(X)=1|Y=1) \cdot \mathbb P(Y=1)- \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=1)\\ =&\mathbb P(Y=1)\left(\mathbb P(f_{i}^*(X)=1|Y=1) - \mathbb P(f_{i}^*(X)=1)\right)\\ =&\mathbb P(Y=1)\biggl(\mathbb P(f_{i}^*(X)=1|Y=1) - \mathbb P(Y=1)\mathbb P(f_{i}^*(X)=1|Y=1) \\ &~~~~-\mathbb P(Y=2)\mathbb P(f_{i}^*(X)=1|Y=2)\biggr)\\ =&\mathbb P(Y=1)\bigl(\mathbb P(Y=2) \cdot \mathbb P(f_{i}^*(X)=1|Y=1) - \mathbb P(Y=2)\mathbb P(f_{i}^*(X)=1|Y=2)\bigr)\\ =&\mathbb P(Y=1)\cdot \mathbb P(Y=2) \cdot (\mathbb P(f_{i}^*(X)=1|Y=1)-\mathbb P(f_{i}^*(X)=1|Y=2))\\ =&\mathbb P(Y=1)\cdot \mathbb P(Y=2) \cdot \left(1-\textsf{FNR}(f_{i}^*) - \textsf{FPR}(f_{i}^*)\right) > 0 \end{align*}
Now consider $\Delta^*(1,2)$: \begin{align*} &\Delta^*(1,2) = \mathbb P(f_{i}^*(X)=1,Y=2) - \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=2)\\ =&\mathbb P(f_{i}^*(X)=1|Y=2) \cdot \mathbb P(Y=2)- \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=2)\\ =&\mathbb P(Y=2)\bigl(\mathbb P(f_{i}^*(X)=1|Y=2) - \mathbb P(f_{i}^*(X)=1)\bigr)\\ =&\mathbb P(Y=2)\biggl(\mathbb P(f_{i}^*(X)=1|Y=2) - \mathbb P(Y=1)\mathbb P(f_{i}^*(X)=1|Y=1) \\ &~~~~- \mathbb P(Y=2)\mathbb P(f_{i}^*(X)=1|Y=2)\biggr)\\ =&\mathbb P(Y=2)\bigl(\mathbb P(Y=1) \cdot \mathbb P(f_{i}^*(X)=1|Y=2) - \mathbb P(Y=1)\mathbb P(f_{i}^*(X)=1|Y=1)\bigr)\\ =&\mathbb P(Y=2)\cdot \mathbb P(Y=1) \cdot (\mathbb P(f_{i}^*(X)=1|Y=2)-\mathbb P(f_{i}^*(X)=1|Y=1))\\ =&\mathbb P(Y=2)\cdot \mathbb P(Y=1) \cdot \left(\textsf{FNR}(f_{i}^*) + \textsf{FPR}(f_{i}^*)-1\right) < 0 \end{align*}
The second row of $\Delta^*$, involving $\Delta^*(2,1),\Delta^*(2,2)$, can be argued symmetrically. \end{proof}
\subsection*{Proof for Theorem \ref{thm:CA:truthful}}
\begin{proof} Note the following fact: $$ \mathbb E[S(f,f^*)] := \sum_n \mathbb E[S(f(x_n),f^*(x_n))]. $$ Therefore we can focus on the expected score of an individual sample $X$; the incentive compatibility will then hold for the sum.
The proof below reworks the one presented in \cite{shnayder2016informed}: \begin{align*} &\quad ~ \mathbb E\left[S(\hat{f}_i(X),f^*(X))\right] \\ &= \mathbb E \left [ Sgn(\Delta^*(\hat{f}_i(X),f^*(X)))-Sgn(\Delta^*(\hat{f}_i(X_{p_1}),f^*(X_{p_2})))\right ]\\ &=\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X) = k, f^*(X) = l) \\ &\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\ &~~~~~-\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X_{p_1}) = k)\cdot \mathbb P( f^*(X_{p_2}) = l)\\ &\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r |f^*_i(X_{p_1})=k) \cdot Sgn(\Delta^*(r,l))\\ &=\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X) = k, f^*(X) = l) \\ &\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\ &~~~~~-\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X) = k)\cdot \mathbb P( f^*(X) = l) \tag{replacing $X_{p}$ with $X$ due to the iid assumption}\\ &\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r |f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\ &=\sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot \sum_{r \in [L]} \mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k)\cdot Sgn(\Delta^*(r,l)) \end{align*}
Note that truthful reporting returns the following expected payment: \begin{align*} &\sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=f^*_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\ =& \sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot Sgn(\Delta^*(k,l)) \tag{Only the corresponding $k$ survives the 3rd summation}\\ =& \sum_{k,l: \Delta^*(k,l) > 0} \Delta^*(k,l) \end{align*}
Because $\sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l)) \in [0,1]$, we conclude that for any other reporting strategy: \[ \sum_{k,l: \Delta^*(k,l) > 0} \Delta^*(k,l)\geq \sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l)), \] completing the proof. \end{proof}
\subsection*{Proof for Theorem \ref{thm:accuracy}}
\begin{proof} For any classifier $f$, the expected score is \begin{align*} &\quad ~ \mathbb E[S(f(X),Y)] \\ &= \mathbb P(f(X) = Y) - \mathbb P(f(X)=1)\mathbb P(Y=1) - \mathbb P(f(X)=2)\mathbb P(Y=2) \quad~\tag{independence between $x_{p1}$ and $x_{p2}$}\\ &= \mathbb P(f(X) = Y) - \mathbb P(f(X)=1)\cdot 0.5 - \mathbb P(f(X)=2)\cdot 0.5 \quad~\tag{\text{equal prior}}\\ &= \mathbb P(f(X) = Y) - 0.5. \end{align*}
The last equality indicates that the higher the accuracy, the higher the expected score given to the agent, completing the proof. \end{proof}
\subsection*{Calibrated CA Scores}
We start by extending the definition of calibration to CA: \begin{definition} We call $S_{\ell}$ \emph{calibrated} w.r.t. the original CA scoring function $S$ if the following condition holds: \begin{align} &\Psi\left(\mathbb E[S(f(X),f^*(X))] - \mathbb E[S(f^*_i(X),f^*(X))]\right)\nonumber \\ &\leq \mathbb E[S_{\ell}(f(X),f^*(X))] - \mathbb E[S_{\ell}(f^*_i(X),f^*(X))], \forall f. \label{calibration:S} \end{align} \end{definition}
Since $S$ induces $f^*_i$ as the maximizer, if $S_{\ell}$ satisfies the calibration property as defined in Definition \ref{def:cal}, we can similarly establish the incentive property of $S_{\ell}(\cdot)$: \begin{theorem} If for a certain loss function $\ell$, $S_{\ell}$ satisfies the calibration condition, then $S_{\ell}$ induces truthful reporting of $f^*_i$.
\end{theorem}
The proof of the above theorem repeats the one for Theorem \ref{thm:calibrate1}; we therefore omit the details. Denote $f^*_{\ell} := \argmin_f R_{\ell}(f)$. A sufficient condition for CA calibration was studied in \cite{liu2020peerloss}:
\begin{theorem}[Theorem 6 of \cite{liu2020peerloss}] $S_{\ell}$ is calibrated if $\mathbb P(Y=1) = 0.5$ and $f^*_{\ell}$ satisfies the following: $ \mathbb E[\ell(f^*_{\ell}(X),-Y)] \geq \mathbb E[\ell(f(X),-Y)],~\forall f. $ \end{theorem}
\subsection*{Proof for Lemma \ref{col:delta}}
\begin{proof} Denote by \begin{align*} &\Delta(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*(X)= l\bigr), ~k,l \in \{1,2\}, \end{align*} i.e., the correlation matrix between $f^*_i$ and the ground truth label $Y$ ($f^*$), and \begin{align*} \Delta^*(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*_j(X)= l\bigr). \end{align*}
We can further expand $\Delta^*$: \begin{align*} \Delta^*(k,l) &= \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*_j(X)= l\bigr)\\ &=\mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l|Y=1\bigr)\cdot \mathbb P(Y=1) \\ &~~~~+ \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l|Y=2\bigr)\cdot \mathbb P(Y=2)\\ &-\mathbb P\bigl(f^*_i(X) = k\bigr) \left(\mathbb P\bigl(f^*_j(X)= l|Y=1\bigr)\cdot \mathbb P(Y=1) + \mathbb P\bigl(f^*_j(X)= l|Y=2\bigr)\cdot \mathbb P(Y=2) \right)\\ &=\mathbb P(f^*_i(X)=k|Y=1)\cdot \mathbb P(f^*_j(X)=l|Y=1)\cdot \mathbb P(Y=1) \\ &~~~~+ \mathbb P(f^*_i(X)=k|Y=1) \cdot \mathbb P(f^*_j(X)=l|Y=2)\cdot \mathbb P(Y=2) \tag*{(by conditional independence)}\\ &-\mathbb P\bigl(f^*_i(X) = k\bigr) \left(\mathbb P\bigl(f^*_j(X)= l|Y=1\bigr)\cdot \mathbb P(Y=1) + \mathbb P\bigl(f^*_j(X)= l|Y=2\bigr)\cdot \mathbb P(Y=2) \right)\\ &=\mathbb P(f^*_j(X)=l|Y=1)\left( \mathbb P(f^*_i(X)=k|Y=1) \cdot \mathbb P(Y=1) - \mathbb P\bigl(f^*_i(X) = k\bigr)\cdot \mathbb P(Y=1) \right) \\ &+\mathbb P(f^*_j(X)=l|Y=2)\left( \mathbb P(f^*_i(X)=k|Y=2) \cdot \mathbb P(Y=2) - \mathbb P\bigl(f^*_i(X) = k\bigr)\cdot \mathbb P(Y=2) \right)\\ &=\mathbb P(f^*_j(X)=l|Y=1) \cdot \Delta(k,1) + \mathbb P(f^*_j(X)=l|Y=2) \cdot \Delta(k,2)\\ &=\left(\mathbb P(f^*_j(X)=l|Y=1)-\mathbb P(f^*_j(X)=l|Y=2)\right) \cdot \Delta(k,1) ~~~~~~\tag*{(because $\Delta(k,1)+\Delta(k,2)=0$)} \end{align*}
If $k=1,l=1$, the above becomes \[ \left(1-\textsf{FNR}(f^*_j) - \textsf{FPR}(f^*_j)\right) \Delta(1,1) > 0. \] If $k=1,l=2$, the above becomes \[ \left(-1+\textsf{FNR}(f^*_j) + \textsf{FPR}(f^*_j)\right) \Delta(1,1) < 0. \] If $k=2,l=1$, the above becomes \[ \left(1-\textsf{FNR}(f^*_j) - \textsf{FPR}(f^*_j)\right) \Delta(2,1) < 0. \] If $k=2,l=2$, the above becomes \[ \left(-1+\textsf{FNR}(f^*_j) + \textsf{FPR}(f^*_j)\right) \Delta(2,1) > 0, \] completing the proof. \end{proof}
\subsection*{Proof for Theorem \ref{thm:pp:accuracy}}
\begin{proof} Short-hand the following error rates: \[ \alpha' = \mathbb P(f'(X)=2|Y=1),~\beta' = \mathbb P(f'(X)=1|Y=2) \] When two classifiers $f, f'$ are conditionally independent given $Y$, we next establish the following fact: \begin{align} \mathbb E [S(f(X),f'(X))] = (1-\alpha' - \beta') \cdot \mathbb E[S(f(X),Y)].~\label{eqn:affine} \end{align} Short-hand $p:=\mathbb P(Y=1)$.
Then \begin{align*} &\mathbb E [S(f(X),f'(X))]\\ =&\mathbb E [\mathbbm{1}(f(X),f'(X))]\\ &-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \tag{second term in CA}\\ =& p \cdot \mathbb E [\mathbbm{1}(f(X),f'(X))|Y=1] + (1-p) \cdot \mathbb E [\mathbbm{1}(f(X),f'(X))|Y=2]\\ &-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \\ =& p \cdot \mathbb E [\mathbbm{1}(f(X),1)|Y=1] \cdot (1-\alpha') +p \cdot \mathbb E [\mathbbm{1}(f(X),2)|Y=1] \cdot \alpha' \tag{by conditional independence}\\ &+ (1-p) \cdot \mathbb E [\mathbbm{1}(f(X),1)|Y=2]\cdot \beta' + (1-p) \cdot \mathbb E[\mathbbm{1}(f(X),2)|Y=2]\cdot (1-\beta') \tag{by conditional independence}\\ &-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \\ =& p \cdot (1-\alpha'-\beta') \cdot \mathbb E [\mathbbm{1}(f(X),1)|Y=1] + (1-p) \cdot (1-\alpha'-\beta') \cdot \mathbb E [\mathbbm{1}(f(X),2)|Y=2] \\ &+ \alpha' \cdot \mathbb E [\mathbbm{1}(f(X),2)] + \beta'\cdot \mathbb E [\mathbbm{1}(f(X),1)] \\ &-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \\ =&(1-\alpha'-\beta') \cdot \mathbb E[\mathbbm{1}(f(X),Y)]\\ &- \mathbb P(f(X)=1) (\mathbb P(f'(X)=1)-\beta') \\ &- \mathbb P(f(X)=2) (\mathbb P(f'(X)=2)-\alpha') \tag{$*$} \end{align*}
In the regrouping we used the law of total expectation, $p\,\mathbb E[\mathbbm{1}(f(X),l)|Y=1]+(1-p)\,\mathbb E[\mathbbm{1}(f(X),l)|Y=2]=\mathbb E[\mathbbm{1}(f(X),l)]$ for $l=1,2$, together with $p\,\mathbb E[\mathbbm{1}(f(X),1)|Y=1]+(1-p)\,\mathbb E[\mathbbm{1}(f(X),2)|Y=2]=\mathbb E[\mathbbm{1}(f(X),Y)]$.
Since \begin{align*} \mathbb P(f'(X)=1) &= \mathbb P(Y=1) \cdot \mathbb P(f'(X)=1|Y=1) + \mathbb P(Y=2) \cdot \mathbb P(f'(X)=1|Y=2) \\ &= p \cdot (1-\alpha') + (1-p) \cdot \beta' \end{align*} we have \[ \mathbb P(f'(X)=1)-\beta' = p \cdot (1-\alpha'-\beta') \] Similarly we have \[ \mathbb P(f'(X)=2)-\alpha'= (1-p) \cdot (1-\alpha'-\beta') \] Then \begin{align*} (*)=&(1-\alpha'-\beta') \cdot \mathbb E[\mathbbm{1}(f(X),Y)] -(1-\alpha'-\beta') \cdot\left(\mathbb P(f(X)=1) \cdot p +\mathbb P(f(X)=2)\cdot (1-p)\right)\\ =&(1-\alpha'-\beta') \cdot \left( \mathbb E[\mathbbm{1}(f(X),Y)] - (\mathbb P(f(X)=1) \cdot p +\mathbb P(f(X)=2)\cdot (1-p))\right)\\ =&(1-\alpha'-\beta') \cdot \mathbb E[S(f(X),Y)]. \tag{definition of CA on $f(X)$ and $Y$} \end{align*}
By condition (iii), replacing $f,f'$ above with $f^*_i,f^*_j$, we know \[ \mathbb E[S(f^*_i(X),f^*_j(X))] = (1-\alpha_j - \beta_j)\mathbb E[S(f^*_i(X),Y)] \] where \[ \alpha_j = \mathbb P(f^*_j(X)=2|Y=1),~\beta_j = \mathbb P(f^*_j(X)=1|Y=2) \] When condition (ii) holds with an identity $Sgn(\Delta^*)$, we know that $f^*_j$ is Bayesian informative, which states exactly that \cite{LC17} \[ 1-\alpha_j - \beta_j > 0 \]
\begin{definition}[Bayesian informativeness \cite{LC17}] \label{def:BI} A classifier $f$ is Bayesian informative w.r.t. $Y$ if \[ 1 - \mathbb P(f(X)=2|Y=1) - \mathbb P(f(X)=1|Y=2) > 0. \] \end{definition}
Further, we have proven that $\mathbb E[S(f(X),Y)]$ rewards accuracy under Condition (i): \begin{align*} &\quad~\mathbb E[S(f^*_i(X),Y)] = \mathbb P(f^*_i(X) = Y) - \mathbb P(f^*_i(X)=1)\mathbb P(Y=1) - \mathbb P(f^*_i(X)=2)\mathbb P(Y=2)\\ &= \mathbb P(f^*_i(X) = Y) - \mathbb P(f^*_i(X)=1)\cdot 0.5 - \mathbb P(f^*_i(X)=2)\cdot 0.5\\ &= \mathbb P(f^*_i(X) = Y) - 0.5, \end{align*} completing the proof. \end{proof}
\subsection*{Proof for Theorem \ref{thm:market:pp1}}
\begin{proof} The proof uses the fact that $\mathbb E [S(f(X),f'(X))] = (1-\alpha' - \beta') \cdot \mathbb E[S(f(X),Y)]$ proved in Theorem \ref{thm:pp:accuracy}, Eqn.
(\ref{eqn:affine}): \begin{align*} \mathbb E[S(\hat{f}_{t}(X),f'(X)) - S(\hat{f}_{t-1}(X),f'(X)) ] = (1-\alpha' - \beta') (\mathbb E[S(\hat{f}_{t}(X),Y) - S(\hat{f}_{t-1}(X),Y)]) \end{align*} Since $f'$ is Bayesian informative, we know that $1-\alpha' - \beta'>0$ (Definition \ref{def:BI}). Using the incentive-compatibility property of $S$ when the market is closed with ground truth $Y$ (note that $S(\hat{f}_{t-1}(X),Y)$ is independent of the report $\hat{f}_t$), we complete the proof. \end{proof}
\subsection*{Peer Prediction Markets}
Our idea for making a robust extension is to pair the market with a separate ``survey'' elicitation process that elicits $C$ redundant hypotheses: \begin{algorithm}[H] \caption{Market for Hypothesis Elicitation}\label{m:main} \begin{algorithmic}[1] \STATE Pay crowdsourcing survey participants using surveys and the CA mechanism; \STATE Randomly draw a hypothesis from the surveys (from the $C$ collected ones) to close the market according to Eqn. (\ref{eqn:crowd:market}), up to a scaling factor $\lambda > 0$. \end{algorithmic} \end{algorithm}
Assume the survey hypotheses are conditionally independent given the ground truth $Y$. Denote by $f'$ a randomly drawn hypothesis from the surveys, and \[ \alpha' := \mathbb P(f'(X) =2|Y=1),~~\beta' := \mathbb P(f'(X)=1|Y=2) \] An agent who participated in both the survey and the market will receive the following: \[ S(\hat{f}_i,f'_{-i}) + \lambda (S(\hat{f}_{t},f') - S(\hat{f}_{t-1},f')) \] Below we establish its incentive property:
\begin{theorem}\label{thm:crowd:market} For $ \delta := \frac{\lambda}{ (1-\alpha'-\beta')\cdot (C+\lambda (C-1))} \underbrace{\longrightarrow}_{C \rightarrow \infty} 0 $, agents have incentives to report a hypothesis that is at most $\delta$ less accurate than the truthful one. \end{theorem}
\begin{proof} The agent can choose to participate in either the crowdsourcing survey or the prediction market, or both. For agents who participated only in the survey, there is clearly no incentive to deviate. This is guaranteed by the incentive property of a proper peer prediction mechanism (in our case, the CA mechanism). For agents who only participated in the market at time $t$, their expected score is as follows: \begin{align*} &\mathbb E[S(\hat{f}_t(X),f'(X))] - \mathbb E[S(\hat{f}_{t-1}(X),f'(X))] \\ =& (1-\alpha' - \beta') \cdot \mathbb E[S(\hat{f}_t(X),Y)] -(1-\alpha' - \beta') \cdot \mathbb E[S(\hat{f}_{t-1}(X),Y)] \\ =& (1-\alpha' - \beta') (\mathbb E[S(\hat{f}_t(X),Y)] - \mathbb E[S(\hat{f}_{t-1}(X),Y)]), \end{align*} where in the above we used Theorem 6, Eqn. (\ref{eqn:affine}). We have then proved the incentive compatibility for this scenario. Now consider an agent who participated in both the survey and the market. Suppose the index of this particular agent is $i$ in the survey pool and $t$ in the market.
By truthfully reporting $f^*_i,f^*_t$ the agent is scored as \begin{align} &\mathbb E[S(f^*_i(X),f^*_{-i}(X))] + \lambda (\mathbb E[S(f^*_t(X),f'(X))] - \mathbb E[S(f^*_{t-1}(X),f'(X))])\nonumber \\ =&\underbrace{\mathbb E[S(f^*_i(X),f^*_{-i}(X))]}_{\text{Term I}} + \lambda \cdot \underbrace{\frac{C-1}{C} \left(\mathbb E[S(f^*_t(X),f^*_{-i})]-\mathbb E[S(f^*_{t-1}(X),f^*_{-i})]\right)}_{\text{Term II: $1-1/C$ probability of seeing another survey }}\nonumber \\ &+\lambda \cdot \underbrace{\frac{1}{C}\cdot (\mathbb E[S(f^*_t(X),f^*_t(X))]-\mathbb E[S(f^*_{t-1}(X),f^*_t(X))])}_{\text{Term III: $1/C$ probability of seeing their own survey hypothesis for closing the market. }}\label{eqn:deviation} \end{align}
We analyze each of the above terms: \squishlist \item[\textbf{Term I}] Due to the incentive-compatibility of CA, the first term $\mathbb E[S(f^*_i(X),f^*_{-i}(X))]$ is maximized by truthful reporting, and the decrease in score with deviation is proportional to\footnote{Here we make the simplification that when the survey population is large enough, $f_{-i}$ (the aggregate classifier obtained by removing one agent) has roughly the same error rates as $f'$.} \[ \mathbb E[S(f(X),f^*_{-i})]=(1-\alpha'-\beta')\cdot \mathbb E[S(f(X),Y)] \] \item[\textbf{Term II}] The second term $\frac{C-1}{C}(\cdot)$ is also maximized by truthful reporting, because it is proportional to $\mathbb E[S(f^*_t(X),Y)]$ (an application of Eqn. (\ref{eqn:affine})): \[ \lambda \cdot \frac{C-1}{C}\cdot \mathbb E[S(f^*_t(X),f^*_{-i})]=\lambda \cdot \frac{C-1}{C} \cdot (1-\alpha'-\beta')\cdot \mathbb E[S(f^*_t(X),Y)] \] \item[\textbf{Term III}] The third term $\frac{1}{C}(\cdot)$ is profitable via deviation. Suppose the agent deviates to reporting some $\hat{f}$ that is $\delta$ less accurate than $f^*_i$: \[ \delta := \mathbb E[\mathbbm{1}(f^*_i(X),Y) - \mathbbm{1}(\hat{f}(X),Y)]. \] Under CA, since $\mathbb E[S(f(X),Y)] = \mathbb P(f(X) = Y) - 0.5$ (from the proof for Theorem \ref{thm:accuracy}), \begin{align*} &\mathbb E[S(\hat{f}(X),\hat{f}(X))]-\mathbb E[S(f^*_{t-1}(X),\hat{f}(X))]\\ =&\mathbb E[\mathbbm{1}(\hat{f}(X),\hat{f}(X)) - \mathbbm{1}(f^*_{t-1}(X),\hat{f}(X))] \\ =&1-\mathbb E[\mathbbm{1}(f^*_{t-1}(X),\hat{f}(X))] \end{align*} We have \[ 0 \leq \lambda \cdot \frac{1}{C}\cdot \left(\mathbb E[S(\hat{f}(X),\hat{f}(X))]-\mathbb E[S(f^*_{t-1}(X),\hat{f}(X))]\right) \leq \frac{\lambda}{C} \] Therefore the possible gain from the third term is bounded by $\frac{\lambda}{C}$. \end{list}
On the other hand, again since $\mathbb E[S(f(X),Y)] = \mathbb P(f(X) = Y) - 0.5$ (from the proof for Theorem \ref{thm:accuracy}), the loss via deviation (from the first two terms in Eqn. (\ref{eqn:deviation})) is lower bounded by \begin{align*} &\left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta') \cdot \mathbb E[S(f^*_i(X),Y) - S(\hat{f}(X),Y)]\\ =&\left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta') \cdot \mathbb E[\mathbbm{1}(f^*_i(X),Y) - \mathbbm{1}(\hat{f}(X),Y)]\\ =&\left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta') \cdot \delta, \end{align*} where the $1$ comes from the survey reward. Therefore, when $C$ is sufficiently large such that \[ \left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta')\cdot \delta \geq \frac{\lambda}{C}, \] i.e., \[ \delta \geq \frac{\lambda}{ (1-\alpha'-\beta')\cdot (C+\lambda (C-1))}, \] the agent has no incentive to report such an $\hat{f}$. \end{proof}
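To illustrate the market mechanics analyzed above, the following is a minimal sketch of the payment received by a market participant at time $t$ when the market is closed against a randomly drawn survey hypothesis $f'$. The CA estimator below is a simplification (a single random pairing in place of per-task sampling), and all function names are ours, not the paper's:

```python
import numpy as np

def ca_zero_one_score(pred_a, pred_b, rng):
    """Empirical 0-1 CA-style score between two label vectors: agreement on
    aligned samples minus agreement on an independent (shuffled) pairing.
    A simplified estimator of S, assumed for illustration only."""
    agree = np.mean(pred_a == pred_b)
    agree_indep = np.mean(rng.permutation(pred_a) == rng.permutation(pred_b))
    return float(agree - agree_indep)

def market_payment(f_t, f_prev, f_ref, X, lam=1.0, seed=0):
    """Payment lam * (S(f_t, f') - S(f_{t-1}, f')) for updating the market
    hypothesis from f_prev to f_t, scored against the reference f_ref drawn
    from the surveys; f_t, f_prev, f_ref are callables returning labels."""
    rng = np.random.default_rng(seed)
    y_t, y_prev, y_ref = f_t(X), f_prev(X), f_ref(X)
    return lam * (ca_zero_one_score(y_t, y_ref, rng)
                  - ca_zero_one_score(y_prev, y_ref, rng))
```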
\subsection*{Proof for Theorem \ref{thm:robust}}
\begin{proof} Denote by $\tilde{\hat{f}}(X)$ the reference classifier randomly drawn from the population. Then the expected score for reporting a hypothesis $f$ is given by: \begin{align*} \mathbb E[S(f(X), \tilde{\hat{f}}(X))] &= (1-\gamma) \cdot \mathbb E[S(f(X), f^*_{1-\gamma}(X))] +\gamma \cdot \mathbb E[S(f(X), f^*_{\gamma}(X))]\\ &=(1-\gamma) \cdot (1-\alpha-\beta)\cdot \mathbb E[S(f(X), Y)] \\ &~~~~+ \gamma \cdot (1-\hat{\alpha}-\hat{\beta}) \cdot \mathbb E[S(f(X), Y)] \end{align*} where $\hat{\alpha},\hat{\beta}$ are the error rates of the adversarial classifier $f^*_{\gamma}$: \[ \hat{\alpha} := \mathbb P(f^*_{\gamma}(X)=2|Y=1),~\hat{\beta} := \mathbb P(f^*_{\gamma}(X)=1|Y=2). \] The second equality is due to applying Theorem 6, Eqn. (\ref{eqn:affine}), to $f^*_{1-\gamma}(X)$ and $f^*_{\gamma}(X)$. Therefore \begin{align*} \mathbb E[S(f(X), \tilde{\hat{f}}(X))] =\left((1-\gamma) \cdot (1-\alpha-\beta) + \gamma \cdot (1-\hat{\alpha}-\hat{\beta})\right) \cdot \mathbb E[S(f(X), Y)] \end{align*} Due to the incentive property of $\mathbb E[S(f(X), Y)]$, a sufficient and necessary condition to remain truthful is \[ (1-\gamma) \cdot (1-\alpha-\beta) + \gamma \cdot (1-\hat{\alpha}-\hat{\beta})> 0 \] Now we prove \[ 1-\hat{\alpha}-\hat{\beta} \geq - (1-\alpha^*-\beta^*) \] This is because the most adversarial classifier cannot be worse than reversing the Bayes optimal classifier: otherwise, if its error rates were higher than those of the reversed Bayes optimal classifier (which has error rates $1-\alpha^*, 1-\beta^*$), \[ \hat{\alpha} > 1-\alpha^*,~\hat{\beta} > 1-\beta^*, \] we could reverse the adversarial classifier to obtain a classifier that performs better than the Bayes optimal one: \[ 1-\hat{\alpha} < \alpha^*,~1-\hat{\beta} < \beta^* \] which is a contradiction! Therefore \[ \hat{\alpha} \leq 1-\alpha^*,~\hat{\beta} \leq 1-\beta^* \Rightarrow 1-\hat{\alpha}-\hat{\beta} \geq - (1-\alpha^*-\beta^*) \] Therefore a sufficient condition is given by \[ \frac{1-\gamma}{\gamma} > \frac{1-\alpha^*-\beta^*}{1-\alpha-\beta} \] \end{proof}
\subsection*{Training hyper-parameters}
We did not tune hyper-parameters for the training process, since we focus on hypothesis elicitation rather than on improving the agents' abilities/performance. We concentrate on different misreport transition matrices as well as misreport rates in the range $[0.0, 0.5]$ during the hypothesis elicitation stage. In the originally submitted version, the machine learning architectures mentioned for agents $\textsf{A}_{S}$ and $\textsf{A}_{W}$ contained a typo. The architectures that we use in our experiments are specified below.
\begin{itemize}
\item \textbf{MNIST}\\ Agent $\textsf{A}_{W}$ is trained on 25000 uniformly sampled training images from the MNIST training dataset. The architecture is LeNet. Agent $\textsf{A}_{S}$ is trained on 25000 uniformly sampled training images (with a different random seed) from the MNIST training dataset. The architecture is a 13-layer CNN. Both agents are trained for 100 epochs. The optimizer is SGD with momentum 0.9 and weight decay 1e-4. The initial learning rate is 0.1 and is multiplied by 0.1 every 20 epochs.
\item \textbf{CIFAR-10}\\ Agent $\textsf{A}_{W}$ is trained on 25000 uniformly sampled training images from the CIFAR-10 training dataset. The architecture is ResNet34. Agent $\textsf{A}_{S}$ is trained on 25000 uniformly sampled training images (with a different random seed) from the CIFAR-10 training dataset.
The architecture is a 13-layer CNN. Both agents are trained for 180 epochs. The optimizer is SGD with momentum 0.9 and weight decay 1e-4. The initial learning rate is 0.1 and is multiplied by 0.1 every 40 epochs.
\item \textbf{Adversarial attack}\\ We use LinfPGDAttack, introduced in AdverTorch~\cite{ding2019advertorch}, to simulate the adversary's untargeted attacks. We adopt an example parameter setting provided by AdverTorch: cross-entropy loss function, eps of 0.15, 40 iterations, maximum clip of 1.0 and minimum clip of 0.0.
\end{itemize}
\subsection*{Misreport models}
\begin{itemize}
\item \textbf{Uniform misreport model}\\ In certain real-world scenarios, an agent refuses to truthfully report the prediction and instead randomly selects a different class as the prediction. We use the uniform misreport transition matrix to simulate this case. In our experiments, we assume the probability of flipping from a given class into any other class is the same: $T_{i,j}=T_{i,k}=e, \forall i\neq j \neq k$. Mathematically, the misreport transition matrix can be expressed as: {\tiny{\[ \begin{bmatrix} 1-9e & e & e & e & e & e & e & e & e & e \\ e & 1-9e & e & e & e & e & e & e & e & e\\ e & e & 1-9e & e & e & e & e & e & e & e \\ e & e & e & 1-9e & e & e & e & e & e & e\\ e & e & e & e & 1-9e & e& e & e & e & e \\ e & e & e & e & e & 1-9e& e & e & e & e \\ e & e & e & e & e & e & 1-9e & e & e & e \\ e & e & e & e & e & e & e& 1-9e & e & e \\ e & e & e & e & e & e & e & e & 1-9e & e\\ e & e & e & e & e & e & e & e & e & 1-9e \end{bmatrix}\]}} We choose the value of $9e$ to be: $[0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]$ for all experiments mentioned in Section \ref{sec:exp}.
\item \textbf{Sparse misreport model}\\ For low-resolution or similar images, an agent is only able to make an ambiguous decision. The agent may doubt whether his prediction will result in a higher score than choosing the other, similar category. Thus, he may purposely report the other category instead of his original prediction. Even if an expert agent is able to distinguish confusing classes, it may still choose not to truthfully report the prediction, since many other agents cannot classify the image successfully. Without ground truth for verification, reporting truthfully and giving the correct prediction may result in a lower score than misreporting. For the sparse misreport transition matrix, we assume $T_{i,j}=T_{j,i}=e, \forall (i, j)$, $i\neq j$. Mathematically, the misreport transition matrix can be expressed as: {\tiny{\[ \begin{bmatrix} 1-e & 0 & e & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1-e & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e \\ e & 0 & 1-e & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1-e & 0 & e & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1-e & 0 & 0 & e & 0 & 0 \\ 0 & 0 & 0 & e & 0 & 1-e & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1-e & 0 & e & 0 \\ 0 & 0 & 0 & 0 & e & 0 & 0 & 1-e & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & e & 0 & 1-e & 0 \\ 0 & e & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1-e \end{bmatrix}\]}} We choose the value of $e$ to be: $[0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]$ for all experiments mentioned in Section \ref{sec:exp}.
\end{itemize}
\subsection*{Evaluation}
In the training stage, we use categorical cross-entropy as our loss function. In the hypothesis elicitation stage, we choose two kinds of reward structures in the CA mechanism: the 0-1 score and the CE score. Let $f^*$ denote the ``optimal'' agent and $f_i$ the agent waiting to be scored.
Mathematically, the 0-1 score (after one-hot encoding the model output probability list) can be expressed as: $$ S(f_i(x_n),f^*(x_n)):=\mathbbm{1}\left(f_i(x_n) = f^*(x_n)\right)-\mathbbm{1}\left(f_i(x_{p_1})=f^*(x_{p_2})\right) $$ The CE score can be expressed as: $$ S_{\ell_{CE}}(f_i(x_n),f^*(x_n)) = -\ell_{CE}(f_i(x_n),f^*(x_n))-(-\ell_{CE}(f_i(x_{p_1}),f^*(x_{p_2}))).~\label{ca:calibrated} $$
\begin{itemize}
\item \textbf{Ground truth for verification}\\ When there are ground truth labels for verification, $f^*(x)$ is equal to the corresponding ground truth label for a test image $x$.
\item \textbf{No ground truth for verification}\\ When there are no ground truth labels for verification, we substitute the other agent's prediction for the ground truth label. Thus, $f^*(x):=f^*_j(x), j\neq i$ for a test image $x$.
\item \textbf{Adversarial attacks and no ground truth verification}\\ To simulate the case of facing a 0.3-fraction of adversaries in the participating population without ground truth for verification, we use LinfPGDAttack to influence the labels for verification. Specifically, given a test image $x$, LinfPGDAttack attacks $x$ and generates a noisy image $x_{attack}$; we then replace the ground truth labels with the agents' weighted predictions. Mathematically, $f^*(x):=0.7\cdot f^*_j(x)+0.3\cdot f^*_j(x_{attack}), j\neq i$ for the test image $x$. The weighting procedure is implemented before the one-hot encoding stage for $f^*_j(x)$ and $f^*_j(x_{attack})$. After that, the one-hot encoded label is considered as the ground truth label for verification.
\end{itemize}
\subsection*{Statistics and central tendency}
\begin{itemize}
\item \textbf{Misreport rate}\\ The statistic ``misreport rate'' in Figures~\ref{Fig:fig1_1},\ref{Fig:fig2} denotes the proportion of misreported labels. For example, in a uniform transition matrix with $T_{i,j}=T_{i,k}=e, \forall i\neq j \neq k$, the misreport rate is $9e$, while in a sparse transition matrix setting, given $T_{i,j}=T_{j,i}=e, \forall (i, j)$, $i\neq j$, the misreport rate is exactly $e$.\\ To see how adversarial attacks affect the elicitation, in Figure~\ref{Fig:fig3} we choose the same misreport transition matrices used in Figures~\ref{Fig:fig1_1},\ref{Fig:fig2} and calculate the misreport rate before applying the adversarial attacks.
\item \textbf{Central tendency}\\ We run each experiment setting 5 times, as shown in Figures~\ref{Fig:fig1_1},\ref{Fig:fig2},\ref{Fig:fig3}. The central line is the mean of the 5 runs. The ``deviation interval'' (error bars) is the maximum absolute score deviation. For example, suppose that in the elicitation-with-ground-truth-verification setting we have 5 runs/scores for a uniform transition matrix with a 0.25 misreport rate: $[0.5, 0.5, 0.2, 0.1, 0.7]$. The mean is 0.4, and the corresponding absolute score deviations are: $[0.1, 0.1, 0.2, 0.3, 0.3]$. The ``deviation interval'' then comes to $0.4\pm 0.3$. Since the number of runs is small, the absolute deviation is no less than the standard deviation in our experiment setting.
\end{itemize}
\subsection*{Computing infrastructure}
In our experiments, we use a GPU cluster (8 TITAN V GPUs and 16 GeForce GTX 1080 GPUs) for training and evaluation.
\section{Experiments}\label{sec:exp}
In this section, we implement two reward structures for CA: the 0-1 score and the Cross-Entropy (CE) score, as mentioned at the end of Section~\ref{sec:reward sturcture}. We experiment on two image classification tasks: MNIST~\cite{mnist} and CIFAR-10~\cite{cifar}.
For agent $\textsf{A}_{W}$ (the weak agent), we choose LeNet~\cite{mnist} and ResNet34~\cite{resnet} for MNIST and CIFAR-10 respectively. For $\textsf{A}_{S}$ (the strong agent), we use a 13-layer CNN architecture for both datasets. Each of them is trained on 25000 randomly sampled images from the corresponding training set. After the training process, agent $\textsf{A}_{W}$ reaches 99.37\% and 62.46\% test accuracy if he truthfully reports the prediction on the MNIST and CIFAR-10 test data. Agent $\textsf{A}_{S}$ is able to reach 99.74\% and 76.89\% test accuracy if the prediction on the MNIST and CIFAR-10 test data is truthfully reported. $\textsf{A}_{W}$ and $\textsf{A}_{S}$ receive hypothesis scores based on the test data $X_{\text{test}}$ (10000 test images) of MNIST or CIFAR-10. For elicitation with verification, we use ground truth labels to calculate the hypothesis score. For elicitation without verification, we replace the ground truth labels with the other agent's prediction: $\textsf{A}_{W}$ serves as $\textsf{A}_{S}$'s peer reference hypothesis and vice versa.
\subsection{Results} \label{sec:exp_1}
Statistically, an agent $i$'s misreported hypothesis can be expressed by a misreport transition matrix $T$. Each element $T_{j,k}$ represents the probability of flipping the truthfully reported label $f^*_i(x) = j$ to the misreported label $\tilde{f}_i(x)=k$: $T_{j,k}=\mathbb P(\tilde{f}_i(X)=k|f^*_i(X) = j)$. Randomly flipping predictions degrades the quality of a classifier. When there is no adversarial attack, we focus on two kinds of misreport transition matrices: a uniform matrix and a sparse matrix. For the uniform matrix, we assume the probability of flipping from a given class into any other class to be the same: $T_{i,j}=T_{i,k}=e, \forall i\neq j\neq k$; $e$ increases gradually from 0 to 0.056 over 10 steps, which results in a 0\%--50\% misreport rate. The sparse matrix focuses on 5 particular pairs of classes that are easily mistaken for each other. Denoting the corresponding transition matrix elements of a class pair $(i,j)$, $i\neq j$, by $(T_{ij}, T_{ji})$, we assume that $T_{ij}=T_{ji}=e$ for all such pairs; $e$ increases gradually from 0 to 0.5 over 10 steps, which results in a 0\%--50\% misreport rate. Every setting is simulated 5 times. The line in each figure consists of the median score of the 5 runs as well as the corresponding ``deviation interval'', which is the maximum absolute score deviation. The $y$-axis shows the score averaged over all test images. As shown in Figures~\ref{Fig:fig1_1},~\ref{Fig:fig2}, in most situations the 0-1 score and CE score of both $\textsf{A}_{W}$ and $\textsf{A}_{S}$ keep decreasing as the misreport rate increases. For the 0-1 score without ground truth verification, the score of either agent begins to fluctuate more when the misreport rate in the sparse misreport model exceeds $35\%$. Our results show that both the 0-1 score and the CE score induce truthful reporting of a hypothesis and penalize misreporting agents, whether or not there is ground truth for verification.
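For concreteness, the following sketch constructs the two misreport transition matrices and samples misreported labels from them. The five class pairs passed to the sparse model are the ones encoded in the sparse matrix displayed in the appendix; the function names are ours:

```python
import numpy as np

def uniform_misreport(num_classes=10, e=0.005):
    """Uniform model: T[i, j] = e for all j != i, so the misreport rate is
    (num_classes - 1) * e."""
    T = np.full((num_classes, num_classes), e)
    np.fill_diagonal(T, 1.0 - (num_classes - 1) * e)
    return T

def sparse_misreport(pairs, num_classes=10, e=0.25):
    """Sparse model: flip only within the given confusable class pairs, each
    with probability e, so the misreport rate is exactly e."""
    T = np.eye(num_classes)
    for i, j in pairs:
        T[i, i] = T[j, j] = 1.0 - e
        T[i, j] = T[j, i] = e
    return T

def misreport(truthful_labels, T, rng):
    """Sample a reported label for each truthful label from the rows of T."""
    return np.array([rng.choice(len(T), p=T[y]) for y in truthful_labels])

# Pairs (0,2), (1,9), (3,5), (4,7), (6,8) match the sparse matrix above.
rng = np.random.default_rng(0)
T = sparse_misreport([(0, 2), (1, 9), (3, 5), (4, 7), (6, 8)], e=0.25)
reports = misreport(np.array([0, 1, 2, 3, 4]), T, rng)
```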
\begin{figure}[t] \centering \subfigure[\scriptsize 0-1 Score, Agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_MNIST_strong1.pdf} \label{Fig: 1a}} \hspace{-0.28in} \subfigure[\scriptsize 0-1 Score, Agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_MNIST_strong2.pdf} \label{Fig: 1b} } \hspace{-0.28in} \subfigure[\scriptsize CE Score, Agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_MNIST_strong1.pdf} \label{Fig: 1c}} \hspace{-0.28in} \subfigure[\scriptsize CE Score, Agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_MNIST_strong2.pdf} \label{Fig: 1d}} \vspace{-5pt} \caption{Hypothesis scores versus misreport rate on MNIST dataset. } \label{Fig:fig1_1} \end{figure}
\begin{figure}[t] \centering \subfigure[\scriptsize 0-1 score, agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_CIFAR_weak.pdf} \label{Fig: 2a}} \hspace{-0.28in} \subfigure[\scriptsize 0-1 score, agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_CIFAR_strong.pdf} \label{Fig: 2b}} \hspace{-0.28in} \subfigure[\scriptsize CE score, agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_CIFAR_weak.pdf} \label{Fig: 2c}} \hspace{-0.28in} \subfigure[\scriptsize CE score, agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_CIFAR_strong.pdf} \label{Fig: 2d}} \vspace{-5pt} \caption{Hypothesis scores versus misreport rate on CIFAR-10 dataset. } \label{Fig:fig2} \end{figure}
\subsection{Elicitation with adversarial attack}
We test the robustness of our mechanism when facing a 0.3-fraction of adversaries in the participating population. We simulate an adversarial agent with LinfPGDAttack, introduced in AdverTorch~\cite{ding2019advertorch}, to influence the labels for verification when there is no ground truth. In Figure~\ref{Fig:fig3}, both the 0-1 score and the CE score induce truthful reporting of a hypothesis for MNIST. However, for CIFAR-10, as the misreport rate increases, the decreasing tendency fluctuates more. Two factors contribute to this phenomenon: the agents' abilities and the quality of the generated ``ground truth'' labels. When the misreport rate is large and the generated labels are of low quality, the probability of successfully matching a misreported label to an incorrect generated label can be much higher than usual. But in general, these two scoring structures incentivize agents to truthfully report their results.
\begin{figure}[ht] \centering \subfigure[\scriptsize 0-1 score, MNIST] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp3_MNIST.pdf} \label{Fig: 3a}} \hspace{-0.28in} \subfigure[\scriptsize CE score, MNIST] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp3_MNIST.pdf} \label{Fig: 3b}} \hspace{-0.28in} \subfigure[\scriptsize 0-1 score, CIFAR] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp3_CIFAR.pdf} \label{Fig: 3c}} \hspace{-0.28in} \subfigure[\scriptsize CE score, CIFAR] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp3_CIFAR.pdf} \label{Fig: 3d}} \vspace{-5pt} \caption{Hypothesis scores versus misreport rate (with adversarial attack).
} \label{Fig:fig3} \end{figure}
\section{Formulation}
Consider the setting with a set $\mathcal{K} = \{ 1, 2, ..., K \}$ of agents, each with a hypothesis $f^*_i \in \mathcal{H}_i$ which maps the feature space $X$ to the label space $\{1, 2,...,L\}:=[L]$. The hypothesis space $\mathcal{H}_i$ is the space of hypotheses accessible to or considered by agent $i$, perhaps as a function of the subsets of $X$ or $Y$ which have been encountered by $i$, or of the agent's available computational power. $f^*_i$ is often obtained following a local optimization process. For example, $f^*_i$ can be defined as the function which minimizes a loss function over an agent's hypothesis space: \[ f^*_i = \argmin_{f_i \sim \mathcal{H}_i} \mathbb{E}_{\mathcal D_i} \left[\mathbbm{1} \Big(f_i(X) \neq Y\Big)~\right] \] where $\mathcal D_i$ is the local distribution that agent $i$ has access to for training and evaluating $f^*_i$. In the federated learning setting, note that $f^*_i$ can also represent the optimal output from a private training algorithm, in which case $\mathcal{H}_i$ would denote a training hypothesis space that encodes a certain level of privacy guarantee. In this paper, we do not discuss the specific ways to make a local hypothesis private\footnote{There exists a variety of definitions of privacy and corresponding solutions for achieving them. Notable solutions include \emph{output perturbation} \cite{chaudhuri2011differentially} or \emph{output sampling} \cite{bassily2014private} to preserve privacy when differential privacy \cite{dwork2006differential} is adopted to quantify the preserved privacy level.}, but rather focus on developing scoring functions to incentivize/elicit this ``private'' and ready-to-be-shared hypothesis.
Suppose the mechanism designer has access to a dataset $D$: $D$ can be a standard training set with pairs of features and labels, $D:=\{(x_n,y_n)\}_{n=1}^N$, or we may be in an unsupervised setting where we do not have labels associated with each sample $x_n$: $D:=\{x_n\}_{n=1}^N$. The \emph{goal} of the mechanism designer is to collect $f^*_i$ truthfully from agent $i$. Denote the reported/contributed hypothesis from agent $i$ as $f_i$\footnote{$f_i$ can be \texttt{none} if users choose not to contribute.}. Each agent will be scored using a function $S$ that takes all reported hypotheses $f_j, \forall j$ and $D$ as inputs: $ S\Big(f_i, \{f_{j \neq i} \}, D\Big) $ such that it is ``proper'' at a Bayesian Nash Equilibrium:
\begin{definition} $S(\cdot)$ is called inducing truthful reporting at a Bayesian Nash Equilibrium if for every agent $i$, assuming for all $j \neq i$, $f_j = f^*_j$ (i.e., every other agent is willing to report their hypotheses truthfully), \[ \mathbb{E}\left[S\Big(f^*_i, \{f^*_{j \neq i} \}, D\Big) \right] \geq \mathbb{E}\left[S\Big(f_i,\{f^*_{j \neq i} \}, D\Big) \right],~~~~\forall f_i, \] where the expectation encodes agent $i$'s belief about $\{f^*_{j \neq i} \}~\text{and}~ D$. \end{definition}
\subsection{Peer prediction} \label{sec:pp}
\emph{Peer prediction} is a technique developed to truthfully elicit information when there is no ground truth verification. Suppose we are interested in eliciting private observations about a categorical event $y \in [L]$ generated according to a random variable $Y$ (in the context of a machine learning task, $Y$ can be thought of as labels). Each of the $K \geq 2$ agents holds a noisy observation of $y$, denoted as $y_i \in [L],\, i \in [K]$.
Again the goal of the mechanism designer is to elicit the $y_i$s, but they are private and we do not have access to the ground truth $Y$ to perform an evaluation. The scoring function $S$ is designed so that truth-telling is a strict Bayesian Nash Equilibrium (implying other agents truthfully report their $y_j$), that is, $\forall i$, $ \mathbb E_{y_j}\left[S\left( y_i, y_j \right)|y_i\right] > \mathbb E_{y_j}\left[S\left(r_i, y_j\right)| y_i\right],~\forall r_i \neq y_i. $
\paragraph{Correlated Agreement} Correlated Agreement (CA) \cite{dasgupta2013crowdsourced,2016arXiv160303151S} is a recently established peer prediction mechanism for a multi-task setting. CA is also the core and the focus of our subsequent sections. This mechanism builds on a $\Delta$ matrix that captures the stochastic correlation between the two sources of predictions $y_i$ and $y_j$. $\Delta \in \mathbb R^{L \times L}$ is then defined as a square matrix with its entries defined as follows: \[ \Delta(k,l) = \mathbb P\bigl(y_i=k,y_j=l\bigr)- \mathbb P\bigl(y_i = k\bigr) \mathbb P\bigl(y_j= l\bigr), ~k,l \in [L]. \] The intuition behind the above $\Delta$ matrix is that each $(k,l)$ entry of $\Delta$ captures the marginal correlation between the two predictions. $Sgn(\Delta)$ denotes the sign matrix of $\Delta$, where $Sgn(x)=1$ if $x > 0$ and $Sgn(x)=0$ otherwise. CA requires each agent $i$ to perform multiple tasks: denote agent $i$'s observations for the $N$ tasks as $y_{i,1},...,y_{i,N}$. Ultimately the scoring function $S(\cdot)$ for each task $k$ that is shared between $i,j$ is defined as follows: randomly draw two other tasks $k_1,k_2$, $k_1 \neq k_2 \neq k$, \begin{align*} S\bigl(y_{i,k},y_{j,k}\bigr) :=& Sgn\bigl(\Delta(y_{i,k}, y_{j,k})\bigr) - Sgn\bigl(\Delta(y_{i,k_1},y_{j,k_2})\bigr). \end{align*} It was established in \cite{2016arXiv160303151S} that CA is truthful and proper (Theorem 5.2, \cite{2016arXiv160303151S})\footnote{To be precise, it is informed truthfulness. We refer interested readers to \cite{2016arXiv160303151S} for the detailed differences.}. If further $ \mathbb P(y_j=y'|y_i = y) < \mathbb P(y_j=y')$ for all $i,j \in [K]$ and $y' \neq y$, then $S(\cdot)$ is strictly truthful (Theorem 4.4, \cite{2016arXiv160303151S}).
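A minimal sketch of the CA mechanism as just defined: we estimate $\Delta$ empirically from the paired reports and score each task by $Sgn(\Delta)$ on the aligned pair minus $Sgn(\Delta)$ on two independently drawn tasks. In practice the mechanism would fix $Sgn(\Delta)$ in advance rather than estimate it from the reports being scored; labels are 0-indexed here and $N \geq 3$ is assumed:

```python
import numpy as np

def delta_matrix(yi, yj, L):
    """Empirical Delta: joint frequency of the paired reports minus the
    product of their marginals, as defined above."""
    joint = np.zeros((L, L))
    for a, b in zip(yi, yj):
        joint[a, b] += 1.0
    joint /= len(yi)
    return joint - np.outer(joint.sum(axis=1), joint.sum(axis=0))

def ca_scores(yi, yj, L, seed=0):
    """Per-task CA scores: Sgn(Delta) on the aligned task k minus Sgn(Delta)
    on two distinct tasks k1 != k2 != k.  A sketch, not the exact estimator
    used in the experiments."""
    rng = np.random.default_rng(seed)
    sgn = (delta_matrix(yi, yj, L) > 0).astype(float)
    N = len(yi)
    out = np.empty(N)
    for k in range(N):
        k1, k2 = rng.choice([t for t in range(N) if t != k],
                            size=2, replace=False)
        out[k] = sgn[yi[k], yj[k]] - sgn[yi[k1], yj[k2]]
    return out
```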
\section{Introduction}
When a company relies on distributed users' data to train a machine learning model, federated learning \cite{mcmahan2016communication,yang2019federated,kairouz2019advances} promotes the idea that users/customers' data should be kept local, and only the locally held/learned hypothesis will be shared/contributed by each user. While federated learning has seen success in keyboard recognition \cite{hard2018federated} and in language modeling \cite{chen2019federated}, existing works have made an implicit assumption that participating users will be willing to contribute their local hypotheses to help the central entity refine the model. Nonetheless, without proper incentives, agents can choose to opt out of the participation, to contribute either uninformative or outdated information, or even to contribute malicious model information. Though being an important question for federated learning \cite{yang2019federated,liu2020fedcoin,hansus,han20}, this capability of providing adequate incentives for user participation has largely been overlooked.
In this paper we ask the question: \emph{Can a machine learning hypothesis be incentivized/elicited by a certain form of scoring rules from self-interested agents?} The availability of a scoring rule will help us design a payment for the elicited hypothesis properly, to motivate the reporting of high-quality ones. The corresponding solutions complement the literature of federated learning by offering a generic template for incentivizing users' participation. We address the challenge by providing a scoring framework to elicit hypotheses truthfully from the self-interested agents/users\footnote{Throughout this paper, we will interchange the use of agents and users.}. More concretely, suppose an agent $i$ has a locally observed hypothesis $f^*_i$. For instance, the hypothesis can come from solving a local problem: $ f^*_i = \argmin_{f_i \sim \mathcal{H}_i} \mathbb{E}_{(X,Y) \sim \mathcal D} [\ell_i \left(f_i(X), Y\right) ] $ according to a certain hypothesis class $\mathcal{H}_i$, a distribution $\mathcal D$, and a loss function $\ell_i$. The goal is to design a scoring function $S(\cdot)$ that takes a reported hypothesis $\hat{f}_i$, and possibly a second input argument (to be defined in the context), such that $ \mathbb E\left[S(f^*_i, \cdot)\right] \geq \mathbb E\left[S(\hat{f}_i, \cdot)\right],\forall \hat{f}_i $, where the expectation is w.r.t. agent $i$'s local belief, which is specified in context. If the above can be achieved, $S(\cdot)$ can serve as the basis of a payment system in federated learning such that agents paid by $S(\cdot)$ will be incentivized to contribute their local models truthfully. In this work, we primarily consider two settings, with arguably increasing difficulties in designing our mechanisms:
\paragraph{With ground truth verification $(X,Y)$} We will start with a relatively easier setting where we as the designer have access to a labeled dataset $\{(x_n,y_n)\}_{n=1}^N$. We will demonstrate how this question is similar to the classical information elicitation problem with strictly proper scoring rules \cite{gneiting2007strictly}, calibrated loss functions \cite{bartlett2006convexity} and peer prediction (information elicitation without verification) \cite{miller2005eliciting}.
\paragraph{With only access to features $X$} The second setting is when we only have $X$ but not the ground truth $Y$. This case is arguably more common in practice, since collecting label annotations requires a substantial amount of effort. For instance, a company may be interested in eliciting/training a classifier for an image classification problem; while it has access to images, it might not have spent effort on collecting labels for the images. We will again present a peer prediction-style solution for this setting.
Besides establishing the desired incentive properties of the scoring rules, we will look into questions such as when the scoring mechanism rewards accurate classifiers, how to build a prediction market-style solution to elicit improving classifiers, as well as our mechanism's robustness against possible collusion. Our work can be viewed both as a contribution to federated learning, by providing incentives for selfish agents to share their hypotheses, and as a contribution to the literature of information elicitation, by studying the problem of hypothesis elicitation. We validate our claims via experiments using the MNIST and CIFAR-10 datasets. All omitted proofs and experiment details can be found in the supplementary materials.
\subsection{Related works}
Due to the space limit, we only briefly survey two related lines of work:
\paragraph{Information elicitation} Our solution concept relates most closely to the literature of information elicitation \cite{Brier:50,Win:69,Savage:71,Matheson:76,Jose:06,Gneiting:07}. Information elicitation primarily focuses on developing scoring rules to incentivize or elicit self-interested agents' private probabilistic beliefs about a private event (e.g., how likely will the COVID-19 death toll reach 100K by May 1?). Relevant to us, \cite{abernethy2011collaborative} provides a market treatment to elicit more accurate classifiers, but the solution requires the designer to have the ground truth labels and the agents to agree on the losses. We provide a more generic solution without the above limitations. A more challenging setting features elicitation without ground truth verification. Peer prediction \cite{Prelec:2004,MRZ:2005,witkowski2012robust,radanovic2013,Witkowski_hcomp13,dasgupta2013crowdsourced,shnayder2016informed,radanovic2016incentives,LC17,kong2019information,liu2020} is among the most popular solution concepts. The core idea of peer prediction is to score each agent based on a reference report elicited from the rest of the agents, and to leverage the stochastic correlation between different agents' information. Most relevant to us is the Correlated Agreement mechanism \cite{dasgupta2013crowdsourced,shnayder2016informed,kong2019information}. We provide a separate discussion of it in Section \ref{sec:pp}.
\paragraph{Federated learning} Federated learning \cite{mcmahan2016communication,hard2018federated,yang2019federated} arose recently as a promising architecture for learning from massive amounts of users' local information without pooling their private data. The existing literature has devoted extensive efforts to making the model sharing process more secure \cite{secure_1, secure_2, secure_3, secure_4, secure_5, bonawitz2016practical}, more efficient \cite{efficient_1,efficient_2,efficient_3,efficient_4,fl:communication, efficient_6, efficient_7}, and more robust \cite{robust_1,robust_2,robust_3,pillutla2019robust} to heterogeneity in the distributed data sources, among many other works. For a more detailed survey, please refer to several thorough ones \cite{yang2019federated,kairouz2019advances}. The incentive issue has been listed as an outstanding problem in federated learning \cite{yang2019federated}. There have been several very recent works touching on the challenge of incentive design in federated learning. \cite{liu2020fedcoin} proposed a currency system for federated learning based on blockchain techniques. \cite{hansus} describes a payoff sharing algorithm that maximizes the system designer's utility, but the solution does not consider the agents' strategic behaviors induced by insufficient incentives. \cite{han20} further added fairness guarantees to the above reward system. We are not aware of a systematic study of truthfulness in incentivizing hypotheses in federated learning, and our work complements the above results by providing an incentive-compatible scoring system for building a payment system for federated learning.
\section*{Appendix}
\subsection*{Proof for Theorem \ref{thm:calibrate1}}
\begin{definition}\label{def:cal} \cite{bartlett2006convexity} A classification-calibrated loss function $\ell$ is defined as follows: there exists a non-decreasing convex function $\Psi(\cdot)$ with $\Psi(0) = 0$ that satisfies $ \Psi\left(R(f) - R^*\right) \leq R_{\ell}(f) - R^*_{\ell}. $
\end{definition}
\begin{proof}
Denote the Bayes risk of a classifier $f$ as $ R(f) := \mathbb{E}_{(X,Y)} \left[\mathbbm{1} \Big(f(X) \neq Y\Big) \right], $ and its minimum as $R^* = \min_{f} R(f)$. The classifier's $\ell$-risk is defined as $ R_{\ell}(f) := \mathbb{E}_{(X,Y)} \left[\ell(f(X),Y) \right]$, with its minimum value $R^*_{\ell} := \min_{f} R_{\ell}(f)$. We prove the theorem by contradiction. Suppose that reporting a hypothesis $f$ returns a higher payoff (i.e., a smaller risk) than reporting $f^*_i$ under $\ell$. Then
\begin{align*}
0 > R_{\ell}(f) - R_{\ell}(f^*_i) \underbrace{=}_{(1)} R_{\ell}(f)-R^*_{\ell} \underbrace{\geq}_{(2)} \Psi\left(R(f) - R^*\right)\underbrace{\geq}_{(3)} \Psi(0) = 0,
\end{align*}
which is a contradiction. In the above, equality (1) is by definition, (2) is due to the calibration condition, and (3) follows from $R(f) \geq R^*$ (the definition of $R^*$) together with $\Psi$ being non-decreasing and $\Psi(0)=0$.
\end{proof}
\subsection*{Proof for Lemma \ref{lemma:delta}}
\begin{proof}
The proof builds essentially on the law of total probability (note that $f^*$ and $Y$ are the same):
\begin{align*}
&\Delta^*(1,1) = \mathbb P(f_{i}^*(X)=1,Y=1) - \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=1)\\
=&\mathbb P(f_{i}^*(X)=1|Y=1) \cdot \mathbb P(Y=1)- \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=1)\\
=&\mathbb P(Y=1)\left(\mathbb P(f_{i}^*(X)=1|Y=1) - \mathbb P(f_{i}^*(X)=1)\right)\\
=&\mathbb P(Y=1)\biggl(\mathbb P(f_{i}^*(X)=1|Y=1) - \mathbb P(Y=1)\mathbb P(f_{i}^*(X)=1|Y=1) \\
&~~~~-\mathbb P(Y=2)\mathbb P(f_{i}^*(X)=1|Y=2)\biggr)\\
=&\mathbb P(Y=1)\bigl(\mathbb P(Y=2) \cdot \mathbb P(f_{i}^*(X)=1|Y=1) - \mathbb P(Y=2)\mathbb P(f_{i}^*(X)=1|Y=2)\bigr)\\
=&\mathbb P(Y=1)\cdot \mathbb P(Y=2) \cdot (\mathbb P(f_{i}^*(X)=1|Y=1)-\mathbb P(f_{i}^*(X)=1|Y=2))\\
=&\mathbb P(Y=1)\cdot \mathbb P(Y=2) \cdot \left(1-\textsf{FNR}(f_{i}^*) - \textsf{FPR}(f_{i}^*)\right) > 0
\end{align*}
Now consider $\Delta^*(1,2)$:
\begin{align*}
&\Delta^*(1,2) = \mathbb P(f_{i}^*(X)=1,Y=2) - \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=2)\\
=&\mathbb P(f_{i}^*(X)=1|Y=2) \cdot \mathbb P(Y=2)- \mathbb P(f_{i}^*(X)=1)\cdot \mathbb P(Y=2)\\
=&\mathbb P(Y=2)\bigl(\mathbb P(f_{i}^*(X)=1|Y=2) - \mathbb P(f_{i}^*(X)=1)\bigr)\\
=&\mathbb P(Y=2)\biggl(\mathbb P(f_{i}^*(X)=1|Y=2) - \mathbb P(Y=1)\mathbb P(f_{i}^*(X)=1|Y=1) \\
&~~~~- \mathbb P(Y=2)\mathbb P(f_{i}^*(X)=1|Y=2)\biggr)\\
=&\mathbb P(Y=2)\bigl(\mathbb P(Y=1) \cdot \mathbb P(f_{i}^*(X)=1|Y=2) - \mathbb P(Y=1)\mathbb P(f_{i}^*(X)=1|Y=1)\bigr)\\
=&\mathbb P(Y=2)\cdot \mathbb P(Y=1) \cdot (\mathbb P(f_{i}^*(X)=1|Y=2)-\mathbb P(f_{i}^*(X)=1|Y=1))\\
=&\mathbb P(Y=1)\cdot \mathbb P(Y=2) \cdot \left(\textsf{FNR}(f_{i}^*) + \textsf{FPR}(f_{i}^*)-1\right) < 0
\end{align*}
The second row of $\Delta^*$, involving $\Delta^*(2,1),\Delta^*(2,2)$, can be argued symmetrically.
\end{proof}
\subsection*{Proof for Theorem \ref{thm:CA:truthful}}
\begin{proof}
Note the following fact:
$$ \mathbb E[S(f,f^*)] := \sum_n \mathbb E[S(f(x_n),f^*(x_n))]. $$
Therefore we can focus on the expected score of an individual sample $X$; the incentive compatibility then holds for the sum.
The proof below is a rework of the one presented in \cite{shnayder2016informed}:
\begin{align*}
&\quad ~ \mathbb E\left[S(\hat{f}_i(X),f^*(X))\right] \\
&= \mathbb E \left [ Sgn(\Delta^*(\hat{f}_i(X),f^*(X)))-Sgn(\Delta^*(\hat{f}_i(X_{p_1}),f^*(X_{p_2})))\right ]\\
&=\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X) = k, f^*(X) = l) \\
&\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\
&~~~~~-\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X_{p_1}) = k)\cdot \mathbb P( f^*(X_{p_2}) = l)\\
&\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X_{p_1})=r |f^*_i(X_{p_1})=k) \cdot Sgn(\Delta^*(r,l))\\
&=\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X) = k, f^*(X) = l) \\
&\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\
&~~~~~-\sum_{k \in [L]} \sum_{l \in [L]}\mathbb P(f^*_i(X) = k)\cdot \mathbb P( f^*(X) = l) \tag{replacing $X_{p}$ with $X$ due to the iid assumption}\\
&\quad\quad \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r |f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\
&=\sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot \sum_{r \in [L]} \mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k)\cdot Sgn(\Delta^*(r,l))
\end{align*}
Note that truthful reporting returns the following expected payment:
\begin{align*}
&\sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=f^*_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l))\\
=& \sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot Sgn(\Delta^*(k,l)) \tag{only the corresponding $k$ survives the 3rd summation}\\
=& \sum_{k,l: \Delta^*(k,l) > 0} \Delta^*(k,l)
\end{align*}
Because $\sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l)) \in [0,1]$, we conclude that for any other reporting strategy:
\[ \sum_{k,l: \Delta^*(k,l) > 0} \Delta^*(k,l)\geq \sum_{k \in [L]} \sum_{l \in [L]} \Delta^*(k,l) \cdot \sum_{r \in [L]}\mathbb P(\hat{f}_i(X)=r|f^*_i(X)=k) \cdot Sgn(\Delta^*(r,l)), \]
completing the proof.
\end{proof}
\subsection*{Proof for Theorem \ref{thm:accuracy}}
\begin{proof}
For any classifier $f$, the expected score is
\begin{align*}
&\quad ~ \mathbb E[S(f(X),Y)] \\
&= \mathbb P(f(X) = Y) - \mathbb P(f(X)=1)\mathbb P(Y=1) - \mathbb P(f(X)=2)\mathbb P(Y=2) \quad~\tag{independence between $x_{p_1}$ and $x_{p_2}$}\\
&= \mathbb P(f(X) = Y) - \mathbb P(f(X)=1)\cdot 0.5 - \mathbb P(f(X)=2)\cdot 0.5 \quad~\tag{\text{equal prior}}\\
&= \mathbb P(f(X) = Y) - 0.5.
\end{align*}
The last equality indicates that the higher the accuracy, the higher the expected score given to the agent, completing the proof.
\end{proof}
\subsection*{Calibrated CA Scores}
We start by extending the definition of calibration to CA:
\begin{definition}
We call $S_{\ell}$ calibrated w.r.t. the original CA scoring function $S$ if the following condition holds:
\begin{align}
&\Psi\left(\mathbb E[S(f(X),f^*(X))] - \mathbb E[S(f^*_i(X),f^*(X))]\right)\nonumber \\
&\leq \mathbb E[S_{\ell}(f(X),f^*(X))] - \mathbb E[S_{\ell}(f^*_i(X),f^*(X))], \forall f. \label{calibration:S}
\end{align}
\end{definition}
Since $S$ induces $f^*_i$ as the maximizer, if $S_{\ell}$ satisfies the calibration property as defined in Definition \ref{def:cal}, we can similarly establish the incentive property of $S_{\ell}(\cdot)$:
\begin{theorem}
If a loss function $\ell$ is such that $S_{\ell}$ satisfies the calibration condition, then $S_{\ell}$ induces truthful reporting of $f^*_i$.
\end{theorem}
The proof of the above theorem repeats the one for Theorem \ref{thm:calibrate1}, therefore we omit the details. Denote $f^*_{\ell} = \argmin_f R_{\ell}(f)$. A sufficient condition for CA calibration was studied in \cite{liu2020peerloss}:
\begin{theorem}[Theorem 6 of \cite{liu2020peerloss}]
$S_{\ell}$ is calibrated if $\mathbb P(Y=1) = 0.5$ and $f^*_{\ell}$ satisfies the following: $ \mathbb E[\ell(f^*_{\ell}(X),-Y)] \geq \mathbb E[\ell(f(X),-Y)],~\forall f. $
\end{theorem}
\subsection*{Proof for Lemma \ref{col:delta}}
\begin{proof}
Denote by
\begin{align*}
&\Delta(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*(X)= l\bigr), ~k,l \in \{1,2\},
\end{align*}
i.e., the correlation matrix defined between $f^*_i$ and the ground truth label $Y$ ($f^*$), and
\begin{align*}
\Delta^*(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*_j(X)= l\bigr).
\end{align*}
$\Delta^*$ can be further derived as
\begin{align*}
\Delta^*(k,l) &= \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*_j(X)= l\bigr)\\
&=\mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l|Y=1\bigr)\cdot \mathbb P(Y=1) \\
&~~~~+ \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l|Y=2\bigr)\cdot \mathbb P(Y=2)\\
&~~~~-\mathbb P\bigl(f^*_i(X) = k\bigr) \left(\mathbb P\bigl(f^*_j(X)= l|Y=1\bigr)\cdot \mathbb P(Y=1) + \mathbb P\bigl(f^*_j(X)= l|Y=2\bigr)\cdot \mathbb P(Y=2) \right)\\
&=\mathbb P(f^*_i(X)=k|Y=1)\cdot \mathbb P(f^*_j(X)=l|Y=1)\cdot \mathbb P(Y=1) \\
&~~~~+ \mathbb P(f^*_i(X)=k|Y=2) \cdot \mathbb P(f^*_j(X)=l|Y=2)\cdot \mathbb P(Y=2) \tag*{(by conditional independence)}\\
&~~~~-\mathbb P\bigl(f^*_i(X) = k\bigr) \left(\mathbb P\bigl(f^*_j(X)= l|Y=1\bigr)\cdot \mathbb P(Y=1) + \mathbb P\bigl(f^*_j(X)= l|Y=2\bigr)\cdot \mathbb P(Y=2) \right)\\
&=\mathbb P(f^*_j(X)=l|Y=1)\left( \mathbb P(f^*_i(X)=k|Y=1) \cdot \mathbb P(Y=1) - \mathbb P\bigl(f^*_i(X) = k\bigr)\cdot \mathbb P(Y=1) \right) \\
&~~~~+\mathbb P(f^*_j(X)=l|Y=2)\left( \mathbb P(f^*_i(X)=k|Y=2) \cdot \mathbb P(Y=2) - \mathbb P\bigl(f^*_i(X) = k\bigr)\cdot \mathbb P(Y=2) \right)\\
&=\mathbb P(f^*_j(X)=l|Y=1) \cdot \Delta(k,1) + \mathbb P(f^*_j(X)=l|Y=2) \cdot \Delta(k,2)\\
&=\left(\mathbb P(f^*_j(X)=l|Y=1)-\mathbb P(f^*_j(X)=l|Y=2)\right) \cdot \Delta(k,1) ~~~~~~\tag*{(because $\Delta(k,1)+\Delta(k,2)=0$)}
\end{align*}
If $k=1,l=1$, the above becomes
\[ \left(1-\textsf{FNR}(f^*_j) - \textsf{FPR}(f^*_j)\right) \Delta(1,1) > 0 \]
If $k=1,l=2$, the above becomes
\[ \left(-1+\textsf{FNR}(f^*_j) + \textsf{FPR}(f^*_j)\right) \Delta(1,1) < 0 \]
If $k=2,l=1$, the above becomes
\[ \left(1-\textsf{FNR}(f^*_j) - \textsf{FPR}(f^*_j)\right) \Delta(2,1) < 0 \]
If $k=2,l=2$, the above becomes
\[ \left(-1+\textsf{FNR}(f^*_j) + \textsf{FPR}(f^*_j)\right) \Delta(2,1) > 0 \]
This completes the proof.
\end{proof}
\subsection*{Proof for Theorem \ref{thm:pp:accuracy}}
\begin{proof}
Shorthand the following error rates:
\[ \alpha' = \mathbb P(f'(X)=2|Y=1),~\beta' = \mathbb P(f'(X)=1|Y=2). \]
When two classifiers $f, f'$ are conditionally independent given $Y$, we first establish the following fact:
\begin{align}
\mathbb E [S(f(X),f'(X))] = (1-\alpha' - \beta') \cdot \mathbb E[S(f(X),Y)].~\label{eqn:affine}
\end{align}
Shorthand $p:=\mathbb P(Y=1)$.
Then
\begin{align*}
&\mathbb E [S(f(X),f'(X))]\\
=&\mathbb E [\mathbbm{1}(f(X),f'(X))]\\
&-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \tag{second term in CA}\\
=& p \cdot \mathbb E [\mathbbm{1}(f(X),f'(X))|Y=1] + (1-p) \cdot \mathbb E [\mathbbm{1}(f(X),f'(X))|Y=2]\\
&-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \\
=& p \cdot \mathbb E [\mathbbm{1}(f(X),1)] \cdot (1-\alpha') +p \cdot \mathbb E [\mathbbm{1}(f(X),2)] \cdot \alpha' \tag{by conditional independence}\\
&+ (1-p) \cdot \mathbb E [\mathbbm{1}(f(X),1)]\cdot \beta' + (1-p) \cdot \mathbb E[\mathbbm{1}(f(X),2)]\cdot (1-\beta') \tag{by conditional independence}\\
&-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \\
=& p \cdot (1-\alpha'-\beta') \cdot \mathbb E [\mathbbm{1}(f(X),1)] + (1-p) \cdot (1-\alpha'-\beta') \cdot \mathbb E [\mathbbm{1}(f(X),2)] \\
&+ \alpha' \cdot \mathbb E [\mathbbm{1}(f(X),2)] + \beta'\cdot \mathbb E [\mathbbm{1}(f(X),1)] \\
&-\mathbb P(f(X)=1)\mathbb P(f'(X)=1)-\mathbb P(f(X)=2)\mathbb P(f'(X)=2) \\
=&(1-\alpha'-\beta') \cdot \mathbb E[\mathbbm{1}(f(X),Y)]\\
&- \mathbb P(f(X)=1) (\mathbb P(f'(X)=1)-\beta') \\
&- \mathbb P(f(X)=2) (\mathbb P(f'(X)=2)-\alpha') \tag{$*$}
\end{align*}
Since
\begin{align*}
\mathbb P(f'(X)=1) &= \mathbb P(Y=1) \cdot \mathbb P(f'(X)=1|Y=1) + \mathbb P(Y=2) \cdot \mathbb P(f'(X)=1|Y=2) \\
&= p \cdot (1-\alpha') + (1-p) \cdot \beta',
\end{align*}
we have
\[ \mathbb P(f'(X)=1)-\beta' = p \cdot (1-\alpha'-\beta'). \]
Similarly we have
\[ \mathbb P(f'(X)=2)-\alpha'= (1-p) \cdot (1-\alpha'-\beta'). \]
Then
\begin{align*}
(*)=&(1-\alpha'-\beta') \cdot \mathbb E[\mathbbm{1}(f(X),Y)] -(1-\alpha'-\beta') \cdot\left(\mathbb P(f(X)=1) \cdot p +\mathbb P(f(X)=2)\cdot (1-p)\right)\\
=&(1-\alpha'-\beta') \cdot \left( \mathbb E[\mathbbm{1}(f(X),Y)] - (\mathbb P(f(X)=1) \cdot p +\mathbb P(f(X)=2)\cdot (1-p))\right)\\
=&(1-\alpha'-\beta') \cdot \mathbb E[S(f(X),Y)]. \tag{definition of CA on $f(X)$ and $Y$}
\end{align*}
By condition (iii), replacing $f,f'$ above with $f^*_i,f^*_j$, we know
\[ \mathbb E[S(f^*_i(X),f^*_j(X))] = (1-\alpha_j - \beta_j)\mathbb E[S(f^*_i(X),Y)], \]
where
\[ \alpha_j = \mathbb P(f^*_j(X)=2|Y=1),~\beta_j = \mathbb P(f^*_j(X)=1|Y=2). \]
Condition (ii), an identity $Sgn(\Delta^*)$, implies that $f^*_j$ is Bayesian informative \cite{LC17}, which states exactly that
\[ 1-\alpha_j - \beta_j > 0. \]
\begin{definition}[Bayesian informativeness \cite{LC17}] \label{def:BI}
A classifier $f$ is Bayesian informative w.r.t. $Y$ if
\[ 1 - \mathbb P(f(X)=2|Y=1) - \mathbb P(f(X)=1|Y=2) > 0. \]
\end{definition}
Further, we have proven that $\mathbb E[S(f(X),Y)]$ rewards accuracy under Condition (i):
\begin{align*}
&\quad~\mathbb E[S(f^*_i(X),Y)] = \mathbb P(f^*_i(X) = Y) - \mathbb P(f^*_i(X)=1)\mathbb P(Y=1) - \mathbb P(f^*_i(X)=2)\mathbb P(Y=2)\\
&= \mathbb P(f^*_i(X) = Y) - \mathbb P(f^*_i(X)=1)\cdot 0.5 - \mathbb P(f^*_i(X)=2)\cdot 0.5\\
&= \mathbb P(f^*_i(X) = Y) - 0.5,
\end{align*}
completing the proof.
\end{proof}
\subsection*{Proof for Theorem \ref{thm:market:pp1}}
\begin{proof}
The proof uses the fact that $\mathbb E [S(f(X),f'(X))] = (1-\alpha' - \beta') \cdot \mathbb E[S(f(X),Y)]$, proved in Theorem \ref{thm:pp:accuracy}, Eqn.
(\ref{eqn:affine}):
\begin{align*}
\mathbb E[S(\hat{f}_{t}(X),f'(X)) - S(\hat{f}_{t-1}(X),f'(X)) ] = (1-\alpha' - \beta') (\mathbb E[S(\hat{f}_{t}(X),Y) - S(\hat{f}_{t-1}(X),Y)])
\end{align*}
Since $f'$ is Bayesian informative, we know that $1-\alpha' - \beta'>0$ (Definition \ref{def:BI}). Using the incentive-compatibility property of $S$ when the market is closed with ground truth $Y$ (note that $S(\hat{f}_{t-1}(X),Y)$ is independent of the report $\hat{f}_t$), we complete the proof.
\end{proof}
\subsection*{Peer Prediction Markets}
Our idea for a robust extension is to pair the market with a separate ``survey'' elicitation process that elicits $C$ redundant hypotheses:
\begin{algorithm}[H]
\caption{Market for Hypothesis Elicitation}\label{m:main}
\begin{algorithmic}[1]
\STATE Pay crowdsourcing survey participants using surveys and the CA mechanism;
\STATE Randomly draw a hypothesis from the surveys (from the $C$ collected ones) to close the market according to Eqn. (\ref{eqn:crowd:market}), up to a scaling factor $\lambda > 0$.
\end{algorithmic}
\end{algorithm}
Assume the survey hypotheses are conditionally independent given the ground truth $Y$. Denote by $f'$ a randomly drawn hypothesis from the surveys, and
\[ \alpha' := \mathbb P(f'(X) =2|Y=1),~~\beta' := \mathbb P(f'(X)=1|Y=2). \]
An agent who participated in both the survey and the market will receive the following:
\[ S(\hat{f}_i,f'_{-i}) + \lambda (S(\hat{f}_{t},f') - S(\hat{f}_{t-1},f')) \]
Below we establish its incentive property:
\begin{theorem}\label{thm:crowd:market}
For $ \delta := \frac{\lambda}{ (1-\alpha'-\beta')\cdot (C+\lambda (C-1))} \underbrace{\longrightarrow}_{C \rightarrow \infty} 0 $, agents have incentives to report a hypothesis that is at most $\delta$ less accurate than the truthful one.
\end{theorem}
\begin{proof}
An agent can choose to participate in either the crowdsourcing survey or the prediction market, or both. For agents who participated only in the survey, there is clearly no incentive to deviate. This is guaranteed by the incentive property of a proper peer prediction mechanism (in our case, the CA mechanism). For agents who only participated in the market at time $t$, the expected score is as follows:
\begin{align*}
&\mathbb E[S(\hat{f}_t(X),f'(X))] - \mathbb E[S(\hat{f}_{t-1}(X),f'(X))] \\
=& (1-\alpha' - \beta') \cdot \mathbb E[S(\hat{f}_t(X),Y)] -(1-\alpha' - \beta') \cdot \mathbb E[S(\hat{f}_{t-1}(X),Y)] \\
=& (1-\alpha' - \beta') (\mathbb E[S(\hat{f}_t(X),Y)] - \mathbb E[S(\hat{f}_{t-1}(X),Y)]),
\end{align*}
where in the above we have used Theorem \ref{thm:pp:accuracy}, Eqn. (\ref{eqn:affine}). This proves incentive compatibility for this scenario. Now consider an agent who participated in both the survey and the market. Suppose the index of this particular agent is $i$ in the survey pool and $t$ in the market.
By truthfully reporting $f^*_i,f^*_t$ the agent is scored as
\begin{align}
&\mathbb E[S(f^*_i(X),f^*_{-i}(X))] + \lambda (\mathbb E[S(f^*_t(X),f'(X))] - \mathbb E[S(f^*_{t-1}(X),f'(X))])\nonumber \\
=&\underbrace{\mathbb E[S(f^*_i(X),f^*_{-i}(X))]}_{\text{Term I}} + \lambda \cdot \underbrace{\frac{C-1}{C} \left(\mathbb E[S(f^*_t(X),f^*_{-i})]-\mathbb E[S(f^*_{t-1}(X),f^*_{-i})]\right)}_{\text{Term II: $1-1/C$ probability of seeing another survey }}\nonumber \\
&+\lambda \cdot \underbrace{\frac{1}{C}\cdot (\mathbb E[S(f^*_t(X),f^*_t(X))]-\mathbb E[S(f^*_{t-1}(X),f^*_t(X))])}_{\text{Term III: $1/C$ probability of seeing their own survey hypothesis for closing the market. }}\label{eqn:deviation}
\end{align}
We analyze each of the above terms:
\squishlist
\item[\textbf{Term I}] Due to the incentive-compatibility of CA, the first term $\mathbb E[S(f^*_i(X),f^*_{-i}(X))]$ is maximized by truthful reporting, and the score of any reported $f$ satisfies\footnote{Here we make a simplification that when the population of the survey is large enough, $f_{-i}$ (the average classifier after removing one agent) has roughly the same error rates as $f'$.}
\[ \mathbb E[S(f(X),f^*_{-i})]= (1-\alpha'-\beta')\cdot \mathbb E[S(f(X),Y)], \]
so the decrease in score from deviating is proportional to the decrease in $\mathbb E[S(f(X),Y)]$.
\item[\textbf{Term II}] The second term ($\frac{C-1}{C}(\cdot)$) is also maximized by truthful reporting, because it is proportional to $\mathbb E[S(f^*_t(X),Y)]$ (an application of Eqn. (\ref{eqn:affine})):
\[ \lambda \cdot \frac{C-1}{C}\cdot \mathbb E[S(f^*_t(X),f^*_{-i})]=\lambda \cdot \frac{C-1}{C} \cdot (1-\alpha'-\beta')\cdot \mathbb E[S(f^*_t(X),Y)] \]
\item[\textbf{Term III}] The third term ($1/C(\cdot)$) can be profitable via deviation. Suppose the agent deviates to reporting some $\hat{f}$ that is $\delta$ less accurate than $f^*_i$:
\[ \delta := \mathbb E[\mathbbm{1}(f^*_i(X),Y) - \mathbbm{1}(\hat{f}(X),Y)]. \]
Under CA,
\begin{align*}
&\mathbb E[S(\hat{f}(X),\hat{f}(X))]-\mathbb E[S(f^*_{t-1}(X),\hat{f}(X))]\\
=&\mathbb E[\mathbbm{1}(\hat{f}(X),\hat{f}(X)) - \mathbbm{1}(f^*_{t-1}(X),\hat{f}(X))] \\
=&1-\mathbb E[\mathbbm{1}(f^*_{t-1}(X),\hat{f}(X))]
\end{align*}
We have
\[ 0 \leq \lambda \cdot \frac{1}{C}\cdot \left(\mathbb E[S(\hat{f}(X),\hat{f}(X))]-\mathbb E[S(f^*_{t-1}(X),\hat{f}(X))]\right) \leq \frac{\lambda}{C} \]
Therefore the possible gain from the third term is bounded by $\frac{\lambda}{C}$.
\end{list}
On the other hand, since $\mathbb E[S(f(X),Y)] = \mathbb P(f(X) = Y) - 0.5$ (from the proof of Theorem \ref{thm:accuracy}), the loss from deviating (in the first two terms of Eqn. (\ref{eqn:deviation})) is lower bounded by
\begin{align*}
&\left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta') \cdot \mathbb E[S(f^*_i(X),Y) - S(\hat{f}(X),Y)]\\
=&\left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta') \cdot \mathbb E[\mathbbm{1}(f^*_i(X),Y) - \mathbbm{1}(\hat{f}(X),Y)]\\
=&\left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta') \cdot \delta,
\end{align*}
where the $1$ comes from the survey reward. Therefore when $C$ is sufficiently large such that
\[ \left(1+\lambda \cdot \frac{C-1}{C} \right)\cdot (1-\alpha'-\beta')\cdot \delta \geq \frac{\lambda}{C}, \]
i.e.,
\[ \delta \geq \frac{\lambda}{ (1-\alpha'-\beta')\cdot (C+\lambda (C-1))}, \]
the agent has no incentive to report such an $\hat{f}$.
\end{proof}
\subsection*{Proof for Theorem \ref{thm:robust}}
\begin{proof}
Denote by $\tilde{\hat{f}}(X)$ the reference classifier randomly drawn from the population. Then the expected score for reporting a hypothesis $f$ is given by:
\begin{align*}
\mathbb E[S(f(X), \tilde{\hat{f}}(X))] &= (1-\gamma) \cdot \mathbb E[S(f(X), f^*_{1-\gamma}(X))] +\gamma \cdot \mathbb E[S(f(X), f^*_{\gamma}(X))]\\
&=(1-\gamma) \cdot (1-\alpha-\beta)\cdot \mathbb E[S(f(X), Y)] \\
&~~~~+ \gamma \cdot (1-\hat{\alpha}-\hat{\beta}) \cdot \mathbb E[S(f(X), Y)]
\end{align*}
where $\hat{\alpha},\hat{\beta}$ are the error rates of the adversarial classifier $f^*_{\gamma}$:
\[ \hat{\alpha} := \mathbb P(f^*_{\gamma}(X)=2|Y=1),~\hat{\beta} := \mathbb P(f^*_{\gamma}(X)=1|Y=2). \]
The second equality is due to applying Theorem \ref{thm:pp:accuracy}, Eqn. (\ref{eqn:affine}) to $f^*_{1-\gamma}(X)$ and $f^*_{\gamma}(X)$. Therefore
\begin{align*}
\mathbb E[S(f(X), \tilde{\hat{f}}(X))] =\left((1-\gamma) \cdot (1-\alpha-\beta) + \gamma \cdot (1-\hat{\alpha}-\hat{\beta})\right) \cdot \mathbb E[S(f(X), Y)]
\end{align*}
Due to the incentive property of $\mathbb E[S(f(X), Y)]$, a sufficient and necessary condition to remain truthful is
\[ (1-\gamma) \cdot (1-\alpha-\beta) + \gamma \cdot (1-\hat{\alpha}-\hat{\beta})> 0. \]
Now we prove
\[ 1-\hat{\alpha}-\hat{\beta} \geq - (1-\alpha^*-\beta^*). \]
This is because the most adversarial classifier cannot be worse than the reversed Bayes optimal classifier: if its error rates were higher than those of the reversed Bayes optimal classifier (which has error rates $1-\alpha^*, 1-\beta^*$), i.e.,
\[ \hat{\alpha} > 1-\alpha^*,~\hat{\beta} > 1-\beta^*, \]
we could reverse the adversarial classifier to obtain a classifier that performs better than the Bayes optimal one:
\[ 1-\hat{\alpha} < \alpha^*,~1-\hat{\beta} < \beta^*, \]
which is a contradiction. Therefore
\[ \hat{\alpha} \leq 1-\alpha^*,~\hat{\beta} \leq 1-\beta^* \Rightarrow 1-\hat{\alpha}-\hat{\beta} \geq - (1-\alpha^*-\beta^*). \]
Therefore a sufficient condition is given by
\[ \frac{1-\gamma}{\gamma} > \frac{1-\alpha^*-\beta^*}{1-\alpha-\beta}. \]
\end{proof}
\subsection*{Training hyper-parameters}
We did not tune hyper-parameters for the training process, since we focus on hypothesis elicitation rather than improving the agents' ability/performance. We concentrate on different misreport transition matrices as well as the misreport rate, which falls in the range $[0.0, 0.5]$, during the hypothesis elicitation stage. In the original submitted version, the machine learning architectures mentioned for agents $\textsf{A}_{S}$ and $\textsf{A}_{W}$ contained a typo. The architectures we use in our experiments are specified below.
\begin{itemize}
\item \textbf{MNIST}\\ Agent $\textsf{A}_{W}$ is trained on 25000 uniformly sampled training images from the MNIST training dataset. The architecture is LeNet. Agent $\textsf{A}_{S}$ is trained on 25000 uniformly sampled training images (with a different random seed) from the MNIST training dataset. The architecture is a 13-layer CNN. Both agents are trained for 100 epochs. The optimizer is SGD with momentum 0.9 and weight decay 1e-4. The initial learning rate is 0.1 and is multiplied by 0.1 every 20 epochs.
\item \textbf{CIFAR-10}\\ Agent $\textsf{A}_{W}$ is trained on 25000 uniformly sampled training images from the CIFAR-10 training dataset. The architecture is ResNet34. Agent $\textsf{A}_{S}$ is trained on 25000 uniformly sampled training images (with a different random seed) from the CIFAR-10 training dataset.
The architecture is a 13-layer CNN. Both agents are trained for 180 epochs. The optimizer is SGD with momentum 0.9 and weight decay 1e-4. The initial learning rate is 0.1 and is multiplied by 0.1 every 40 epochs.
\item \textbf{Adversarial attack}\\ We use LinfPGDAttack, introduced in AdverTorch~\cite{ding2019advertorch}, to simulate untargeted adversarial attacks. We adopt an example parameter setting provided by AdverTorch: cross-entropy loss function, eps of 0.15, 40 iterations, maximum clip 1.0 and minimum clip 0.0.
\end{itemize}
\subsection*{Misreport models}
\begin{itemize}
\item \textbf{Uniform misreport model}\\ In certain real-world scenarios, an agent may refuse to truthfully report his prediction and instead randomly select a different class as the report. We use the uniform misreport transition matrix to simulate this case. In our experiments, we assume the probability of flipping from a given class into each other class is the same: $T_{i,j}=T_{i,k}=e, \forall i\neq j \neq k$. Mathematically, the misreport transition matrix can be expressed as:
{\tiny{\[ \begin{bmatrix} 1-9e & e & e & e & e & e & e & e & e & e \\ e & 1-9e & e & e & e & e & e & e & e & e\\ e & e & 1-9e & e & e & e & e & e & e & e \\ e & e & e & 1-9e & e & e & e & e & e & e\\ e & e & e & e & 1-9e & e& e & e & e & e \\ e & e & e & e & e & 1-9e& e & e & e & e \\ e & e & e & e & e & e & 1-9e & e & e & e \\ e & e & e & e & e & e & e& 1-9e & e & e \\ e & e & e & e & e & e & e & e & 1-9e & e\\ e & e & e & e & e & e & e & e & e & 1-9e \end{bmatrix}\]}}
We choose the value of $9e$ to be: $[0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]$ for all mentioned experiments in Section \ref{sec:exp}.
\item \textbf{Sparse misreport model}\\ For low-resolution or similar images, an agent is only able to make an ambiguous decision. The agent may doubt whether his prediction would result in a higher score than choosing the other, similar category, and may thus purposely report the other class instead of his original prediction. Even if an expert agent is able to distinguish confusing classes, he may still choose not to truthfully report the prediction since many other agents cannot classify the image successfully. Without ground truth for verification, reporting truthfully and giving the correct prediction may result in a lower score than misreporting. For the sparse misreport transition matrix, we assume $T_{i,j}=T_{j,i}=e, \forall (i, j)$, $i\neq j$. Mathematically, the misreport transition matrix can be expressed as:
{\tiny{\[ \begin{bmatrix} 1-e & 0 & e & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1-e & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e \\ e & 0 & 1-e & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1-e & 0 & e & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1-e & 0 & 0 & e & 0 & 0 \\ 0 & 0 & 0 & e & 0 & 1-e & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1-e & 0 & e & 0 \\ 0 & 0 & 0 & 0 & e & 0 & 0 & 1-e & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & e & 0 & 1-e & 0 \\ 0 & e & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1-e \end{bmatrix}\]}}
We choose the value of $e$ to be: $[0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50]$ for all mentioned experiments in Section \ref{sec:exp}.
\end{itemize}
\subsection*{Evaluation}
In the training stage, we use categorical cross-entropy as our loss function for evaluation. In the hypothesis elicitation stage, we choose two kinds of reward structure in the CA mechanism: the 0-1 score and the CE score. Let $f^*$ denote the ``optimal'' agent and $f_i$ denote the agent waiting to be scored.
Mathematically, the 0-1 score (one-hot encoding the model's output probability vector) can be expressed as:
$$ S(f_i(x_n),f^*(x_n)):=\mathbbm{1}\left(f_i(x_n) = f^*(x_n)\right)-\mathbbm{1}\left(f_i(x_{p_1})=f^*(x_{p_2})\right) $$
The CE score can be expressed as:
$$ S_{\ell_{CE}}(f_i(x_n),f^*(x_n)) = -\ell_{CE}(f_i(x_n),f^*(x_n))-(-\ell_{CE}(f_i(x_{p_1}),f^*(x_{p_2}))). $$
\begin{itemize}
\item \textbf{Ground truth for verification}\\ When there are ground truth labels for verification, $f^*(x)$ is equal to the corresponding ground truth label for a test image $x$.
\item \textbf{No ground truth for verification}\\ When there are no ground truth labels for verification, we substitute the other agent's prediction for the ground truth label. Thus, $f^*(x):=f^*_j(x), j\neq i$ for a test image $x$.
\item \textbf{Adversarial attacks and no ground truth verification}\\ To simulate the case of facing a 0.3-fraction of adversaries in the participating population without ground truth for verification, we use LinfPGDAttack to influence the labels used for verification. Specifically, given a test image $x$, LinfPGDAttack will attack $x$ and generate a noisy image $x_{attack}$; we then replace the ground truth label with the weighted agent predictions. Mathematically, $f^*(x):=0.7\cdot f^*_j(x)+0.3\cdot f^*_j(x_{attack}), j\neq i$ for the test image $x$. The weighting procedure is implemented before the one-hot encoding stage for $f^*_j(x)$ and $f^*_j(x_{attack})$. After that, the one-hot encoded label is treated as the ground truth label for verification.
\end{itemize}
\subsection*{Statistics and central tendency}
\begin{itemize}
\item \textbf{Misreport rate}\\ The statistic ``misreport rate'' in Figures~\ref{Fig:fig1_1} and \ref{Fig:fig2} denotes the proportion of misreported labels. For example, in a uniform transition matrix with $T_{i,j}=T_{i,k}=e, \forall i\neq j \neq k$, the misreport rate is $9e$, while in a sparse transition matrix setting, given $T_{i,j}=T_{j,i}=e, \forall (i, j)$, $i\neq j$, the misreport rate is exactly $e$.\\ To see how adversarial attacks affect the elicitation, in Figure~\ref{Fig:fig3} we choose the same misreport transition matrices used in Figures~\ref{Fig:fig1_1} and \ref{Fig:fig2} and calculate the misreport rate before applying the adversarial attacks.
\item \textbf{Central tendency}\\ We run each experiment setting 5 times, as shown in Figures~\ref{Fig:fig1_1}, \ref{Fig:fig2} and \ref{Fig:fig3}. The central line is the mean of the 5 runs. The ``deviation interval'' (error bars) is the maximum absolute score deviation. For example, suppose in the elicitation-with-ground-truth-verification setting we have 5 runs/scores for a uniform transition matrix with a 0.25 misreport rate: $[0.5, 0.5, 0.2, 0.1, 0.7]$. The mean is 0.4 and the corresponding absolute score deviations are: $[0.1, 0.1, 0.2, 0.3, 0.3]$. Then the ``deviation interval'' comes to $0.4\pm 0.3$. Since the number of runs is small, the maximum absolute deviation is no smaller than the standard deviation in our experiment setting.
\end{itemize}
\subsection*{Computing infrastructure}
In our experiments, we use a GPU cluster (8 TITAN V GPUs and 16 GeForce GTX 1080 GPUs) for training and evaluation.
\section{Concluding remarks}
This paper provides an elicitation framework to incentivize contributions of truthful hypotheses in federated learning. We have offered a scoring-rule-based solution template, which we name hypothesis elicitation.
We establish the incentive properties of the proposed scoring mechanisms and test their performance extensively on real-world datasets. We have also looked into the accuracy and robustness of the scoring rules, as well as market approaches for implementing them.
\clearpage \newpage
\bibliographystyle{plain}
\section{Formulation}
Consider the setting with a set $\mathcal{K} = \{ 1, 2, ..., K \}$ of agents, each with a hypothesis $f^*_i \in \mathcal{H}_i$ which maps the feature space $X$ to the label space $Y \in \{1, 2,...,L\}:=[L]$. The hypothesis space $\mathcal{H}_i$ is the space of hypotheses accessible to, or yet considered by, agent $i$, perhaps as a function of the subsets of $X$ or $Y$ which have been encountered by $i$ or the agent's available computational power. $f^*_i$ is often obtained following a local optimization process. For example, $f^*_i$ can be defined as the function which minimizes a loss function over an agent's hypothesis space:
\[ f^*_i = \argmin_{f_i \sim \mathcal{H}_i} \mathbb{E}_{\mathcal D_i} \left[\mathbbm{1} \Big(f_i(X) \neq Y\Big)~\right] \]
where in the above $\mathcal D_i$ is the local distribution that agent $i$ has access to for training and evaluating $f^*_i$. In the federated learning setting, note that $f^*_i$ can also represent the optimal output from a private training algorithm, and $\mathcal{H}_i$ would denote a training hypothesis space that encodes a certain level of privacy guarantees. In this paper, we do not discuss the specific ways to make a local hypothesis private\footnote{There exists a variety of definitions of privacy and corresponding solutions for achieving them. Notable solutions include \emph{output perturbation} \cite{chaudhuri2011differentially} or \emph{output sampling} \cite{bassily2014private} to preserve privacy when differential privacy \cite{dwork2006differential} is adopted to quantify the preserved privacy level.}; rather, we focus on developing scoring functions to incentivize/elicit this ``private'' and ready-to-be-shared hypothesis. Suppose the mechanism designer has access to a dataset $D$: $D$ can be a standard training set with pairs of features and labels, $D:=\{(x_n,y_n)\}_{n=1}^N$, or we may be in an unsupervised setting where we do not have labels associated with each sample: $D:=\{x_n\}_{n=1}^N$. The \emph{goal} of the mechanism designer is to collect $f^*_i$ truthfully from agent $i$. Denote the reported/contributed hypothesis from agent $i$ as $f_i$\footnote{$f_i$ can be \texttt{none} if users choose not to contribute.}. Each agent will be scored using a function $S$ that takes all reported hypotheses $f_j, \forall j$ and $D$ as inputs: $ S\Big(f_i, \{f_{j \neq i} \}, D\Big) $ such that it is ``proper'' at a Bayesian Nash Equilibrium:
\begin{definition}
$S(\cdot)$ is said to induce truthful reporting at a Bayesian Nash Equilibrium if for every agent $i$, assuming $f_j = f^*_j$ for all $j \neq i$ (i.e., every other agent is willing to report their hypotheses truthfully),
\[ \mathbb{E}\left[S\Big(f^*_i, \{f^*_{j \neq i} \}, D\Big) \right] \geq \mathbb{E}\left[S\Big(f_i,\{f^*_{j \neq i} \}, D\Big) \right],~~~~\forall f_i, \]
where the expectation encodes agent $i$'s belief about $\{f^*_{j \neq i} \}~\text{and}~ D$.
\end{definition}
\subsection{Peer prediction}
\label{sec:pp}
\emph{Peer prediction} is a technique developed to truthfully elicit information when there is no ground truth verification.
Suppose we are interested in eliciting private observations about a categorical event $y \in [L]$ generated according to a random variable $Y$ (in the context of a machine learning task, $Y$ can be thought of as labels). Each of the $K \geq 2$ agents holds a noisy observation of $y$, denoted as $y_i \in [L],\, i \in [K]$. Again the goal of the mechanism designer is to elicit the $y_i$s, but they are private and we do not have access to the ground truth $Y$ to perform an evaluation. The scoring function $S$ is designed so that truth-telling is a strict Bayesian Nash Equilibrium (assuming other agents truthfully report their $y_j$), that is, $\forall i$, $ \mathbb E_{y_j}\left[S\left( y_i, y_j \right)|y_i\right] > \mathbb E_{y_j}\left[S\left(r_i, y_j\right)| y_i\right],~\forall r_i \neq y_i. $
\paragraph{Correlated Agreement} Correlated Agreement (CA) \cite{dasgupta2013crowdsourced,2016arXiv160303151S} is a recently established peer prediction mechanism for a multi-task setting. CA is also the core and the focus of our subsequent sections. This mechanism builds on a $\Delta$ matrix that captures the stochastic correlation between the two sources of predictions $y_i$ and $y_j$. $\Delta \in \mathbb R^{L \times L}$ is a square matrix with its entries defined as follows:
\[ \Delta(k,l) = \mathbb P\bigl(y_i=k,y_j=l\bigr)- \mathbb P\bigl(y_i = k\bigr) \mathbb P\bigl(y_j= l\bigr), ~k,l \in [L]. \]
The intuition behind the above $\Delta$ matrix is that each $(k,l)$ entry of $\Delta$ captures the marginal correlation between the two predictions. $Sgn(\Delta)$ denotes the sign matrix of $\Delta$, where $Sgn(x)=1$ if $x > 0$ and $Sgn(x)=0$ otherwise. CA requires each agent $i$ to perform multiple tasks: denote agent $i$'s observations for the $N$ tasks as $y_{i,1},...,y_{i,N}$. Ultimately the scoring function $S(\cdot)$ for each task $k$ that is shared between $i,j$ is defined as follows: randomly draw two other tasks $k_1, k_2$ with $k_1 \neq k_2 \neq k$,
\begin{align*} S\bigl(y_{i,k},y_{j,k}\bigr) :=& Sgn\bigl(\Delta(y_{i,k}, y_{j,k})\bigr) - Sgn\bigl(\Delta(y_{i,k_1},y_{j,k_2})\bigr). \end{align*}
It was established in \cite{2016arXiv160303151S} that CA is truthful and proper (Theorem 5.2, \cite{2016arXiv160303151S})\footnote{To be precise, it is an informed truthfulness. We refer interested readers to \cite{2016arXiv160303151S} for the detailed differences.}. If $ \mathbb P(y_j=y'|y_i = y) < \mathbb P(y_j=y'), \forall i,j \in [K],~y' \neq y, $ then $S(\cdot) $ is strictly truthful (Theorem 4.4, \cite{2016arXiv160303151S}).
\section{Elicitation without verification}
Now we move on to a more challenging setting where we do not have the ground truth label $Y$ to verify the accuracy, or the informativeness, of $f(X)$; i.e., the mechanism designer only has access to $D=\{x_n\}_{n=1}^N$. The main idea of our solution in this section follows straightforwardly from the previous section, but instead of having a ground truth agent $f^*$, for each classifier $f^*_i$ we only have a reference agent $f^*_j$ drawn from the remaining agents $j \neq i$ to score with. The corresponding scoring rule takes the form $S(\hat{f}_i(X),\hat{f}_j(X))$, and similarly the goal is to achieve the following: $ \mathbb E\left[S(f^*_i(X), f^*_j(X))\right] \geq \mathbb E\left[S(f(X), f^*_j(X))\right],~\forall f. $
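As a concrete illustration of the CA scoring rule, the following is a minimal Python sketch in which $\Delta$ is estimated empirically from the two agents' paired reports; the function names and the synthetic data are illustrative assumptions, not part of the formal mechanism.
\begin{verbatim}
import numpy as np

def delta_matrix(yi, yj, L):
    """Empirical Delta: joint frequencies minus the product of marginals."""
    joint = np.zeros((L, L))
    for a, b in zip(yi, yj):
        joint[a, b] += 1
    joint /= len(yi)
    marg_i = joint.sum(axis=1, keepdims=True)
    marg_j = joint.sum(axis=0, keepdims=True)
    return joint - marg_i * marg_j

def ca_scores(yi, yj, L, rng):
    """Per-task CA score: Sgn(Delta) on the shared task minus Sgn(Delta)
    on two distinct, randomly drawn other tasks (the penalty term)."""
    sgn = (delta_matrix(yi, yj, L) > 0).astype(float)
    N = len(yi)
    scores = np.empty(N)
    for k in range(N):
        others = [t for t in range(N) if t != k]
        k1, k2 = rng.choice(others, size=2, replace=False)
        scores[k] = sgn[yi[k], yj[k]] - sgn[yi[k1], yj[k2]]
    return scores

# Toy usage: two noisy copies of a common signal earn a positive mean score.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=500)
noise = lambda y: np.where(rng.random(len(y)) < 0.2, 1 - y, y)
yi, yj = noise(truth), noise(truth)
print(ca_scores(yi, yj, L=2, rng=rng).mean())  # > 0 for informative reports
\end{verbatim}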
As argued before, if we treat $f_i$ and $f_j$ as two agents $\textsf{A}_i$ and $\textsf{A}_j$ holding private information, a properly defined peer prediction scoring function that elicits $\textsf{A}_i$ using $\textsf{A}_j$ will suffice to elicit $f_i$ using $f_j$. Again we will focus on using Correlated Agreement as a running example. Recall that the mechanism builds on a correlation matrix $\Delta^*(f^*_i(X),f^*_j(X))$:
\begin{align*} &\Delta^*(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*_j(X)= l\bigr), ~k,l \in [L] \end{align*}
The mechanism then operates as follows: for each task $x_n$, randomly sample two other tasks $x_{p_1},x_{p_2}$, and pay a reported hypothesis according to
\vspace{-0.05in}
\begin{align} S(\hat{f}_i,\hat{f}_j):= \sum_{n=1}^N Sgn\left(\Delta^*(\hat{f}_i(x_n),\hat{f}_j(x_n))\right)-Sgn\left(\Delta^*(\hat{f}_i(x_{p_1}),\hat{f}_j(x_{p_2}))\right) \end{align}
We reproduce the incentive guarantees:
\begin{theorem} CA mechanism induces truthful reporting of a hypothesis at a Bayesian Nash Equilibrium. \end{theorem}
The proof is similar to that of Theorem \ref{thm:CA:truthful}, so we do not repeat the details in the Appendix. To enable a clean presentation of the analysis, the rest of this section focuses on applying CA to the binary case $L=2$. First, as an extension of Lemma \ref{lemma:delta}, we have:
\begin{lemma} \label{col:delta} If $f^*_i$ and $f^*_j$ are conditionally independent given $Y$, $\textsf{FNR}(f^*_i) + \textsf{FPR}(f^*_i) < 1$ and $\textsf{FNR}(f^*_j) + \textsf{FPR}(f^*_j) < 1$, then $Sgn(\Delta^*)$ is an identity matrix. \end{lemma}
\paragraph{When do we reward accuracy} As mentioned earlier, peer prediction mechanisms in general do not incentivize accuracy. Nonetheless we provide conditions under which they do. The result below holds for binary classification.
\begin{theorem}\label{thm:pp:accuracy} When (i) $\mathbb P(Y=1) = 0.5$, (ii) $Sgn(\Delta^*) = I_{2 \times 2}$, and (iii) $f^*_i(X)$ and $f^*_j(X)$ are conditionally independent given $Y$, the more accurate classifier within each $\mathcal H_i$ receives a higher score in expectation. \end{theorem}
\subsection{Peer Prediction market}
Implementing the above peer prediction mechanism in a market setting is hard, due again to the lack of ground truth verification. Using reference answers collected from other peers to close the market will create incentives for further manipulation. Our first attempt is to crowdsource an independent survey answer and use it to close the market. Denote the survey hypothesis as $f'$ and use $f'$ to close the market:
\begin{align}\label{eqn:crowd:market} S(\hat{f}_{t}(X),f'(X)) - S(\hat{f}_{t-1}(X),f'(X)) \end{align}
\begin{theorem}\label{thm:market:pp1} When the survey hypothesis $f'(X)$ is (i) conditionally independent from the market contributions, and (ii) Bayesian informative, then closing the market using the crowdsourced survey hypothesis is incentive compatible. \end{theorem}
The above mechanism is manipulable in several aspects. In particular, the crowdsourcing process needs to be independent from the market, which implies that survey participants need to stay away from participating in the market, but it is unclear whether this will be the case.
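For reference, the following is a minimal Python sketch of the market update in Eqn. (\ref{eqn:crowd:market}), assuming an identity $Sgn(\Delta^*)$ so that the CA score reduces to its 0-1 agreement form; the function names and the permutation-based penalty term (an empirical stand-in for scoring on randomly paired tasks) are our illustrative assumptions.
\begin{verbatim}
import numpy as np

def agreement_score(f, f_ref, X, rng):
    """CA score with identity Sgn(Delta*): agreement on shared tasks minus
    agreement on randomly paired tasks (approximated via permutations)."""
    preds, refs = f(X), f_ref(X)
    p1, p2 = rng.permutation(len(X)), rng.permutation(len(X))
    return np.mean(preds == refs) - np.mean(preds[p1] == refs[p2])

def market_payment(f_t, f_prev, f_survey, X, rng, scale=1.0):
    """Pay agent t the improvement of their hypothesis over the previous
    one, scored against the crowdsourced survey hypothesis f_survey."""
    return scale * (agreement_score(f_t, f_survey, X, rng)
                    - agreement_score(f_prev, f_survey, X, rng))

# Usage: payment = market_payment(f_new, f_old, f_survey, X_unlabeled, rng)
\end{verbatim}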
In the Appendix we show that by maintaining a survey process that elicits $C>1$ hypotheses, we can further improve the robustness of our mechanisms against agents performing a joint manipulation on both surveys and markets.
\paragraph{Remark} Before we conclude this section, we remark that the above solution for the \emph{without verification} setting also points to a hybrid solution when the designer has access to both sample points with and without ground truth labels. The introduction of the pure peer assessment solution helps reduce the variance of the payment.
\subsection{Robust elicitation}
Running a peer prediction mechanism with verification coming only from peer agents is vulnerable to collusion. In this section we answer the question of how robust our mechanisms are when facing a $\gamma$-fraction of adversaries in the participating population. To instantiate our discussion, consider the following setting:
\squishlist
\item A $(1-\gamma)$-fraction of agents will act truthfully if incentivized properly. Denote the classifier randomly drawn from this $(1-\gamma)$-population as $f^*_{1-\gamma}$.
\item A $\gamma$-fraction of agents are adversarial; their reported hypotheses can be arbitrary and purely adversarial.
\end{list}
Denote the following quantities: $ \alpha := \mathbb P(f^*_{1-\gamma}(X)=2|Y=1),~\beta := \mathbb P(f^*_{1-\gamma}(X)=1|Y=2),~\alpha^* := \mathbb P(f^*(X)=2|Y=1),~\beta^* := \mathbb P(f^*(X)=1|Y=2) $; that is, $\alpha,\beta$ are the error rates of the eliciting classifier $f^*_{1-\gamma}$ while $\alpha^*,\beta^*$ are the error rates of the Bayes optimal classifier. We prove the following:
\begin{theorem}\label{thm:robust} CA is truthful in eliciting hypotheses when facing a $\gamma$-fraction of adversaries, provided $\gamma$ satisfies: $ \frac{1-\gamma}{\gamma} > \frac{1-\alpha^*-\beta^*}{1-\alpha-\beta}. $ \end{theorem}
When the agent believes that the classifier held by the $(1-\gamma)$-crowd is as accurate as the Bayes optimal classifier, we have $ \frac{1-\alpha^*-\beta^*}{1-\alpha-\beta} = \frac{1-\alpha^*-\beta^*}{1-\alpha^*-\beta^*} = 1 $, and a sufficient condition for eliciting truthful reporting is $\gamma < 50\%$; that is, our mechanism is robust to up to half of the population manipulating. Clearly, the more accurate the reference classifier is, the more robust our mechanism is.
\section{Elicitation with verification}
We start by considering the setting where the mechanism designer has access to ground truth labels, i.e., $D=\{(x_n,y_n)\}_{n=1}^N$.
\subsection{A warm-up case: eliciting the Bayes optimal classifier}
As a warm-up, we start with the question of eliciting the Bayes optimal classifier:
\[ f^*_i = \argmin_{f_i} \mathbb{E}_{(X,Y)} \left[\mathbbm{1} \Big(f_i(X) \neq Y\Big) \right]. \]
It is straightforward to observe that, by definition, paying agents $-\mathbbm{1}(\cdot)$ (the negative sign turns a loss into a reward/score), or any affine transformation of it $a\, \mathbbm{1}(\cdot) + b$ with $a<0$, is sufficient to incentivize truthful reporting of the hypothesis. Next we show that any classification-calibrated loss function \cite{bartlett2006convexity} can serve as a proper scoring function for hypothesis elicitation.\footnote{We provide details of the calibration in the proof. Classical examples include the cross-entropy loss, the squared loss, etc.}
\begin{theorem}\label{thm:calibrate1} Any classification-calibrated loss function $\ell(\cdot)$ (paying agents $-\ell(f_i(X), Y)$) induces truthful reporting of the Bayes optimal classifier.
\end{theorem}
\subsection{Eliciting ``any-optimal'' classifiers: a peer prediction approach}
Now consider the case where an agent does not hold the absolute Bayes optimal classifier. Instead, in practice, an agent's local hypothesis will depend on the local observations he has, the privacy level he desires, and the hypothesis space and training method he uses. Suppose agent $i$ holds the following hypothesis $f^*_i$, according to a loss function $\ell_i$ and a hypothesis space $\mathcal{H}_i$: $ f^*_i = \argmin_{f_i \sim \mathcal{H}_i} \mathbb{E}\left[\ell_i \Big(f_i(X),Y\Big) \right]. $ By definition, each specific $\ell_i$ is sufficient to incentivize this hypothesis. However, it is unclear whether $f^*_i$ trained using $\ell_i$ would be optimal according to a universal metric/score. We aim for a more generic approach to elicit the different $f^*_i$s returned from different training procedures and hypothesis classes. In the following sections, we provide a peer prediction approach to do so. We first state the hypothesis elicitation problem as a standard peer prediction problem. The connection is made by first rephrasing the two data sources, the classifiers and the labels, from the agents' perspective. Let's re-interpret the ground truth label $Y$ as an ``optimal'' agent who holds a hypothesis $f^*(X) = Y$. We denote this agent as $\textsf{A}^*$. Each local hypothesis $f^*_i$ that agent $i$ holds can be interpreted as an agent that observes $f^*_i(x_1),...,f^*_i(x_N)$ for a set of randomly drawn feature vectors $x_1,...,x_N$: $ f^*_i(x_n) \sim \textsf{A}_i(X). $ Then a peer prediction mechanism induces truthful reporting if: $ \mathbb E\left[S(f^*_i(X), f^*(X))\right] \geq \mathbb E\left[S(f(X), f^*(X))\right],~\forall f. $
\paragraph{Correlated Agreement for hypothesis elicitation} To be more concrete, consider a specific implementation of a peer prediction mechanism, Correlated Agreement (CA). Recall that the mechanism builds on a correlation matrix $\Delta^*(f^*_i(X),f^*(X))$ defined as follows:
\begin{align*} &\Delta^*(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*(X)= l\bigr), ~k,l \in [L]. \end{align*}
The CA mechanism for hypothesis elicitation is summarized in Algorithm \ref{m:main1}.
\begin{algorithm}
\caption{CA for Hypothesis Elicitation}\label{m:main1}
\begin{algorithmic}[1]
\STATE For each sample $x_n$, randomly sample two other tasks $x_{p_1} \neq x_{p_2} \neq x_n$ to pair with.
\STATE Pay a reported hypothesis $f(\cdot)$ for $x_n$ according to
\begin{align} S(f(x_n),f^*(x_n)):=Sgn\left(\Delta^*(f(x_n),f^*(x_n))\right)-Sgn\left(\Delta^*(f(x_{p_1}),f^*(x_{p_2}))\right) \end{align}
\STATE Total payment to a reported hypothesis $f$: $ S(f,f^*) := \sum_{n=1}^N S(f(x_n),f^*(x_n)).$
\end{algorithmic}
\end{algorithm}
We reproduce the incentive guarantees and the required conditions:
\begin{theorem}\label{thm:CA:truthful} CA mechanism induces truthful reporting of a hypothesis at a Bayesian Nash Equilibrium. \end{theorem}
\paragraph{Knowledge requirement of $\Delta^*$} We'd like to note that knowing the sign of the $\Delta^*$ matrix between $f^*_i$ and $f^*$ is a relatively weak requirement for running the mechanism. For example, for a binary classification task $L=2$, define the following accuracy measures:
\[ \textsf{FNR}(f) := \mathbb P(f(X)=2|Y=1),~\textsf{FPR}(f) := \mathbb P(f(X)=1|Y=2). \]
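As a numerical sanity check of the lemma offered next, the following Python snippet simulates a binary classifier with assumed error rates $\textsf{FNR}=0.2$ and $\textsf{FPR}=0.3$ (so their sum is below 1) under an equal prior, estimates $\Delta^*$ between $f^*_i(X)$ and $Y$, and confirms that its sign matrix is the identity; all names and the sample size are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, fnr, fpr = 200_000, 0.2, 0.3                 # fnr + fpr < 1
y = rng.integers(1, 3, size=N)                  # labels in {1, 2}, equal prior
flip = rng.random(N)
# f has P(f=2 | Y=1) = fnr and P(f=1 | Y=2) = fpr, matching the definitions.
f = np.where(y == 1, np.where(flip < fnr, 2, 1), np.where(flip < fpr, 1, 2))

delta = np.zeros((2, 2))
for k in (1, 2):
    for l in (1, 2):
        delta[k-1, l-1] = (np.mean((f == k) & (y == l))
                           - np.mean(f == k) * np.mean(y == l))
print(np.sign(delta))  # [[ 1. -1.] [-1.  1.]]  ->  Sgn(Delta*) = I
\end{verbatim}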
We offer the following:
\begin{lemma}\label{lemma:delta} For binary classification ($L=2$), if $\textsf{FNR}(f^*_i) + \textsf{FPR}(f^*_i) < 1$, $Sgn(\Delta^*)$ is an identity matrix. \end{lemma}
$\textsf{FNR}(f^*_i) + \textsf{FPR}(f^*_i) < 1$ states that $f^*_i$ is informative about the ground truth label $Y$ \cite{LC17}. Similar conditions can be derived for $L>2$ to guarantee an identity $Sgn(\Delta^*)$. Having identified this simple structure of $Sgn(\Delta^*)$, the CA mechanism for hypothesis elicitation runs in a rather simple manner.
\paragraph{When do we reward accuracy} The above CA mechanism leverages the correlation between a classifier and the ground truth label. Ideally we'd like a mechanism that rewards the accuracy of the contributed classifier. Consider the binary label case:
\begin{theorem}\label{thm:accuracy} When $\mathbb P(Y=1) = 0.5$ (uniform prior) and $Sgn(\Delta^*) = I_{2\times 2}$ is the identity matrix, the more accurate classifier within each $\mathcal H_i$ receives a higher score. \end{theorem}
Note that the above result does not conflict with our incentive claims: in the equal-prior case, misreporting can only reduce a believed-optimal classifier's accuracy, not improve it. It remains an interesting question to understand a more generic set of conditions under which CA will be able to incentivize contributions of more accurate classifiers.
\vspace{-0.05in}
\paragraph{A market implementation} The above scoring mechanism leads to a market implementation \cite{hanson2007logarithmic} that incentivizes improving classifiers. In particular, suppose agents come and participate at discrete time steps $t$. Denote the hypothesis the agent contributes at time step $t$ as $f^*_t$ (and his report as $\hat{f}_t$). The agent at time $t$ will be paid according to $S(\hat{f}_{t}(X),Y) - S(\hat{f}_{t-1}(X),Y),~ $ where $S(\cdot)$ is an incentive-compatible scoring function that elicits $f_t$ truthfully using $Y$. The incentive-compatibility of the market payment is immediate due to $S(\cdot)$. The above market implementation incentivizes improving classifiers with a bounded budget\footnote{Telescoping returns: $ \sum_{t=1}^T \left(S(f_{t}(X),Y) - S(f_{t-1}(X),Y) \right) = S(f_T(X),Y)-S(f_0(X),Y)$.}.
\paragraph{Calibrated CA scores\label{sec:reward sturcture}} When $\Delta^*$ is the identity matrix, the CA mechanism reduces to:
$$ S(f(x_n),f^*(x_n)):=\mathbbm{1}\left(f(x_n) = f^*(x_n)\right)-\mathbbm{1}\left(f(x_{p_1})=f^*(x_{p_2})\right) $$
That is, the reward structure of CA builds on the 0-1 loss. We ask whether we can extend CA to a calibrated version. We define the following loss-calibrated scoring function for CA:
\[\textsf{Calibrated CA:~~~} S_{\ell}(f(x_n),f^*(x_n)) = -\ell(f(x_n),f^*(x_n))-(-\ell(f(x_{p_1}),f^*(x_{p_2}))).~\label{ca:calibrated} \]
Here again we negate the loss $\ell$ to make it a reward (agents will seek to maximize it rather than minimize it). If this extension is possible, not only will we be able to include more scoring functions, but we will also be able to score/verify non-binary classifiers directly. Due to the space limit, we provide positive answers and detailed results in the Appendix, and we present empirical results on the calibrated CA scores in Section \ref{sec:exp}.
\section{Experiments}\label{sec:exp}
In this section, we implement two reward structures of CA: the 0-1 score and the Cross-Entropy (CE) score, as mentioned at the end of Section~\ref{sec:reward sturcture}.
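For reference, the following is a minimal Python sketch of the two per-task reward structures, assuming an identity $Sgn(\Delta^*)$; the function names are ours, and \texttt{probs} is assumed to be the reported class-probability vector.
\begin{verbatim}
import numpy as np

def zero_one_ca(pred, ref, pred_p1, ref_p2):
    """0-1 CA score: agreement on the shared task minus agreement on two
    randomly paired tasks (the penalty term)."""
    return float(pred == ref) - float(pred_p1 == ref_p2)

def ce_ca(probs, ref, probs_p1, ref_p2, eps=1e-12):
    """Calibrated CA with cross-entropy: negated CE on the shared task
    minus negated CE on the paired tasks."""
    ce = lambda p, t: -np.log(p[t] + eps)
    return -ce(probs, ref) + ce(probs_p1, ref_p2)

# Usage: per-task scores are summed over the N evaluation samples.
\end{verbatim}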
We experiment on two image classification tasks: MNIST~\cite{mnist} and CIFAR-10~\cite{cifar}. For agent $\textsf{A}_{W}$ (weak agent), we choose LeNet~\cite{mnist} and ResNet34~\cite{resnet} for MNIST and CIFAR-10 respectively. For $\textsf{A}_{S}$ (strong agent), we use a 13-layer CNN architecture for both datasets. Each of them is trained on 25000 randomly sampled images from the corresponding training set. After the training process, agent $\textsf{A}_{W}$ reaches 99.37\% and 62.46\% test accuracy if it truthfully reports its predictions on the MNIST and CIFAR-10 test data. Agent $\textsf{A}_{S}$ is able to reach 99.74\% and 76.89\% test accuracy if its predictions on the MNIST and CIFAR-10 test data are truthfully reported. $\textsf{A}_{W}$ and $\textsf{A}_{S}$ receive hypothesis scores based on the test data $X_{\text{test}}$ (10000 test images) of MNIST or CIFAR-10. For elicitation with verification, we use ground truth labels to calculate the hypothesis score. For elicitation without verification, we replace the ground truth labels with the other agent's predictions: $\textsf{A}_{W}$ serves as $\textsf{A}_{S}$'s peer reference hypothesis and vice versa.
\subsection{Results}
\label{sec:exp_1}
Statistically, an agent $i$'s misreported hypothesis can be expressed by a misreport transition matrix $T$. Each element $T_{j,k}$ represents the probability of flipping the truthfully reported label $f^*_i(x) = j$ to the misreported label $\tilde{f}_i(x)=k$: $T_{j,k}=\mathbb P(\tilde{f}_i(X)=k|f^*_i(X) = j)$. Randomly flipping predictions degrades the quality of a classifier. When there is no adversarial attack, we focus on two kinds of misreport transition matrix: a uniform matrix or a sparse matrix. For the uniform matrix, we assume the probability of flipping from a given class into each other class is the same: $T_{i,j}=T_{i,k}=e, \forall i\neq j\neq k$. $e$ increases gradually from 0 to 0.056 over 10 increments, which results in a 0\%--50\% misreport rate. The sparse matrix focuses on 5 particular pairs of classes that are easily mistaken for each other. Denoting the corresponding transition matrix elements of a class pair $(i,j)$, $i\neq j$, as $(T_{ij}, T_{ji})$, we assume $T_{ij}=T_{ji}=e$ for all such pairs. $e$ increases gradually from 0 to 0.5 over 10 increments, which results in a 0\%--50\% misreport rate. Every setting is simulated 5 times. The line in each figure consists of the median score of the 5 runs as well as the corresponding ``deviation interval'', which is the maximum absolute score deviation. The y-axis shows the average score over all test images. As shown in Figures~\ref{Fig:fig1_1} and \ref{Fig:fig2}, in most situations, the 0-1 score and CE score of both $\textsf{A}_{W}$ and $\textsf{A}_{S}$ keep decreasing as the misreport rate increases. As for the 0-1 score without ground truth verification, the score of either agent begins to fluctuate more when the misreport rate in the sparse misreport model exceeds 35\%. Our results show that both the 0-1 score and the CE score induce truthful reporting of a hypothesis and penalize misreporting agents, whether there is ground truth for verification or not.
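As a reference for how the misreport models are simulated, the following is a minimal Python sketch of flipping truthful predictions through a transition matrix $T$; the helper names and the example flip rate are illustrative assumptions.
\begin{verbatim}
import numpy as np

def apply_misreport(preds, T, rng):
    """Flip each truthful prediction according to transition matrix T,
    where T[j, k] = P(report k | truthful prediction j)."""
    return np.array([rng.choice(len(T), p=T[j]) for j in preds])

def uniform_T(L, e):
    """Uniform misreport model: off-diagonal mass e for each other class."""
    T = np.full((L, L), e)
    np.fill_diagonal(T, 1 - (L - 1) * e)
    return T

rng = np.random.default_rng(3)
preds = rng.integers(0, 10, size=1000)           # truthful predictions, L = 10
reports = apply_misreport(preds, uniform_T(10, 0.02), rng)
print(np.mean(reports != preds))                 # ~ 9e = 0.18 misreport rate
\end{verbatim}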
\begin{figure}[t] \centering \subfigure[\scriptsize 0-1 Score, Agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_MNIST_strong1.pdf} \label{Fig: 1a}} \hspace{-0.28in} \subfigure[\scriptsize 0-1 Score, Agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_MNIST_strong2.pdf} \label{Fig: 1b} } \hspace{-0.28in} \subfigure[\scriptsize CE Score, Agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_MNIST_strong1.pdf} \label{Fig: 1c}} \hspace{-0.28in} \subfigure[\scriptsize CE Score, Agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_MNIST_strong2.pdf} \label{Fig: 1d}} \vspace{-5pt} \caption{Hypothesis scores versus misreport rate on the MNIST dataset.} \label{Fig:fig1_1} \end{figure} \begin{figure}[t] \centering \subfigure[\scriptsize 0-1 score, agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_CIFAR_weak.pdf} \label{Fig: 2a}} \hspace{-0.28in} \subfigure[\scriptsize 0-1 score, agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp12_CIFAR_strong.pdf} \label{Fig: 2b}} \hspace{-0.28in} \subfigure[\scriptsize CE score, agent $A_{W}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_CIFAR_weak.pdf} \label{Fig: 2c}} \hspace{-0.28in} \subfigure[\scriptsize CE score, agent $A_{S}$] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp12_CIFAR_strong.pdf} \label{Fig: 2d}} \vspace{-5pt} \caption{Hypothesis scores versus misreport rate on the CIFAR-10 dataset.} \label{Fig:fig2} \end{figure} \subsection{Elicitation with adversarial attack} We test the robustness of our mechanism when facing a 0.3-fraction of adversaries in the participating population. We introduce an adversarial agent that uses the LinfPGDAttack from AdverTorch~\cite{ding2019advertorch} to influence the labels used for verification when there is no ground truth. In Figure~\ref{Fig:fig3}, both the 0-1 score and the CE score induce truthful reporting of a hypothesis for MNIST. However, for CIFAR-10, as the misreport rate increases, the decreasing trend fluctuates more. Two factors contribute to this phenomenon: the agents' abilities as well as the quality of the generated ``ground truth'' labels. When the misreport rate is large and the generated labels are of low quality, the probability of successfully matching the misreported label to an incorrect generated label can be much higher than usual. But in general, these two scoring structures incentivize agents to truthfully report their results. \begin{figure}[ht] \centering \subfigure[\scriptsize 0-1 score, MNIST] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp3_MNIST.pdf} \label{Fig: 3a}} \hspace{-0.28in} \subfigure[\scriptsize CE score, MNIST] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp3_MNIST.pdf} \label{Fig: 3b}} \hspace{-0.28in} \subfigure[\scriptsize 0-1 score, CIFAR] {\includegraphics[width=.26\textwidth]{figures/0-1_Score_changes_Exp3_CIFAR.pdf} \label{Fig: 3c}} \hspace{-0.28in} \subfigure[\scriptsize CE score, CIFAR] {\includegraphics[width=.26\textwidth]{figures/CE_Score_changes_Exp3_CIFAR.pdf} \label{Fig: 3d}} \vspace{-5pt} \caption{Hypothesis scores versus misreport rate (with adversarial attack).} \label{Fig:fig3} \end{figure} \section{Elicitation with verification} We start by considering the setting where the mechanism designer has access to ground truth labels, i.e., $D=\{(x_n,y_n)\}_{n=1}^N$.
\subsection{A warm-up case: eliciting the Bayes optimal classifier} As a warm-up, we start with the question of eliciting the Bayes optimal classifier: \[ f^*_i = \argmin_{f_i} \mathbb{E}_{(X,Y)} \left[\mathbbm{1} \Big(f_i(X) \neq Y\Big) \right]. \] It is straightforward to observe that, by definition, paying agents the negated loss $-\mathbbm{1}(\cdot)$ (the negative sign turns a loss into a reward, i.e., a score), or any affine transformation of it $a \mathbbm{1}(\cdot) + b$ with $a<0$, is sufficient to incentivize truthful reporting of the hypothesis. Next we are going to show that any classification-calibrated loss function \cite{bartlett2006convexity} can serve as a proper scoring function for hypothesis elicitation.\footnote{We provide details of the calibration in the proof. Classical examples include the cross-entropy loss, the squared loss, etc.} \begin{theorem}\label{thm:calibrate1} Any classification-calibrated loss function $\ell(\cdot)$ (paying agents $-\ell(f_i(X), Y)$) induces truthful reporting of the Bayes optimal classifier. \end{theorem} \subsection{Eliciting ``any-optimal'' classifier: a peer prediction approach} Now consider the case where an agent does not hold the absolute Bayes optimal classifier. Instead, in practice, an agent's local hypothesis will depend on the local observations available, the desired privacy level, and the hypothesis space and training method in use. Consider that agent $i$ holds the following hypothesis $f^*_i$, according to a loss function $\ell_i$ and a hypothesis space $\mathcal{H}_i$: $ f^*_i = \argmin_{f_i \in \mathcal{H}_i} \mathbb{E}\left[\ell_i \Big(f_i(X),Y\Big) \right]. $ By definition, each specific $\ell_i$ will be sufficient to incentivize the corresponding hypothesis. However, it is unclear whether $f^*_i$ trained using $\ell_i$ would necessarily be optimal according to a universal metric/score. We aim for a more generic approach to elicit the different $f^*_i$s that are returned from different training procedures and hypothesis classes. In the following sections, we provide a peer prediction approach to do so. We first state the hypothesis elicitation problem as a standard peer prediction problem. The connection is made by first rephrasing the two data sources, the classifiers and the labels, from the agents' perspective. Let us re-interpret the ground truth label $Y$ as an ``optimal'' agent who holds a hypothesis $f^*(X) = Y$. We denote this agent as $\textsf{A}^*$. Each local hypothesis $f^*_i$ that agent $i$ holds can be interpreted as an agent that observes $f^*_i(x_1),...,f^*_i(x_N)$ for a set of randomly drawn feature vectors $x_1,...,x_N$: $ f^*_i(x_n) \sim \textsf{A}_i(X). $ Then a peer prediction mechanism induces truthful reporting if: $ \mathbb E\left[S(f^*_i(X), f^*(X))\right] \geq \mathbb E\left[S(f(X), f^*(X))\right],~\forall f. $ \paragraph{Correlated Agreement for hypothesis elicitation} To be more concrete, consider a specific implementation of the peer prediction mechanism, the Correlated Agreement (CA). Recall that the mechanism builds on a correlation matrix $\Delta^*(f^*_i(X),f^*(X))$ defined as follows: \begin{align*} &\Delta^*(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*(X)= l\bigr), ~k,l \in [L]. \end{align*} Then the CA for hypothesis elicitation is summarized in Algorithm \ref{m:main1}. \begin{algorithm} \caption{CA for Hypothesis Elicitation}\label{m:main1} \begin{algorithmic}[1] \STATE For each sample $x_n$, randomly sample two other tasks $x_{p_1} \neq x_{p_2} \neq x_n$ to pair with.
\STATE Pay a reported hypothesis $f(\cdot)$ for $x_n$ according to \begin{align} S(f(x_n),f^*(x_n)):=Sgn\left(\Delta^*(f(x_n),f^*(x_n))\right)-Sgn\left(\Delta^*(f(x_{p_1}),f^*(x_{p_2}))\right) \end{align} \STATE Total payment to a reported hypothesis $f$: $ S(f,f^*) := \sum_{n=1}^N S(f(x_n),f^*(x_n)).$ \end{algorithmic} \end{algorithm} We reproduce the incentive guarantees and the required conditions: \begin{theorem}\label{thm:CA:truthful} The CA mechanism induces truthful reporting of a hypothesis at a Bayesian Nash Equilibrium. \end{theorem} \paragraph{Knowledge requirement of $\Delta^*$} We would like to note that knowing the sign of the $\Delta^*$ matrix between $f^*_i$ and $f^*$ is a relatively weak assumption to have to run the mechanism. For example, for a binary classification task $L=2$, define the following accuracy measures, \[ \textsf{FNR}(f) := \mathbb P(f(X)=2|Y=1),~\textsf{FPR}(f) := \mathbb P(f(X)=1|Y=2). \] We offer the following: \begin{lemma}\label{lemma:delta} For binary classification ($L=2$), if $\textsf{FNR}(f^*_i) + \textsf{FPR}(f^*_i) < 1$, $Sgn(\Delta^*)$ is an identity matrix. \end{lemma} $\textsf{FNR}(f^*_i) + \textsf{FPR}(f^*_i) < 1$ states that $f^*_i$ is informative about the ground truth label $Y$ \cite{LC17}. Similar conditions can be derived for $L>2$ to guarantee an identity $Sgn(\Delta^*)$. Having identified a simple structure of $Sgn(\Delta^*)$, the CA mechanism for hypothesis elicitation runs in a rather simple manner. \paragraph{When do we reward accuracy} The elegance of the above CA mechanism lies in leveraging the correlation between a classifier and the ground truth label. Ideally, we would like a mechanism that rewards the accuracy of the contributed classifier. Consider the binary label case: \begin{theorem}\label{thm:accuracy} When $\mathbb P(Y=1) = 0.5$ (uniform prior) and $Sgn(\Delta^*) = I_{2\times 2}$ is the identity matrix, the more accurate classifier within each $\mathcal H_i$ receives a higher score. \end{theorem} Note that the above result does not conflict with our incentive claims. In an equal prior case, misreporting can only reduce a believed optimal classifier's accuracy rather than increase it. It remains an interesting question to understand a more generic set of conditions under which CA will be able to incentivize contributions of more accurate classifiers. \vspace{-0.05in} \paragraph{A market implementation} The above scoring mechanism leads to a market implementation \cite{hanson2007logarithmic} that incentivizes improving classifiers. In particular, suppose agents come and participate at discrete time steps $t$. Denote the hypothesis contributed by the agent at time step $t$ as $f^*_t$ (and his report as $\hat{f}_t$). The agent at time $t$ will be paid according to $S(\hat{f}_{t}(X),Y) - S(\hat{f}_{t-1}(X),Y),~ $ where $S(\cdot)$ is an incentive-compatible scoring function that elicits $f_t$ truthfully using $Y$. The incentive-compatibility of the market payment is immediate due to $S(\cdot)$. The above market implementation incentivizes improving classifiers with a bounded budget\footnote{Telescoping returns: $ \sum_{t=1}^T \left(S(f_{t}(X),Y) - S(f_{t-1}(X),Y) \right) = S(f_T(X),Y)-S(f_0(X),Y)$.}. \paragraph{Calibrated CA scores\label{sec:reward sturcture}} When $\Delta^*$ is the identity matrix, the CA mechanism reduces to: $$ S(f(x_n),f^*(x_n)):=\mathbbm{1}\left(f(x_n) = f^*(x_n)\right)-\mathbbm{1}\left(f(x_{p_1})=f^*(x_{p_2})\right) $$ That is, the reward structure of CA builds on the 0-1 loss function.
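As a complement to the discussion above, the following is a minimal sketch of how $Sgn(\Delta^*)$ can be estimated from samples and plugged into the CA payment of Algorithm \ref{m:main1}; the convention that $Sgn$ maps positive entries to 1 and all others to 0 is what makes an informative binary classifier yield the identity matrix, and the random pairing again omits the $x_{p_1} \neq x_{p_2} \neq x_n$ constraint for brevity.
\begin{verbatim}
import numpy as np

def sign_delta(f_i, f_ref, L):
    # Empirical Delta*: joint distribution of (f_i, f_ref)
    # minus the product of its marginals; Sgn keeps a 1 where
    # the entry is positive and a 0 elsewhere.
    joint = np.zeros((L, L))
    np.add.at(joint, (f_i, f_ref), 1.0 / len(f_i))
    delta = joint - np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return (delta > 0).astype(float)

def ca_payment(f, f_ref, sgn, rng):
    # Algorithm 1: reward Sgn-agreement on the matched task and
    # subtract Sgn-agreement on two randomly paired tasks.
    n = len(f)
    p1 = rng.integers(0, n, size=n)
    p2 = rng.integers(0, n, size=n)
    return float(np.sum(sgn[f, f_ref] - sgn[f[p1], f_ref[p2]]))
\end{verbatim}
When $Sgn(\Delta^*)$ is the identity matrix, \texttt{ca\_payment} reduces exactly to the 0-1 reward structure above.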
We ask: can we extend CA to a calibrated scoring rule? We define the following loss-calibrated scoring function for CA: \[\textsf{Calibrated CA:~~~} S_{\ell}(f(x_n),f^*(x_n)) = -\ell(f(x_n),f^*(x_n))-(-\ell(f(x_{p_1}),f^*(x_{p_2}))).~\label{ca:calibrated} \] Here again we negate the loss $\ell$ to make it a reward (agents will seek to maximize it instead of minimizing it). If this extension is possible, not only will we be able to include more scoring functions, but we will also be able to score/verify non-binary classifiers directly. Due to space limits, we provide positive answers and detailed results in the Appendix, while we present empirical results on the calibrated scores of CA in Section \ref{sec:exp}. \section{Elicitation without verification} Now we move on to a more challenging setting where we do not have the ground truth label $Y$ to verify the accuracy, or the informativeness, of $f(X)$, i.e., the mechanism designer only has access to $D=\{x_n\}_{n=1}^N$. The main idea of our solution in this section follows straightforwardly from the previous section, but instead of having a ground truth agent $f^*$, for each classifier $f^*_i$ we only have a reference agent $f^*_j$, drawn from the remaining agents $j \neq i$, to score with. The corresponding scoring rule takes the form $S(\hat{f}_i(X),\hat{f}_j(X))$, and similarly the goal is to achieve the following: $ \mathbb E\left[S(f^*_i(X), f^*_j(X))\right] \geq \mathbb E\left[S(f(X), f^*_j(X))\right],~\forall f. $ As argued before, if we treat $f_i$ and $f_j$ as two agents $\textsf{A}_i$ and $\textsf{A}_j$ holding private information, a properly defined peer prediction scoring function that elicits $\textsf{A}_i$ using $\textsf{A}_j$ will suffice to elicit $f_i$ using $f_j$. Again we will focus on using Correlated Agreement as a running example. Recall that the mechanism builds on a correlation matrix $\Delta^*(f^*_i(X),f^*_j(X))$: \begin{align*} &\Delta^*(k,l) = \mathbb P\bigl(f^*_i(X)=k,f^*_j(X)=l\bigr)- \mathbb P\bigl(f^*_i(X) = k\bigr) \mathbb P\bigl(f^*_j(X)= l\bigr), ~k,l \in [L]. \end{align*} The mechanism then operates as follows: for each task $x_n$, randomly sample two other tasks $x_{p_1},x_{p_2}$. Then pay a reported hypothesis according to \vspace{-0.05in} \begin{align} S(\hat{f}_i,\hat{f}_j):= \sum_{n=1}^N Sgn\left(\Delta^*(\hat{f}_i(x_n),\hat{f}_j(x_n))\right)-Sgn\left(\Delta^*(\hat{f}_i(x_{p_1}),\hat{f}_j(x_{p_2}))\right) \end{align} We reproduce the incentive guarantees: \begin{theorem} The CA mechanism induces truthful reporting of a hypothesis at a Bayesian Nash Equilibrium. \end{theorem} The proof is similar to that of Theorem \ref{thm:CA:truthful}, so we do not repeat the details in the Appendix. To enable a clean presentation of the analysis, the rest of this section will focus on using/applying CA for the binary case $L=2$. First, as an extension of Lemma \ref{lemma:delta}, we have: \begin{lemma} \label{col:delta} If $f^*_i$ and $f^*_j$ are conditionally independent given $Y$, $\textsf{FNR}(f^*_i) + \textsf{FPR}(f^*_i) < 1$ and $\textsf{FNR}(f^*_j) + \textsf{FPR}(f^*_j) < 1$, then $Sgn(\Delta^*)$ is an identity matrix. \end{lemma} \paragraph{When do we reward accuracy} As mentioned earlier, in general peer prediction mechanisms do not incentivize accuracy. Nonetheless, we provide conditions under which they do. The result below holds for binary classification.
\begin{theorem}\label{thm:pp:accuracy} When (i) $\mathbb P(Y=1) = 0.5$, (ii) $Sgn(\Delta^*) = I_{2 \times 2}$, and (iii) $f^*_i(X)$ and $f^*_j(X)$ are conditionally independent given $Y$, the more accurate classifier within each $\mathcal H_i$ receives a higher score in expectation. \end{theorem} \subsection{Peer Prediction market} Implementing the above peer prediction scheme in a market setting is hard, again due to the challenge of having no ground truth verification. The use of reference answers collected from other peers to similarly close a market will create incentives for further manipulations. Our first attempt is to crowdsource an independent survey answer and use the survey answer to close the market. Denote the survey hypothesis as $f'$ and use $f'$ to close the market: \begin{align}\label{eqn:crowd:market} S(\hat{f}_{t}(X),f'(X)) - S(\hat{f}_{t-1}(X),f'(X)) \end{align} \begin{theorem}\label{thm:market:pp1} When the survey hypothesis $f'(X)$ is (i) conditionally independent from the market contributions, and (ii) Bayesian informative, then closing the market using the crowdsourced survey hypothesis is incentive-compatible. \end{theorem} The above mechanism is manipulable in several aspects. Particularly, the crowdsourcing process needs to be independent from the market, which implies that the survey participants will need to stay away from participating in the market, but it is unclear whether this will be the case. In the Appendix we show that, by maintaining a survey process that elicits $C>1$ hypotheses, we can further improve the robustness of our mechanisms against agents performing a joint manipulation on both surveys and markets. \paragraph{Remark} Before we conclude this section, we remark that the above solution for the \emph{without verification} setting also points to a hybrid solution when the designer has access to sample points both with and without ground truth labels. The introduction of the pure peer assessment solution helps reduce the variance of the payment. \subsection{Robust elicitation} Running a peer prediction mechanism with verifications coming only from peer agents is vulnerable to collusion. In this section we answer the question of how robust our mechanisms are when facing a $\gamma$-fraction of adversaries in the participating population. To instantiate our discussion, consider the following setting \squishlist \item A $(1-\gamma)$-fraction of agents will act truthfully if incentivized properly. Denote the randomly drawn classifier from this $1-\gamma$ population as $f^*_{1-\gamma}$. \item A $\gamma$-fraction of agents are adversaries, whose reported hypotheses can be arbitrary and purely adversarial. \end{list} Denote the following quantities: $ \alpha := \mathbb P(f^*_{1-\gamma}(X)=2|Y=1),~\beta := \mathbb P(f^*_{1-\gamma}(X)=1|Y=2),~\alpha^* := \mathbb P(f^*(X)=2|Y=1),~\beta^* := \mathbb P(f^*(X)=1|Y=2) $; that is, $\alpha,\beta$ are the error rates for the eliciting classifier $f^*_{1-\gamma}$ while $\alpha^*,\beta^*$ are the error rates for the Bayes optimal classifier. We prove the following: \begin{theorem}\label{thm:robust} CA is truthful in eliciting hypotheses when facing a $\gamma$-fraction of adversaries when $\gamma$ satisfies: $ \frac{1-\gamma}{\gamma} > \frac{1-\alpha^*-\beta^*}{1-\alpha-\beta}.
$ \end{theorem} When the agent believes that the classifier held by the $1-\gamma$ crowd is as accurate as the Bayes optimal classifier, we have $ \frac{1-\alpha^*-\beta^*}{1-\alpha-\beta} = \frac{1-\alpha^*-\beta^*}{1-\alpha^*-\beta^*} = 1 $; then a sufficient condition for eliciting truthful reporting is $\gamma < 50\%$, that is, our mechanism is robust up to half of the population manipulating. Clearly, the more accurate the reference classifier is, the more robust our mechanism is. \section{Introduction} When a company relies on distributed users' data to train a machine learning model, federated learning \cite{mcmahan2016communication,yang2019federated,kairouz2019advances} promotes the idea that users'/customers' data should be kept local, and only the locally held/learned hypothesis will be shared/contributed by each user. While federated learning has seen success in keyboard recognition \cite{hard2018federated} and in language modeling \cite{chen2019federated}, existing works have made an implicit assumption that participating users will be willing to contribute their local hypotheses to help the central entity refine the model. Nonetheless, without proper incentives, agents can choose to opt out of participation, to contribute either uninformative or outdated information, or even to contribute malicious model information. Though it is an important question for federated learning \cite{yang2019federated,liu2020fedcoin,hansus,han20}, this capability of providing adequate incentives for user participation has largely been overlooked. In this paper we ask the question: \emph{Can a machine learning hypothesis be incentivized/elicited by a certain form of scoring rule from self-interested agents?} The availability of such a scoring rule will help us properly design a payment for the elicited hypothesis to motivate the reporting of high-quality ones. The corresponding solutions complement the literature of federated learning by offering a generic template for incentivizing users' participation. We address the challenge by providing a scoring framework to elicit hypotheses truthfully from the self-interested agents/users\footnote{Throughout this paper, we will interchange the use of agents and users.}. More concretely, suppose an agent $i$ has a locally observed hypothesis $f^*_i$. For instance, the hypothesis can come from solving a local problem: $ f^*_i = \argmin_{f_i \in \mathcal{H}_i} \mathbb{E}_{(X,Y) \sim \mathcal D} [\ell_i \left(f_i(X), Y\right) ] $ according to a certain hypothesis class $\mathcal{H}_i$, a distribution $\mathcal D$, and a loss function $\ell_i$. The goal is to design a scoring function $S(\cdot)$ that takes a reported hypothesis $\hat{f}_i$, and possibly a second input argument (to be defined in context), such that $ \mathbb E\left[S(f^*_i, \cdot)\right] \geq \mathbb E\left[S(\hat{f}_i, \cdot)\right],\forall \hat{f}_i $, where the expectation is w.r.t. agent $i$'s local belief, which is specified in context. If the above can be achieved, $S(\cdot)$ can serve as the basis of a payment system in federated learning such that agents paid by $S(\cdot)$ will be incentivized to contribute their local models truthfully. In this work, we primarily consider two settings, with arguably increasing difficulty in designing our mechanisms: \paragraph{With ground truth verification $(X,Y)$} We will start with a relatively easier setting where we, as the designer, have access to a labeled dataset $\{(x_n,y_n)\}_{n=1}^N$.
We will demonstrate how this question is similar to the classical information elicitation problem with strictly proper scoring rules \cite{gneiting2007strictly}, calibrated loss functions \cite{bartlett2006convexity} and peer prediction (information elicitation without verification) \cite{miller2005eliciting}. \paragraph{With only access to features $X$} The second setting is when we only have $X$ but not the ground truth $Y$. This case is arguably more common in practice, since collecting label annotations requires a substantial amount of effort. For instance, a company may be interested in eliciting/training a classifier for an image classification problem. While it has access to images, it might not have spent effort in collecting labels for the images. We will again present a peer prediction-style solution for this setting. Besides establishing the desired incentive properties of the scoring rules, we will look into questions such as when the scoring mechanism rewards accurate classifiers, how to build a prediction market-style solution to elicit improving classifiers, as well as our mechanism's robustness against possible collusion. Our work can be viewed both as a contribution to federated learning via providing incentives for selfish agents to share their hypotheses, and as a contribution to the literature of information elicitation via studying the problem of hypothesis elicitation. We validate our claims via experiments using the MNIST and CIFAR-10 datasets. All omitted proofs and experiment details can be found in the supplementary materials. \subsection{Related works} Due to space limits, we only briefly survey the two related lines of work: \paragraph{Information elicitation} Our solution concept relates most closely to the literature of information elicitation \cite{Brier:50,Win:69,Savage:71,Matheson:76,Jose:06,Gneiting:07}. Information elicitation primarily focuses on the question of developing scoring rules to incentivize or elicit self-interested agents' private probabilistic beliefs about a private event (e.g., how likely will the COVID-19 death toll reach 100K by May 1?). Relevant to us, \cite{abernethy2011collaborative} provides a market treatment to elicit more accurate classifiers, but the solution requires the designer to have the ground truth labels and the agents to agree on the losses. We provide a more generic solution without the above limitations. A more challenging setting features elicitation without ground truth verification. Peer prediction \cite{Prelec:2004,MRZ:2005,witkowski2012robust,radanovic2013,Witkowski_hcomp13,dasgupta2013crowdsourced,shnayder2016informed,radanovic2016incentives,LC17,kong2019information,liu2020} is among the most popular solution concepts. The core idea of peer prediction is to score each agent based on another reference report elicited from the rest of the agents, and to leverage the stochastic correlation between different agents' information. Most relevant to us is the Correlated Agreement mechanism \cite{dasgupta2013crowdsourced,shnayder2016informed,kong2019information}. We provide a separate discussion of it in Section \ref{sec:pp}. \paragraph{Federated learning} Federated learning \cite{mcmahan2016communication,hard2018federated,yang2019federated} arose recently as a promising architecture for learning from massive amounts of users' local information without pooling their private data.
The existing literature has devoted extensive efforts to making the model sharing process more secure \cite{secure_1, secure_2, secure_3, secure_4, secure_5, bonawitz2016practical}, more efficient \cite{efficient_1,efficient_2,efficient_3,efficient_4,fl:communication, efficient_6, efficient_7}, and more robust \cite{robust_1,robust_2,robust_3,pillutla2019robust} to heterogeneity in the distributed data sources, among many other directions. For a more detailed survey, please refer to \cite{yang2019federated,kairouz2019advances}. The incentive issue has been listed as an outstanding problem in federated learning \cite{yang2019federated}. There have been several very recent works touching on the challenge of incentive design in federated learning. \cite{liu2020fedcoin} proposed a currency system for federated learning based on blockchain techniques. \cite{hansus} describes a payoff sharing algorithm that maximizes the system designer's utility, but the solution does not consider the agents' strategic behaviors induced by insufficient incentives. \cite{han20} further added fairness guarantees to the above reward system. We are not aware of a systematic study of truthfulness in incentivizing hypotheses in federated learning, and our work complements the above results by providing an incentive-compatible scoring system for building a payment system for federated learning. \section{Concluding remarks} This paper provides an elicitation framework to incentivize the contribution of truthful hypotheses in federated learning. We have offered a scoring-rule-based solution template which we name hypothesis elicitation. We establish the incentive properties of the proposed scoring mechanisms and have extensively tested their performance with real-world datasets. We have also looked into the accuracy and robustness of the scoring rules, as well as market approaches for implementing them. \clearpage \newpage \bibliographystyle{plain}
\section{INTRODUCTION} The use of Unmanned Aerial Vehicle (UAV) networks is considered essential in a number of communication scenarios \cite{UAV01, UAV02,UAV_rev1,UAV_rev2,UAV_rev3}. In wireless communications, Flying Ad-Hoc Networks (FANETs) composed of multiple UAVs may be adopted to set up a communication network during a natural calamity \cite{UAV01, UAV03}, in defense applications, or to improve coverage as drone cells \cite{UAV01,UAV04}. In a FANET, UAV formations may be split into diverse coalitions according to distinct assignments. In intra-coalition transmission, a drone must communicate with the coalition leader and must also establish communication with neighboring drones to schedule flight missions \cite{UAV051}. Effective data interaction in the UAV coalition is essential to maintain the flight and mission performance \cite{UAV06}. Each UAV cluster-head (coalition leader) collects and delivers data from cluster members to the ground controller \cite{UAV06,UAV05,UAV07}. Nevertheless, it is not easy to cover the whole network with one-hop transmissions because of the transmit power limitations of UAVs. To address this, a key approach is to employ relay-assisted transmission or to modify the positions and optimize the UAV flights \cite{UAV06}. In fact, relaying can enhance the transmission rate and the coverage of systems without altering the UAV formation, which is key in UAV communications \cite{UAV06}. Therefore, relay selection protocols \cite{f411,f78,f80,TCOM,WSA2020,ICASSP2020,f9,f40,tds_cl,f35,armo,badstbc} can be adapted and employed in FANETs, in which some UAVs are used as relays in scenarios with homogeneous or heterogeneous distances and path loss between the UAVs. In this context, the Multi-Way Relay Channel (mRC) \cite{f80} includes the pairwise data exchange model formed by multiple two-way relay channels, which can be used by a pair of UAVs to establish communication with each other in intra-coalition transmissions. The mRC also allows the full data exchange model, where each UAV receives information from the other UAVs. In fact, UAV relaying techniques can be improved by adopting multi-way buffer-aided protocols, where relay nodes can store information in their buffers \cite{f9, f14} before transmitting it to the destination. Moreover, the use of a UAV cluster-head as a central node with the same functions as the cloud in a Cloud Radio Access Network (C-RAN) framework \cite{TCOM,WSA2020,ICASSP2020} may enhance UAV relaying schemes in FANETs. In the C-RAN framework, the baseband processing often carried out at base stations (BSs) is centrally performed at a cloud processor, aided by high-speed links, known as fronthaul links, between the cloud and the BSs \cite{f100}. This centralized processing facilitates interference suppression in wireless links and may also be adopted in FANETs. The BSs in the C-RAN are denoted as remote radio heads (RRHs), as their action is commonly restricted to the transmission and reception of radio signals \cite{f100}. The Maximum Minimum Distance (MMD), Channel-Norm Based (CNB) and Quadratic Norm (QN) relay selection criteria have been studied with maximum likelihood (ML) detection \cite{TCOM,WSA2020,f411}. It is shown in \cite{TCOM,f411} that the MMD criterion minimizes the pairwise error probability (PEP) and, consequently, the bit error rate (BER) in the ML receiver.
However, C-RAN based UAV relaying protocols in FANETs that employ such criteria and a recursive strategy that exploits the time-correlated channels often found in UAV communications have not been previously investigated in the literature. In this work, we develop a C-RAN-type Cluster-Head-Driven-Best-Link (CHD-Best-Link) protocol for multiple-antenna relaying systems with UAVs (FANETs), which chooses the best links among $K$ pairs of UAV sources (SVs) and $N$ UAV relays (RVs), optimizing the PEP, BER, average delay and MMD computation rate performances. We develop ML detectors for the UAV cluster-head and the nodes to detect the signals. We also propose a recursive MMD criterion and develop a relay selection algorithm for CHD-Best-Link, which tracks the evolution of the channels over time and computes the MMD metrics only if the channels have changed considerably. Simulations show the outstanding performance of the CHD-Best-Link protocol as compared to previously studied techniques. Thus, the main contributions of this letter are: \begin{enumerate} \item A C-RAN-type Cluster-Head-Driven framework with joint detection at the UAV cluster-head and the nodes; \item The CHD-Best-Link protocol for multiple-antenna relaying systems with UAVs; \item The recursive MMD relay selection algorithm; \item An analysis of the proposed CHD-Best-Link scheme in terms of PEP, average delay and computational cost. \end{enumerate} This paper is organized as follows. Section II presents the system model and the assumptions. Then, the proposed CHD-Best-Link protocol with the recursive MMD relay selection algorithm is presented in detail and analyzed in Sections III and IV, respectively. Section V presents and examines the simulation results, whereas Section VI draws the conclusions. \section{System Model} The system is a multi-way multiple-antenna Multiple-Access Broadcast-Channel (MABC) relay network composed of $K$ clusters (pairs of SVs $\mathcal{S}_1$ and $\mathcal{S}_2$) and $N$ half-duplex (HD) decode-and-forward (DF) RVs, $\mathcal{R}_1$,...,$\mathcal{R}_N$, where $K$ and $N$ are finite positive integers and $K$ may be different from $N$. The number of pairs of SVs and the number of RVs in the UAV formation depend on the kind of mission. These SVs and RVs may be fixed-wing UAVs, which must keep a continuous forward motion to stay aloft, or rotary-wing UAVs such as quadcopters, which can move in any direction or even hover in the air \cite{UAV02}. In a C-RAN structure, the SVs typify mobile users, the RVs typify RRHs and the UAV cluster-head typifies the cloud. The UAV cluster-head is fixed and has higher processing and buffering capacity than the other UAVs. The SVs have $M_\mathcal{S}$ antennas to transmit or receive, and each RV has $M_r=2U M_\mathcal{S}$ antennas, where $U$ is a finite positive integer, all of them used for reception ($M_{r_{Rx}}=M_r$), while $M_\mathcal{S}$ out of $V M_\mathcal{S}$ antennas are chosen from each RV for transmission ($M_{r_{Tx}}=M_\mathcal{S}$), where $V$ is a finite positive integer and $VM_\mathcal{S} \leq M_r$, composing a spatial multiplexing network. Therefore, the higher $V$, the better the network performance, as it increases the degrees of freedom. Likewise, the higher $U$, the better the network performance, as it increases the number of receive antennas at the RVs. Nevertheless, if $U$ and $V$ are increased, this leads to a higher computational cost.
Thus, there is a trade-off between network performance and computational cost when $U$ and $V$ are increased. The chosen RVs employ $K$ cluster-head buffers to store or extract $M_\mathcal{S}$ packets in each time slot. A cluster-head buffer with a size of $J$ packets is used on demand for each cluster, as illustrated in Fig.~\ref{fig:model}. In the uplink (MA phase), a cluster is chosen to transmit $M_\mathcal{S}$ packets to a chosen RV $\mathcal{R}_g$ for reception. Then, the signal is decoded by the cluster-head processor, XOR-type physical-layer network coding (PLNC) is performed on the decoded data and the resulting symbols are stored in their cluster-head buffers. In the downlink (BC phase), two RVs $\mathcal{R}_{f1}$ and $\mathcal{R}_{f2}$ are chosen to send $M_\mathcal{S}$ packets from the particular cluster-head buffer to the chosen cluster. In most conditions, the choice of only one RV in the downlink is enough for a fair performance. Nevertheless, by choosing two RVs, the chance of combining the links associated with the chosen RVs increases the degrees of freedom of the network and, thus, enhances its performance. The network could choose more than two RVs to further enhance its performance; however, the computational cost would be considerably increased for a high value of $N$. In this study, for simplicity, we employ the mRC pairwise data exchange model, but the full data exchange model may be adopted in future studies. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{model.eps} \caption{System model of the cluster-head-driven UAV relay scheme.} \label{fig:model} \end{figure} \subsection{Assumptions} The energy sent in the uplink from each SV to the chosen RV for reception ($E_\mathcal{S}$) equals that transmitted in the downlink from the chosen RV(s) for transmission to the SVs ($E_{\mathcal{R}_f}$). Thus, $E_{\mathcal{R}_f}=E_\mathcal{S}$. Non-reciprocal channels and mutually independent zero-mean complex Gaussian random channel coefficients, which are stationary for the duration of one time slot and change independently from one time slot to another, are considered. The transmission is structured in data packets, where the order of the packets is inserted in the preamble and the original order is recovered at the destination. Pilot symbols for the estimation of channel state information (CSI) and network signaling are also inserted in the preamble. In each time slot $i$, the central node (the UAV cluster-head) decides whether a cluster or the RVs must transmit, through a feedback channel. Global CSI at the UAV cluster-head is supplied by network signaling. Besides, each RV has information concerning its $\mathcal{S}_1\mathcal{R}$ and $\mathcal{S}_2\mathcal{R}$ links. The use of a UAV cluster-head as a single central node and its buffers leads to a higher control overhead. Nevertheless, it minimizes the average delay and the complexity, as a single central node decides which nodes transmit (instead of all destination nodes) and the packets related to a cluster are stored only in its particular cluster-head buffer rather than being spread over the buffers of all RVs. This study focuses on the ideal case where the fronthaul links (between the UAV cluster-head and the RVs) have unconstrained capacities and the RVs can reliably convey their data to the cluster-head processor. Realistic systems with capacity-constrained fronthaul links \cite{f100} may be studied in future works.
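To illustrate the buffering operation described above, the following is a minimal sketch of an on-demand cluster-head buffer; the class name and methods are hypothetical and only mirror the store/extract behavior in the text.
\begin{verbatim}
from collections import deque

class ClusterHeadBuffer:
    # On-demand FIFO buffer holding up to J packets for one cluster.
    def __init__(self, J):
        self.J = J
        self.q = deque()

    def store(self, packets):
        # MA phase: push up to M_S XOR-combined packets.
        for p in packets[: self.J - len(self.q)]:
            self.q.append(p)

    def extract(self, m_s):
        # BC phase: pop up to M_S packets for the chosen RVs.
        return [self.q.popleft()
                for _ in range(min(m_s, len(self.q)))]

    def occupancy(self):
        # N_p in the mode-selection rule sums these occupancies.
        return len(self.q)
\end{verbatim}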
\subsection{System Description} Considering the worst-case scenario, where UAVs can fly at ultra-low altitude (5 m--15 m) and, consequently, without the presence of the Line-of-Sight (LoS) component (Rayleigh fading), the channel model may be approximated by that of ground wireless sensor networks \cite{Hanzo01}. The channel matrix $\mathbf{H}_{\mathcal{S}_k,\mathcal{R}_n}$ includes large-scale fading effects, arising from the path loss of the signal as a function of distance and from shadowing by large objects such as buildings and hills, and Rayleigh-distributed small-scale fading effects, resulting from the constructive and destructive interference of the multiple signal paths between the transmitter and receiver \cite{YanZang}. Thus, the quadratic norm of $\mathbf{H}_{\mathcal{S}_k,\mathcal{R}_n}$ is given by \begin{eqnarray} \norm{\mathbf{H}_{\mathcal{S}_k,\mathcal{R}_n}}^2=\gamma ~ d_{\mathcal{S}_k,\mathcal{R}_n}^{-2\xi} \norm{\mathbf{G}_{\mathcal{S}_k,\mathcal{R}_n}}^2 \label{eq:222} \end{eqnarray} where $\mathcal{S}_k$ denotes each SV $\mathcal{S}_{1_k}$ or $\mathcal{S}_{2_k}$ ($k \in \{1\dots K\}$), $\mathcal{R}_n$ refers to each RV ($n \in \{1\dots N\}$), $\gamma$ is a constant determined by the carrier frequency, antenna gain and other system characteristics, $\xi$ is the path-loss parameter, $\mathbf{G}_{\mathcal{S}_k,\mathcal{R}_n}$ denotes a channel matrix associated with the $\mathcal{S}_k \mathcal{R}_n$ links modeled by mutually independent zero-mean complex Gaussian random coefficients and $d_{\mathcal{S}_k,\mathcal{R}_n}$ is the respective distance between $\mathcal{S}_k$ and $\mathcal{R}_n$. The same reasoning applies to $\mathbf{H}_{\mathcal{R}_n,\mathcal{S}_k}$ and its quadratic norm is given by \begin{eqnarray} \norm{\mathbf{H}_{\mathcal{R}_n,\mathcal{S}_k}}^2=\gamma ~ d_{\mathcal{R}_n,\mathcal{S}_k}^{-2\xi} \norm{\mathbf{G}_{\mathcal{R}_n,\mathcal{S}_k}}^2. \label{eq:223} \end{eqnarray} In each time slot, the network may work in two modes: ``Multiple-Access'' (MA) or ``Broadcast-Channel'' (BC). Therefore, depending on the relay selection metrics (presented in Section III), the network has two options: a) MA mode: the chosen cluster transmits $M_\mathcal{S}$ packets (one packet per antenna) directly to the chosen RV $\mathcal{R}_g$; and b) BC mode: $\mathcal{R}_{f1}$ and $\mathcal{R}_{f2}$ transmit $M_\mathcal{S}$ packets from the cluster-head buffers to the chosen cluster.
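For reproducibility, a minimal sketch of how a channel matrix obeying the quadratic-norm relation in (\ref{eq:222}) could be generated is given below; the function name and arguments are illustrative, and the large-scale factor is simply absorbed as a scalar gain on the small-scale fading matrix $\mathbf{G}$.
\begin{verbatim}
import numpy as np

def draw_channel(rows, cols, d, xi, gamma, rng):
    # Small-scale fading G with i.i.d. CN(0,1) entries, scaled by
    # sqrt(gamma * d^(-2*xi)) so that
    # ||H||^2 = gamma * d^(-2*xi) * ||G||^2.
    G = (rng.standard_normal((rows, cols))
         + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2.0)
    return np.sqrt(gamma * d ** (-2.0 * xi)) * G
\end{verbatim}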
If the relay selection algorithm decides to function in the MA mode, the signal transmitted by the chosen cluster $\mathcal{S}$ ($\mathcal{S}_1$ and $\mathcal{S}_2$) and received at $\mathcal{R}_g$ (the RV chosen for reception) is structured in a $2UM_\mathcal{S} \times 1$ vector described by \begin{eqnarray} \mathbf{y}_{\mathcal{S},\mathcal{R}_g}[i]=\sqrt{E_\mathcal{S}/M_\mathcal{S}} \mathbf{H}_{\mathcal{S},\mathcal{R}_g}\mathbf{x}[i]+\mathbf{n}_{\mathcal{R}_g}[i], \label{eq:2} \end{eqnarray} \noindent where $\mathbf{x}[i]$ is a $2M_\mathcal{S} \times 1$ vector with $M_\mathcal{S}$ symbols transmitted by $\mathcal{S}_1$ ($\mathbf{x_1}[i]$) and other $M_\mathcal{S}$ symbols transmitted by $\mathcal{S}_2$ ($\mathbf{x_2}[i]$), $\mathbf{H}_{\mathcal{S},\mathcal{R}_g}$ is a $2UM_\mathcal{S} \times 2 M_\mathcal{S}$ matrix of the $\mathcal{S}_1\mathcal{R}_g$ and $\mathcal{S}_2\mathcal{R}_g$ links and $\mathbf{n}_{\mathcal{R}_g}$ is the zero-mean additive white complex Gaussian noise (AWGN) at $\mathcal{R}_g$. Observe that $\mathbf{H}_{\mathcal{S},\mathcal{R}_g}$ is composed of $U$ square sub-matrices of dimensions $2M_\mathcal{S} \times 2M_\mathcal{S}$ as given by \begin{eqnarray} \mathbf{H}_{\mathcal{S},\mathcal{R}_g}= [\mathbf{H}^1_{\mathcal{S}, \mathcal{R}_g}; \mathbf{H}^2_{\mathcal{S},\mathcal{R}_g}; \dots ~ ;\mathbf{H}^U_{\mathcal{S},\mathcal{R}_g}]. \end{eqnarray} Considering perfect synchronization, we employ the ML receiver at the cluster-head processor: \begin{eqnarray} \hat{\mathbf{x}}[i]= \arg \min_{\mathbf{x'}[i]} \left(\norm{\mathbf{y}_{\mathcal{S},\mathcal{R}_g}[i]- \sqrt{E_\mathcal{S}/M_\mathcal{S}} \mathbf{H}_{\mathcal{S},\mathcal{R}_g}\mathbf{x'}[i]}^2\right), \label{eq:4} \end{eqnarray} where $\mathbf{x'}[i]$ is each of the $N_s^{2M_\mathcal{S}}$ possible vectors of transmitted symbols ($N_s$ is the number of symbols in the constellation used). The ML receiver computes an estimate $\hat{\mathbf{x}}[i]$ of the vector of symbols transmitted by the SVs. Alternative suboptimal detection techniques could also be considered in future work \cite{mmimo,wence,deLamare2003,itic,deLamare2008,cai2009,jiomimo,Li2011,wlmwf,dfcc,deLamare2013,did,rrmser,bfidd,1bitidd,aaidd,aalidd}. By performing XOR-type PLNC, only the XOR outputs, which result in $M_\mathcal{S}$ packets, are stored with the information: ``the bit transmitted by $\mathcal{S}_1$ is equal (or not) to the corresponding bit transmitted by $\mathcal{S}_2$''. Thus, the bitwise XOR is employed: \begin{eqnarray} \mathbf{z}[i]=\mathbf{\hat{x}_1}[i] \oplus \mathbf{\hat{x}_2}[i] \end{eqnarray} and the resulting symbol is stored in the cluster-head buffer. Therefore, an advantage of employing XOR is that only $M_\mathcal{S}$ packets are stored in the cluster-head buffer, rather than $2M_\mathcal{S}$.
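A minimal sketch of the ML detection in (\ref{eq:4}) and of the XOR-type PLNC combining is given below, assuming BPSK symbols ($N_s=2$); the exhaustive search over all $N_s^{2M_\mathcal{S}}$ candidates is shown for clarity only, since its cost grows exponentially with $M_\mathcal{S}$.
\begin{verbatim}
import numpy as np
from itertools import product

def ml_detect(y, H, Es, Ms):
    # Exhaustive ML search over all BPSK candidate vectors.
    best_d, best_x = np.inf, None
    for cand in product((-1.0, 1.0), repeat=2 * Ms):
        x = np.asarray(cand)
        d = np.linalg.norm(y - np.sqrt(Es / Ms) * (H @ x)) ** 2
        if d < best_d:
            best_d, best_x = d, x
    return best_x

def xor_plnc(bits1, bits2):
    # XOR-type PLNC: only M_S packets are stored instead of 2*M_S.
    return np.bitwise_xor(bits1, bits2)
\end{verbatim}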
In contrast, if the relay selection algorithm decides to work in the BC mode, the signal transmitted by the RVs chosen for transmission $\mathcal{R}_{f}$ ($\mathcal{R}_{f_{1}}$ and $\mathcal{R}_{f_{2}}$) and received at $\mathcal{S}_1$ and $\mathcal{S}_2$ is structured in an $M_\mathcal{S} \times 1$ vector given by \begin{eqnarray} \mathbf{y}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}[i]=\sqrt{\frac{E_{\mathcal{R}_f}}{2M_{\mathcal{S}}}} \mathbf{H}^{v,v'}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}\mathbf{z}[i]+\mathbf{n}_{\mathcal{S}_{1(2)}}[i], \label{eq:3} \end{eqnarray} \noindent where $\mathbf{z}[i]$ is an $M_\mathcal{S} \times 1$ vector with $M_\mathcal{S}$ symbols, $v \in \{1,2,...,V\}$, $v' \in \{1,2,...,V\}$, $\mathbf{H}^{v,v'}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}=\mathbf{H}^v_{\mathcal{R}_{f_1},\mathcal{S}_{1(2)}}+\mathbf{H}^{v'}_{\mathcal{R}_{f_2},\mathcal{S}_{1(2)}}$ represents the $M_\mathcal{S} \times M_\mathcal{S}$ matrix of the $\mathcal{R}_{f_{1}}\mathcal{S}_{1(2)}$ and $\mathcal{R}_{f_{2}}\mathcal{S}_{1(2)}$ links, and $\mathbf{n}_{\mathcal{S}_{1(2)}}[i]$ is the AWGN at $\mathcal{S}_1$ or $\mathcal{S}_2$. Note that $\mathbf{H}^{v,v'}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}$ is chosen among the $V^2$ submatrices of dimensions $M_\mathcal{S} \times M_\mathcal{S}$ contained in $\mathbf{H}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}$ as given by \begin{eqnarray} \begin{split} &\mathbf{H}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}\\ &~= [\mathbf{H}^{1,1}_{\mathcal{R}_f,\mathcal{S}_{1(2)}};...; \mathbf{H}^{1,V}_{\mathcal{R}_f,\mathcal{S}_{1(2)}};...; \mathbf{H}^{V,1}_{\mathcal{R}_f,\mathcal{S}_{1(2)}};...; \mathbf{H}^{V,V}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}]. \end{split} \end{eqnarray} The ML receiver is also employed at the chosen cluster: \begin{eqnarray} \begin{split} & \hat{\mathbf{z}}_{1(2)}[i]\\ &~= \arg \min_{\mathbf{z'}[i]} \left(\norm{\mathbf{y}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}[i]- \sqrt{ \frac{E_{\mathcal{R}_f}}{2M_{\mathcal{S}}}} \mathbf{H}^{v,v'}_{\mathcal{R}_f,\mathcal{S}_{1(2)}}\mathbf{z'}[i]}^2\right), \label{eq:6} \end{split} \end{eqnarray} where $\mathbf{z'}[i]$ denotes the possible vectors with $M_\mathcal{S}$ symbols. Therefore, the vector of symbols sent by $\mathcal{S}_2$ is calculated at $\mathcal{S}_1$ by employing XOR-type PLNC: \begin{eqnarray} \mathbf{\hat{x}_2}[i]= \mathbf{x}_1[i] \oplus \hat{\mathbf{z}}_1[i]. \end{eqnarray} It is also employed at $\mathcal{S}_2$ to compute the vector of symbols transmitted by $\mathcal{S}_1$: \begin{eqnarray} \mathbf{\hat{x}_1}[i]= \mathbf{x}_2[i] \oplus \hat{\mathbf{z}}_2[i]. \end{eqnarray} An estimate $\mathbf{\hat{H}}$ is used rather than $\mathbf{H}$ in (\ref{eq:4}) and (\ref{eq:6}) with the ML receiver for imperfect CSI. We remark that $\mathbf{\hat{H}}$ is calculated as $\mathbf{\hat{H}}=\mathbf{H}+\mathbf{H}_e$, where the variance of the mutually independent zero-mean complex Gaussian coefficients of $\mathbf{H}_e$ is described by $\sigma_e^2=\beta E^{-\alpha}$ ($0 \leq \alpha \leq 1$ and $\beta \geq 0$) \cite{TCOM}, in which $E=E_\mathcal{S}$ in the MA phase, and $E=\frac{E_{\mathcal{S}}}{2}$ in the BC phase. Channel and parameter estimation \cite{smce,1bitce,TongW,jpais_iet,armo,badstc,baplnc,goldstein,qian,jio,jidf,jiols,jiomimo,dce} and resource allocation techniques \cite{jpba} could be considered in future work in order to develop algorithms for this particular setting. \section{Proposed CHD-Best-Link Protocol and Relay Selection Algorithm} The network in Fig. \ref{fig:model} employs the CHD-Best-Link protocol, which in each time slot works in the MA or BC mode.
The MMD-based relay selection algorithm, when functioning, must calculate the metrics associated with $KNU$ different $2M_\mathcal{S}\times 2M_\mathcal{S}$ submatrices associated with the uplink channels and $2KN'V^2$ distinct $M_\mathcal{S}\times M_\mathcal{S}$ submatrices associated with the downlink channels, where $N'=N+ C^N_2$, to choose the best cluster, the best RV(s) and the mode of operation in each time slot (a high computational complexity). When the two SVs of a chosen cluster communicate with each other, the other clusters remain silent. Differently from \cite{TCOM,WSA2020}, where the MMD-based relay selection algorithm is employed for scenarios with time-uncorrelated channels and the MMD metrics are computed in each time slot, we consider scenarios where the UAVs are hovering over a specific area with low mobility, leading to possibly time-correlated channels. Therefore, the MMD metrics are computed in the initial time slot and the best RV(s) are chosen based on these metrics. Then, the MMD metrics are computed again only when it is observed that the channels have changed considerably since the last time these metrics were computed. Thus, with the proposed recursive MMD, the MMD computation rate (the number of time slots in which the MMD metrics are computed divided by the total number of time slots) is reduced. In the following, the protocol operation is detailed. \subsection{Relay selection metric} For each cluster $\mathcal{S}$ (with $\mathcal{S}_1$ and $\mathcal{S}_2$), in the first step, the metric $\mathcal{G}^u_{{\mathcal{S}\mathcal{R}_{n}}}$ related to the $\mathcal{S}\mathcal{R}$ links of each square sub-matrix $\mathbf{H}^u_{\mathcal{S},\mathcal{R}_n}$ (associated with $\mathcal{R}_n$) is calculated in the MA mode: \begin{eqnarray} \mathcal{G}^u_{{\mathcal{S}\mathcal{R}_{n}}}= \min \frac{E_\mathcal{S}}{M_\mathcal{S}}\norm{\mathbf{H}_{\mathcal{S},\mathcal{R}_n}^u(\mathbf{x}_i - \mathbf{x}_j)}^2, \label{eq:7} \end{eqnarray} where $u \in \{1, ...,U\}$, $n \in \{1, ...,N\}$, $\mathbf{x}_i$ and $\mathbf{x}_j$ are tentative vectors with $2M_\mathcal{S}$ symbols and $\mathbf{x}_i \neq \mathbf{x}_j$. This metric is calculated for each of the $C_2^{N_s^{2M_\mathcal{S}}}$ possibilities (combinations of the $N_s^{2M_\mathcal{S}}$ candidate vectors taken two at a time), for each sub-matrix $\mathbf{H}^u_{\mathcal{S},\mathcal{R}_n}$. In the second step, the ordering is performed on $\mathcal{G}^u_{{\mathcal{S}\mathcal{R}_{n}}}$ and the smallest metric is stored: \begin{eqnarray} \mathcal{G}_{\mathcal{S}\mathcal{R}_n} = \min(\mathcal{G}^u_{{\mathcal{S}\mathcal{R}_{n}}}). \end{eqnarray} In the third step, the ordering is performed on $\mathcal{G}_{\mathcal{S}\mathcal{R}_n}$ and the largest metric is obtained: \begin{eqnarray} \mathcal{G}_{k_{\max \mathcal{S}\mathcal{R}}} = \max(\mathcal{G}_{\mathcal{S}\mathcal{R}_n}), \end{eqnarray} where $k \in \{1, ...,K\}$. After finding $\mathcal{G}_{k_{\max \mathcal{S}\mathcal{R}}}$ for each cluster $k$, the ordering is performed and the largest metric is stored: \begin{eqnarray} \mathcal{G}_{\max \mathcal{S}\mathcal{R}} = \max(\mathcal{G}_{k_{\max \mathcal{S}\mathcal{R}}}). \label{eq:999} \end{eqnarray} Therefore, the cluster and the RV $\mathcal{R}_n$ that fulfil (\ref{eq:999}) are chosen, with $\mathcal{R}_n$ receiving the $M_\mathcal{S}$ packets from the chosen cluster.
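To make the first three steps concrete, a minimal sketch of the MMD metric in (\ref{eq:7}) and of the resulting min--max uplink selection is given below, assuming BPSK; the nested-list data layout (\texttt{subchannels[k][n]} holding the $U$ sub-matrices of cluster $k$ and RV $n$) is an illustrative choice.
\begin{verbatim}
import numpy as np
from itertools import product, combinations

def mmd_metric(H_u, Es, Ms):
    # min over pairs of distinct candidate vectors of
    # (Es/Ms) * ||H_u (x_i - x_j)||^2, as in the first step.
    cands = [np.asarray(c)
             for c in product((-1.0, 1.0), repeat=2 * Ms)]
    return min((Es / Ms) * np.linalg.norm(H_u @ (xi - xj)) ** 2
               for xi, xj in combinations(cands, 2))

def uplink_selection(subchannels, Es, Ms):
    # Steps two and three: min over u, then max over n and k;
    # returns (chosen cluster, chosen RV, metric).
    return max(((k, n, min(mmd_metric(Hu, Es, Ms) for Hu in subs))
                for k, row in enumerate(subchannels)
                for n, subs in enumerate(row)),
               key=lambda t: t[2])
\end{verbatim}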
In the fourth step, for each cluster, the metrics $\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}_1}$, related to each sub-matrix $\mathbf{H}^{v,v'}_{\mathcal{R}_{nl},\mathcal{S}_1}$ (associated with each pair $\mathcal{R}_n$ and $\mathcal{R}_l$), are computed for the BC mode: \begin{eqnarray} \mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}_1}= \min \left(\frac{E_\mathcal{S}}{2M_{\mathcal{S}}}\norm{\mathbf{H}^{v,v'}_{\mathcal{R}_{nl},\mathcal{S}_1}(\mathbf{x}_i - \mathbf{x}_j)}^2\right) \label{eq:77} \end{eqnarray} where $\mathbf{H}^{v,v'}_{\mathcal{R}_{nl},\mathcal{S}_1}=\mathbf{H}^v_{\mathcal{R}_{n},\mathcal{S}_1}+\mathbf{H}^{v'}_{\mathcal{R}_{l},\mathcal{S}_1}$, $v$ and $v'$ $\in \{1, ...,V\}$, $n$ and $l$ $\in \{1, ...,N\}$, $\mathbf{x}_i$ and $\mathbf{x}_j$ are tentative vectors formed by $M_\mathcal{S}$ symbols and $\mathbf{x}_i \neq \mathbf{x}_j$. This metric is calculated for each of the $C_2^{N_s^{M_\mathcal{S}}}$ possibilities, for each sub-matrix $\mathbf{H}^{v,v'}_{\mathcal{R}_{nl},\mathcal{S}_1}$. This reasoning is also applied in the fifth step, to calculate the metric $\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}_2}$. In the sixth step, the metrics $\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}_1}$ and $\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}_2}$ are compared and the smallest one is stored: \begin{eqnarray} \mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}} = \min(\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}_1},\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}_2}). \end{eqnarray} After finding $\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}}$ for each pair of sub-matrices $\mathbf{H}^{v,v'}_{\mathcal{R}_{nl},\mathcal{S}_1}$ and $\mathbf{H}^{v,v'}_{\mathcal{R}_{nl},\mathcal{S}_2}$, the ordering is performed and the largest metric is obtained: \begin{eqnarray} \mathcal{G}_{\mathcal{R}_{nl}\mathcal{S}} = \max(\mathcal{G}^{v,v'}_{\mathcal{R}_{nl}\mathcal{S}}). \label{eq:9999} \end{eqnarray} In the seventh step, after finding $\mathcal{G}_{\mathcal{R}_{nl}\mathcal{S}}$ for each pair of RVs, the ordering is performed and the largest metric is stored: \begin{eqnarray} \mathcal{G}_{k_{\max \mathcal{R}\mathcal{S}}}=\max(\mathcal{G}_{ \mathcal{R}_{nl}\mathcal{S}}), \end{eqnarray} where $k \in \{1, ...,K\}$. After finding $\mathcal{G}_{k_{\max \mathcal{R}\mathcal{S}}}$ for each cluster $k$, the ordering is performed and the largest metric is stored: \begin{eqnarray} \mathcal{G}_{\max \mathcal{R}\mathcal{S}} = \max(\mathcal{G}_{k_{\max \mathcal{R}\mathcal{S}}}). \label{eq:888} \end{eqnarray} Therefore, the cluster and the RVs $\mathcal{R}_n$ and $\mathcal{R}_l$ that fulfil (\ref{eq:888}) are chosen to simultaneously transmit the $M_\mathcal{S}$ packets stored in the corresponding cluster-head buffer to the chosen cluster. The estimated channel matrix $\mathbf{\hat{H}}$ is considered in (\ref{eq:7}) and (\ref{eq:77}), rather than $\mathbf{H}$, if we assume imperfect CSI. \vspace{-5pt} \subsection{Observing the channels} At each time slot, the protocol observes whether the channels have changed considerably relative to the last computed MMD metrics: \begin{eqnarray} \text{DN}=\norm{\mathbf{H}_{\text{pres}}-\mathbf{H}_{\text{last}}}^2, \end{eqnarray} where $\mathbf{H}_{\text{last}}$ is the channel matrix associated with the chosen RV when the MMD metrics were last computed and $\mathbf{H}_{\text{pres}}$ is the channel matrix associated with the same RV in the present time slot.
Moreover, if $\frac{\text{DN}}{\norm{\mathbf{H}_{\text{last}}}^2} \leq p$, in which $0\leq p\leq1$, the protocol considers that the channels have not changed significantly and decides that the last computed MMD metrics can be reused for relay selection. Otherwise, it computes the MMD metrics again, as described in Subsection III-A. Additionally, a designer might consider precoding and beamforming techniques \cite{lclattice,switch_int,switch_mc,gbd,wlbd,mbthp,rmbthp,bbprec,1bitcpm,bdrs,baplnc,memd,wljio,locsme,okspme,lrcc} to help mitigate interference rather than open-loop transmission. \subsection{Choice of the transmission mode} After calculating $\mathcal{G}_{\max \mathcal{S}\mathcal{R}}$ and $\mathcal{G}_{\max \mathcal{R}\mathcal{S}}$ (or observing the channels and deciding to reuse the last computed metrics), the metrics are compared and the transmission mode is chosen as follows: \[ \begin{cases} \mbox{if} ~ \frac{N_{p}}{M_\mathcal{S}} > L, ~ \mbox{then}~&\mbox{``BC mode'' and choose the }\\ & \mbox{cluster whose buffer is fullest,}\\ \mbox{elseif} ~\frac{\mathcal{G}_{\max \mathcal{S}\mathcal{R}}}{\mathcal{G}_{\max \mathcal{R}\mathcal{S}}} \geq G, ~ \mbox{then} & \mbox{``MA mode'',}\\ \mbox{otherwise,} & \mbox{``BC mode'',} \end{cases} \] where $G =\frac{E[\mathcal{G}_{\max \mathcal{S}\mathcal{R}}]}{E[\mathcal{G}_{\max \mathcal{R}\mathcal{S}}]}$, $N_{p}$ is the total number of packets stored in the cluster-head buffers, and $L$ is a finite non-negative integer threshold that, when reduced, increases the chance of the protocol operating in the BC mode, leading to a smaller average delay. \section{Analysis: Pairwise Error Probability} The PEP assumes an error event in which $\mathbf{x}_i$ is transmitted and the detector computes an incorrect $\mathbf{x}_j$ (where $i \neq j$), based on the received symbol \cite{f411,f78}. In \cite{TCOM,WSA2020} an approach is proposed to analyze the worst-case PEP of the Multi-Way Cloud-Driven Best-User-Link (MWC-Best-User-Link) protocol. In this work, this approach is used to calculate the worst-case PEP of the proposed CHD-Best-Link. Considering $\mathcal{D'}=\norm{\mathbf{H}(\mathbf{x}_i-\mathbf{x}_j)}^2$ in the MA mode, $\mathcal{D'}=\frac{1}{2}\norm{\mathbf{H}(\mathbf{x}_i-\mathbf{x}_j)}^2$ in the BC mode, and $U=1 ~ (M_{r_{Rx}}=2M_\mathcal{S})$, an expression for computing the worst-case PEP with cooperative transmissions (CT) in each time slot is described by \begin{eqnarray} \mathbf{P}^{CT}(\mathbf{x}_i \rightarrow \mathbf{x}_j | \mathbf{H})= 1- \left(1-Q\left(\sqrt{\frac{E_\mathcal{S}}{2 N_0M_\mathcal{S}} \mathcal{D'}_{\min}}\right)\right)^2, \label{eq:102} \end{eqnarray} where $\mathcal{D'}_{\min}$ is the smallest value of $\mathcal{D'}$ and the $Q$-function gives the probability that a standard normal random variable takes a value greater than its argument. The proposed CHD-Best-Link with the MMD criterion chooses the channel matrix $\mathbf{H}^{MMD}$ that minimizes the worst-case PEP as given by \begin{eqnarray} \begin{split} \mathbf{H}^{MMD}&=\arg \min_{\mathbf{H}} \mathbf{P}(\mathbf{x}_i \rightarrow \mathbf{x}_j | \mathbf{H})\\ &=\arg \max_{\mathbf{H}} \min \norm{\mathbf{H}(\mathbf{x}_i-\mathbf{x}_j)}^2. \end{split} \end{eqnarray} This strategy can be employed for each of the square sub-matrices $\mathbf{H}^u$ in a non-square matrix $\mathbf{H}$ (composed of multiple square sub-matrices).
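Returning briefly to the recursive strategy of Subsection III-B, the reuse test admits a one-line implementation; this is a sketch under the assumption that $\mathbf{H}_{\text{pres}}$ and $\mathbf{H}_{\text{last}}$ refer to the same previously chosen RV.
\begin{verbatim}
import numpy as np

def reuse_metrics(H_pres, H_last, p):
    # Reuse the stored MMD metrics while the channel of the
    # previously chosen RV changed by at most a fraction p.
    dn = np.linalg.norm(H_pres - H_last) ** 2
    return dn <= p * np.linalg.norm(H_last) ** 2
\end{verbatim}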
In \cite{TCOM}, a proof shows that the MMD relay selection criterion, by maximizing the minimum Euclidean distance between different vectors of transmitted symbols, minimizes the BER in the ML receiver of MWC-Best-User-Link \cite{TCOM} and, consequently, also of CHD-Best-Link. \vspace{-5pt} \section{Simulation Results} This section presents the simulation results of the proposed C-RAN-type CHD-Best-Link, using the recursive MMD-based relay selection algorithm, and of the existing Buffer-Aided Multi-Way Max-Link (MW-Max-Link) \cite{f78} and MWC-Best-User-Link \cite{TCOM} protocols adapted to UAV relaying, with the MMD-based relay selection algorithm and the ML receiver. The Monte Carlo simulation method is employed. Binary Phase-Shift Keying (BPSK) signals are adopted; higher-order constellations may be studied elsewhere. The MMD computation rate is given by the number of time slots in which the MMD metrics are computed divided by the total number of time slots. The time a packet takes to arrive at the destination after it is sent by the SV is considered to calculate the average delay \cite{f23}. Thus, the delay is the number of time slots the packet resides in the buffer. These protocols were tested for a set of $J$ values and $J=6$ packets is enough to ensure excellent performance. Perfect and imperfect CSI and symmetric unit-power channels ($\sigma_{ \mathcal{S},\mathcal{R}}^2$ $=$ $\sigma_{ \mathcal{R},\mathcal{S}}^2$ $= 1$) are considered. For simplicity, homogeneous distances and path loss are considered and the SVs and RVs are spread over distinct positions, but the RVs have almost the same distances and path loss as the SVs. Moreover, time-uncorrelated and time-correlated channels (the latter in scenarios where the UAVs are hovering over a specific area with low mobility) are employed. With time-correlated channels, the channel matrix is described by $\mathbf{H}_{t+1}= \rho\mathbf{H}_{t}+\sqrt{1-\rho^2}\mathbf{H}_p$, in each time slot, where $\mathbf{H}_{t}$ is the channel matrix in the previous time slot, $-1\leq \rho \leq 1$ and $\mathbf{H}_p$ is also a channel matrix formed by mutually independent zero-mean complex Gaussian random coefficients (with time-uncorrelated channels, $\rho=0$). The signal-to-noise ratio (SNR) given by $E/N_0$ ranges from 0 to 10 dB, where $E$ is the energy transmitted from each SV or the RV(s) and $N_0 =1$. The protocols were tested for $10000M_\mathcal{S}$ packets, each containing $T=100$ symbols. \vspace{-10pt} \begin{figure}[!h] \centering \includegraphics[scale=0.51]{PEP_MMD_comp_rate_final.eps} \vspace{-20pt} \caption{Theoretical PEP and MMD Computation Rate versus SNR.} \label{fig:theoreticalPEP} \end{figure} Fig. \ref{fig:theoreticalPEP} illustrates the theoretical worst-case PEP performance (calculated by the algorithm based on the chosen channel matrix $\mathbf{H}$, in each time slot) of CHD-Best-Link, for BPSK, $M_\mathcal{S} = 2$, $M_{r_{Tx}}=2$, $M_{r_{Rx}}=8$, $K=5$, $N = 10$, $L=0$, perfect CSI, $p=0.1$, 0.2, 0.4 and 0.8, and $\rho=0.95$ (time-correlated channels). Note that the lower the value of $p$, the better the worst-case PEP performance and the higher the MMD computation rate (higher cost). Thus, a trade-off between the worst-case PEP and the MMD computation rate is observed. \begin{figure}[!h] \centering \includegraphics[scale=0.49]{UAV_BER_MMD_Rate6_final.eps} \caption{BER and MMD computation rate performances versus SNR.} \label{fig:pepMaxlinkmmse} \end{figure} Fig.
\begin{figure}[!h] \centering \includegraphics[scale=0.49]{UAV_BER_MMD_Rate6_final.eps} \caption{BER and MMD computation rate performances versus SNR.} \label{fig:pepMaxlinkmmse} \end{figure} Fig. \ref{fig:pepMaxlinkmmse} depicts the BER and MMD computation rate of the CHD-Best-Link, MWC-Best-User-Link and MW-Max-Link protocols, for $M_\mathcal{S} = 3$, $M_{r_{Tx}}=3$, $M_{r_{Rx}}=6$ in MW-Max-Link and $M_{r_{Rx}}=12$ in CHD-Best-Link and MWC-Best-User-Link, $K = 5$, $N = 10$, BPSK, $L=0$, perfect CSI, $p=0.1$, 0.2, and 0.4, and for $\rho=0.95$ and $\rho=0$. The BER of CHD-Best-Link is considerably superior to that of MW-Max-Link for all SNR values tested. Note that the BER performance of CHD-Best-Link, with $M_{r_{Rx}}=12$, achieves a gain of approximately 3 dB in SNR for the same BER as compared to that of MW-Max-Link. Besides, the BER performance of CHD-Best-Link, for $p=0.2$ and $\rho=0.95$, is close to that of MWC-Best-User-Link for $\rho=0$, but with an MMD computation rate of approximately 0.2 (a considerably reduced cost). Furthermore, CHD-Best-Link has the same performance as MWC-Best-User-Link \cite{TCOM,WSA2020} when $\rho=0$ or $p=0$, as the MMD metrics are computed in each time slot and, consequently, the MMD computation rate is equal to 1 (100\%). The solid and dashed curves represent the uplink and downlink MMD computation rates, respectively, which show a trade-off between BER performance and MMD computation rate. \begin{figure}[!h] \centering \includegraphics[scale=0.49]{UAV_BER_AD4_final.eps} \caption{BER and Average Delay performances versus SNR.} \label{fig:berAdmmse} \end{figure} Fig. \ref{fig:berAdmmse} depicts the BER and the average delay performances of CHD-Best-Link and MW-Max-Link, for BPSK, $M_\mathcal{S} = 2$, $M_{r_{Tx}}=2$, $M_{r_{Rx}}=4$ in MW-Max-Link, and $M_{r_{Rx}}=8$ in CHD-Best-Link, $K=5$, $N = 10$, $L=0$, 5 and $L>KQ$ (where $Q=\frac{J}{M_{\mathcal{S}}}$), imperfect CSI ($\beta=0.5$ and $\alpha=1$), $p=0.2$, and $\rho=0$ and 0.95. The average delay performance of CHD-Best-Link is considerably superior to that of MW-Max-Link, as CHD-Best-Link has a single group of $K$ cluster-head buffers. When the value of $L$ is reduced to 0 in CHD-Best-Link, the average delay reaches $1$ time slot, while still keeping a BER performance superior to that of MW-Max-Link. \vspace{-5pt} \section{Conclusions} A new C-RAN-type structure with a UAV cluster-head as a central node and a recursive relay selection strategy that exploits the time-correlated channels often found in UAV communications has been introduced and studied as an appropriate relay selection technique for multi-way UAV relaying schemes in FANETs. The simulation results, considering the worst-case scenario (UAVs flying at ultra-low altitude) without the presence of the LoS component, show an outstanding performance of the proposed CHD-Best-Link protocol as compared to those of other existing protocols in the literature. The performance of CHD-Best-Link is considerably better than that of MW-Max-Link \cite{f78}, in terms of BER, average delay, and MMD computation rate (reduced complexity), and is also better than that of MWC-Best-User-Link \cite{TCOM}, in terms of MMD computation rate. The Monte Carlo simulation method is adopted in this work, but practical experiments considering different scenarios may be performed in future studies. \vspace{-5pt}
\section{Introduction} Over the past decade, moving computing, control, and data storage into the cloud has been an important trend in order to utilize the abundant computing resources needed to handle explosive traffic demands. However, this relocation introduces network delays that pose significant challenges to meeting the latency requirements of critical applications. To overcome the disadvantages of the cloud, fog computing, which selectively moves computation, communication, control, and decision making close to the network edge where data is being generated, has become inevitable \cite{b0}. One of the key benefits of fog computing stems from its highly virtualized platform, which offers computing capacities that allow various applications to run anywhere. Hence, fog computing resolves problems of cloud-only solutions for applications that require a real-time response with low latency, e.g., mission-critical applications \cite{b1,b2}. Given the substantial benefits that can be drawn from this technology, fog computing is expected to play a crucial role in IoT, 5G, and other advanced distributed and connected systems \cite{b3,b4,b5,b6}. In fog networks, where fog nodes and cloud data centers present heterogeneous resources (e.g., computational, bandwidth, and memory), service tasks are classified according to various performance requirements and heterogeneous resource configurations. In contrast to the cloud server, the computing capacity of fog nodes is usually limited and inhomogeneous. Thus, computation-intensive tasks often exhibit poor performance when they are processed by fog nodes with extremely limited resource capacities \cite{b11, b11-1}. In this context, offloading and distributing tasks over the network while guaranteeing the Quality-of-Service (QoS) requirements of the users can be particularly useful. Considering the fact that fog nodes are located relatively close to each other, offloading from the originally requested fog node to a neighboring node with available resources can be an attainable solution even for delay-critical services. Moreover, it is critical for the fog nodes and the cloud to be able to cope with heterogeneous tasks when deciding which service tasks should be deployed and where. Hence, both the fog and the cloud should complement each other in a distributive way to fulfill service needs. To this end, the hierarchical fog architecture was introduced to support a better distribution of the computing resources with an elastic and flexible placement of resources \cite{b11-2}. On the other hand, as wireless services and applications become more sophisticated and intelligent, there is a pressing need for efficient management of the execution of increasingly complex tasks based on the application requirements. Specifically, the selection of suitable nodes and proper resource assignments are critical in fog networks, where various types of applications are simultaneously running over the same network \cite{b9-2, b9-3}. The problem is exacerbated by the highly volatile service demands of end-users and the uncertainties associated with resource availability at the fog nodes. When fog nodes handle significantly different traffic demands, the imbalance between fog nodes can lead to inefficient resource usage and inequitable QoS \cite{b9-4}. Furthermore, each node cannot acquire full knowledge of the other nodes due to a non-trivial amount of signaling overhead and communication latency.
Therefore, how to optimally distribute the computing resources throughout the network, and how to design algorithms based on local knowledge that can produce globally emergent system characteristics such as agility, efficiency, and reliability, are the central questions that lead this paper \cite{b9-5}. \subsection{Related work} Recently, significant efforts have been centered on fog computing implementations to tackle the limitations of traditional cloud platforms. Specifically, many approaches have been proposed in the literature to enhance task offloading and resource allocation in fog networks. Yousefpour \textit{et al.} \cite{b12} introduce a delay-minimizing offloading policy for fog nodes in IoT-fog-cloud application scenarios, where the policy considers not only the length of the queue but also different request types that vary in processing times. Following this, the policy determines whether or not to offload the selected tasks: if the estimated waiting time of the fog node is greater than an acceptance threshold, the node offloads the request to its best neighboring fog node. Zhang \textit{et al.} \cite{b13} formulate the resource allocation problem between fog nodes, data service operators, and data service subscribers. First, service subscribers purchase the optimal number of computing resource blocks from service operators through a Stackelberg game. Each subscriber competes for the required computing resource blocks owned by the nearby fog nodes. With a many-to-many matching game between service operators and fog nodes, each operator determines the fog nodes that have computing resources to sell. With another many-to-many matching framework, resource blocks of fog nodes are allocated to the service subscribers. Although some promising works have been dedicated to studying task offloading and resource allocation in fog computing and edge computing networks, it is necessary to jointly address the two issues to improve the overall performance. Wang \textit{et al.} \cite{b14} propose to jointly address computation offloading, resource allocation, and content caching in wireless cellular networks with mobile edge computing. First, they transform the original non-convex problem into a convex problem and prove the convexity of the transformed problem. Then, they decompose the problem and apply an alternating direction method of multipliers to solve it in a distributed and practical way. Alameddine \textit{et al.} \cite{b15} address task offloading with joint resource allocation and scheduling, specifically focused on delay-sensitive IoT services. They mathematically formulate the problem as a mixed-integer program and present a decomposition approach to achieve faster run times while providing the optimal solution. For heterogeneous real-time tasks, the authors in \cite{b16} study task offloading and resource allocation in a three-tier fog system with a parallel virtual queueing model. They apply an adaptive queuing-weight resource allocation policy based on the Lyapunov function. Moreover, they propose multi-objective sorting policies in terms of both the laxity and execution times of the tasks to achieve a trade-off between throughput and task completion ratio optimization. However, the computation offloading and resource allocation designs \cite{b12,b14,b15,b16} are mostly based on one-shot optimization and may not be able to achieve a long-term stable performance in dynamic situations.
Moreover, since most optimization problems that arise in network resource management are non-convex and NP-hard, all these algorithms generally impose restrictions on the network to simplify otherwise intractable mathematical formulations \cite{convex}. Nevertheless, such assumptions would require a revision of the objective functions, or even the system models, that lead to these problem formulations in the first place. Furthermore, there are related works using different meta-heuristic methods \cite{m1,m2,m3,m4}. S. K. Mishra \textit{et al.} \cite{m2} introduce the scheduling of service requests to virtual machines (VMs) with the minimum energy consumption at the fog servers. They formulate the service allocation algorithm for the heterogeneous fog server system using three meta-heuristic methods, namely particle swarm optimization (PSO), binary PSO, and the bat algorithm. Moreover, the authors in \cite{m4} introduce a new evolutionary algorithm (EA) that is combined with PSO and a genetic algorithm (GA) for the joint design of computation offloading and fog node provisioning. However, in meta-heuristic algorithms, the memory required to maintain a population of candidate solutions becomes vast as the size of the problem increases. Specifically, due to the larger search space in large-scale problems, almost every state encountered will never have been seen before, which makes it impossible to converge within a limited number of time steps. In that respect, as the system model becomes more complex, meta-heuristic methods can no longer be applied. In this context, to make sensible decisions in such large search spaces, it is necessary to generalize from previous encounters with different states that are in some sense similar to the current one. In order to cope with the unprecedented level of complexity that arises when many parameters are used to accurately model the system, embedding versatile machine intelligence into future wireless networks is drawing unparalleled research interest \cite{r1,r2}. Many recent works address the resource allocation problem in IoT networks by using supervised machine learning, e.g., Support Vector Machines (SVMs), Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), etc. \cite{r3}. Nevertheless, supervised learning learns from a fixed data set; the algorithm does not directly interact with the environment in which it operates, which is not adequate for dynamically provisioning on-demand resources, especially for highly volatile IoT application demands. Moreover, in the context of resource management for IoT networks, the lack of sufficient labeled data is another factor that hinders the practicality of supervised learning-based algorithms. On the other hand, a different machine learning technique that does not fall into the categories of supervised and unsupervised learning is reinforcement learning (RL) \cite{b20}. One of the key features of reinforcement learning is that it explicitly considers the problem of a goal-directed algorithm interacting with an uncertain environment. Therefore, RL-based algorithms can continually adapt as the environment changes without needing explicit system models. To tackle the curse of dimensionality of RL, deep reinforcement learning (DRL) was recently introduced \cite{b26}. DRL embraces deep neural networks to train the learning process, thereby improving the learning speed and the performance of RL-based algorithms. Therefore, DRL can provide efficient solutions for future IoT networks \cite{r3-1}.
The authors in \cite{r3-2} introduce an optimal computation offloading policy for mobile edge computing (MEC) in an ultra-dense system based on a deep Q-network (DQN), without prior knowledge of the dynamic statistics. Pan \textit{et al.} \cite{r3-3} study the dependency-aware task offloading decision in MEC based on Q-learning, aiming to minimize the execution time for mobile services with limited battery power consumption. Huynh \textit{et al.} \cite{r8} develop an optimal and fast resource slicing framework based on a semi-Markov decision process (MDP) that jointly allocates the computing, storage, and radio resources of the network provider to different slice requests. To further enhance the performance, they propose a deep Q-learning approach with a deep dueling neural network, which can improve the training process and outperform all other reinforcement learning techniques in managing network slicing. Chen \textit{et al.} \cite{r9} consider a software-defined radio access network where multiple service providers (SPs) compete to acquire channel access for their subscribed mobile users, whereby each mobile user then offloads tasks and schedules queued packets over the assigned channel. They transform the stochastic game between non-cooperative SPs into an abstract stochastic game and propose a linear decomposition approach to simplify decision making. Also, a DRL algorithm is leveraged to tackle the huge state space. Sun \textit{et al.} \cite{r10} propose a DRL-based joint communication mode selection and resource management approach with the objective of minimizing the network power consumption. This approach can help the network controller learn the environment from collected data and make fast and intelligent decisions to reduce power consumption. Moreover, the tremendous data traffic over next-generation networks can be substantially reduced via caching, which proactively stores reusable contents in geographically distributed memory storage \cite{r5,r4,r7,r6}. The authors in \cite{r4} study the joint cache and radio resource optimization on different timescales in fog access networks. The optimization problem is modeled as a Stackelberg game. To solve the problem, single-agent RL and multi-agent RL are utilized and rigorously analyzed. Meanwhile, the authors in \cite{r6} exploit the power allocation problem in non-orthogonal multiple access for a system with cache-enabled users. They propose a DRL-based scheme, which responds quickly to requests from users as well as allows all users to share the full bandwidth. Also, they show that applying iterative optimization algorithms is not suitable for satisfying the short-response requirement from the base station to users. \subsection{Contributions} This paper focuses on resource management in a fog system with the aim of guaranteeing the specific quality of service of each task as well as maximizing the resource utilization through cooperation between fog computing nodes. To address this problem, we design a joint heterogeneous task offloading and resource allocation algorithm whose goal is to maximize the number of processed tasks completed within their delay limits. More precisely, we consider an independent multi-agent decision-making problem that is cooperative and partially observable. To solve this problem, we propose a deep recurrent Q-network (DRQN) based learning algorithm, namely deep Q-learning combined with a recurrent layer.
The DRQN-based algorithm aims to handle partially observable environments by maintaining internal states. In particular, to guarantee the convergence and accuracy of the neural network, the proposed DRQN-based algorithm adopts an adjusted exploration-exploitation scheduling method, which efficiently avoids the exploitation of incorrect actions as the learning progresses. To the best of our knowledge, this is the first work that introduces DRQN to solve the joint task offloading and resource allocation problem in fog computing networks. The key contributions of this paper are summarized as follows. \begin{itemize} \item The proposed algorithm considers two levels of heterogeneity of service tasks in terms of QoS requirements and resource demand characteristics. In real IoT scenarios, various service tasks demanding different resource sizes can require different service performances. In order to consider these heterogeneities, we propose a fog network slicing model that manages different types of tasks separately and partitions physical resources among the slices. \item Regarding the feedback and memory overhead, we consider cooperative scenarios where the independent multi-fog nodes perceive a common reward that is associated with each joint action while estimating the value of their individual actions solely based on the rewards that they receive for their actions. This reduces the feedback and memory overheads considerably compared to joint-action learning schemes, where the fog nodes require the reward, observation, and action sets of the others. \item To deal with partial observability, we apply a DRQN approach to approximate the optimal value functions. The DRQN-based algorithm can tackle partial observability issues by enabling the agent to perform a temporal integration of observations. This solution is more robust than DQN and deep convolutional Q-network (DCQN) based methods in the sense that a neural network with a recurrent layer can learn outputs that depend on the temporal pattern of observations by maintaining a hidden state, and thus it can keep internal states and aggregate observations. Moreover, to guarantee the convergence and accuracy of the neural network, an adjusted exploration-exploitation method is adopted. \item Numerical experiments using TensorFlow are presented to support the model and the proposed algorithm. The proposed DRQN-based algorithm requires much less memory and computation than conventional Q-learning and meta-heuristic algorithms, which are impractical for solving the problem considered in this paper. In particular, the proposed DRQN-based algorithm is compared to the DQN and DCQN approaches, where it is demonstrated that the performance in terms of average success rate, average overflow rate, and task delay can be significantly enhanced by using the proposed DRQN-based algorithm. \end{itemize} \subsection{Organizations} The remainder of this article is organized as follows: in Section \ref{sectiontwo}, the system description is presented. The formulation of the offloading and resource allocation problem as a partially observable MDP (POMDP) is detailed in Section \ref{sectionthree}. In Section \ref{sectionfour}, we present the cooperative decision-making problem between independent multi-nodes and derive a deep reinforcement learning scheme to solve the problem formulated in Section \ref{sectionthree}. Simulation results are presented in Section \ref{sectionfive}.
Finally, Section \ref{sectionsix} concludes this paper and provides insights on possible future work. \section{System description}\label{sectiontwo} In this section, we introduce a three-layer fog network system model that supports the integration of different services while serving dissimilar service characteristics, such as CPU processing density and delay requirements, through a hierarchical model. The time horizon is divided into decision epochs of equal durations (in milliseconds), indexed by an integer $t\in{\mathbb{N_+}}$. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{system_model3.pdf} \caption{Three-layer fog network system.} \label{figure1} \end{figure} \begin{table}[] \caption{List of Notations} \label{table0} \centering \begin{tabular}{l|l} \hline Symbol & \multicolumn{1}{|c}{Definition} \\ \hline \multicolumn{1}{c|}{$I$} & The set of fog nodes \\ \hline \multicolumn{1}{c|}{$Z$} & The set of cloud servers \\ \hline \multicolumn{1}{c|}{$K_i$} & The set of fog slices at fog node $i$ \\ \hline \multicolumn{1}{c|}{$T_k$} & The packet size of the task of slice $k$ \\ \hline \multicolumn{1}{c|}{$D^{max}_k$} & The maximum delay budget of the task of slice $k$ \\ \hline \multicolumn{1}{c|}{$\lambda_{i,k}$} & The arrival rate of tasks of slice $k$ at fog node $i$ \\ \hline \multicolumn{1}{c|}{$a_{i,k}$} & \begin{tabular}[c]{@{}l@{}}Boolean variable indicating whether a task of slice $k$ \\ arrives at fog node $i$\end{tabular} \\ \hline \multicolumn{1}{c|}{$b_{i,k}$} & The number of tasks in the buffer of slice $k$ at fog node $i$ \\ \hline \multicolumn{1}{c|}{$b^e_{i,k}$} & \begin{tabular}[c]{@{}l@{}}The number of tasks that are allocated resources for processing\\ among all the tasks in the buffer of slice $k$ at fog node $i$ \end{tabular} \\ \hline \multicolumn{1}{c|}{$L^c_k$} & CPU processing density demanded for the task of slice $k$\\ \hline \multicolumn{1}{c|}{$L^m_k$} & Memory size demanded for the task of slice $k$\\ \hline \multicolumn{1}{c|}{$U^c_i$} & Total CPU resource capacity of fog node $i$ \\ \hline \multicolumn{1}{c|}{$U^m_i$} & Total memory resource capacity of fog node $i$ \\ \hline \multicolumn{1}{c|}{$U^c_z$} & Total CPU resource capacity of cloud server $z$ \\ \hline \multicolumn{1}{c|}{$U^m_z$} & Total memory resource capacity of cloud server $z$ \\ \hline \multicolumn{1}{c|}{$\eta^c_i$} & The allocation unit of CPU resource at fog node $i$ \\ \hline \multicolumn{1}{c|}{$\eta^m_i$} & The allocation unit of memory resource at fog node $i$ \\ \hline \multicolumn{1}{c|}{$\eta^c_z$} & The allocation unit of CPU resource at cloud server $z$ \\ \hline \multicolumn{1}{c|}{$\eta^m_z$} & The allocation unit of memory resource at cloud server $z$ \\ \hline \multicolumn{1}{c|}{$BW_i$} & The transmission bandwidth of fog node $i$ \\ \hline \multicolumn{1}{c|}{$P_i$} & The transmission power of fog node $i$ \\ \hline \multicolumn{1}{c|}{$\beta_1, \beta_2$} & The path loss constant and exponent \\ \hline \multicolumn{1}{c|}{$r^c_i$} & The available CPU resource units at fog node $i$ \\ \hline \multicolumn{1}{c|}{$r^m_i$} & The available memory resource units at fog node $i$ \\ \hline \multicolumn{1}{c|}{$f_{i,k}$} & \begin{tabular}[c]{@{}l@{}}The offloading decision by fog node $i$ indicating where \\ the task of slice $k$ will be processed \end{tabular} \\ \hline \multicolumn{1}{c|}{$w_{i,k}$} & \begin{tabular}[c]{@{}l@{}} The resource allocation decision by fog node $i$ indicating how many \\ tasks of slice $k$ will be allocated resources for processing
\end{tabular} \\ \hline \multicolumn{1}{c|}{$\psi_i$} & The local reward observed by fog node $i$ \\ \hline \end{tabular} \end{table} \subsection{Three-layer fog network system} To improve scalability and resource utility in fog networks, a three-layer hierarchy is the most commonly considered architecture \cite{b4,b11-2}. A three-layer fog network consists of an end-device layer, a fog layer, and a cloud layer. The end-device layer includes various types of devices, known as unintelligent devices, that only produce data and are not equipped with processing capacity. Therefore, devices can request the nearest fog node to run their tasks on their behalf. These tasks can be classified into types according to two characteristics: the task performance requirement (also called QoS) and the heterogeneous resource configuration (e.g., different CPU and memory configurations). The fog layer is composed of multiple fog nodes $i\in \mathcal{I}$. As illustrated in Fig. \ref{figure1}, we consider a fog layer where the physical network infrastructure is split into multiple virtual networks to serve heterogeneous service requests from different types of end-device segments, known as network slicing \cite{b9}. With network slicing technology, fog nodes can set up customized slices to guarantee the specific latency and resource requirements of the supported services. Fog nodes are formed by one or more physical devices with high processing capabilities, which are aggregated as one single logical entity able to seamlessly execute distributed services as if they were on a single device. The shared physical resources (e.g., bandwidth, CPU, and memory) of fog nodes are partitioned into fog slices to enable running the network functions that meet certain required slice characteristics (e.g., ultra-low latency, ultra-reliability, etc.). Finally, the cloud layer includes various servers that are equipped with ample storage and computational resources but are physically remote from end-devices. In our architecture, the fog network comprises a single cloud server $z\in \mathcal{Z}$ that interacts with all fog nodes. \subsection{SDN-based fog nodes and inter-SDN communications} To provide more distributed and scalable management, fog nodes make use of software-defined networking (SDN) techniques where the control plane is capable of decision-making while the data plane simply serves forwarding and computing tasks \cite{b6-1, b6-2,b8-1}. Furthermore, many different applications are operated concurrently in the SDN application plane. Besides, as individual SDN controllers are located in separate fog nodes, we apply the concept of inter-SDN communications, which interconnects controllers to share information and coordinate their decisions \cite{b17}. It is noted that the need for inter-SDN communications has increased, as the explosive growth in the task demands of end devices requires networks formed by more than one SDN controller \cite{b17-1}. In our system model, each fog node defines, deploys, and adapts independent decision-making in its separate SDN controller, and communication between the SDN controllers of the fog nodes aims to exchange the feedback information required by the independent decision-making processes. Details about the information exchange are provided in Section \ref{sectionthree}. \subsection{Fog slices based on heterogeneous task models} We consider a fog network with fog nodes deploying the same set of logical fog slices for different task types over separate infrastructures (i.e.
$\mathcal{K}_i=\mathcal{K}, \forall i\in \mathcal{I}$). Let $\mathcal{K}_i$ be the set of slices available in the $i^{th}$ node, where each slice $k\in \mathcal{K}_i$ processes a specific type of tasks in a separate buffer. The tasks demanded from the end devices by slice $k$ have a size of $T_k$ bits. Since the time slot duration is relatively small, at most one task arrives at each slice of the fog node within a time slot. At the beginning of each time slot $t$, let $a_{i,k}(t)$ indicate the arrived task, where $a_{i,k}(t)= 1$ if a task of slice $k$ arrives at fog node $i$, and $a_{i,k}(t)= 0$ otherwise. Hence, the probability that a new task of slice $k$ arrives at fog node $i$ within time slot $t$ follows a Bernoulli distribution with parameter $\lambda_{i,k}$, i.e., $\mathbb{P}(a_{i,k}= 1)=\lambda_{i,k}$. The number of tasks in the buffer of slice $k$ at fog node $i$ and time slot $t$ is $b_{i,k}(t)$, of which $b^e_{i,k}(t)$ tasks are in progress in time slot $t$. Meanwhile, the maximum buffer size is $\overline{b}_{i,k}$. The classified tasks in each slice have specific QoS requirements as well as different resource configurations. We assume that tasks delivered from end-devices are classified to the corresponding slices according to their characteristics without manual intervention, since classifying tasks to predict the type of application \cite{b18} is outside the scope of this paper. In terms of QoS, we categorize tasks into three classes: 1) a delay-critical class (e.g., self-driving cars, live-streaming), 2) a delay-sensitive class (e.g., augmented reality/virtual reality (AR/VR), smartphone applications), and 3) a delay-tolerant class (e.g., IoT sensors). The priority of tasks is determined in a way that provides maximum reliability within an acceptable delay, appropriate to each slice. With regard to resource configurations, tasks of each slice demand two types of resources (i.e., CPU and memory). To process a task of slice $k$, we denote the CPU processing density (in cycles/bit) and the memory (in Mbyte) demands as $L^c_k$ and $L^m_k$, respectively. Therefore, even though tasks may have the same QoS demands, their resource demands can be dissimilar \cite{b18-1}. One use case example of the delay-critical class is a live sport-streaming application requiring high throughput, while another from the same class would be an emergency signal for self-driving cars, which does not necessarily require high throughput. Thus, they are processed through different slices. Furthermore, the total resource capacities (i.e., CPU and memory) of fog node $i$ and of the cloud server $z$ are $U_i=(U^c_i, U^m_i)$, $\forall i\in \mathcal{I}$ and $U_{z} =(U^c_z, U^m_z)$, respectively, where the superscripts $c$ and $m$ indicate CPU speed (in cycles/$\Delta t$) and memory size (in Mbyte), respectively. We assume that the cloud server is much more computationally powerful than its associated fog nodes, i.e., $U^c_z\gg U^c_i$, $\forall i\in \mathcal{I}$, and provides limitless storage, i.e., $U^m_z\sim\infty$. The fog nodes and the cloud server can allocate their resources on a resource-unit basis. Hence, the total amount of resource units, which can be allocated by fog node $i$ to all slices, can be computed as $N_i=(N^c_i, N^m_i)=\left(\lfloor{\frac{U^c_i}{\eta^c_i}}\rfloor, \lfloor{\frac{U^m_i}{\eta^m_i}}\rfloor\right)$, where $\eta^c_i$ and $\eta^m_i$ stand for the allocation units (i.e., the sizes of one unit) of computing and memory resources at fog node $i$, respectively, and $\lfloor\cdot\rfloor$ denotes the floor function. Likewise, $\eta^c_z$ and $\eta^m_z$ indicate the allocation units of computing and memory resources at cloud server $z$, and the total number of resource units of the cloud server is unlimited.
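As a small illustration of this unit-based bookkeeping (with assumed capacities and unit sizes, not values used later in the simulations):
\begin{verbatim}
from math import floor, ceil

# Illustrative capacities and allocation-unit sizes (assumed values)
U_c, U_m = 8.0e9, 4096.0      # CPU (cycles per slot), memory (Mbyte)
eta_c, eta_m = 1.0e9, 512.0   # size of one CPU unit / one memory unit

# Total allocatable units: N_i = (floor(U_c/eta_c), floor(U_m/eta_m))
N_c, N_m = floor(U_c / eta_c), floor(U_m / eta_m)   # -> (8, 8)

# A slice-k task demanding L_m_k Mbyte occupies ceil(L_m_k/eta_m)
# memory units, since only whole units can be allocated (cf. (1) below)
L_m_k = 700.0
mem_units_per_task = ceil(L_m_k / eta_m)            # -> 2
\end{verbatim}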
Thus, for a given node $i$ at time $t$, the occupied resource units of all slices can be calculated as \begin{equation} \label{one} G_i\left(t\right)=\left(g^c_i, g^m_i\right)=\left(\sum_{k}b^e_{i,k}(t), \sum_{k}b^e_{i,k}(t)\cdot\lceil{\frac{L^m_k}{\eta^m_i}}\rceil\right), \end{equation} where $\lceil\cdot\rceil$ is the ceiling function, since a number of memory units at least equal to $\frac{L^m_k}{\eta^m_i}$ must be allocated to execute a task of slice $k$. At every time slot $t$, fog nodes only monitor their own available resources, which correspond to the total resources minus the sum of the resources currently allocated to tasks of all slices. Hence, the available resource units at fog node $i$ and time $t$ can be measured as \begin{equation} \label{two} R_i\left(t\right)=\left(r^c_i, r^m_i\right)=\left(N^c_i-g^c_i, N^m_i-g^m_i\right), \end{equation} where $r^c_i+g^c_i\le N^c_i$ and $r^m_i+g^m_i\le N^m_i$. Once the processing of a task is completed during a time slot, the finished task is eliminated from the buffer and the resources allocated to this task return to the available resource pool in the next time slot. \subsection{Calculation of task latency} In our architecture, at the beginning of each time slot $t$, fog nodes use an independent offloading policy to decide whether they will process arrived tasks locally or offload them to another node, i.e., a neighboring fog node or the cloud server. Furthermore, decisions on resource allocation are simultaneously made by fog nodes with regard to their own resources through separate allocation policies. We define a task latency to enable different delay constraints for tasks, thereby minimizing timeout failures that result from high transmission latency when offloading to a remote node or long waiting delays due to insufficient resource capacities at the processing node. Formally, the task latency is defined as the sum of the transmission delay, the waiting delay, and the processing delay. We assume that the fog nodes have information regarding the distances to neighboring fog nodes in the fog network as well as to the cloud server. To model the transmission delay of offloading, the tasks are transmitted to the selected node over a wireless channel. The transmission delay for fog node $i$ to forward a task of slice $k$ can then be defined as \begin{equation} \label{three} D^s_{i,j,k,n}(t)= \begin{cases} \frac{T_k}{\nu_{i,j,n}(t)},& \text{if } i\neq j \\ 0, & \text{otherwise,} \end{cases} \end{equation} where $j\in \{\mathcal{I}, \mathcal{Z}\}$ if the selected node is a fog or cloud node, respectively, and $n \in \{1,2,\ldots,K\}$ indicates the total number of tasks offloaded by fog node $i$ at time slot $t$. Moreover, $\nu_{i,j,n}(t)$ represents the transmission rate from fog node $i$ to the selected node $j$, which is given by \cite{b11}: \begin{equation} \label{four} \nu_{i,j,n}(t)=BW_{i,n}(t)\cdot\log\left(1+\frac{\beta_1 {d_{i,j}}^{-\beta_2}\cdot P_i}{BW_{i,n}(t)\cdot\sigma^2}\right), \end{equation} where $d_{i,j}$, $\beta_1$, and $\beta_2$ are the distance between the two nodes, the path loss constant, and the path loss exponent, respectively. The variable $P_i$ denotes the transmission power of fog node $i$ and $\sigma^2$ is the noise power spectral density.
Additionally, the bandwidth is given by $BW_{i,n}(t)=\frac{BW_i}{n}$, which means that the total bandwidth of the fog node, $BW_i$, is equally shared by the $n$ tasks. For example, when a fog node $i$ offloads a total of two different tasks during a time slot, each task is transmitted separately with bandwidth $\frac{BW_i}{2}$. When $i=j$, fog node $i$ processes the task locally, and thus there is no transmission delay. Moreover, in most cases, the size of a task after processing is small; thus, the transmission delay of the returned result can be ignored. Next, when a slice $k$ task arrives in the corresponding buffer at node $j$, the waiting delay $D^q_{i,j,k}(t)$ can be calculated as \begin{equation} \label{five} D^q_{i,j,k}(t)= \begin{cases} \frac{b_{j,k}(t)}{\mu_{j,k}(t)},& \text{if } j\in I \\ 0, & \text{otherwise,} \end{cases} \end{equation} where $b_{j,k}(t)$ is the number of tasks previously existing in the buffer and $\mu_{j,k}(t)$ is the service rate (i.e., the rate of tasks leaving the buffer). However, since a fog node does not have prior information about the buffer status of the other nodes when offloaded tasks arrive in their buffers, and given that the service rate varies depending on the resource scheduling process, the waiting delay cannot be calculated in advance. On the other hand, we assume that the waiting time at the buffer of the cloud can be disregarded because the cloud is equipped with a much larger number of cores than a fog node. This indicates that the cloud initiates the computation of the received tasks without queueing delay. When a task is computed, the processing delay $D^p_{i,j,k}(t)$ can be denoted as \begin{equation} \label{six} D^p_{i,j,k}(t)= \begin{cases} \frac{T_k\cdot L^c_k}{\eta^c_j},& \text{if } j \in I \\[8pt] \frac{T_k\cdot L^c_k}{\eta^c_z},& \text{otherwise, } \end{cases} \end{equation} where $T_k\cdot L^c_k$ refers to the number of CPU cycles required to complete the execution of a task of slice $k$. When the task is offloaded to a fog node $ j \in \mathcal{I}$, the task of slice $k$ is executed by fog node $j$ with CPU speed $\eta^c_j$. Likewise, when the task is offloaded to the cloud server $z$, the processing delay is given by the second case of (\ref{six}), where $\eta^c_z$ is the CPU speed of cloud server $z$. Thus, the processing delay depends on both the resource configuration of the task and the amount of allocated resources. In essence, if a slice $k$ task is offloaded from a fog node $i$ to a neighboring fog node $j\neq i$, the latency is obtained as \begin{equation} \label{seven} \begin{split} D_{i,j,k,n}(t)&= D^s_{i,j,k,n}(t)+D^q_{i,j,k}(t)+D^p_{i,j,k}(t) \\ & = \frac{T_k}{\nu_{i,j,n}(t)}+\frac{b_{j,k}(t)}{\mu_{j,k}(t)}+\frac{T_k\cdot L^c_k}{\eta^c_j}. \end{split} \end{equation} If a slice $k$ task is computed locally by fog node $i$, the latency becomes \begin{equation} \label{eight} \begin{split} D_{i,i,k,n}(t)&= D^q_{i,i,k}(t)+D^p_{i,i,k}(t) \\ & = \frac{b_{i,k}(t)}{\mu_{i,k}(t)}+\frac{T_k\cdot L^c_k}{\eta^c_i}. \end{split} \end{equation} Finally, if a slice $k$ task is offloaded from a fog node $i$ to the cloud server $z$, the latency is \begin{equation} \label{nine} \begin{split} D_{i,z,k,n}(t)&= D^s_{i,z,k,n}(t)+D^p_{i,z,k}(t) \\ & = \frac{T_k}{\nu_{i,z,n}(t)}+\frac{T_k\cdot L^c_k}{\eta^c_z}.
\end{split} \end{equation} \section{Problem formulation}\label{sectionthree} In this section, we define the problem of heterogeneous task offloading and resource allocation in a system with multiple fog nodes as a POMDP across the time horizon. \subsection{Partially observable MDP based problem formulation}\label{sectionthreea} The main goal of the system is to make optimal offloading and resource allocation decisions at each node with the objective of maximizing the number of successfully processed tasks while guaranteeing the corresponding delay constraint of each task. Therefore, the joint offloading and resource allocation decision is achieved by finding proper processing nodes for the tasks and an optimal allocation of each node's resources to all individual slices. We assume that the joint offloading and resource allocation decisions of all fog nodes are made simultaneously at every time slot $t$. To this end, each node repeatedly observes its own system state at the beginning of the time slot. The local observation by fog node $i$ is defined as \begin{equation} \label{ten} O_i(t)=\Big(A_i(t), B_i(t), B^e_i(t), R_i(t)\Big), \end{equation} where $A_i(t)=(a_{i,k}(t): k\in \mathcal{K}_i)$, $B_i(t)=(b_{i,k}(t): k\in \mathcal{K}_i)$, and $B^e_i(t)=(b^e_{i,k}(t): k\in \mathcal{K}_i)$ are the set of arrived tasks, the number of tasks in the buffer, and the number of tasks in progress among $B_i(t)$ over all slices at fog node $i\in \mathcal{I}$ and time $t$, respectively. Moreover, $R_i(t)=\Big(r^c_i(t), r^m_i(t)\Big)$ represents the available resource units at fog node $i\in \mathcal{I}$ at time $t$. Note that the underlying states of the system, including the states of the other fog nodes, are not accessible by the fog node. Instead, only the aforementioned state set in (\ref{ten}) can be observed, and thus the system becomes a POMDP. We suppose that the observations are limited by the measurement accuracy of the state but are sufficient to provide usable state data for a POMDP system. In the presence of uncertainties stemming from the task demands and resource availability at the fog nodes, we formulate the POMDP-based problem across the time horizon as a stochastic game in which each node selects actions as a function of its local observation. In our model, a fog node's offloading and resource allocation policy operates independently from the other nodes' policies. Thus, each fog node does not have any prior information on the task demands, buffer status, and resource availability of the other fog nodes. Accordingly, the actions are defined as \begin{equation} \label{eleven} X_i(t)=\Big(X^f_i(t), X^w_i(t)\Big), \end{equation} \noindent where $X^f_i(t)=(f_{i,k}(t): k\in \mathcal{K}_i)$ and $X^w_i(t)=(w_{i,k}(t): k\in \mathcal{K}_i)$ denote the offloading decision and the resource allocation decision, respectively. Here, $f_{i,k}(t)\in\{0,1,...,I+1\}$ represents by whom the task will be processed, where $f_{i,k}(t)=0$ if the slice $k$ task does not arrive at fog node $i$ at time $t$, $f_{i,k}(t)=i$ if the slice $k$ task arrives and will be computed locally, and $f_{i,k}(t)=j$, $j\in \mathcal{I}$ and $j\neq i$, if the slice $k$ task arrives and will be offloaded to another node ($f_{i,k}(t)=I+1$ implies that the fog node will offload this task to the cloud server).
The resource allocation decision $w_{i,k}(t)\in\{0,1,...,\lfloor\frac{U^m_i}{L^m_k}\rfloor\}$ represents how many tasks will be initiated by being allocated resources, where $\lfloor\frac{U^m_i}{L^m_k}\rfloor$ is the maximum number of tasks that can be simultaneously processed by fog node $i$. For example, $w_{i,k}(t)=2$ indicates that, at time $t$, fog node $i$ allocates its resources to slice $k$ to execute two tasks that are not yet in progress in the buffer. Each node takes an action $X_i(t)$ only among the ones allowed in that observation, i.e., $X_i(t)\in\mathcal{X}_i(O(t))$. We apply the following constraints for the offloading and resource allocation at time $t$: \begin{equation} \label{twelve} f_{i,k}(t)=0, \text{if }a_{i,k}(t)=0, \forall k\in \mathcal{K}_i, \forall i\in \mathcal{I}, \end{equation} \begin{equation} \label{thirteen} w_{i,k}(t)\le (B_i(t)-B^e_i(t)), \forall k\in \mathcal{K}_i, \forall i\in \mathcal{I}, \end{equation} \begin{equation} \label{fourteen} \sum_{k\in \mathcal{K}_i} w_{i,k}(t)\le r^c_i, \forall i\in \mathcal{I}, \end{equation} \begin{equation} \label{fifteen} \sum_{k\in \mathcal{K}_i} w_{i,k}(t)\cdot\lceil{\frac{L^m_k}{\eta^m_i}}\rceil\le r^m_i, \forall i\in \mathcal{I} \end{equation} to ensure that a fog node cannot offload a task that has not arrived, by (\ref{twelve}), cannot allocate resources to more than the number of tasks waiting for allocation, by (\ref{thirteen}), and cannot let the sum of newly allocated resources exceed the available resources, by (\ref{fourteen}) and (\ref{fifteen}). Given that each node is in state $O_i(t)$ and action $X_i(t)$ is chosen, the transition probability is given by (\ref{sixteen}), where $\mathbf{X}(t)=(X_i(t): i\in \mathcal{I})$ is the set of actions taken at time $t$. From (\ref{sixteen}), $B_i^e(t+1)$ and $R_i(t+1)$ only depend on the action $X_i(t)$ of fog node $i$, while $A_i(t+1)$ is determined regardless of the action. Since one node's offloading decisions result in increasing other nodes' buffers, the sequence of each node's buffer status $B_i(t)$ depends on the actions of all agents $\mathbf{X}(t)$. \begin{figure*}[t] \begin{equation}\label{sixteen} \begin{aligned} \mathbb{P}\Big(O_i(t+1)|O_i(t), \mathbf{X}(t)\Big)=& \mathbb{P}\Big(A_i(t+1)\Big)\times\mathbb{P}\Big(B_i(t+1)|B_i(t),\mathbf{X}(t)\Big)\\ &\times\mathbb{P}\Big(B_i^e(t+1)|B_i^e(t),X_i^w(t)\Big)\times\mathbb{P}\Big(R_i(t+1)|R_i(t), X^w_i(t)\Big) \end{aligned} \end{equation} \begin{equation} \label{seventeen} \begin{aligned} \psi_i(O_i(t), \mathbf{X}(t))&=\frac{1}{K}\cdot\sum_{k\in \mathcal{K}}a_{i,k}(t)\cdot\Big((-1)^{\mathds{1}(D^{max}_k\le D_{i,k}(t))}-\xi_k\cdot\mathds{1}(b_{f_{i,k},k}(t+D^t_{i,k}(t))\ge\overline{b}_{f_{i,k},k})\Big) \end{aligned} \end{equation} \hrulefill \end{figure*} Based on the set of actions $\mathbf{X}(t)$ in local observation $O_i(t)$, we define the local reward in (\ref{seventeen}), where $D_k^{max}$ is the maximum delay budget of a task of slice $k$; the task is discarded if its processing is not completed within this budget. The first term of the summation in (\ref{seventeen}) represents the success reward, a positive reward if the task is successfully completed and a negative reward if a timeout failure is encountered, which depends on both the offloading decision of the fog node $i$ at which the task arrived and the resource allocation decision of the processing fog node.
The second term describes the overflow cost, which captures whether the task is dropped because the slice buffer is already full; thus, it is related to the buffer status $b_{f_{i,k},k}$ of the processing fog node. Moreover, $\xi_k$ is a constant weighting factor that balances the importance of the overflow failure for tasks of slice $k$. \subsection{Cooperative games by independent learners} Although each fog node's main goal is to optimize its own service performance and its resource interests, the fog nodes must still coordinate on the resource flows between neighboring nodes in order to achieve a meaningful solution from an overall system perspective \cite{b21}. In addition, the service performance experienced by service tasks during processing is determined by the offloading and resource allocation decisions of all fog nodes. Therefore, our stochastic game, sometimes called a Markov game, follows a cooperative setting that maximizes a common goal, rather than a competitive game where fog nodes have opposing goals \cite{b22, b23}. More precisely, we apply cooperative scenarios between independent multi-fog nodes where the fog nodes share their local rewards with the others as feedback information. This decision-making problem implies that independent fog nodes perceive the common reward that is associated with each joint action while estimating the value of their individual actions solely based on the rewards that they receive for their actions. Therefore, this reduces the feedback and memory overheads considerably compared to joint-action learning schemes, where the fog nodes share their reward, observation, and action sets with the others to maintain a model of the strategies of the other agents. As such, at each time step, each node executes an individual action, with the joint goal of maximizing the sum of the rewards of all nodes, which can be formally written as \begin{equation} \label{eighteen} \psi(t)=\sum_{i\in \mathcal{I}}{\psi_i\Big(O_i(t), \mathbf{X}(t)\Big)}. \end{equation} Thus, each node's reward is drawn from the same distribution, reflecting the value assessment of all nodes \cite{b23-2}. Moreover, the convergence performance of joint-action learning schemes is not enhanced dramatically despite the availability of more information, due to the exploration strategy \cite{b23-2}. As detailed in Section \ref{sectiontwo}, the reward feedback is transmitted through inter-SDN communications to the SDN controllers of all the fog nodes for the decision-making process. In summary, the decision-making process at each fog node is fully distributed for real-time task offloading and resource management, while communication between SDN controllers aims to exchange less time-sensitive reward information. \section{Learning the optimal offloading and resource allocation policies}\label{sectionfour} In this section, we propose a Q-learning-based optimal policy solution to address the limitations of the traditional approaches and discuss deep recurrent Q-networks (DRQNs), which can better approximate actual Q-values from sequences of observations, leading to better policies in a partially observable environment. \subsection{Optimal policy solution using Q-learning} In the case where the system has access to the transition probability functions and rewards for any state-action pair, the MDP can be solved through dynamic programming (DP) approaches to find the optimal control policy \cite{b24, b25}. However, in our case, the system cannot precisely predict the transition probability distributions and rewards.
To address this limitation, reinforcement learning is used, in which the lack of information is overcome by making observations from experience \cite{b20}. Among the different reinforcement learning techniques, Q-learning can find the optimal state-action values for any MDP independently of the policy being followed. Given the controlled system, the learning node $i$ repeatedly observes the current state $O^t_i$ and takes an action $X^t_i$ that incurs a transition; then it observes the new state $O^{t+1}_i$ and the reward $\psi^t_i$. From these observations, it can update its estimate of the Q-function for state $O^t_i$ and action $X^t_i$ as follows: \begin{equation}\label{19} \begin{aligned} Q_i(O^t_i,X^t_i) \leftarrow &(1-\alpha)\cdot Q_i(O^t_i,X^t_i)\\ &+\alpha\cdot[\psi^t_i+\gamma\max_{X'\in \mathbf{X}(O^{t+1}_i)}Q_i(O^{t+1}_i,X')], \end{aligned} \end{equation} where $\alpha$ is the \textit{learning rate} (0$<$$\alpha$$<$1), balancing the weight of what has already been learned with the weight of the new observation, and $\gamma$ is the \textit{discount factor} (0$<$$\gamma$$<$1). The most common action selection rule is the $\epsilon$-greedy algorithm, which behaves greedily most of the time, i.e., greedy selection ($X^t\doteq\arg\max_{X'}Q(O^t,X')$), and explores other options by selecting a random action with a small probability $\epsilon$. The greedy selection and the $\epsilon$-probability random selection are called exploitation and exploration, respectively. Non-optimal action selection can be uniform during exploration ($\epsilon$-greedy algorithm) or biased by the magnitudes of the Q-values (such as in Boltzmann exploration). Moreover, we discuss the computational complexity of the Q-learning algorithm. Q-learning requires storing a $|\mathcal{O}|\times|\mathcal{X}|$ table of Q-values, i.e., $Q(O,X)$ for all $O\in \mathcal{O}$ and $X\in \mathcal{X}$. In our problem, the sizes of the local observation space $|\mathcal{O}_i|$ and the local action space $|\mathcal{X}_i|$ are calculated as $\prod_{k\in\mathcal{K}}\Big(2\times(1+\overline{b}_{i,k})^2\Big)\times(1+N^c_i)\times(1+N^m_i)$ and $(I+2)^K\times\prod_{k\in\mathcal{K}}\Big(1+\lfloor\frac{U^m_i}{L^m_k}\rfloor\Big)$, respectively. When $I=5$, $K=3$, $\overline{b}_{i,k}=5$, $N^c_i=5$, $N^m_i=5$, and $\lfloor\frac{U^m_i}{L^m_k}\rfloor=5$, one node $i$ has to update a total of $9.955\times10^{8}$ Q-values, which makes it impossible for the conventional Q-learning process to converge within a limited number of time steps. This problem is even more pronounced in multi-agent scenarios, where the number of joint actions grows exponentially with the number of agents in the system.
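For reference, the tabular update in (\ref{19}) with $\epsilon$-greedy selection can be sketched in a few lines of Python for a small, fully enumerable observation-action space; the table sizes below are illustrative, and the complexity count above shows why such a table is impractical at the scale of our problem.
\begin{verbatim}
import numpy as np

def q_update(Q, o, x, r, o_next, alpha=0.1, gamma=0.9):
    # Q(o,x) <- (1-alpha)*Q(o,x) + alpha*[r + gamma*max_x' Q(o',x')]
    Q[o, x] = (1 - alpha) * Q[o, x] + alpha * (r + gamma * Q[o_next].max())

def epsilon_greedy(Q, o, eps, rng):
    # explore uniformly with probability eps, otherwise exploit argmax
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[o]))

rng = np.random.default_rng(0)
Q = np.zeros((1000, 32))   # illustrative |O| x |X| table
\end{verbatim}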
\subsection{Convergence to equilibrium} Let $\pi^*_i$ denote the optimal policy of node $i$ given the policies of the other nodes. Recall from fictitious play \cite{b23-3} that the exploration strategy is required to be asymptotically myopic to ensure that a Nash equilibrium will be reached in multi-agent RL strategies. An action selection rule $\pi_i$ is said to be asymptotically myopic if the loss from agent $i$'s choice of actions at every given history goes to zero as $t$ proceeds \cite{b23-3}: \begin{equation} \label{20} \psi(\pi^t_i)\nearrow\max\{\psi(X_i)|X_i\in\mathcal{X}_i(O_i)\}, \end{equation} as $t\to\infty$, where $\psi$ denotes the reward function. Therefore, the independent multi-agent Q-learning in cooperative systems will converge to equilibrium almost surely when the following conditions are satisfied \cite{b23-1}: \begin{itemize} \item The learning rate $\alpha$ decreases over time such that $\sum_t\alpha=\infty \text{ and } \sum_t\alpha^2<\infty$. \item Each node visits every action infinitely often. \item The probability $\mathbb{P}_i^t(x)$ that node $i$ selects action $x$ is nonzero for all $x\in\mathcal{X}(o)$. \item The exploration strategy of each node is exploitative such that $$\lim_{t\to\infty}\mathbb{P}_i^t(\pi^t_i)=0,$$ where $\pi^t_i$ is a random variable denoting that a non-optimal action was taken based on the estimated Q-values of node $i$ at time $t$. \end{itemize} The first two conditions are required for convergence in Q-learning, while the third ensures that nodes explore with a positive probability at all times, which will ensure the second condition. Last but not least, the fourth condition guarantees that agents exploit their knowledge as the number of time steps increases. In fact, the convergence of Q-learning does not depend on the exploration strategy used, which implies that there is no restriction on how actions are chosen as long as every action is visited infinitely often. However, effective exploration strategies will encourage a long-run optimal equilibrium \cite{b23-1}. To this end, we propose an adjusted exploration-exploitation method in the next subsection. \subsection{Deep Q-learning with nonlinear transformation} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{SDN_fog_node2.pdf} \caption{Application of a deep Q-network (DQN) to approximate the optimal joint offloading and resource allocation control policy of the SDN-based fog nodes.} \label{figure2} \end{figure} \begin{figure*}[t] \begin{equation}\label{twenty} \begin{aligned} \mathcal{L}_i\left({\theta}^t_i\right) = \mathbb{E}\left[\left(\psi^t+\gamma\max_{X'\in \mathcal{X}_i}Q_i\left(O_i^{t+1},X';{\theta}^{t-1}_i\right)-Q_i\left(O_i^t,X_i^t;{\theta}^t_i\right)\right)^2\right] \end{aligned} \end{equation} \begin{equation} \label{twentyone} \begin{aligned} \nabla_{{\theta}^t_i}\mathcal{L}_i\left({\theta}^t_i\right)=\mathbb{E}\Big[\Big(\psi^t+&\gamma\max_{X'\in \mathcal{X}_i}Q_i\left(O_i^{t+1},X';{\theta}^{t-1}_i\right)-Q_i\left(O_i^t,X_i^t;{\theta}^t_i\right)\Big)\cdot\nabla_{{\theta}^t_i} Q_i\left(O_i^t,X_i^t;{\theta}^t_i\right)\Big] \end{aligned} \end{equation} \hrulefill \end{figure*} To solve the scalability issues of Q-learning, we adopt Q-learning with a neural network, called a deep Q-network (DQN). The DQN embraces the advantages of deep neural networks (DNNs) to train the learning process at each node $i\in\mathcal{I}$, thereby improving the learning speed and the performance of Q-learning algorithms. The Q-network can be trained by iteratively adjusting the weights $\theta$ to minimize a sequence of loss functions $\mathcal{L}_i\left({\theta}^t\right)$, where the loss function at time slot $t$ is defined in (\ref{twenty}). Precisely, given a transition $\langle O_i^t,X_i^t,\psi^t,O_i^{t+1}\rangle$, the weights $\theta^t_i$ of the Q-network of node $i\in\mathcal{I}$ are updated in a way that minimizes the squared error loss between the current predicted Q-value $Q_i\left(O_i^t,X_i^t\right)$ and the target Q-value $\left[\psi^t+\gamma\max_{X'\in \mathcal{X}_i}Q_i(O_i^{t+1},X')\right]$. The gradient of the loss function in (\ref{twenty}) with respect to the weights ${\theta}^t_i$ is given by (\ref{twentyone}).
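A minimal TensorFlow sketch of this squared-TD-error update is given below, assuming that \texttt{q\_net} and \texttt{target\_net} are Keras models mapping a batch of observations to Q-values over the (flattened) action set; the function and variable names are illustrative.
\begin{verbatim}
import tensorflow as tf

def dqn_step(q_net, target_net, batch, gamma, optimizer):
    # batch: (obs, actions, rewards, next_obs) tensors
    obs, actions, rewards, next_obs = batch
    # target: y = r + gamma * max_x' Q_target(o', x')
    y = rewards + gamma * tf.reduce_max(target_net(next_obs), axis=1)
    with tf.GradientTape() as tape:
        q_all = q_net(obs)                                 # (batch, |X|)
        q_taken = tf.gather(q_all, actions, batch_dims=1)  # Q(o_t, x_t)
        loss = tf.reduce_mean(tf.square(tf.stop_gradient(y) - q_taken))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss

# every C steps the target network is refreshed:
# target_net.set_weights(q_net.get_weights())
\end{verbatim}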
Moreover, in the DQN algorithm, the experience replay technique is adopted as the training method to address the instability of the Q-network due to the use of non-linear approximation functions \cite{b26}. Hence, the transition experiences $e^t_i=\langle O_i^t,X_i^t,\psi^t,O_i^{t+1}\rangle$ are stored into a replay buffer $\mathcal{M}_i=\{e^{t-\mathcal{D}_i}_i,\ldots,e^t_i\}$, where $\mathcal{D}_i$ is the replay buffer capacity. Due to possible delays in the reward feedback between fog nodes, past transition experiences may need to wait in a temporary buffer until the rewards can be combined (as in (\ref{eighteen})); they are transferred to the replay buffer as soon as the combined reward is ready. At each time step, instead of the most recent transition $e^t_i$, a random mini-batch $\mathcal{N}_i$ of transitions from the replay memory is chosen to train the Q-network of node $i$. Furthermore, every $\mathcal{C}$ time steps, the network $Q_i$ is duplicated to obtain a target network $\hat{Q}_i$, which is used for generating the target Q-values for the following $\mathcal{C}$ updates to $Q_i$. In addition, to guarantee the convergence and accuracy of the neural network, we adopt an adjusted exploration-exploitation scheduling method. At the beginning of the process, the agent with a normal $\epsilon$-greedy algorithm selects more random actions with a probability $\epsilon=\epsilon_{start}$ to encourage initial exploration. Then, the exploration rate is asymptotically decayed with $\epsilon_{decay}$ until it reaches a certain minimum value $\epsilon_{min}$, which is preserved until the last iteration. Since $\epsilon_{min}$ is a very small number, after this initial exploration phase most decisions take the highest estimated value at the present iteration. However, this often leads to a sub-optimal policy due to exploiting bad estimates of the Q-values, which were learned during the early iterations, and insufficient exploration in large state-action space cases. To deal with this problem, the adjusted exploration-exploitation method allows the agent to shift back into exploratory mode every $\mathcal{R}^\epsilon$ time slots, where the starting exploration probability $\epsilon_{start}$ is multiplied by $\delta^\epsilon$ (0$<$$\delta^\epsilon$$<$1) at every update. Therefore, this method efficiently avoids the exploitation of incorrect actions by selecting better estimates of the Q-values as the learning progresses. The optimal control policy learning implementation using the DQN algorithm is illustrated in Fig. \ref{figure2}. \subsection{Deep-recurrent Q-learning for partial observability} Another problem is that estimating a Q-value from an immediate observation in DQN can be arbitrarily wrong, since the network states are partially observed and hinge upon multiple users \cite{b27}. Any system that requires a memory of more than an immediate observation will appear to be non-Markovian, because the future system states depend on more than just the current input. This issue can be solved by allowing the agent to perform a temporal integration of observations. The solution adopted in \cite{b26} stacks the last four observations in memory and feeds them to a convolutional neural network (called DCQN) instead of a single observation at a time. However, the DCQN takes a fixed-size vector as input, a stack of 4 observations in \cite{b26}, which limits its usage in situations that involve a sequence-type input with no predetermined size.
To address this fixed-input limitation, we implement a DRQN, which replaces the DCQN's first fully connected layer with a recurrent layer. With a recurrent layer, the neural network can condition its output on the temporal pattern of observations by maintaining a hidden state that it updates at every iteration. The recurrent layer feeds the hidden state back into itself, and thus it can maintain internal states and aggregate observations. \begin{figure*} \centering\includegraphics[width=1\linewidth]{DRQN.pdf}\par \caption{The proposed DRQN structure with GRU.} \label{DRQN} \end{figure*} However, during backpropagation, vanilla recurrent neural networks suffer from the vanishing gradient problem: layers that receive small gradient values stop learning, and thus the network may forget important information from early in the sequence. To tackle this problem, we use a gated recurrent unit (GRU) for the recurrent layer. Similar to long short-term memory (LSTM), the GRU was introduced as a solution to the short-term memory of vanilla recurrent neural networks \cite{b28}. The main concept of the GRU is a gating mechanism, which learns which information is relevant to keep or forget during training in the recurrent network. A GRU has two gates (reset and update) and is known to be computationally more efficient and faster than an LSTM, which consists of three gates (forget, input, and output) and a cell state, while its performance is comparable to that of an LSTM \cite{b28, b29}. The proposed DRQN structure is illustrated in Fig. \ref{DRQN}. \begin{equation} \begin{split} G^t_r &=\sigma(W^s_{r}S^{t-1}+W^g_{r}G^t_{i}+\text{bias}_{r}), \\ G^t_z &=\sigma(W^s_{z}S^{t-1}+W^g_{z}G^t_{i}+\text{bias}_{z}), \\ \tilde{S}^t &=\tanh(W^{s}(G^t_r\odot S^{t-1})+W^{g}G^t_{i}+\text{bias}), \\ S^t &=G^t_z\odot S^{t-1}+(1-G^t_z)\odot\tilde{S}^t, \end{split} \end{equation} where $G^t_i$ is the input to the recurrent layer at time $t$, $S^{t-1}$ and $S^t$ are the previous and updated hidden states, $\tilde{S}^t$ is the candidate state, and $G^t_r$ and $G^t_z$ are the reset and update gates, respectively. With these gates, the recurrent network can learn how to use some of its units to selectively cancel out irrelevant information and protect the state. The sigmoid activations produce gate values between 0 and 1 for each state element, determining how much information is passed on, while the tanh activation bounds the candidate state. Algorithm~\ref{DQR} details the procedure of the proposed learning algorithm at fog node $i$. The neural network takes the state sequence as an input to the first convolutional layer. Since the valid action space depends on the current state value, we include a step in the action selection that sets the probability of invalid actions to zero and re-normalizes the sum of the probabilities of the remaining actions to 1. \begin{algorithm} \DontPrintSemicolon \LinesNumbered \textbf{Set} Initialize replay buffer $\mathcal{M}_i$ to capacity $\mathcal{D}_i$, state-action value function $Q_i$ with random weights $\theta_i$, target state-action value function $\hat{Q}_i$ with weights $\theta_{i-}=\theta_i$.
\While{(t $\le$ maximum iteration)} { Observe the arriving task $A_i^t$, buffer states $B_i^t, B^{e,t}_i$, and resource status $R_i^t$, combine them as $O_i^t$, and take $\hat{O}_i^t$ as an input to the DRQN network with parameters $\theta_i$, where $\hat{O}_i^t$ is a state sequence \; Calculate $\epsilon=\max[\exp{(-\epsilon_{decay}\cdot(t\mod{\mathcal{R}^\epsilon})+\log{\epsilon_{start}})}, \epsilon_{min}]$, and choose a random action $X_i^t$ from the valid action space $\mathcal{X}(O_i^t)$ with probability $\epsilon$; otherwise select $X_i^t\doteq\arg\max_{X'}Q(O_i^t,X';\theta_i)$\; Execute action $X_i^t$; offload tasks according to $X_i^{f,t}$ and allocate the resources according to $X_i^{w,t}$ \; Observe the local reward $\psi_i^t$ and next state $O_i^{t+1}$, and receive the reward feedback from other nodes $\psi_{j\neq i}^t$\; Save transition $\langle O_i^t,X_i^t,\psi^t,O_i^{t+1}\rangle$ in $\mathcal{M}_i$\; Sample a random mini-batch $\mathcal{N}_i=\langle \hat{O}_i^n,X_i^n,\psi^n,\hat{O}_i^{n+1}\rangle$ from $\mathcal{M}_i$\; Set $y_i^n=\psi^n+\gamma\max_{X'\in \mathcal{X}_i}Q_i(O_i^{n+1},X';\theta_{i-}^t)$\; Perform a gradient step in (\ref{twentyone}) with respect to the parameters $\theta_i^t$\; Every $\mathcal{C}$ time steps, reset the target network parameters $\theta_{i-}^{t+1}=\theta_i^t$\; Every $\mathcal{R}^\epsilon$ time steps, update $\epsilon_{start}=\delta^\epsilon\cdot\epsilon_{start}$ and $\epsilon_{decay}=-\log{\frac{\epsilon_{min}}{\mathcal{R}^\epsilon}}-\frac{\epsilon_{start}}{\mathcal{R}^\epsilon}$\; \textit{t}$\leftarrow$\textit{t}$+1$ } \caption{The deep recurrent Q-learning algorithm for approximating the optimal state-action value functions of a fog node $i\in \mathcal{I}$ with experience memory\label{DQR}} \end{algorithm} \section{Performance evaluation}\label{sectionfive} In this section, we quantify the performance gain of the proposed DRQN-based learning algorithm for the heterogeneous task offloading and resource allocation problem in multi-fog networks using numerical experiments based on a Python-TensorFlow simulator. We used three different environments, equipped with an Intel(R) Core i7-7500U CPU @ 2.7GHz 64-bit OS, an Intel(R) Xeon(R) CPU E3-1225 v6 @ 3.3GHz 64-bit OS, and an AMD Ryzen Threadripper 1920X 12-Core Processor. \subsection{Simulation settings} For our simulations, we consider a fog layer consisting of five fog nodes that are randomly distributed within a network area of $100\times 100$ $m^{2}$. In addition, a total of three different slices are created on top of each fog node. Slice characteristics are customized by two levels of heterogeneity, namely the resource demand types and delay constraints, which are summarized in Table \ref{table1}. As an example of the slice characteristics in Table \ref{table1}, slice $k$ can be dedicated to Standard resource type services that must meet the delay-critical constraint. To obtain realistic values for the processing capacities of fog nodes, we use the CPU processing densities and memory sizes from \cite{b18-1}, which used real application data including a YouTube video data set \cite{b26-1}. For slice $k$ at fog node $i$, the task arrivals follow a Bernoulli distribution with parameter $\lambda_{i,k}$ (in task/slot) and the packet size is 5$\cdot10^6$ bits. Additionally, the buffer size in each slice is 10, which means that a maximum of 10 slice tasks can stay in the buffer concurrently until processing terminates. The path loss constant and exponent are set to $10^{-3}$ and 4, respectively. The bandwidth for each fog node is 1MHz.
The transmission power of the fog node is 20dBm, while the noise power density is -174dBm/Hz \cite{b11}. In regard to the resource capacity distribution at fog nodes, the CPU speed of a fog node is randomly sampled from [5GHz, 6GHz, 7GHz, 8GHz, 9GHz, 10GHz], while the memory size of a fog node is randomly sampled from [2.4GB, 4GB, 8GB]. The allocation units of CPU and memory resources are 1GHz and 400MB, respectively. \begin{table}[t] \caption{Two levels of heterogeneity for service tasks in the simulation} \label{table1} \centering \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Resource} \\ \textbf{Types}\end{tabular}} & \multicolumn{3}{c|}{\textbf{Delay Constraints}} \\ \cline{2-4} & Critical & Sensitive & Tolerant \\ \hline Standard & \begin{tabular}[c]{@{}c@{}}$D_k^{max}=10$ms\\ $L^c_k=400$ cycles/bit\\ $L^m_k=400$ Mbytes\end{tabular} & \begin{tabular}[c]{@{}c@{}}50ms\\ 400 cycles/bit\\ 400 Mbytes\end{tabular} & \begin{tabular}[c]{@{}c@{}}100ms\\ 400 cycles/bit\\ 400 Mbytes\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}CPU \\ intensive\end{tabular} & \begin{tabular}[c]{@{}c@{}}$D_k^{max}=10$ms\\ $L^c_k=600$ cycles/bit\\ $L^m_k=400$ Mbytes\end{tabular} & \begin{tabular}[c]{@{}c@{}}50ms\\ 600 cycles/bit\\ 400 Mbytes\end{tabular} & \begin{tabular}[c]{@{}c@{}}100ms\\ 600 cycles/bit\\ 400 Mbytes\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Memory \\ intensive\end{tabular} & \begin{tabular}[c]{@{}c@{}}$D_k^{max}=10$ms\\ $L^c_k=200$ cycles/bit\\ $L^m_k=1200$ Mbytes\end{tabular} & \begin{tabular}[c]{@{}c@{}}50ms\\ 200 cycles/bit\\ 1200 Mbytes\end{tabular} & \begin{tabular}[c]{@{}c@{}}100ms\\ 200 cycles/bit\\ 1200 Mbytes\end{tabular} \\ \hline \end{tabular} \end{table} To evaluate the performance of different neural network settings, three neural networks are considered to estimate the Q-value in our simulation. For all of them, the output layer is a fully connected layer of $|\mathcal{X}_i|$ units, where $|\mathcal{X}_i|$ represents the dimension of the action set. Additionally, the activation function of the output layer is a linear activation function, whose outputs correspond to the predicted Q-values of all possible actions. These neural networks differ from each other in the input layer and the hidden layers, as detailed below. \begin{enumerate} \item \textit{DRQN}: for the design of the deep recurrent Q-network, the input to the network consists of $Seq\times|O_i(t)|$ values, where $|O_i(t)|$ is the dimension of the state set and $Seq$ is the sequence size for the 1D-convolutional network. The first hidden layer convolves 32 filters with a kernel size of 3 and applies a Rectified Linear Unit (ReLU). The second hidden layer convolves 64 filters with a kernel size of 3, again followed by a ReLU. This is followed by a recurrent layer in which we use a GRU. The number of units in the GRU cell is 128 and the sequence length is 10. The final GRU state is followed by a fully connected layer with ReLU, which has 64 units. \item \textit{DCQN}: the deep convolutional Q-network is almost identical to the deep recurrent Q-network except for the recurrent hidden layer. The resulting activations from the second convolutional hidden layer are followed by two fully connected layers with ReLU, the first of which has 128 units and the second 64 units. \item \textit{DQN}: we use four fully connected hidden layers consisting of 64, 128, 128, and 64 units with ReLU.
\end{enumerate} In all the experiments, we use the Adam optimizer with a learning rate of 0.001, and learning starts after $10^4$ iterations. A discount factor $\gamma$ of 0.98 is used in the Q-learning update. The replay memory size $\mathcal{D}_i$ is $10^4$. The target network parameters are updated every $\mathcal{C}=10^3$ time slots. We use a mini-batch size of 32 transition samples per update. The $\epsilon$-renewal factor $\delta^{\epsilon}$, $\epsilon$-renewal rate $\mathcal{R}^{\epsilon}$, $\epsilon$-start $\epsilon_{start}$, and minimum-$\epsilon$ $\epsilon_{min}$ are set to 0.9, 5000, 1, and 0.01, respectively. For performance comparisons, existing methods are simulated as baseline schemes. Given the large state and action spaces in the problem considered, we compare methods that are practicable using limited computational resources. Specifically, one baseline offloading method is used, as follows: \begin{itemize} \item Threshold Offloading with Nearest Node selection: the node offloads its tasks only if the buffer occupancy is above a certain threshold; we set the threshold to 0.8, which implies that a task is offloaded if the buffer is more than 80$\%$ full. Also, the node selects the nearest neighboring node, aiming to minimize communication delay and energy consumption; this offloading rule is widely used in IoT and device-to-device communications. \end{itemize} On the other hand, two conventional resource allocation algorithms are simulated as baseline schemes, namely: \begin{itemize} \item Round Robin (RR): this algorithm allows every slice that has tasks in the queue to take turns in processing on a shared resource in a periodically repeated order. \item Priority Queuing (PQ): this algorithm handles the scheduling of the tasks following a priority-based model. Tasks are scheduled to be processed from the head of a given queue only if all queues of higher priority are empty, where priority is determined by the delay constraint.
\end{itemize} \begin{table}[t] \caption{Simulation cases according to the three slices' characteristics} \label{table2} \centering \small \begin{tabular}{|c|c|c|c|} \hline \multicolumn{1}{|l|}{\textbf{Case}} & \textbf{fog slice-1} & \textbf{fog slice-2} & \textbf{fog slice-3} \\ \hline \multirow{2}{*}{\textbf{case-1}} & \footnotesize\begin{tabular}[c]{@{}c@{}}Standard\\ Delay-Critical\end{tabular} & \footnotesize\begin{tabular}[c]{@{}c@{}}CPU-intensive\\ Delay-Critical\end{tabular} & \footnotesize\begin{tabular}[c]{@{}c@{}}Memory-intensive\\ Delay-Critical\end{tabular} \\ \cline{2-4} & \multicolumn{3}{c|}{\small\begin{tabular}[c]{@{}c@{}}All slices have the same delay constraint,\\ but different resource type tasks\end{tabular}} \\ \hline \multirow{2}{*}{\textbf{case-2}} & \footnotesize\begin{tabular}[c]{@{}c@{}}Standard\\ Delay-Critical\end{tabular} & \footnotesize\begin{tabular}[c]{@{}c@{}}Standard\\ Delay-Sensitive\end{tabular} & \footnotesize\begin{tabular}[c]{@{}c@{}}Standard\\ Delay-Tolerant\end{tabular} \\ \cline{2-4} & \multicolumn{3}{c|}{\small\begin{tabular}[c]{@{}c@{}}All slices have the same resource type tasks,\\ but different delay priorities\end{tabular}} \\ \hline \multirow{2}{*}{\textbf{case-3}} & \footnotesize\begin{tabular}[c]{@{}c@{}}Standard\\ Delay-Critical\end{tabular} & \footnotesize\begin{tabular}[c]{@{}c@{}}CPU-intensive\\ Delay-Critical\end{tabular} & \footnotesize\begin{tabular}[c]{@{}c@{}}Standard\\ Delay-Sensitive\end{tabular} \\ \cline{2-4} & \multicolumn{3}{c|}{\small\begin{tabular}[c]{@{}c@{}}Some slices have the same resource type tasks\\ but different delay priorities, and vice versa\end{tabular}} \\ \hline \end{tabular} \end{table} For different evaluation scenarios, we specifically assign three different cases of slice characteristics to analyze how the slices' different resource demands and delay priorities are interrelated. The three simulation cases are summarized in Table \ref{table2}. It is noted that this evaluation can easily be expanded by configuring Table \ref{table1} and Table \ref{table2} to suit the needs of the fog network. \subsection{Performance analysis} In this subsection, we evaluate the performance of the proposed algorithm by comparing the simulation results under different system parameters. \subsubsection{Complexity analysis} Here, the memory complexity and processing time of the proposed algorithm are investigated. The proposed DRQN-based learning algorithm described in Section \ref{sectionfour}.C requires storing a replay buffer which consists of the state $O^t_i$, action $X^t_i$, reward $\psi^t$, next state $O^{t+1}_i$, and the valid action space of the next state $\Xi(O^{t+1}_i)$ for the target Q-network. In a single transition experience, the state, action, and reward require storing a $\mathbf{len}~A_i+\mathbf{len}~B_i+\mathbf{len}~B^e_i+\mathbf{len}~R_i=3\times K\times2$ size array of decimal values, a single integer, and a single float, respectively. Moreover, the valid action space of the next state is a $|\mathcal{X}_i|$-size array of binary values, where $\Xi[X']=0$ if $X'\in\mathcal{X}_i$ is invalid in state $O^{t+1}_i$ and $\Xi[X']=1$ otherwise. Finally, using the parameters in Section \ref{sectionfour}.A, the proposed algorithm requires much less memory than the conventional Q-learning algorithm, i.e., approximately 2.8GB compared to 7.24TB.
It is worth mentioning that we leverage a Python-based simulator in which an array as a whole is an object that stores the float data in consecutive regions of memory, and thus the memory size of an individual float is not explicit. Therefore, the compared memory usage corresponds to the data contained in the object. Furthermore, compared to a centralized architecture, multi-node learning alleviates the overhead on the network infrastructure and improves the system response time. In our simulation, assuming each node is equipped with a single CPU, the processing time per iteration is around 0.04s. Moreover, the proposed neural network model can be trained in parallel on multiple CPUs or GPUs to improve the training time and memory usage. \subsubsection{Convergence performance} In this experiment, we evaluate the convergence property of different neural networks with the above parameter settings to confirm whether the proposed deep reinforcement learning-based algorithm can achieve stable performance. To quantify the impact of task traffic status on the convergence performance, we implement two different average task arrival rates, categorized as normal ($\bar{\lambda}=0.6$) and heavy ($\bar{\lambda}=0.8$), where $\bar{\lambda}$ is the average task arrival rate per slice at the fog node. Since a uniform random policy runs for $10^4$ iterations at all nodes before learning starts, the total average reward does not improve during this period, and thus we show the average total reward of fog nodes after they start learning their networks. Once each node starts learning its own state-action value functions with the preassigned neural network, the total average rewards increase and asymptotically converge after around $1.5\times10^4$ iterations, as shown in Fig. \ref{figure3}. In regard to the average task arrival rate, when the nodes receive a smaller number of tasks per time slot, the average total reward is larger across all simulation cases. The main reason is that, given the limited resources of the fog nodes, more tasks are processed successfully when fewer tasks are waiting in the buffer, and the buffers are less likely to overflow. Given the findings from this experiment, the proposed algorithm using DRQN achieves a greater total reward than DQN and DCQN. This result implies that DRQN controllers can handle partial observability by retaining some memory of previous observations, helping the nodes achieve better decision making. \begin{figure*} \centering\includegraphics[width=1\linewidth]{all_cases_learning_start_new.png}\par \caption{The convergence property of the proposed algorithm using different neural networks.} \label{figure3} \end{figure*} \subsubsection{Performance under different characteristics of fog slices} This experiment mainly aims to demonstrate the performance in terms of the average processing success rate, the average overflow rate, and the average task latency under the different characteristics of fog slices shown in Table \ref{table2}. As in the previous experiment, two kinds of traffic demands, normal and heavy, are used for evaluation. Fig. \ref{normalall} and Fig. \ref{heavyall} illustrate the average success rate of tasks and the average overflow rate per fog node. From Fig. \ref{normalall} (a) and Fig. \ref{heavyall} (a), it can be observed that the proposed scheme using DRQN achieves the highest average success rate.
Moreover, as the requested traffic increases, a larger number of tasks fail to meet their delay performance requirements due to the lack of resources. Comparing the three cases, when the nodes receive the same traffic rate but the tasks demand high computation and memory resources (Case-1), fog nodes are more likely to experience a lower success rate. This is because processing takes longer with limited resources, which also leads to violations of the delay requirements. In Fig. \ref{normalall} (b) and Fig. \ref{heavyall} (b), it is shown that, when implementing the baseline methods, extremely large numbers of tasks are dropped due to overloaded buffers. This is because fog nodes always select the nearest fog node to offload their tasks, so the same fog node can be selected by several neighbors and its buffers quickly fill up with tasks from multiple neighboring nodes. Furthermore, the resource allocation methods also affect the overflow performance. As mentioned above, slices with large resource demands take more processing time than slices with small resource demands. Thus, when implementing the RR and PQ methods, there is unfairness toward slices with small resource demands in the allocation, which induces a high number of task drops from overloaded buffers. On the other hand, even though the slices of Case-2 comprise tasks with the same resource demands, their overflow rate is higher than that of Case-1 and Case-3. The difference in Case-2 is that fog slice-3 of each node is dedicated to delay-tolerant tasks, which can stay in the buffer until they exceed their large delay limit. Hence, the average processing success rate of Case-2 is higher due to the relatively generous delay limits, while the average overflow performance is worse. However, the proposed algorithm can offload tasks to different neighboring nodes depending on their buffer and resource status and can avoid unfairness among slices with different priorities in the resource allocation process, leading to an increased average success rate. Moreover, Fig. \ref{normalall} and Fig. \ref{heavyall} show that the variances of the success and overflow rates indicated by the error bars vary from one algorithm to another. The error bars in these figures represent the largest and smallest values among all nodes as the upper and lower limits, respectively. For example, in Fig. \ref{normalall} (a), the success rate of Case-2 using DRQN has a mean of 95.6$\%$ and varies between 95.3$\%$ and 96.1$\%$, while the success rate of Case-2 using nearest node selection with PQ resource allocation has a mean of 62.1$\%$ and varies between 36.8$\%$ and 90.8$\%$. We can clearly see that the variability of the task success rate between fog nodes is greater for the baseline methods than for the proposed algorithm; the same trend is seen in the overflow rate. This result indicates that the proposed algorithm discourages selfish behavior in nodes and achieves a win-win cooperation between fog nodes by making rational offloading and resource allocation decisions. Fig. \ref{figure5} illustrates the average task delay under the different cases. In contrast to the baseline methods, the proposed algorithm decreases the average task delay by selecting a neighboring fog node that minimizes the transmission delay as well as the waiting time in the buffers, and by allowing distinct resource allocation with respect to the characteristics of each slice.
\begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{rev_normal_allcases.png} \caption{Performance of (a) average success rate and (b) average overflow rate of normal traffic under different cases.} \label{normalall} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{rev_heavy_allcases.png} \caption{Performance of (a) average success rate and (b) average overflow rate of heavy traffic under different cases.} \label{heavyall} \end{figure} \begin{figure*} \centering\includegraphics[width=0.65\linewidth]{all_cases_delay_new.png} \caption{Task delay performance of (a) normal traffic and (b) heavy traffic under different cases.} \label{figure5} \end{figure*} \subsubsection{Performance under different task arrival rates} In this experiment, we consider how the task arrival rates impact the average performance in terms of the average processing success rate, the average overflow rate, and the average task latency per slice. In this simulation, we set the slice characteristics of all fog nodes to Case-2. In Fig. \ref{figure6} (a), as the task arrival rate increases, more tasks fail to be completed within their delay limits. Similar observations can be made in Fig. \ref{figure6} (b), where more tasks are dropped as the task arrival rate increases from 0.5 to 0.9. The reason is that, as the task arrival rate increases, the waiting time becomes longer due to the larger number of tasks waiting in the buffer, which means that tasks are more likely to fail their delay requirements or to be dropped if they arrive when the buffer is full. Meanwhile, across the range of task arrival rates, the proposed algorithm achieves the maximum success rate and the minimum overflow rate. Moreover, as shown in Fig. \ref{figure6} (c), the DRQN-based algorithm has the lowest average delay in Case-2. These empirical results show that the temporal integration of observations by a recurrent network allows the nodes to coordinate their choices without knowing the explicit state and action sets of the others, which makes the proposed DRQN-based algorithm relatively robust to the dynamics of the partially observable environment. These results also demonstrate that intelligently distributing resources to slices with different delay constraints has a significant impact on the overall system performance. \begin{figure*} \centering\includegraphics[width=0.9\linewidth]{rev_plot_case2.png} \caption{Performance of (a) average success rate, (b) average overflow rate, and (c) average task delay of Case-2 under different task arrival rates.} \label{figure6} \end{figure*} \section{Conclusion}\label{sectionsix} In this paper, we devised a joint heterogeneous task offloading and resource allocation algorithm whose goal is to maximize the number of tasks completed within their delay constraints while minimizing the task drops from buffer overflows. The SDN-based fog network we consider has multiple fog nodes that coordinate to achieve the best overall network performance without knowing the explicit status of the other fog nodes. In the presence of uncertainties stemming from task demands and resource status, we formulated the problem as a partially observable stochastic game and applied cooperative multi-agent deep reinforcement learning with a global reward that maximizes the common goal of the nodes and stabilizes the convergence behavior.
Further, we implemented a recurrent neural network to tackle the partial observability by maintaining internal states and aggregating temporal observations. The simulation results show that the proposed DRQN-based algorithm achieves a higher average success rate and a lower average overflow rate than DQN and DCQN, as well as non-deep-learning baseline methods. In the future, we will extend the multi-agent learning to scenarios with agents in large-scale fog networks that have differing reward functions.
\section{Introduction} \label{sec:intro} Interactions of cosmic-ray particles with detector materials can produce radioactive isotopes that create backgrounds for experiments searching for rare events such as dark matter interactions and neutrinoless double beta decay. Silicon is a widely used detector material because it is available with very high purity, which leads to low intrinsic radioactive backgrounds. In particular, solid-state silicon-based detector technologies show promise because their eV-scale energy thresholds~\cite{PhysRevLett.123.181802,Abramoff:2019dfb,Agnese:2018col} provide sensitivity to scattering events between atoms and ``low-mass'' dark matter particles with masses below 1\,GeV/c$^{2}$~\cite{Essig:2015cda}. Three prominent low-mass dark matter efforts that employ silicon detectors are DAMIC~\cite{aguilararevalo2020results}, SENSEI~\cite{Abramoff:2019dfb}, and SuperCDMS~\cite{PhysRevD.95.082002}. All three use the highest-purity single-crystal silicon as detector substrates~\cite{VONAMMON198494}, with sensors fabricated on the surfaces for readout of charge or phonons and installed in low-background facilities to reduce the event rate from environmental backgrounds. A primary challenge in these rare-event searches is to distinguish potential signal events from the much higher rate of interactions due to conventional sources of radiation, both from the terrestrial environment and in the detector materials. A variety of mitigation strategies are used to minimize backgrounds; nevertheless, a nonzero residual background expectation is generally unavoidable. Beta-emitting radiocontaminants in the bulk and on the surfaces of the detectors are especially challenging in the search for dark matter because the decay products can produce energy signals that are indistinguishable from the expected signal. Both DAMIC and SuperCDMS have investigated these detector backgrounds (see, e.g., Refs.~\cite{Aguilar-Arevalo:2015lvd,aguilararevalo2020results,PhysRevD.95.082002,Orrell:2017rid}), and they have identified $^3$H~(tritium), $^{32}$Si (intrinsic to the silicon) and $^{210}$Pb (surface contamination) as the leading sources of background for future silicon-based dark matter experiments. Unlike for $^{32}$Si, there are not yet any direct measurements of the tritium background in silicon; current estimates are based on models that have yet to be validated. Tritium and other radioactive isotopes such as $^7$Be~and $^{22}$Na~are produced in silicon detectors as a result of cosmic-ray exposure, primarily due to interactions of high-energy cosmic-ray neutrons with silicon nuclei in the detector substrates~\cite{cebrian,Agnese:2018kta}. The level of background from cosmogenic isotopes in the final detector is effectively determined by the above-ground exposure time during and following detector production, the cosmic-ray flux, and the isotope-production cross sections. The neutron-induced production cross sections for tritium, $^7$Be, and to a lesser extent $^{22}$Na, are not experimentally known except for a few measurements at specific energies. There are several estimates of the expected cross sections; however, they vary significantly, leading to large uncertainties in the expected cosmogenic background for rare-event searches that employ silicon detectors. 
To address this deficiency, we present measurements of the integrated isotope-production rates from a neutron beam at the Los Alamos Neutron Science Center (LANSCE) ICE HOUSE facility \cite{lisowski2006alamos, icehouse}, which has a similar energy spectrum to that of cosmic-ray neutrons at sea level. This spectral-shape similarity allows for a fairly direct extrapolation from the measured beam production rates to the expected cosmogenic production rates. While the spectral shape is similar, the flux of neutrons from the LANSCE beam greater than \SI{10}{MeV} is roughly \num{5E8} times larger than the cosmic-ray flux, which enables production of measurable amounts of cosmogenic isotopes in short periods of time. Our measurement will allow the determination of acceptable above-ground residency times for future silicon detectors, as well as improve cosmogenic-related background estimates and thus sensitivity forecasts. We begin in Sec.~\ref{sec:isotopes} with a discussion of radioisotopes that can be cosmogenically produced in silicon, and we identify those most relevant for silicon-based dark matter searches: $^3$H, $^7$Be, and $^{22}$Na. For these three isotopes, we review previous measurements of the production cross sections and present the cross-section models that we use in our analysis. Section~\ref{sec:exposure} introduces our experimental approach, in which several silicon targets---a combination of charge-coupled devices (CCDs) and wafers---were irradiated at LANSCE. In Sec.~\ref{sec:counting} and Sec.~\ref{sec:production_rates} we present our measurements and predictions of the beam-activated activities, respectively. These results are combined in Sec.~\ref{sec:cosmogenic_rates} to provide our best estimates of the production rates from cosmogenic neutrons. In Sec.~\ref{sec:alternate} we evaluate other (non-neutron) production mechanisms and we conclude in Sec.~\ref{sec:discussion} with a summarizing discussion. \section{Cosmogenic Radioisotopes} \label{sec:isotopes} \begin{table}[t] \centering \begin{tabular}{c c c c} \hline Isotope & Half-life & Decay & Q-value \\ & [yrs] & mode & [keV]\\ \hline \vrule width 0pt height 2.2ex $^3$H & 12.32\,$\pm$\,0.02 & $\beta$- & 18.591\,$\pm$\,0.003 \\ $^7$Be & 0.1457\,$\pm$\,0.0020 & EC & 861.82\,$\pm$\,0.02\\ $^{10}$Be & (1.51\,$\pm$\,0.06)$\times$10$^6$ & $\beta$- & 556.0\,$\pm$\,0.6\\ $^{14}$C & 5700\,$\pm$\,30 & $\beta$- & 156.475\,$\pm$\,0.004\\ $^{22}$Na & 2.6018\,$\pm$\,0.0022 & $\beta$+ & 2842.2\,$\pm$\,0.2\\ $^{26}$Al & (7.17\,$\pm$\,0.24)$\times$10$^5$ & EC & 4004.14\,$\pm$\,6.00\\ \hline \end{tabular} \caption{List of all radioisotopes with half-lives $>$\,30 days that can be produced by cosmogenic interactions with natural silicon. All data is taken from NNDC databases \cite{dunford1998online}. \protect\footnotemark[1]} \footnotetext{Unless stated otherwise, all uncertainties quoted in this paper are at 1$\sigma$ (68.3\%) confidence.} \label{tab:rad_isotopes} \end{table} Most silicon-based dark matter experiments use high-purity ($\gg$\,99\%) natural silicon (92.2\% $^{28}$Si, 4.7\% $^{29}$Si, 3.1\% $^{30}$Si \cite{meija2016isotopic}) as the target detector material. The cosmogenic isotopes of interest for these experiments are therefore any long-lived radioisotopes that can be produced by cosmic-ray interactions with silicon; Table~\ref{tab:rad_isotopes} lists all isotopes with half-lives greater than 30 days that are lighter than $^{30}$Si + n/p. None of them have radioactive daughters that may contribute additional backgrounds. 
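To see why the half-life matters, recall that for a constant production rate $R$ the activity builds up as $A(t)=R\,\big(1-e^{-\lambda t}\big)$ with $\lambda=\ln 2/T_{1/2}$. The short numerical sketch below (Python, using the half-lives from Table~\ref{tab:rad_isotopes} and an assumed one-year surface exposure) illustrates how far from saturation the long-lived isotopes remain; the numbers are for orientation only.
\begin{verbatim}
import math

HALF_LIVES_YR = {"H-3": 12.32, "C-14": 5700.0, "Al-26": 7.17e5}

def saturation_fraction(t_yr, half_life_yr):
    """A(t)/R = 1 - exp(-lambda*t) for a constant production rate R."""
    lam = math.log(2.0) / half_life_yr
    return 1.0 - math.exp(-lam * t_yr)

for iso, t_half in HALF_LIVES_YR.items():
    frac = saturation_fraction(1.0, t_half)
    print(f"{iso}: {100.0 * frac:.3g}% of saturation after 1 yr")
# H-3 reaches ~5.5% of its saturation activity, C-14 only ~0.012%,
# and Al-26 about 1e-4%.
\end{verbatim}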
Assuming that effectively all non-silicon atoms present in the raw material are driven out during growth of the single-crystal silicon boules used to fabricate detectors, and that the time between crystal growth and moving the detectors deep underground is typically less than 10 years, cosmogenic isotopes with half-lives greater than 100 years (i.e., $^{10}$Be, $^{14}$C, and $^{26}$Al) do not build up sufficient activity~\cite{reedy2013cosmogenic, caffee2013cross} to produce significant backgrounds. Thus the cosmogenic isotopes most relevant to silicon-based rare-event searches are tritium, $^7$Be, and $^{22}$Na. Tritium is a particularly dangerous background for dark matter searches because it decays by pure beta emission and its low Q-value (\SI{18.6}{\keV}) results in a large fraction of decays that produce low-energy events in the expected dark matter signal region. $^7$Be~decays by electron capture, either directly to the ground state of $^7$Li (89.56\%) or via the \SI{477}{\keV} excited state of $^7$Li (10.44\%). $^7$Be~is not a critical background for dark matter searches, because it has a relatively short half-life (\SI{53.22}{\day}); however, the \SI{54.7}{\eV} atomic de-excitation following electron capture may provide a useful energy-calibration tool. $^{22}$Na~decays primarily by positron emission (90.3\%) or electron capture (9.6\%) to the 1275 keV level of $^{22}$Ne. For thin silicon detectors $^{22}$Na~can be a significant background as it is likely that both the \SI{1275}{\keV} $\gamma$ ray and the \SI{511}{\keV} positron-annihilation photons will escape undetected, with only the emitted positron or atomic de-excitation following electron capture depositing any energy in the detector. Note that compared to $^3$H, the higher $\beta^+$ endpoint (\SI{546}{keV}) means that a smaller fraction of the $^{22}$Na~decays produce signals in the energy range of interest for dark matter searches. \subsection{Tritium Production} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{si_h3_crosssections.pdf} \caption{Experimental measurements (magenta error bars) \cite{QAIM1978150, Tippawan:2004sy, benck2002secondary} and model estimates (continuous curves) of neutron-induced tritium production in silicon. Measurements of the proton-induced cross section \cite{goebel1964production, kruger1973high} are also shown for reference (gray error bars).} \label{fig:si_3h_cross_sections} \end{figure} Tritium production in silicon at sea level is dominated by spallation interactions of high-energy cosmogenic neutrons with silicon nuclei. Tritium is a pure $\beta$ emitter and it is therefore not possible to directly measure the production cross section using conventional methods that rely on $\gamma$-ray~detectors to tag the reaction products. There are three previous experimental measurements of the neutron-induced tritium production cross section in silicon (shown in Fig.~\ref{fig:si_3h_cross_sections}), which either extracted tritium from a silicon target and measured the activity in a proportional counter \cite{QAIM1978150} or measured the triton nuclei ejected from a silicon target using $\Delta E-E$ telescopes \cite{Tippawan:2004sy,benck2002secondary}. The proton-induced cross section is expected to be similar to that of neutrons, so we also show previous measurements with proton beams \cite{goebel1964production, kruger1973high}.
While these measurements provide useful benchmarks at specific energies, they are insufficient to constrain the cosmogenic production cross section across the full range of relevant neutron energies (from $\sim$10\,MeV to a few GeV). For this reason, previous estimates of tritium production in silicon dark matter detectors have relied on estimates of the cross section from calculations and simulations of the nuclear interactions or compiled databases that combine calculations with experimental data \cite{martoff1987limits, zhang2016cosmogenic, agnese2019production}. The production of tritons due to spallation is difficult to model, because the triton is a very light nucleus that is produced not only during the evaporation or de-excitation phase but also from coalescence of nucleons emitted during the high-energy intra-nuclear cascade stage \cite{leray2010improved, leray2011results, filges2009handbook}. Due to large variations among the predictions of different cross-section models, we consider several models for comparison to our experimental results and extraction of cosmogenic production rates. Shown in Fig.~\ref{fig:si_3h_cross_sections} are the semi-empirical formulae of Konobeyev and Korovin (K\&K) \cite{konobeyev1993tritium} (extracted from the commonly used ACTIVIA code \cite{back2008activia}) and results from nuclear reaction calculations and Monte Carlo simulations that are performed by codes such as TALYS \cite{koning2008talys}, INCL \cite{boudard2013new} and ABLA \cite{kelic2008deexcitation}.\footnote{The Konobeyev and Korovin ($^3$H), and Silberberg and Tsao ($^7$Be, $^{22}$Na) cross sections were obtained from the ACTIVIA code package \cite{activia2017}, the TALYS cross sections were calculated using TALYS-1.9 \cite{talys1.9}, and the INCL cross sections were calculated using the INCL++ code (v6.0.1) with the ABLA07 de-excitation model \cite{mancusi2014extension}. The default parameters were used for all programs. We note that the TALYS models are optimized in the \SI{1}{\keV} to \SI{200}{\MeV} energy range though the maximum energy has been formally extended to \SI{1}{\GeV} \cite{koning2014extension}.} We also compared effective cross sections (extracted through simulation) from built-in physics libraries of the widely used Geant4 simulation package \cite{agostinelli2003geant4,allison2016recent} such as INCLXX \cite{boudard2013new,mancusi2014extension}, BERTINI \cite{bertini1963low, guthrie1968calculation, bertini1969intranuclear, bertini1971news}, and Binary Cascades (BIC) \cite{folger2004binary}.\footnote{We used Geant4.10.3.p02 with physics lists QGSP\_INCLXX 1.0 (INCL++ v5.3), QGSP\_BERT 4.0, and QGSP\_BIC 4.0.} \subsection{$^7$Be~Production} $^7$Be~is produced as an intermediate-mass nuclear product of cosmogenic particle interactions with silicon. The neutron-induced production cross section has been measured at only two energies \cite{ninomiya2011cross}, as shown in Fig.~\ref{fig:si_7be_cross_sections}. 
Although the neutron- and proton-induced cross sections are not necessarily the same, especially for neutron-deficient nuclides such as $^7$Be~and $^{22}$Na~\cite{ninomiya2011cross}, there are a large number of measurements with protons that span the entire energy range of interest \cite{otuka2014towards, zerkin2018experimental}, which we show in Fig.~\ref{fig:si_7be_cross_sections} for comparison.\footnote{We have excluded measurements from Ref.~\cite{rayudu1968formation}, because there are well-known discrepancies with other measurements \cite{ michel1995nuclide, schiekel1996nuclide}.} For ease of evaluation, we fit the proton cross-section data with a continuous 4-node spline, hereafter referred to as ``$^{\text{nat}}$Si(p,x)$^7$Be Spline Fit''. As with tritium, we also show predictions from different nuclear codes and semi-empirical calculations, including the well-known Silberberg and Tsao (S\&T) semi-empirical equations \cite{silberberg1973partial,silberberg1973partial2, silberberg1977cross, silberberg1985improved, silberberg1990spallation, silberberg1998updated} as implemented in the ACTIVIA code. We note that the model predictions for the $^7$Be~production cross section in silicon vary greatly, with significantly different energy thresholds, energy dependence, and magnitude. $^7$Be~is believed to be produced predominantly as a fragmentation product rather than as an evaporation product or residual nucleus \cite{michel1995nuclide}, and fragmentation is typically underestimated in most theoretical models \cite{michel1995nuclide, titarenko2006excitation}. We note that, unlike for the tritium cross-section models, there is a significant difference between the predictions obtained by evaluating the INCL++ v6.0.1 model directly versus simulating with Geant4 (INCL++ v5.3), probably due to updates to the model. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{si_be7_crosssections.pdf} \caption{Experimental measurements (magenta error bars) \cite{ninomiya2011cross} and model estimates (continuous curves) of the neutron-induced $^7$Be~production cross section in silicon. Measurements of the proton-induced cross section \cite{otuka2014towards, zerkin2018experimental} are also shown for reference (gray error bars).} \label{fig:si_7be_cross_sections} \end{figure} \subsection{$^{22}$Na~Production} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{si_na22_crosssections.pdf} \caption{Experimental measurements (magenta and pink error bars) \cite{michel2015excitation, hansmann2010production, yashima2004measurement, sisterson2007cross, ninomiya2011cross} and model estimates (continuous curves) of the neutron-induced $^{22}$Na~production cross section in silicon. Measurements of the proton-induced cross section \cite{otuka2014towards, zerkin2018experimental} are also shown for reference (gray error bars).} \label{fig:si_22na_cross_sections} \end{figure} $^{22}$Na~is produced as a residual nucleus following cosmogenic interactions with silicon. Compared to tritium and $^7$Be, the production of $^{22}$Na~is the best studied. Measurements of the neutron-induced cross section were carried out by Michel et al.\ using quasi-monoenergetic neutrons between 33 and 175 MeV, with TALYS-predicted cross sections used as the initial guess to unfold the experimentally measured production yields \cite{michel2015excitation, hansmann2010production}.
These, along with six other data points between 66 and 370 MeV \cite{yashima2004measurement, sisterson2007cross, ninomiya2011cross}, are shown in Fig.~\ref{fig:si_22na_cross_sections}. Proton-induced cross-section measurements\footnote{Similar to $^7$Be, we have excluded measurements from Ref.~\cite{rayudu1968formation}.} \cite{otuka2014towards, zerkin2018experimental} span the entire energy range of interest and are significantly larger than the measured neutron-induced cross sections. As before, we also show the predicted cross sections from Silberberg and Tsao, TALYS, INCL++ (ABLA07) and Geant4 models. In order to compare the existing neutron cross-section measurements to our data, we use a piecewise model that follows the measurements in Refs.~\cite{michel2015excitation, hansmann2010production} below 180\,MeV and follows the TALYS model at higher energies. This model is hereafter referred to as ``Michel-TALYS'' (see Fig.~\ref{fig:si_22na_cross_sections}). $^{22}$Na~can also be produced indirectly through the production of the short-lived isotopes $^{22}$Mg, $^{22}$Al, and $^{22}$Si, which eventually decay to $^{22}$Na, but for the models considered the total contribution from these isotopes is $<$ \SI{1}{\percent}, and is ignored here. \section{Beam Exposure} \label{sec:exposure} To evaluate the production rate of cosmogenic isotopes through the interaction of high-energy neutrons, we irradiated silicon charge-coupled devices (CCDs) and silicon wafers at the LANSCE neutron beam facility. Following the irradiation, the CCDs were read out to measure the beam-induced $\beta$ activity within the CCD active region, and the $\gamma$ activity induced in the wafers was measured using $\gamma$-ray spectroscopy. In this section we describe the details of the targets and beam exposure, while in Sec.~\ref{sec:counting} we present the measurement results. \subsection{CCDs} \label{sec:ccds} The irradiated CCDs were designed and procured by Lawrence Berkeley National Laboratory (LBNL)~\cite{ccdtech} for the DAMIC Collaboration. CCDs from the same fabrication lot were extensively characterized in the laboratory and deployed underground at SNOLAB to search for dark matter~\cite{Aguilar-Arevalo:2016zop, PhysRevD.94.082006}. The devices are three-phase scientific CCDs with a buried $p$-channel fabricated on a \SI{670}{\micro\meter}-thick $n$-type high-resistivity (10--20\,\si{\kilo\ohm\cm}) silicon substrate, which can be fully depleted by applying $>$\,\SI{40}{\volt} to a thin backside contact. The CCDs feature a 61.44$\times$30.72\,mm$^2$ rectangular array of 4096$\times$2048 pixels (each 15$\times$15 \si{\micro\meter\squared}) and an active thickness of \SI{661 \pm 10}{\micro\meter}. By mass, the devices are $>$\,\SI{99}{\percent} elemental silicon with natural isotopic abundances. Other elements present are oxygen ($\sim$\,\SI{0.1}{\percent}) and nitrogen ($<$\,\SI{0.1}{\percent}) in the dielectrics, followed by phosphorus and boron dopants ($<$\,\SI{0.01}{\percent}) in the silicon. Ionizing particles produce charge in the CCD active region; e.g., a fast electron or $\beta$ particle will produce on average one electron-hole pair for every \SI{3.8}{\eV} of deposited energy. The ionization charge is drifted by the applied electric field and collected on the pixel array. The CCDs are read out serially by moving the charge vertically row-by-row into the serial register (the bottom row) where the charge is moved horizontally pixel-by-pixel to the output readout node.
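As a simple illustration of the ionization response quoted above (one electron-hole pair per \SI{3.8}{\eV} on average), the mean charge yield for a given energy deposit can be estimated as follows; this back-of-the-envelope sketch is ours and is not part of the analysis chain.
\begin{verbatim}
EV_PER_PAIR = 3.8  # mean energy per electron-hole pair in silicon [eV]

def mean_pairs(e_dep_ev):
    """Mean number of electron-hole pairs for an energy deposit in eV."""
    return e_dep_ev / EV_PER_PAIR

# A tritium beta at the 18.6 keV endpoint yields ~4900 e-, far above
# the ~2 e- RMS single-pixel noise quoted below.
print(mean_pairs(18.6e3))
\end{verbatim}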
Before irradiation, the charge-transfer inefficiency from pixel to pixel was $< 10^{-6}$~\cite{ccdtech}, the dark current was $<$\SI{1}{e^- \per pixel \per \hour}, and the uncertainty in the measurement of the charge collected by a pixel was $\sim$2\,$e^-$ RMS. Further details on the response of DAMIC CCDs can be found in Sec.~IV of Ref.~\cite{PhysRevD.94.082006}. Even after the significant increase in CCD noise following irradiation (e.g., due to shot noise associated with an increase in dark current), the CCD can still resolve most of the tritium $\beta$-decay spectrum. Irradiation generates defects in silicon devices that can trap charges and negatively impact the performance of CCDs. Fully depleted devices are resilient to irradiation damage in the bulk silicon because the ionization charge is collected over a short period of time, which minimizes the probability of charge being trapped by defects before it is collected. For this reason LBNL CCDs have been considered for space-based imaging where the devices are subjected to high levels of cosmic radiation~\cite{snap}. Measurements at the LBNL cyclotron demonstrated the remarkable radiation tolerance of the CCDs proposed for the SNAP satellite, which follow the same design principles and fabrication process as the DAMIC CCDs. For the measurements presented in this paper, there is a trade-off between activation rate and CCD performance. Higher irradiation leads to a higher activity of radioisotopes in the CCD and hence a lower statistical uncertainty in the measurement. On the other hand, higher irradiation also decreases the CCD performance, which needs to be modeled and can thus introduce significant systematic uncertainty. The two most relevant performance parameters affected by the irradiation are the charge-transfer inefficiency (CTI) and the pixel dark current (DC). Ref.~\cite{snap} provides measurements of CTI and DC after irradiation with 12.5 and \SI{55}{MeV} protons. Following irradiation doses roughly equivalent to a LANSCE beam fluence of $2.4\times10^{12}$ neutrons above \SI{10}{\MeV}, the CCDs were still functional with the CTI worsened to $\sim$\,$10^{-4}$ and asymptotic DC rates (after days of operation following a room-temperature anneal) increased to $\sim$\SI{100}{e^- \per pixel \per \hour}. These values depend strongly on the specific CCD design and the operation parameters, most notably the operating temperature. Considering the available beam time, the range of estimated production rates for the isotopes of interest, and the CCD background rates, we decided to irradiate three CCDs with different levels of exposure, roughly corresponding to $2.4\times10^{12}$, $1.6\times10^{12}$, and $0.8\times10^{12}$ neutrons above \SI{10}{MeV} at the LANSCE neutron beam. Furthermore, we used a collimator (see Sec.~\ref{sec:lansce_beam}) to suppress irradiation of the serial register at the edge of the CCDs by one order of magnitude and thus mitigate CTI in the horizontal readout direction. Following the beam exposure, we found that the least irradiated CCD had an activity sufficiently above the background rate while maintaining good instrumental response and was therefore selected for analysis in Sec.~\ref{sec:ccd_counting}. The CCDs were packaged at the University of Washington following the procedure developed for the DAMIC experiment. The CCD die and a flex cable were glued onto a silicon support piece such that the electrical contact pads for the signal lines are aligned. 
The CCDs were then wedge bonded to the flex cable with \SI{25}{\micro\meter}-thick aluminum wire. A connector on the tail of the flex cable can be connected to the electronics for device control and readout. Each packaged device was fixed inside an aluminum storage box, as shown in Fig.~\ref{fig:CCDphoto}. The CCDs were kept inside their storage boxes during irradiation to preserve the integrity of the CCD package, in particular to prevent the wire bonds from breaking during handling and to reduce any possibility of electrostatic discharge, which can damage the low-capacitance CCD microelectronics. To minimize the attenuation of neutrons along the beam path and activation of the storage box, the front and back covers that protect each CCD were made from relatively thin (0.5\,mm) high-purity aluminum (alloy 1100). \begin{figure} \centering \includegraphics[width=\columnwidth]{CCD_photo.pdf} \caption{Photograph of the CCD package inside its aluminum storage box. Left: Package before wire bonding. Right: After wire bonding, with aluminum frame to keep the CCD package fixed in place.} \label{fig:CCDphoto} \end{figure} \subsection{Wafers} In addition to the CCDs, we exposed several Si wafers, a Ge wafer, and two Cu plates to the neutron beam. These samples served both as direct targets for activation and measurement of specific radioisotopes, and as witness samples of the neutron beam. In this paper, we focus on the Si wafers; however, the Ge wafer and Cu plates were also measured and may be the subject of future studies. A total of eight Si wafers (4 pairs) were used: one pair matched to each of the three CCDs (such that they had the same beam exposure time) and a fourth pair that served as a control sample. The eight wafers were purchased together and have effectively identical properties. Each wafer was sliced from a Czochralski-grown single-crystal boule with a 100-mm diameter and a resistivity of $>$\SI{20}{\ohm\cm}. The wafers are undoped, were polished on one side, and have a $\langle$100$\rangle$ crystal-plane alignment. The thickness of each individual wafer is \SI{500 \pm 17}{\micro\meter} (based on information from the vendor). The control sample was not exposed to the neutron beam and thus provides a background reference for the gamma counting. Note that because the wafers were deployed and counted in pairs, henceforth we distinguish and refer to only pairs of wafers rather than individual wafers. The (single) Ge wafer is also \SI{100}{\milli\meter} in diameter and undoped, with a thickness of \SI{525 \pm 25}{\micro\meter}, while the Cu plates have dimensions of $114.7 \times 101.6 \times$ \SI{3.175}{\milli\meter}. \subsection{LANSCE Beam Exposure} \label{sec:lansce_beam} \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{config1-pers.pdf} \includegraphics[width=0.32\textwidth]{config2-pers.pdf} \includegraphics[width=0.32\textwidth]{config3-pers.pdf} \caption{Geant4 renderings of the three setups used to position targets in the neutron beam, with the beam passing from right to left. Aluminum (Al) boxes holding the CCDs (yellow) were held in place by an Al rack (dark gray). For the initial setup (left), the Al box is made transparent to show the positioning of the CCD (red), air (grey), and other structures (light brown). The other targets include pairs of Si wafers (green), a Ge wafer (blue), and Cu plates (copper brown). 
The polyethylene wafer holder (purple) is simplified to a rectangle of the same thickness and height as the actual object, with the sides and bottom removed. All targets were supported on an acetal block (light gray).} \label{fig:g4rendering} \end{figure*} The samples were irradiated at the LANSCE WNR ICE-HOUSE II facility~\cite{icehouse} on Target 4 Flight Path 30 Right (4FP30R). A broad-spectrum (0.2--800 MeV) neutron beam was produced via spallation of 800 MeV protons on a tungsten target. A 2.54-cm (1") diameter beam collimator was used to restrict the majority of the neutrons to within the active region of the CCD and thus prevent unwanted irradiation of the serial registers on the perimeter of the active region. The neutron fluence was measured with $^{238}$U foils by an in-beam fission chamber~\cite{wender1993fission} placed downstream of the collimator. The beam has a pulsed time structure, which allows the incident neutron energies to be determined using the time-of-flight technique (TOF)---via a measurement between the proton beam pulse and the fission chamber signals~\cite{lisowski2006alamos,wender1993fission}. \begin{figure}[h!] \begin{center} \includegraphics[width=\columnwidth]{InBeamLayout_cropped.jpg} \end{center} \caption{Layout of the samples as placed in the beam during the final irradiation setup (cf.\ Fig.~\ref{fig:g4rendering} right). The beam first passes through the cylindrical fission chamber (far right) and then through the samples (from right to left): 3~CCDs in Al boxes (with flex cables emerging at the top), 3~pairs of Si wafers, 1~Ge wafer, and 2~Cu plates.} \label{Fig:CCDlayout} \end{figure} The beam exposure took place over four days between September 18$^{\mathrm{th}}$ and 22$^{\mathrm{nd}}$, 2018. On Sept.\,18, CCD\,1 was placed in the beam line at 18:03 local time, located closest to the fission chamber, along with a pair of Si wafers, one Ge wafer, and one Cu plate placed downstream (in that order; cf.\ Fig.~\ref{fig:g4rendering} left). The front face of the Al box containing CCD\,1 was \SI{260}{\mm} from the face of the fission chamber. At 17:16 on Sept.\,20, CCD\,2 was added directly downstream from CCD\,1, along with another pair of Si wafers. The front face of the Al box for CCD\,2 was \SI{14.3}{\mm} from the front face of CCD\,1. At 09:11 on Sept.\,22, CCD\,3 was added downstream with an equidistant spacing relative to the other CCDs, along with another pair of Si wafers and a second Cu plate. Figure~\ref{fig:g4rendering} shows schematics of these three exposure setups, while Fig.~\ref{Fig:CCDlayout} shows a photograph of the final setup in which all three CCDs were on the beam line. The exposure was stopped at 08:00 on Sept.\,23, and all parts exposed to the beam were kept in storage for approximately seven weeks to allow short-lived radioactivity to decay prior to shipment for counting. \subsection{Target Fluence} The fluence measured by the fission chamber during the entire beam exposure is shown in Fig.~\ref{fig:lanscebeamenergy}, with a total of \num{2.91 \pm 0.22 E12} neutrons above 10 MeV. The uncertainty is dominated by the systematic uncertainty in the $^{238}$U(n, f) cross section used to monitor the fluence, shown in Fig.~\ref{fig:fission_cs}. Below 200 MeV the assumed LANSCE cross section and various other experimental measurements and evaluations \cite{lisowski1991fission, carlson2009international, tovesson2014fast, marcinkevicius2015209} agree to better than 5\%. 
Between 200 and 300 MeV there are only two measurements of the cross section \cite{lisowski1991fission, miller2015measurement}, which differ by 5--10\%. Above \SI{300}{\MeV} there are no experimental measurements. The cross section used by the LANSCE facility assumes a constant cross section above \SI{380}{\MeV} at roughly the same value as that measured at \SI{300}{\MeV} \cite{miller2015measurement}. This is in tension with evaluations based on extrapolations from the $^{238}$U(p, f) cross section that recommend an increasing cross section to a constant value of roughly \SI{1.5}{\barn} at 1 GeV \cite{duran2017search,carlson2018evaluation}. We have used the LANSCE cross section and assumed a 5\% systematic uncertainty below \SI{200}{\MeV}, a 10\% uncertainty between 200 and \SI{300}{\MeV}, and a constant 20\% uncertainty between 300 and \SI{750}{\MeV}. The uncertainty in the neutron energy spectrum due to the timing uncertainty in the TOF measurement (FWHM $\sim$ \SI{1.2}{\nano\second}) is included in all calculations but is sub-dominant (2.5--3.5\%) for the estimates of isotope production rates. While the nominal beam diameter was set by the 1" collimator, the cross-sectional beam profile has significant tails at larger radii. At the fission chamber, approximately 38.8\% of neutrons fall outside a 1" diameter, as calculated with the beam profile provided by LANSCE. Additionally, the beam is slightly diverging, with an estimated cone opening angle of 0.233\degree. A Geant4 \cite{agostinelli2003geant4,allison2016recent} simulation that included the measured beam profile and beam divergence, the measured neutron spectrum, and the full geometry and materials of the targets, mounting apparatus, and fission chamber, was used to calculate the neutron fluence through each material, accounting for any attenuation of the neutrons through the targets. To reduce computational time, a biasing technique was used to generate neutrons. Instead of following the beam profile, neutrons were generated uniformly in a \SI{16}{\cm}$\times$\SI{16}{\cm} square in front of the fission chamber, covering the entire cross-sectional area of the setup. After running the Geant4 simulation, each event was assigned a weight proportional to the intensity of the beam at the simulated neutron location, as obtained from the two-dimensional beam profile supplied by LANSCE. This allows reuse of the same simulation results for different beam profiles and alignment offsets. A total of \num{5.5 E10} neutrons above 10 MeV were simulated for each setup and physics list. At this level of statistics, the statistical uncertainties in the simulation are sub-dominant to the total neutron fluence uncertainty. The simulations show that each CCD receives about \SI{83}{\percent} of the total beam. To assess the uncertainty in the neutron fluence due to misalignment of the beam with the center of the CCDs, the profile of the beam was reconstructed by measuring the dark current rate in the CCDs as a function of position (see Sec.~\ref{sec:ccd_counting}). The beam misalignment is calculated to be about $-2.3$\,mm in the $x$ direction and $+0.5$\,mm in the $y$ direction, which, when input into the Geant4 simulation, yields a systematic uncertainty in the neutron fluence of less than 1\%.
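The reweighting step can be made concrete with a short sketch. The snippet below is illustrative only, not the analysis code used for this work: the Gaussian intensity map, the grid dimensions, and the normalization convention are placeholder assumptions standing in for the two-dimensional beam profile supplied by LANSCE.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder 2-D beam-intensity map on the 16 cm x 16 cm plane [cm].
x = np.linspace(-8.0, 8.0, 161)
y = np.linspace(-8.0, 8.0, 161)
intensity = np.exp(-(x[:, None]**2 + y[None, :]**2) / (2 * 1.5**2))
profile = RegularGridInterpolator((x, y), intensity)

def event_weights(xs, ys, n_total):
    # Weight uniformly generated neutrons by the beam intensity at
    # their (x, y) positions, normalized to the total fluence n_total.
    w = profile(np.column_stack([xs, ys]))
    return w * (n_total / w.sum())

rng = np.random.default_rng(0)
xs = rng.uniform(-8.0, 8.0, 100000)
ys = rng.uniform(-8.0, 8.0, 100000)
w = event_weights(xs, ys, n_total=2.91e12)
\end{verbatim}
Because the weights are applied after the simulation, the same sample can be reweighted for a different beam profile or alignment offset, which is the reuse noted above.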
The total neutron fluence ($>$ \SI{10}{\MeV}) through each CCD and its Si-wafer matched pair is listed in Table~\ref{tab:neutron_fluences}; corresponding energy spectra are shown in Fig.~\ref{fig:lanscebeamenergy} (the spectral shape of the fluence through each Si-wafer pair is very similar to that of the corresponding CCD and has been omitted for clarity). \begin{figure} \centering \includegraphics[width=\columnwidth]{neutron_flux_targets.pdf} \caption{Comparison of the LANSCE 4FP30R/ICE II neutron beam with sea-level cosmic-ray neutrons. The black data points and left vertical axis show the number of neutrons measured by the fission chamber during the entire beam exposure used for this measurement. Uncertainties shown are statistical only (see main text for discussion of systematic uncertainties). The colored markers show the simulated fluence for each of the CCDs in the setup. For comparison, the red continuous line and the right vertical axis show the reference cosmic-ray neutron flux at sea level for New York City during the midpoint of solar modulation~\cite{gordon2004measurement}.} \label{fig:lanscebeamenergy} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fission_cs.pdf} \end{center} \caption{Experimental measurements (circles) \cite{lisowski1991fission, tovesson2014fast, miller2015measurement} and evaluations (squares) \cite{carlson2009international, marcinkevicius2015209, duran2017search, carlson2018evaluation} of the $^{238}$U(n, f) cross section. The cross section assumed by the LANSCE facility to convert the fission chamber counts to a total neutron fluence is shown by the black line, with the shaded grey band indicating the assumed uncertainty.} \label{fig:fission_cs} \end{figure} \begin{table} \centering \begin{tabular}{c c c} \hline Target & Exposure time & Neutrons through target \\ & [hrs] & ($> 10$ MeV)\\ \hline \vrule width 0pt height 2.2ex CCD 1 & 109.4 & \num{2.39 \pm 0.18 E12}\\ Wafer 1 & 109.4 & \num{2.64 \pm 0.20 E12}\\ \hline \vrule width 0pt height 2.2ex CCD 2 & 62.7 & \num{1.42 \pm 0.11 E12}\\ Wafer 2 & 62.7 & \num{1.56 \pm 0.12 E12}\\ \hline \vrule width 0pt height 2.2ex CCD 3 & 22.8 & \num{5.20 \pm 0.39 E11}\\ Wafer 3 & 22.8 & \num{5.72 \pm 0.43 E11}\\ \hline \end{tabular} \caption{Beam exposure details for each CCD and its Si-wafer matched pair.} \label{tab:neutron_fluences} \end{table} \section{Counting} \label{sec:counting} \subsection{Wafers} \label{ssec:wafer_counting} \begin{table*}[ht] \centering \begin{tabular}{ccccc} \hline & Wafer 0 & Wafer 1 & Wafer 2 & Wafer 3 \\ \hline \vrule width 0pt height 2.2ex Si areal density [atoms/cm$^2$] & \multicolumn{4}{c}{\num{4.99 \pm 0.17 e21}~~~~~~~~~~~~~~~~~~~~~} \\ Beam to meas.\ time [days] & - & \num{184.107} & \num{187.131} & \num{82.342} \\ Ge counting time [days] & \num{7.000} & \num{1.055} & \num{3.005} & \num{7.000} \\ \hline \vrule width 0pt height 2.2ex Measured $^7$Be~activity [mBq] & $<$\num{40} & \num{161 \pm 24} & \num{75 \pm 12} & \num{149 \pm 12}\\ Decay-corrected $^7$Be~activity [mBq] & - & \num{1830 \pm 270} & \num{870 \pm 140} & \num{437 \pm 34}\\ Beam-avg.\ $^7$Be~cross section [cm$^2$] & - & \num{0.92 \pm 0.16 E-27} & \num{0.74 \pm 0.13 E-27} & \num{1.01 \pm 0.12 E-27}\\ \hline \vrule width 0pt height 2.2ex Measured $^{22}$Na~activity [mBq] & $<$\num{5.1} & \num{606 \pm 29} & \num{370 \pm 16} & \num{139.5 \pm 6.3}\\ Decay-corrected $^{22}$Na~activity [mBq] & - & \num{694 \pm 33} & \num{424 \pm 19} & \num{148.2 \pm 6.6}\\ Beam-avg.\ $^{22}$Na~cross section
[cm$^2$] & - & \num{6.23 \pm 0.60 E-27} & \num{6.44 \pm 0.61 E-27} & \num{6.15 \pm 0.58 E-27}\\ \hline \end{tabular} \caption{Gamma-counting results for the Si-wafer pairs. Measured activities are corrected for isotope decay that occurred during the beam exposure, as well as between the end of the beam exposure and the time of the gamma counting. Uncertainties are listed at 1$\sigma$ (68.3\%) confidence while upper limits quoted for the unirradiated pair (``Wafer 0'') represent the spectrometer's minimum detectable activity (Currie MDA with a 5\% confidence factor~\cite{currie}) at the corresponding peak energy.} \label{tab:wafer_counting} \end{table*} The gamma-ray activities of the Si-wafer pairs (including the unirradiated pair) were measured with a low-background counter at Pacific Northwest National Laboratory (PNNL). Measurements were performed using a Canberra Broad Energy Ge (BEGe) gamma-ray spectrometer (model BE6530) situated within the shallow underground laboratory (SUL) at PNNL \cite{aalseth2012shallow}. The SUL is designed for low-background measurements, with a calculated depth of \SI{30}{\meter} water equivalent. The BEGe spectrometer is optimized for the measurement of fission and activation products, combining the spectral advantages of low-energy and coaxial detectors, with an energy range from \SI{3}{\keV} to \SI{3}{\MeV}. The detector is situated within a lead shield (200\,mm), lined with tin (1\,mm) and copper (1\,mm). It is equipped with a plastic scintillator counter \cite{burnett2017development, burnett2014cosmic, burnett2012development, burnett2013further} to veto cosmic rays, which improves sensitivity by further reducing the cosmic-induced detector background by 25\%. The detector was operated with a Canberra Lynx MCA to provide advanced time-stamped list mode functionality. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{ge_counting.pdf} \caption{Spectral comparison of the gamma-counting results for the Si-wafer pairs. Inspection of the full energy range (top panel) reveals two peaks in the irradiated samples (1, 2, and 3) at \SI{478}{\keV} (bottom left) and \SI{1275}{\keV} (bottom right) that are not present in the unirradiated sample (0), corresponding to $^7$Be\ and $^{22}$Na\ activated by the LANSCE neutron beam, respectively.} \label{fig:ge_counting} \end{figure*} Each wafer pair was measured independently, with wafer pair 3 and the unexposed wafer pair 0 counted for longer periods because their expected activities were the lowest. Table~\ref{tab:wafer_counting} shows the gamma-counting details, and Fig.~\ref{fig:ge_counting} shows the measured gamma-ray spectra. Spectral analysis was performed using the Canberra Genie 2000 Gamma Acquisition \& Analysis software (version 3.4) and all nuclear data were taken from the Evaluated Nuclear Data File (ENDF) database \cite{chadwick2011endf} hosted at the National Nuclear Data Center by Brookhaven National Laboratory. Compared to the unirradiated wafer-pair spectrum, the only new peaks identified in the spectra of the irradiated wafer pairs are at 478 and \SI{1275}{\keV}, corresponding to $^7$Be~(10.44\% intensity per decay) and $^{22}$Na~(99.94\% intensity per decay), respectively (cf.\,Fig.\,\ref{fig:ge_counting}). Note that each of the irradiated wafer pairs also has a significant excess at \SI{511}{\keV}, corresponding to positron-annihilation photons from $^{22}$Na\ decays, and an associated sum peak at \SI{1786}{\keV} ($= 511 +$ \SI{1275}{\keV}). 
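The conversion of these peak rates into the decay-corrected activities and beam-averaged cross sections of Table~\ref{tab:wafer_counting}, detailed in the next paragraph, can be illustrated with the $^{22}$Na\ numbers for wafer pair 3. The sketch below is our simplified reconstruction of the arithmetic; in particular, the saturation-style correction for decay during the short exposure is our assumed functional form.
\begin{verbatim}
import numpy as np

lam    = np.log(2) / (2.6018 * 365.25)  # Na-22 decay constant [1/day]
t_cool = 82.342                         # beam end to counting [days]
t_beam = 22.8 / 24.0                    # exposure time [days]
A_meas = 139.5e-3                       # measured activity [Bq]

# Undo the decay after the beam, then the decay during the exposure.
A_corr = A_meas / np.exp(-lam * t_cool)
A_corr /= (1.0 - np.exp(-lam * t_beam)) / (lam * t_beam)
# -> ~0.148 Bq, cf. the 148.2 mBq listed in the table

# Beam-averaged cross section: sigma = A * tau / (n_areal * N_n).
tau     = 86400.0 / lam                 # mean life [s]
n_areal = 4.99e21                       # Si atoms per cm^2
N_n     = 5.72e11                       # neutrons (>10 MeV), wafer pair 3
sigma   = A_corr * tau / (n_areal * N_n)
# -> ~6.2e-27 cm^2, cf. 6.15e-27 cm^2 in the table
\end{verbatim}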
The $^7$Be\ and $^{22}$Na\ activities in each wafer pair were calculated using the 478 and \SI{1275}{\keV} peaks, respectively. The measured values listed in Table~\ref{tab:wafer_counting} include the detector efficiency and true-coincidence summing corrections for the sample geometry and gamma-ray energies considered (calculated using the Canberra In Situ Object Counting Systems, or ISOCS, calibration software \cite{venkataraman1999validation}). The activity uncertainties listed in Table~\ref{tab:wafer_counting} include both the statistical and systematic contributions, with the latter dominated by uncertainty in the efficiency calibration ($\sim$\SI{4}{\percent}). Each measured activity is then corrected for isotope decay that occurred during the beam exposure, as well as between the end of the beam exposure and the time of the gamma counting. To compare among the results of the different wafer pairs, we divide each decay-corrected activity by the total number of incident neutrons and the number of target Si atoms to obtain a beam-averaged cross section (also listed in Table~\ref{tab:wafer_counting}). The values are in good agreement for both $^7$Be\ and $^{22}$Na\ (even if the common systematic uncertainty associated with the neutron beam fluence is ignored), which serves as a cross-check of the neutron-beam exposure calculations. The lack of any other identified peaks confirms that there are no other significant long-lived gamma-emitting isotopes produced by high-energy neutron interactions in silicon. Specifically, the lack of an identifiable peak at \SI{1808.7}{\keV} allows us to place an upper limit on the produced activity of $^{26}$Al at the minimum detectable activity level of \SI{12}{\milli\becquerel} (Currie MDA with a 5\% confidence factor~\cite{currie}), i.e.\ at least 58$\times$ lower than the $^{22}$Na\ activity in wafer pair 1. \subsection{CCDs} \label{sec:ccd_counting} Images from CCD\,3 were acquired at The University of Chicago in a custom vacuum chamber. Prior to counting, the CCD was removed from the aluminum transport box and placed in a copper box inside the vacuum chamber. Images taken were 4200 columns by 2100 rows in size, with 52 rows and 104 columns constituting the ``overscan'' (i.e., empty pixel reads past the end of the CCD pixel array). These overscan pixels contain no charge and thus provide a direct measurement of the pixel readout noise. A total of 8030 post-irradiation images with \SI{417}{\sec} of exposure were acquired, for a total counting time of 38.76 days. Data were taken in long continuous runs of many images, with interruptions in data taking for testing of the CCD demarcating separate data runs. Background data were taken prior to shipment to the LANSCE facility for neutron irradiation. These background data consist of the combined spectrum from all radioactive backgrounds in the laboratory environment, including the vacuum chamber, the intrinsic contamination in the CCD, and cosmic rays. A total of 1236 images were acquired using the same readout settings as post-irradiation images, but with a longer exposure of \SI{913}{\sec}, for a total counting time of 13.06 days. CCD images were processed with the standard DAMIC analysis software~\cite{PhysRevD.94.082006}, which subtracts the image pedestal, generates a ``mask'' to exclude repeating charge patterns in the images caused by defects, and groups pixels into clusters that correspond to individual ionization events. 
The high dark current caused by damage to the CCD from the irradiation (see Fig.~\ref{fig:darkcurrentprofile}) necessitated a modification to this masking procedure because the average CCD pixel values were no longer uniform across the entire CCD, as they were before irradiation. The images were therefore split into 20-column segments, which were treated separately for the pedestal subtraction and masking steps. \begin{figure} \centering \includegraphics[width=\columnwidth]{dark_current_profile.pdf} \caption{Post-irradiation dark-current profile for CCD\,3, obtained from the median pixel values across multiple images. The elevated number of dark counts in the center of the CCD shows the effect of the neutron damage on the CCD.} \label{fig:darkcurrentprofile} \end{figure} Simulations of $^3$H{}, $^{22}$Na{}, and $^7$Be{} decays in the bulk silicon of the CCD were performed with a custom Geant4 simulation, using the Penelope physics list and a simplified geometry that included only the CCD and the surrounding copper box. Radioactive-decay events were simulated according to the beam profile, assumed to be proportional to the dark-current profile (shown in Fig.~\ref{fig:darkcurrentprofile}). The CCD response was simulated for every ionization event, including the stochastic processes of charge generation and transport that were validated in Ref.~\cite{PhysRevD.96.042002}. To include the effects of noise and dark current on the clustering algorithm, simulated ``blank'' images were created with the same noise and dark-current profile as the post-irradiation data. The simulated ionization events were pixelated and added onto the blank images, which were then processed with the standard DAMIC reconstruction code to identify clusters. The increase in the vertical (row-to-row) charge transfer inefficiency (CTI) observed in the post-irradiation data was simulated with a Poissonian kernel, which assumes a constant mean probability, $\lambda$, of charge loss for each pixel transfer along a column~\cite{janesick}. We assume that $\lambda$ varies with column number in proportion to the dark-current profile. The total effect of CTI on a particular cluster depends on the number of vertical charge transfers $n$. The continuous CCD readout scheme, chosen to optimize the noise while minimizing overlap of charge clusters, results in a loss of information about the true number of vertical charge transfers for each cluster. For every simulated cluster we therefore pick a random $n$ uniformly from 1 to 2000 to simulate events distributed from the bottom row to the top row of the CCD and apply the Poissonian kernel. We determined the maximum value of $\lambda$ near the center of the CCD to be $9\times10^{-4}$ by matching the distribution of the vertical spread of clusters in the simulation to the data.\footnote{The data from CCD\,1 and CCD\,2, which experienced significantly higher neutron irradiation than CCD\,3, were discarded from the analysis because the vertical CTI could not be well described with a Poissonian kernel. We suspect that the CTI in these CCDs is dominated by the effect of charge traps introduced by the neutron irradiation. During the readout procedure these traps are filled with charge from ionization clusters. The charge is then released on the time scale of milliseconds, corresponding to $\sim$25 vertical transfers.
This effect is difficult to model and results in considerable loss of charge from clusters in these two CCDs.} The identified clusters in the background data acquired prior to irradiation at LANSCE were also introduced on simulated blank images to include the effect of dark current, defects, and CTI on the background spectrum in the activated region of the CCD. The post-irradiation energy spectrum was fit using a model that includes components for the CCD background, $^{22}$Na{} decays, and $^3$H{} decays. $^7$Be{} was excluded from the fit because the decay does not produce a significant contribution to the total energy spectrum, even if the activity were many times the value we expect based on the wafer measurement. We constructed a binned Poissonian log-likelihood as the test statistic for the fit, which was minimized using Minuit \cite{James:1994vla} to find the best-fit parameters. Due to the relatively low statistics in the background template compared to post-irradiation data, statistical errors were corrected using a modified Barlow-Beeston method \cite{BARLOW1993219}, allowing each bin of the model to fluctuate by a Gaussian-constrained term with a standard deviation proportional to the bin statistical uncertainty. The data spectrum was fit from 2 to \SI{25}{\kilo\eV} to contain most of the $^3$H{} spectrum, while excluding clusters from noise at low energies. A \SI{2}{\kilo\eV}-wide energy region around the copper K-shell fluorescence line at \SI{8}{\kilo\eV} was masked from the fit because it is not well-modeled in the simulation. This peak-like feature is more sensitive to the details of the energy response than the smooth $^3$H{} spectrum. We have verified that including this K-shell line in the fit has a negligible effect on the fitted $^3$H\ activity. The background rate for the fit was fixed to the pre-irradiation value, while keeping the amplitude of the $^{22}$Na{} spectrum free. This choice has a negligible impact on the $^3$H{} result because the background and $^{22}$Na{} spectra are highly degenerate within the fit energy range, with a correlation coefficient of 0.993. Figure~\ref{fig:finalfitresults} shows the measured energy spectrum and the best-fit result ($\chi^2$/NDF=104/87). \begin{figure} \centering \includegraphics[width=\columnwidth]{plot_final_fit.pdf} \caption{Data spectrum and best-fit model with the spectral components stacked in different colors. The spectrum was fit from 2 to \SI{25}{\keV} with the shaded region around the \SI{8}{\keV} copper K-shell fluorescence line excluded from the fit. The rise in the spectrum below \SI{18}{\keV} from $^3$H{} decay is clearly visible above the nearly flat background and $^{22}$Na{} spectrum.} \label{fig:finalfitresults} \end{figure} After the fit was performed, the activities were calculated by dividing the fitted counts by the cumulative data exposure. This number was corrected for the isotope-specific event detection efficiency obtained from the simulation for the energy region of interest. Systematic errors were estimated from a series of fits under different configurations, including varying the energy range of the fit, varying the energy response and charge transfer parameters within their uncertainties, and floating versus constraining the amplitudes of the background and/or $^{22}$Na{} components in the fit. The best estimate for the tritium activity in CCD\,3 (after correcting for radioactive decay) is $45.7 \pm 0.5 $ (stat) $\pm 1.5 $ (syst) \si{\milli\becquerel}. 
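The test statistic described above can be sketched as follows; this is our illustration rather than the DAMIC fitting code, with the per-bin Gaussian constraint written in the ``lite'' form of the Barlow-Beeston method.
\begin{verbatim}
import numpy as np

def nll(scales, betas, data, templates, sigma_rel):
    # scales:    amplitudes of the (background, Na-22, H-3) templates
    # betas:     one nuisance factor per bin, Gaussian-constrained to 1
    # templates: array (n_templates, n_bins) of expected counts
    # sigma_rel: relative statistical uncertainty of the model per bin
    mu = betas * (scales[:, None] * templates).sum(axis=0)
    out = np.sum(mu - data * np.log(mu))  # Poisson terms, constants dropped
    out += 0.5 * np.sum(((betas - 1.0) / sigma_rel) ** 2)
    return out
\end{verbatim}
In the actual analysis the minimization is performed with Minuit, and the masked bins around the \SI{8}{\keV} line are simply excluded from the sums.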
The precision of the $^{22}$Na\ measurement in the CCDs is limited because the relatively flat $^{22}$Na{} spectrum is degenerate with the shape of the background spectrum. Unfortunately, there are no features in the CCD spectrum at low energies that can further constrain the $^{22}$Na{} activity. Further, the damage to the CCD renders the spectrum at higher energies unreliable because events with energies $>$\SI{50}{\kilo\eV} create large extended tracks where the effects of CTI, dark current, and pileup with defects become considerable, preventing reliable energy reconstruction. Notably, characteristic full-absorption $\gamma$ lines are not present in the CCD spectrum because $\gamma$ rays do not deposit their full energy in the relatively thin CCDs. As a cross-check of the post-irradiation background rate, we separately fit the first and last 400 columns of the CCD (regions mostly free of neutron exposure) and found values consistent with the pre-irradiation background to within $\sim$\SI{7}{\percent}. Constraining the background to within this range has a negligible effect on the fitted tritium activity but leads to significant variation in the estimated $^{22}$Na\ activity, which dominates the overall systematic uncertainty. The best estimate for the $^{22}$Na~activity in CCD\,3 is $126 \pm 5$ (stat) $\pm 26$ (syst) \si{\milli\becquerel}. This is consistent with the more precise measurement of the $^{22}$Na~activity in the silicon wafers, which corresponds to a CCD\,3 activity of \SI{88.5 \pm 5.3}{\milli\becquerel}. \section{Predicted Beam Production Rate} \label{sec:production_rates} If the neutron beam had an energy spectrum identical to that of cosmic-ray neutrons, we could simply estimate the cosmogenic production rate by scaling the measured activity by the ratio of the cosmic-ray neutron flux to that of the neutron beam. However, the beam spectrum falls off faster at higher energies than that of cosmic rays (see Fig.~\ref{fig:lanscebeamenergy}). Thus, we must rely on a model for the production cross sections to extrapolate from the beam measurement to the cosmogenic production rate. We can evaluate the accuracy of the different cross-section models by comparing the predicted $^3$H, $^7$Be, and $^{22}$Na~activities produced by the LANSCE neutron beam irradiation to the decay-corrected measured activities. We note that measurements of the unirradiated targets confirm that any non-beam-related isotope concentrations (e.g.\ due to cosmogenic activation) are negligible compared to the beam-induced activity. For a given model of the isotope production cross section $\sigma(E)$ [cm$^2$], the predicted isotope activity, $P$ [Bq], produced by the beam (correcting for decays) is given by \begin{linenomath*} \begin{align} \label{eq:beam_act} P = \frac{n_a}{\tau} \int S(E) \cdot \sigma(E)~dE \end{align} \end{linenomath*} where $n_a$ is the areal number density of the target silicon atoms [\si{\atoms\per \cm\squared}], $\tau$ is the mean life [\si{\second}] of the isotope decay, and $S(E)$ is the energy spectrum of neutrons [\si{\neutrons \per \MeV}]. The second column of Table~\ref{tab:trit_pred} shows the predicted activity in CCD 3, $P_\text{CCD3}$, for the different $^3$H~cross-section models considered. The corresponding numbers for $^7$Be~and $^{22}$Na~in Wafer 3 ($P_\text{W3}$) are shown in Tables~\ref{tab:ber_pred} and \ref{tab:sod_pred}, respectively.
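Numerically, Eq.~(\ref{eq:beam_act}) reduces to a single quadrature once $S(E)$ and $\sigma(E)$ are tabulated on a common energy grid; a minimal sketch (with placeholder array inputs) is:
\begin{verbatim}
import numpy as np

def predicted_activity(E, S, sigma, n_areal, tau):
    # P [Bq] = (n_a / tau) * integral of S(E) * sigma(E) dE
    # E [MeV], S [n/MeV], sigma [cm^2], n_areal [1/cm^2], tau [s]
    return n_areal / tau * np.trapz(S * sigma, E)
\end{verbatim}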
The uncertainties listed include the energy-dependent uncertainties in the LANSCE neutron beam spectrum and the uncertainty in the target thickness. \begin{table*}[t!] \centering \begin{tabular}{cccccc} \hline Model & Predicted LANSCE & Ejected & Implanted & Predicted LANSCE & Measured/Predicted\\ & $^3$H~produced act. & activity & activity & $^3$H~residual act. & $^3$H~residual activity\\ & $P_\text{CCD3}$ [\si{\milli\becquerel}] & $E_\text{CCD3}$ [\si{\milli\becquerel}] & $I_\text{CCD3}$ [\si{\milli\becquerel}] & $R_\text{CCD3}$ [\si{\milli\becquerel}] & \\ \hline K\&K (ACTIVIA) & \num{40.8 \pm 4.5} & & &\num{41.5 \pm 5.6} & \num{1.10 \pm 0.15}\\ TALYS & \num{116 \pm 16} & \num{46.70 \pm 0.12} & \num{53.8 \pm 2.1} & \num{123 \pm 17} & \num{0.370 \pm 0.053} \\ INCL++(ABLA07) & \num{41.8 \pm 4.8} & & & \num{42.5 \pm 5.9} & \num{1.07 \pm 0.15}\\ GEANT4 BERTINI & \num{13.0 \pm 1.5} & \num{3.354 \pm 0.072} & \num{3.699 \pm 0.045} & \num{13.3 \pm 1.6} & \num{3.43 \pm 0.42}\\ GEANT4 BIC & \num{17.8 \pm 1.8} & \num{4.995 \pm 0.084} & \num{6.421 \pm 0.059} & \num{19.2 \pm 2.0} & \num{2.38 \pm 0.26}\\ GEANT4 INCLXX & \num{42.3 \pm 5.1} & \num{20.65 \pm 0.11} & \num{16.94 \pm 0.10} & \num{38.5 \pm 4.6} & \num{1.19 \pm 0.15}\\ \hline \end{tabular} \caption{Predicted $^3$H~activity in CCD 3 based on different cross-section models. The second column lists the total predicted activity produced in the CCD. The third and fourth columns list the activity ejected and implanted respectively with listed uncertainties only due to simulation statistics. The fifth column shows the final predicted residual activity calculated from the second, third, and fourth columns, including systematic uncertainties due to the geometry. For models without ejection and implantation information we use the average of the other models---see text for details. The final column shows the ratio of the experimentally measured activity to the predicted residual activity.} \label{tab:trit_pred} \end{table*} \begin{table*}[t!] \centering \begin{tabular}{cccccc} \hline Model & Predicted LANSCE & Ejected & Implanted & Predicted LANSCE & Measured/Predicted\\ & $^7$Be~produced act. & activity & activity & $^7$Be~residual act. & $^7$Be~residual act.\\ & $P_\text{W3}$ [\si{\milli\becquerel}] & $E_\text{W3}$ [\si{\milli\becquerel}] & $I_\text{W3}$ [\si{\milli\becquerel}] & $R_\text{W3}$ [\si{\milli\becquerel}] & \\ \hline S\&T (ACTIVIA) & \num{408 \pm 46} & & & \num{405 \pm 49} & \num{1.08 \pm 0.16}\\ TALYS & \num{294 \pm 41} & & & \num{292 \pm 42} & \num{1.50 \pm 0.25}\\ INCL++(ABLA07) & \num{141 \pm 21} & & & \num{140 \pm 22} & \num{3.12 \pm 0.55}\\ $^{\text{nat}}$Si(p,x)$^7$Be Spline Fit & \num{518 \pm 68} & & & \num{514 \pm 72} & \num{0.85 \pm 0.14}\\ GEANT4 BERTINI & \num{0.99 \pm 0.20} & $<0.33$ & \num{0.64 \pm 0.14} & \num{1.63 \pm 0.43} & \num{268 \pm 74} \\ GEANT4 BIC & \num{1.27 \pm 0.24} & $<0.33$ & \num{0.61 \pm 0.16} & \num{1.98 \pm 0.50} & \num{221 \pm 59}\\ GEANT4 INCLXX & \num{21.6 \pm 3.0} & \num{3.59 \pm 0.85} & \num{3.42 \pm 0.38} & \num{21.4 \pm 3.1} & \num{20.4 \pm 3.4}\\ \hline \end{tabular} \caption{Predicted $^7$Be~activity in Wafer 3 based on different cross-section models. See Table~\ref{tab:trit_pred} caption for a description of the columns. Upper limits are 90\% C.L.} \label{tab:ber_pred} \end{table*} \begin{table*}[t!] \centering \begin{tabular}{cccccc} \hline Model & Predicted LANSCE & Ejected & Implanted & Predicted LANSCE & Measured/Predicted\\ & $^{22}$Na~produced act. 
& activity & activity & $^{22}$Na~residual act. & $^{22}$Na~residual act.\\ & $P_\text{W3}$ [\si{\milli\becquerel}] & $E_\text{W3}$ [\si{\milli\becquerel}] & $I_\text{W3}$ [\si{\milli\becquerel}] & $R_\text{W3}$ [\si{\milli\becquerel}] & \\ \hline S\&T (ACTIVIA) & \num{295 \pm 29} & & & \num{295 \pm 29} & \num{0.502 \pm 0.054}\\ TALYS & \num{209 \pm 18} & & & \num{208 \pm 18} & \num{0.711 \pm 0.070}\\ INCL++(ABLA07) & \num{207 \pm 21} & & & \num{206 \pm 21} & \num{0.718 \pm 0.081}\\ Michel-TALYS & \num{151 \pm 14} & & & \num{151 \pm 14} & \num{0.98 \pm 0.10}\\ GEANT4 BERTINI & \num{97 \pm 11} & $< 0.88$ & $<0.008$ & \num{96 \pm 11} & \num{1.54 \pm 0.18}\\ GEANT4 BIC & \num{393 \pm 40} & $<2.0$ & $<0.02$ & \num{392 \pm 40} & \num{0.378 \pm 0.042}\\ GEANT4 INCLXX & \num{398 \pm 40} & $<2.0$ & $<0.03$ & \num{398 \pm 40} & \num{0.373 \pm 0.041}\\ \hline \end{tabular} \caption{Predicted $^{22}$Na~activity in Wafer 3 based on different cross-section models. See Table~\ref{tab:trit_pred} caption for a description of the columns. Upper limits are 90\% C.L.} \label{tab:sod_pred} \end{table*} \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{tritium_ejection_implantation.pdf} \caption{Schematic diagram showing triton ejection and implantation. The filled circles indicate example triton production locations, while the triton nuclei show the final implantation locations. Production rate estimates include trajectories (a) and (b), while counting the tritium decay activity in the CCD measures (a) and (c).} \label{fig:trit_ejec_schematic} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{transmatices-logcolor-altstyle.pdf} \caption{Shown are the activities [mBq] of $^3$H (left), $^7$Be (middle), and $^{22}$Na (right) produced and implanted in various volumes (i.e., $T_{ij}\cdot P_j$) as predicted by the GEANT4 INCLXX model. CCD\,1, CCD\,2, CCD\,3 are the CCDs, with CCD\,1 being closest to the fission chamber. Box\,1, Box\,2, and Box\,3 are the aluminum boxes that contain CCD\,1, CCD\,2, and CCD\,3, respectively. Si\,1, Si\,2, Si\,3, and Ge are the silicon and germanium wafers downstream of the CCDs. World represents the air in the irradiation room.} \label{fig:transmat} \end{figure*} \subsection{Ejection and Implantation} Light nuclei, such as tritons, can be produced with significant fractions of the neutron kinetic energy. Due to their small mass, these nuclei have relatively long ranges and can therefore be ejected from their volume of creation and implanted into another volume. The situation is shown schematically in Fig.~\ref{fig:trit_ejec_schematic}. While we would like to estimate the total production rate in the silicon targets, what is actually measured is a combination of the nuclei produced in the target that are not ejected and nuclei produced in surrounding material that are implanted in the silicon target. The measured activity therefore depends not only on the thickness of the target but also on the nature and geometry of the surrounding materials. The residual activity, $R_i$, eventually measured in volume $i$, can be written as \begin{align} \label{eq:transfer} R_i = \sum_j T_{ij} \cdot P_j \end{align} where $P_j$ is the total activity produced in volume $j$ (see Eq.~\ref{eq:beam_act}) and $T_{ij}$ is the transfer probability---the probability that a nucleus produced in volume $j$ is eventually implanted in volume $i$.
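In matrix form this bookkeeping, together with the split into ejected and implanted activities introduced below, is compact; the sketch uses hypothetical inputs:
\begin{verbatim}
import numpy as np

def residual_activities(T, P):
    # T[i, j]: probability that a nucleus produced in volume j ends up
    #          implanted in volume i;  P[j]: produced activity in j
    T, P = np.asarray(T), np.asarray(P)
    R = T @ P                    # residual activity: R_i = sum_j T_ij P_j
    E = (1.0 - np.diag(T)) * P   # activity ejected from each volume
    I = R - np.diag(T) * P       # activity implanted from other volumes
    return R, E, I               # note that R = P - E + I
\end{verbatim}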
Because the ejection and implantation of light nuclei is also an issue for dark matter detectors during fabrication and transportation, we have explicitly factored the transfer probability into ejected activity ($E_i$) and activity implanted from other materials ($I_i$) to give the reader an idea of the relative magnitudes of the two competing effects: \begin{align} \label{eq:ejection} E_i &= (1 - T_{ii})\cdot P_i\\ \label{eq:implantation} I_i &= \sum_{j \neq i} T_{ij} \cdot P_j\\ R_i &= P_i - E_i + I_i \end{align} For nuclear models that are built in as physics lists within Geant4, explicit calculations of transfer probabilities are not necessary, because the nuclei produced throughout the setup are propagated by Geant4 as part of the simulation. For the TALYS model, which does calculate the kinematic distributions for light nuclei such as tritons but is not included in Geant4, we had to model the propagation of the nuclei separately. Since the passage of nuclei through matter in the relevant energy range is dominated by electromagnetic interactions, which are independent of nuclear production models and can be reliably calculated by Geant4, we used TALYS to evaluate the initial kinetic energy and angular distributions of triton nuclei produced by the LANSCE neutron beam and then ran the Geant4 simulation starting with nuclei whose momenta are drawn from the TALYS-produced distributions. For the remaining models, which do not predict kinematic distributions of the resulting nuclei, we simply used the average and standard deviation of the transfer probabilities from the models that do provide this information. As an example, the transfer matrix (expressed in terms of activity $T'_{ij} = T_{ij}\cdot P_j$) from the Geant4 INCLXX model for all three isotopes of interest is shown in Fig.~\ref{fig:transmat}. The uncertainties are calculated by propagating the statistical errors from the simulations through Eqs.~(\ref{eq:transfer}), (\ref{eq:ejection}), and (\ref{eq:implantation}). Additionally, we have evaluated a 1\% systematic uncertainty on the ejection and implantation of $^3$H{} and $^7$Be~due to the uncertainty in the target thicknesses. \subsubsection{Tritium} The model predictions for the ejected and implanted activity of tritons in CCD 3 are shown in the third and fourth columns of Table~\ref{tab:trit_pred}. One can see that, depending on the model, 25\%--50\% of the tritons produced in the CCDs are ejected, and there is significant implantation of tritons from the protective aluminum boxes surrounding the CCDs. Due to the similarity of the aluminum and silicon nuclei and the fact that the reaction Q-values for triton production differ by only \SI{5.3}{MeV}, at high energies the production of tritons in aluminum is very similar to that in silicon. In Ref.~\cite{benck2002secondary}, the total triton production cross section as well as the single and double differential cross sections for neutron-induced triton ejection were found to be the same for silicon and aluminum, within the uncertainty of the measurements. This led the authors to suggest that results for aluminum, which are more complete and precise, can also be used for silicon. We show all existing measurements for neutron- and proton-induced triton production in aluminum \cite{benck2002fast, otuka2014towards, zerkin2018experimental} in Fig.~\ref{fig:al_3h_cross_sections} along with model predictions.
Comparison to Fig.~\ref{fig:si_3h_cross_sections} shows that all models considered have very similar predictions for aluminum and silicon. This similarity in triton production, as well as the similar stopping powers of aluminum and silicon, leads to a close compensation of the tritons ejected from the silicon CCD by the tritons implanted into the CCD from the aluminum box. If the material of the box and CCD were identical and there were sufficient material surrounding the CCD, the compensation would be exact, with no correction to the production required (ignoring attenuation of the neutron flux). In our case, the ratio of production to residual tritons is predicted to be \num{0.985 \pm 0.078}, based on the mean and RMS over all models with kinematic information, and we apply this ratio to the rest of the cross-section models. \subsubsection{$^7$Be} Because $^7$Be\ is a heavier nucleus, the fraction of ejected $^7$Be~nuclei is expected to be smaller than for tritons. As listed in Table~\ref{tab:ber_pred}, the Geant4 INCLXX model predicts that $\sim17\%$ of the $^7$Be~produced in the silicon wafers is ejected. For the BIC and BERTINI models, the predicted production rates in silicon are roughly 400 times smaller than our measurement and, within the statistics of our simulations, we could only place upper limits on the fraction ejected from the wafers at roughly 30\%. We chose to use Wafer 3 for our estimation because it has the largest amount of silicon upstream of the targets, allowing for the closest compensation of the ejection through implantation. However, for $^7$Be~there is also a contribution of implantation from production in the $\sim$\num{0.5}" of air between the wafer targets, which varies between \SIrange[range-phrase = --]{0.4}{0.6}{\milli\becquerel} for the different models. Because this is significant compared to the severely underestimated production and ejection in silicon for the BERTINI and BIC models, the ratio of the production to residual activity is also greatly underestimated, and we have therefore chosen not to use the BERTINI and BIC models for estimations of the $^7$Be~production rate from here onward. For all models without kinematic information we have used the ratio of production to residual $^7$Be~activity from the Geant4 INCLXX model, i.e.\ \num{1.008 \pm 0.046}. \subsubsection{$^{22}$Na} As seen in the third and fourth columns of Table~\ref{tab:sod_pred}, both the ejection and implantation fractions of $^{22}$Na~nuclei are negligible due to the large size of the residual nucleus, and no correction needs to be made to the predicted production activity. \begin{figure} \centering \includegraphics[width=\columnwidth]{al_vs_si_h3_crosssections_2.pdf} \caption{Experimental measurements (data points) and model estimates (continuous lines) of the neutron-induced tritium production in aluminum. Measurements of the proton-induced cross section are also shown for reference. For direct comparison, we also show the corresponding model predictions for silicon (dashed lines) from Fig.~\ref{fig:si_3h_cross_sections}.} \label{fig:al_3h_cross_sections} \end{figure} \subsection{Comparison to Experimental Measurements} The ratios of the experimentally measured activities to the predicted residual activities from the different models are shown in the final columns of Tables~\ref{tab:trit_pred}, \ref{tab:ber_pred}, and \ref{tab:sod_pred} for $^3$H{}, $^7$Be{}, and $^{22}$Na{}, respectively.
For tritium, it can be seen that the predictions of the K\&K and INCL models are in fairly good agreement with the measurement, while the TALYS model overpredicts and the Geant4 BERTINI and BIC models underpredict the activity by more than a factor of two. For $^7$Be, the best agreement with the data comes from the S\&T model and the spline fit to measurements of the proton-induced cross section. We note that the proton cross sections do slightly overpredict the production from neutrons, as found in Ref.~\cite{ninomiya2011cross}, but the value is within the measurement uncertainty. For $^{22}$Na, there is good agreement between our measured activity and the predictions from the experimental measurements of the neutron-induced activity by Michel et al. \cite{michel2015excitation, hansmann2010production}, extrapolated at high energies using the TALYS model. For comparison, the use of the proton-induced production cross section (shown in Fig.~\ref{fig:si_22na_cross_sections}) leads to a value that is roughly 1.9$\times$ larger than our measured activity. If we assume that the energy dependence of the cross-section model is correct, the ratio of the experimentally measured activity to the predicted activity is the normalization factor that must be applied to each model to match the experimental data. In the next section we will use this ratio to estimate the production rates from cosmic-ray neutrons at sea level. \section{Cosmogenic Neutron Activation} \label{sec:cosmogenic_rates} The isotope production rate per unit target mass from the interaction of cosmic-ray neutrons, $P'$ [\si{\atoms\per\kg\per\second}], can be written as \begin{linenomath*} \begin{align} P' = n \int \Phi(E) \cdot \sigma(E)~dE, \end{align} \end{linenomath*} where $n$ is the number of target atoms per unit mass of silicon [atoms/kg], $\sigma(E)$ is the isotope production cross section [cm$^2$], $\Phi(E)$ is the cosmic-ray neutron flux [\si{\neutrons\per\cm\squared\per\second\per\MeV}], and the integral is evaluated from 1\,MeV to 10\,GeV.\footnote{The TALYS cross sections only extend up to 1 GeV \cite{koning2014extension}. We have assumed a constant extrapolation of the value at 1\,GeV for energies $>$1\,GeV.} While the cross section is not known across the entire energy range and each of the models predicts a different energy dependence, the overall normalization of each model is determined by the comparison to the measurements on the LANSCE neutron beam. The similar shapes of the LANSCE beam and the cosmic-ray neutron spectrum allow us to greatly reduce the systematic uncertainty arising from the unknown cross section. There have been several measurements and calculations of the cosmic-ray neutron flux (see, e.g., Refs.~\cite{hess1959cosmic, armstrong1973calculations, ziegler1996terrestrial}). The intensity of the neutron flux varies with altitude, location in the geomagnetic field, and solar magnetic activity---though the spectral shape does not vary as significantly---and correction factors must be applied to calculate the appropriate flux \cite{desilets2001scaling}. The most commonly used reference spectrum for sea-level cosmic-ray neutrons is the so-called ``Gordon'' spectrum \cite{gordon2004measurement} (shown in Fig.~\ref{fig:lanscebeamenergy}), which is based on measurements at five different sites in the United States, scaled to sea level at the location of New York City during the mid-point of solar modulation. 
We used the parameterization given in Ref.~\cite{gordon2004measurement}, which agrees with the data to within a few percent. The spectrum uncertainties at high energies are dominated by uncertainties in the spectrometer detector response function ($<4$\% below 10 MeV and 10--15\% above 150 MeV). We have assigned an average uncertainty of 12.5\% across the entire energy range. \begin{table}[t!] \centering \begin{tabular}{ccc} \hline Model & Predicted & Scaled \\ & cosmogenic $^3$H & cosmogenic $^3$H \\ & production rate & production rate\\ & [\si{\atoms\per\kilogram\per\dayshort}] & [\si{\atoms\per\kilogram\per\dayshort}] \\ \hline K\&K (ACTIVIA) & \num{98 \pm 12} & \num{108 \pm 20} \\ TALYS & \num{259 \pm 33} & \num{96 \pm 18}\\ INCL++(ABLA07) & \num{106 \pm 13} & \num{114 \pm 22}\\ G4 BERTINI & \num{36.1 \pm 4.5} & \num{124 \pm 22}\\ G4 BIC & \num{42.8 \pm 5.4} & \num{102 \pm 17}\\ G4 INCLXX & \num{110 \pm 14} & \num{130 \pm 23}\\ \hline \end{tabular} \caption{Predicted $^3$H\ production rates (middle column) from sea-level cosmic-ray neutron interactions in silicon for different cross-section models. The final column provides our best estimate of the production rate for each model after scaling by the ratio of the measured to predicted $^3$H~activities for the LANSCE neutron beam.} \label{tab:trit_cosmic} \end{table} \begin{table}[t!] \centering \begin{tabular}{ccc} \hline Model & Predicted & Scaled \\ & cosmogenic $^7$Be & cosmogenic $^7$Be \\ & production rate & production rate\\ & [\si{\atoms\per\kilogram\per\dayshort}] & [\si{\atoms\per\kilogram\per\dayshort}] \\ \hline S\&T (ACTIVIA) & \num{8.1 \pm 1.0} & \num{8.7 \pm 1.6}\\ TALYS & \num{4.17 \pm 0.52} & \num{6.2 \pm 1.3}\\ INCL++(ABLA07) & \num{2.81 \pm 0.35} & \num{8.8 \pm 1.9}\\ $^{\text{nat}}$Si(p,x)$^7$Be Spl. & \num{9.8 \pm 1.2} & \num{8.3 \pm 1.7}\\ G4 INCLXX & \num{0.411 \pm 0.052} & \num{8.4 \pm 1.7}\\ \hline \end{tabular} \caption{Predicted $^7$Be\ production rates (middle column) from sea-level cosmic-ray neutron interactions in silicon for different cross-section models. The final column provides our best estimate of the production rate for each model after scaling by the ratio of the measured to predicted $^7$Be~activities for the LANSCE neutron beam.} \label{tab:ber_cosmic} \end{table} \begin{table}[t!] \centering \begin{tabular}{ccc} \hline Model & Predicted & Scaled \\ & cosmogenic $^{22}$Na & cosmogenic $^{22}$Na\\ & production rate & production rate\\ & [\si{\atoms\per\kilogram\per\dayshort}] & [\si{\atoms\per\kilogram\per\dayshort}] \\ \hline S\&T (ACTIVIA) & \num{86 \pm 11} & \num{43.2 \pm 7.1}\\ TALYS & \num{60.5 \pm 7.6} & \num{43.0 \pm 6.8}\\ INCL++(ABLA07) & \num{60.0 \pm 7.5} & \num{43.1 \pm 7.2}\\ Michel-TALYS & \num{42.8 \pm 5.4} & \num{42.0 \pm 6.8}\\ G4 BERTINI & \num{28.0 \pm 3.5} & \num{43.0 \pm 7.3}\\ G4 BIC & \num{115 \pm 14} & \num{43.4 \pm 7.2}\\ G4 INCLXX & \num{116 \pm 15} & \num{43.1 \pm 7.1}\\ \hline \end{tabular} \caption{Predicted $^{22}$Na\ production rates (middle column) from sea-level cosmic-ray neutron interactions in silicon for different cross-section models.
The final column provides our best estimate of the production rate for each model after scaling by the ratio of the measured to predicted $^{22}$Na~activities for the LANSCE neutron beam.} \label{tab:sod_cosmic} \end{table} The predicted production rates per unit target mass for the cross-section models considered are shown in the second columns of Tables~\ref{tab:trit_cosmic}, \ref{tab:ber_cosmic}, and~\ref{tab:sod_cosmic} for $^3$H, $^7$Be, and $^{22}$Na, respectively. Scaling these values by the ratio of the measured to predicted activities for the LANSCE neutron beam, we obtain our best estimates for the neutron-induced cosmogenic production rates per unit target mass, shown in the corresponding final columns. The spread in the values for the different cross-section models is an indication of the systematic uncertainty in the extrapolation from the LANSCE beam measurement to the cosmic-ray neutron spectrum. If the LANSCE neutron-beam spectral shape were the same as that of the cosmic-ray neutrons, or if the cross-section models all agreed in shape, the central values in the final column of each table would be identical. Our best estimate of the activation rate of tritium in silicon from cosmic-ray neutrons is \mbox{$(112 \pm 15_\text{exp} \pm 12_\text{cs} \pm 14_\text{nf})$} \si{\atomstrit\per\kg\per\day}, where the first uncertainty listed is due to experimental measurement uncertainties (represented by the average uncertainty on the ratio of the measured to predicted activities from the LANSCE beam irradiation for a specific cross-section model), the second is due to the uncertainty in the energy dependence of the cross section (calculated as the standard deviation of the scaled cosmogenic production rates of the different models), and the third is due to the uncertainty in the sea-level cosmic-ray neutron flux. Similarly, the neutron-induced cosmogenic activation rates for $^7$Be\ and $^{22}$Na\ in silicon are \mbox{$(8.1 \pm 1.3_\text{exp} \pm 1.1_\text{cs} \pm 1.0_\text{nf})$} \si{\atomsber\per\kg\per\day} and \mbox{$(43.0 \pm 4.7_\text{exp} \pm 0.4_\text{cs} \pm 5.4_\text{nf})$} \si{\atomssod\per\kg\per\day}. \section{Activation from other particles} \label{sec:alternate} In addition to activity induced by fast neutrons, interactions of protons, gamma rays, and muons also contribute to the total production rate of $^3$H, $^7$Be, and $^{22}$Na. In the following subsections we describe the methods we used to estimate the individual contributions using existing measurements and models. In some cases the experimental data are very limited and we have had to rely on rough approximations based on other targets and related processes. \subsection{Proton Induced Activity} At sea level, the flux of cosmic-ray protons is lower than that of cosmic-ray neutrons due to the attenuation effects of additional electromagnetic interactions in the atmosphere. To estimate the production rate from protons, we have used the proton spectra from Ziegler \cite{ziegler1979effect, ziegler1981effect} and Diggory et al.\ \cite{diggory1974momentum} (scaled by the angular distribution from the PARMA analytical model \cite{sato2016analytical} as implemented in the EXPACS software program \cite{expacs}), shown in Fig.~\ref{fig:alt_flux_comp}.
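The scaling machinery used above for neutrons, and reused for protons below, amounts to the following sketch (ours; the array inputs are placeholders), together with the quadrature combination of the three quoted uncertainty classes for tritium:
\begin{verbatim}
import numpy as np

def scaled_rate(E, flux, sigma, n_per_kg, ratio):
    # P' [atoms/kg/day] = ratio * n * integral of flux(E) * sigma(E) dE
    # flux [n/cm^2/s/MeV], sigma [cm^2], n_per_kg [atoms/kg];
    # ratio is the measured/predicted activity from the LANSCE exposure
    return ratio * n_per_kg * np.trapz(flux * sigma, E) * 86400.0

print(np.sqrt(15**2 + 12**2 + 14**2))  # ~24 atoms/kg/day on 112 (tritium)
\end{verbatim}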
\begin{figure} \centering \includegraphics[width=\columnwidth]{alt_flux_comparison.pdf} \caption{Comparison of sea-level cosmic-ray fluxes of protons \cite{diggory1974momentum, ziegler1979effect, ziegler1981effect}, gamma rays \cite{expacs}, and neutrons \cite{gordon2004measurement}.} \label{fig:alt_flux_comp} \end{figure} Experimental measurements of the proton-induced tritium production cross section have been made only at a few energies (see Fig.~\ref{fig:si_3h_cross_sections}). We have therefore based our estimates on the neutron cross-section models, scaled by the same factor used in Table~\ref{tab:trit_pred}. To account for possible differences between the proton- and neutron-induced cross sections, we have included a 30\% uncertainty based on the measured differences between the cross sections in aluminum (see Fig.~\ref{fig:al_3h_cross_sections}). Similar to the neutron-induced production, we have used the mean and sample standard deviation of the production rates calculated with all the different combinations of the proton spectra and cross-section models as our estimate of the central value and uncertainty, yielding a sea-level production rate from protons of \SI{10.0 \pm 4.5}{\atomstrit\per\kg\per\day}. For $^7$Be~and $^{22}$Na, measurements of the proton cross section across the entire energy range have been made; we have used spline fits to the data with an overall uncertainty of roughly 10\% based on the experimental uncertainties (see Figs.~\ref{fig:si_7be_cross_sections}~and \ref{fig:si_22na_cross_sections}). Our best estimates for the $^7$Be~and $^{22}$Na~production rates from protons are \SI{1.14 \pm 0.14}{\atomsber\per\kg\per\day} and \SI{3.96 \pm 0.89}{\atomssod\per\kg\per\day}. \begin{table*}[t!] \centering \begin{tabular}{cccc} \hline \vrule width 0pt height 2.2ex Source & $^3$H~production rate & $^7$Be~production rate & $^{22}$Na~production rate \\ & [\si{\atoms\per\kilogram\per\day}] & [\si{\atoms\per\kilogram\per\day}] & [\si{\atoms\per\kilogram\per\day}] \\ \hline Neutrons & \num{112 \pm 24} & \num{8.1 \pm 1.9} & \num{43.0 \pm 7.2}\\ Protons & \num{10.0 \pm 4.5} & \num{1.14 \pm 0.14} & \num{3.96 \pm 0.89}\\ Gamma Rays & \num{0.73 \pm 0.51} & \num{0.118 \pm 0.083} & \num{2.2 \pm 1.5}\\ Muon Capture & \num{1.57 \pm 0.92} & \num{0.09 \pm 0.09} & \num{0.48 \pm 0.11}\\ \hline Total & \num{124 \pm 25} & \num{9.4 \pm 2.0} & \num{49.6 \pm 7.4}\\ \hline \end{tabular} \caption{Final estimates of the radioisotope production rates in silicon exposed to cosmogenic particles at sea level.} \label{tab:final_cosmic_prod} \end{table*} \subsection{Gamma Ray Induced Activity} \begin{figure} \centering \includegraphics[width=\columnwidth]{si_gamma_crosssections.pdf} \caption{Estimated photonuclear cross-section models for production of $^3$H, $^7$Be, and $^{22}$Na. The dashed lines indicate the original models from TALYS while the solid lines indicate the models scaled to match yield measurements made with bremsstrahlung radiation \cite{matsumura2000target, currie1970photonuclear}.} \label{fig:gamma_cs} \end{figure} The flux of high-energy gamma rays at the Earth's surface was obtained using the PARMA analytical model \cite{sato2016analytical} as implemented in the EXPACS software program \cite{expacs}. Similar to the neutron spectrum, we used New York City as our reference location for the gamma spectrum, which is shown in Fig.~\ref{fig:alt_flux_comp}. 
Photonuclear yields of $^7$Be~and $^{22}$Na~in silicon have been measured using bremsstrahlung beams with endpoints ($E_0$) up to \SI{1}{\giga\eV} \cite{matsumura2000target}. We are not aware of any measurements of photonuclear tritium production in silicon, though there is a measurement in aluminum with $E_0 =$ \SI{90}{\MeV} \cite{currie1970photonuclear}, which we assume to be the same as for silicon. The yields, $Y(E_0)$, are typically quoted in terms of the cross section per equivalent quanta (eq.q), defined as \begin{align} Y(E_0) = \frac{\displaystyle\int_0^{E_0} \sigma(k)N(E_0,k)dk}{\displaystyle \frac{1}{E_0}\int_0^{E_0} kN(E_0,k)dk} \end{align} where $\sigma(k)$ is the cross section as a function of photon energy $k$, and $N(E_0, k)$ is the bremsstrahlung energy spectrum. To obtain an estimate for $\sigma(k)$, we assume a $1/k$ energy dependence for $N(E_0, k)$~\cite{tesch1971accuracy} and scale the TALYS photonuclear cross-section models to match the measured yields of \SI{72}{\micro\barn \per \eqquanta} at $E_0 =$ \SI{90}{\MeV} for tritium and \SI{227}{\micro\barn \per \eqquanta} and \SI{992}{\micro\barn \per \eqquanta} at $E_0 =$ \SI{1000}{\MeV} for $^7$Be\ and $^{22}$Na, respectively (see Fig.~\ref{fig:gamma_cs}). This corresponds to estimated photonuclear production rates of \SI{0.73}{\atomstrit\per\kilogram\per\day}, \SI{0.12}{\atomsber\per\kilogram\per\day}, and \SI{2.2}{\atomssod\per\kilogram\per\day}. Given the large uncertainties in the measured yields, the cross-section spectral shape, and the bremsstrahlung spectrum, we assume a $\sim 70\%$ overall uncertainty on these rates. \subsection{Muon Capture Induced Activity} The production rate of a specific isotope $X$ from sea-level cosmogenic muon capture can be expressed as \begin{align} P_\mu(X) = R_0 \cdot \frac{\lambda_c\text{(Si)}}{Q\lambda_d + \lambda_c\text{(Si)}}\cdot f_\text{Si}(X) \end{align} where $R_0 = \SI{484 \pm 52}{\muons\per\kg\per\day}$ is the rate of stopped negative muons at sea level at geomagnetic latitudes of about \SI{40}{\degree} \cite{charalambus1971nuclear}, and the middle term is the fraction of muons that capture on silicon (as opposed to decaying), where the capture rate on silicon is $\lambda_c$(Si) = \SI{8.712 \pm 0.018 E5}{\per\sec} \cite{suzuki1987total}, the muon decay rate is $\lambda_d$ = \SI{4.552E5}{\per\sec} \cite{tanabashi2018m}, and $Q = 0.992$ is the Huff correction factor for bound-state decay \cite{measday2001nuclear}. The final term, $f_\text{Si}(X)$, is the fraction of muon captures on silicon that produce isotope $X$. For $^{28}$Si, the fraction of muon captures with charged particles emitted has been measured to be \SI{15 \pm 2}{\percent}, with theoretical estimates \cite{lifshitz1980nuclear} predicting the composition to be dominated by protons ($f_\text{Si}(^1$H) = \SI{8.8}{\percent}), alphas ($f_\text{Si}(^4$He) = \SI{3.4}{\percent}), and deuterons ($f_\text{Si}(^2$H) = \SI{2.2}{\percent}). The total fraction of muon captures that produce tritons has not been experimentally measured\footnote{A direct measurement of triton production from muon capture in silicon was performed by the \href{http://muon.npl.washington.edu/exp/AlCap/index.html}{AlCap Collaboration} and a publication is in preparation.}, but a lower limit can be set at \SI{7 \pm 4 e-3}{\percent} from an experimental measurement of tritons emitted above 24 MeV \cite{budyashov1971charged}.
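As a consistency check (our arithmetic, using the values quoted above), the fraction of stopped negative muons that undergo nuclear capture on silicon is
\begin{linenomath*}
\begin{align}
\frac{\lambda_c\text{(Si)}}{Q\lambda_d + \lambda_c\text{(Si)}} = \frac{8.712\times10^{5}}{0.992 \times 4.552\times10^{5} + 8.712\times10^{5}} \simeq 0.66,
\end{align}
\end{linenomath*}
so that $P_\mu(X) \simeq 484 \times 0.66 \times f_\text{Si}(X) \approx 3.2$ \si{\atoms\per\kg\per\day} per percent of $f_\text{Si}(X)$; for example, $f_\text{Si}$($^{22}$Na) = \SI{0.15}{\percent} reproduces the \SI{0.48}{\atomssod\per\kg\per\day} quoted below.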
Recent measurements of the emission fractions of protons and deuterons following muon capture on aluminum have found values of $f_\text{Al}(^1$H) = \SI{4.5 \pm 0.3}{\percent} and $f_\text{Al}(^2$H) = \SI{1.8 \pm 0.2}{\percent} \cite{gaponenko2020charged}, and those same data can be used to calculate a rough triton emission fraction of $f_\text{Al}(^3$H) = \SI{0.4}{\percent} \cite{gaponenkopersonal}. If one assumes the same triton kinetic energy distribution in silicon as estimated for aluminum \cite{gaponenko2020charged} and uses it to scale the value measured above 24 MeV, one obtains a triton production estimate of $f_\text{Si}(^3$H) = \SI{0.49 \pm 0.28}{\percent}. The production rate of tritons from muon capture is then estimated to be \SI{1.57 \pm 0.92}{\atomstrit\per\kg\per\day}. The fraction of muon captures that produce $^{22}$Na~has been measured at $f_\text{Si}$($^{22}$Na) = \SI{0.15 \pm 0.03}{\percent} \cite{heisinger2002production}, corresponding to a production rate from muon captures of \SI{0.48 \pm 0.11}{\atomssod\per\kg\per\day}. To our knowledge, there have been no measurements of the production of $^7$Be~through muon capture on silicon. We assume that the ratio of $^7$Be~to $^{22}$Na~production is the same for muon capture as it is for the neutron production rates calculated earlier, with roughly \SI{100}{\percent} uncertainty, resulting in an estimated production rate from muon captures of \SI{0.09 \pm 0.09}{\atomsber\per\kg\per\day}. \section{Discussion} \label{sec:discussion} The final estimates for the total cosmogenic production rates of $^3$H, $^7$Be, and $^{22}$Na~at sea level are listed in Table~\ref{tab:final_cosmic_prod}. These rates can be scaled by the known variations of particle flux with altitude or depth, location in the geomagnetic field, and solar activity, to obtain the total expected activity in silicon-based detectors for specific fabrication, transportation, and storage scenarios. The production rate at sea level is dominated by neutron-induced interactions, but for shallow underground locations muon capture may be the dominant production mechanism. For estimates of the tritium background, implantation of tritons generated in surrounding materials and ejection of tritons from thin silicon targets should also be taken into account. Tritium is the main cosmogenic background of concern for silicon-based dark matter detectors. At low energies, 0--5\,keV, the estimated production rate corresponds to an activity of roughly \SI{0.002}{\decays \per \keV \per \kg \per \day} per day of sea-level exposure. This places strong restrictions on the fabrication and transportation of silicon detectors for next-generation dark matter experiments. In order to mitigate the tritium background, we are currently exploring the possibility of using low-temperature baking to remove implanted tritium from fabricated silicon devices. Beyond dark matter detectors, silicon is also widely used in sensors and electronics for other rare-event searches, owing to its prevalence in the semiconductor industry and the availability of high-purity material. The relative contributions of $^3$H, $^7$Be, and $^{22}$Na~to the overall background rate of an experiment depend not only on the activation rate but also on the location of these components within the detector and the specific energy region of interest.
The cosmogenic production rates determined here can be used to calculate experiment-specific background contributions and shielding requirements for all silicon-based materials. \section{Acknowledgements} We are grateful to John Amsbaugh and Seth Ferrara for designing the beamline holders, Larry Rodriguez for assistance during the beam time, and Brian Glasgow and Allan Myers for help with the gamma counting. We would also like to thank Alan Robinson and Andrei Gaponenko for useful discussions on production mechanisms from other particles. This work was performed, in part, at the Los Alamos Neutron Science Center (LANSCE), a NNSA User Facility operated for the U.S.\ Department of Energy (DOE) by Los Alamos National Laboratory (Contract 89233218CNA000001) and we thank John O'Donnell for his assistance with the beam exposure and data acquisition. Pacific Northwest National Laboratory (PNNL) is operated by Battelle Memorial Institute for the U.S.\ Department of Energy (DOE) under Contract No.\ DE-AC05-76RL01830; the experimental approach was originally developed under the Nuclear-physics, Particle-physics, Astrophysics, and Cosmology (NPAC) Initiative, a Laboratory Directed Research and Development (LDRD) effort at PNNL, while the application to CCDs was performed under the DOE Office of High Energy Physics' Advanced Technology R\&D subprogram. We acknowledge the financial support from National Science Foundation through Grant No.\ NSF PHY-1806974 and from the Kavli Institute for Cosmological Physics at The University of Chicago through an endowment from the Kavli Foundation. The CCD development work was supported in part by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
\section{Introduction} The orbital configuration of Saturn's mid-sized moons, Mimas, Enceladus, Tethys, Dione and Rhea from inner to outer orbits, is puzzling: they are trapped in mean-motion resonances for every other pair (Mimas-Tethys 4:2 and Enceladus-Dione 2:1), but not for adjacent pairs. The observed current fast tidal orbital expansion rate \citep{Lainey2012, Lainey2017} and observations of the rings by Cassini suggest late formation of the satellites \citep[see e.g.][]{Ida2019}, such as in the formation model from a hypothetical ancient massive ring system \citep{Charnoz2011,Crida2012}, which we refer to as the ``disk.'' Note, however, that a possible slower tidal orbital expansion rate in the past, before resonance locking between the orbital frequency and the planetary internal oscillation mode, could allow the satellite formation in the circumplanetary disk 4.5~Gyr ago \citep{Lainey2020}. In the model of formation from the disk, satellites were formed one after another at the disk outer edge, and the outward orbital migrations of adjacent pairs of satellites due to the planetary tide are generally convergent. In that case, the satellite pairs are usually captured into a mutual first-order mean-motion resonance, which is inconsistent with the current orbital configuration \citep[e.g.][]{Nakajima2019}. The avoidance of such resonance capture requires moderate orbital eccentricity of the satellites or fast orbital migration with a timescale smaller than the resonant libration period \citep{Malhotra1996}. Recent Cassini observations determined the current ring mass as $M_{\rm {disk}} = (1.54 \pm 0.49) \times 10^{19}\,{\rm kg} \simeq 0.4 \times$ Mimas mass \citep{Iess2019}. The rings still undergo viscous spreading and should have been much more massive in the past \citep{Salmon2010}. \citet{Crida2012} suggested that the satellite-disk (rings) interaction is more effective for the orbital migration than Saturn's tide until the satellite reaches the 2:1 resonance with the disk outer edge, beyond which the disk torque would quickly decay. They applied the theoretical model for a planet in a gap of a protoplanetary disk \citep[e.g.][]{Lin1986} to estimate the migration rate of the satellite. The satellite-disk interactions can also excite the orbital eccentricity of the satellite \citep{Goldreich2003b, Duffell2015}. The eccentricity excitation may be much faster than the eccentricity damping by the satellite's tide near the disk edge, as we will show in Section 2. If the eccentricity is excited beyond a critical value, the satellites can avoid the resonance capture and reach the current orbital configuration \citep{Nakajima2019}. Previous studies on planet-disk interactions usually assumed protoplanetary gas disks that are stable against self-gravitational instability, while the rings are often in a marginally unstable state \citep[e.g.][]{Salo1995,Daisaka2001}. Thus, it is important to investigate the interactions of a satellite and a marginally unstable self-gravitating particle disk (rings) by high-resolution N-body simulations. \citet{Hyodo2015} performed high-resolution N-body simulations ($N=3\times10^4 - 5 \times 10^4$) of the formation of satellites from a disk with mass $M_{\rm disk} \simeq (0.01$--$0.06) \, M_{\rm p}$ ($M_{\rm p}$ is the planet mass) to find the dependence of the forming satellite mass on the initial disk mass. On the other hand, they were not concerned with the orbital evolution of the satellites and the disk structures.
Here, we focus on the detailed evolution of the disk structures and the satellite's orbit. In our study, we perform high-resolution ($N\sim10^5$) N-body simulations of particle disk evolution due to the mutual gravitational interactions and inelastic collisions of the particles, and of the disk's interactions with a satellite in an orbit exterior to the disk, to investigate the orbital evolution of the satellite. \section{Methods} We use a new N-body simulation code, ``GPLUM" (Ishigaki in prep), that adopts the particle-particle particle-tree scheme \citep[P$^3$T,][]{Ohsino2011} for planetary formation. The P$^3$T scheme uses the fourth-order Hermite integrator to calculate gravitational interactions between particles within a cut-off radius and the Barnes-Hut tree method for gravity from particles beyond the cut-off \citep{Iwasawa2016}, which guarantees higher-order integrations for close interactions and fast integrations for perturbations from a large number of distant particles simultaneously. GPLUM adopts an individual cut-off radius for each particle, depending on its mass and distance from the central body, resulting in a significant speedup of calculations while keeping the accuracy. We follow the orbital evolution of the satellite and the disk particles under the gravitational interactions, inelastic collisions between the particles, and the accretion of the particles onto the satellite and the host planet. When the physical sizes of two bodies overlap, we regard this as a collision. For collisions between the disk particles, we apply inelastic collisions with the normal restitution coefficient $\epsilon_{\rm n} = 0.1$ and the tangential restitution coefficient $\epsilon_{\rm t} = 1$ (free-slip condition). When a particle collides with the satellite at an orbital radius larger than the Roche limit radius (Eq. (\ref{roche})), the collision results in gravitational binding of the particle and the satellite. Because the satellite does not reenter the Roche limit in the results we show here, we merge the satellite and the particle into a single body, conserving their total mass and momentum. The particles are initially distributed from the physical radius of the planet ($R_{\rm p}$) to the Roche limit radius (denoted by $a_{\rm R}$), which is given by \begin{eqnarray} a_{\rm R} \simeq 2.456\left( \frac{\rho}{\rho_{\rm p}} \right)^{-1/3} R_{\rm p}, \label{roche} \end{eqnarray} where $\rho$ and $\rho_{\rm p}$ are the bulk densities of the disk particles and the planet, respectively. In this paper, we assume $\rho = 0.9\, {\rm {g/cm^3}}$ and $\rho_{\rm p} = 0.7\,{\rm {g/cm^3}}$, so that $a_{\rm R} \simeq 2.26\,R_{\rm p}$. The initial surface density of the particles follows $\Sigma(r) \propto r^{-3}$ and their total mass is $\sim (10^{-3}$--$10^{-2}) M_{\rm p}$. We use $8\times10^4 - 1.2 \times 10^5$ particles with equal mass of $M \sim (10^{-8}$--$10^{-7}) M_{\rm p}$ for the disk. Initially, the particles have circular orbits with small enough inclinations, following a normal distribution with $\langle e^2 \rangle^{1/2}=2 \langle i^2 \rangle ^{1/2}\sim R/r \sim (2-4) \times 10^{-3}$, where $R$ is the particle physical radius. Because of the inelastic collisions and self-gravity of the disk particles, they are quickly relaxed to quasi-equilibrium values, which are also $\sim R/r$ ($e$ is a few times larger than $R/r$, probably due to scattering by the satellite). The satellite with a mass $M_{\rm s} \sim 10^{-3} M_{\rm p}$ is placed outside the Roche limit.
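For concreteness, initial conditions of this type can be generated as in the sketch below. This is an illustrative reconstruction, not the actual GPLUM input generator; the eccentricities and inclinations are drawn from Rayleigh distributions (equivalent to Gaussian-distributed eccentricity- and inclination-vector components) with the quoted rms values:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N   = 100_000
R_p = 1.0            # planet radius (code units)
a_R = 2.26 * R_p     # Roche limit radius

# Semimajor axes: Sigma(r) ~ r^-3 means dN/dr ~ Sigma * 2*pi*r ~ r^-2.
# Inverse-transform sampling of the normalized CDF of r^-2 on [R_p, a_R]:
u = rng.random(N)
a = 1.0 / (1.0/R_p - u * (1.0/R_p - 1.0/a_R))

# Near-circular, near-planar orbits with rms e ~ 2 * rms i ~ R/r:
e_rms = 3e-3
e = rng.rayleigh(scale=e_rms/np.sqrt(2.0), size=N)   # <e^2>^{1/2} = e_rms
i = rng.rayleigh(scale=e_rms/(2.0*np.sqrt(2.0)), size=N)

print(a.min(), a.max(), np.sqrt(np.mean(e**2)))      # sanity checks
\end{verbatim}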
We do not include the outward orbital migration due to the planetary tide and the eccentricity damping due to the satellite's tide, because they are negligible compared with the migration due to the satellite-disk interactions at radii inside the 2:1 resonance with the disk outer edge. The tidal $e$-damping and $a$-expansion timescales are $\tau_{e, \rm tide} \sim (2/21)(Q_{\rm s}/k_{\rm 2s})(M_{\rm s}/M_{\rm p})(a_{\rm s}/R_{\rm m})^5\Omega^{-1}$ ($R_{\rm m}$ being the satellite's physical radius) and $\tau_{a, \rm tide} \sim (7/2)[(Q_{\rm p}/k_{\rm 2p})/(Q_{\rm s}/k_{\rm 2s})] (M_{\rm p}/M_{\rm s})^{1/3}\tau_{\rm e,tide}$ \citep[e.g.,][]{Charnoz2011}. For the satellite tidal parameters $Q_{\rm s}/k_{\rm 2s}\sim 10^5$, $M_{\rm p}/M_{\rm s} \sim 10^6$, and the satellite orbital radius $a_{\rm s}\sim a_{\rm R} \simeq 2.26 R_{\rm p}$, $\tau_{e, \rm tide} \sim 6 \times 10^{10} \Omega^{-1} \sim 2 \times 10^7$ years. For the planetary tidal parameter $Q_{\rm p}/k_{\rm 2p}\sim 10^{3}$--$10^{5}$, $\tau_{a, \rm tide} \sim 10^7$--$10^9$ years. As we show in Section 3, the $a$-expansion timescale due to the satellite-disk interactions is $\tau_{a, \rm disk} \sim (\pi^2/51)(M_{\rm p}/M_{\rm disk})^3(M_{\rm s}/M_{\rm p})(a_{\rm R}/a_{\rm s})^{1/2}\Omega^{-1} \sim 7 \times 10^3 \Omega^{-1} \sim 2 \,{\rm years}$ for a realistic case with $M_{\rm disk}/M_{\rm p} \sim 3 \times 10^{-4}$ and $M_{\rm s}/M_{\rm p}\sim10^{-6}$ (see Section 4). Since $\tau_{a, \rm disk} \ll \tau_{a, \rm tide},\, \tau_{e, \rm tide}$ near the disk outer edge, the assumption of neglecting the tidal forces is justified in our simulations. Table \ref{tab:param} shows the parameter sets of the runs with different initial disk masses ($M_{\rm disk}$) and satellite masses ($M_{\rm s}$). The disk particles have equal masses. We use satellite masses that are much larger than those of the current Saturn's mid-sized moons. We will derive semi-analytical formulas from the results of the N-body simulations to clarify the intrinsic physics of this system. Applying the derived mass scaling law to the realistic masses of Saturn's mid-sized moons, we will discuss the possibility of avoiding the resonance trapping. \renewcommand{\arraystretch}{1} \begin{table}[h] \begin{center} \small \begin{tabular}{c|c|c|c|c|c} RUN & $M_{\rm disk} [M_{\rm p}]$ & $M_{\rm s} [M_{\rm p}]$ & $N$ & $M_{\rm s,final} [M_{\rm p}]$ & $M_{\rm s,mig} [M_{\rm p}]$ \\ \hline\hline 1 & $4.47\times10^{-3}$ & $10^{-3}$ & $8\times 10^{4}$ & $1.37\times10^{-3}$ & $1.18\times10^{-3}$ \\ 2 & $2.99\times10^{-3}$ & $6\times10^{-4}$ & $10^{5}$ & $7.93\times10^{-4}$ & $7.05\times10^{-4}$ \\ 3 & $8.45 \times10^{-3}$ & $10^{-3}$ & $8\times 10^{4}$ & $1.87\times10^{-3}$ & $1.60\times10^{-3}$ \\ 4 & $2.11\times10^{-3}$ & $6\times10^{-4}$ & $1.2\times 10^{5}$ & $7.27\times10^{-4}$ & $6.71\times10^{-4}$ \\ 5 & $5.99\times10^{-3}$ & $10^{-3}$ & $10^{5}$ & $1.59\times10^{-3}$ & $1.34\times10^{-3}$ \\ \end{tabular} \end{center} \renewcommand{\arraystretch}{1} \caption{Parameter sets of our simulations. $M_{\rm disk}$ is the initial disk mass. The unit $M_{\rm p}$ is the host planet mass. $M_{\rm s}$, $M_{\rm s, final}$ and $M_{\rm s, mig}$ are the masses of the satellite at $t=0$, at the end of the simulations, and at the time when the satellite starts outward migration, respectively.
$N$ is the initial number of the particles in each run.} \label{tab:param} \end{table} \section{Simulation Results} \subsection{Ring Structures} Figure~\ref{snap_shot}(c) is a snapshot at $t=1.47 \times 10^3 T_{\rm Kep}$ of RUN 1, where $T_{\rm Kep}$ is the Keplerian period at $r=a_{\rm R}$. The figure shows that two distinct structures are superposed: 1) the dense wake structures with small wavelengths caused by the disk self-gravity \citep[e.g.][]{Salo1995,Daisaka2001,Takeda2001} and 2) the $m=2$ spiral arms produced by the Lindblad resonance torque from the satellite. The radial wavelength and wavenumber of the self-gravity wakes are estimated as \citep{Takeda2001} \begin{align} \lambda_{\rm self} & \sim 2\pi \frac{M_{\rm disk}}{M_{\rm p}} a_{\rm R} \sim 2 \times 10^{-2} \left( \frac{M_{\rm disk}/M_{\rm p}}{3 \times 10^{-3}}\right) a_{\rm R} \label{Lambda} \\ m_{\rm self} & \sim \frac{2\pi \; a_{\rm R}}{\lambda_{\rm self}}\sim 300 \left( \frac{M_{\rm disk}/M_{\rm p}}{3 \times 10^{-3}}\right)^{-1}, \end{align} where $M_{\rm disk}$ is the total disk mass and we assumed a pitch angle $\sim \pi/4$ to estimate $m_{\rm self}$. These estimates are consistent with the result in Fig.~\ref{snap_shot}(c). \begin{figure*}[ht] \centering \includegraphics[width=1.0\hsize, bb=0.000000 0.000000 2048.000000 2048.000000]{19_snap_fig_r2_small.pdf} \caption{Time evolution of the system of RUN1: (a) $t = 0$, (b) $t = 29.4\,T_{\rm {Kep}}$, (c) $t = 1.47\times10^3\,T_{\rm {Kep}}$ and (d) $t = 2.94\times10^4\,T_{\rm {Kep}}$, where $T_{\rm {Kep}}$ is the Keplerian period at $r=a_{\rm R}$. The green point is the outer satellite and purple dots show the disk particles. Inner and outer circles in black solid lines represent the planetary surface and the Roche limit ($r=a_{\rm R}$).} \label{snap_shot} \end{figure*} The Lindblad torque exerted by the satellite makes the spiral arms on the disk \citep[e.g.][]{Goldreich1982}. The wavenumber of the spiral arms is given by \begin{eqnarray} m_{\rm res} \sim \frac{\Omega}{\Omega - \Omega_{\rm s}}, \label{arm_res} \end{eqnarray} where $m_{\rm res} \neq 1$ and $\Omega_{\rm s}$ and $\Omega$ are the orbital frequencies of the satellite and the disk. Figure~\ref{snap_shot}(c) clearly shows $m_{\rm res}=2$ spiral arms. With $a_{\rm s} \sim 3 R_{\rm p}$ and $\Omega$ at $\sim 2 R_{\rm p}$, Eq.~(\ref{arm_res}) predicts $m_{\rm res} \sim 2^{-3/2}/(2^{-3/2} - 3^{-3/2}) \sim 2$, which agrees with Fig.~\ref{snap_shot}(c). Our N-body simulations simultaneously show the short-wavelength wakes due to the self-gravity, which were often shown in local shearing-sheet simulations \citep[e.g.][]{Salo1995,Daisaka2001}, and the $m_{\rm res}=2$ global spiral-arm structure. While \citet{Hyodo2015} also performed N-body simulations with $N=3\times10^4 - 5 \times 10^4$, they did not clearly show these two distinct structures, because they used 10 times larger $M_{\rm disk}$ (accordingly, ten times smaller $m_{\rm self}$) and because their simulations often had multiple clumps. \subsection{Time Evolution of Disk Structures and the Satellite's Orbit} Figure~\ref{snap_shot}(a) to (d) shows the time evolution of the disk structures. We initially set the satellite near the disk outer edge (Fig. \ref{snap_shot}(a)). In the disk, the dense wakes quickly emerge due to the combined effect of self-gravity and inelastic collisions.
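The two characteristic wavenumbers quoted above are easily checked numerically; the short sketch below evaluates $\lambda_{\rm self}$, $m_{\rm self}$, and Eq.~(\ref{arm_res}) for RUN 1-like parameters:

\begin{verbatim}
import numpy as np

M_disk_over_Mp = 3e-3     # disk-to-planet mass ratio (RUN 1 scale)
a_R = 1.0                 # Roche limit radius (code units)

# Self-gravity wakes (pitch angle ~ pi/4 assumed):
lam_self = 2*np.pi * M_disk_over_Mp * a_R
m_self   = 2*np.pi*a_R / lam_self
print(f"lambda_self ~ {lam_self:.3f} a_R, m_self ~ {m_self:.0f}")  # ~0.02, ~300

# Satellite-driven spiral arms: m_res ~ Omega / (Omega - Omega_s),
# with Keplerian Omega ~ r^(-3/2); satellite at ~3 R_p, disk at ~2 R_p:
r_disk, a_s = 2.0, 3.0
m_res = r_disk**-1.5 / (r_disk**-1.5 - a_s**-1.5)
print(f"m_res ~ {m_res:.1f}")   # ~2.2, i.e. the m = 2 arms in the snapshot
\end{verbatim}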
The satellite creates $m_{\rm res}=3$ spiral arms by the Lindblad torque and scatters/accretes nearby disk particles to open a gap with the half width $\sim3r_{\rm Hill}$, where $r_{\rm Hill}$ is the satellite's Hill radius defined by $r_{\rm Hill}=(M_{\rm s}/3M_{\rm p})^{1/3} a_{\rm s}$ (Fig. \ref{snap_shot}(b)). However, the number of particles scattered outside the Roche limit is only $\sim 2000$ at this time; the disk mass loss is mostly caused by accretion onto the planet due to the viscous spreading. At the initial satellite location ($a_{\rm s} \simeq a_{\rm R}$), Eq.~(\ref{arm_res}) at the disk outer edge ($r \sim a_{\rm R}- 3r_{\rm Hill}$) is \begin{eqnarray} m_{\rm res} & \sim & \frac{1}{1 - (\Omega_{\rm s}/\Omega)} \sim \frac{1}{(3/2) \times 3 (r_{\rm Hill}/a_{\rm s})} \nonumber \\ &\simeq & \frac{2}{9}\left( \frac{M_{\rm s}}{3M_{\rm p}} \right)^{-1/3} \simeq 3.2 \left( \frac{M_{\rm s}/M_{\rm p}}{10^{-3}} \right)^{-1/3}. \end{eqnarray} This is consistent with the $m_{\rm res}=3$ spiral mode in Fig.~\ref{snap_shot}(b). After the gap opening, the satellite migrates outward, and $m_{\rm res}$ decreases from 3 to 2 (Fig.~\ref{snap_shot}(c)). When the satellite orbit expands beyond the 2:1 resonance with the disk outer edge, $m_{\rm res}$ becomes well smaller than 2 and the spiral arms disappear (Fig.~\ref{snap_shot}(d)). Because the theoretically predicted effective viscosity is $\propto \Sigma^2$ (Eq.~(\ref{nu})), the $r$-gradient of $\Sigma$ is quickly flattened. The self-gravity wakes become fainter (Eq.~(\ref{Lambda})) through the loss of $M_{\rm disk}$, and the spiral arms also decay (Fig.~\ref{snap_shot}(d)). Consequently, the satellite's orbital migration slows down. In the next subsection, we quantitatively discuss the satellite migration rate. \subsection{Satellite Orbital Evolution} \begin{figure}[ht] \centering \includegraphics[width=100mm,bb=0 0 200 200]{amd.pdf} \caption{ The time evolution of the satellite's semimajor axis ($a_{\rm s}$; the upper panel) and the disk mass ($M_{\rm disk}$; the lower panel). The units of the semimajor axis and the time are the Roche limit radius $a_{\rm R}$ and the Keplerian period at $a_{\rm R}$, respectively. The red curves are the results of RUN1 and the blue curve in the lower panel is the analytical estimation given by Eq.~(\ref{M_ring}). } \label{a_evolve} \end{figure} Figure~\ref{a_evolve} shows the time evolution of the satellite's semimajor axis ($a_{\rm s}$) and the total disk mass ($M_{\rm disk}$) obtained by our N-body simulation (RUN 1). The changes of $a_{\rm s}$ and $M_{\rm disk}$ become slower with $t$, which is theoretically explained as follows. Through N-body simulations, \citet{Daisaka2001} found that the effective viscosity of a self-gravitating disk is given by \begin{equation} \nu_{\rm R} \simeq 26 \gamma \frac{G^2 \Sigma_{\rm R}^2}{\Omega_{\rm R}^3} \simeq \frac{8.5}{\pi^2}\tilde{\gamma}^5 \left( \frac{M_{\rm disk}}{M_{\rm p}}\right)^2 a_{\rm R}^2 \Omega_{\rm R}, \label{nu} \end{equation} where the subscript ``$_{\rm R}$'' represents values at $r \simeq a_{\rm R}$, we used $M_{\rm disk} \sim \pi \Sigma_{\rm R} a_{\rm R}^2$, and $\tilde{\gamma}=(r/0.8 a_{\rm R})^5$ represents the effect of the finite physical size of the particles. Because we are concerned with the outer disk region, we adopted $r \simeq 0.8 a_{\rm R}$.
The rate of disk accretion onto the planet is \begin{equation} \dot{M}_{\rm disk} \sim - 3\pi \Sigma_{\rm R} \nu_{\rm R} \simeq - \frac{25.5\,\tilde{\gamma}}{\pi^2} \left( \frac{M_{\rm disk}}{M_{\rm p}}\right)^3 M_{\rm p}\Omega_{\rm R}. \label{mdot} \end{equation} Integrating this equation, we predict the explicit time evolution of the disk mass, \begin{equation} \frac{M_{\rm disk}(t)}{M_{\rm p}} \simeq \frac{1}{\sqrt{1+(102\,\tilde{\gamma}/\pi) (M_{\rm disk}(0)/M_{\rm p})^2(t/T_{\rm Kep})} } \frac{M_{\rm disk}(0)}{M_{\rm p}}, \label{M_ring} \end{equation} which reproduces the N-body simulation result (the lower panel of Fig.~\ref{a_evolve}). Using Eq.~(\ref{M_ring}), we will show that the migration is regulated by the self-gravity wakes, and not by the Lindblad resonance torque (the spiral arms) induced by the satellite. The migration rate due to the (one-sided) Lindblad resonance is \citep[e.g.][]{Lin1986,Crida2012} \begin{eqnarray} \left(\frac{da_{\rm s}}{dt}\right)_{\rm res} & \simeq & \frac{16}{27 \pi} \frac{\pi \Sigma_{\rm R} a_{\rm R}^2}{M_{\rm p}}\frac{M_{\rm s}}{M_{\rm p}} \left( \frac{\Delta a_{\rm s}}{a_{\rm R}}\right)^{-3} a_{\rm R}\Omega_{\rm R} \nonumber \\ & \sim & \frac{16}{27 \pi}\frac{M_{\rm disk}}{M_{\rm p}}\frac{M_{\rm s}}{M_{\rm p}} \left( \frac{\Delta a_{\rm s}}{a_{\rm R}}\right)^{-3} a_{\rm R}\Omega_{\rm R}, \label{dadt_Lindblad} \end{eqnarray} where $\Delta a_{\rm s} = a_{\rm s} - a_{\rm R}$. The migration rate by the self-gravity wakes is evaluated as follows. When the disk's viscous spreading beyond the Roche limit is prevented by the satellite's perturbations, the angular momentum flux in the disk ($\sim 3 \pi \Sigma \nu \, r^2 \Omega$) is transferred from the disk outer edge to the satellite's orbit. In this case, \begin{eqnarray} \left(\frac{da_{\rm s}}{dt}\right)_{\rm self} \simeq \frac{2 a_{\rm s}}{L_{\rm s}} \frac{dL_{\rm s}}{dt} \simeq \frac{2 a_{\rm s}}{M_{\rm s} a_{\rm s}\Omega_{\rm s}} \times 3 \pi \Sigma_{\rm R} \nu_{\rm R} \, a_{\rm R}^2 \Omega_{\rm R}. \label{dadt_self} \end{eqnarray} Substituting Eq.~(\ref{nu}) and $M_{\rm disk} \simeq \pi \Sigma_{\rm R} a_{\rm R}^2$ into Eq.~(\ref{dadt_self}), we obtain \begin{eqnarray} \left(\frac{da_{\rm s}}{dt}\right)_{\rm self} \simeq \frac{51 \tilde{\gamma}}{\pi^2} \left( \frac{M_{\rm disk}}{M_{\rm p}} \right)^3 \frac{M_{\rm p}}{M_{\rm s}} \left(\frac{a_{\rm s}}{a_{\rm R}}\right)^{1/2} a_{\rm R}\Omega_{\rm R}. \label{dadt_self2} \end{eqnarray} \begin{figure}[ht] \centering \includegraphics[width=80mm,bb=30 0 300 200]{dadt.pdf} \caption{ The orbital expansion rate, $da_{\rm s}/dt$, as a function of $a_{\rm s}$ in RUN1 to RUN5. The red curves are the results of the individual N-body simulations. The blue and green curves are the theoretical predictions of $(da_{\rm s}/dt)_{\rm res}$ and $(da_{\rm s}/dt)_{\rm self}$ given respectively by Eqs.~(\ref{dadt_Lindblad}) and (\ref{dadt_self2}), using $M_{\rm s}$ and $M_{\rm disk}$ obtained by the N-body simulations at individual $a_{\rm s}$ in each run. } \label{dadt} \end{figure} In Fig.~\ref{dadt}, $da_{\rm s}/dt$ from each run of our N-body simulations is compared with the analytical estimations. In the analytical estimations, $M_{\rm s}$ and $M_{\rm disk}$ obtained by the N-body simulations at each $a_{\rm s}$ are substituted into Eqs.~(\ref{dadt_Lindblad}) and (\ref{dadt_self2}) to calculate $(da_{\rm s}/dt)_{\rm res}$ and $(da_{\rm s}/dt)_{\rm self}$. The figure shows that the results of the N-body simulations follow $(da_{\rm s}/dt)_{\rm self}$.
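It is instructive to evaluate Eqs.~(\ref{dadt_Lindblad}) and (\ref{dadt_self2}) side by side. The sketch below uses $\tilde{\gamma} \approx 1$ (appropriate for $r \simeq 0.8\,a_{\rm R}$) and RUN 1-like masses, and locates the distance from the disk edge beyond which the self-gravity-wake torque takes over:

\begin{verbatim}
import numpy as np

gamma_t = 1.0      # gamma-tilde ~ 1 for r ~ 0.8 a_R
Md_Mp   = 4.47e-3  # disk mass / planet mass (RUN 1 initial value)
Ms_Mp   = 1e-3     # satellite mass / planet mass

a_s = np.linspace(1.02, 1.6, 200)   # a_s in units of a_R
da  = a_s - 1.0                     # Delta a_s / a_R

# One-sided Lindblad torque (spiral arms):
dadt_res  = (16/(27*np.pi)) * Md_Mp * Ms_Mp * da**-3
# Self-gravity-wake (viscous angular-momentum flux) torque:
dadt_self = (51*gamma_t/np.pi**2) * Md_Mp**3 / Ms_Mp * a_s**0.5

# Both in units of a_R * Omega_R.  Crossover distance from the edge:
i = np.argmin(np.abs(dadt_res - dadt_self))
print(f"torques cross at Delta a_s ~ {da[i]:.2f} a_R")   # ~0.1 a_R
\end{verbatim}

With these masses the Lindblad term formally dominates only within roughly a tenth of $a_{\rm R}$ from the edge, which anticipates the comparison discussed next.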
In the vicinity of the disk edge, the predicted $(da_{\rm s}/dt)_{\rm res}$ exceeds $(da_{\rm s}/dt)_{\rm self}$. The theoretical prediction for $(da_{\rm s}/dt)_{\rm res}$, however, assumes a non-self-gravitating disk with modest viscosity; the spiral arms may be weakened by the relatively strong diffusion due to the self-gravity wakes. Because the self-gravity wake torque is independent of the satellite's location, it dominates over the Lindblad torque, which is very sensitive to the distance from the disk outer edge. In these runs, we adopted $M_{\rm s} \sim 10^{-3}M_{\rm p}$, while the masses of the actual mid-sized moons are $M_{\rm s} \sim (10^{-7}$--$10^{-6})M_{\rm p}$. \citet{Hyodo2015} showed through N-body simulations that $M_{\rm s}/M_{\rm p} \sim 10 (M_{\rm disk}(0)/M_{\rm p})^2$ for $M_{\rm s} \sim 10^{-3}M_{\rm p}$. Although \citet{Crida2012} proposed $M_{\rm s}/M_{\rm p} \propto (M_{\rm disk}(0)/M_{\rm p})^3$ for smaller values of $M_{\rm s}/M_{\rm p}$, generated clumps would quickly coagulate with each other. Here we use \citet{Hyodo2015}'s relation to discuss the cases of $M_{\rm s} \sim (10^{-7}$--$10^{-6})M_{\rm p}$. As will be shown in Section 4, the high migration rate induced by the self-gravity wake torque, which has been overlooked in past studies, would play an important role in the avoidance of mean-motion resonant capture between adjacent satellites. Figure \ref{e_evolve} shows the eccentricity evolution of the satellite in the individual runs. The eccentricity is excited only in the early phase, $t \la (10^{4}$--$10^{5})\, T_{\rm Kep}$, when the Lindblad torque may not be negligible compared to the self-gravity wake torque. As we discuss in Section 4, the eccentricity excited in the early phase ($e \sim 0.01$) is marginal for the condition to avoid mean-motion resonance trapping. \begin{figure}[ht] \centering \includegraphics[width=100mm,bb=0 0 300 200]{e_evolve_tot_fig.pdf} \caption{Eccentricity evolution of the massive satellite during the migration in each run of our simulations. } \label{e_evolve} \end{figure} \section{Resonance capture probability} The probability of mean-motion resonance capture depends on the eccentricity and migration rate of the satellites \citep{Dermott1988,Malhotra1993}. Here, we consider the avoidance of the Tethys-Dione 3:2 resonance trapping as an example. Dione's orbit currently lies beyond that of Tethys, with a separation inside their 3:2 mean-motion resonance. Because the tidal migration rate is a strong function of orbital radius while the mass difference between Dione and Tethys is within a factor of 2 ($M_{\rm Dione}/M_{\rm Saturn}\simeq 1.94 \times 10^{-6}, M_{\rm Tethys}/M_{\rm Saturn}\simeq 1.09 \times 10^{-6}$), their tidal migrations are convergent unless the planetary tidal parameter $Q_{\rm p}$ is much lower (much more dissipative) for Dione. To avoid the trapping into their 3:2 mean-motion resonance, a large enough eccentricity and/or a fast enough convergence of their orbits is required. The critical eccentricity, beyond which the $j+1:j$ resonance trapping is inhibited, is given by \citep{Malhotra1996} \begin{eqnarray} e_{\rm {crit}} \simeq 0.01\left[ \frac{j/(j+1)^2}{0.2}\right]^{1/3}\left( \frac{M_{\rm s}/M_{\rm p}}{10^{-6}}\right)^{1/3}. \end{eqnarray} \citet{Nakajima2019} pointed out the possibility of avoiding the Tethys-Dione 3:2 resonance trapping by Enceladus' eccentricity excitation.
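The quantities entering this discussion are easily evaluated numerically; in the sketch below, the trapping-timescale criterion is the one quoted in the following paragraph, and its numerical value is sensitive to the adopted convention for $f(\alpha)$, so it should be read as an order-of-magnitude check:

\begin{verbatim}
import math

# Critical eccentricity (formula above) for the Tethys-Dione 3:2 pair (j=2):
j, Ms_Mp = 2, 1e-6
e_crit = 0.01 * ((j/(j+1)**2)/0.2)**(1/3) * (Ms_Mp/1e-6)**(1/3)
print(f"e_crit ~ {e_crit:.3f}")                      # ~0.01

# Critical migration timescale (criterion quoted in the next paragraph):
alpha    = 0.763
f_alpha  = -1.55/alpha
tau_crit = (3.0/(1024*j*alpha**4*f_alpha**4))**(1/3) * Ms_Mp**(-4/3)

# Self-gravity-wake timescale from Eq. (dadt_self2), gamma-tilde ~ 1,
# a_s ~ a_R, for a Tethys-mass satellite and M_disk ~ 3e-4 M_p:
Md_Mp    = 3e-4
tau_self = (math.pi/102) * Md_Mp**-3 * Ms_Mp
print(f"tau_crit ~ {tau_crit:.1e} T_Kep, tau_self ~ {tau_self:.1e} T_Kep")
print(f"ratio ~ {tau_self/tau_crit:.1e}")            # << 1: trapping avoided
\end{verbatim}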
If Enceladus' eccentricity is excited enough by the Lindblad torque from the disk, Tethys' eccentricity can also be excited to $\ga e_{\rm crit}$ by the secular perturbations from Enceladus. The other possibility is fast orbital migration with a timescale shorter than the resonant libration timescale. As we have shown, the migration is significantly faster if we consider the satellite-disk interactions. According to \citet{Ogihara2013}, the resonance trapping is avoided for the Tethys-Dione 3:2 resonance ($j=2$) if \begin{eqnarray} \tau_{a} \equiv \frac{a_{\rm s}}{\dot{a}_{\rm s}} < \tau_{a,{\rm crit}} = \left(\frac{3}{1024j\alpha^4 f(\alpha)^4} \right)^{1/3} \left( \frac{M_{\rm s}}{M_{\rm p}}\right)^{-4/3} T_{\rm Kep}, \end{eqnarray} where $a_{\rm s}$ is the semimajor axis of the inner satellite (Tethys), $\alpha$ is the semimajor axis ratio ($\simeq 0.763$), and $f(\alpha) \sim -1.55/\alpha$ \citep[see][Table 8.5]{MurrayDermott1999}. With $M_{\rm s}/M_{\rm p} \sim 10^{-6}$, $\tau_{a,{\rm crit}} \sim 8.0 \times 10^6 T_{\rm Kep}$. If we consider a Tethys-mass satellite ($M_{\rm s} \sim 10^{-6} M_{\rm p} $), the disk mass may be $M_{\rm disk} \sim 3 \times 10^{-4} M_{\rm p}$ \citep{Hyodo2015}. The orbital migration timescale due to the self-gravity wakes is estimated to be $\tau_{a,{\rm self}}\sim 1.2 \times10^{3}(M_{\rm disk}/3\times10^{-4}M_{\rm p})^{-3}T_{\rm Kep}$ (Eq.~(\ref{dadt_self2})), which is shorter than $\tau_{a,{\rm crit}}$ by more than three orders of magnitude. Although the migration by the self-gravity wake torque could become weaker along with the decrease of $M_{\rm disk}$, the fast migration potentially prevents the resonance capture of the Dione-Tethys pair. \section{Conclusions} In order to investigate the gravitational interactions between Saturn's mid-sized moons and a hypothetical ancient massive ring system and the associated orbital evolution of the moons, we have performed global high-resolution N-body simulations ($N \sim10^5$) of a self-gravitating particle disk interacting with a single satellite, taking account of the gravitational forces among all the disk particles and the satellite and the inelastic collisions between the particles. Our simulations show that the dense short-wavelength wake structure and $m=2$ or 3 global spiral arms simultaneously develop in the disk. The former and the latter are produced by the disk self-gravity and the Lindblad torque from the satellite, respectively. These structures transfer the angular momentum of the disk to the satellite and regulate the early phase of the orbital evolution of the satellite. We found that the orbital migrations of the satellites are determined by the self-gravity wakes. Past studies assumed that the Lindblad torque regulates the migrations, because they considered the self-gravity wakes only as a source of the disk diffusion. In this paper, we focused on investigating the detailed dynamics of the gravitational interactions between a circumplanetary particle disk and a satellite, and derived semi-analytical formulas for the satellite's migration rate. While the simulations used a much more massive satellite than the current Saturn's mid-sized moons due to computational limitations, we extrapolated the formulas to realistic satellite masses to find that the migration is fast enough to avoid the resonance capture of adjacent moons on the way to the current orbital configuration of Saturn's mid-sized moons.
To confirm this conclusion, simulations with much higher resolution and with multiple satellites are needed, which is left for future study. \begin{acknowledgements} We thank Takaaki Takeda for helpful comments. This research was supported by JSPS Grants-in-Aid for Scientific Research (\# JP19J12542) and MEXT ``Exploratory Challenge on Post-K computer'' hp190143. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Materials lacking inversion symmetry may display useful properties such as piezoelectricity and ferroelectricity, which have wide applications in modern industries. In particular, ferroelectric materials are not only prototypical systems for studying spontaneous symmetry breaking and structural phase transitions, but also key components for non-volatile memory devices, piezoelectric sensors, photocatalysis, and many other technologically important applications~\cite{Li2018,Scott1989,fang2018}. Driven by the need for further miniaturization of electronic devices, researchers have devoted significant efforts to reducing the thickness of thin-film ferroelectrics~\cite{setter2006,Park2015,Boscke2011}. Despite the depolarization field~\cite{Junquera2003,mehta1973,Wurfel1973}, which usually inhibits the electric polarization of thin-film ferroelectrics, a few groups have demonstrated that ferroelectricity is sustained in bulk ferroelectrics at thicknesses down to $\sim$ 1 nm\cite{Fong2004,Lee2019}. The recent discovery of ferroelectricity in monolayer or few-layer Van der Waals (vdW) materials offers new opportunities for shrinking the size of ferroelectric devices to the atomically thin regime.~\cite{Fei2018,Liu2016,Chang2016,Zhou2017,Sharma2019} Compared to conventional bulk ferroelectrics, a key advantage of two-dimensional vdW materials is the absence of dangling bonds on the surface. First-principles calculations also show that a large number of two-dimensional (2D) materials are piezoelectric.~\cite{Dong2017,blonsky2015,duerloo2012} Remarkably, some of them\cite{Fei2016,Dong2017} even demonstrate giant piezoelectric effects, which can be more than two orders of magnitude stronger than in bulk piezoelectric materials. Currently, our understanding of the fundamental physical properties of 2D piezoelectric and ferroelectric systems is at an early stage, and the lack of a robust and economical fabrication process for high-quality 2D ferroelectric samples hinders mass production and applications~\cite{Cui2018}. Among the 2D ferroelectric materials predicted with first-principles theories~\cite{Wu2016, Mehrshad2016, Fei2016,gao2019,Ding2017,anand2017,Lin201908}, only a few, such as monolayer SnS~\cite{Higashitarumizu2020}, SnSe~\cite{chang2020}, SnTe~\cite{Chang2016} and In$_2$Se$_3$~\cite{Zhou2017}, have so far been synthesized and confirmed to be ferroelectric. First-principles prediction of piezoelectricity or switchable electric polarization in readily fabricated 2D materials is therefore important for enriching the toolbox of 2D non-centrosymmetric materials with technological interest. Through first-principles calculations, we show ample evidence that three monolayer arsenic chalcogenides (As$_2$X$_3$) with the Pmn2$_1$ space group exhibit spontaneous and reversible in-plane polarization. Among these three materials, the Pmn2$_1$ As$_2$S$_3$, i.e., monolayer orpiment, has recently been isolated through mechanical exfoliation~\cite{siskins2019}. Moreover, we predict the existence of several novel metastable polymorphs of As$_2$S$_3$. Both the ferroelectric monolayer orpiment and these new polymorphs can be related to the soft zone-center modes of a hypothetical high-symmetry phase. Remarkably, our calculations show that some of these polymorphs have large piezoelectric coefficients comparable to those of group IV-VI compounds~\cite{Fei2016}.
\section{Results and Discussion} \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{Figure1-As2S3_struc-new.pdf} \caption{The crystal structures of (a) monolayer orpiment and (b) monolayer anorpiment, plotted with VESTA\cite{Momma2011}. The red arrows show the direction of polarization.} \label{fig:1} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=0.75\textwidth]{Figure2.pdf} \caption{(a) (left) The high-symmetry structure with space group Pmmn and (right) the change of the total energy under collective atomic displacements of the Pmmn structure with the frozen-in soft phonon modes A$_u$, B$_{1g}$, B$_{3g}$, and B$_{2u}$. The displacement vector $\Delta \mathbf{R}_p$ is proportional to the polarization vector of phonon $p$: $\Delta \mathbf{R}_p = Q\cdot \mathbf{u}_p$. (b) Schematic plot of the transformation from the Pmmn structure to monolayer orpiment through the B$_{2u}$ soft mode. The red arrows (not to scale) show the directions of motion of the corresponding atoms. (c) Unit cells of the metastable P2$_1$/m phase and P2/c phase. (d) Schematic plot of the stabilization of the P2$_1$2$_1$2 phase by doubling the unit cell due to soft modes at the $X$ point.} \label{fig:2} \end{figure*} Under ambient conditions, bulk As$_2$S$_3$ can be either amorphous or crystalline. Bulk orpiment and anorpiment, which were found in natural minerals~\cite{kampf2011}, are two common crystalline As$_2$S$_3$ phases with noncentrosymmetric layered structures bound by vdW interactions. To date, two-dimensional anorpiment has not been synthesized, while monolayer and few-layer orpiment have been successfully exfoliated and demonstrate better chemical stability than phosphorene under low-light conditions~\cite{siskins2019}. Our calculated total energy of monolayer orpiment is lower than that of monolayer anorpiment by 73~meV/formula unit (f.u.), suggesting better stability of monolayer orpiment compared to monolayer anorpiment. A finite bandgap is required for sustaining the ferroelectricity of 2D materials with in-plane polarization. Monolayer orpiment and anorpiment have indirect bandgaps of around 2.2 eV calculated with the Perdew-Burke-Ernzerhof (PBE) functional~\cite{PBE}. The band structures are presented in the Supporting Information. The crystal structures of monolayer arsenic chalcogenides do not resemble those of other well-known 2D materials. As shown in Fig.~\ref{fig:1}(a), monolayer orpiment is highly anisotropic and consists of rings connected by six corner-sharing AsS$_3$ units, which have a pyramidal shape. Monolayer orpiment has Pmn2$_1$ symmetry, which includes a mirror reflection with respect to the $xz$-plane, but no symmetry with respect to the $yz$-plane, as illustrated in Fig.~\ref{fig:1}(a). Such symmetry properties allow a spontaneous electric polarization along the $x$-axis. In comparison, monolayer anorpiment has a more irregular structure and an electric polarization pointing in the $y$-direction, as illustrated in Fig.~\ref{fig:1}(b). Bulk As$_2$Se$_3$ can be found in the mineral laphamite with a structure similar to that of orpiment~\cite{stergiou1985}, while bulk As$_2$Te$_3$ with the orpiment-like structure is yet to be found. Our calculations show that monolayer As$_2$Se$_3$ and As$_2$Te$_3$ with the orpiment-like structure (Pmn2$_1$ symmetry) are dynamically stable, while those with the anorpiment-like structure (Pc space group) demonstrate dynamical instability with imaginary phonon frequencies.
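The statement that the Pmn2$_1$ structure can only be polar along the $x$-axis follows from a group average: any spontaneous polarization must be invariant under all point-group operations. A minimal sketch for the mm2 (C$_{2v}$) point group, with the twofold axis taken along $\hat{x}$ to match the axis convention of Fig.~\ref{fig:1}(a):

\begin{verbatim}
import numpy as np

# Point group mm2 (C2v) with the 2-fold axis along x and mirror planes
# xz and xy.  A spontaneous polarization must survive the group average
# P -> (1/|G|) sum_R R @ P.
E    = np.eye(3)
C2x  = np.diag([ 1, -1, -1])
m_xz = np.diag([ 1, -1,  1])   # reflection through the xz plane (y -> -y)
m_xy = np.diag([ 1,  1, -1])   # reflection through the xy plane (z -> -z)
group = [E, C2x, m_xz, m_xy]

P = np.array([0.7, 0.3, 0.5])              # arbitrary trial vector
P_sym = sum(R @ P for R in group) / len(group)
print(P_sym)                               # -> [0.7, 0., 0.]: only P_x survives
\end{verbatim}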
Using first-principles methods based on modern polarization theory~\cite{resta1994,king1993}, we calculate that monolayer orpiment has a spontaneous electric polarization of 71 pC/m. According to the experimental structure of bulk orpiment\cite{mullen1972,kampf2011}, the electric polarization in neighboring layers aligns in an antiferroelectric order. Therefore, bulk orpiment shows no macroscopic polarization, and the net polarization of few-layer orpiment shows an odd-even effect: only samples with odd numbers of layers show a net electric polarization. \begin{figure*}[htb] \centering \includegraphics[width=0.85\textwidth]{Figure3-transitionpath-addlabels-Ebarrier.pdf} \caption{(a). (top) Selected intermediate states on the transition path of inverting the polarization direction of monolayer orpiment. Red arrows show the main movement of As ions between consecutive intermediate structures. The dashed curves are used to represent the atomic displacements that cross the unit-cell boundary. (bottom) The evolution of the total energies of intermediate structures along the transition path. (b). A comparison between the theoretical energy barriers $E_{barrier}$ of the polarization-reversing process of arsenic chalcogenides and other ferroelectrics. The values from previous work are all calculated with density functional theory. } \label{fig:3} \end{figure*} As a classical example of displacive transitions, the ferroelectric phase transition of perovskite oxides like PbTiO$_3$ is explained by a zone-center vibrational mode whose frequency vanishes at the phase transition. Similarly, we propose that the ferroelectricity of monolayer orpiment is also driven by a soft mode of a high-symmetry structure with space group Pmmn. The unit cell of the Pmmn structure is shown schematically in Fig.~\ref{fig:2}(a). In contrast to monolayer orpiment, which only has mirror symmetry with respect to the $xz$-plane, the Pmmn structure of As$_2$S$_3$ has an additional mirror symmetry with respect to the $yz$-plane. This high-symmetry Pmmn structure is dynamically unstable, with five soft optical phonon modes at the $\Gamma$ point. To quantify the contributions of a zone-center soft phonon mode to the structural transition from the high-symmetry Pmmn structure to the Pmn2$_1$ phase, we calculate the projection of the atomic displacement vector $\Delta \mathbf{R} = \mathbf{R}_{Pmmn}-\mathbf{R}_{Pmn2_1}$ on the soft phonon modes: \begin{equation*} \eta[p] = \frac{\Delta \mathbf{R}}{ |\Delta \mathbf{R}| } \cdot \mathbf{u}_p \end{equation*} where $\mathbf{u}_p$ is the normalized polarization vector of a zone-center phonon $p$. We find $\eta[B_{2u}]=86\%$, while the other four soft zone-center modes each contribute less than one percent to $\Delta \mathbf{R}$. This is expected, since the B$_{2u}$ mode is the only one that breaks the inversion symmetry and also has the deepest double-well potential curve among all zone-center soft modes. Therefore, the B$_{2u}$ mode is the dominant phonon mode driving the structural transition from the unstable Pmmn structure to the Pmn2$_1$ phase of As$_2$S$_3$. As shown in Fig.~\ref{fig:2}(b), the main effect of the B$_{2u}$ optical mode is to shift the As atoms along the $x$-axis and break the reflection symmetry with respect to the $yz$-plane. The relative shifts between different S atoms are small compared to the displacements of the As atoms in the B$_{2u}$ mode.
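The projection $\eta[p]$ defined above is a plain inner product and can be evaluated as in the sketch below; the displacement and eigenvector arrays here are toy placeholders (in practice $\Delta\mathbf{R}$ comes from the two relaxed structures and $\mathbf{u}_p$ from a phonon calculation, and if the eigenvectors are mass-weighted, $\Delta\mathbf{R}$ must be mass-weighted consistently before projecting):

\begin{verbatim}
import numpy as np

def mode_projection(dR, u_p):
    """eta[p] = (dR/|dR|) . u_p, as defined in the text.

    dR  : flattened 3N displacement vector between the two structures
    u_p : normalized polarization (eigen)vector of zone-center phonon p
    """
    dR  = np.asarray(dR, dtype=float).ravel()
    u_p = np.asarray(u_p, dtype=float).ravel()
    return np.dot(dR / np.linalg.norm(dR), u_p)

# Toy 2-atom (6-component) example with placeholder vectors:
dR  = np.array([0.9, 0.0, 0.0, -0.1, 0.0, 0.0])
u_p = np.array([1.0, 0.0, 0.0,  0.0, 0.0, 0.0])   # already normalized
print(f"eta = {mode_projection(dR, u_p):.2%}")
\end{verbatim}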
We also examine the roles of the other four soft zone-center modes, namely the B$_{3g}$, B$_{1g}$, B$_{3u}$, and A$_u$ modes, by collectively displacing the atomic coordinates of the high-symmetry Pmmn structure by $\Delta \mathbf{R}_p = Q\cdot \mathbf{u}_p$, where $\mathbf{u}_p$ is the normalized polarization vector of phonon mode $p$. As shown in Fig.~\ref{fig:2}(a), one can easily identify structures that correspond to the local minima (shown as small spheres) on the double-well curve of total energy versus the generalized coordinate $Q$. Further relaxing the local-minimum structures may lead to new metastable phases of As$_2$S$_3$. Since the structural relaxation moves the local minimum that corresponds to the B$_{3u}$ mode back to monolayer orpiment, we will not discuss it further. Interestingly, the B$_{3g}$ and B$_{1g}$ modes transform the high-symmetry Pmmn structure into metastable P2/c and P2$_1$/m phases, respectively. As shown in Fig.~\ref{fig:2}(c), both the P2/c and P2$_1$/m phases show unusual one-dimensional chain structures consisting of interconnected AsS$_3$ pyramidal units. These two phases show zero macroscopic polarization since the dipole moments of neighboring AsS$_3$ pyramidal units point in opposite directions and thus cancel each other. A more complicated case is the soft A$_u$ mode. Relaxing the local-minimum structure corresponding to the A$_u$ mode leads to a dynamically unstable P2$_1$2$_1$2 structure without a net electric polarization. Such an unstable structure has doubly-degenerate soft phonon modes at the $X$ point, which can stabilize the structure by doubling the unit cell along the $x$-axis. The final stable structure we find has the P2$_1$ space group symmetry and a rectangular unit cell with 20 atoms, as shown in Fig.~\ref{fig:2}(d). The P2$_1$ phase has a noncentrosymmetric structure with a spontaneous polarization of 20 pC/m pointing in the $y$-direction. We confirmed the stability of the P2/c, P2$_1$/m, and P2$_1$ phases from their phonon spectra calculated with density functional perturbation theory~\cite{dfpt} and from finite-temperature molecular dynamics~\cite{berendsen1984} trajectories. These results are presented in the Supporting Information. The total energy of the P2$_1$ phase of As$_2$S$_3$ is 65 meV/f.u. lower than that of monolayer anorpiment, and only 7 meV/f.u. higher than that of monolayer orpiment. Even though the P2/c and P2$_1$/m phases are shown to be metastable, they have total energies about 300 meV/f.u. higher than that of monolayer orpiment, because they are composed of one-dimensional chain-like structures bound by weak vdW forces. We note that only the zone-center soft modes of the Pmmn structures are studied in this work; it would also be interesting to study the finite-momentum soft modes, which appear in the phonon spectrum of the Pmmn structure as well and may lead to other interesting polymorphs. We perform similar analyses on the soft modes of the high-symmetry Pmmn structures of As$_2$Se$_3$ and As$_2$Te$_3$. Like As$_2$S$_3$, they both have a B$_{2u}$ mode driving the displacive transition to the corresponding Pmn2$_1$ phase. As$_2$Se$_3$ also has a metastable phase with the P2$_1$ space group. In Table~\ref{tab:phase}, we list the space groups and the electric polarization of all stable polymorphs studied in this work. We find that $P$ of the Pmn2$_1$ phases decreases as the chalcogen element changes from sulfur to tellurium. A similar trend also appears in IV-VI monolayers\cite{Fei2016}. We explain this qualitatively with two arguments.
First, the electric polarization $P$ is positively correlated with the electronegativity difference between As and the chalcogen element. As the chalcogen changes from S to Te, the reduced electronegativity difference results in a diminished polarization. Second, since the distortion amplitude $|Q_{min}|$ at the minima of the double-well potential of the B$_{2u}$ mode decreases as the chalcogen element changes from S to Te, the dipole moment, which is proportional to $|Q_{min}|$, also decreases. \begin{table}[htb] \setlength\tabcolsep{0.02\textwidth} \caption{Summary of the space group and electric polarization $P$ of different arsenic chalcogenides phases.} \label{tab:phase} \begin{tabular}{@{}ccc@{}} \toprule Formula & Space group & \begin{tabular}[c]{@{}c@{}}$P$ (pC/m) \\ (The direction of P)\end{tabular} \\ \midrule \multirow{5}{*}{As$_2$S$_3$} & \begin{tabular}[c]{@{}c@{}}Pmn2$_1$\\ (Monolayer \\ orpiment)\end{tabular} & 71 (x) \\ & \begin{tabular}[c]{@{}c@{}}Pc\\ (Monolayer \\ anorpiment)\end{tabular} & 47 (y) \\ & P2/c & 0 \\ & P2$_1$/m & 0 \\ & P2$_1$ & 20 (y) \\ \midrule \multirow{2}{*}{As$_2$Se$_3$} & Pmn2$_1$ & 54 (x) \\ & P2$_1$ & 18 (y) \\ \midrule As$_2$Te$_3$ & Pmn2$_1$ & 45 (x) \\ \bottomrule \end{tabular} \end{table} The reversibility of the electric polarization is a necessary condition for ferroelectrics and is also important for applications in data storage. Using the nudged-elastic-band method~\cite{neb}, we find a minimal-energy-barrier transition path for reversing the electric polarization of an infinitely large monolayer orpiment. We show the important intermediate structures and the corresponding energies on the transition path of As$_2$S$_3$ in Fig.~\ref{fig:3}(a). In the polarization-reversing process, three important intermediate structures, labeled $A, B,$ and $A^\prime$, are found. $A$ and $A^\prime$ are related by a 180$^\circ$ rotation around the $z$-axis. $B$ is structurally akin to the unstable P2$_1$2$_1$2 structure. The initial and final structures on the transition path correspond to monolayer orpiment with electric polarization pointing in opposite directions. The process of reversing the electric polarization goes in the sequence $\{\text{initial}\rightarrow A \rightarrow B \rightarrow A^\prime \rightarrow \text{final} \}$, which consists of four major steps. Each step mainly involves shifting a single As atom along the $x$-axis. For example, in the first step $\{\text{initial}\rightarrow A\}$, the major structural change is the displacement of As1 (i.e., the arsenic atom at the bottom of the unit cell shown in Fig.~\ref{fig:3}(a)) along the negative $x$-direction. Similarly, in the second step $\{A\rightarrow B\}$, we observe the movement of As2 along the negative $x$-direction across the boundary of the unit cell. We emphasize that the switching process presented here may not correspond to the global minimum barrier, since changes of the lattice vectors are not considered in our nudged-elastic-band calculations and the electric dipoles can never be switched simultaneously in real situations. Previous work also shows that including the variation of the lattice vectors can further lower the energy barrier\cite{Salvador2018,Mehrshad2016}. Nevertheless, our theoretical energy barrier $E_{barrier}$ provides an upper bound for the activation energy of the real polarization-reversing process. With a similar approach, the energy profiles for reversing the electric polarization of As$_2$Se$_3$ and As$_2$Te$_3$ are calculated and shown in the Supporting Information.
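The nudged-elastic-band idea used here can be illustrated on a model energy surface. The following self-contained sketch (plain NEB, no climbing image; the double-well potential is purely illustrative and unrelated to the As$_2$S$_3$ energy landscape) relaxes a band of images between two minima and recovers the saddle-point barrier:

\begin{verbatim}
import numpy as np

# Minimal NEB on a 2D model double well: E(x,y) = (x^2-1)^2 + y^2.
# The two minima (+-1, 0) stand in for the two polarization states;
# the highest converged image approximates the barrier (E=1 at (0,0)).
def energy(p):
    x, y = p
    return (x*x - 1.0)**2 + y*y

def grad(p):
    x, y = p
    return np.array([4.0*x*(x*x - 1.0), 2.0*y])

n_img, k_spr, step = 11, 1.0, 0.02
band = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)  # endpoints at the minima
band[1:-1, 1] += 0.5        # push interior images off the straight path

for _ in range(5000):
    for i in range(1, n_img - 1):
        tau = band[i+1] - band[i-1]
        tau /= np.linalg.norm(tau)
        g_perp = grad(band[i]) - np.dot(grad(band[i]), tau)*tau
        f_spr = k_spr*(np.linalg.norm(band[i+1] - band[i])
                       - np.linalg.norm(band[i] - band[i-1]))*tau
        band[i] += step*(f_spr - g_perp)

barrier = max(energy(p) for p in band) - energy(band[0])
print(f"NEB barrier: {barrier:.3f}  (exact saddle height: 1.0)")
\end{verbatim}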
In Fig.~\ref{fig:3}(b), we compare the calculated energy barriers E$_{barrier}$ of As$_2$X$_3$ with those of other ferroelectrics, which are either studied experimentally\cite{Sharma2019,Yuan2019,Huan2014,Sang2015,Higashitarumizu2020,Wang_2017,Cui2018_b,Zhou2017,Ding2017,Beckman2009,Ye2016,Brehm2020} or solely predicted in theory\cite{anand2017,Wang_2017,Lin201908,Ai2019,Xu2017,Jia2019}. Except for FeTiO$_3$, LiNbO$_3$, AgBiP$_2$Se$_6$, and PbTiO$_3$, the E$_{barrier}$ in Fig.~\ref{fig:3} are calculated with the nudged-elastic-band method. The range of calculated $E_{barrier}$ covers more than two orders of magnitude, from 0.6 meV to 116 meV. The $E_{barrier}$ of As$_2$X$_3$ are notably smaller than those of two room-temperature ferroelectrics, namely $T_d$-WTe$_2$\cite{Sharma2019} and monolayer d1T-MoTe$_2$\cite{Yuan2019}, and of a few predicted ferroelectrics such as GeS\cite{Wang_2017} and Sc$_2$CO$_2$\cite{Mehrshad2016}, but much larger than those of CuInP$_2$S$_6$\cite{Brehm2020}, In$_2$Se$_3$\cite{Ding2017}, SnS\cite{Wang_2017}, and so on. Such comparisons suggest that the energy barriers for switching the polarization direction of Pmn2$_1$ As$_2$X$_3$ are within a suitable range. \begin{figure}[htb] \centering \includegraphics[width=0.47\textwidth]{Figure4-new.pdf} \caption{(top panel) Selected structures along the path of moving the 180$^\circ$ domain wall in As$_2$S$_3$. The shaded regions highlight the domain boundaries. (bottom panel) Total energies along the path of moving the domain walls of As$_2$S$_3$, As$_2$Se$_3$, and As$_2$Te$_3$.} \label{fig:4} \end{figure} In practical situations, domain-wall motion and domain growth mediate the process of reversing the electric polarization of ferroelectrics. The formation energies of domain walls and the energy barriers for moving domain walls indicate how difficult it is to form and grow a domain, respectively. We studied the atomistic structure of a few 180$^\circ$ domain walls parallel to the $x$-axis in As$_2$X$_3$. For example, the structure of the 180$^\circ$ domain wall in As$_2$S$_3$ is shown schematically in the top panel of Fig.~\ref{fig:4}. Our calculations indicate the energy costs of forming and moving the 180$^\circ$ domain wall along the $x$-axis are reasonable compared to other ferroelectrics. In detail, the calculated domain-wall formation energies $E^{dw}_{form}$ are 89, 105, and 124 meV/f.u. (i.e., 43, 50, and 58 meV/\AA) for monolayer As$_2$S$_3$, As$_2$Se$_3$, and As$_2$Te$_3$, respectively. Previous calculations show that the $E^{dw}_{form}$ of group IV-VI materials ranges from 8 meV/$\text{\AA}$ to 116 meV/\AA~\cite{Wang_2017}, a range which covers those of the 2D As$_2$X$_3$ ferroelectrics. The $E^{dw}_{form}$ of In$_2$Se$_3$ is 220 meV/f.u.~\cite{Ding2017}, comparable to those of As$_2$X$_3$. Assuming the thickness of monolayer As$_2$X$_3$ is 6.0 \AA, we convert the $E^{dw}_{form}$ of As$_2$S$_3$, As$_2$Se$_3$, and As$_2$Te$_3$ to 115, 133, and 155 mJ/m$^2$, which are of the same order as those of some ferroelectric oxides, such as PbTiO$_3$ (132 mJ/m$^2$ for the 180$^\circ$ domain wall and 35.2 mJ/m$^2$ for the 90$^\circ$ domain wall)~\cite{meyer2002} and BiFeO$_3$ (205 to 1811 mJ/m$^2$)~\cite{lubk2009}, but much higher than that of BaTiO$_3$ (7.5 mJ/m$^2$)~\cite{meyer2002}. Using the nudged-elastic-band method, we calculate the energy barriers $E^{dw}_{barrier}$ for moving the 180$^\circ$ domain walls to be 233 meV/f.u., 128 meV/f.u., and 35 meV/f.u.
(i.e., 113 meV/\AA, 54 meV/\AA, and 16 meV/\AA) for monolayer As$_2$S$_3$, As$_2$Se$_3$, and As$_2$Te$_3$, respectively, as shown in the bottom panel of Fig.~\ref{fig:4}. This suggests that the 180$^\circ$ domain wall of As$_2$X$_3$ becomes easier to shift as the chalcogen element X changes from sulfur to tellurium. Compared to bulk ferroelectrics, the $E^{dw}_{barrier}$ of monolayer As$_2$Te$_3$ and As$_2$Se$_3$ are of the same order of magnitude as those of bulk corundum derivatives, which range from 14 meV/f.u. to 197 meV/f.u.~\cite{Meng2017}. Compared to other two-dimensional ferroelectrics, the $E^{dw}_{barrier}$ of monolayer As$_2$S$_3$ is more than an order of magnitude higher than those of group IV-VI two-dimensional ferroelectrics (less than 1.6 meV/\AA)~\cite{Wang_2017}, but of a similar order to that of monolayer In$_2$Se$_3$, which ranges from 280 meV/f.u. to 400 meV/f.u.~\cite{Ding2017}. These comparisons suggest that the energy costs for forming and moving the 180$^\circ$ domain wall of As$_2$X$_3$ are reasonable. \begin{table*}[htb] \setlength\tabcolsep{0.018\textwidth} \caption{Elasticity tensor elements (N/m) and piezoelectric coefficients ($10^{-10}$ C/m for $e_{ij}$ and pm/V for $d_{ij}$). } \label{tab:piezo} \begin{tabular}{@{}ccccccccc@{}} \toprule Space group (Point group) & Formula & $C_{11}$ & $C_{22}$ & $C_{12}$ & $e_{11}$ & $e_{12}$ & $d_{11}$ & $d_{12}$ \\ \midrule \multirow{3}{*}{Pmn2$_1$ (C$_{2v}$)} & As$_2$S$_3$ & 11.07 & 43.38 & 10.40 & 4.36 & -1.75 & 55.7 & -17.4 \\ & As$_2$Se$_3$ & 13.86 & 41.76 & 10.03 & 6.71 & -1.49 & 61.7 & -18.4 \\ & As$_2$Te$_3$ & 18.09 & 34.65 & 9.72 & 9.09 & -1.48 & 61.9 & -21.6 \\ \midrule & & $C_{11}$ & $C_{22}$ & $C_{12}$ & $e_{21}$ & $e_{22}$ & $d_{21}$ & $d_{22}$ \\ \midrule \multirow{2}{*}{P2$_1$ (C$_2$)} & As$_2$S$_3$ & 18.63 & 16.29 & 3.42 & -0.85 & 0.22 & -5.0 & 2.4 \\ & As$_2$Se$_3$ & 21.51 & 23.76 & 2.65 & -0.91 & -0.33 & -4.1 & -0.9 \\ \midrule Pc (C$_s$) & As$_2$S$_3$ & 21.63 & 9.25 & 8.00 & -2.77 & 2.69 & -34.7 & 59.1 \\ \bottomrule \end{tabular} \end{table*} Similar to monolayer group IV-VI compounds and black phosphorene, the monolayer arsenic chalcogenides studied in this work are highly flexible. We calculate Young's modulus of monolayer orpiment to be 8.6~N/m along the $x$-axis and 33.6~N/m along the $y$-axis, which is more than one order of magnitude smaller than that of graphene (345~N/m)\cite{kudin2001,deji2017} and also significantly smaller than that of black phosphorene ($21\sim56$ N/m)\cite{Jiang2014_b,wei2014}. To our knowledge, monolayer orpiment is among the softest 2D materials ever fabricated. Such remarkable structural flexibility motivates us to investigate the piezoelectricity of arsenic chalcogenides. We summarize the calculated elasticity tensor $C_{ij}$ and piezoelectric tensor elements $e_{ij}$ and $d_{ij}$ in Table~\ref{tab:piezo}. More details of the calculation of these tensor elements are presented in the Supporting Information. Notably, the piezoelectric strain coefficients $d_{ij}$ of the Pmn2$_1$ and Pc phases are an order of magnitude larger than those of common two-dimensional polar materials such as 2H-MoSe$_2$ ($d_{11}=3.73$ pm/V)~\cite{duerloo2012}, 2H-WSe$_2$ ($d_{11}=2.79$ pm/V)~\cite{duerloo2012}, hexagonal group III-V materials ($0.02<d_{11}<5.50$ pm/V)~\cite{blonsky2015}, and the multilayer Janus transition metal chalcogenide MoSTe ($5.7<d_{33}<13.5$ pm/V)~\cite{Dong2017}.
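The tabulated strain coefficients follow from the stress coefficients and the elastic tensor through the standard relation $d = e\,C^{-1}$ (in-plane Voigt components; shear terms decouple by symmetry for these entries). A quick numerical consistency check for Pmn2$_1$ As$_2$S$_3$ reproduces the tabulated $d_{11}$ and $d_{12}$:

\begin{verbatim}
import numpy as np

# Consistency check d = e . C^(-1) for monolayer Pmn2_1 As2S3
# (in-plane components only; units: C in N/m, e in 1e-10 C/m).
C = np.array([[11.07, 10.40],
              [10.40, 43.38]])   # [[C11, C12], [C12, C22]]
e = np.array([4.36, -1.75])      # [e11, e12]

d = e @ np.linalg.inv(C)         # 1e-10 C/N = 1e-10 m/V
print(d * 100)                   # -> [ 55.7, -17.4 ] pm/V, as tabulated
\end{verbatim}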
On the other hand, the piezoelectric stress constants $e_{ij}$ of arsenic chalcogenides are comparable with those of 2H-MoSe$_2$, 2H-WSe$_2$, and so on~\cite{duerloo2012,blonsky2015,Dong2017}. This indicates that the large $d_{ij}$ coefficients of arsenic chalcogenides originate from their superior flexibility, i.e., small elasticity tensor components. Compared to group IV-VI monolayers with giant piezoelectricity, the $d_{ij}$ coefficients of the Pmn2$_1$ and Pc phases are on the same order as that of GeS, but two to four times smaller than those of SnS, SnSe, and GeSe~\cite{Fei2016}. The piezoelectric coefficients of P2$_1$ are much smaller than those of the other phases. Interestingly, P2$_1$ As$_2$Se$_3$ shows a weak negative piezoelectric effect along the $y$-direction. In summary, we employ ab initio methods to predict intrinsic ferroelectricity and strong piezoelectricity in arsenic chalcogenides, which include the recently isolated monolayer orpiment. By analyzing the soft optical modes of the high-symmetry Pmmn structures of arsenic chalcogenides, we find these soft modes can lead to several undiscovered metastable polymorphs. The Pmn2$_1$ ferroelectric phases can be related to the soft B$_{2u}$ phonon mode of a high-symmetry Pmmn structure. We investigate the feasibility of switching the electric polarization in the Pmn2$_1$ phase. The energy barrier for coherently flipping all electric dipoles and that for moving a 180$^\circ$ domain wall in two-dimensional Pmn2$_1$ As$_2$X$_3$ are in a suitable range compared with other ferroelectrics. Moreover, the superior structural flexibility results in large piezoelectric responses in a few polymorphs. Such a unique combination of unusual structures, pliability, strong piezoelectricity, and predicted ferroelectricity makes monolayer arsenic chalcogenides a new platform for studying polar materials. They are also convincing candidates for small-sized, flexible electronic devices. Computation Details: Our first-principles calculations are based on pseudopotential density functional theory implemented in Quantum Espresso~\cite{QE-2017,QE-2009} and PARSEC~\cite{Chelikowsky1994,Kronik2006}. More technical details are presented in the Supporting Information, which cites these references~\cite{PBEsol,Dalcorso2014,ultrasoft,troullier1991,dfpt,berendsen1984,mullen1972,Nye2012}. \acknowledgement W.G. and J.R.C. acknowledge support from a subaward from the Center for Computational Study of Excited-State Phenomena in Energy Materials at the Lawrence Berkeley National Laboratory, which is funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DEAC02-05CH11231, as part of the Computational Materials Sciences Program. Computational resources are provided by the Texas Advanced Computing Center (TACC). \suppinfo The supporting information presents more details on piezoelectric tensors, phonon spectra, molecular dynamics simulation, transition path for reversing electric polarization, and structure parameters of arsenic chalcogenides polymorphs. This material is available free of charge via the internet at http://pubs.acs.org.
\begin{thebibliography}{68}

\bibitem{Li2018} Li,~W.; Ji,~L.-J. Perovskite ferroelectrics go metal free. \emph{Science} \textbf{2018}, \emph{361}, 132--132.

\bibitem{Scott1989} Scott,~J.~F.; Paz~de Araujo,~C.~A. Ferroelectric Memories. \emph{Science} \textbf{1989}, \emph{246}, 1400--1405.

\bibitem{fang2018} Fang,~L.; You,~L.; Liu,~J.-M. \emph{Ferroelectric Materials for Energy Applications}; John Wiley {\&} Sons, Ltd, 2018; Chapter 9, pp 265--309.

\bibitem{setter2006} Setter,~N.; Damjanovic,~D.; Eng,~L.; Fox,~G.; Gevorgian,~S.; Hong,~S.; Kingon,~A.; Kohlstedt,~H.; Park,~N.~Y.; Stephenson,~G.~B.; Stolitchnov,~I.; Taganstev,~A.~K.; Taylor,~D.~V.; Yamada,~T.; Streiffer,~S. Ferroelectric thin films: Review of materials, properties, and applications. \emph{Journal of Applied Physics} \textbf{2006}, \emph{100}, 051606.

\bibitem{Park2015} Park,~M.~H.; Lee,~Y.~H.; Kim,~H.~J.; Kim,~Y.~J.; Moon,~T.; Kim,~K.~D.; M{\"{u}}ller,~J.; Kersch,~A.; Schroeder,~U.; Mikolajick,~T.; Hwang,~C.~S. Ferroelectricity and Antiferroelectricity of Doped Thin HfO$_2$-Based Films. \emph{Advanced Materials} \textbf{2015}, \emph{27}, 1811--1831.

\bibitem{Boscke2011} B{\"{o}}scke,~T.~S.; M{\"{u}}ller,~J.; Br{\"{a}}uhaus,~D.; Schr{\"{o}}der,~U.; B{\"{o}}ttger,~U. Ferroelectricity in hafnium oxide thin films. \emph{Applied Physics Letters} \textbf{2011}, \emph{99}.

\bibitem{Junquera2003} Junquera,~J.; Ghosez,~P. Critical thickness for ferroelectricity in perovskite ultrathin films. \emph{Nature} \textbf{2003}, \emph{422}, 506--509.

\bibitem{mehta1973} Mehta,~R.~R.; Silverman,~B.~D.; Jacobs,~J.~T. Depolarization fields in thin ferroelectric films. \emph{Journal of Applied Physics} \textbf{1973}, \emph{44}, 3379--3385.

\bibitem{Wurfel1973} Wurfel,~P.; Batra,~I.~P.; Jacobs,~J.~T. Polarization Instability in Thin Ferroelectric Films. \emph{Phys. Rev. Lett.} \textbf{1973}, \emph{30}, 1218--1221.

\bibitem{Fong2004} Fong,~D.~D.; Stephenson,~G.~B.; Streiffer,~S.~K.; Eastman,~J.~A.; Auciello,~O.; Fuoss,~P.~H.; Thompson,~C. Ferroelectricity in Ultrathin Perovskite Films. \emph{Science} \textbf{2004}, \emph{304}, 1650--1653.

\bibitem{Lee2019} Lee,~S.~R.; Baasandorj,~L.; Chang,~J.~W.; Hwang,~I.~W.; Kim,~J.~R.; Kim,~J.-G.; Ko,~K.-T.; Shim,~S.~B.; Choi,~M.~W.; You,~M.; Yang,~C.-H.; Kim,~J.; Song,~J. First Observation of Ferroelectricity in $\sim$1 nm Ultrathin Semiconducting BaTiO$_3$ Films. \emph{Nano Letters} \textbf{2019}, \emph{19}, 2243--2250.

\bibitem{Fei2018} Fei,~Z.; Zhao,~W.; Palomaki,~T.~A.; Sun,~B.; Miller,~M.~K.; Zhao,~Z.; Yan,~J.; Xu,~X.; Cobden,~D.~H. Ferroelectric switching of a two-dimensional metal. \emph{Nature} \textbf{2018}, \emph{560}, 336--339.

\bibitem{Liu2016} Liu,~F. et~al. Room-temperature ferroelectricity in CuInP$_2$S$_6$ ultrathin flakes. \emph{Nature Communications} \textbf{2016}, \emph{7}.

\bibitem{Chang2016} Chang,~K.; Liu,~J.; Lin,~H.; Wang,~N.; Zhao,~K.; Zhang,~A.; Jin,~F.; Zhong,~Y.; Hu,~X.; Duan,~W.; Zhang,~Q.; Fu,~L.; Xue,~Q.~K.; Chen,~X.; Ji,~S.~H. Discovery of robust in-plane ferroelectricity in atomic-thick SnTe. \emph{Science} \textbf{2016}, \emph{353}, 274--278.

\bibitem{Zhou2017} Zhou,~Y.; Wu,~D.; Zhu,~Y.; Cho,~Y.; He,~Q.; Yang,~X.; Herrera,~K.; Chu,~Z.; Han,~Y.; Downer,~M.~C.; Peng,~H.; Lai,~K. Out-of-Plane Piezoelectricity and Ferroelectricity in Layered $\alpha$-In$_2$Se$_3$ Nanoflakes. \emph{Nano Letters} \textbf{2017}, \emph{17}, 5508--5513.

\bibitem{Sharma2019} Sharma,~P.; Xiang,~F.-X.; Shao,~D.-F.; Zhang,~D.; Tsymbal,~E.~Y.; Hamilton,~A.~R.; Seidel,~J. A room-temperature ferroelectric semimetal. \emph{Science Advances} \textbf{2019}, \emph{5}.

\bibitem{Dong2017} Dong,~L.; Lou,~J.; Shenoy,~V.~B. Large In-Plane and Vertical Piezoelectricity in Janus Transition Metal Dichalchogenides. \emph{ACS Nano} \textbf{2017}, \emph{11}, 8242--8248.

\bibitem{blonsky2015} Blonsky,~M.~N.; Zhuang,~H.~L.; Singh,~A.~K.; Hennig,~R.~G. Ab Initio Prediction of Piezoelectricity in Two-Dimensional Materials. \emph{ACS Nano} \textbf{2015}, \emph{9}, 9885--9891.

\bibitem{duerloo2012} Duerloo,~K.-A.~N.; Ong,~M.~T.; Reed,~E.~J. Intrinsic Piezoelectricity in Two-Dimensional Materials. \emph{The Journal of Physical Chemistry Letters} \textbf{2012}, \emph{3}, 2871--2876.

\bibitem{Fei2016} Fei,~R.; Kang,~W.; Yang,~L. Ferroelectricity and Phase Transitions in Monolayer Group-IV Monochalcogenides. \emph{Phys. Rev. Lett.} \textbf{2016}, \emph{117}, 097601.

\bibitem{Cui2018} Cui,~C.; Xue,~F.; Hu,~W.~J.; Li,~L.~J. Two-dimensional materials with piezoelectric and ferroelectric functionalities. \emph{npj 2D Materials and Applications} \textbf{2018}, \emph{2}.

\bibitem{Wu2016} Wu,~M.; Zeng,~X.~C. Intrinsic Ferroelasticity and/or Multiferroicity in Two-Dimensional Phosphorene and Phosphorene Analogues. \emph{Nano Letters} \textbf{2016}, \emph{16}, 3236--3241.

\bibitem{Mehrshad2016} Mehboudi,~M.; Dorio,~A.~M.; Zhu,~W.; van~der Zande,~A.; Churchill,~H.~O.~H.; Pacheco-Sanjuan,~A.~A.; Harriss,~E.~O.; Kumar,~P.; Barraza-Lopez,~S. Two-Dimensional Disorder in Black Phosphorus and Monochalcogenide Monolayers. \emph{Nano Letters} \textbf{2016}, \emph{16}, 1704--1712.

\bibitem{gao2019} Gao,~Y.; Wu,~M.; Zeng,~X.~C. Phase transitions and ferroelasticity--multiferroicity in bulk and two-dimensional silver and copper monohalides. \emph{Nanoscale Horiz.} \textbf{2019}, \emph{4}, 1106--1112.

\bibitem{Ding2017} Ding,~W.; Zhu,~J.; Wang,~Z.; Gao,~Y.; Xiao,~D.; Gu,~Y.; Zhang,~Z.; Zhu,~W. Prediction of intrinsic two-dimensional ferroelectrics in In$_2$Se$_3$ and other III$_2$-VI$_3$ van der Waals materials. \emph{Nature Communications} \textbf{2017}, \emph{8}.

\bibitem{anand2017} Chandrasekaran,~A.; Mishra,~A.; Singh,~A.~K. Ferroelectricity, Antiferroelectricity, and Ultrathin 2D Electron/Hole Gas in Multifunctional Monolayer MXene. \emph{Nano Letters} \textbf{2017}, \emph{17}, 3290--3296.

\bibitem{Lin201908} Lin,~L.-F.; Zhang,~Y.; Moreo,~A.; Dagotto,~E.; Dong,~S. Frustrated Dipole Order Induces Noncollinear Proper Ferrielectricity in Two Dimensions. \emph{Phys. Rev. Lett.} \textbf{2019}, \emph{123}, 067601.

\bibitem{Higashitarumizu2020} Higashitarumizu,~N.; Kawamoto,~H.; Lee,~C.-J.; Lin,~B.-H.; Chu,~F.-H.; Yonemori,~I.; Nishimura,~T.; Wakabayashi,~K.; Chang,~W.-H.; Nagashio,~K. Purely in-plane ferroelectricity in monolayer SnS at room temperature. \emph{Nature Communications} \textbf{2020}, \emph{11}, 2428.

\bibitem{chang2020} Chang,~K.; Küster,~F.; Miller,~B.~J.; Ji,~J.-R.; Zhang,~J.-L.; Sessi,~P.; Barraza-Lopez,~S.; Parkin,~S.~S.~P. Microscopic Manipulation of Ferroelectric Domains in SnSe Monolayers at Room Temperature. \emph{Nano Letters} \textbf{2020}, \emph{20}, 6590--6597.

\bibitem{siskins2019} Šiškins,~M.; Lee,~M.; Alijani,~F.; van Blankenstein,~M.~R.; Davidovikj,~D.; van~der Zant,~H.~S.~J.; Steeneken,~P.~G. Highly Anisotropic Mechanical and Optical Properties of 2D Layered As$_2$S$_3$ Membranes. \emph{ACS Nano} \textbf{2019}, \emph{13}, 10845--10851.

\bibitem{Momma2011} Momma,~K.; Izumi,~F. \emph{VESTA3} for three-dimensional visualization of crystal, volumetric and morphology data. \emph{Journal of Applied Crystallography} \textbf{2011}, \emph{44}, 1272--1276.

\bibitem{kampf2011} Kampf,~A.; Downs,~R.; Housley,~R.; Jenkins,~R.; Hyrsl,~J. Anorpiment, As$_2$S$_3$, the triclinic dimorph of orpiment. \emph{Mineralogical Magazine} \textbf{2011}, \emph{75}, 2857--2867.

\bibitem{PBE} Perdew,~J.~P.; Burke,~K.; Ernzerhof,~M. Generalized Gradient Approximation Made Simple. \emph{Phys. Rev. Lett.} \textbf{1996}, \emph{77}, 3865--3868.

\bibitem{stergiou1985} Stergiou,~A.~C.; Rentzeperis,~P.~J. The crystal structure of arsenic selenide, As$_2$Se$_3$. \emph{Zeitschrift für Kristallographie - Crystalline Materials} \textbf{1985}, \emph{173}, 185--191.

\bibitem{resta1994} Resta,~R. Macroscopic polarization in crystalline dielectrics: the geometric phase approach. \emph{Rev. Mod. Phys.} \textbf{1994}, \emph{66}, 899--915.

\bibitem{king1993} King-Smith,~R.~D.; Vanderbilt,~D. Theory of polarization of crystalline solids. \emph{Phys. Rev. B} \textbf{1993}, \emph{47}, 1651--1654.

\bibitem{mullen1972} Mullen,~D.; Nowacki,~W. Refinement of the crystal structures of realgar, AsS and orpiment, As$_2$S$_3$. \emph{Zeitschrift für Kristallographie - Crystalline Materials} \textbf{1972}, \emph{136}, 48.

\bibitem{dfpt} Baroni,~S.; de~Gironcoli,~S.; Dal~Corso,~A.; Giannozzi,~P. Phonons and related crystal properties from density-functional perturbation theory. \emph{Rev. Mod. Phys.} \textbf{2001}, \emph{73}, 515--562.

\bibitem{berendsen1984} Berendsen,~H.~J.~C.; Postma,~J.~P.~M.; van Gunsteren,~W.~F.; DiNola,~A.; Haak,~J.~R. Molecular dynamics with coupling to an external bath. \emph{The Journal of Chemical Physics} \textbf{1984}, \emph{81}, 3684--3690.

\bibitem{neb} Henkelman,~G.; Jónsson,~H. Improved tangent estimate in the nudged elastic band method for finding minimum energy paths and saddle points. \emph{The Journal of Chemical Physics} \textbf{2000}, \emph{113}, 9978--9985.

\bibitem{Salvador2018} Barraza-Lopez,~S.; Kaloni,~T.~P.; Poudel,~S.~P.; Kumar,~P. Tuning the ferroelectric-to-paraelectric transition temperature and dipole orientation of group-IV monochalcogenide monolayers. \emph{Phys. Rev. B} \textbf{2018}, \emph{97}, 024110.

\bibitem{Yuan2019} Yuan,~S.; Luo,~X.; Chan,~H.~L.; Xiao,~C.; Dai,~Y.; Xie,~M.; Hao,~J. Room-temperature ferroelectricity in MoTe$_2$ down to the atomic monolayer limit. \emph{Nature Communications} \textbf{2019}, \emph{10}, 1775.

\bibitem{Huan2014} Huan,~T.~D.; Sharma,~V.; Rossetti,~G.~A.; Ramprasad,~R. Pathways towards ferroelectricity in hafnia. \emph{Phys. Rev. B} \textbf{2014}, \emph{90}, 064111.

\bibitem{Sang2015} Sang,~X.; Grimley,~E.~D.; Schenk,~T.; Schroeder,~U.; LeBeau,~J.~M. On the structural origins of ferroelectricity in HfO$_2$ thin films. \emph{Applied Physics Letters} \textbf{2015}, \emph{106}, 162905.

\bibitem{Wang_2017} Wang,~H.; Qian,~X. Two-dimensional multiferroics in monolayer group IV monochalcogenides. \emph{2D Materials} \textbf{2017}, \emph{4}, 015042.

\bibitem{Cui2018_b} Cui,~C. et~al. Intercorrelated In-Plane and Out-of-Plane Ferroelectricity in Ultrathin Two-Dimensional Layered Semiconductor In$_2$Se$_3$. \emph{Nano Letters} \textbf{2018}, \emph{18}, 1253--1258.

\bibitem{Beckman2009} Beckman,~S.~P.; Wang,~X.; Rabe,~K.~M.; Vanderbilt,~D. Ideal barriers to polarization reversal and domain-wall motion in strained ferroelectric thin films. \emph{Phys. Rev. B} \textbf{2009}, \emph{79}, 144124.

\bibitem{Ye2016} Ye,~M.; Vanderbilt,~D. Ferroelectricity in corundum derivatives. \emph{Phys. Rev. B} \textbf{2016}, \emph{93}, 134303.

\bibitem{Brehm2020} Brehm,~J.~A.; Neumayer,~S.~M.; Tao,~L.; O'Hara,~A.; Chyasnavichus,~M.; Susner,~M.~A.; McGuire,~M.~A.; Kalinin,~S.~V.; Jesse,~S.; Ganesh,~P.; Pantelides,~S.~T.; Maksymovych,~P.; Balke,~N. Tunable quadruple-well ferroelectric van der Waals crystals. \emph{Nature Materials} \textbf{2020}, \emph{19}, 43--48.

\bibitem{Ai2019} Ai,~H.; Song,~X.; Qi,~S.; Li,~W.; Zhao,~M. Intrinsic multiferroicity in two-dimensional VOCl$_2$ monolayers. \emph{Nanoscale} \textbf{2019}, \emph{11}, 1103--1110.

\bibitem{Xu2017} Xu,~B.; Xiang,~H.; Xia,~Y.; Jiang,~K.; Wan,~X.; He,~J.; Yin,~J.; Liu,~Z. Monolayer AgBiP$_2$Se$_6$: an atomically thin ferroelectric semiconductor with out-plane polarization. \emph{Nanoscale} \textbf{2017}, \emph{9}, 8427--8434.

\bibitem{Jia2019} Jia,~Y.; Zhao,~M.; Gou,~G.; Zeng,~X.~C.; Li,~J. Niobium oxide dihalides NbOX$_2$: a new family of two-dimensional van der Waals layered materials with intrinsic ferroelectricity and antiferroelectricity. \emph{Nanoscale Horiz.} \textbf{2019}, \emph{4}, 1113--1123.

\bibitem{meyer2002} Meyer,~B.; Vanderbilt,~D. Ab initio study of ferroelectric domain walls in PbTiO$_3$. \emph{Phys. Rev. B} \textbf{2002}, \emph{65}, 104111.

\bibitem{lubk2009} Lubk,~A.; Gemming,~S.; Spaldin,~N.~A. First-principles study of ferroelectric domain walls in multiferroic bismuth ferrite. \emph{Phys. Rev. B} \textbf{2009}, \emph{80}, 104110.

\bibitem{Meng2017} Ye,~M.; Vanderbilt,~D. Domain walls and ferroelectric reversal in corundum derivatives. \emph{Phys. Rev. B} \textbf{2017}, \emph{95}, 014105.

\bibitem{kudin2001} Kudin,~K.~N.; Scuseria,~G.~E.; Yakobson,~B.~I. C$_2$F, BN, and C nanoshell elasticity from ab initio computations. \emph{Phys. Rev. B} \textbf{2001}, \emph{64}, 235406.

\bibitem{deji2017} Akinwande,~D. et~al. A review on mechanics and mechanical properties of 2D materials---Graphene and beyond. \emph{Extreme Mechanics Letters} \textbf{2017}, \emph{13}, 42--77.

\bibitem{Jiang2014_b} Jiang,~J.-W.; Park,~H.~S. Mechanical properties of single-layer black phosphorus. \emph{Journal of Physics D: Applied Physics} \textbf{2014}, \emph{47}, 385304.

\bibitem{wei2014} Wei,~Q.; Peng,~X. Superior mechanical flexibility of phosphorene and few-layer black phosphorus. \emph{Applied Physics Letters} \textbf{2014}, \emph{104}, 251915.

\bibitem{QE-2017} Giannozzi,~P. et~al. Advanced capabilities for materials modelling with QUANTUM ESPRESSO. \emph{Journal of Physics: Condensed Matter} \textbf{2017}, \emph{29}, 465901.

\bibitem{QE-2009} Giannozzi,~P. et~al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. \emph{Journal of Physics: Condensed Matter} \textbf{2009}, \emph{21}, 395502.

\bibitem{Chelikowsky1994} Chelikowsky,~J.~R.; Troullier,~N.; Saad,~Y. Finite-difference-pseudopotential method: Electronic structure calculations without a basis. \emph{Physical Review Letters} \textbf{1994}, \emph{72}, 1240--1243.

\bibitem{Kronik2006} Kronik,~L.; Makmal,~A.; Tiago,~M.~L.; Alemany,~M.~M.~G.; Jain,~M.; Huang,~X.; Saad,~Y.; Chelikowsky,~J.~R. PARSEC -- the pseudopotential algorithm for real-space electronic structure calculations: recent advances and novel applications to nano-structures. \emph{physica status solidi (b)} \textbf{2006}, \emph{243}, 1063--1079.

\bibitem{PBEsol} Perdew,~J.~P.; Ruzsinszky,~A.; Csonka,~G.~I.; Vydrov,~O.~A.; Scuseria,~G.~E.; Constantin,~L.~A.; Zhou,~X.; Burke,~K. Restoring the Density-Gradient Expansion for Exchange in Solids and Surfaces. \emph{Phys. Rev. Lett.} \textbf{2008}, \emph{100}, 136406.

\bibitem{Dalcorso2014} Corso,~A.~D. Pseudopotentials periodic table: From H to Pu. \emph{Computational Materials Science} \textbf{2014}, \emph{95}, 337--350.

\bibitem{ultrasoft} Vanderbilt,~D. Soft self-consistent pseudopotentials in a generalized eigenvalue formalism. \emph{Phys. Rev. B} \textbf{1990}, \emph{41}, 7892--7895.

\bibitem{troullier1991} Troullier,~N.; Martins,~J.~L. Efficient pseudopotentials for plane-wave calculations. \emph{Phys. Rev. B} \textbf{1991}, \emph{43}, 1993--2006.

\bibitem{Nye2012} Nye,~J.~F. \emph{Physical Properties of Crystals: Their Representation by Tensors and Matrices}; Clarendon Press: Oxford, UK, 1985.

\end{thebibliography}
\end{document}
\section{Introduction} Advances in photon measurements have stimulated innovation in encrypted communications and information processing, distinguished by a single datum being encoded in a single photon \cite{Tittel:01,chen:06}. As the relevant techniques mature, the accuracy of photon generation and detection is growing to compete with the present standard of radiative flux. A new standard is thus being defined by photon flux, in which counting is a central element for information processing with photons, and which is differentiated from the present standard, whose absoluteness is given by foreign physical systems based on electrical current and temperature \cite{Willson:73,Martin_1985,QCandelta:07,Zwinkels:10}. For the realization of this new standard, accurate calibration of photon flux as well as high-purity single photon generation are prerequisites \cite{Chunnilall:14}. Commonly used single photon counters such as avalanche diodes and superconducting nanowires have a short period of breakdown after each detection event \cite{Cova:96,Kerman:SucDetDeadT06}, and require post-measurement corrections, based on assumptions about the photon statistics, to represent the real radiant flux. Recent correction techniques assuming a Poisson distribution have achieved repeatable accuracies with parameters of quantum efficiency and detection dead time, promising that the counting-based standard can soon compete \cite{Bae_2019}. To improve even further, the goal has become single photon generation that deterministically furnishes a single photon only when the detector is ready to count, or at least with a certain and uniform probability \cite{Rodiek:17,Vaigu:2017,Molecule:19}. Such an ideal source has to maintain a single photon flux with low fluctuations and high repeatability. Current sources, however, show complex relaxation behaviors, and as a result their emission blinks \cite{Jeong:GAN,Konthasinghe:19}.

In this study, we focus on materials that emit single photon fluorescence at room temperature by investigating the silicon vacancy in diamond, defects in GaN, and the vacancy in hBN as spectrally narrow and accessible platforms. We characterize photon number statistics and fluctuations, since their magnitudes are the major factors determining the accuracy of photon flux for radiometry applications. Additionally, we compare the maximum count rates allowed for the bare materials under the conventional collection technique of confocal microscopy. We stress this condition because detection count rates are not intrinsic but rather depend on efficiencies set by the refractive index geometry and the detection technique. We note that the present experiments are limited to estimations of the internal quantum efficiency and the theoretical maximum count rates under continuous wave operation; more specific methods to optimize collection efficiency, an important subject of photonics, remain for future, application-oriented studies.

To find general tendencies and characteristics among the complexity and variety of our materials, the current work is based on a large amount of data collected from numerous emitters. Our full dataset consists of two levels: the first concerns basic properties identifying single photon emitters, and the second concerns figures of stability. Data fields in the first level are the photon coincidence correlation $g^{(2)}(0)$ and spectra, which have been used to authenticate single photon fluorescence.
Statistical distributions of the positions of the spectral peaks were collected as a subject for later studies on defect states and their formation. We discuss the results of these basic properties in \sref{sec:spe}. Data fields in the second level include the photon number uncertainty and the repeatability of measurements for conversion to radiometric flux. They are examined with a dual detection system that measures photon streams with two modes of detection, namely, photon counting detectors with results in counts per second (cps) and photocurrent-generating photodiodes with results in watts (W). Having two different detection mechanisms enables a comparison of outcomes for the same single photon stream as well as an examination of the uncertainty of conversion between the two measures. Setting the system to the photon counting detectors, we measured both the photon number fluctuation and the repeatability of photon flux measurements to evaluate the degree of stability of the photon sources and of the detection system itself, respectively. The analyzed results are discussed in sections~\ref{sec:stab}--\ref{sec:radiant}.

The main difficulty in radiant flux measurements is to find an emitter that produces a photon flux intense enough to be detected through the photocurrents of the photodiode. To this end, we exploited an emitter with a count rate $> 10^6$ per second and $g^{(2)}(0) < 1$. We verified both the repeatability of measurement of the radiant flux generated by the single photon emitters and the equivalence of the calibration parameters of detection, as described in \sref{sec:radiant}. We did not find additional corrections to be necessary beyond our given parameters \cite{Bae_2019}, which was expected from our collection efficiency of $< 6.4 \%$, which flattens any dissimilar photon number statistics into a Poisson distribution \cite{Loudon:105699}. Increasing the collection efficiency will be our main subject in future studies to achieve the advantages of sub-Poisson statistics, which is being pursued for few-photon metrology.

\section{Samples and Experimental Methods}\label{sec:method} The materials of interest are the silicon vacancy in diamond (vc-SiV), crystal defects in GaN (df-GaN), and the vacancy in hBN (vc-hBN). Their common feature is a wide band-gap of the host crystal that preserves single photon emission at room temperature. The SiV sample in this work took the form of nano-diamonds on an iridium substrate, which were grown as a film by a silicon-free CVD process and milled to diameters of 50--100 nm. Silicon was implanted after the clean CVD process to attain a high SiV purity \cite{Neu_2011}. The GaN substrate is a commercially available 4 $\mu$m GaN crystal grown on sapphire \cite{GaN:Spec}. The fluorescence center in GaN has been explained as a charge-trapped dot located at the intersection of a lattice dislocation stemming from the sapphire--GaN interface and a layer mismatch of crystal orientation, similar to a point-like potential well in a two-dimensional quantum well \cite{Jeong:GAN}. The hBN sample took the form of nano-flakes dispersed on the oxidized layer of a silicon substrate. The nano-flakes are commercially available but required a special annealing treatment in an inert environment: 800$^{\circ}$C for 30 min in 1 Torr Ar gas \cite{hBN:ACS2016}. Because the emitters are randomly distributed, confocal microscopy has been commonly used to confine the fluorescence signals; a short numerical aside below illustrates the Poissonization-by-loss argument invoked earlier in this section.
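A lossy collection path acts as a binomial thinning channel, which scales the Mandel $Q$ parameter ($Q = \langle \Delta N^2\rangle/\langle N\rangle - 1$) by the efficiency $\eta$. The minimal sketch below uses an idealized Fock-state-like source and $\eta = 0.064$; both are illustrative assumptions, not measured properties of our emitters.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Strongly sub-Poissonian source: exactly n0 photons per time bin.
n0, eta, bins = 5, 0.064, 200_000      # eta: assumed collection efficiency
emitted  = np.full(bins, n0)
detected = rng.binomial(emitted, eta)  # binomial loss channel

def mandel_q(n):
    """Mandel Q parameter: 0 for Poisson light, -1 for a Fock state."""
    return n.var() / n.mean() - 1.0

print(mandel_q(emitted), mandel_q(detected))
# -> -1.0 and roughly -eta = -0.064: loss scales Q by eta, so at a few
#    percent efficiency the detected statistics are nearly Poissonian.
\end{verbatim}

Since $Q$ shrinks from $-1$ to about $-\eta$, the detected statistics at our collection efficiency are effectively Poissonian, which is why no statistics-dependent correction was needed in the radiometry analysis below.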
We used the measurement system of our previous work, where we set up modules for single photon collection with single mode fibers (SMFs) and analyzed their photon number statistics \cite{Lim:19}. The setup has the benefits of uninterrupted serial measurements of spatial positions, spectra, and photon statistics ($g^{(2)}(0)$), and of a high stability of the maintained alignments, which leads to repeatable measurements (see \sref{sec:radiant}). The SMF interface provides identical beam profiles for the analyzing modules, leaving the external systems available for exploitation. The theoretical maximum collection efficiency from the sample--air interface to the SMF output is 21 \%, assuming a Gaussian beam as in past work \cite{Lim:19}. The collection efficiency of $< 6.4$ \% was derived by considering the mode coupling efficiency between electric dipole radiation and the SMF mode \cite{novotny_hecht_2012,Schneider:18} (for more details, see \ref{sec:apenA}). The real collection efficiency is smaller than this prediction when we take into account surface scattering and the variable nano-crystal shapes. Still, our photon count rate results are similar to other works that studied the same materials, implying a sufficient level of mechanical rigidity to maintain the count rate.

For the evaluation of photon number fluctuation and application to radiometry experiments, we constructed a radiometry module that includes the dual detection system described above. This new module has two stages of detection: the first counts the photon flux with a silicon single photon avalanche detector (SPAD), and the second measures the radiant power converted from the photodiode photocurrents. They share the same single photon input injected via the SMF, and the same incidence position, into which either group of detectors can be rotated. This design is intended to attain convergence between the outcomes of the two detection mechanisms consistently, without any corrections for optical path loss (\sref{sec:radiant}). The module also facilitated the measurement of photon number fluctuations, as shown in \sref{sec:stab}.

\section{Spectra and Photon Statistics}\label{sec:spe}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=10cm]{FIG2v2.pdf}
\end{center}
\caption{Spectra of single photon emitters in the different materials: \textbf{(a)} silicon vacancy in diamond (SiV), \textbf{(b)} fluorescent defect in a GaN crystal (GaN), and \textbf{(c)} vacancy in hBN (hBN). Insets are photoluminescence images scanned over the area centered on each fluorescence center. \textbf{(d)} Histograms of the spectral peak wavelengths and linewidths for SiV (blue), GaN (red), and hBN (gray). The gray zone is the resolution boundary set by our spectrometer and its coupling optics.}
\label{fig:spectra}
\end{figure}

Spectra of fluorescence centers commonly consist of a zero-phonon line (ZPL) and phonon side-bands, and show a high ratio of ZPL intensity to the phonon side-band, i.e., a high Debye--Waller factor, as shown in \fref{fig:spectra}(a--c). The ZPL positions of vc-hBN and df-GaN depend on strain and defect formations, and are widely distributed over 600--750 nm and 690--800 nm, respectively [\fref{fig:spectra}(d)]. Due to the different mechanisms of defect formation in df-GaN and vc-hBN, df-GaN has a greater linewidth on average and can be more directly affected by crystal strain.
In both df-GaN and vc-hBN, the linewidth varies with the local strain of the host crystal because of the various kinds of defect formations and their large degrees of freedom. On the other hand, vc-SiV has a definite ZPL position, $\sim737$ nm \cite{SiV:1996}, the formation of which is explicitly allowed by the diamond crystal.

\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{FIG3V4raster.pdf}
\end{center}
\caption{Photon correlation ($g^{(2)}(\tau)$) acquired with a Hanbury Brown--Twiss interferometer for \textbf{(a)} the silicon vacancy in diamond (blue), \textbf{(b)} defects in GaN (red), and \textbf{(c)} the vacancy in hBN (gray). Solid lines show the model $g^{(2)}(\tau) = 1 - p_1 \exp (-|\tau|/\tau_1) + p_2 \exp (-|\tau|/\tau_2)$ with characteristic times of anti-bunching ($\tau_1$) and trapping in meta-stable dark states ($\tau_2$). Fitted to this model, the zero-time correlations ($g^{(2)}(0)$) are \textbf{(a)} 0.38 $\pm$ 0.22, \textbf{(b)} 0.24 $\pm$ 0.14, and \textbf{(c)} 0.33 $\pm$ 0.05 (95 \% confidence). From the full set of data attained from various fluorescence centers in the three materials, photon counts as acquired from the detectors are plotted against \textbf{(d)} $g^{(2)}(0)$, \textbf{(e)} $\tau_1$, and \textbf{(f)} $\tau_2$. The gray zone in panel (e) is the resolution boundary limited by the time jitter noise of the single photon detectors.}
\label{fig:g2}
\end{figure}

For a statistical approach, we collected photon statistics from $>$ 20 fluorescence emitters. Single photon emitters have a unique fluorescence property of exhibiting a low coincidence of photon counts. The degree and time scale of this coincidence suppression are commonly represented by the normalized correlation ($g^{(2)}(\tau) = \langle C(t+\tau) C(t)\rangle/\langle C(t)\rangle^2$) of the photon count ($C$) as measured by a Hanbury Brown--Twiss (HBT) interferometer \cite{Loudon:105699}. We employed two methods of deducing $g^{(2)}(\tau)$ experimentally: the start-stop histogram and time-tag correlation (TTC). The former is advantageous for real-time acquisition, as the trigger intervals between the signals from the two SPADs are collected directly. The latter, however, converges better to $g^{(2)}(\infty) \rightarrow 1$, because it stores the raw time-tags and deduces the normalization factor from the total count of each detector during data processing, which gives reliable results for $g^{(2)}(0)$. In our study, estimations of $g^{(2)}(\tau)$ were based on the raw data given by the TTC method.

The $g^{(2)}(\tau)$ from our samples, however, shows composite features of both anti-bunching ($g^{(2)}(\tau) < 1 $) at $|\tau| < \tau_1$ and bunching ($g^{(2)}(\tau) > 1$) at $\tau_1<|\tau| < \tau_2$. Here, $\tau_1$ and $\tau_2$ are effective time constants defined by the fitting model $g^{(2)}(\tau) = 1 - p_1 e^{-|\tau|/\tau_1} + p_2 e^{-|\tau|/\tau_2}$, where $p_1$ is the depth contributing to the anti-bunching of $g^{(2)}(0)$, $p_2$ is the height above the baseline $g^{(2)} = 1$, $\tau_1$ is the time width of the anti-bunching, and $\tau_2$ is the characteristic exponential decay time of the bunching.
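As a minimal sketch of how such parameters can be extracted in practice, the following Python snippet fits the model above to synthetic correlation data with a standard least-squares routine, and then blurs the ideal curve with the Gaussian jitter kernel $H(\tau; D)$ discussed below to show the resulting bias on $g^{(2)}(0)$. All numbers are assumed for illustration and do not reproduce any particular emitter.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def g2_model(tau, p1, p2, tau1, tau2):
    """g2(tau) = 1 - p1 exp(-|tau|/tau1) + p2 exp(-|tau|/tau2)."""
    return 1 - p1*np.exp(-np.abs(tau)/tau1) + p2*np.exp(-np.abs(tau)/tau2)

tau  = np.linspace(-50, 50, 2001)                 # delay axis, ns
true = (0.9, 0.3, 0.5, 20.0)                      # assumed p1, p2, tau1, tau2
data = g2_model(tau, *true) \
       + np.random.default_rng(2).normal(0, 0.02, tau.size)

popt, _ = curve_fit(g2_model, tau, data, p0=(0.5, 0.1, 1.0, 10.0))
print("fitted parameters:", popt, "-> g2(0) =", g2_model(0.0, *popt))

# Bias from detector time jitter: blur the ideal curve with the Gaussian
# kernel H(tau; D) for D = 0.3 ns and read off the apparent g2(0).
D = 0.3
H = np.exp(-tau**2 / (2*D**2)); H /= H.sum()
blurred = np.convolve(g2_model(tau, *true) - 1, H, mode="same") + 1
print("apparent g2(0) after jitter:", blurred[tau.size // 2])
# With tau1 comparable to D, the apparent g2(0) (about 0.7 here) sits
# well above the true value of 0.40.
\end{verbatim}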
We collected $\tau_1$ and $\tau_2$ because they have physical origins: $\tau_1$ is the spontaneous emission time for an ideal photon source, or relates to the lifetime of the radiative transition for single photon emission, while $\tau_2$ stems from non-radiative relaxations, which can cause blinking in the photon emission \cite{Santori:2004bv}. If a single emitter is trapped in meta-stable dark states, it stops emitting until it is released back to the bright states, and $\tau_2$ directly represents this time scale. With this model, we observed $g^{(2)}(0) = 0.38 \pm 0.22$ for vc-SiV, $0.24\pm 0.14$ for df-GaN, and $0.33 \pm 0.05$ for vc-hBN, as shown in \fref{fig:g2}(a--c), where the errors are the widths of the 95 \% confidence intervals calculated via robust covariance estimation \cite{RCVME}. The full $g^{(2)}(0)$ dataset was measured from $>$ 20 fluorescence centers of our samples, as shown in \fref{fig:g2}(d). The large errors in the $g^{(2)}(0)$ obtained from the SiV emitters are due to the short $\tau_1$ of these emitters and the time jitter of the detector. To restore the pure $g^{(2)}(0)$ unaffected by time jitter noise, we tried a deconvolution method with a noise filter $H(\tau;D) = (D\sqrt{2 \pi})^{-1}\exp(-\tau^2/2 D^2)$, assuming a time jitter of $D = 0.3$ ns \cite{Nomura:2010gx}. Our method of deconvolution is to fit the data with the exponential model convolved with this filter. However, this method turned out to be redundant, as its results are similar to the previous estimation of $g^{(2)}(0)$ with the parameters $p_1$ and $p_2$, and it can be biased when $D$ is overestimated. It is a common tendency, regardless of material, that the time jitter errors of $g^{(2)}(0)$ grow as $\tau_1$ shortens. Since the $g^{(2)}(\tau)$ measured by TTC is in absolute values, we could also take the average $g^{(2)}(0)$ over intervals $\tau \in (-\tau_1, \tau_1)$. These experimentally allowed values are presented as transparent points for the 18/hBN fluorescence emitter in \fref{fig:g2}(d). Their differences from the pure model values of $g^{(2)}(0)$ are reflected in the confidence intervals for all emitters. The $g^{(2)}(0)$ values closest to 0 that we recorded were $0.24\pm 0.14$ from df-GaN and $0.31 \pm 0.05$ from vc-hBN, both attained at low excitation power. The effective excitation power differed among materials and samples due to differences in refractive index, grain diameter, and geometry. However, the dataset used for \fref{fig:g2}(d--f) covers the full range of photon count rates, from far below to far above the saturation points of each material. We suspect that the floor of $g^{(2)}(0) > 0.2$ can be attributed either to background photoluminescence, including deeper infrared, as we used a long pass filter with a 568 nm edge for purification, or to unresolved single photon emitting transitions within a ZPL \cite{Alexander:2019um}. We discuss this in \sref{sec:radiant} with \fref{fig:conv}.

Our results show that $\tau_1$ and $\tau_2$ decrease with increasing excitation power for every material we observed. These variables of the $g^{(2)}(\tau)$ model have their origin in the relaxation processes of the fluorescent materials. The power dependence of $\tau_1$, evident for df-GaN and vc-hBN, implies that $\tau_1^{-1}$ represents a re-excitation rate rather than a spontaneous emission rate under the moderate $P$ that allows exploitable photon count rates.
Nevertheless, we can expect large spontaneous emission rates from SiV, whose $\tau_1$ is so short as to be close to the instrumental time jitter limit over the entire range of $P$. According to the three-level model, $\tau_2$ is related to a recovering relaxation from the meta-stable states (deshelving), and also depends on the excitation power because stronger excitation gives more chances of initializing ionization \cite{Santori:2004bv,Jeong:GAN,ASTAKHOV2018211}.

We observed high count rates of $> 10^6$ cps with vc-hBN, similar to other studies \cite{C7NR08249E}, and low count rates of $< 2\times 10^5$ cps with vc-SiV. This result stands in contrast to the long $\tau_{1,2}$ of hBN, as shown in \fref{fig:g2}(e) and (f), and to the intuition that fast transitions allow high photon rates. The speed of the transitions and of the blinking of vc-SiV was the highest among the materials of interest, but the exact picture has yet to be unveiled before the internal efficiency of its fluorescence emission at room temperature can be predicted \cite{SiV:APL:2011,Lindner_2018}. The photon count rate of df-GaN, $< 3 \times 10^5$ cps, seems limited by total internal reflection at the GaN--air interface, which could be overcome with an immersion medium before the objective lens. Otherwise, vc-hBN is preferable for radiometry experiments that require a wide range of photon counts on the order of $> 10^6$ cps.

\section{Photon number fluctuation}\label{sec:stab}
\begin{figure}[h]
\centerline{\includegraphics[width=7.5 cm]{fig4_PwStability_2.pdf}}
\caption{Relative number uncertainty $\sigma(N)/\langle N\rangle$ of the average photon number $\langle N\rangle$ from various fluorescence centers of different materials: silicon vacancy in diamond (SiV, blue), defect in GaN (red), and vacancy in hBN (gray). Each point was acquired from a 10 s long streaming acquisition with a time bin width of 10 ms. Data collected with different emitters are distinguished by the marker shapes specified in \fref{fig:g2}.}
\label{fig:stability}
\end{figure}

A low photon number uncertainty is required for an ideal photon source. From the various values of $\tau_2$ among emitters and materials, which are related to the time scales of emission blinking, we can infer the requirement to find an emitter with a low photon number fluctuation. We first measured the photon number uncertainty ($\sigma (N)/\langle N\rangle$), defined as the ratio of the standard deviation of the photon number ($\sigma (N)$) to its average value ($\langle N\rangle$). These statistical variables were obtained from a single shot of streaming acquisition of photon counts over 10 s. The photon number $N$ is the accumulated photon count in a 10 ms long time bin ($\Delta t$) within the streaming acquisition. Thus, we clarify the terminological distinction between $N$ and the photon count rate $C$ by the following relation:
\begin{equation}
N_i = \int_{t_i}^{t_i + \Delta t} C(t)\, \mathrm{d}t,
\end{equation}
for the $i$-th time bin $[t_i, t_i + \Delta t]$. In the experiments, we measured $N_i$ directly from an edge counter with a fixed $\Delta t$ and then deduced $C_i$ via $C_i = N_i/\Delta t$. For shot-noise limited $N$ following Poisson statistics, as acquired for $\Delta t > \tau_1$, the defined photon number uncertainty $\sigma (N)/\langle N\rangle$ reduces to $1/\sqrt{\langle N\rangle}$, which decreases with increasing $N$. This assumption, however, cannot account for the observed differences in uncertainty between the materials shown in \fref{fig:stability}. The measured values of the uncertainty split into two groups: SiV together with GaN on one side, and hBN on the other.
With a few exceptions, every hBN emitter exhibited a larger $\sigma (N)/\langle N\rangle$ than the SiV and GaN emitters.

\begin{figure}[h]
\centerline{\includegraphics[width=10cm]{FIG4Braster.pdf}}
\caption{Photon number variance $\langle \Delta N^2\rangle$ with respect to the average photon number $\langle N \rangle$, measured from the fluorescence emission of the defect in GaN (red) and the vacancy in hBN (black) samples. The photon number $N$ is the count rate $C$ integrated over a time bin $\Delta t$, $N = C \Delta t$. Varying $\Delta t$ from 1 ms to 1 s, the statistics of $N$ were taken from streaming acquisitions of 200 s, as shown in the inset. The gray line is the shot-noise limit, $\langle \Delta N^2\rangle_{\mathrm{shot}} = \langle N \rangle$. The solid lines follow the model $\langle \Delta N^2\rangle = \langle N \rangle + \nu_2 \langle N \rangle^2$, with $\nu_2 = 7.9 (\pm 0.3)\times10^{-5}$ for GaN and $1.3 (\pm 0.05)\times10^{-3}$ for hBN.}
\label{fig:fluc}
\end{figure}

Noise contributions beyond the shot noise $\langle \Delta N^2\rangle_{\mathrm{shot}} = \langle N \rangle$ are clearly seen. We examined $\langle \Delta N^2\rangle$ with integration times ($\Delta t$) from 1 ms to 1 s for df-GaN and vc-hBN; $\langle N \rangle$ was set to a similar value ($1.3 \times 10^5$ cps) by adjusting the excitation power. In this condition, $g^{(2)}(0) = 0.41 \pm 0.04$ for the df-GaN and $0.54 \pm 0.03$ for the vc-hBN emitter, with both emitters below saturation. We extended the noise model to include a quadratic term in $\langle N \rangle$ with the coefficient $\nu_2$:
\begin{equation}
\langle \Delta N^2 \rangle = \langle N \rangle + \nu_2 \langle N \rangle^2.
\end{equation}
This model fitted df-GaN with $\nu_2 = 7.9 (\pm 0.3) \times 10^{-5}$, and as shown in \fref{fig:fluc}, $\langle \Delta N^2 \rangle$ was close to $\langle \Delta N^2\rangle_{\mathrm{shot}}$ at $\Delta t <$ 20 ms and small $\langle N \rangle$. However, the data from vc-hBN followed the model only for large $\langle N \rangle > 10^4$, with $\nu_2 = 1.3 (\pm 0.05) \times 10^{-3}$, and did \textit{not} converge to $\langle \Delta N^2\rangle_{\mathrm{shot}}$ at short $\Delta t$. The $\langle \Delta N^2 \rangle$ of vc-hBN was greater than that of df-GaN by an order of magnitude, matching the difference in $\nu_2$ between them. The large fluctuation of $N$ in vc-hBN can easily be seen in the real-time data in the inset of \fref{fig:fluc}. Its origin is related to the blinking phenomenon, as vc-hBN has a long $\tau_2 = 1.0 \pm 0.1$ $\mu$s, in contrast to the short $\tau_2= 53 \pm 5$ ns of df-GaN. In our survey of $>$ 36 fluorescence centers, most vc-hBN $\tau_2$, which are related to the time dynamics of the blinking, are longer than those of GaN, as shown in \fref{fig:g2}(f). This analysis of $\tau_2$ is still insufficient to extrapolate the general blinking dynamics, though, since we could not measure $\tau_2 > 1$ ms $\sim \Delta t$ in $g^{(2)}(\tau)$ due to the sampling time window. Nevertheless, the tendency for slow blinking dynamics to appear more clearly in vc-hBN, as shown in the inset of \fref{fig:fluc}, is intriguing. On the other hand, df-GaN, with its fast $\tau_2$, did \textit{not} exhibit such fluctuations and enabled low-noise operation near $\langle \Delta N^2\rangle_{\mathrm{shot}}$ with $\Delta t < 10$ ms. Because of the low $\langle N \rangle$ of vc-SiV, we could not perform a fair test of $\nu_2$ for the vc-SiV samples; a minimal simulation of how slow blinking produces exactly such a quadratic term is sketched below.
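The following sketch generates Poissonian counts from an emitter that switches between a bright and a dark state (a random telegraph process). All rates and switching probabilities are assumed, illustrative values, not fits to our emitters. Binning the stream as in our analysis shows the variance rising above the shot-noise limit by an amount that grows roughly as $\langle N \rangle^2$, i.e., the $\nu_2 \langle N \rangle^2$ term.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

rate_on, dt, T = 1.3e5, 1e-3, 200.0   # assumed "on" rate (cps); 1 ms steps; 200 s
steps = int(T / dt)

# Random telegraph blinking: per-step switching probabilities (assumed).
p_off, p_on = 0.005, 0.05             # leave / re-enter the bright state
on, state = np.empty(steps, dtype=bool), True
for i in range(steps):
    if state and rng.random() < p_off:
        state = False
    elif not state and rng.random() < p_on:
        state = True
    on[i] = state

counts = rng.poisson(rate_on * dt * on)   # shot noise on top of blinking

for nbin in (1, 10, 100, 1000):           # Delta t from 1 ms to 1 s
    N = counts[: steps // nbin * nbin].reshape(-1, nbin).sum(axis=1)
    print(f"Delta t = {nbin:4d} ms: <N> = {N.mean():9.1f}, "
          f"var = {N.var():12.1f}, shot limit = {N.mean():9.1f}")
# The variance exceeds the shot limit by an excess growing roughly as
# <N>^2; replacing `on` with all-ones recovers var ~ <N> (Poisson).
\end{verbatim}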
Nevertheless, the behavior of $\langle \Delta N^2 \rangle$ for df-GaN shown in \fref{fig:fluc} leads us to expect that the $\langle \Delta N^2 \rangle$ of vc-SiV would be as low as that of df-GaN if it had a similar level of $\langle N \rangle$. The low $\langle N \rangle$ of vc-SiV also has disadvantages for obtaining repeatable $S$, as discussed in the following section.

\section{Conversion Between Photon and Radiant Fluxes with High Repeatability}\label{sec:radiant} In order to apply single photon emitters in radiometry experiments, evaluations of the reliability of the measurement results must come first. The main quantity in our assessment is the repeatability of the photon flux ($\Phi_q$), or photon count ($C$), measurement, followed by conversion to the radiant flux ($S$). We performed two tests: one to deduce the repeatability errors of $\langle N\rangle$, and the other to confirm the validity of applying the present calibration parameters for photon counts to represent radiant fluxes. The dual detection module introduced in \sref{sec:method} is well suited for performing these tests. With this module, we measured $C$ and $S$ with a SPAD and a photodiode, respectively, and cross-checked the independent results against previously proven calibration parameters.

\begin{figure}[h]
\centerline{\includegraphics[width=7.5 cm]{fig4_Repeatability_2.pdf}}
\caption{Repeatability of $\langle N\rangle$ ($=\sigma(\langle N\rangle)/\sqrt{M}$, where $M$ is the number of repeated measurements). Single values of $\langle N \rangle$ were acquired from a 10 s long streaming acquisition at a sampling rate of 10 ms. $\sigma(\langle N\rangle)$ was calculated from the $M$ values of $\langle N \rangle$, where $M=5$ was set to extract a quantity of repeatability. Data in the green circles were attained from higher repetitions, up to $M=20$. Data collected with different emitters are distinguished by the marker shapes specified in \fref{fig:g2}.}
\label{fig:repeat}
\end{figure}

The repeatability errors were measured via the following process. We repeated the streaming acquisition of $N$ in the same way as for the photon number uncertainty in \sref{sec:stab}. We took the average value ($\langle N\rangle$) of each shot ($i$) and repeated this $M$ times to obtain $\langle N\rangle_{1\leq i\leq M}$, thereby yielding the repeatability error $\sigma (\langle N \rangle)/\sqrt M$ according to its theoretical definition. In order to obtain practical values, we inserted an $S$ measurement between consecutive shots of the $N$ acquisition to mirror the calibration sequence of radiometry experiments. Hence, we obtain both the repeatability error and paired data on $C$ and $S$. As shown in \fref{fig:repeat}, df-GaN, which has a low $\langle \Delta N^2 \rangle$, demonstrates a high repeatability of the $\langle N\rangle$ measurement. We extended $M$ to 20 to obtain the upper bound of the repeatability with a qualified df-GaN emitter with $g^{(2)}(0) = 0.43 \pm 0.09$ and $C = 1.3 \times 10^5$ cps. This best result reaches 30 ppm, as marked with green circles in \fref{fig:repeat}. We note that this value is close to the present repeatability of radiometry experiments with a laser source.

\begin{figure}[h]
\centerline{\includegraphics[width=10cm]{FIG5raster_2.pdf}}
\caption{\textbf{(a)} Relation of the radiant flux (red dots, $S$) with respect to the photon count rate ($C$), where the detections are based on different mechanisms of operation.
The radiant fluxes (y-axis) are given by currents produced in a traceable, high-sensitivity photodiode, and the photon fluxes are from a single photon counter (SPC). The solid red line represents the theoretical relation $S = \frac{hc}{\lambda} \eta \Phi_q$ (identical to \eref{eq:identity}), where $\eta$ is the quantum efficiency of the SPC and $\lambda$ is the center wavelength. The black squares show $g^{(2)}(0)$ varied at different levels of $C$. \textbf{(b)} The spectrum of the single photon source consists mainly of a Lorentzian peak centered at 665 nm with a FWHM of 2 nm in wavelength (1), together with small side peaks (2--5). \textbf{(c)} Photon autocorrelation measured at an excitation power of 0.2 mW and a photon count rate of $2 \times 10^5$ cps. All data were measured with 18/hBN (see \fref{fig:g2}). } \label{fig:conv} \end{figure} Despite its low fluctuation and high repeatability, the maximum $C$ obtainable from df-GaN is limited to around $2 \times 10^5$ cps [\fref{fig:g2}(d--f)]. This in turn limits the signal-to-noise ratio (SNR) to 22 for $S$ measurements with a silicon photodiode having a noise equivalent power (NEP) of 8.4 fW/$\mathrm{Hz}^{\frac{1}{2}}$ and integrating photocurrents for 10 s \cite{Mountford:08,Park:16}. To take advantage of the high SNR of $S$ and the wide range of flux levels shown in \fref{fig:conv}, we chose 18/hBN among the vc-hBN emitters (see \fref{fig:g2}), whose maximum $C \sim 2 \times 10^6$ cps. We adjusted the level of $C$ by controlling the excitation power ($P$). $C$ follows the saturation function $C =C_0 (1 - e^{-P/P_\mathrm{sat}})$ with a saturation count rate $C_0 = 1.8\times 10^6$ cps and a saturation power $P_\mathrm{sat} = 1.3 \pm 0.2$ mW. Over this wide range of $C$, $g^{(2)}(0)$ remained within the range 0.37--0.58. The uncertainty of $g^{(2)}(0)$ (95 \% confidence interval) increased at high $C$, as a strong excitation power shortens the anti-bunching time width $\tau_1$ down to the time-jitter limit of the SPAD, as shown in \fref{fig:g2}(e). As shown in \fref{fig:conv}(c), $g^{(2)}(0) = 0.53 \pm 0.03$ when measured at a low excitation power of $0.15 \times P_\mathrm{sat}$, and it decreases to 0.35 at higher $P$. We attribute this irregular behavior to the presence of other transitions that independently emit single photons. Spectral investigations reveal that the ZPL is composed of two Lorentzian peaks, labeled \textit{1} and \textit{2} in \fref{fig:conv}(b). Similar findings were reported in another work, whose analysis predicts that cross-correlations between independently emitted single photons increase $g^{(2)}(0)$ \cite{Alexander:2019um}. As the overlapping peaks \textit{1} and \textit{2} have similar areas, the degree of mixing is sufficient to be the main cause of the high $g^{(2)}(0)$. We also attribute the oscillating behavior of $g^{(2)}(0)$ with respect to $P$ to this mixing: the fluorescence transitions have different excitation cross-sections and therefore compete in their contributions to the total photon count rate. Thanks to the narrow linewidth ($\Delta \lambda$ $\sim 2$ nm) shown in \fref{fig:conv}(b), we took a single value of $\eta$ at the center wavelength $\lambda_c = 665.8 \pm 0.03$ nm \cite{Bae_2019}, justified by the fact that the variation $\Delta \eta$ across the narrow line is smaller than the measurement uncertainty (0.5 \%).
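The saturation parameters quoted above can be extracted with a standard nonlinear least-squares fit. The following minimal Python sketch is an illustration only, using a synthetic power sweep in place of the measured $(P, C)$ data of 18/hBN, with the quoted $C_0$ and $P_\mathrm{sat}$ reused as ground truth.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def saturation(P, C0, P_sat):
    # C = C0 (1 - exp(-P/P_sat)): count rate versus excitation power
    return C0 * (1.0 - np.exp(-P / P_sat))

# Synthetic stand-in for the measured power sweep (P in mW, C in cps).
P = np.linspace(0.1, 5.0, 15)
C = saturation(P, 1.8e6, 1.3) * rng.normal(1.0, 0.03, P.size)

(C0, P_sat), _ = curve_fit(saturation, P, C, p0=[1e6, 1.0])
print(f"C0 = {C0:.2e} cps, P_sat = {P_sat:.2f} mW")
\end{verbatim}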
The error caused by detection dead time and nonlinear counting was predicted to be 0.2 \% at $C=10^5$ cps and to grow to 4 \% at $C=5 \times 10^5$ cps, so that error correction is critical for $C > 10^5$ cps. For this, we used the correction function $\hat C = U(C)$, which gives the corrected count rate $\hat C$, as defined in a previous work \cite{Bae_2019}. At this stage, with the given quantum efficiency $\eta$, we can extract the quantity of interest, $\Phi_q = \eta \times \hat C$. In practice, we can reduce this to $\Phi_q = \mathrm{DE} (C) \times C$ by introducing an effective function $\mathrm{DE} (C)$ that includes both the effect of $\eta$ and that of $U: C \rightarrow \hat C$ \cite{Bae_2019}. Then we represent the relation between $S$ and $\Phi_q$ in a convenient form with the uncorrected variable $C$: \begin{equation}\label{eq:identity} S = \frac{hc}{\lambda} \times \mathrm{DE} (C) \times C. \end{equation} This relation, with the given $\mathrm{DE} (C)$, agrees well with the experimental $S$ and $C$ data shown in \fref{fig:conv}(a). We first note that the parameters given in previous works were applied here identically, even though the measurement system in this work was newly constructed. Second, consistent results were achieved even with a non-classical light source that was not fully controlled, as this new source contains uncontrolled internal dynamics related to $\tau_2$ and the blinking phenomenon. Such a photon source may reduce the repeatability of $C$ due to a high $\langle\Delta N^2\rangle$, but it does not conflict with the given parameters in $\mathrm{DE} (C)$, because those parameters are central values. \section{Discussions} We have studied single photon emitters based on fluorescent defects in various crystalline materials at room temperature. The silicon vacancy in diamond, defects in GaN, and vacancies in hBN all have narrow linewidths, and their spectral centers are spread over a wide range of wavelengths. This is, in fact, an important advantage for few-photon radiometry, where various quantum efficiencies serve as variable parameters in conversion models for radiant power. Both stability and brightness are important qualities of single photon sources in few-photon metrology. None of the materials investigated in this study are endowed with both characteristics simultaneously. For example, although single photon sources in hBN nano-flakes exhibit high rates of photon detection, they are interrupted by slow blinking, which causes severe fluctuations in photon count rates. On the other hand, those in GaN show a high degree of stability and a high repeatability of emission rate, while their photon count rates are lower than those attained from hBN. Such differences can be expected from their geometries. Emitters in hBN are close to the surface or edges of a nano-flake, where electrostatic fields have large effects, making the emitters vulnerable to charge fluctuations. The scenario of severe blinking in hBN is also supported by a recent study that revealed many internal states and frequent relaxations between them, even at low temperature. On the other hand, emitters in GaN are embedded at a depth of a few micrometers, which reduces the fluctuations caused by electrostatic fields. However, the flat, high-refractive-index surface of GaN significantly decreases its photon collection efficiency owing to total internal reflection \cite{Bowman:14}.
Mitigating such surface effects has long been a subject of study for solar cells and light-emitting diodes, and many techniques have been developed. The simplest solution is a solid immersion lens, with variants ranging from micro-hemispheres to meta-surfaces. These methods do not modify the regions near the emitters beneath the surface. We expect that such methods will support high collection efficiencies, enabling us to exploit non-classical photon number statistics, such as those of a Fock state, and to achieve a low photon number uncertainty.
\section{Introduction} We are approaching the close of a century of mathematics, following Hausdorff's seminal work \cite{Hausdorff}, dedicated to a panoply of measure- and dimension-theoretic research regarding the intricate fractal geometry of sets arising from classical Diophantine approximation and its manifold avatars. Abram Samoilovitch Besicovitch and Vojt\v{e}ch Jarn\'ik were among the pioneers who first broke ground at this fertile interface of algebra (number theory) and analysis (geometric measure theory), and this paper is dedicated to the beautiful vistas exposed by their mathematics. Their influential investigations have led to a blossoming area broadly known as \emph{metric Diophantine approximation}, with several connections to classical number theoretic questions, as well as more surprising links to mathematical physics, dynamical systems, fractal geometry, analytic combinatorics, computer science, wireless communication, etc. -- see \cite{CusickFlahive, Shallit, BernikDodson, Dani2, BDD, Lagarias, DodsonKristensen, Kristensen2, KSS, FSU4, Moreira2, BeresnevichVelani6, Hensley_book, IosifescuKraaikamp,WangWu} and the references therein for a sampling of such relationships. We begin with a brief description of two theorems in this vein, which follow from our more general results that are described in later sections. Recall (e.g., \cite{Khinchin_book,Schmidt3}) that an irrational number $x$ is called \emph{badly approximable} if there exists $\epsilon >0$ such that $|x - p/q| \geq \epsilon/q^2$ for any rational $p/q$. To study Diophantine properties it suffices to consider irrationals in the unit interval, and for any irrational $x \in [0,1]$ we abbreviate its simple (or regular) continued fraction expansion as follows: \[ x=\cfrac1{a_1+\cfrac1{a_2+\cfrac1{a_3+ \ddots \;}}} = [a_1,a_2,a_3,\dots], \] where the sequence of positive integers $a_i = a_i(x)$ are known as the partial quotients (or continued fraction entries/digits) of $x$. It is well-known (\cite[Theorem 5F]{Schmidt3} or \cite[Theorem 1.9]{Bugeaud}) that $x$ is badly approximable if and only if the partial quotients in its continued fraction expansion are bounded. Thus, given a finite subset $I \subset \N$, the set $\Lambda_I$ of all numbers in $[0,1]$ whose continued fraction expansions have partial quotients that belong to $I$ forms a subset of the badly approximable numbers. Such sets $\Lambda_I$ are Cantor sets that may be described as conformal iterated function system (CIFS) limit sets \cite{MauldinUrbanski1,CLU}, or as cookie-cutter (Cantor) sets, after Dennis Sullivan \cite{Bedford3}. The study of their Hausdorff dimension has attracted the attention of several researchers over many decades -- for a small sampling of such work across a broad spectrum of fields see \cite{Bumby1, Bumby2, Cusick2, Cusick3, Cusick, Hensley, Hensley3, Hensley4, Hensley5, FlajoletVallee2, FlajoletVallee3, CesarattoVallee2, McMullen_conformal_3, JenkinsonPollicott, JenkinsonPollicott2, JenkinsonPollicott3, JenkinsonPollicott4, FalkNussbaum, Kontorovich} and the references therein. In contrast, estimates and rigorous dimension computation for Cantor sets that arise from infinite subsets $I \subset \N$ (and the measure-theoretic study of limit sets of infinite CIFS, more generally) present a variety of new challenges and there is plenty left to uncover in this vein -- see \cite{Ramharter2, GardnerMauldin, FSU1, HeinemannUrbanski, MauldinUrbanski1, MauldinUrbanski4, FalkNussbaum2, CLU2} for some progress in this direction.
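As a concrete illustration of these objects, the following minimal Python sketch (an illustration only) computes the initial partial quotients of a rational $x \in (0,1)$ by iterating the Gauss map $x \mapsto 1/x - \lfloor 1/x \rfloor$, and checks whether the resulting digits all lie in a given finite set $I$.
\begin{verbatim}
from fractions import Fraction

def partial_quotients(x, max_terms=20):
    # Continued fraction digits a_1, a_2, ... of x in (0,1),
    # obtained by iterating the Gauss map x -> 1/x - floor(1/x).
    digits = []
    while x != 0 and len(digits) < max_terms:
        a = int(Fraction(1) / x)     # a_k = floor(1/x)
        digits.append(a)
        x = Fraction(1) / x - a      # apply the Gauss map
    return digits

digits = partial_quotients(Fraction(5, 8))
print(digits)                        # [1, 1, 1, 2], i.e. 5/8 = [1,1,1,2]
print(set(digits) <= {1, 2})         # True: all digits lie in I = {1,2}
\end{verbatim}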
Perhaps the earliest paper on the Hausdorff dimension of continued fraction Cantor sets was Jarn\'ik's paradigmatic \cite{Jarnik1}, in which he established that for every $N \geq 8$ \[ 1 - \frac{4}{N \log(2)} \leq \HD(F_{\leq N}) \leq 1 - \frac{1}{8N \log(N)}, \] where $\HD$ denotes Hausdorff dimension and $F_{\leq N}$ denotes the set of all numbers in $[0,1]$ whose continued fraction expansions have partial quotients all $\leq N$. As a corollary Jarn\'ik was able to prove his seminal result on full Hausdorff dimension of the set of badly approximable numbers\footnote{Jarn\'ik's result inspired a myriad of extensions, e.g. \cite{Patterson1, Schmidt2, Dani4, FSU4, Beresnevich_BA, Simmons5}, and finding analogues of our results in any of these settings would involve tackling several new challenges.}, which may be described as an increasing union of the $F_{\leq N}$ sets. Over two decades later in 1951, Jarn\'ik's student Jaroslav Kurzweil was able to improve the former bounds in his doctoral work by proving \cite[Theorem VIII]{Kurzweil} that \begin{equation} \label{kurzweil} 1 - \frac{0.99}{N} \leq \HD(F_{\leq N}) \leq 1 - \frac{0.25}{N} \end{equation} for $N \geq 1000$. This was the state of the art for the next four decades until the breakthrough work of Doug Hensley, who leveraged functional analytic techniques\footnote{Hensley's approach arose from a distinguished line of research on Gauss's problem on the distribution of continued fraction partial quotients by Kuzmin, Levy, Sz\"usz, Wirsing, Babenko, Mayer and several others, see Knuth's \cite[pp.362--366]{Knuth} for a beautiful, albeit already dated, survey.} to improve on Kurzweil's result by proving \cite{Hensley} (cf. \cite[Chapter 5]{Bugeaud}) that \begin{equation} \label{hensley} \HD(F_{\leq N}) = 1 - \frac{6}{\pi^2} \frac 1N - \frac{72}{\pi^4} \frac{\log(N)}{N^2} + O\left(\frac{1}{N^2}\right). \end{equation} Note that \eqref{hensley} is stronger than \eqref{kurzweil} since $.25 < 6/\pi^2 < .99$, and also provides an estimate of the error term $o(1/N)$. Hensley's haunting formula \eqref{hensley} leads to some natural questions: what does the remainder term $O(1/N^2)$ look like? Can it be written as $c/N^2 + o(1/N^2)$ for some coefficient $c$? And if so, what does the $o(1/N^2)$ here look like: do more logarithms appear? The following theorem is an example of our main result, Theorem \ref{maintheorem}, applied to the sequence of sets $(F_{\leq N})$: \begin{theorem} \label{theoremFleqN} For each $p\geq 1$, the Hausdorff dimension of $F_{\leq N}$ can be estimated via the formula \begin{equation} \label{HDFleqN} \HD(F_{\leq N}) = 1 + \sum_{i = 1}^{p - 1} \sum_{j = 0}^{i - 1} c_{i,j} \frac{\log^j(N)}{N^i} + O_p\left(\frac{\log^{p - 1}(N)}{N^{p}}\right), \end{equation} where $c_{i,j} \in \R$ are effectively computable constants. Here $O_p$ means that the implied constant of $O$ may depend on $p$. \end{theorem} Note that by \eqref{hensley} we have $c_{1,0} = -6/\pi^2$ and $c_{2,1} = -72/\pi^4$. Our methods yield explicit formulas for the subsequent coefficients $c_{i,j}$ (see Appendix \ref{appendix1} for some example computations), but the formulas for $c_{2,0}$ and further coefficients depend on a certain operator $Q$ on the space of H\"older-continuous functions on $[0,1]$, defined in terms of the Gauss--Kuzmin--Wirsing operator $L$ (cf.~Theorem \ref{theoremoperatorequation}). This operator is given as a series and it appears to be quite challenging to give a closed formula for its value on explicitly given functions such as $\one(x) \df 1$.
In particular, the precise formula for $c_{2,0}$ in terms of $Q$ is quite complicated; see \eqref{c20}. However, the sequence of coefficients $(c_{i,i - 1})$ turns out to have a relatively simple expression: \begin{equation} \label{cii-1} c_{i,i-1} = -\frac{2^{i - 1}\cdot i^{i - 2}}{(i - 1)!} \left(\frac{6}{\pi^2}\right)^i . \end{equation} This includes the two coefficients $c_{1,0}$ and $c_{2,1}$ computed by Hensley. Our techniques can be used to estimate the Hausdorff dimensions of many different sequences of sets coming from conformal iterated function systems, such as sequences of sets $(F_N)$ where each $F_N$ is specified by restricting continued fraction partial quotients to lie in some set $E_N \subset \N$, such that the sequence of characteristic functions $(\one_{E_N})$ converges pointwise to some characteristic function $\one_E$ (we denote such convergence by $E_N \to E$). In some cases, the formula for the Hausdorff dimension coming from Theorem \ref{maintheorem} ends up being far more complicated than \eqref{HDFleqN}. For instance, consider $F_{\geq N}$, the set of elements of $[0,1]$ whose continued fraction partial quotients are all $\geq N$. The earliest estimates on the dimension of $F_{\geq N}$ were obtained in the late 1930s by Irving John (Jack) Good. Good's work \cite{Good,Good2}, which was undertaken on Besicovitch's suggestion and awarded the prestigious Smith Prize at the University of Cambridge \cite{Banks}, has since inspired a wealth of research on the dimension theory of continued fraction Cantor sets. Good proved that for $N \geq 20$ \[ \frac12 + \frac{1}{2\log(N+2)} \leq \HD(F_{\geq N}) \leq \frac12 + \frac{\log\log(N-1)}{2\log(N-1)} \cdot \] Applying our main result, Theorem \ref{maintheorem}, to the sequence of sets $(F_{\geq N})$ leads to the following strengthening of Good's result: \begin{theorem} \label{theoremFgeqN} For each $p\geq 1$, the Hausdorff dimension of $F_{\geq N}$ can be estimated via the formula \begin{equation} \label{HDFgeqN} \begin{split} \HD(F_{\geq N}) &= \frac12+ \frac1{2\log(N)}\left[\log\log(N) - \log\log\log(N) + \sum_{k = 1}^\infty \sum_{\ell = 1}^k c_{k,\ell} \frac{\log^\ell \log\log(N)}{\log^k \log(N)}\right.\\ &\left.+ \sum_{i = 1}^{p - 1} \sum_{j = 1}^\infty \sum_{k = -j}^\infty \sum_{\ell = 0}^{j + k} c_{i,j,k,\ell} \frac{\log^\ell\log\log(N)}{N^i \log^j(N) \log^k\log(N)} \right] + O_p\left(\frac{\log\log(N)}{N^p \log(N)}\right) \end{split} \end{equation} where $c_{k,\ell} \in \Q$ and $c_{i,j,k,\ell} \in \Q$ are appropriate constants that can be computed explicitly. For example, $c_{1,1} = -1$, $c_{2,1} = 1$, $c_{2,2} = -1/2$, $c_{3,1} = -1$, $c_{3,2} = 3/2$, $c_{3,3} = -1/3$, and $c_{1,1,-1,0} = 1/2$. \end{theorem} \bigskip {\bf Outline for the sequel.} In Section \ref{sectionoperatorequation} we prove a general result in the setting of Banach spaces that will introduce the key equation leading to \eqref{HDFleqN} and \eqref{HDFgeqN}. In Section \ref{sectionPACIFS} we introduce a class of conformal iterated function systems that includes the class of Gauss IFSes, to which our results will apply. In Section \ref{sectiontheoremscase12} we state our main theorem, of which Theorems \ref{theoremFleqN} and \ref{theoremFgeqN} are special cases. In Section \ref{sectionproof} we prove this theorem, and in Section \ref{sectiongauss} we provide examples where the theorem applies, in particular Proposition \ref{propositionpolysequence} which corresponds to the above theorems. 
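Before outlining the paper, we note that the dimensions $\HD(F_{\leq N})$ appearing in these expansions are easy to approximate numerically, which provides a concrete check on the leading terms. The following minimal Python sketch is an illustration only and is \emph{not} the method of this paper: it discretizes the transfer operator $L_{N,s} f(x) = \sum_{n = 1}^N (n+x)^{-2s} f(1/(n+x))$ (which appears as \eqref{LNdef} below) by linear interpolation on a uniform grid, and bisects for the value of $s$ at which the spectral radius equals $1$; the grid-based interpolation limits the accuracy to a few digits.
\begin{verbatim}
import numpy as np

def dim_F_leq_N(N, grid=300):
    # Collocation of L_{N,s} f(x) = sum_{n<=N} (n+x)^{-2s} f(1/(n+x)) on a
    # uniform grid over [0,1], interpolating linearly at the points 1/(n+x).
    x = np.linspace(0.0, 1.0, grid)

    def rho(s):
        M = np.zeros((grid, grid))
        rows = np.arange(grid)
        for n in range(1, N + 1):
            w = (n + x) ** (-2.0 * s)        # weights (n+x)^{-2s}
            y = (grid - 1) / (n + x)         # image points, in grid units
            k = np.minimum(y.astype(int), grid - 2)
            t = y - k                        # linear interpolation weights
            M[rows, k] += w * (1.0 - t)
            M[rows, k + 1] += w * t
        return np.max(np.abs(np.linalg.eigvals(M)))

    lo, hi = 0.2, 1.0        # rho(s) is decreasing, with rho(lo) > 1 > rho(hi)
    for _ in range(30):      # bisect for the root of rho(s) = 1
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

for N in (2, 5, 10, 20):     # N = 2 should give roughly 0.5313
    print(N, round(dim_F_leq_N(N), 4),
          round(1 - (6/np.pi**2)/N - (72/np.pi**4)*np.log(N)/N**2, 4))
\end{verbatim}
With a modest grid this already reproduces the known value $\HD(F_{\leq 2}) \approx 0.5313$ and exhibits the $1 - \Theta(1/N)$ behavior of \eqref{kurzweil} and \eqref{hensley}.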
Sections \ref{sectionspectralgap}, \ref{sectionvanishing}, and \ref{sectionlogloglog} contain auxiliary results necessary for these proofs and examples. Section \ref{sectionopen} concludes with some directions for further research. Finally, in Appendix \ref{appendix1} we compute the coefficients $c_{i,i-1}$ and $c_{2,0}$ appearing in Theorem \ref{theoremFleqN}. \bigskip {\bf Conventions.} We use the standard Landau notation $O(\cdot)$, $\Theta(\cdot)$, as well as writing $A \lesssim B$ when $A = O(B)$ and $A \equiv_X B$ when $B - A = O(X)$. If $A \lesssim B \lesssim A$, we write $A \asymp B$. Recall that $\Q[x]$ denotes the ring of polynomials in the variable $x$ with coefficients in $\Q$. All linear operators between Banach spaces are assumed to be bounded. By default balls are closed, e.g. $B_\C(0,1) \df \{ z \in \C : |z| \leq 1\}$, and we denote open balls with $^\circ$, e.g. $B^\circ_\C(0,1) \df \{ z \in \C : |z| < 1\}$. Note that we use the Iverson bracket notation in several places in the text: $[\Phi] = 1$ when $\Phi$ is true and $[\Phi] = 0$ when $\Phi$ is false. The notation $F_{|i}$ represents the partial derivative of the function $F$ with respect to the $i$th coordinate. Similarly $F_{|ij}$ denotes a double derivative achieved by taking a partial derivative with respect to the $i$th coordinate followed by a partial derivative with respect to the $j$th coordinate; and $F_{|ijk}$ denotes a triple derivative, etc. \bigskip {\bf Acknowledgements.} This research began on 12$^{th}$ March 2018 when the authors met at the American Institute of Mathematics via their SQuaRE program. We thank the institute and their staff for their hospitality and excellent working conditions. In particular, we thank Estelle Basor for her continued encouragement and support. The first-named author was supported in part by a 2017-2018 Faculty Research Grant from the University of Wisconsin-La Crosse. He thanks the scientific and organizing committees of the \href{http://sites.math.u-pem.fr/oneworld-fractals/}{\it One-world Fractals and Related Fields} seminar, in particular Stéphane Seuret and Julien Barral, for the opportunity to speak about this work at his first virtual research lecture. The third-named author was supported in part by the EPSRC Programme Grant EP/J018260/1, and also by a fellowship from the Royal Society. The fourth-named author was supported in part by a Simons Foundation Grant 581668. \section{An abstract operator formula} \label{sectionoperatorequation} The idea for proving Theorem \ref{theoremFleqN} is to consider the Perron--Frobenius operators $L,L_N: C([0,1]) \to C([0,1])$ defined by the formulas \begin{align} \label{Ldef} L f(x) &= \sum_{n = 1}^\infty \frac{1}{(n+x)^2} f\left(\frac{1}{n+x}\right)\\ \label{LNdef} L_N f(x) &= \sum_{n = 1}^N \frac{1}{(n+x)^{2\delta_N}} f\left(\frac{1}{n+x}\right) \end{align} where $\delta_N$ is the Hausdorff dimension of $F_{\leq N}$. Note that $L$ is the well-known Gauss--Kuzmin--Wirsing operator\footnote{The operator $L$ is variously referred to in the literature as the transfer operator for Gauss's continued fraction map, or as the Perron--Frobenius, Ruelle--Perron--Frobenius, Ruelle--Mayer, or Ruelle operator, etc.}. By well-known dynamical results (see \6\ref{sectionPACIFS}), the definition $\delta_N = \HD(F_{\leq N})$ can be encoded as the assertion that the spectral radius of $L_N$ is 1, which is furthermore equivalent to the assertion that $L_N$ fixes a positive function $g_N$ and the dual operator $L_N^*$ fixes a positive measure $\mu_N$. 
(Similarly, $L$ fixes the positive function $g(x) = 1/(1 + x)$, and its dual $L^*$ fixes $\mu$, the Lebesgue measure on $[0,1]$.) We wish to convert this assertion into a formula involving $L_N$, which in turn determines a relation between $N$ and $\delta_N$. To this end we introduce some notation. \begin{notation*} Let $\BB$ be a complex Banach space, let $\BB^*$ be its dual space, i.e. the Banach space of all bounded linear functionals from $\BB$ to $\C$, and let $\LL(\BB)$ denote the Banach space of all bounded linear operators from $\BB$ to $\BB$. Fix $f\in \BB$ and $\sigma\in \BB^*$. Then $\sigma f$ denotes the value of $\sigma$ on $f$, while $f \sigma$ (which is our shorthand for $f \otimes \sigma$) denotes the element of $\LL(\BB)$ defined as $(f\sigma)f' \df (\sigma f') f$. Note that $f \sigma$ is a projection when $\sigma f = 1$. If $\sigma f \neq 0$, then $(\sigma f)^{-1} f \sigma$ is a projection, while if $\sigma f = 0$, then $f \sigma$ is a nilpotent operator of order $2$. If $L \in \LL(\BB)$, then $\sigma L$ is the element of $\BB^*$ defined by the formula $(\sigma L) f \df \sigma (L f)$. The map $L^* : \sigma \mapsto \sigma L$ from $\BB^*$ to $\BB^*$ is called the dual operator of $L$. However, we avoid using the notation $L^*$ in formulas, so we will write $\sigma L$ rather than $L^* \sigma$. This notation is analogous to the notation used in matrix multiplication, with elements of $\BB$, $\BB^*$, and $\LL(\BB)$ corresponding to column, row, and square matrices, respectively. In particular, the associative laws \begin{align*} &(\sigma L) f = \sigma (L f), & &(f \sigma) f' = f (\sigma f'), & &\sigma' (f \sigma) = (\sigma' f) \sigma \end{align*} all hold by definition. If $L f = f$ we call $f$ a right fixed point of $L$, and if $\sigma L = \sigma$ we call $\sigma$ a left fixed point of $L$ (or equivalently, a fixed point of the dual operator $L^*$). \end{notation*} \begin{theorem} \label{theoremoperatorequation} Let $\BB$ be a Banach space. Suppose that $L$ and $L'$ in $\LL(\BB)$ have respective right fixed points $g,g'$, and let $\mu \in \BB^*$ be a left fixed point of $L$, such that $\mu g , \mu g' \neq 0$. Let $$R \df L - c g \mu \in \LL(\BB),$$ where $c = 1/\mu g$, and let $$\Delta \df L' - L \in \LL(\BB),$$ and suppose that $\rho(R)<1$, where $\rho$ denotes the spectral radius. Also suppose that \begin{equation} \label{RnDeltabound} \sum_{n = 0}^\infty \|R^n\|\cdot\|\Delta\| < 1. \end{equation} Then \begin{equation} \label{operatorequation} \sum_{p = 0}^\infty \mu \Delta (Q\Delta)^p g = 0, \end{equation} where $Q \df \sum_{n = 0}^\infty R^n \in \LL(\BB)$. Note that $Q$ is well-defined since $\rho(R)<1$. \end{theorem} \begin{remark*} The hypothesis that $g'$ is a right fixed point of $L'$ such that $\mu g' \neq 0$ may be replaced by the hypothesis that $\mu'$ is a left fixed point of $L'$ such that $\mu' g \neq 0$, with minimal changes to the proof. (Both hypotheses are satisfied in our applications of Theorem \ref{theoremoperatorequation}.) \end{remark*} \begin{proof} The idea is to start with the equation $L' g' = g'$, expressing the fact that $g'$ is a right fixed point for $L'$, then multiply on the left by a measure $\mu'$ to get a scalar equation, and finally rearrange to get \eqref{operatorequation}. Specifically, let \[ \mu' \df \sum_{m = 0}^\infty \mu (L' - c g \mu)^m = \sum_{m = 0}^\infty \mu (R + \Delta)^m. \] (We will show later that this series converges in $\BB^*$.)
We have $\mu' = \mu + \mu' (L' - c g \mu)$,\Footnote{Plugging in the formula $\mu' g = \mu g$ proven below, it follows that $\mu'$ is a left fixed point of $L'$. However, this fact is irrelevant to the proof, except as an indicator that our choice of $\mu'$ is not as arbitrary as it may initially appear to be.} and thus \[ \mu' L' - \mu' = \mu' L' - (\mu + \mu' (L' - c g \mu)) = (c \mu' g - 1) \mu . \] Multiplying on the right by $g'$ and using the fact that $g'$ is fixed gives \[ 0 = \mu' L' g' - \mu' g' = (c \mu' g - 1) (\mu g') \] and thus since $\mu g' \neq 0$, \[ \mu g = 1/c = \mu' g = \sum_{m = 0}^\infty \mu (R + \Delta)^m g. \] Now by the distributive law $\sum_{m=0}^\infty (R + \Delta)^m$ is the sum of all finite ordered products of $R$ and $\Delta$, i.e. \[ \sum_{m=0}^\infty (R + \Delta)^m = \sum_{p = 1}^\infty \sum_{n_1,\ldots,n_p} R^{n_1} \Delta R^{n_2} \cdots R^{n_{p-1}} \Delta R^{n_p} = \sum_{p = 1}^\infty (Q\Delta)^{p-1} Q. \] These three series all converge absolutely since \begin{align*} \sum_{m=0}^\infty \| (R + \Delta)^m \| &\leq \sum_{p = 1}^\infty \sum_{n_1,\ldots,n_p} \|R^{n_1} \Delta R^{n_2} \cdots R^{n_{p-1}} \Delta R^{n_p}\| \\ &\leq \sum_{p=1}^\infty \left(\sum_{n=0}^\infty \|R^n\|\right)^p \|\Delta\|^{p-1} \underset{\eqref{RnDeltabound}}{<} \infty. \end{align*} Note that this implies that the series defining $\mu'$ converges. Thus, we have \[ \mu g = \sum_{p = 0}^\infty \mu (Q\Delta)^p Q g = \mu Q g + \sum_{p = 0}^\infty \mu Q\Delta (Q\Delta)^p Q g. \] Since $g$ is a right fixed point for $L$, we have $R g = L g - c g \mu g = g - g = 0$, and thus $Q g = g$. Similarly, since $\mu$ is a left fixed point for $L$, we have $\mu R = 0$ and $\mu Q = \mu$. Finally, using the identities $Q g = g$ and $\mu Q = \mu$ in the previous displayed equation, we derive \eqref{operatorequation}. \end{proof} \begin{remark*} \label{remarkspectral} Since $R g = 0$ and $\mu R = 0$ (see the last paragraph of the proof above), it follows that for all $n \geq 1$ we have \[ L^n = (R + c g \mu)^n = R^n + (c g \mu)^n = R^n + c g \mu, \] and thus \[ Q = I + \sum_{n = 1}^\infty (L^n - c g \mu). \] \end{remark*} \begin{remark*} In the case where $L$ and $L'$ are Perron--Frobenius operators of similarity IFSes, \eqref{operatorequation} reduces to the Moran--Hutchinson equation for the latter IFS (assuming the corresponding equation for the former), see Proposition \ref{propositionhutchinson}. \end{remark*} The idea of the proof of Theorem \ref{theoremFleqN} is now to apply Theorem \ref{theoremoperatorequation} with $L$ as in \eqref{Ldef} and $L' = L_N$ as in \eqref{LNdef}, and then to solve the resulting formula \eqref{operatorequation} for $\delta_N$. This determines the sought-after relation between $N$ and $\delta_N$. We refer to the subsequent sections for details on how this is implemented. The idea of the proof of Theorem \ref{theoremFgeqN} is similar, except instead of taking $L$ as in \eqref{Ldef} we take $L f(x) = f(0)$, or equivalently $L = h \nu$ where $\nu$ is the Dirac point mass at $0$ and $h = \one$ (the motivation for this choice will become clear in subsequent sections, in particular Lemma \ref{lemmabetaf} and Remark \ref{remarkgaussvalues}).
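Theorem \ref{theoremoperatorequation} is easy to test numerically in finite dimensions. In the following minimal Python sketch (an illustration only), a random column-stochastic matrix plays the role of $L$, so that $\mu = (1,\ldots,1)$ is a left fixed point, and a nearby matrix rescaled by its Perron eigenvalue plays the role of $L'$; the left-hand side of \eqref{operatorequation}, summed in closed form as $\mu \Delta (I - Q\Delta)^{-1} g$, then vanishes to machine precision.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d = 6

L = rng.random((d, d))
L /= L.sum(axis=0)                     # column-stochastic: mu L = mu for mu = 1
mu = np.ones(d)                        # left fixed point of L

w, V = np.linalg.eig(L)                # right fixed point of L: the Perron
g = np.real(V[:, np.argmin(np.abs(w - 1))])   # eigenvector, eigenvalue 1
g /= g.sum()                           # scale so that mu g = 1, hence c = 1

M = L + 0.01 * rng.random((d, d))      # perturb L, then rescale by the Perron
lam = np.max(np.real(np.linalg.eigvals(M)))   # eigenvalue so that rho(L') = 1;
Lp = M / lam                           # then L' g' = g' for its Perron vector

R = L - np.outer(g, mu)                # R = L - c g mu
Q = np.linalg.inv(np.eye(d) - R)       # Q = sum_n R^n, valid since rho(R) < 1
Delta = Lp - L

# sum_p mu Delta (Q Delta)^p g = mu Delta (I - Q Delta)^{-1} g should vanish:
print(abs(mu @ Delta @ np.linalg.inv(np.eye(d) - Q @ Delta) @ g))  # ~1e-16
\end{verbatim}
\section{Point-accumulating conformal iterated function systems} \label{sectionPACIFS} The sets $F_{\leq N}$ and $F_{\geq N}$ can be viewed as limit sets of certain \emph{conformal iterated function systems}, or CIFSes.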
CIFSes were introduced by Mauldin and Urba\'nski \cite{MauldinUrbanski1} (see Appendix \ref{appendixCIFS} for their definition), and their generalizations \emph{conformal graph directed Markov systems} (CGDMSes) were studied in \cite{MauldinUrbanski2}. We will consider a certain class of CIFSes, which we define as follows. \begin{definition} \label{definitionPACIFS} Fix a quintuple $(U,V,u,v,q)$ such that \begin{itemize} \item $U\subset\R$ is a bounded open set containing $0$; \item $V \subset \R$ is a bounded connected open set; \item $u:U\times V \to V$ is a real-analytic map such that \begin{itemize} \item the family of maps $(u_b)_{b \in U}$ defined by \[ V\ni x\mapsto u_b(x) = u(b,x) \in V \] is conformal and uniformly contracting (i.e. for each $(b,x) \in U\times V$, the map $u_b'(x)$ is a similarity whose dilatation constant $|u_b'(x)|$ is $\leq \lambda$ for some uniform constant $\lambda<1$), and \item $\bigcup_{b\in U} u_b(V)$ is precompact in $V$; and \end{itemize} \item $v:U\times V \to \R$ is a bounded real-analytic function and $q > 0$ is a parameter such that for all $b\in U$ and $x\in V$, \begin{equation} \label{vdef} |u_b'(x)| = |b|^q e^{v(b,x)}. \end{equation} Note that \eqref{vdef} implies that $u_0$ is constant; for convenience, in what follows we assume that this constant is $0$, i.e. that $u_0(x) = 0$ for all $x$. Formula \eqref{vdef} also implies that for all $x\in V$, $q$ is the order of the analytic function $b \mapsto u_b(x)$ at $0$. \end{itemize} If $S \subset U\butnot\{0\}$ is a set whose only accumulation point, if any, is $0$, then the family of maps $(u_b)_{b\in S}$ is called a \emph{point-accumulating conformal iterated function system (PACIFS)} over $(U,V,u,v,q)$. For conciseness we will usually omit ``over $(U,V,u,v,q)$'' when referring to a PACIFS $(u_b)_{b\in S}$. The \emph{limit set} of the PACIFS $(u_b)_{b\in S}$ is denoted by $\Lambda_S$, and is the image of the projection map $\pi: \Sigma \df S^\N \to V$ defined by the formula \[ \pi(\omega) = \lim_{n\to\infty} u_{\omega_1}\circ\cdots\circ u_{\omega_n}(x_0), \] where $x_0\in V$ is an arbitrary point. This limit exists and is independent of $x_0$ because the maps $(u_b)_{b\in S}$ are uniformly contracting and $V$ is connected, and $\Lambda_S \subset V$ because of the assumption that $\bigcup_{b\in U} u_b(V)$ is precompact in $V$. \end{definition} \begin{remark*} Our results also hold in the following more general setting: $U = \bigcup_{i=1}^k (U_i\times \{i\}) \cup F$, where each $U_i$ is a bounded open subset of $\R^{d_i}$ containing $0$, $F$ is a finite set, $V$ is a bounded connected open subset of $\R^d$, $u,v$ are analytic on each $U_i\times\{i\}\times V$ and on each $\{b\}\times V$ for $b\in F$, and \eqref{vdef} is replaced by the formula $|u_{b,i}'(x)| = |b|^{q_i} e^{v_i(b,x)}$, where $b\in U_i$ and $x\in V$. This can be proven with only minor modifications (but notational complications) to the definitions and proofs. \end{remark*} \begin{remark*} Our PACIFSes are not always CIFSes in the sense of \cite{MauldinUrbanski1,MauldinUrbanski2}, because they do not necessarily satisfy the open set condition (OSC), see Appendix \ref{appendixCIFS}. However, in Definition \ref{definitionOSCPACIFS} we define the class of OSC PACIFSes, and these are CIFSes in the sense of \cite{MauldinUrbanski1,MauldinUrbanski2}.
\end{remark*} \subsection{Examples of CIFS that are not PACIFS} Though this paper is concerned with PACIFSes, we include two non-examples for the benefit of our readers who are familiar with the well-studied notion of CIFSes. \begin{example} \label{examplecocantor} Let $C$ be the middle-thirds Cantor set, and let $\II$ be the unique disjoint collection of intervals such that \[ [0,1]\butnot C = \bigcup_{I\in \II} \Int(I). \] For each $I\in \II$, let $u_I:[0,1] \to I$ be the unique order-preserving bijective similarity between $[0,1]$ and $I$. Then $(u_I)_{I\in\II}$ is a similarity IFS (and thus also a conformal IFS), but it cannot be realized as a PACIFS. Indeed, if $(u_b)_{b\in S}$ is a PACIFS then $u_b \to p$ uniformly for some point $p$ (with $p=0$ according to our convention), but if $(I_n)$ is a sequence of distinct elements of $\II$, then the limit of the sequence $(u_{I_n})$ can be any point in $C$, and in particular is not limited to a single point. \end{example} \begin{example} \label{exampleNotPACIFS} For each $n$ let $u_n:[0,1] \to [0,1]$ be defined by \[ u_n(x) = \frac{1 + x^n}{2^n}\cdot \] Then $(u_n)_{n\geq 1}$ is a conformal IFS, since the sequence $(u_n)$ is bounded in the $\CC^2$ norm. However, it cannot be obviously realized as a PACIFS, since this would require a finite-dimensional space $U$ to be able to parameterize the sequence $(u_n)$. \end{example} \subsection{Symbolic and geometric Perron--Frobenius operators} For the remainder of this section, we fix $(U,V,u,v,q)$ and let $(u_b)_{b\in S}$ be a PACIFS as in Definition \ref{definitionPACIFS}. \begin{definition} \label{definitionsymbolicPF} Fix $s \in \R$. If $\sum_{b\in S} |b|^{qs} < \infty$, we let $\w L = \w L_{S,s}:C(\Sigma)\to C(\Sigma)$ denote the \textbf{symbolic Perron--Frobenius operator} \begin{equation} \label{symbolicPF} \w L f(\omega) \df \sum_{b\in S} \left|u_b'\circ\pi(\omega)\right|^s f(b\ast \omega), \end{equation} where $\ast$ denotes concatenation, and we let \[ P = P(S,s) \df \log\rho(\w L) \] denote the logarithm of the spectral radius of $\w L$. If $\sum_{b\in S} |b|^{qs} = \infty$, then $\w L$ is not defined and we instead let $P \df +\infty$. Note that by \eqref{vdef}, we have \begin{equation} \label{partitionfunction} e^P \asymp \|\w L\| \asymp \sum_{b\in S} |b|^{qs}, \end{equation} where the middle expression is interpreted as $\infty$ when $\w L$ is not defined. Here the implied constants may depend on $(U,V,u,v,q)$ but not on the PACIFS $(u_b)_{b\in S}$. \end{definition} We now wish to recall several results from \cite{MauldinUrbanski2}. These results are generally stated for what \cite{MauldinUrbanski2} calls CIFSes, and what we will call OSC CIFSes (because they assume the open set condition in addition to conformality). Although PACIFSes are not necessarily OSC CIFSes, we can show that they satisfy \cite[\64.2: (4d),(4e)]{MauldinUrbanski2} in the definition of OSC CIFSes as well as parts of \cite[\64.2: (4a),(4c)]{MauldinUrbanski2}: \begin{itemize} \item[(4a),(4d)] Let $X = \{x\in V : \dist(x,\Lambda_S) \leq \epsilon\}$ for some sufficiently small $\epsilon > 0$. This satisfies all desired properties except connectedness. \item[(4c)] Let $W = V$. This satisfies all desired properties except that the extension may not be globally invertible (it is locally invertible). \item[(4e)] By \eqref{vdef}, this is true with $\alpha=1$. \end{itemize} These properties are enough to prove the following results for all PACIFSes. 
However, we note that for our main example of Gauss IFSes, all conditions of the OSC CIFS definition are satisfied. \begin{itemize} \item[(A1)] Convex and decreasing pressure function: $P(S,\cdot)$ is equal to the standard pressure function of $(u_b)_{b\in S}$ (cf.~\cite[(2.1) / pp.54-55 / p.78]{MauldinUrbanski2}) and in particular is convex and decreasing \cite[Proposition 4.2.8(b)]{MauldinUrbanski2} \item[(A2)] Existence of eigenfunctions and eigenmeasures: for each $s\geq 0$ such that $-\infty < P < +\infty$, for some $\beta > 0$ there exist a positive $\beta$-H\"older continuous function $\w g = \w g_{S,s} \in \BB = \HH^\beta(\Sigma)$ and a positive measure $\w \mu = \w \mu_{S,s} \in \MM_+(\Sigma) \subset \BB^*$ such that \begin{align*} &\w L \w g = e^P \w g & &\text{and} & &\w \mu \w L = e^P \w \mu \end{align*} see \cite[Theorem 2.7.3 / 3.2.3 / 6.1.2 and Theorem 2.4.3]{MauldinUrbanski2}. Note that if $P = 0$, then this means that $\w g$ and $\w \mu$ are right and left fixed points, respectively, for $\w L$. \item[(A3)] Spectral gap\Footnote{\label{spectralgap} We mean a spectral gap in the sense of \cite{Rugh2}. Indeed, let $\w R \df \w L - c e^P \w g \w \mu$. The inequality $\|\w R^n\| \leq C e^{P n} \gamma^n$ implies that $\rho(\w R) \leq e^P \gamma < \rho(\w L)$, and conversely if $\rho(\w R) < \rho(\w L)$, we may take $\gamma \in e^{-P} (\rho(\w R),\rho(\w L))$, and then we have $\|\w R^n\| \leq C e^{n P} \gamma^n$ for some $C$. So the inequality $$e^{-n P} \|\w R^n\| = \| e^{-n P} \w L^n - c \w g \w \mu\| \leq C \gamma^n$$ is equivalent to the operator $\w L$ having a spectral gap in the sense of \cite{Rugh2}: $\w L$ has a simple isolated eigenvalue the modulus of which equals $\rho(\w L)$, and that the remaining part of the spectrum is contained in a disk centered at zero and of radius strictly smaller than $\rho(\w L)$.}: with $s,\beta$ as above, if $c = 1/\w \mu \w g$, then \[ \|e^{-n P} \w L^n - c \w g \w \mu\|_\beta \leq C \gamma^n \] for some $C < \infty$ and $\gamma < 1$ \cite[Theorem 2.4.6(b)]{MauldinUrbanski2}. Here the notation $\|\cdot\|_\beta$ means that the operator norm is taken with respect to the space $\BB = \HH^\beta(\Sigma)$, rather than the space $C(\Sigma)$ that $\w L$ was originally defined on. \item[(A4)] Invariant measure: the shift map $\sigma: \Sigma \to \Sigma$ defined by \[ \sigma(b \ast \omega) = \omega \] has the invariant measure \[ \w\nu = c \w \mu M_{\w g}, \] where $M_{\w g}$ denotes the operator of multiplication by $\w g$ \cite[Proposition 2.4.7]{MauldinUrbanski2}. \end{itemize} For the purpose of later calculation, we define and compute the Lyapunov exponent of the dynamical system $(\sigma,\w\nu)$: \begin{equation} \label{chidef} \begin{split} \chi &\df \int_\Sigma \log\left|(u_{\omega_1}^{-1})'\circ\pi(\omega)\right|\; \dee \w\nu(\omega)\\ &= \sum_{b\in S} \int_{b \ast \Sigma} \log\left|(u_b^{-1})'\circ\pi(\omega)\right| \; \w g(\omega) \;\dee\w \mu(\omega)\\ &= -e^{-P} \int_\Sigma \sum_{b\in S} \left|u_b'\circ\pi(\omega)\right|^s \log\left|u_b'\circ\pi(\omega)\right| \; \w g(b \ast \omega) \;\dee\w \mu(\omega) \end{split} \end{equation} Note that $\chi > 0$, since $|(u_{\omega_1}^{-1})' \circ \pi(\omega)| > 1$ for all $\omega\in\Sigma$. In what follows we will need a version of the Perron--Frobenius operator that operates on the space of holomorphic functions on a complex neighborhood of $\Lambda_S$. 
\begin{definition} \label{definitioncomplexPF} Let $U_\C,V_\C\subset \C$ be neighborhoods of $S\cup\{0\}$ and $\cl{\Lambda_S}$, respectively, such that $V_\C$ is connected, and $u,v$ can be extended to bounded holomorphic functions from $U_\C\times V_\C$ to $V_\C$ and to $\C$ respectively, such that the family of maps $(u_b)_{b\in U}$ is still uniformly contracting. For each $s \in \R$ such that $\sum_{b\in S} |b|^{qs} < \infty$ we consider the \textbf{geometric Perron--Frobenius operator} $L = L_{S,s}: C(V_\C)\to C(V_\C)$ defined by the formula \begin{equation} \label{complexPF} L f(x) \df \sum_{b\in S} |b|^{q s} e^{s v(b,x)} \; f\circ u_b(x), \end{equation} which will be used throughout the paper. Note that $L$ is related to $\w L$ by the semiconjugacy relation \[ \w L \,\Pi = \Pi \,L, \] where the operator $\Pi : \Lip(V_\C) \to \HH^\beta(\Sigma)$ defined by $\Pi f = f\circ \pi$ is continuous but usually not surjective. \end{definition} Note that the following bounded distortion property holds: for all $n\in\N$, $\omega \in S^n$, and $x \in V_\C$, we have \[ \big|v(\omega_1,x) + v(\omega_2,u_{\omega_1}(x)) + \ldots + v(\omega_n,u_{\omega_{n-1}}\circ\cdots\circ u_{\omega_1}(x)) \big| \lesssim 1. \] This is because of the uniform contraction property of $(u_b)_{b\in S}$, together with the Lipschitz continuity of $v$. The arguments of \cite[\62]{MauldinUrbanski2} can be easily adapted to the setting of \eqref{complexPF}, yielding the following results: \begin{itemize} \item[(B1)] Spectral Radius: Due to the bounded distortion property described above, the spectral radii of the operators \eqref{symbolicPF} and \eqref{complexPF} are both equal to $e^P$. \item[(B2)] Existence of eigenfunctions and eigenmeasures: If $-\infty < P < +\infty$, then there exist a Lipschitz continuous function $g \in \BB = \Lip(V_\C)$ which is positive on $V_\R \df V_\C \cap \R$ and a positive measure $\mu \in \MM_+(V_\C) \subset \BB^*$ such that \begin{align*} &L g = e^P g & &\text{and} & &\mu L = e^P \mu. \end{align*} After renormalization, we have $\Pi \,g = \w g$ and $\w\mu \,\Pi = \mu$; in particular, $\mu$ is supported on $\Lambda_S$. \item[(B3)] Spectral gap\Footnote{See footnote \ref{spectralgap} attached to (A3).}: We have \[ \|e^{-n P} L^n - c g \mu\|_1 \leq C \gamma^n \] for some $C < \infty$ and $\gamma < 1$, where $c = 1/\mu g$, and $\|\cdot\|_1$ indicates that the operator norm is being taken with respect to the space $\BB = \Lip(V_\C)$, rather than the space $C(V_\C)$ that $L$ was originally defined on. \item[(B4)] Lyapunov exponent: Using the formulas $\w L \,\Pi = \Pi \,L$, $\Pi \,g = \w g$, and $\w \mu\, \Pi = \mu$, we get \[ \chi = -c e^{-P} \mu \alpha_1 g, \] where \begin{equation} \label{alpha1def} \alpha_1 f(x) \df \sum_{b\in S} |u_b'(x)|^s \log|u_b'(x)| \; f\circ u_b(x). \end{equation} In what follows we will need to consider the ``unnormalized'' Lyapunov exponent \[ \w\chi \df c^{-1} e^P \chi = -\mu \alpha_1 g > 0. \] \end{itemize} Now by (B3) we have \[ e^{-n P} L^n \one \to c g \mu \one \] uniformly, and since $L$ preserves the space of holomorphic functions, it follows that $g$ is holomorphic\Footnote{A similar result was proven in \cite[Corollary 6.1.4]{MauldinUrbanski2}, though the hypotheses and conclusion are somewhat different. Note that the invariance hypothesis on $U$ in \cite[Corollary 6.1.4]{MauldinUrbanski2} should be that each element of $S$ can be extended to a univalent holomorphic map from $U$ to itself, rather than what is written there.}.
Although (B3) is stated only for the Lipschitz norm $\|\cdot\|_1$, for holomorphic functions it holds for the sup norm $\|\cdot\|_\infty$ as well. Indeed, recall Cauchy's inequality: for every bounded holomorphic function $f$ whose domain includes $B_\C(z,\rho)$ we have \begin{equation} \label{cauchy} \frac1{i!} |f^{(i)}(z)| \leq \rho^{-i} \|f\|_\infty . \end{equation} In particular, if $K \subset V_\C$ is compact then $\|f \given K\|_{1} \lesssim \|f\|_\infty$. Since $\bigcup_{b\in U} u_b(V_\C)$ is precompact in $V_\C$, it follows that $\|L f\|_1 \lesssim \|f\|_\infty$ and thus \[ \|e^{-n P} L^n f - c g \mu f\|_\infty \leq \|e^{-n P} L^n f - c g \mu f\|_1 \leq C \gamma^{n - 1} \|L f\|_1 \lesssim \gamma^n \|f\|_\infty \] and therefore we have \[ \|e^{-n P} L^n - c g \mu\|_\infty \lesssim \gamma^n. \] The value of $s$ such that $P(S,s) = 0$ is particularly important, if such a value exists; hence we make the following definition: \begin{definition}[Cf. {\cite[p.78 and Definition 4.3.1]{MauldinUrbanski2}}] Given a set $S\subset U\butnot\{0\}$ as in Definition \ref{definitionPACIFS}, we call $S$ as well as the associated PACIFS $(u_b)_{b\in S}$ \emph{regular} if there exists $\delta\geq 0$ such that $P(S,\delta) = 0$, and \emph{strongly regular} if furthermore there exists $\kappa > 0$ such that $P(S,\delta - \kappa) < +\infty$. Equivalently, $S$ is strongly regular if there exists $s$ such that $0 < P(S,s) < +\infty$. \end{definition} \subsection{Bowen's formula} In what follows we let \[ \delta = \delta_S \df \inf\{s \in \R : P(S,s) \leq 0\}, \] and we notice that $P(S,\delta_S) = 0$ if and only if $S$ is regular. If $S$ is regular, we write $L_S = L_{S,\delta_S}$. We also let \[ \Theta_S \df \inf\{s \in\R : P(S,s) < +\infty\} \] and we note that $S$ is strongly regular if and only if $\delta_S > \Theta_S$. Finally, for our last result we need to assume the OSC, which we define as follows: \begin{definition} \label{definitionOSCPACIFS} A PACIFS $(u_b)_{b\in S}$ satisfies the \emph{open set condition (OSC)} if there exists a connected open set $W$ precompact in $V$ such that: \begin{itemize} \item $(u_b(W))_{b\in S}$ is a disjoint collection of subsets of $W$; \item for each $b \in S$, $u_b$ is injective; \item (Cone condition)\Footnote{We note that this condition has become somewhat outdated and is not needed at all for the purposes of the current paper, and is mainly included for the benefit of those readers familiar with \cite{MauldinUrbanski1, MauldinUrbanski2}. The definition of a CGDMS as in \cite{MauldinUrbanski2} may be equivalently reformulated in a more elegant and transparent form -- see, in particular, the work of the fourth-named author with Janina Kotus \cite[Chapter 10]{KotusUrbanski5}, which also includes a short proof of Bowen's formula that is independent of the cone condition.} For every $x\in \cl W$, there exists an open cone of vertex $x$, angle $\gamma$, and altitude $l$ which is entirely contained in $W$, where $\gamma,l > 0$ are constants. \end{itemize} \end{definition} It is easily verified (by letting $X = \cl W$) that every OSC PACIFS is an OSC CIFS (as recalled in Appendix \ref{appendixCIFS}). For every OSC PACIFS, we have the following: \begin{itemize} \item[(A5,B5)] Bowen's formula: The Hausdorff dimension of $\Lambda_S$ is \[ \HD(\Lambda_S) = \delta_S, \] see \cite[Theorem 4.2.13]{MauldinUrbanski2}.
Note that the pressure function $P(S,\cdot)$ appearing in the definition of $\delta_S$ can be expressed as either $P(S,s) = \log \rho(\w L)$ or as $P(S,s) = \log\rho(L)$, so this result can be thought of as being about both the symbolic and the geometric Perron--Frobenius operators. \end{itemize} \section{Statement of main result} \label{sectiontheoremscase12} Our main result is an application of Theorem \ref{theoremoperatorequation} to the situation where a PACIFS $(u_b)_{b\in S}$ is being approximated by another PACIFS $(u_b)_{b\in S'}$. Thus, we fix $(U,V,u,v,q)$, and we let $U_\C,V_\C\subset \C$ be as above. In what follows all operators will be interpreted as acting on the Banach space $\BB = \H(V_\C)$, where $\H(V_\C)$ denotes the Banach space of bounded holomorphic functions on $V_\C$, endowed with the sup norm. \begin{theorem} \label{theoremcase1} Let $(U,V,u,v,q)$ be as in Definition \ref{definitionPACIFS}, and fix $S \subset U\butnot\{0\}$. Let $\delta = \delta_S$, fix $\kappa > 0$, and suppose that $\zeta(\kappa) < \infty$. Then there exist $\w\zeta,\epsilon > 0$ such that for all $S' \subset U\butnot\{0\}$ with $\zeta(\kappa) \leq \w\zeta$ and $\|S\triangle S'\| \leq \epsilon$, we have \begin{equation} \label{case1formula} \delta_{S'} = \delta + \sum_{\substack{I\\ j(I)\leq \#(I) - 1}} c_I \, \eta_I \end{equation} for some constants $c_I$, with $c_\0 = 0$ and $c_{\{(0,0)\}} = \frac{\mu h \nu g}{\w\chi} > 0$, where $\0$ is the empty multiset and $\{(0,0)\}$ is the singleton multiset containing $(0,0)$ with multiplicity 1. Moreover, for all $I$ we have \[ |c_I| \lesssim \w\zeta^{-\#(I)} \epsilon^{-\Sigma(I)} \kappa^{-j(I)} \] and thus for all $K$ and $P$, \[ \delta_{S'} = \delta + \sum_{\substack{I \\ \#(I) < K \\ \Sigma(I) \leq P}} c_I \, \eta_I(S') + O\left(\left(\frac{\zeta(\kappa)}{\w\zeta}\right)^K \left(\frac{\|S\triangle S'\|}{\epsilon}\right)^P\right). \] \end{theorem} This theorem includes Scenario 1 as shown by the following corollaries: \begin{corollary*} \label{corollarycase1a} Let $(S_N)$ be an ascending sequence of sets converging to a strongly regular set $S$. Then \eqref{case1formula} holds for all $N$ sufficiently large. \end{corollary*} \begin{corollary*} \label{corollarycase1b} Let $(S_N)$ be a descending sequence of sets converging to a set $S$, and suppose that $P(S_M,\delta - \kappa) < +\infty$ for some $M\in\N$ and $\kappa > 0$. Then \eqref{case1formula} holds for all $N$ sufficiently large. \end{corollary*} \begin{proof}[Proof of both corollaries] Let $S_* = S$ for the first corollary and $S_* = S_M$ for the second corollary. Either way, there exists $\kappa > 0$ such that $P(S_*,\delta - \kappa) < +\infty$. Then $\sum_{b\in S_*} |b|^{q(\delta - \kappa)} < \infty$, so $\zeta(\kappa) < \infty$, and by the dominated convergence theorem $\sum_{b\in S\triangle S_N} |b|^{q(\delta - \kappa)} \to 0$ as $N\to\infty$. Thus, for all sufficiently large $N$, $\zeta(\kappa) \leq \w\zeta$, and Theorem \ref{theoremcase1} applies. \end{proof} \begin{remark*} Note that if $\delta = 0$ and $S_M$ is infinite for all $M$, then $P(S_M,\delta - \kappa) \geq P(S_M,0) = \log\#(S_M) = \infty$, so Corollary \ref{corollarycase1b} does not apply. More generally, if $\delta = 0$ then $\zeta(\kappa) \geq 1$ for all infinite $S'$ and $\kappa > 0$, so Theorem \ref{theoremcase1} cannot be used to compute $\delta_{S'}$ for such $S$.
\end{remark*} Now, Theorem \ref{theoremcase1} cannot be used to prove Theorem \ref{theoremFgeqN} because the inequality $\zeta(S_N,\kappa) \leq \w\zeta$ is not satisfied. Our next theorem will generalize Theorem \ref{theoremcase1}, in a way that includes the case of Theorem \ref{theoremFgeqN}. \begin{notation*} Fix $S,S' \subset U\butnot \{0\}$ and $\delta\in\R$ such that $\delta \geq \delta_S$, and let \[ \theta = \theta(S') \df \delta_{S'} - \delta. \] For each $i\geq 0$, we let \[ \eta_i = \eta_i(S') \df \sum_{b \in S'} |b|^{q(\delta + \theta)} b^i - \sum_{b \in S} |b|^{q(\delta + \theta)} b^i. \] Note that $\eta_0$ is positive when $S'\supset S$ and negative when $S'\subset S$, as long as $S' \neq S$. Next, we define \[ \eta \df \sup_{b\in S\triangle S'} |b|, \] where $S \triangle S' \df (S\butnot S') \cup (S'\butnot S)$ is the symmetric difference of $S$ and $S'$. Let $\MM(A)$ denote the set of all multisets on a set $A$, i.e. finitely supported functions from $A$ to $\N$. If $I\in\MM(A)$, then $I(i) = n$ is interpreted as meaning ``$i$ is an element of $I$ of multiplicity $n$''. We denote the empty multiset by $\0$, and for each $i\in A$, we denote the singleton multiset containing $i$ by $\{i\}$, so that $\{i\}(i) = 1$, and $\{i\}(i') = 0$ for $i' \neq i$. Note that this implies that e.g. $i\{j\}$ denotes the multiset containing $j$ with a multiplicity of $i$. For $I\in\MM(\N)$, we write \begin{align*} \#(I) &\df \sum_i I(i), & \Sigma(I) &\df \sum_i i I(i), & \eta_I &\df \prod_i \eta_i^{I(i)}, \end{align*} where the summations and product are taken over the finite set $\{i\in\N : I(i) > 0\}$. Finally, let \begin{align* h(x) &\df e^{\delta v(0,x)}, & \nu f &\df f(0), & L_1 &\df L_{S,\delta}, \end{align*} \begin{align*} \hspace{58pt} \w c\; &\df 1/\sum_{m = 0}^\infty \nu L_1^m h, & \xi = \xi(S') &\df \eta_0 - \w c. \hspace{36pt} \end{align*} \end{notation*} \begin{remark*} Since $\delta \geq \delta_S$, we have $P(S,\delta) \leq 0$, or equivalently $\rho(L_1) = e^{P(S,\delta)} \leq 1$, where as before $\rho$ denotes spectral radius. It follows that $\w c = 0$ if and only if $P(S,\delta) = 0$. Since $\delta \geq \delta_S$, it follows that $\w c = 0$ if and only if both (a) $\delta = \delta_S$ and (b) $S$ is regular. \end{remark*} \begin{theorem} \label{maintheorem} With notation as above, fix $S \subset U\butnot\{0\}$ and $\delta\in\R$ such that $\delta \geq \delta_S$ and $\delta > \Theta_S$. Then there exist $\epsilon > 0$ and explicitly computable constants $c_{I,j,k}$ with $c_{\0,0,0} = 0$ and $c_{\0,0,1} = 1$ such that for all regular $S'\subset U\butnot\{0\}$ satisfying $\eta,|\theta| \leq \epsilon$, we have \begin{equation} \label{mainformula} \Xi = 0, \;\;\; \text{ where } \;\;\;\;\;\; \Xi = \Xi(S') \df \sum_{I\in \MM(\N_{\geq 1})} \sum_{j = 0}^\infty \sum_{k = 0}^\infty c_{I,j,k} \, \eta_I \theta^j \xi^k. \end{equation} Note that if $\w c = 0$, then $\xi = \eta_0$ and thus the right half of \eqref{mainformula} can be rewritten as \begin{equation} \label{mainformula2} \Xi = \sum_{I\in \MM(\N)} \sum_{j = 0}^\infty c_{I,j} \, \eta_I \theta^j \end{equation} where $c_{I,j} = c_{(I\given\N_{\geq 1}),j,I(0)}$. In this case we have $c_{\0,1} = c_{\0,1,0} = -\w\chi < 0$, where $\w\chi$ is as in \text{(B4)}, where $g,\mu$ are right and left fixed points of $L_1$ normalized so that \begin{equation} \label{normalization} \mu h = \nu g = 1. \end{equation} Moreover, \begin{equation} \label{cijkbounds} |c_{I,j,k}| \lesssim \epsilon^{-(\Sigma(I) + j + k)}. 
\end{equation} Note that when $\w c = 0$, this can be written as \begin{equation} \label{cijkboundsv2} |c_{I,j}| \lesssim \epsilon^{-(\Sigma(I) + I(0) + j)}. \end{equation} \end{theorem} \begin{remark*} If we assume $\delta \geq \delta_S$, then the hypothesis that $\delta > \Theta_S$ is satisfied if and only if either (a) $\delta_S > \Theta_S$ (i.e. $S$ is strongly regular) or (b) $\delta > \delta_S$. \end{remark*} \begin{remark*} It is natural to let $\delta = \lim_{N\to \infty} \delta_{S_N}$, where $S_N \to S$ is a sequence such that this limit exists, from which $S'$ will be chosen. In this case, we automatically have $|\theta| \leq \epsilon$ for all $N$ sufficiently large, and $\delta \geq \delta_S$ holds automatically due to semicontinuity of Hausdorff dimension for CIFS limit sets \cite[Theorem 4.2.13]{MauldinUrbanski2}. Moreover, if $S = \emptyset$ but $S_N \neq \emptyset$, then $\delta \geq 0 > -\infty = \delta_S$, so the hypothesis $\delta > \Theta_S$ is satisfied despite the fact that $S = \emptyset$ is not strongly regular (and in fact is not regular at all). \end{remark*} \begin{corollary} \label{corollaryanalytic} Fix $S,\delta$ as in Theorem \ref{maintheorem}, and let $(S_N)$ be a sequence of sets. Suppose that for some $d\in\N$, there exist a sequence $(\tt_N)$ in $\C^d$ converging to $\0$, and functions $F_i,F_\ast \in \H(B)$ holomorphic on a fixed neighborhood $B$ of $(\0,0)\in \C^{d+1}$, such that for each $N$, \[ \eta_i(S_N) = F_i(\tt_N,\theta_N), \;\;\;\; \theta_N = \theta(S_N), \;\;\;\; \xi(S_N) = F_*(\tt_N,\theta_N). \] Furthermore, suppose that \begin{align*} &\|F_i\| \leq \epsilon^i/2^i, & &\|\pi_2 \| \leq \epsilon/2, \;\text{and}\; & &\|F_*\| \leq \epsilon/2, \end{align*} where $\pi_2(\tt,\theta) = \theta$ is the projection onto the second coordinate, and $\epsilon$ is as in Theorem \ref{maintheorem}. Then \[ \Xi(S_N) = F(\tt_N,\theta_N) \] where $F$ is a holomorphic function defined on $B$. \end{corollary} \begin{proof} Define the function \[ F \;\df \sum_{I\in \MM(\N_{\geq 1})} \sum_{j = 0}^\infty \sum_{k = 0}^\infty c_{I,j,k} \left(\prod_{i\in I} F_i\right) \pi_2^j F_*^k \;=\; \sum_{I,j,k} c_{I,j,k} \left(\prod_{i\in I} F_i\right) \pi_2^j F_*^k \] Note that the bound \eqref{cijkbounds} guarantees that the above series converges absolutely, since \begin{align*} \sum_{I,j,k} \left\| c_{I,j,k} \left(\prod_{i\in I} F_i\right) \pi_2^j F_*^k \right\| &\underset{\eqref{cijkbounds}}{\lesssim} \sum_{I,j,k} \epsilon^{-(\Sigma(I) + j + k)} (\epsilon/2)^{\Sigma(I) + j + k}\\ &= \sum_{I,j,k} (1/2)^{\Sigma(I) + j + k}\\ &= \left(\prod_{i = 1}^\infty \sum_{\ell = 0}^\infty (1/2)^{i\ell}\right) \left(\sum_{j = 0}^\infty (1/2)^j\right) \left(\sum_{k = 0}^\infty (1/2)^k\right)\\ &= 4 \prod_{i = 1}^\infty (1 - 2^{-i})^{-1} < \infty \end{align*} Therefore the series defining $F$ converges in $\H(B)$. By the definition of $F$, we have \[ F(\tt_N,\theta_N) \underset{\eqref{mainformula}}{=} \Xi(S_N).
\] Indeed, \begin{align*} F(\tt_N,\theta_N) &= \sum_{I,j,k} c_{I,j,k} \left(\prod_{i\in I} F_i (\tt_N,\theta_N) \right) \cdot \pi_2^j (\tt_N,\theta_N) \cdot F_*^k (\tt_N,\theta_N)\\ &= \sum_{I,j,k} c_{I,j,k} \left(\prod_{i\in I} \eta_i(S_N) \right) \theta(S_N)^j \xi(S_N)^k \underset{\eqref{mainformula}}{=} \Xi(S_N) \qedhere \end{align*} \end{proof} \section{Spectral gap in the case $\w c > 0$} \label{sectionspectralgap} To prove Theorem \ref{maintheorem}, we need to apply Theorem \ref{theoremoperatorequation}; thus, given sets $S,S'$, we need to produce operators $L,L'$, satisfying the hypotheses of Theorem \ref{theoremoperatorequation} if $S'$ is a sufficiently close perturbation of $S$, i.e. one for which $\eta,|\theta| \leq \epsilon$ as in Theorem \ref{maintheorem}. When $\w c = 0$ we can take $L = L_S$ and $L' = L_{S'}$, since we have $P(S,\delta) = 0$ and thus by (B2) of \6\ref{sectionPACIFS}, $L_S$ has right and left fixed points. However, if $\w c > 0$ then $P(S,\delta) < 0$ and thus $L_1 = L_{S,\delta}$ has spectral radius $<1$ and has neither right nor left fixed points. In this section we prove that there is another operator with right and left fixed points, which will be suitable to plug in for $L$ in Theorem \ref{theoremoperatorequation}. Moreover we prove that this operator has a spectral gap, guaranteeing that the series $\sum_n \|R^n\|$ appearing in Theorem \ref{theoremoperatorequation} converges. \begin{proposition} \label{propositioncbarspectralgap} Let $L_1$ be an operator on a Banach space $\BB$ such that $\rho(L_1) < 1$, where $\rho$ denotes spectral radius. Fix $h\in\BB$, $\nu\in\BB^*$ such that $\nu h > 0$, and $\nu L_1^m h \geq 0$ for all $m \geq 0$. Then if we let \begin{align*} Q_1 &\df \sum_{m = 0}^\infty L_1^m, & \w c &\;\df 1/\nu Q_1 h, & L &\df L_1 + \w c \, h \nu, \end{align*} \begin{align*} g &\df Q_1 h, & \mu &\df \nu Q_1, \end{align*} then it follows that \[ L g = g \;\text{and}\; \mu L = \mu. \] Moreover, there exist $C < \infty$ and $\gamma < 1$ such that for all $n$, \begin{equation} \label{equationspectralgap} \|L^n - c g \mu\| \leq C \gamma^n \end{equation} where $c \df 1/\mu g$. In particular, $L$ has a spectral gap in the sense of \cite{Rugh2}\Footnote{See footnote \ref{spectralgap} attached to (A3).}. \end{proposition} \begin{proof} We have \[ L g = (L_1 + \w c \, h \nu) Q_1 h = \sum_{m = 1}^\infty L_1^m h + \w c \, h \nu Q_1 h = \sum_{m = 0}^\infty L_1^m h = g \] and similarly $\mu L = \mu$. Let $a_{m + 1} = \w c \, \nu L_1^m h$ and $b_{n + 1} = \w c \, \nu L^n h$ for $m,n\geq 0$, and let $a_0 = 0$ and $b_0 = 1$. Then since $L^n = (L_1 + \w c \, h \nu)^n$ is the sum of all $n$-fold ordered products of $L_1$ and $\w c \, h \nu$, we have \begin{align*} b_{n + 1} = \w c \, \nu L^n h &= \w c \, \nu (L_1 + \w c \, h \nu)^n h\\ &= \sum_{t = 1}^\infty \sum_{\substack{m_1,\ldots,m_t \\ \sum_i (m_i + 1) - 1 = n}} \w c \, \nu L_1^{m_1} (\w c \, h\nu) L_1^{m_2} \cdots L_1^{m_{t - 1}} (\w c \, h \nu) L_1^{m_t} h\\ &= \sum_{t = 1}^\infty \sum_{\substack{m_1,\ldots,m_t \\ \sum_i m_i = n - t + 1}} \prod_{i = 1}^t \w c \, \nu L_1^{m_i} h\\ &= \sum_{t = 1}^\infty \sum_{\substack{m_1,\ldots,m_t \\ \sum_i m_i = n + 1}} \prod_{i = 1}^t a_{m_i}. \end{align*} Together with the equality $b_0 = 1$, this shows that the sequence $(b_n)$ is the sum of the $t$-fold convolutions of the sequence $(a_m)$ over $t\in\N$. Now let $A$ and $B$ be the functions whose Taylor series coefficients are given by $(a_m)$ and $(b_n)$, i.e. 
\begin{align*} A(z) &= \sum_{m = 0}^\infty a_m z^m, & B(z) &= \sum_{n = 0}^\infty b_n z^n. \end{align*} Then the convolution relation between $(a_m)$ and $(b_n)$ mentioned above implies that \[ B(z) = \sum_{t = 0}^\infty [A(z)]^t = \frac{1}{1 - A(z)} \] for all $z$ within the radii of convergence of both $A$ and $B$. Now by hypothesis we have $\lambda \df \rho(L_1) < 1$, and by definition of the spectral radius we have $|a_m| \lesssim_\epsilon (\lambda + \epsilon)^m$ for all $\epsilon > 0$. Thus the series defining $A$ converges in the open ball $B_\C^\circ(0,\lambda^{-1}) \supset B_\C(0,1)$. Moreover, by the definition of $\w c$ we have $A(1) = \sum_m a_m = 1$, and by hypothesis we have $a_1 > 0$ and $a_m \geq 0$ for all $m$. We claim that for all $z\in B_\C(0,1)$, if $A(z) = 1$ then $z = 1$. Indeed, since $|a_m z^m| \leq a_m$ and $\sum_m a_m = 1$, if $A(z) = 1$ then we must have $a_m z^m = a_m$ for all $m$. In particular $a_1 z = a_1$, and since $a_1 > 0$ this implies $z = 1$. Next, we observe that \[ A'(1) = \sum_{m = 0}^\infty m a_m = \frac1r \df \frac{\w c}{c} = \frac{\mu g}{\nu Q_1 h} > 0. \] It follows that $B$ can be extended to a meromorphic function $\what B = \frac{1}{1 - A}$ on $B_\C^\circ(0,\lambda^{-1})$, and the only pole of $\what B$ in $B_\C(0,1)$ is $1$, where $\what B$ has a simple pole of residue $-r$. So \[ \what B(z) = \frac{r}{1 - z} + E(z) \] where $E$ is a meromorphic function on $B_\C^\circ(0,\lambda^{-1})$, which is holomorphic on a closed neighborhood of $B_\C(0,1)$, say $B_\C(0,\tau^{-1})$ with $\lambda < \tau < 1$. Since $\frac{r}{1 - z} = \sum_n r z^n$, it follows from Cauchy's inequality (cf. \eqref{cauchy}) that \[ |b_n - r| \leq \tau^n \| E \|_\infty. \] Now \begin{align*} L^n = (L_1 + \w c \, h \nu)^n &= L_1^n + \sum_{i = 0}^{n - 1} L_1^i (\w c \, h \nu) L_1^{n - i - 1} + \sum_{i = 0}^{n - 2} \sum_{j = 0}^{n - i - 2} L_1^i (\w c \, h \nu) L^{n - i - j - 2} (\w c \, h \nu) L_1^j\\ &= L_1^n + \sum_{i = 0}^{n - 1} \sum_{j = 0}^{n - i - 1} \w c \, L_1^i h b_{n - i - j - 1} \nu L_1^j \end{align*} and thus after setting $b_n = 0$ when $n < 0$ and $\|b\| = \sup_n |b_n|$, we have \begin{align*} \|L^n - c g \mu\| &= \|L^n - r \, \w c \, g \mu\|\\ &= \left\|L_1^n + \sum_{i = 0}^\infty \sum_{j = 0}^\infty \w c(b_{n - i - j - 1} - r) L_1^i h \nu L_1^j\right\|\\ &\leq \|L_1^n\| + \sum_{i = 0}^\infty \sum_{j = 0}^\infty \w c \, \min\big(r + \|b\|,|b_{n - i - j - 1} - r|\big) \|L_1^i\| \cdot \|h \nu\| \cdot \|L_1^j\|\\ &\lesssim \tau^n + \sum_{i = 0}^\infty \sum_{j = 0}^\infty \min(1,\tau^{n - i - j}) \tau^{i + j}\\ &\leq \tau^n + \sum_{i = 0}^\infty \sum_{j = 0}^\infty \min(1,\tau^{i - n}) \min(1,\tau^{j - n}) \tau^n\\ &= \tau^n + \left(\sum_{i = -n}^\infty \min(1,\tau^i)\right)^2 \tau^n \asymp n^2 \tau^n. \end{align*} Thus for any choice of $\gamma$ in $(\tau,1)$, we have that \eqref{equationspectralgap} is satisfied. This completes the proof of Proposition \ref{propositioncbarspectralgap}. \end{proof} \section{Proof of Theorem \ref{maintheorem}} \label{sectionproof} As before, recall that $\BB$ denotes the Banach space of bounded holomorphic functions on $V_\C$, endowed with the sup norm. All operator norms will be taken with respect to $\BB$. Let $S,\delta$ be as in Theorem \ref{maintheorem}. Since $\delta > \Theta_S$, there exists $\kappa > 0$ such that $\delta - \kappa > \Theta_S$. Fix $\epsilon > 0$ to be determined.
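\begin{remark*} The argument just given for Proposition \ref{propositioncbarspectralgap} rests on the classical renewal identity $B(z) = 1/(1 - A(z))$, which forces $b_n \to r = 1/\sum_m m\, a_m$. The following minimal Python sketch (our illustration only, using an arbitrary toy coefficient sequence; it is not part of the formal argument) checks this convergence numerically.
\begin{verbatim}
# Toy renewal-sequence check (illustration only): the coefficients a_m are
# arbitrary, subject to a_m >= 0, a_1 > 0, sum_m a_m = 1, as in the
# hypotheses of the proposition; a_0 = 0 by convention.
a = [0.0, 0.5, 0.3, 0.2]
r = 1.0 / sum(m * am for m, am in enumerate(a))  # r = 1/A'(1) ~ 0.58824

b = [1.0]                                        # b_0 = 1
for n in range(1, 60):
    # b_n = sum_{m >= 1} a_m b_{n-m}, i.e. B(z)(1 - A(z)) = 1
    b.append(sum(a[m] * b[n - m] for m in range(1, len(a)) if n - m >= 0))

print(b[-1], r)   # the two values agree to machine precision
\end{verbatim}
The convergence is geometric, as predicted by \eqref{equationspectralgap}: for this toy sequence the other roots of $1 - A(z)$ have modulus $\sqrt 5$, so $|b_n - r| \lesssim 5^{-n/2}$. \end{remark*}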
Now fix $S'\subset U\butnot\{0\}$ such that $\eta,|\theta| \leq \epsilon$, let \begin{align*} L &\df L_1 + \w c \, h\nu, & L' &\df L_{S',\delta + \theta}, & L'' &\df L_{S,\delta + \theta} + \w c \, h\nu,\\ \alpha &\df L'' - L, & \beta &\df L' - L'', & \Delta &\df L' - L = \alpha + \beta. \end{align*} Since $S'$ is regular and $\delta + \theta = \delta_{S'}$, we have $P(S',\delta + \theta) = 0$ and thus $L' = L_{S'}$ has positive right and left fixed points $g'$ and $\mu'$ by (B2) of \6\ref{sectionPACIFS}. Here we call a function \emph{positive} if it is uniformly positive on $V_\R$, and we call an element of $\BB^*$ positive if it arises from a nonzero nonnegative measure supported on $V_\R$. In particular, if $f \in \BB$ and $\sigma \in \BB^*$ are both positive then $\sigma f > 0$. If $\w c = 0$, then $P(S,\delta) = 0$ and thus $L = L_S$ has positive right and left fixed points $g$ and $\mu$ by (B2), and by (B3), there exist constants $C < \infty$ and $\gamma < 1$ such that $\|R^n\| = \|L^n - c g \mu\| \leq C \gamma^n$ for all $n$, where $R$ is as in Theorem \ref{theoremoperatorequation}. On the other hand, if $\w c > 0$, then the existence of such $g,\mu,C,\gamma$ follows from Proposition \ref{propositioncbarspectralgap}. Either way, we get $\sum_{n=0}^\infty \|R^n\| \leq C/(1-\gamma) < \infty$, so if \begin{equation} \label{Deltasufficient} \|\Delta\| < (1-\gamma)/C, \end{equation} then the hypotheses of Theorem \ref{theoremoperatorequation} are satisfied, and consequently \eqref{operatorequation} holds. We aim to show that \eqref{Deltasufficient} holds if $\epsilon$ is sufficiently small, while simultaneously developing the tools that will allow us to reduce \eqref{operatorequation} to the equation $\Xi = 0$, where $\Xi$ is as in \eqref{mainformula}. \begin{remark*} In the remainder of the paper we will assume that $g,\mu$ are normalized as in \eqref{normalization}. \end{remark*} In what follows, the implied constants of asymptotics may depend on $S$ but not on $S'$ or $\theta$. \begin{lemma} \label{lemmaetabound} If $\epsilon$ is sufficiently small, then for all $i$, \[ |\eta_i| \lesssim \eta^i. \] \end{lemma} \begin{proof} If $\epsilon \leq \kappa$, then \begin{align*} |\eta_i| &\leq \sum_{b \in S\triangle S'} |b|^{q(\delta + \theta) + i} \leq \eta^i \sum_{b \in S\triangle S'} |b|^{q(\delta + \theta)}\\ &\leq \eta^i \left(\sum_{b\in S} |b|^{q(\delta + \theta)} + \sum_{b\in S'} |b|^{q(\delta + \theta)} \right)\\ &\lesssim \eta^i \left(\sum_{b\in S} |b|^{q(\delta - \kappa)} + \sum_{b\in S'} |b|^{q(\delta + \theta)} \right)\since{$|\theta| \leq \epsilon \leq \kappa$} \\ &\asymp \eta^i \left( e^{P(S,\delta - \kappa)} + e^{P(S',\delta + \theta)} \right) \by{\eqref{partitionfunction}} \\ &\asymp \eta^i \end{align*} where the last asymptotic is true since $P(S,\delta - \kappa) < +\infty$ and $P(S',\delta + \theta) = 0$, the former being true since we chose $\kappa$ such that $\delta - \kappa > \Theta_S$ and the latter since $S'$ is regular and $\delta + \theta = \delta_{S'}$. \end{proof} \begin{lemma} \label{lemmaalphaf} If $\epsilon$ is sufficiently small, then \begin{equation} \label{alphaseries} \alpha = \sum_{j\geq 1} \theta^j \alpha_j, \end{equation} where $\alpha_j \in \LL(\BB)$ is the unique operator such that \begin{equation} \label{alphajdef} \alpha_j f(x) = \frac{1}{j!} \sum_{b\in S} |u_b'(x)|^\delta \log^j|u_b'(x)| \; f\circ u_b(x) \;\;\;\; \all f\in\BB \all x\in V_\R. 
\end{equation} The value of $\alpha_j f(x)$ for $x\in V_\C$ is obtained by replacing $|u_b'(x)|$ by $|b|^q e^{v(b,x)}$ in the above formula. Furthermore, \begin{equation} \label{alphabound} \|\alpha_j\| \lesssim \kappa^{-j}. \end{equation} \end{lemma} Note that $\alpha_0 = L_1$, and that $\alpha_1$ is as defined in \eqref{alpha1def}, with $s = \delta$. \begin{proof} Indeed, fix $f\in \BB$. For $x\in V_\R$ we have \begin{align*} \alpha f(x) &= \sum_{b\in S} |u_b'(x)|^\delta \big(|u_b'(x)|^\theta - 1\big) \; f\circ u_b(x)\\ &= \sum_{j=1}^\infty \frac{\theta^j}{j!} \sum_{b\in S} |u_b'(x)|^\delta \log^j|u_b'(x)| \; f\circ u_b(x) \note{see below to justify interchange}\\ &= \sum_{j=1}^\infty \theta^j \alpha_j f(x). \end{align*} The assertion about the value of $\alpha_j f(x)$ for $x\in V_\C$ can be obtained either by analytic continuation, or by repeating the above calculation with the suggested substitution. To demonstrate \eqref{alphabound}, we note that for all $x\in V_\C$, \begin{align*} |\alpha_j f(x)| &\leq \frac{1}{j!} \sum_{b\in S} |b|^{q\delta} e^{\delta \Re v(b,x)} \big|q\log|b| + v(b,x)\big|^j \cdot |f\circ u_b(x)|\\ &\leq \frac{1}{j!} \sum_{b\in S} |b|^{q\delta} e^{\delta \|v\|} (-q\log|b| + C)^j \|f\|, \end{align*} where $C = \|v\| + 2q(\log\|U\|)_+$. Here $\|U\| = \sup_{b\in U} |b|$, and $(\cdot)_+$ denotes the positive part. Let $w(b) = -q\log|b| + C \geq 0$. Then \begin{align*} \|\alpha_j\| &\lesssim \frac1{j!} \sum_{b\in S} e^{-\delta w(b)} w^j(b) = \kappa^{-j} \sum_{b\in S} e^{-\delta w(b)} \frac{(\kappa w(b))^j}{j!}\\ &\leq \kappa^{-j} \sum_{b\in S} e^{-(\delta - \kappa) w(b)} \asymp \kappa^{-j} e^{P(S,\delta - \kappa)} \lesssim \kappa^{-j} \end{align*} since $P(S,\delta - \kappa) < +\infty$. Note that this calculation also shows that the interchange of summation in the second equation of the first calculation is valid as long as $\epsilon < \kappa$ (so that $|\theta| < \kappa$). \end{proof} \begin{lemma} \label{lemmabetaf} Recall from Section \ref{sectiontheoremscase12} that $\xi = \eta_0 - \w c$\,. If $\epsilon >0$ is sufficiently small, then \begin{equation} \label{betaseries} \begin{split} \beta &= \sum_{i,j} \eta_i \theta^j \beta_{i,j} - \w c \, h\nu = \xi h \nu + \sum_{(i,j) \neq (0,0)} \eta_i \theta^j \beta_{i,j}\\ &= \xi h \nu + \sum_{j = 1}^\infty (\w c + \xi) \theta^j \beta_{0,j} + \sum_{i = 1}^\infty \sum_{j = 0}^\infty \eta_i \theta^j \beta_{i,j}, \end{split} \end{equation} where $\beta_{i,j} \in \LL(\BB)$ is defined by \begin{equation} \label{betaijdef} \begin{split} \beta_{i,j} f(x) &\df \frac{1}{i!} \frac{1}{j!} \left(\frac{\del}{\del b}\right)^i \left(\frac{\del}{\del\theta}\right)^j \left[e^{(\delta + \theta) v(b,x)} \; f\circ u_b(x)\right]_{b = \theta = 0}\\ &= \text{\textbf{Coeff}}\left(b^i \theta^j, e^{(\delta + \theta) v(b,x)} \; f\circ u_b(x)\right). \end{split} \end{equation} Here $\text{\textbf{Coeff}}(X,A)$ denotes the coefficient of a multinomial $X$ in the power series expansion of $A$. Furthermore, \begin{equation} \label{betabound} \|\beta_{i,j}\| \leq \rho_U^{-i} e^{(\delta + 1) \|v\|} \end{equation} where $\rho_U > 0$ is small enough so that $B_\C(0,\rho_U) \subset U_\C$. \end{lemma} Note that $\beta_{0,0} = h \nu$. This is in fact the reason that we defined $h,\nu$ as we did. \begin{proof} Indeed, fix $f\in\BB$.
For all $x\in V_\C$ we have \begin{align*} (\beta + \w c \, h \nu) f(x) &= (L_{S',\delta + \theta} - L_{S,\delta + \theta}) f(x)\\ &= \sum_{b\in S'} |b|^{q(\delta + \theta)} e^{(\delta + \theta) v(b,x)} \; f\circ u_b(x) - \sum_{b\in S} |b|^{q(\delta + \theta)} e^{(\delta + \theta) v(b,x)} \; f\circ u_b(x) \noreason \\ &= \sum_{b\in S'} |b|^{q(\delta + \theta)} \left(\sum_{i,j} b^i \theta^j \beta_{i,j} f(x)\right) - \sum_{b\in S} |b|^{q(\delta + \theta)} \left(\sum_{i,j} b^i \theta^j \beta_{i,j} f(x)\right) \noreason \\ &= \sum_{i,j} \eta_i \theta^j \beta_{i,j} f(x). \note{see below to justify interchange} \end{align*} The last two equalities of \eqref{betaseries} follow from the definition of $\xi$ and the fact that $\beta_{0,0} = h\nu$. To demonstrate \eqref{betabound}, we plug in $\rho = \rho_U$ and $\rho = 1$ into \eqref{cauchy} (both with $z = 0$) for the variables $b$ and $\theta$, respectively. This yields \begin{align*} |\beta_{i,j} f(x)| &\leq \rho_U^{-i} \sup_{b\in U_\C} \sup_{|\theta| \leq 1} \left| e^{(\delta + \theta) v(b,x)} \; f\circ u_b(x)\right|\\ &\leq \rho_U^{-i} e^{(\delta + 1)\|v\|} \|f\|. \end{align*} Note that by Lemma \ref{lemmaetabound}, this calculation also shows that the interchange of summation in the last equation of the first calculation is valid as long as $\epsilon < \min(\rho_U,1)$ (so that $\eta < \rho_U$ and $|\theta| < 1$). \end{proof} We now wish to prove a bound on $|\xi|$. By \eqref{alphabound}, we have $\|\alpha\| \lesssim |\theta|$, and by \eqref{betabound} and Lemma \ref{lemmaetabound}, we have $\|\beta - \xi h \nu\| \lesssim \max(\eta,|\theta|)$, as long as $\epsilon \leq \min(\kappa,\rho_U,1)/2$. Thus \[ \|L' g - (g + \xi h)\| = \|(L' - L - \xi h \nu) g\| = \|(\alpha + \beta - \xi h \nu) g\| \lesssim \max(\eta,|\theta|). \] Suppose $\xi \geq 0$. Then since $h \geq \inf_{V_\R} (h/g) g$ on $V_\R$, it follows that \[ g + \xi h \geq \left(1 + \inf_{V_\R}(h/g)\, \xi \right)g. \] Now since $g$ is uniformly positive on $V_\R$, we have that \[ L' g \geq \lambda g \text{ on } V_\R \text{, where } \lambda = 1 + \inf_{V_\R}(h/g) \, \xi + O\big(\max(\eta,|\theta|)\big). \] Since $L'$ and $g$ are both positive, it follows that $\lambda \leq \rho(L') = 1$. Thus $\xi = O(\max(\eta,|\theta|))$, and the case $\xi \leq 0$ proceeds similarly. Let $C \geq 1$ be the implied constant, so that \[ |\xi| \leq C \max(\eta,|\theta|) \leq C\epsilon. \] \begin{definition} \label{definitionseriesfunction} A \emph{series function} is an expression of the form \begin{equation} \label{powerseries} \ff = (f_{I,j,k}) = \sum_{I,j,k} \eta_I \theta^j \xi^k f_{I,j,k}, \end{equation} where $f_{I,j,k} : V_\C \to \C$ are holomorphic functions independent of $\theta$ and $S'$, and $I,j,k$ are as in \eqref{mainformula}. If $\ff$ is a series function, let \[ \opnorm{\ff} \df \sum_{I,j,k} (C\epsilon)^{\Sigma(I) + j + k} \|f_{I,j,k}\|. \] The set of series functions $\ff$ such that $\opnorm{\ff}$ is finite, which we denote as $\AA$, forms a Banach space under the norm $\opnorm{\cdot}$. There is a natural projection map $\pi:\AA\to\BB$ defined by letting $\pi(\ff)$ be the value of the right-hand side of \eqref{powerseries}. Note that $\pi(\ff)$ depends on $S'$, while $\ff$ (being merely a formal expression) does not. Since $\eta,|\theta|,|\xi| \leq C\epsilon$, we have \[ \| \pi(\ff) \| \leq \opnorm{\ff} \] for every series function $\ff$.
Finally, if $f\in\BB$ then we abuse notation and also let $f$ denote the series function $f = \eta_\0 \theta^0 \xi^0 f$. \end{definition} For $I\in \MM(\N_{\geq 1})$, let $\bfeta_I$, $\bftheta$, and $\bfxi$ denote the series functions given by the formulas \begin{align*} \bfeta_I &= \eta_I \theta^0 \xi^0 \one, & \bftheta &= \eta_\0 \theta^1 \xi^0 \one, & \bfxi &= \eta_\0 \theta^0 \xi^1 \one. \end{align*} Then by replacing $\eta_I$, $\theta$, and $\xi$ by $\bfeta_I$, $\bftheta$, and $\bfxi$ in \eqref{alphaseries} and \eqref{betaseries}, we can construct operators $\bfalpha$ and $\bfbeta$ on $\AA$ such that $\pi(\bfalpha f) = \alpha f$ and $\pi(\bfbeta f) = \beta f$ for all $f\in\BB$. The corresponding operator norms satisfy \begin{align*} &\|\alpha\| \leq \opnorm{\bfalpha} \lesssim \epsilon/\kappa \asymp \epsilon & &\text{and} & &\|\beta\| \leq \opnorm{\bfbeta} \lesssim \epsilon/\rho_U \asymp \epsilon, \end{align*} as long as $\epsilon \leq \min(\kappa,\rho_U,1)/2$. Recall that $\Delta = \alpha + \beta$, and let $\bfDelta \df \bfalpha + \bfbeta$. Then it follows from the above inequalities that \[ \|\Delta\| \leq \opnorm{\bfDelta} \lesssim \epsilon. \] Thus if $\epsilon$ is sufficiently small then \eqref{Deltasufficient} holds, and thus so does \eqref{operatorequation}. Moreover, if $\opnorm{\bfDelta} < 1/\|Q\|$, then \[ \sum_{p = 0}^\infty \opnorm[\big]{\bfDelta (Q\bfDelta)^p} \leq \opnorm{\bfDelta} \sum_{p = 0}^\infty (\|Q\|\cdot\opnorm{\bfDelta})^p < \infty, \] so $\sum_{p = 0}^\infty \bfDelta (Q\bfDelta)^p \in \LL(\AA)$. It follows that $\sum_{p = 0}^\infty \bfDelta (Q\bfDelta)^p g \in \AA$ and thus \[ \sum_{p = 0}^\infty \mu \Delta (Q \Delta)^p g = \Xi \;\underset{\eqref{mainformula}}{\df}\; \sum_{I,j,k} c_{I,j,k} \, \eta_I \theta^j \xi^k \] for some constants $c_{I,j,k}$, such that \[ |c_{I,j,k}| \lesssim (C\epsilon)^{-(\Sigma(I) + j + k)} \leq \epsilon^{-(\Sigma(I) + j + k)}. \] This demonstrates \eqref{cijkbounds}, and we have $\Xi = 0$ by \eqref{operatorequation}. Next we want to show that the coefficients of $1$ and $\xi$ are $c_{\0,0,0} = 0$ and $c_{\0,0,1} = 1$, respectively, and that if $\w c = 0$ then the coefficient of $\theta$ is $c_{\0,1,0} = -\w\chi$, where $\w\chi > 0$ is as in \eqref{chidef}. Indeed, let \[ X = \eta \; \vee \; \max(|\theta|,|\xi|)^2 \] and recall that $A \equiv_X B$ means $B - A = O(X)$. Then by \eqref{normalization}, \begin{align*} \Xi &\underset{X}{\equiv} \mu \Delta g = \mu \alpha g + \mu \beta g \underset{X}{\equiv} \theta \mu \alpha_1 g + \xi \mu h \nu g + \w c \, \theta \mu \beta_{0,1} g \\ &= \xi + (-\w\chi + \w c \,\mu\beta_{0,1} g) \theta, \end{align*} which is what we wanted. \ignore{ \section{Proof of Theorem \ref{theoremcase1}} \label{sectioncase1} We now prove Theorem \ref{theoremcase1} using Theorem \ref{maintheorem}. As before we fix $\epsilon,\w\zeta > 0$ to be determined. Now let $S'$ satisfy $\zeta \df \zeta(\kappa) \leq \w\zeta$ and $\eta \df \|S\triangle S'\| \leq \epsilon$. Then by Theorem \ref{maintheorem}, \eqref{mainformula} holds, and \[ |c_{I,j,k}| \lesssim \epsilon^{-j} \epsilon^{-k} \epsilon^{-\Sigma(I)}. \] Since $\w c = 0$, we have $\xi = \eta_0$. We now wish to rewrite $\eta_i$ in terms of $\eta_{i,j}$ and $\theta$: \begin{lemma} \label{lemmaetaexpansion} We have \[ \eta_i = \sum_{j = 0}^\infty \theta^j \eta_{i,j}(S').
\] Moreover, \[ |\eta_{i,j}(S')| \lesssim \epsilon^i \kappa^{-j} \zeta. \] \end{lemma} \begin{proof} Indeed, \[ \eta_i = \sum_{b\in S' - S} |b|^{q\delta + i} \sum_{j = 0}^\infty \frac{(q\theta\log|b|)^j}{j!} = \sum_{j = 0}^\infty \frac{q^j \theta^j}{j!} \sum_{b\in S' - S} |b|^{q\delta + i} \log^j|b| = \sum_{j = 0}^\infty \theta^j \eta_{i,j}(S') \] and \begin{align*} |\eta_{i,j}(S')| &\leq \frac{q^j}{j!} \sum_{b\in S \triangle S'} |b|^{q\delta + i} \big|\log |b| \big|^j\\ &= \kappa^{-j} \sum_{b\in S\triangle S'} |b|^{q\delta + i} \frac{(q\kappa|\log|b||)^j}{j!}\\ &\leq \epsilon^i \kappa^{-j} \sum_{b\in S\triangle S'} |b|^{q\delta} \exp(q\kappa|\log|b||)\\ &\asymp \epsilon^i \kappa^{-j} \sum_{b\in S\triangle S'} |b|^{q(\delta - \kappa)} \lesssim \epsilon^i \kappa^{-j} \w\zeta. \qedhere\end{align*} \end{proof} Now, analogously to the proof of Theorem \ref{maintheorem}, we define a \emph{series function of type 2} to be an expression of the form \[ \sum_{I,j} c_{I,j} \theta^j \, \eta_I(S'), \] where $I$ ranges over multisubsets of $\N^2$, and we let \[ \left\|\sum_{I,j} c_{I,j} \theta^j \, \eta_I(S')\right\|_\sigma \df \max_{I,j} |c_{I,j}| (2\epsilon)^{\Sigma(I)} (2\epsilon)^j (2\w\zeta)^{\#(I)}. \] By Lemma \ref{lemmaetaexpansion}, we can replace $\eta_i$ by $\sum_j \theta^j \eta_{i,j}$ in \eqref{mainformula} to get \begin{equation} \label{etaimformula} \sum_{\substack{I,j \\ j(I) \leq j}} c_{I,j} \theta^j \, \eta_I(S') = 0, \end{equation} where the left-hand side has $\|\cdot\|_\sigma < \infty$. Moreover, we have $c_{\0,0} = c_{\0,0,0} = 0$, $c_{\0,1} = c_{\0,1,0} = -\w\chi$, and $c_{\{(0,0)\},0} = c_{\0,0,1} = 1$. We claim next that \eqref{etaimformula} can be solved for $\theta$ yielding a formal power series \begin{equation} \label{thetaseries} \theta = \sum_{\substack{I \\ j(I) \leq \#(I) - 1}} c_I \, \eta_I(S'), \end{equation} which we will then show converges. Indeed, writing \eqref{etaimformula} as \begin{equation} \label{solvefortheta} \theta = \T_{S'}(\theta) \df -\frac{1}{\w\chi} \left[\sum_{j = 2}^\infty c_{\0,j} \theta^j + \sum_{j = 0}^\infty \sum_{\substack{I\neq \0 \\ j(I) \leq j}} c_{I,j} \theta^j \, \eta_I(S')\right] \end{equation} yields a recursive formula for $\theta$. Taking the $n$th iterate yields \begin{equation} \label{iteraten} \theta = \T_{S'}^n(0) + O(\theta^{n + 1}) \end{equation} at least on the formal level. We claim that all nonzero terms $c_I \, \eta_I(S')$ on the right-hand side of \eqref{iteraten} satisfy $j(I) \leq \#(I) - 1$. Indeed, suppose that this is true for $n$, and we will prove it for $n+1$. Since $\sum_{p = 0}^\infty \Delta (Q\Delta)^p g$ is a series function, each term comes from an expression $c_{I,j} \theta^j \, \eta_I(S')$, where $j(I) \leq j$, and either $j\geq 2$ or $I\neq \0$. The induction hypothesis yields that if $c_{I'} \eta_{I'}(S')$ is a nonzero term in $\theta^j$ (corresponding to a term $c_{I,j} c_{I'} \eta_{I+I'}(S')$ on the right-hand side of \eqref{iteraten}), then $j(I') \leq \#(I') - j$ and thus $j(I + I') \leq \#(I')$. If $\#(I) \geq 1$, then $j(I + I') \leq \#(I + I') - 1$ and we are done. Otherwise, $I = \0$ and thus $j(I) = 0$, so $j(I + I') = j(I') \leq \#(I') - j < \#(I + I') - 1$. Thus, taking the limit of \eqref{iteraten} as $n\to\infty$ (with each coefficient eventually constant) yields \eqref{thetaseries}. To prove that the series \eqref{thetaseries} in fact converges, fix $\gamma > 0$ and take $a,b\in\AA$ with $\|a\|_\sigma,\|b\|_\sigma \leq \epsilon$ and $\|b - a\|_\sigma \leq \gamma$.
Then \[ \|\T_{S'}(b) - \T_{S'}(a)\|_\sigma \leq \sum_{I,j} |c_{I,j}| j\epsilon^{j - 1} \gamma \epsilon^{\Sigma(I)} \epsilon^{\#(I=0)} \lesssim \sum_j \frac{j\epsilon^{j-1}}{\kappa^j} \gamma ([j\geq 2] + \w\zeta) \lesssim (\epsilon + \w\zeta) \gamma \] and thus if $\epsilon,\w\zeta$ are small enough, we have $\|\T_{S'}(b) - \T_{S'}(a)\|_\sigma \leq (1/2) \|b - a\|_\sigma$. Moreover, $\|\T(0)\|_\sigma = \sum_{I\neq \0} |c_{I,0}| \cdot \epsilon^{\Sigma(I)} \w\zeta^{\#(I)} \leq C \w\zeta$ for some $C > 0$ and thus by induction, for all $n$ we have $\|\T^n(0)\|_\sigma \leq 2 C \w\zeta(1 - 2^{-n}) < \epsilon$ and $\|\T^{n + 1}(0) - \T^n(0)\|_\sigma \leq 2^{-n} C \epsilon$, as long as $\w\zeta \leq \epsilon/2C$. So the limit of $\T^n(0)$ exists and has $\|\cdot\|_\sigma$-norm $\leq 2\w\zeta$. } \section{Simplifications in special cases} \label{sectionvanishing} In some special cases, we can make some simplifications to \eqref{mainformula}. \begin{proposition} \label{propositionvanishing} Let $S,\delta,S'$ be as in Theorem \ref{maintheorem}. Then \eqref{mainformula} can be simplified in the following ways: \begin{itemize} \item[(i)] If $S = \emptyset$, then (possibly after renormalizing $g$ and $\mu$) we have $\alpha = 0$, $L_1 = 0$, $Q_1 = I$, $g = h$, $\mu = \nu$, $c = \w c = 1/\nu h$, $L = c g \mu$, $R = 0$, $Q = I$, and \begin{equation} \label{Semptyset} \Xi = \sum_{p \geq 1} \nu \beta^p h. \end{equation} \item[(ii)] If $v(0,\cdot) = 0$, then $\beta_{i,j} = 0$ for all $i,j$ with $j > i$. If furthermore $v(\cdot,0) = 0$, then $\nu \beta_{i,j} = 0$ for all $i,j$ with $j > 0$. If both hypotheses hold, and in addition $S = \emptyset$, then $c_{I,j,k} = 0$ whenever $j \geq \Sigma(I)$ and $(I,j) \neq (\0,0)$. In particular, $c_{I,j,k} = 0$ for all $I,j,k$ with $j \geq \Sigma(I)$ and $k = 0$, and for all $I,j,k$ with $j > \Sigma(I)$. \item[(iii)] If $v(0,\cdot) = 0$ and $v(\cdot,0) = 0$, then $\nu \beta_{i,j} h = 0$ for all $(i,j) \neq (0,0)$. If furthermore $S = \emptyset$, then $c_{I,j,0} = 0$ for all $I,j$ such that $\#(I) < 2$. \item[(iv)] If $\delta = 0$, then $\beta_{i,0} g = \beta_{i,0} h = 0$ for all $i > 0$. This implies that $c_{I,0,k} = 0$ for all $I,k$ such that $I\neq \0$. \end{itemize} \end{proposition} Since (i) is a straightforward consequence of the definitions, we proceed to the proofs of (ii)-(iv). \begin{proof}[Proof of (ii)] If $v(0,\cdot) = 0$, then we can write $v(b,x) = b w(b,x)$ for all $b,x$, for some holomorphic function $w$. Thus for all $i,j,f,x$ with $j > i$, \[ \beta_{i,j} f(x) = \frac1{j!}\text{\textbf{Coeff}}\left(b^i,e^{\delta v(b,x)} b^j w^j(b,x) f \circ u_b(x)\right) = 0. \] If furthermore $v(\cdot,0) = 0$, then for $i,j,f$ with $j > 0$, \[ \nu \beta_{i,j} f = \text{\textbf{Coeff}}\big(b^i\theta^j,e^{(\delta + \theta) v(b,0)} f(u_b(0))\big) = \text{\textbf{Coeff}}\big(b^i\theta^j, f(u_b(0))\big) = 0. \] If $v(0,\cdot) = 0$, $v(\cdot,0) = 0$, and $S = \emptyset$, then applying (i) gives us \[ \Delta = \beta = \xi h \nu + \sum_{i = 1}^\infty \sum_{j = 0}^i \eta_i \theta^j \beta_{i,j}, \] and \[ \mu \Delta = \nu \beta = \xi \nu h \nu + \sum_{i = 1}^\infty \eta_i \nu\beta_{i,0} . \] One proves by induction that for every $p \in \N$, \[ \nu \beta^p = \sum_{k=0}^\infty c_k \xi^k \nu + \sum_{I\in\MM(\N_{\geq 1})} \sum_{j=0}^{\#(I)-1} \sum_{k=0}^\infty \eta_I \theta^j \xi^k \sigma_{I,j,k} \] for suitable constants $c_k$ and functionals $\sigma_{I,j,k} \in \BB^*$. Then multiplying both sides of the above equation on the right by $h$ and using \eqref{Semptyset} leads to \[ c_{I,j,k} = 0 \] whenever $j \geq \Sigma(I)$ and $(I,j) \neq (\0,0)$.
\end{proof} \begin{proof}[Proof of (iii)] If $v(0,\cdot) = 0$, then $h = \one$. If furthermore $v(\cdot,0) = 0$, then for all $i,j$ \begin{equation*} \nu \beta_{i,j} h = \text{\textbf{Coeff}}\left(b^i \theta^j,e^{(\delta + \theta) v(b,0)} \one(u_b(0))\right) = \text{\textbf{Coeff}}(b^i \theta^j,1) = \big[i = j = 0\big]. \end{equation*} Here we use the Iverson bracket notation: $[\Phi] = 1$ when $\Phi$ is true and $[\Phi] = 0$ when $\Phi$ is false. By \eqref{betaseries}, it follows that $\nu \beta h = \xi$. If furthermore $S = \emptyset$, then combining with part (i) gives \[ \Xi = \xi + \sum_{p = 2}^\infty \nu \beta^p h, \] while by part (ii), all terms $(\w c + \xi)\, \theta^j \beta_{0,j}$ appearing in $\beta$ vanish, and thus $\beta$ is a sum of terms with factors $\xi$ and $\eta_i$. It follows that every term $c_{I,j,k} \, \eta_I \theta^j \xi^k$ appearing in the above series satisfies $\#(I) + k \geq 2$. In particular, if $\#(I) < 2$ then $c_{I,j,0} = 0$. \end{proof} \begin{proof}[Proof of (iv)] If $\delta = 0$ then $L_1 \one = \#(S) \one$ and $h = \one$, and thus $L \one = (\#(S) + \w c\,)\one$. It follows that $\#(S) + \w c = 1$ and (after renormalizing) $g = \one$. Now for all $i$, \[ \beta_{i,0} \one(x) = \text{\textbf{Coeff}}\left(b^i,e^{(0 + 0) v(b,x)} \one \circ u_b(x)\right) = \text{\textbf{Coeff}}(b^i,1) = \big[i = 0\big]. \] Now, the coefficient $c_{I,0,k}$ is the sum of all products of the form $\mu \beta_{i_1,0} Q \cdots Q \beta_{i_p,0} g$ such that $k = \#(\ell : i_\ell = 0)$ and $I(i) = \#(\ell : i_\ell = i)$ for all $i$. For each such product, either $i_1 = \ldots = i_p = 0$, in which case $I = \0$, or there exists $\ell = 1,\ldots,p$ such that $i_\ell > 0$, and either $\beta_{i_\ell,0} Q \beta_{0,0}$ or $\beta_{i_\ell,0} g$ is a factor of the term in question. But since $Q \beta_{0,0} = Q h \nu = \one \nu$ and $g = \one$, both of these factors vanish, and thus $c_{I,0,k} = 0$. \end{proof} The next two propositions prove slightly different things. Proposition \ref{propositionrationalv2} has the advantage that it applies to perturbations of a similarity IFS with more than one element. Proposition \ref{propositionrational} has the advantage that it applies to a larger class of non-Gauss PACIFSes, such as those defined over the system $(U,V,u,v,q)$ where $U = V = (-1/3,1/3)$, $u(b,x) = \frac{b}{1 + x}$, $v(b,x) = -2\log(1 + x)$, and $q = 1$. In what follows, recall that $\Q[x]$ denotes the ring of polynomials in the variable $x$ with coefficients in $\Q$. \begin{proposition} \label{propositionrationalv2} Let $S,\delta,S'$ be as in Theorem \ref{maintheorem}, and suppose that $S$ is finite, $(u_b)_{b\in S}$ consists entirely of similarities, $v(0,\cdot) = 0$, and $\text{\textbf{Coeff}}(b^i, u_b(x)) \in \Q[x]$ for all $i$. Let \[ R \df \Q(\lambda_b,\lambda_b^\delta)[\log(\lambda_b),c_b,\delta] \] where $\lambda_b$ is the contraction ratio of $u_b$, $c_b = u_b(0)$, and the extensions are taken over all $b\in S$. Then $c_{I,j,k} \in R$ for all $I,j,k$. In particular, if $S = \emptyset$, then $c_{I,j,k} \in \Q[\delta]$ for all $I,j,k$. \end{proposition} \begin{proof} We claim that $\alpha_j,\beta_{i,j},Q$ preserve $R[x]$, and that $\mu$ sends $R[x]$ to $R$.
Indeed, if $f_k(x) = x^k$, then \[ \alpha_j f_k(x) = \frac1{j!} \sum_{b\in S} \lambda_b^\delta (\pm\lambda_b x + c_b)^k \log^j(\lambda_b) \in R[x] \] and in particular $L f_k(x) = \alpha_0 f_k(x)$ is a polynomial of degree $k$ with leading coefficient $a_k = \sum_{b\in S} \pm\lambda_b^{k + \delta} \in \Q[\lambda_b,\lambda_b^\delta]$, which by the Moran--Hutchinson equation (e.g. \cite{Hutchinson}) satisfies $|a_k| < 1$ when $k > 0$. Since $\mu L = \mu$, we have \[ \mu f_k = \mu L f_k = a_k \mu f_k + \sum_{k' < k} a_{k'}^{(k)} \mu f_{k'} \] which, together with the equality $\mu \one = 1$ (cf. \eqref{normalization}), yields a recursive formula for $\mu f_k$ proving that $\mu f_k \in R$, and thus $\mu f \in R$ for all $f\in R[x]$. Similarly, $Q = Q L + I - c g \mu$ and thus since $g = \one = f_0$ and $c = 1$,\Footnote{It is easy to see that $L \one = \one$, so the normalization \eqref{normalization} guarantees $g = \one$.} \[ Q f_k = a_k Q f_k + \sum_{k' < k} a_{k'}^{(k)} Q f_{k'} + f_k - f_0 \mu f_k \] which yields a recursive formula for $Q f_k$ proving that $Q f_k(x) \in R[x]$, and thus that $Q$ preserves $R[x]$. Now since $v(0,\cdot) = 0$, \[ \beta_{i,j} f_k(x) = \sum_{i' = 0}^i \frac1{i'!} \text{\textbf{Coeff}}\big(\theta^j,(\delta + \theta)^{i'}\big) \text{\textbf{Coeff}}\big(b^i,b^{i'} w^{i'}(b,x) u_b^k(x)\big) \] where $v(b,x) = b w(b,x)$ as above. Since $\text{\textbf{Coeff}}(b^i, u_b(x)) \in \Q[x]$, we have $\text{\textbf{Coeff}}(b^i,u_b'(x)) \in \Q[x]$ and thus $\text{\textbf{Coeff}}(b^i,e^{v(b,x)}) \in \Q[x]$ for all $i$. Since $e^{v(0,x)} = 1$, the Taylor expansion of $\log(x)$ around $x = 1$ shows that $\text{\textbf{Coeff}}(b^i,v(b,x)) \in \Q[x]$ for all $i$. Thus $\beta_{i,j}$ preserves $R[x]$, which completes the proof. \end{proof} \begin{definition} Let $R$ be a subring of $\R$, e.g. $\Q$. An analytic function $f:U\to \R$, where $U$ is a neighborhood of $\0$ in $\R^d$, is \emph{$R$-analytic} if the coefficients of the Taylor expansion of $f$ at $\0$ are all in $R$. \end{definition} Note that $R$-analyticity is highly sensitive to the point that the Taylor series is expanded around; if $f$ is $R$-analytic then $x\mapsto f(x - a)$ may not be $R$-analytic even if $a\in R^d$. \begin{proposition} \label{propositionrational} Let $S,\delta,S'$ be as in Theorem \ref{maintheorem}, and suppose that $S = \emptyset$, that $v(0,0) = 0$, and that $\text{\textbf{Coeff}}(b^i x^k , u_b(x)) \in \Q$ for all $i,k$. Then $c_{I,j,k} \in \Q[\delta]$ for all $I,j,k$. \end{proposition} \begin{proof} By Proposition \ref{propositionvanishing}(i), $\Xi = \sum_{p\geq 1} \nu \beta^p h$. Thus, all coefficients $c_{I,j,k}$ in \eqref{mainformula} can be written as linear combinations of expressions of the form $\nu \beta_{i_1,j_1}\cdots \beta_{i_p,j_p} h$, where $p \geq 1$. In turn, we can write \[ \beta_{i,j} = \sum_{i' = 0}^i \sum_{k = 0}^{i'} M_{\psi_{i',k}} f_{i - i',j} \sigma_k, \] where \begin{align*} f_{i',j}(x) &= \text{\textbf{Coeff}}\left( b^{i'} \theta^j , e^{(\delta + \theta) v(b,x)}\right),\\ \sigma_k f &= \text{\textbf{Coeff}}\big(x^k, f(x)\big),\\ \psi_{i',k}(x) &= \text{\textbf{Coeff}}\big(b^{i'}, u_b^k(x)\big). \end{align*} Thus, all coefficients can be written as linear combinations of products of expressions of the form $\sigma_i M_{\psi_{j,k}} f_{\ell,m}$. Here we use the fact that $\sigma_0 = \nu$ and $f_{0,0} = h$.
To show that $\sigma_i M_{\psi_{j,k}} f_{\ell,m} \in \Q[\delta]$, first note that since $u$ is $\Q$-analytic and $v(0,0) = 0$, it follows that $(b,x) \mapsto e^{v(b,x)} = \pm b^{-q} u_b'(x)$ is $\Q$-analytic and sends $(0,0)$ to $1$; thus $v$ is $\Q$-analytic. Again using the fact that $v(0,0) = 0$, it follows that $(b,\theta,x) \mapsto e^{(\delta + \theta) v(b,x)}$ is $\Q[\delta]$-analytic. This shows that $f_{\ell,m}$ and $\psi_{j,k}$, and thus $M_{\psi_{j,k}} f_{\ell,m}$, are $\Q[\delta]$-analytic, so $\sigma_i M_{\psi_{j,k}} f_{\ell,m}$, i.e. the $i$th coefficient of $M_{\psi_{j,k}} f_{\ell,m}$, is in $\Q[\delta]$. \end{proof} The next proposition is not strictly necessary for our purposes, but shows how our formula is a generalization of the Moran--Hutchinson equation. \begin{proposition} \label{propositionhutchinson} If $(u_b)_{b\in U}$ consists entirely of similarities, then \eqref{mainformula} reduces to the Moran--Hutchinson equation $e^{P'} = 1$, where $P' = P(S',s)$. Specifically, \[ \Xi = \frac{e^{P'} - 1}{2 - e^{P'}} \cdot \] \end{proposition} \begin{proof} All the operators $A = L,Q,R,\alpha_j,\beta_{i,j}$ satisfy $A\one = [A]\one$ for some $[A]\in\R$, and the map $A\mapsto [A]$ is a ring homomorphism. Similarly, if we write $[\sigma] = \sigma \one$ and $[r \one] = r$, then we get \[ \Xi = \sum_{p = 0}^\infty [\nu] [\Delta] ([Q] [\Delta])^p [g] = \frac{[\Delta]}{1 - [Q] [\Delta]}, \] since \eqref{normalization} implies $[g] = [\nu] = 1$. Moreover, since $L g = g$, we have $[L] = 1$ and thus $[R] = 0$, $[Q] = 1$. So $[\Delta] = [L'] - 1 = e^{P'} - 1$, which completes the proof. \end{proof} \section{Solving \eqref{mainformula}: a motivating computation} \label{sectionlogloglog} Although the proof of Theorem \ref{maintheorem} shows that \eqref{operatorequation} can be converted into the power series equation \eqref{mainformula}, there remains the question of how to solve this equation for $\theta$, particularly since $\eta_i$ and $\xi$ both depend on $\theta$. In some cases this is relatively easy, but in others more tools are needed. In this section we develop a tool that will help us solve for $\theta$ in the more difficult cases. We start by considering a special case consisting of similarities, so that we can use the simpler Moran--Hutchinson equation in place of \eqref{mainformula}. Namely, for each $\lambda,B > 0$ with $\lambda + B \leq 1$, we can consider a similarity IFS on $\R$ consisting of two elements with contraction ratios $\lambda$ and $B$, with distinct fixed points. For $B$ small, this system is a perturbation of the system consisting of only one similarity contraction of contraction ratio $\lambda < 1$. On the other hand, the dimension $\theta$ of the limit set of this IFS is given by the Moran--Hutchinson equation: \[ \lambda^\theta + B^\theta = 1. \] We want to analyze the behavior of $\theta$ when $\lambda$ is fixed and $B \to 0$. To this end, we note that \begin{equation} \label{Btheta} B^\theta = 1 - \lambda^\theta = \theta f(\theta) \end{equation} for an analytic function $f$ depending on $\lambda$ such that $f(0) = \log(1/\lambda) > 0$. Taking logarithms yields \[ -C\theta = \log(\theta) + \log f(\theta), \] where here and in the rest of this section we use the notation \begin{equation} \label{CDE} \begin{split} C &\df -\log(B) = \log(1/B),\\ D &\df \log(C) = \log\log(1/B),\\ E &\df \log(D) = \log\log\log(1/B).
\end{split} \end{equation} We now follow the heuristic of changing variables in such a way that the new variable is ``closer to being bounded from above and below'' than the previous variable. Thus, let $\gamma \df C\theta > 0$. Then \[ \gamma = -\log(\gamma/C) - \log f(\theta) = D - \log(\gamma) - \log f(\theta). \] Let $\beta \df D - \gamma$. Then \[ \beta = \log(D - \beta) + \log f(\theta) = E + \log(1 - \beta/D) + \log f(\theta). \] Let $\alpha \df \beta - E$, and note that \begin{equation} \label{thetaalpha} \theta = \frac{D - E - \alpha}{C} \cdot \end{equation} Then \begin{equation} \label{alpharecursive} \alpha = \log\left(1 - \frac{E}{D} - \frac{\alpha}{D}\right) + \log f\left(\frac{D - E - \alpha}{C}\right) \end{equation} and thus \begin{equation} \label{alphaF} \alpha = F\left(\frac ED,\frac 1D,\frac DC,\alpha\right), \end{equation} where \[ F(x,y,z,w) \df \log(1 - x - y w) + \log f\big(z(1 - x - y w)\big). \] This concludes our change of variables, and we have rewritten \eqref{Btheta} as \eqref{alphaF}. Note that $F(\0,w) = \alpha_0 \df \log f(0)$ and $F_{|4}(\0,w) = 0$ for all $w\in \R$, and $F$ is analytic on a neighborhood of $\{0\}^3 \times \R$. Thus, $\alpha_0 - F(\0,\alpha_0) = 0$ and $\frac{\del}{\del w} [w - F(0,0,0,w)] = 1$. So by the implicit function theorem, the equation $w - F(x,y,z,w) = 0$ can be solved analytically for $w$ in terms of $(x,y,z)$ in a neighborhood of $(0,0,0)$. Since $\frac ED,\frac 1D,\frac DC \to 0$ as $B \to 0$, it follows that $\alpha$ can be written as a power series in $\frac ED,\frac 1D,\frac DC$ whenever $B$ is sufficiently small. By \eqref{thetaalpha}, we have \[ \theta = \frac{1}{C}\left[D - E + \sum_{j = 0}^\infty \sum_{k = -j}^\infty \sum_{\ell = 0}^{j + k} c_{j,k,\ell} \frac{E^\ell}{C^j D^k}\right]. \] The coefficients $c_{j,k,\ell}$ can be computed recursively using the formula \eqref{alpharecursive}. The following lemma generalizes the above calculation: \begin{lemma} \label{lemmathetageneral} If $f:(\aa,\theta,\xi)\mapsto f(\aa,\theta,\xi)$ is analytic in a neighborhood of $A\times \{(0,0)\} $, with $f(\aa,0,0) > 0$ for all $\aa\in A \subset \R^d$, then for all $B$ sufficiently small and for all $\aa \in A$, the equation \begin{equation} \label{thetageneral} B^\theta = \theta f(\aa,\theta,B^\theta) \end{equation} has a unique solution: \begin{equation} \label{thetageneralsolved} \theta = \frac1C \left[D - E - \sum_{j = 0}^\infty \sum_{k = -j}^\infty \sum_{\ell = 0}^{j + k} f_{j,k,\ell}(\aa) \frac{E^\ell}{C^j D^k} \right] \end{equation} (cf. \eqref{CDE}) for some functions $f_{j,k,\ell}$ analytic on a complex neighborhood $A_\C$ of $A$, such that \[ \|f_{j,k,\ell}\| \lesssim \epsilon^{-(j + k + \ell)} \] for some $\epsilon > 0$. If $f$ is constant then \[ \theta = \frac1C \left[D - E - \sum_{k = 0}^\infty \sum_{\ell = 0}^k c_{k,\ell} \frac{E^\ell}{D^k} \right] \] and if $f(\aa,0,0) = 1$ then $f_{0,k,0}(\aa) = 0$ for all $k$. If $f(\aa,0,0) = 1$ and $f$ is $\Q$-analytic, then so are $f_{j,k,\ell}$. \end{lemma} \begin{proof} First note that if $\alpha$ is defined as the unique solution to \eqref{thetaalpha}, i.e. $\alpha \df D - E - C\theta$, then \[ B^\theta = e^{-(D - E - \alpha)} = \frac DC e^\alpha. \] Thus, repeating the above calculations shows that the equation \eqref{thetageneral} is equivalent to \[ \alpha = F\left(\aa,\frac ED,\frac 1D,\frac DC,\alpha\right), \] where \[ F(\aa,x,y,z,w) \df \log(1 - x - y w) + \log f\big(\aa, z(1 - x - y w), z e^w\big).
\] As before, we have $F(\aa,\0,w) = \alpha_0(\aa) \df \log f(\aa,0,0)$ and $F_{|5}(\aa,\0,w) = 0$ for all $w\in\R$, and $F$ is analytic in a neighborhood of $A \times \{\0\} \times \R$. So as before, by the implicit function theorem the equation $w = F(\aa,x,y,z,w)$ can be solved analytically for $w$ in terms of $(\aa,x,y,z)$ in a neighborhood of $A\times \{\0\}$, and thus $\alpha$ can be written as a power series in $\frac ED,\frac 1D,\frac DC$ with coefficients in $\H(A_\C)$ whenever $B$ is sufficiently small, for some complex neighborhood $A_\C$ of $A$. Applying \eqref{thetaalpha} demonstrates \eqref{thetageneralsolved}. If $f$ is constant, then $F(\aa,x,y,z,w)$ is constant with respect to $\aa,z$, so $\alpha$ can be written as a power series in $\frac ED,\frac 1D$. If $f(\aa,0,0) = 1$, then $F(\aa,0,y,0,0) = 0$, which implies that the solution of $w = F(\aa,x,y,z,w)$ vanishes on the line $x = z = 0$. It follows that $f_{0,k,0}(\aa) = 0$ for all $k$. \end{proof} \section{Gauss IFS Examples} \label{sectiongauss} We now use Theorem \ref{maintheorem} to compute and estimate the Hausdorff dimensions of sets of the form \[ F_E = \{[0;n_1,n_2,\ldots] : n_1,n_2,\ldots \in E\} \] where $E \subset \N$, and $[0;n_1,n_2,\ldots]$ represents the continued fraction expansion with partial quotients $n_1,n_2,\ldots$. To this end we define a tuple $(U,V,u,v,q)$ as in Definition \ref{definitionPACIFS}, so that each set $E \subset \N$ corresponds to an OSC PACIFS \[ S(E) \df \{ 1/n : n \in E \} \] whose limit set is $F_E$. Let $U = V = (-\epsilon,1 + \epsilon)$ with $0 < \epsilon < 1$ and consider the map $u: U\times V \to V$ defined by \[ u(b,x) \df \frac{b}{1+bx} \cdot \] This family is not uniformly contracting since $u_1'(0) = -1$, but after conjugating by e.g. the map $\phi(x) = 1/(1+x)$, the family $(\phi\circ u_b\circ \phi^{-1})_{b\in U}$ is uniformly contracting, so the same results apply as for OSC PACIFSes. \begin{lemma} For $E \subset \N$, we have $\Lambda_{S(E)} = F_E$. \end{lemma} \begin{proof} For each $n\in E$ we have $u_{1/n}(x) = 1/(n + x)$, and thus for each sequence $n_1,n_2,\ldots$, \[ \pi(1/n_1,1/n_2,\ldots) = [0;n_1,n_2,\ldots]. \qedhere\] \end{proof} \begin{remark} \label{remarkgaussvalues} We have \begin{align*} v(b,x) &= -2\log(1 + bx), & q &= 2, & h(x) &= 1. \end{align*} Note that $v(0,\cdot) = 0$ and $v(\cdot,0) = 0$ and that $u$ is $\Q$-analytic. If $E = \N$, then $\delta = 1$, and since $\mu h = \nu g = 1$, we have $g(x) = 1/(1 + x)$, and $\mu$ is the Lebesgue measure on $[0,1]$. In this case, we have $1/c = \mu g = \log(2)$.\Footnote{Note that $g$ is usually normalized so that $\mu g = 1$, i.e. $g(x) = \frac{1}{\log(2)(1 + x)}$; however, we find the normalization \eqref{normalization} more convenient.} \end{remark} In what follows we will consider various sequences of sets $E_N \to E$ (we recall that this notation means that the characteristic functions converge pointwise). In each case we let $S = S(E)$ and $S' = S_N = S(E_N)$. Similarly, we write \begin{align*} \delta_E &\df \delta_S, & \delta_N &\df \delta_{S_N}, & \delta &\df \lim_{N\to\infty} \delta_N,\\ \theta &= \delta_N - \delta, & P(E,s) & \df P(S(E),s), & F_N &\df F_{E_N} = \Lambda_{S_N}. \end{align*} Note that \begin{align*} \eta_i &= \sum_{n\in E_N - E} \frac{1}{n^{2(\delta + \theta) + i}} \cdot \end{align*} Our goal is now to express $\eta_i$ in terms of $N$ and $\theta$, as well as information about the sequence $(E_N)$. To do this we consider various cases for the sequence $(E_N)$.
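\begin{remark*} As a quick sanity check of the correspondence in the lemma above, the following minimal Python sketch (our illustration; it plays no role in the proofs) evaluates finite compositions of the maps $u_{1/n}(x) = 1/(n + x)$ and recovers two classical continued fraction values.
\begin{verbatim}
def cf(partial_quotients):
    # Evaluate [0; n_1, ..., n_k] by composing u_{1/n}(x) = 1/(n + x),
    # innermost map first, starting from x = 0.
    x = 0.0
    for n in reversed(partial_quotients):
        x = 1.0 / (n + x)
    return x

print(cf([1] * 40))  # [0;1,1,1,...] -> (sqrt(5) - 1)/2 = 0.6180339887...
print(cf([2] * 40))  # [0;2,2,2,...] -> sqrt(2) - 1     = 0.4142135623...
\end{verbatim}
Forty iterations already determine these values to machine precision, reflecting the uniform contraction of the conjugated family. \end{remark*}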
\subsection{Polynomial sequences} The two sequences considered in the introduction, \begin{align*} E_N &= \{1,\ldots,N\} \to E = \N, & E_N &= \{N,N+1,\ldots\} \to E = \emptyset, \end{align*} share the property that the symmetric difference $E\triangle E_N$ is a tail of $\N$. Evidently, this means we need similar methods to compute or estimate $\eta_i$ in these two cases; specifically, we need the Euler--Maclaurin formula. It turns out that the Euler--Maclaurin formula is also useful in the more general case where $E\triangle E_N$ is the tail of a polynomial sequence in $\N$ (and $(E_N)$ is either ascending or descending). \begin{theorem}[Euler--Maclaurin formula, {\cite{Apostol,Lehmer}}] \label{theoremEM} Given natural numbers $M<N$ and $p$, let $f$ be a $p$-times continuously differentiable function defined on $[M,N]$. Then \[ \sum_{n = M}^N f(n) = \int_M^N f(x) \; \dee x + \frac{f(M) + f(N)}{2} + \sum_{i = 1}^{\lfloor p/2 \rfloor} \frac{B_{2i}}{(2i)!} \big( f^{(2i - 1)}(N) - f^{(2i - 1)}(M) \big) + R_p, \] where $(B_i)$ is the sequence of Bernoulli numbers, and \[ |R_p| \leq \frac{2\zeta(p)}{(2\pi)^p} \int_M^N |f^{(p)}(x)| \; \dee x. \] \end{theorem} For $\alpha > 0$, letting $f(x) = x^{-(1 + \alpha)}$ and taking the limit as $N\to\infty$ yields the following corollary: \begin{corollary*} Fix $p \in \N$, $C < \infty$, and $\alpha > 0$ such that $|\alpha| \leq C$. Then for all $N\in\N$, \begin{align*} \sum_{n \succneq N} \frac{1}{n^{1+\alpha}} &= \frac{N^{-\alpha}}{\alpha} + \sum_{i=1}^{p - 1} P_i^\pm(\alpha) N^{-(i + \alpha)} + O_{p,C}( N^{-(p + \alpha)}) \end{align*} where $\succneq$ can be taken to mean either $\geq$ or $>$, with the $\pm$ on the right-hand side depending on which choice is made, and $(P_i^\pm)_{i\geq 1}$ is an explicit sequence of polynomials (with rational coefficients): \[ P_i^\pm(\alpha) = \frac{B_i^\pm}{i!} \binom{-(1 + \alpha)}{i - 1} \] where we use the convention $B_1^+ = 1/2$, $B_1^- = -1/2$ for the Bernoulli sequence. Note that $P_1^\pm(\alpha) = \pm 1/2$. \end{corollary*} By taking a formal limit as $p\to \infty$ we can think of this as an \emph{asymptotic expansion} of the sum $\sum_{n \succneq N} \frac{1}{n^{1+\alpha}}$: \begin{equation} \label{EM} \sum_{n \succneq N} \frac{1}{n^{1+\alpha}} \equiv \frac{N^{-\alpha}}{\alpha} + \sum_{i = 1}^{\to\infty} P_i^\pm(\alpha) N^{-(i + \alpha)} \end{equation} where $A \equiv \sum_{i = i_0}^{\to\infty} a_i x_i$ means that for all $p \geq i_0$, \[ A = \sum_{i = i_0}^{p - 1} a_i x_i + O_p(x_p).\Footnote{In the sequel, multiple summations are handled as follows: $A \equiv \sum_{i = i_0}^{\to\infty} \sum_{j = j_0}^{\to\infty} a_{ij} x_i y_j$ means that for all $p \geq i_0$, $q\geq j_0$, \[A = \sum_{i = i_0}^{p - 1} \sum_{j = j_0}^{q - 1} a_{ij} x_i y_j + O_p(x_p) + O_q(y_q).\]} \] However, note that the series \eqref{EM} does not actually converge (due to the explosion of the sequence of Bernoulli coefficients $(B_i)_{i = 1}^\infty$). Now let $(s_n)$ be a sequence defined by a polynomial of degree $d$ and leading coefficient $a > 0$, say \[ s_n = a (n^d + b_1 n^{d - 1} + \ldots + b_d). \] Fix $\alpha > 0$. Then for all $n$ sufficiently large, \begin{align*} \frac{1}{s_n^{(1 + \alpha)/d}} &= \frac{(1 + b_1 n^{-1} + \ldots + b_d n^{-d})^{-(1+\alpha)/d}}{a^{(1+\alpha)/d} n^{1 + \alpha}} = \sum_{i = 0}^\infty \frac{f_i(\alpha)}{n^{1 + i + \alpha}} \end{align*} for some entire functions $f_i$, with $f_0(\alpha) = a^{-(1 + \alpha)/d}$. 
Applying \eqref{EM} shows that \begin{equation} \label{EM2} \sum_{n \succneq N} \frac{1}{s_n^{(1+\alpha)/d}} \equiv a^{-(1 + \alpha)/d} \frac{N^{-\alpha}}{\alpha} + \sum_{i = 1}^{\to\infty} \what f_i(\alpha) N^{-(i + \alpha)} \end{equation} for some functions $\what f_i$ holomorphic on $\C_{> -1}$, where for each $r\in\R$, \[ \C_{> r} \df \{z\in \C : \Re z > r\}. \] We are now ready to prove Theorems \ref{theoremFleqN} and \ref{theoremFgeqN} from the introduction: \begin{proposition} \label{propositionpolysequence} Let $(s_n)$ be a polynomial sequence of degree $d$, let $F \subset \N$ be an empty or strongly regular set disjoint from $\{s_M,s_{M+1},\ldots\}$ for some $M$, and consider the following cases: \begin{align} \label{goingup} \tag{$\nearrow$} E_N &= F \cup \{s_M,\ldots,s_N\} \hspace{0.1 in} \to E = F \cup \{s_M,s_{M+1},\ldots\}\\ \label{goingdown} \tag{$\searrow$} E_N &= F \cup \{s_N,s_{N+1},\ldots\} \to E = F \end{align} Note that $\delta = \lim\delta_N = \max(\delta_E,1/(2d))$, and for convenience of notation write ${\wbar\delta} = 2d\delta - 1 \geq 0$ and ${\wbar\theta} = 2d\theta$. \begin{itemize} \item[(i)] Suppose $\delta > 1/(2d)$, i.e. ${\wbar\delta} > 0$. Then \begin{equation} \label{HDFleqNgeneral} \HD(F_N) = \delta_N \equiv \delta + \sum_{i = 0}^{\to\infty} \sum_{k = 1}^{\to\infty} \sum_{j = 0}^{k - 1} c_{i,k,j} \frac{\log^j(N)}{N^{i + k{\wbar\delta}}}, \end{equation} with $c_{0,1,0} = \pm 1/\w\chi$. (Here and hereafter $\pm$ represents $+$ in the case \eqref{goingdown}, and $-$ in the case \eqref{goingup}.) In the case \eqref{goingup}, if $F = \emptyset$, $s_n = n$, and $M = 1$, then \eqref{HDFleqNgeneral} reduces to \eqref{HDFleqN}. \item[(ii)] Suppose $\delta = 1/(2d)$, i.e. ${\wbar\delta} = 0$. Then \begin{equation} \label{HDFgeqNgeneral} \begin{split} \HD(F_N) = \delta_N \equiv \frac{1}{2d} & + \frac{1}{A d\log(N)}\left[\log\log(N) - \log\log\log(N)\phantom{\sum_{j=0}^\infty}\right.\\ &\left.+ \sum_{i = 0}^{\to\infty} \sum_{j = [i > 0]}^\infty \sum_{k = -j}^\infty \sum_{\ell = 0}^{j + k} c_{i,j,k,\ell} \frac{\log^\ell\log\log(N)}{N^i \log^j(N) \log^k\log(N)} \right] \end{split} \end{equation} for some constants $c_{i,j,k,\ell}$, where $A = 2$ if $\w c > 0$ and $A = 1$ if $\w c = 0$. If $F = \emptyset$ and $a = 1$, then $c_{i,j,k,\ell} \in \Q$ for all $i,j,k,\ell$. In the case \eqref{goingdown}, if $F = \emptyset$ and $s_n = n$, then \eqref{HDFgeqNgeneral} reduces to \eqref{HDFgeqN}, and the constants $c_{k,\ell}$ and $c_{i,j,k,\ell}$ have the values specified in Theorem \ref{theoremFgeqN}. \end{itemize} \end{proposition} Parts (i) and (ii) imply Theorems \ref{theoremFleqN} and \ref{theoremFgeqN}, respectively. \begin{proof} First we observe by direct calculation that for all $N$, the sets $\{s_M,\ldots,s_N\}$ and $\{s_N,s_{N + 1},\ldots\}$ are strongly regular. Since the union of two strongly regular sets is strongly regular, it follows that $E$ is empty or strongly regular, and the $E_N$s are strongly regular. Thus Theorem \ref{maintheorem} applies, and \eqref{mainformula} holds.
By \eqref{EM2}, for all $i'\geq 0$ we have \begin{equation} \label{etai'} \begin{split} \eta_{i'} &= \pm\sum_{n\succneq N} \frac1{s_n^{2(\delta + \theta) + i'}} = \pm\sum_{n\succneq N} \frac1{s_n^{(1 + d i' + {\wbar\delta} + {\wbar\theta})/d}}\\ &\equiv \pm a^{-(1 + d i' + {\wbar\delta} + {\wbar\theta})/d} \frac{N^{-(d i' + {\wbar\delta} + {\wbar\theta})}}{d i' + {\wbar\delta} + {\wbar\theta}} + \sum_{i = 1}^{\to\infty} \what f_i(d i' + {\wbar\delta} + {\wbar\theta}) N^{-(i + d i' + {\wbar\delta} + {\wbar\theta})}\\ &= N^{-({\wbar\delta} + {\wbar\theta})} \begin{cases} \frac{a^{-(1 + {\wbar\theta})/d}}{{\wbar\theta}} + \sum_{i = 1}^{\to\infty} F_i^{(0)}({\wbar\theta}) N^{-i} & i' = {\wbar\delta} = 0\\ \sum_{i = d i'}^{\to\infty} F_i^{(i')}({\wbar\theta}) N^{-i} & d i' + {\wbar\delta} > 0, \end{cases} \end{split} \end{equation} where $F_i^{(i')}$ is analytic on $W \df \C_{> -t}$, where $t = {\wbar\delta}$ if ${\wbar\delta} > 0$ and $t = 1$ if ${\wbar\delta} = 0$. The reason the first term in the case $i' = {\wbar\delta} = 0$ is positive is that in the case \eqref{goingup}, direct calculation shows that $\delta = \delta_E > 1/(2d)$ and thus ${\wbar\delta} > 0$. In what follows we assume that $\epsilon < t$, so that $B_\C(0,\epsilon) \subset W$. It follows that \begin{equation} \label{etaI} \eta_I \equiv N^{-\#(I)({\wbar\delta} + {\wbar\theta})} \sum_{i = d \Sigma(I)}^{\to\infty} F_i^{(I)}({\wbar\theta}) N^{-i} \end{equation} with $F_i^{(I)}$ analytic on $W$, for all $I\in \MM(\N)$ if ${\wbar\delta} > 0$ and for all $I \in \MM(\N_{\geq 1})$ if ${\wbar\delta} = 0$. We now consider the two cases (i) and (ii). \begin{itemize} \item[(i)] Suppose ${\wbar\delta} > 0$. Then since $E$ is regular and $\delta = \delta_E$, we have $\w c = 0$ and thus \eqref{mainformula2} holds. Note that $\eta \leq s_N^{-1} \lesssim N^{-d}$ and \[ |\eta_0| \lesssim N^{-({\wbar\delta} + {\wbar\theta})} \leq N^{-{\wbar\delta}/2} \] for all sufficiently large $N$. Let $C$ denote the implied constant. Then for all $I,j$, by \eqref{cijkboundsv2} we have \[ |c_{I,j} \, \eta_I \theta^j| \leq \epsilon^{-(\Sigma(I) + I(0) + j)} C^{\Sigma(I) + I(0)} N^{-(d\Sigma(I) + ({\wbar\delta}/2) I(0))} (\epsilon/2)^j \] assuming $N$ is sufficiently large. Now fix $p,q\in\N$. Since \[ |c_{I,j} \, \eta_I \theta^j| \leq \begin{cases} N^{-d p} 2^{-(\Sigma(I) + I(0) + j)} & \Sigma(I) \geq p \\ N^{-({\wbar\delta}/2) q} 2^{-(\Sigma(I) + I(0) + j)} & I(0) \geq q \end{cases} \] as long as $N$ is sufficiently large, by \eqref{mainformula2} we have \[ \Xi = \sum_{\substack{I \in \MM(\N) \\ \Sigma(I) < p \\ I(0) < q}} \sum_{j = 0}^\infty c_{I,j} \, \eta_I \theta^j + O\left(N^{-d p} + N^{-({\wbar\delta}/2) q}\right) \] and by \eqref{cijkboundsv2}, for each $I$ the function $F_I(\theta) \df \sum_j c_{I,j} \theta^j$ is analytic on (a neighborhood of) $B_\C(0,\epsilon/2) \subset W$. Combining with \eqref{etaI} and using the fact that $\eta_\0 = 1$, $c_{\0,0} = 0$, and $c_{\0,1} = -\w\chi$ gives \begin{align*} \Xi \equiv -\w\chi \theta + \sum_{j = 2}^\infty c_{\0,j} \theta^j + \sum_{i = 0}^{\to\infty} \sum_{k = 1}^{\to\infty} F_{i,k}(\theta) N^{-(i + k({\wbar\delta} + {\wbar\theta}))} \end{align*} where the functions \[ F_{i,k} = \sum_{\substack{I \in \MM(\N)\\ d\Sigma(I) \leq i\\ \#(I) = k}} F_I F_i^{(I)} \] are analytic on (a neighborhood of) $B_\C(0,\epsilon/2)$. Thus, the equation \eqref{mainformula} (i.e.
$\Xi = 0$) can be solved for $\theta$ as \[ \theta \equiv \frac1{\w\chi}\left[\sum_{j = 2}^\infty c_{\0,j} \theta^j + \sum_{i = 0}^{\to\infty} \sum_{k = 1}^{\to\infty} F_{i,k}(\theta) \exp\big(-\hspace{-0.03in}k\,{\wbar\theta}\log(N)\big) N^{-(i + k{\wbar\delta})} \right]. \] Letting $\gamma = N^{{\wbar\delta}} \theta$, we have \[ \gamma \equiv \sum_{j = 1}^\infty c_{\0,j + 1} \gamma^{j + 1} N^{-j {\wbar\delta}} + \sum_{i = 0}^{\to\infty} \sum_{k = 0}^{\to\infty} F_{i,k + 1}\left(\frac{1}{N^{\wbar\delta}} \gamma\right) \exp\left(-2d(k + 1)\frac{\log(N)}{N^{\wbar\delta}}\gamma\right) N^{-(i + k {\wbar\delta})}. \] Solving for $\gamma$ in terms of $N^{-1}$, $N^{-{\wbar\delta}}$, and $N^{-{\wbar\delta}} \log(N)$, and then multiplying by $N^{-{\wbar\delta}}$ yields \[ \theta \equiv \sum_{i = 0}^{\to\infty} \sum_{k = 1}^{\to\infty} \sum_{j = 0}^{k - 1} c_{i,k,j} \frac{\log^j(N)}{N^{i + k{\wbar\delta}}}\:\cdot \] Note that $\w\chi c_{0,1,0} = F_{0,1}(0) = F_{\{0\}}(0)\, F_0^{(\{0\})}(0) = c_{\{0\},0}\, F_0^{(\{0\})}(0) = F_0^{(\{0\})}(0) = \pm 1$, so $c_{0,1,0} = \pm 1/\w\chi$. In the case \eqref{goingup}, if $F = \emptyset$, $s_n = n$, and $M = 1$, then $d = \delta = 1$ and thus ${\wbar\delta} = 1$, so \[ \HD(F_{\leq N}) \equiv 1 + \sum_{i=1}^{\to\infty} \sum_{j = 0}^{i - 1} c_{i,j} \frac{\log^j(N)}{N^i}\cdot \] This proves Theorem \ref{theoremFleqN} from the introduction. See Appendix \ref{appendix1} for the computation of some of the coefficients $c_{i,j}$ in this case. \item[(ii)] Suppose ${\wbar\delta} = 0$. Let $\xihat = a^{-(1 + {\wbar\theta})/d} N^{-{\wbar\theta}}/{\wbar\theta} - \w c$ and $\what\eta = \xi - \xihat = \eta_0 - a^{-(1 + {\wbar\theta})/d} N^{-{\wbar\theta}}/{\wbar\theta}$. Then by \eqref{etaI} and \eqref{cijkbounds}, we have \[ \Xi \equiv \sum_{i = 0}^{\to\infty} F_i({\wbar\theta},\xi,N^{-{\wbar\theta}}) N^{-i}, \] where the $F_i$s are analytic on $B_\C(0,\epsilon/2)^3 \subset W\times \C^2$. (In fact, $F_i$ is a polynomial of degree $\leq i$ with respect to its third input.) Next, observe that by \eqref{etai'} we have \[ \what\eta \equiv \sum_{i = 1}^{\to\infty} F_i^{(0)}({\wbar\theta}) N^{-(i + {\wbar\theta})}. \] Since $\xi = \what\eta + \xihat$ and $N^{-{\wbar\theta}} = a^{(1 + {\wbar\theta})/d} {\wbar\theta} (\w c + \xihat\,)$, it follows that \[ \Xi \equiv \sum_{i = 0}^{\to\infty} G_i({\wbar\theta},\xihat\,) N^{-i}, \] where the $G_i$s are analytic on a neighborhood of $(0,0)$. By direct calculation, we have $G_0(0,0) = c_{\0,0,0} = 0$ and $(G_0)_{|2}(0,0) = c_{\0,0,1} = 1$. Thus, the equation \eqref{mainformula} (i.e. $\Xi = 0$) can be solved for $\xihat$: \begin{equation} \label{wxi} \xihat = a^{-(1 + {\wbar\theta})/d} \frac{N^{-{\wbar\theta}}}{{\wbar\theta}} - \w c \equiv \sum_{i = 0}^{\to\infty} \what G_i({\wbar\theta}) N^{-i}, \end{equation} where the $\what G_i$s are analytic on a neighborhood of $0$, and by direct calculation, $G_i(0,\cdot) = 0$ and thus $\what G_i(0) = 0$ for all $i > 0$; similarly, $\what G_0(0) = 0$ since $G_0(0,0) = 0$. If $\w c > 0$, then by solving for $N^{-{\wbar\theta}}$, using Lemma \ref{lemmathetageneral} to solve for ${\wbar\theta}$ (with $\aa = N^{-1}$), and finally solving for $\delta_N$ in terms of ${\wbar\theta}$, we complete the proof of \eqref{HDFgeqNgeneral}. (Note that Lemma \ref{lemmathetageneral} applies to asymptotic expansions because it applies to the estimates of which the asymptotic expansion is a limit.) The inequality $j \geq [i > 0]$ corresponds to the fact that $\what G_i(0) = 0$ for all $i$, since $\wbar\theta \sim \frac{\log\log(N)}{A d \log(N)}$.
So suppose that $\w c = 0$. Then direct calculation gives $G_i(\cdot,0) = G_i(0,\cdot) = 0$ for all $i > 0$, and thus $\what G_i'(0) = 0$ for such $i$. On the other hand, since $S \neq \emptyset$ we have $\what G_0'(0) = -c_{\0,1,0} = \w\chi > 0$. So \eqref{wxi} becomes \[ N^{-{\wbar\theta}} \equiv {\wbar\theta}^2\left(a^{1/d}\w\chi + {\wbar\theta} \sum_{i = 0}^{\to\infty} H_i({\wbar\theta}) N^{-i}\right) \] for some analytic functions $H_i$. By taking the square root, using Lemma \ref{lemmathetageneral} to solve for ${\wbar\theta}/2$, and solving for $\delta_N$ in terms of ${\wbar\theta}$, we complete the proof of \eqref{HDFgeqNgeneral}. The inequality $j \geq [i > 0]$ corresponds to the fact that $\what G_i(0) = 0$ and $\what G_i'(0) = 0$ for all $i > 0$. Suppose that $S = \emptyset$ and $a = 1$. Then since $s_n \in \N$ for all $n$, we have $b_1,\ldots,b_d \in \Q$ and thus for all $i$ we have $f_i(x) \in \Q[x]$ and $\what f_i(x) \in \Q(x)$, with notation as in \eqref{EM2}. It follows that all $F_i$s and $F_i^{(0)}$s are $\Q$-analytic and thus by Proposition \ref{propositionrational}, $G_i$ and $\what G_i$ are $\Q$-analytic. Since $\w c = 1/\nu h = 1$, it follows that $c_{i,j,k,\ell} \in \Q$ for all $i,j,k,\ell$. Moreover, by Proposition \ref{propositionvanishing}(ii)\ and direct calculation we find that $\what G_0(\cdot) = 0$. Since $a = \w c = 1$, it follows that \begin{equation} \label{equiv22} N^{-{\wbar\theta}} \underset{X}{\equiv} {\wbar\theta} \end{equation} where $X = N^{-1} {\wbar\theta}^2$. The second part of Lemma \ref{lemmathetageneral} now demonstrates \eqref{HDFgeqN}. We now wish to check that the values specified in Theorem \ref{theoremFgeqN} are correct. By \eqref{equiv22}, the coefficients $c_{k,\ell}$ are the same as the coefficients arising from the equation $N^{-{\wbar\theta}} = {\wbar\theta}$, so they can be computed explicitly from \eqref{alpharecursive} without any operator calculations, and we omit their calculation. On the other hand, since \[ f(a,{\wbar\theta},\xi) = 1 + \sum_{i = 0}^{\to\infty} \what G_i({\wbar\theta}) a^i \] in Lemma \ref{lemmathetageneral}, we have \begin{align*} c_{1,1,-1,0} &= - (\log f)_{|12}(0,0,0)\\ &= -\what G_1'(0) = (G_1)_{|1}(0,0) = \text{\textbf{Coeff}}\big(N^{-1} {\wbar\theta},c_{\0,0,1} \what\eta \,+\, c_{\{1\},0,0} \eta_1 \,+\, c_{\0,1,0} \theta\big)\\ &= \text{\textbf{Coeff}}\big(N^{-(1 + {\wbar\theta})},\what\eta) = -P_1^+(0) = -1/2. \end{align*} This completes the proof. \qedhere\end{itemize} \end{proof} \subsection{Quasi-geometric sequences} \label{subsectionquasigeom} We can ask what happens in the cases \eqref{goingup} and \eqref{goingdown} of Proposition \ref{propositionpolysequence} when instead of being a polynomial sequence, the sequence $(s_n)$ is an exponentially growing sequence such as a geometric sequence or the Fibonacci sequence $s_n = s_{n - 1} + s_{n - 2}$ (initial conditions $s_0 = 0$, $s_1 = 1$). Recall that the $n$th term in the Fibonacci sequence is given by the formula \begin{equation} \label{fibonacci} s_n = a \lambda^n (1 + b \rho^n) \end{equation} where $a = \frac1{\sqrt 5}$, $\lambda = \phi = \frac{1 + \sqrt 5}{2}$, $b = -1$, and $\rho = \wbar\phi / \phi = -\phi^{-2}$. The formula \eqref{fibonacci} can also be used to describe a geometric sequence, by letting $a$ and $\lambda$ be integers and $b = 0$. \begin{proposition} \label{propositionquasigeom} Let $(s_n)$ be a sequence of positive integers defined by \eqref{fibonacci}, with $a > 0$, $\lambda > 1$, $b\in\R$, and $|\rho| < 1$. 
Let $F \subset \N$ be empty or strongly regular and disjoint from $\{s_M,s_{M+1},\ldots\}$ for some $M$, and let $E_N \to E$ be as in \eqref{goingup} or \eqref{goingdown} of Proposition \ref{propositionpolysequence}. \begin{itemize} \item[(i)] If $\delta > 0$, then \begin{equation} \label{quasigeom1} \theta = \sum_{k = 1}^\infty \sum_{i = 0}^\infty \sum_{h = 0}^\infty \sum_{j = 0}^{k - 1} c_{k,i,h,j} (\lambda^{-(2\delta k + i)} \rho^h)^N N^j. \end{equation} \item[(ii)] If $\delta = 0$, then \begin{equation} \label{quasigeom2} \theta = \frac{1}{A\log(\lambda)} \frac{1}{N} \left[\log(N) - \log\log(N) + \sum_{i = 0}^\infty \sum_{h = 0}^\infty \sum_{j = 0}^\infty \sum_{k = -j}^\infty \sum_{\ell = 0}^{j+k} c_{i,h,j,k,\ell} (\lambda^{-i} \rho^h)^N \frac{\log^\ell\log(N)}{N^j \log^k(N)}\right], \end{equation} where $A = 2$ if $\w c > 0$ and $A = 1$ if $\w c = 0$ (equivalently, $A = 2$ if $E = \emptyset$ and $A = 1$ if $\#(E) = 1$). Moreover, $c_{i,h,j,k,\ell} \in R \df \Q(\lambda,\rho)[a,\log(a),\log(\lambda),b,\log(\rho)]$ for all $i,h,j,k,\ell$. \end{itemize} Note that these formulas are exact and are not asymptotic expansions. \end{proposition} \begin{remark*} Proposition \ref{propositionquasigeom} can be generalized to the setting where \eqref{fibonacci} is replaced by the formula \[ s_n = a \lambda^n \left(1 + \sum_{i = 1}^d b_i \rho_i^n\right). \] This can be proven by making minor changes to the proof below. \end{remark*} \begin{proof} As in the proof of Proposition \ref{propositionpolysequence}, for all $N$ we observe by direct calculation that the sets $\{s_M,\ldots,s_N\}$ and $\{s_N,s_{N + 1},\ldots\}$ are strongly regular, and thus since the union of two strongly regular sets is strongly regular, it follows that $E$ is empty or strongly regular and the $E_N$s are strongly regular, so Theorem \ref{maintheorem} applies and \eqref{mainformula} holds. Let $\omega = 1$ if \eqref{goingup} holds and $\omega = 0$ if \eqref{goingdown} holds. Then $\eta_i = \pm f_\omega\big(2(\delta + \theta) + i\big)$, where \begin{align*} f_\omega(\alpha) &\df \sum_{n \geq N + \omega} \frac1{s_n^\alpha} = \sum_{n\geq N + \omega} a^{-\alpha} \lambda^{-n\alpha} \sum_{h = 0}^\infty \binom{-\alpha}h b^h \rho^{h n}\\ &= \sum_{h = 0}^\infty a^{-\alpha} \binom{-\alpha}h b^h \sum_{n\geq N + \omega} (\lambda^{-\alpha} \rho^h)^n = \sum_{h = 0}^\infty a^{-\alpha} \binom{-\alpha}h b^h \frac{(\lambda^{-\alpha} \rho^h)^{N + \omega}}{1 - \lambda^{-\alpha} \rho^h}\\ &= \sum_{h = 0}^\infty f_{h,\omega}(\alpha) (\lambda^{-\alpha} \rho^h)^N \end{align*} where for each $h > 0$, $f_{h,\omega}(\alpha) = a^{-\alpha} \binom{-\alpha}h b^h \frac{(\lambda^{-\alpha} \rho^h)^\omega}{1 - \lambda^{-\alpha} \rho^h}$ is analytic on $W = \C_{> \log|\rho|/\log(\lambda)}$; similarly, $f_{0,\omega}$ is analytic on $\C_{> 0}$. Note that since \begin{align*} \left|\binom zh\right| &\leq \frac{(|z| + h)^h}{h!} \leq \left(\frac{e (|z| + h)}{h}\right)^h\\ &\leq e^h \exp(|z|/h)^h = e^{h + |z|}, \end{align*} we have $|f_{h,\omega}(\alpha)| \lesssim C^{h + |\alpha|}$ for some constant $C \geq 1$.
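As an aside, the elementary binomial coefficient bound used in the last display is easy to confirm numerically. The short Python script below (an illustrative sanity check, not part of the proof) tests $\left|\binom zh\right| \leq e^{h + |z|}$ on randomly sampled complex $z$ and integers $h$:
\begin{verbatim}
# Sanity check of |binom(z, h)| <= e^{h + |z|} for complex z and h >= 0.
import math, random

def gen_binom(z: complex, h: int) -> complex:
    # Generalized binomial coefficient z(z-1)...(z-h+1)/h!.
    out = complex(1.0)
    for m in range(h):
        out *= (z - m) / (m + 1)
    return out

random.seed(0)
for _ in range(10000):
    z = complex(random.uniform(-6, 6), random.uniform(-6, 6))
    h = random.randint(0, 40)
    assert abs(gen_binom(z, h)) <= math.exp(h + abs(z)) * (1 + 1e-12)
print("bound verified on all sampled (z, h)")
\end{verbatim}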
It follows that \begin{align*} \eta_i &= \pm f_\omega\big(2(\delta + \theta) + i\big) = \lambda^{-2(\delta + \theta) N} \lambda^{-i N} \begin{cases} \frac{a^{-2\theta}}{1 - \lambda^{-2\theta}} + \sum_{h = 1}^\infty f_{h,\omega}(2\theta) \rho^{h N} & \delta = i = 0\\ \sum_{h = 0}^\infty \pm f_{h,\omega}(2(\delta + \theta) + i) \rho^{h N} & 2\delta + i > 0 \end{cases} \end{align*} where as before, if $\delta = i = 0$ we use direct calculation to rule out the case \eqref{goingup}, allowing us to conclude that the first term of the top case is positive, as well as to reduce this term using the equality $\omega = 0$. Now since $|f_{h,\omega}(2(\delta + \theta) + i)| \lesssim C^{i + h}$ for all $\theta$ sufficiently close to $0$, there exists a ball $B$ centered at $\0$ satisfying the required bounds appearing in Corollary \ref{corollaryanalytic} for the appropriate functions $F_i$, $F_*$, where \[ \tt_N = \begin{cases} (\lambda^{-2(\delta + \theta) N},\lambda^{-N},\rho^N) & \text{ if $\delta > 0$}\\ (\lambda^{-2(\delta + \theta) N},\lambda^{-N},\rho^N,\xihat\,) & \text{ if $\delta = 0$} \end{cases} \] Here $\xihat \df \lambda^{-2\theta N} \frac{a^{-2\theta}}{1 - \lambda^{-2\theta}} - \w c$, and we use the fact that $\w c = 0$ when $\delta > 0$ to write $\xi$ in terms of $\tt_N,\theta$ in that case. Thus by Corollary \ref{corollaryanalytic}, there exists a function $f$ analytic on $B$ such that $\Xi = f(\tt_N,\theta)$. \begin{itemize} \item[(i)] Suppose that $\delta > 0$. Then \begin{align*} \Xi &= F_{\0}(\theta) + \sum_{k = 1}^\infty \sum_{i = 0}^\infty \sum_{h = 0}^\infty F_{k,i,h}(\theta) \exp(-2k N \theta) \lambda^{-2k \delta N} \lambda^{-i N} \rho^{h N}\\ &= F_{\0}(\theta) + \sum_{k = 1}^\infty \sum_{i = 0}^\infty \sum_{h = 0}^\infty \sum_{j= 0}^\infty F_{k,i,h}(\theta) \frac{(-2k N \theta)^j}{j!} \lambda^{-2k \delta N} \lambda^{-i N} \rho^{h N} \end{align*} where $(F_{k,i,h})$ are analytic in a fixed neighborhood of $\0$, and $F_{\0}(0) = 0$ and $F_{\0}'(0) = c_{\0,1,0} = -\w\chi \neq 0$. Letting $\gamma = \lambda^{2\delta N} \theta$, solving for $\gamma$, and then dividing by $\lambda^{2\delta N}$ yields \eqref{quasigeom1}. \item[(ii)] Now suppose that $\delta = 0$. Then since $\delta_E \leq \delta$, we have $\#(E) \leq 1$ and thus we are in the case \eqref{goingdown}. Now let $\what\eta = \xi - \xihat = \eta_0 - a^{-2\theta} \lambda^{-2\theta N}/(1 - \lambda^{-2\theta})$. Since $\lambda^{-2\theta N} = (1 - \lambda^{-2\theta})(\w c + \xihat\,)$ and $\xi = \what\eta + \xihat$, it follows that \[ \Xi = \sum_{i = 0}^\infty \sum_{h = 0}^\infty F_{i,h}(\theta,\xihat\,) \lambda^{-iN} \rho^{h N} \] for some analytic functions $F_{i,h}$, and by direct calculation we have $F_{0,0}(0,0) = 0$ and $F_{0,0|2}(0,0) = c_{\0,0,1} = 1$. Thus we can solve for $\xihat$, which yields \[ \frac{\lambda^{-2\theta N}}{2\theta\log(\lambda)} = \left(\frac{1 - \lambda^{-2\theta}}{2\theta\log(\lambda)}\right) \left(\w c + \sum_{i = 0}^\infty \sum_{h = 0}^\infty F_{i,h}(\theta) (\lambda^{-i} \rho^h)^N\right). \] Note that $F_{i,h}(0) = 0$ for all $i,h$, since on a formal level, when $\theta = 0$, we have $\lambda^{-2\theta} = 0$ and thus $\eta_i = 0$ for all $i > 0$. If $E = \emptyset$ then $\w c = 1$, and thus Lemma \ref{lemmathetageneral} applies (with ${\wbar\theta} = 2\theta\log(\lambda)$, $B = e^{-N}$, and $\aa = (\lambda^{-N},\rho^N)$), yielding \eqref{quasigeom2} with $A = 2$. It can be shown using Proposition \ref{propositionrationalv2} that $c_{i,h,j,k,\ell} \in R$ for all $i,h,j,k,\ell$.
This is because each function \[ F_h^{(i)}(\theta) \df f_{h,0}(2\theta + i) \] is $R$-analytic, or $R$-meromorphic if $i = h = 0$. If $\#(E) = 1$, then $\w c = 0$, and since $\delta = 0$, by Proposition \ref{propositionvanishing}(iv)\ we have $c_{I,0,k} = 0$ for all $I,k$ with $I \neq \0$. Moreover, $F_{0,0}'(0) = -c_{\0,1,0} = \w\chi > 0$. So by taking the square root of the previous equation and letting ${\wbar\theta} = \theta\log(\lambda)$, Lemma \ref{lemmathetageneral} shows that \eqref{quasigeom2} holds with $A = 1$. \qedhere\end{itemize} \end{proof} \subsection{Miscellaneous examples} In the next two examples we consider relatively simple sequences $(E_N)$ of two-element sets with dimension tending to zero, one where both generators tend to zero and another where one of the generators is fixed. These examples illustrate the variety of behavior that can occur when solving \eqref{mainformula}. \begin{example} Let $E_N = \{N,N+1\} \to E = \emptyset$. Then \begin{equation} \label{thetaNN1} \theta = \frac{1}{2\log(N)} \left[\log(\phi) + \sum_{i = 2}^\infty \sum_{j = 1}^{i - 1} \frac{c_{i,j}}{N^i \log^j(N)}\right] \end{equation} for all $N$ sufficiently large, where $c_{i,j} \in \Q[\phi,\log(\phi)]$ for all $i,j$, with $\phi \df (1 + \sqrt 5)/2$. In particular, \[ c_{2,1} = \frac{4\phi^{-1} \log(\phi)}{1 + \phi^{-2}}\cdot \] \end{example} \begin{proof} Since $E$ is empty and the $E_N$s are strongly regular, Theorem \ref{maintheorem} applies and thus \eqref{mainformula} holds. Now $\w c = 1$ and thus for all $i$, \[ \eta_i = \frac{1}{N^{i + 2\theta}} + \frac{1}{(N+1)^{i + 2\theta}} = N^{-(i + 2\theta)} \left[1 + \frac{1}{1 + N^{-(i + 2\theta)}}\right] = \begin{cases} \w c + F_*(N^{-2\theta} - \phi^{-1}) & i = 0\\ F(N^{-(i + 2\theta)}) & i > 0, \end{cases} \] where $F_*,F$ are $\Q[\phi]$-analytic functions with $F_*(0) = F(0) = 0$, $F_*'(0) = 1 + \phi^{-2} > 0$, and $F'(0) = 2$. So $\xi = F_*(N^{-2\theta} - \phi^{-1})$, and thus by Corollary \ref{corollaryanalytic}, for all $N$ sufficiently large we have \[ \Xi = G(N^{-1},\theta,N^{-2\theta} - \phi^{-1}) \] where $G$ is analytic in a neighborhood of $\0$, $G(\0) = 0$, and $G_{|3}(\0) = F_*'(0) c_{\0,0,1} = 1 + \phi^{-2}$. Moreover, by either Proposition \ref{propositionrationalv2} or Proposition \ref{propositionrational}, $G$ is $\Q[\phi]$-analytic. Since $S = \emptyset$ and $\delta = 0$, by Proposition \ref{propositionvanishing}, solving $\Xi = 0$ for $N^{-2\theta} - \phi^{-1}$ and then rearranging yields \[ N^{-2\theta} = \phi^{-1} + N^{-2} \theta H(N^{-1},N^{-1}\theta) \] where $H$ is $\Q[\phi]$-analytic. Taking logarithms and dividing by $-2\log(N)$, we see that \[ \theta = \frac1{2\log(N)}\big[\log(\phi) - \log(1 + \phi N^{-2} \theta H(N^{-1},N^{-1} \theta))\big]. \] By the implicit function theorem applied to the above equation treating $\theta$, $N^{-1}$, and $\log(N)^{-1}$ as the basic variables, solving for $\theta$ demonstrates \eqref{thetaNN1}. To see why the bounds on $i$ and $j$ follow, note that if we substitute $\alpha = 2\theta\log(N) - \log(\phi)$, then in the resulting formula all terms are of the form $N^{-i} \log(N)^{-j} \alpha^k$, with $1 \leq j \leq i-1$. An induction argument shows that these bounds hold for all terms of the series representing $\alpha$.
Now, $c_{2,1} = -\phi\frac{\log(\phi)}{2} H(0,0)$, and \[ H(0,0) = \frac{G_{|112}(\0)}{G_{|3}(\0)} = \frac{(\phi^{-1} F'(0))^2}{F_*'(0)} \cdot \frac{c_{2\{1\},1,0}}{c_{\0,0,1}} = \frac{4\phi^{-2}}{1 + \phi^{-2}} \nu \beta_{1,0} \beta_{1,1} h, \] where in the last step we use Proposition \ref{propositionvanishing} to ignore the other possible contributions to $c_{2\{1\},1,0}$. The calculation is completed by observing that \begin{align*} \beta_{1,1} h(x) &= \text{\textbf{Coeff}}\big(b^1 \theta^1, e^{\theta v(b,x)}\big) = \text{\textbf{Coeff}}(b^1,v(b,x)) = -2 x\\ \nu \beta_{1,0} \beta_{1,1} h &= -2\text{\textbf{Coeff}}\big(b^1 \theta^0, e^{\theta v(b,0)} u_b(0)\big) = -2\text{\textbf{Coeff}}\big(b^1,b) = -2. \qedhere\end{align*} \end{proof} \begin{example} Let $E_N = \{1,N\} \to E = \{1\}$. Then \begin{equation} \label{HDF1N} \theta = \frac1{2\log(N)} \left[\log\log(N) - \log\log\log(N) + \sum_{i = 0}^\infty \sum_{j = [i > 0]}^\infty \sum_{k = -j}^\infty \sum_{\ell = 0}^{j + k} c_{i,j,k,\ell} \frac{\log^\ell\log\log(N)}{N^i \log^j(N) \log^k\log(N)} \right] \end{equation} for all $N$ sufficiently large, and $c_{0,0,0,0} = -\log\log(\phi) > 0$. Notice that in the second summation, we use the Iverson bracket notation: $[\Phi] = 1$ when $\Phi$ is true and $[\Phi] = 0$ when $\Phi$ is false. \end{example} \begin{proof} Since $E$ and the $E_N$s are strongly regular, Theorem \ref{maintheorem} applies and thus \eqref{mainformula} holds. Now \[ \eta_i = \sum_{n\in E_N - E} \frac{1}{n^{i + 2(\delta + \theta)}} = N^{-i} N^{-2\theta} \] and since $P(E,\delta) = P(E,0) = \log\#(E) = 0$, we have $\w c = 0$. Thus by Corollary \ref{corollaryanalytic} and Proposition \ref{propositionvanishing}(iv), \[ \Xi = \theta F_0(\theta) + N^{-2\theta} F_1(N^{-2\theta}) + N^{-2\theta} \theta F_2(N^{-1},\theta,N^{-2\theta}) \] where $F_0,F_1,F_2$ are analytic in a neighborhood of $\0$ with $F_0(\0) = c_{\0,1,0} = \w\chi = \chi = 2\log(\phi) > 0$ and $F_1(\0) = c_{\0,0,1} = 1 > 0$. Solving $\Xi = 0$ for $N^{-2\theta}$ gives \[ N^{-2\theta} = \theta \big(2\log(\phi) + \theta g(N^{-1},\theta)\big) = 2\theta \big(\log(\phi) + (1/2) \theta g(N^{-1},\theta)\big) \] where $g$ is analytic in a neighborhood of $\0$. Thus Lemma \ref{lemmathetageneral} applies and we have \eqref{HDF1N}, with $c_{0,0,0,0} = -\log\log(\phi)$. The $i=j=0$ case of the summation is ruled out because of the factor $\theta$ appearing in $(1/2)\theta g(N^{-1},\theta)$ above. \end{proof} \section{Directions to further research} \label{sectionopen} This section concludes this paper by presenting a small sample of problems and research directions, which we hope will partially illustrate the wide scope awaiting future exploration. We speculate that the key ideas behind our basic perturbation result, Theorem \ref{theoremoperatorequation}, might apply more generally. For instance, it would be interesting to leverage our perturbation theorem or some variant thereof to analyze other functionals that arise in the study of dynamical systems and their perturbations. With a view to developing asymptotic expansions that lie beyond the scope of this paper, here are three concrete CIFSes to which our methods do not directly apply: \begin{itemize} \item The co-Cantor similarity IFS described in Example \ref{examplecocantor}. \item The CIFS $u_n: x \mapsto (1 + x^n)2^{-n}$ for $n \in \N$, described in Example \ref{exampleNotPACIFS}.
\item The prime alphabet Gauss CIFS: $u_p: x \mapsto (p + x)^{-1}$ for primes $p$. \end{itemize} It may be the case that it is impossible to develop an asymptotic expansion for the last example above, which is an OSC PACIFS unlike the previous two examples. It is natural to attempt a generalization of our results in \6\ref{subsectionquasigeom}, where we studied alphabets that are quasi-geometric (e.g.\ Fibonacci) sequences, to the broader class of \emph{constant-recursive sequences}, i.e. sequences satisfying a linear recurrence with constant coefficients. One could consider holonomic or $P$-recursive sequences, $k$-automatic sequences, or $k$-regular sequences. It would be interesting to precipitate connections with ideas familiar to the {\it analytic combinatorics} and {\it analysis of algorithms} communities. Perhaps investigating the coefficient numerology in the asymptotic expansions studied in this paper will lead more directly to such links. Though the main applications in this article have emphasized the approximation of real numbers by rationals using the simple continued fraction algorithm, there exist several avenues of active research inspired by this particular seam with a multitude of surprising interactions with dynamical systems and number theory. Dajani and Kraaikamp's Carus monograph \cite{DajaniKraaikamp} is a beautifully written introduction to the ergodic theory of several numeration schemes and continued fraction algorithms; and for higher-dimensional variants see \cite{Schweiger, Schweiger2, ArnouxSchmidt, Berthe}. It would be interesting to leverage our techniques to study analogues of our results in any of these settings. For instance, one could focus on any one of several existing families of piecewise smooth expanding maps of the interval with infinitely many branches, e.g. those arising from the family of \emph{Japanese continued fractions}, named for Hitoshi Nakada, Shunji Ito, and Shigeru Tanaka -- see \cite{CarminatiTiozzo2,KSS2,BDV3,DHKM} for these systems and some variations. Furthermore, continued fraction algorithms arising from the study of geodesic flows on negatively curved surfaces \cite{KatokUgarcovici2,BocaMerriman}, and continued fraction expansions over the field of Laurent series \cite{Schmidt9,Wu3,BertheNakada,HWWY} would be natural environs to investigate analogues of our results. To conclude, here are three scenarios to reconnoiter in the higher-dimensional setting: \begin{itemize} \item Find analogues of our results for \emph{complex continued fractions}, where one studies the complex analogue of the Gauss CIFS $\{ u_{n}: x \mapsto (n+x)^{-1} \;\text{for}\; n \in \N \}$ on the unit interval, by replacing $\N$ with the Gaussian integers with positive real part, and the unit interval with $B(1/2,1/2) \subset \C$. Such systems may be profitably studied within the broad framework of conformal graph directed Markov systems (CGDMSes), see \cite{CLU} and its references. \item CIFS and CGDMS limit sets model several fractals that arise from Sullivan's dictionary \cite{McMullen_classification, Sullivan_conformal_dynamical} (see also \cite[Table 1]{DSU_rigidity}), which translates between the study of Julia sets associated with holomorphic and meromorphic iteration, and Kleinian limit sets associated with actions of discrete subgroups of isometries of hyperbolic (negatively curved) spaces. Exploring analogues of our results within this broad framework would be very interesting. \item Find analogues of our results beyond the conformal setting, e.g.
for infinite self-affine IFSes as studied by Jurga \cite{Jurga}, or the broad class of examples studied by Reeve in \cite{Reeve}. \end{itemize}
\section{Introduction} The combination of large public datasets and novel machine learning architectures has provided state-of-the-art predictive power in many diverse fields, such as computer vision and machine translation. These models are largely regarded as opaque \textit{black-box} models. With their prevalence and increasing adoption, an active field of research is eXplainable Artificial Intelligence (XAI), which has sought to provide explanations for their predictions. One avenue of research is building interpretability into the model architecture~\cite{kim2015mind, gupta2016lattices, lee2019locallylinear}. We focus on the area of \textit{post hoc} explanations -- those that occur after model training. Currently there is no one-size-fits-all solution. The method of explanation depends on model type~\cite{chen2018looks, krakovna2016increasing, grath2018interpretable}, desired granularity~\cite{ribiero2016lime,ibrahim2019gam, dhur2018explanations, bhatt2020evaluating,linden2019global}, and audience~\cite{wachter2017counterfactual,bhatt2020explainable}. In support of the methods, there are a growing number of packages that seek to provide an umbrella of methods such as AIX360~\cite{aix360-sept-2019}, ELI5~\cite{eli5}, and Alibi~\cite{alibi}. Early methods were focused on explaining image classifiers. The use of sensitivities of the output class based on the input image pixels provides a visual and immediately interpretable explanation for the classifier's prediction. For tabular data, the intuitive visual nature of sensitivities is not a natural metaphor. Additionally, whereas computer vision typically relies on correlations between pixels as features, for tabular data that can be detrimental~\cite{aas2019explaining}. An ongoing challenge in XAI is the lack of ground truth. To add to the landscape of XAI, and move towards ground truth explainability for tabular data, we provide a flexible synthetic data generation method allowing generation of arbitrarily complex data. The current implementation is focused on the task of binary classification. The paper is structured as follows: previous work is discussed in Section 2, Section 3 presents the method used to construct the synthetic tabular data, and Section 4 shows some results from three use cases: one dimensional logistic regression, impact of correlation from informative features, and impact of correlation from redundant variables. \section{Previous work} Early use cases of synthetic data focused on the tasks of feature and model selection~\cite{guyonTechReport}. This method is available in the scikit-learn~\cite{scikit-learn} module $make\_classification$. An alternative method of generating tabular data for classification is to generate clusters and apply class labels to them. Another approach is to model joint probability $P(\boldsymbol{X}, y)$ from an actual dataset. This can be helpful in dealing with sensitive data and as an aid in sharing data where there are privacy and regulatory restrictions on the use of actual data~\cite{ping2017datasynthesizer, howe2017synthetic, goncalves2020medical}. Techniques used range from probabilistic models, to Bayesian networks, to generative adversarial neural networks (GANS). In finance, it is typical to use copulas. The theory of copulas has been developed considerably in mathematics, statistics, actuarial science, with significant interest in their application to finance \cite{genest2009advent}, and their misuse may have led to the financial crisis \cite{salmon2012formula}.
However, methods that mimic the statistics of other datasets are incapable of providing ground truth for explanations, since they lack the complete data generation process that imposes a functional dependence between $\boldsymbol{X}$ and $y$. Our research is inspired by recent work in interpreting image classifiers trained with a carefully crafted dataset that controls the relative feature importance \cite{yangBenchmarkingAttributionMethods2019}. In this case, the model can be quantitatively evaluated because the known foreground and background images provide ground truth for local feature importances. We propose a similar method for tabular data. We use copulas to define the correlation structure and marginal distributions of the independent features. We specify the dependence of the underlying probability field as a symbolic expression. Binary class labels are assigned by setting a threshold probability. This method provides global explanations since we prescribe the coefficients of the terms in the symbolic expression. In some instances, where we build models only from informative features, we can provide ground truth local attributions. Our contributions are a unique and flexible method for synthetic tabular data generation suitable for current model architectures, and a demonstration of its use in informative experiments highlighting that not all correlation in inputs changes local attributions. \begin{figure*} \centering \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=\textwidth]{figures/1_logistic/joint_dist_plot.png} \caption{} \label{fig:joint_logistic} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=\textwidth]{figures/2_cov_0/joint_dist_plot.png} \caption{} \label{fig:joint_2d_no_cov} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \includegraphics[width=\textwidth]{figures/3_cov_0.5/joint_dist_plot.png} \caption{} \label{fig:joint_2d_cov} \end{subfigure} \caption{Joint probability plot for features $x_1$ and $x_2$ (a) with Gaussian marginals (b) with uniform marginals with no correlation and (c) uniform marginals with positive correlation.} \end{figure*} \section{Synthetic data generation} The generation of synthetic data seeks a method to provide the joint probability $P(\boldsymbol{X}, y)$ where $\boldsymbol{X}$ are the input features, and $y$ is the output variable, and to draw samples from that joint distribution. From those samples, we will fit machine learned models that approximate the conditional probability $P(y|X)$ (via possibly black box models) and provide explanations for those models. We separate the input feature vectors, \(\boldsymbol{X}\), into three categories: informative, redundant, and nuisance features. \begin{equation} \boldsymbol{X} = \left( \boldsymbol{X}_{I} | \boldsymbol{X}_{r} | \boldsymbol{X}_{n}\right) \end{equation} Informative features, $\boldsymbol{X}_{I}$, used to determine the binary labels, are specified by a copula. A multivariate distribution can be separated into a set of marginal distributions and the correlation structure between them. Copula theory is the mathematical framework of the separation of the correlation structure from the marginal distributions of the feature vectors. See \cite{jaworski_2013, nelsen_1999} for further details. The current library supports any marginal distribution available in \textit{scipy.stats}. The results for this paper use a multivariate Gaussian copula.
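The generation pipeline, including the sigmoid labeling step described below, can be sketched in a few lines of Python. The snippet is a minimal illustration only, not the interface of our library: the covariance matrix, the symbolic expression (borrowed from the experiments of Section 4), and the sigmoid parameters $k$ and $y_0$ are arbitrary choices made for the example.
\begin{verbatim}
# Minimal sketch of the generation pipeline; names and parameter
# values are illustrative, not the actual library interface.
import numpy as np
import sympy as sp
from scipy import stats

rng = np.random.default_rng(42)                   # reproducible seed
n, cov = 1000, np.array([[1.0, 0.5], [0.5, 1.0]])

# Gaussian copula: correlated normals -> correlated uniforms.
z = rng.multivariate_normal(np.zeros(2), cov, size=n)
u = stats.norm.cdf(z)

# Marginals: push the uniforms through inverse CDFs from scipy.stats.
x1 = stats.uniform(loc=-1, scale=2).ppf(u[:, 0])  # Uniform(-1, 1)
x2 = stats.uniform(loc=-1, scale=2).ppf(u[:, 1])

# Symbolic regression expression over the informative features.
s1, s2 = sp.symbols("x1 x2")
expr = sp.cos(s1**2 * sp.pi / 180) - sp.sin(s2 * sp.pi / 180) + s1 * s2
y_reg = sp.lambdify((s1, s2), expr, "numpy")(x1, x2)

# Sigmoid squash to [0, 1], then threshold into binary class labels.
k, y0 = 10.0, float(np.median(y_reg))
prob = 1.0 / (1.0 + np.exp(-k * (y_reg - y0)))
y = (prob > 0.5).astype(int)
\end{verbatim}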
Redundant features, $\boldsymbol{X}_{r}$, are a random linear combination of informative features. Nuisance features, $\boldsymbol{X}_{n}$, are random uncorrelated vectors drawn from the interval $[-1, 1]$, which are useful as benchmarks to set the lower bound on explainability. Being purely random and not contributing to the specification of the labels, any feature found to have lower importance than a nuisance feature should be subjected to further scrutiny and possibly removed from the model. The final step in the process is to generate binary labels for the inputs. First, a scalar regression value is created from the informative features via a symbolic expression using the \textit{sympy} Python module. \begin{equation} \vec{y}_{reg} = f\left( \boldsymbol{X}_{I} \right) \end{equation} \begin{equation} h_k(y_{reg}) = \frac{1}{1 + e^{-k(y_{reg} - y_0)}} \end{equation} To generate binary classification probabilities, the regression value is squashed by a sigmoid to the range [0,1]. After setting a threshold probability, class labels are determined. Additional post processing allows the addition of Gaussian noise to the informative features. A random seed can be specified so that repeated runs of a synthetic dataset yield the same values. \section{Experiments} This section demonstrates some of the capabilities of the synthetic tabular data through the process of modeling and providing local attributions via the SHAP library~\cite{lundbergUnifiedApproachInterpreting}. \subsection{Logistic regression} The first synthetic data set is for two features, $x_1, x_2 \in [-1,1]$, with no covariance and Gaussian marginal distributions for both. The joint probability plot is shown in Figure~\ref{fig:joint_logistic}. The symbolic regression expression is $y = x_1$. The probability values and class labels are shown in Figure~\ref{fig:1}. One thousand samples are generated. The dataset is split 70/30 into a training and test set, with a logistic regression model fit to the training data. The AUC of the model is 99.8\% and the coefficients of the model are $[11.98, 0.04]$. To provide local attributions, we fit a SHAP KernelExplainer to the training set with the results shown in Figure~\ref{fig:logistic_shap_vals}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figures/1_logistic/trif_round1.png} \caption{Contours of SHAP values for $x_1$ (left) and $x_2$ (right) for the simple 1-D logistic regression model.} \label{fig:logistic_shap_vals} \end{figure} The SHAP values for $x_1$ dominate by two orders of magnitude, roughly in keeping with the relative global importance found in the coefficients. Strong left-to-right symmetry is broken only in sparsely populated regions. It is interesting to note that the SHAP values for $x_2$ also display symmetry from top to bottom, with the opposite sign of the coefficient. \begin{figure*}[htbp] \centering \includegraphics[width=0.8\textwidth]{figures/3_cov_0.5/correlated_inputs_summary.png} \caption{Contours of SHAP values for $x_1$ (top row) and $x_2$ (bottom row) for the baseline data and model (left), correlated model and data (center), and correlated model fit with baseline data (right).} \label{fig:2D_shap_vals} \end{figure*} \subsection{Presence of correlation in informative features} We investigate the impact of correlation between the input features on model explanations. It is well known that the presence of correlation can alter a model's explanations, see for example~\cite{breiman2001,aas2019explaining}.
We again generate datasets for two features, $x_1, x_2 \in [-1,1]$; the first with uniform marginal distributions, a baseline dataset with no correlation whose joint distribution can be seen in Figure~\ref{fig:joint_2d_no_cov}, and a second dataset (see Figure~\ref{fig:joint_2d_cov}) with the covariance specified as \[ \mathrm{cov} = \left[ \begin{array}{cc} 1.0 & 0.5 \\ 0.5 & 1.0 \end{array} \right].\] The symbolic regression expression for this experiment is: \[ y_{reg} = \cos(x_1^2 \cdot \pi / 180) - \sin(x_2 \cdot \pi / 180) + x_1 \cdot x_2 \] We hold the probability field constant between the datasets. The probability values and class labels are shown in Figure~\ref{fig:2}. The dataset is split 70/30 into a training and test set, with a dense network with three hidden layers and a total of 255 weights in TensorFlow fit to the training data. The AUC of the resulting model is 100\%. To provide local attributions, we fit a SHAP DeepExplainer to the training set with the results shown in Figure~\ref{fig:2D_shap_vals}. There is an apparent decrease in SHAP values in the presence of correlation in the input features. Recall that SHAP values are relative with respect to the expectation of the output. With correlation, their density is drawn out of quadrants 2 and 4, and placed in quadrants 1 and 3, leading to a higher expected predicted probability due to the imbalance of the class labels. If we account for that effect by refitting the explainer with the baseline data (which does not suffer the same level of imbalance), the SHAP values look essentially like those from the uncorrelated inputs (as shown in the last column of Figure~\ref{fig:2D_shap_vals}). In this context, the apparent effect of correlation is an artifact induced by the expectation of the correlated model predictions over the correlated training data -- it is not a bias that the model has learned during training. \subsection{Presence of redundant variables} In contrast to the experiment in the previous section, this section considers the effect of correlation with unimportant features -- since the redundant features are not explicitly used in the symbolic expression used to derive the binary labels. We reuse the baseline from the previous section. We create a second dataset by augmenting the baseline informative features with two redundant and two nuisance features. The joint distribution of the informative features and the symbolic expression that maps them to the binary labels both remain unchanged. We keep the same train/test split. The only change to the dense network is to increase the dimension of the input layer to accommodate the two redundant and two nuisance features. \begin{figure*} \centering \includegraphics[width=0.95\textwidth, keepaspectratio]{figures/4_redundant/redundant_shap_values_all_comps.png} \caption{Contours of SHAP values for $x_1$ (top row) and $x_2$ (bottom row) for the baseline data and model (left), redundant feature model for three components of explanation: informative, redundant, and nuisance.} \label{fig:2D_shap_vals_with_redundant} \end{figure*} Redundant features have a significant impact on SHAP's perception of feature importance. Summing the individual components of explanations for the redundant feature model does not recover the values obtained from the baseline model.
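The explainer-background effect described in the correlated-inputs experiment above is easy to reproduce in code. The sketch below assumes a fitted classifier \texttt{model} and arrays \texttt{X\_corr} (correlated training features), \texttt{X\_base} (the uncorrelated baseline features), and \texttt{X\_test}; we use the model-agnostic \texttt{KernelExplainer} for brevity, whereas the experiment used \texttt{DeepExplainer}.
\begin{verbatim}
# SHAP values are offsets from the expected prediction over the
# background data, so swapping in the baseline background removes the
# shift induced by the class imbalance of the correlated inputs.
import shap

expl_corr = shap.KernelExplainer(model.predict, X_corr)
expl_base = shap.KernelExplainer(model.predict, X_base)

sv_corr = expl_corr.shap_values(X_test)  # apparent decrease in values
sv_base = expl_base.shap_values(X_test)  # resembles the uncorrelated case
\end{verbatim}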
\section{Conclusions} In this paper, we have described a method of generating synthetic tabular data utilizing copulas and symbolic expressions that provides users the ability to control the complexity of the resulting dataset. We have briefly demonstrated how the origin of correlation amongst inputs (informative vs redundant) influences model explainability. This method is a powerful investigative tool for data scientists, model developers, and explainable AI method developers. Future work includes: further experimentation with more complex data sets, bridging the gap between global explanations (provided by our symbolic expression) and local explanations, investigation of categorical and ordinal variables, as well as improving local attributions for tabular data. At the time of publication, we were working on making the repository accessible. Please contact a corresponding author for further details.
\section{Introduction} Let $G=(V,E)$ be a nontrivial simple connected graph, where $V = \{v_1, v_2, \cdots, v_n\}$ and $E$ are the vertex set and edge set of $G$, respectively. $A(G)=(a_{ij})_{n \times n}$ is the adjacency matrix of $G$, where $a_{ij} =1$ if $v_i v_j \in E$, and $a_{ij} =0$ otherwise. Let $d_i$ be the degree of vertex $v_i$ in $G$, and $D(G) = diag(d_1, d_2, \cdots, d_n)$. Then $L(G) =D(G) -A(G)$ is called the Laplacian matrix of $G$, and $\mathcal{L}(G) =D(G)^{-\frac{1}{2}} L(G) D(G)^{-\frac{1}{2}}$ the normalized Laplacian matrix of $G$. It is easily seen that, \begin{equation*} \left(\mathcal{L}(G)\right)_{ij} = \begin{cases} 1,&\text{if}~i =j;\\ -\frac{1}{\sqrt {d_i d_j}}, &\text{if}~ i \ne j ~\text{and}~ v_i v_j \in E;\\ 0, &\text{otherwise}. \end{cases} \end{equation*} Let $0= \mu_1 <\mu_2 \leq \cdots \leq \mu_n$ be the eigenvalues of $L(G)$, and $0= \nu_1 <\nu_2 \leq \cdots \leq \nu_n$ the eigenvalues of $\mathcal{L}(G)$. The sets $Sp(L(G)) = \{\mu_1, \mu_2, \cdots, \mu_n\}$ and $Sp(\mathcal{L}(G)) = \{\nu_1, \nu_2, \cdots, \nu_n\}$ are called the Laplacian spectrum and normalized Laplacian spectrum of $G$, respectively. Let $d_{ij}$ denote the distance between vertices $v_i$ and $v_j$ in $G$ (namely, the length of a shortest path connecting them). The Wiener index \cite{b1} and Gutman index \cite{b2} of $G$ are defined as $W(G) =\sum_{i<j} d_{ij}$ and $Gut(G) =\sum_{i<j} d_i d_j d_{ij}$. For these two famous topological indices, one can refer to \cite{b3,b4,b5,b6,b7,b8,b9,b10} and the references therein. If we regard each edge in $E(G)$ as a unit resistor, then the resistance distance between two vertices $v_i$ and $v_j$, denoted by $r_{ij}$, is defined \cite{b11} to be the effective resistance between them. Similar to the Wiener index, the Kirchhoff index of $G$ is defined as $Kf(G)=\sum_{i<j} r_{ij}$. Later, the following relation between $Kf(G)$ and $Sp(L(G))$ was established by Zhu et al. \cite{b12} and Gutman and Mohar \cite{b13} independently. \noindent \textbf{Lemma 1.1} \cite{b12,b13}\textbf{.} Let $G$ be a simple connected graph of order $n \geq 2$. Then $$Kf(G) = n\sum_{i=2}^n \frac{1}{\mu_i}.$$ Similar to the Gutman index, Chen and Zhang \cite{b14} defined the multiplicative degree-Kirchhoff index of $G$ as $Kf^*(G)= \sum_{i <j} d_i d_j r_{ij}$. Moreover, the following relation between $Kf^*(G)$ and $Sp(\mathcal{L}(G))$ was confirmed. \noindent \textbf{Lemma 1.2} \cite{b14}\textbf{.} Let $G$ be a simple connected graph of order $n \geq 2$ and size $m$. Then $$Kf^*(G) = 2m \sum_{i=2}^n \frac{1}{\nu_i}.$$ In recent years, more and more attention has been paid to the Kirchhoff index and multiplicative degree-Kirchhoff index. Closed-form formulae of the (multiplicative degree-)Kirchhoff index have been established for some classes of graphs. For example, formulae of the Kirchhoff index for cycles, circulant graphs, and composite graphs were obtained in \cite{b15}, \cite{b16}, and \cite{b17}, respectively, and those of both indices for complete multipartite graphs were obtained in \cite{b18}. Besides, quite a few works have considered the (multiplicative degree-)Kirchhoff index of polygon chains and their variants.
Explicit expressions of (multiplicative degree-)Kirchhoff index have been obtained for linear polyomino chain \cite{b19}, linear crossed polyomino chain \cite{b20}, linear pentagonal chain \cite{b21}, linear phenylenes \cite{b22,b23}, cyclic phenylenes \cite{b24}, M\"{o}bius phenylenes chain and cylinder phenylenes chain \cite{b25,b26}, linear [n] phenylenes \cite{b27}, generalized phenylenes \cite{b28,b29}, linear hexagonal chain \cite{b30,b31}, linear crossed hexagonal chain \cite{b32}, M\"{o}bius hexagonal chain \cite{b33}, and periodic linear chains \cite{b34}, linear octagonal chain \cite{b35}, linear octagonal-quadrilateral chain \cite{b36}, and linear crossed octagonal chain \cite{b37}. For two disjoint graphs $G$ and $H$, $G \otimes H$ will denote the strong product of $G$ and $H$. That is, $V(G \otimes H) = V(G) \times V(H)$, and two distinct vertices $(u_1, v_1)$ and $(u_2, v_2)$ are adjacent whenever ($u_1 =u_2$ or $u_1 u_2 \in E(G)$) and ($v_1 =v_2$ or $v_1 v_2 \in E(H)$). The Cartesian product of $G$ and $H$, denoted by $G \times H$, is the graph with vertex set $V(G) \times V(H)$, and two vertices $(u_1, v_1)$ and $(u_2, v_2)$ are adjacent whenever $u_1= u_2$ and $v_1 v_2 \in E(H)$, or $v_1= v_2$ and $u_1 u_2 \in E(G)$. Figure 1 depicts the graphs $S_n \otimes K_2$ and $S_n \times K_2$, where $S_n$ and $K_n$ denote the star and complete graph of order $n$, respectively. Recently, Li et al. \cite{b38} determined the expressions of $Kf(S_r)$, $Kf^*(S_r)$, and $\tau(S_r)$, where $S_r$ is a graph derived from $S_n \otimes K_2$ by randomly removing $r$ vertical edges, and $\tau(G)$ denotes the number of spanning trees of a connected graph $G$. Finally, they proposed the problem of determining these three invariants for graphs derived from $S_n \times K_2$. In the present paper, we completely solve this problem. \begin{figure}[ht] \centering \includegraphics[width=4 in]{1} \caption{ The graphs $S_{n} \otimes K_{2}$ and $S_{n} \times K_{2}$.} \end{figure} For convenience, we denote $S_n^2 = S_n \times K_2$. Then $|V(S_n^2)| = 2n$ and $|E(S_n^2)| = 3n-2$. Let $E'= \{ii'|i=1,2,\cdots,n\}$. $\mathcal{S}_{n,r}^2$ will denote the set of graphs derived from $S_n^2$ by randomly deleting $r$ edges in $E'$. Obviously, the unique graph in $\mathcal{S}_{n,n}^2$ is disconnected; hence we consider $\mathcal{S}_{n,r}^2$ for $0 \leq r \leq n-1$ only. Note also that $\mathcal{S}_{n,0}^2 = \{ S_n^2 \}$. In Section 2, some notation and known results are introduced, which will be applied to obtain our main results. In Section 3, explicit expressions of $Kf(S_n^2)$, $Kf^*(S_n^2)$, and $\tau(S_n^2)$ are obtained. Finally, $Kf(S_{n,r}^2)$ and $\tau(S_{n,r}^2)$ are determined in Section 4, where $S_{n,r}^2$ is an arbitrary graph in $\mathcal{S}_{n,r}^2$. Moreover, it is shown that $\lim_{n \rightarrow +\infty} Kf(S_n^2)/W(S_n^2) = \lim_{n \rightarrow +\infty} Kf(S_{n,r}^2)/W(S_{n,r}^2) = 8/15$ and $\lim_{n \rightarrow +\infty} Kf^*(S_n^2)/Gut(S_n^2) = 16/33$. \section{Preliminaries} Label the vertices of $S_n^2$ as in Figure 1, and set $V_1= \{1,2,\cdots,n\}$, $V_2= \{1',2',\cdots,n'\}$. Then we have $$L(S_n^2)= \begin{pmatrix} L_{11}(S_n^2) & L_{12}(S_n^2)\\ L_{21}(S_n^2) & L_{22}(S_n^2) \end{pmatrix},~ \mathcal{L}(S_n^2)= \begin{pmatrix} \mathcal{L}_{11}(S_n^2) & \mathcal{L}_{12}(S_n^2)\\ \mathcal{L}_{21}(S_n^2) & \mathcal{L}_{22}(S_n^2) \end{pmatrix},$$ \noindent where $L_{ij}(S_n^2)$ ($\mathcal{L}_{ij}(S_n^2)$) is the submatrix of $L(S_n^2)$ (resp.
$\mathcal{L}(S_n^2)$) whose rows (columns) correspond to the vertices in $V_i$ (resp. $V_j$). It is easily seen that, $L_{11}(S_n^2) =L_{22}(S_n^2)$, $L_{12}(S_n^2)= L_{21}(S_n^2)$, $\mathcal{L}_{11}(S_n^2) = \mathcal{L}_{22}(S_n^2)$, and $\mathcal{L}_{12}(S_n^2) = \mathcal{L}_{21}(S_n^2)$. Let $$ T=\begin{pmatrix} \frac{1}{\sqrt{2}}I_n &\frac{1}{\sqrt{2}}I_n \\ \frac{1}{\sqrt{2}}I_n &-\frac{1}{\sqrt{2}}I_n \end{pmatrix},$$ \noindent then we have $$T L(S_n^2) T = \begin{pmatrix} L_A(S_n^2) &0\\ 0 &L_S(S_n^2) \end{pmatrix},~ T \mathcal{L}(S_n^2) T = \begin{pmatrix} \mathcal{L}_A(S_n^2) &0\\ 0 &\mathcal{L}_S(S_n^2) \end{pmatrix},$$ \noindent where $L_A(S_n^2)= L_{11}(S_n^2) + L_{12}(S_n^2)$, $L_S(S_n^2)= L_{11}(S_n^2) - L_{12}(S_n^2)$, $\mathcal{L}_A(S_n^2) = \mathcal{L}_{11}(S_n^2)+ \mathcal{L}_{12}(S_n^2)$, and $\mathcal{L}_S(S_n^2) = \mathcal{L}_{11}(S_n^2) - \mathcal{L}_{12}(S_n^2)$. Based on the above arguments, by applying the technique used in \cite{b32,b39}, we immediately have the following decomposition theorem, where $\Phi(B, \lambda) = |\lambda I-B|$ stands for the characteristic polynomial of $B$. \noindent \textbf{Lemma 2.1.} Let $L_A(S_n^2)$, $L_S(S_n^2)$, $\mathcal{L}_A(S_n^2)$, and $\mathcal{L}_S(S_n^2)$ be defined as above. Then $$\Phi(L(S_n^2), \lambda) = \Phi(L_A(S_n^2), \lambda) \Phi(L_S(S_n^2), \lambda),$$ and $$\Phi(\mathcal{L}(S_n^2), \lambda) = \Phi(\mathcal{L}_A(S_n^2), \lambda) \Phi(\mathcal{L}_S(S_n^2), \lambda).$$ \noindent \textbf{Lemma 2.2} \cite{b40}\textbf{.} If $G$ is a connected graph with $n \geq 2$ vertices, then $$\tau(G) = \frac{1}{n}\prod_{i=2}^{n} \mu_i.$$ \section{Results for $S_n^2$} We will give explicit expressions of $Kf(S_n^2)$, $Kf^*(S_n^2)$, and $\tau(S_n^2)$ in this section. \subsection{On $Kf(S_n^2)$ and $\tau(S_n^2)$} Obviously, $$L_{11}(S_n^2) = \begin{pmatrix} n &-1 &-1 &\cdots &-1 \\ -1 &2 &0 &\cdots &0 \\ -1 &0 &2 &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ -1 &0 &0 &\cdots &2 \end{pmatrix}_{n \times n},~ L_{12}(S_n^2) = \begin{pmatrix} -1 &0 &0 &\cdots &0 \\ 0 &-1 &0 &\cdots &0 \\ 0 &0 &-1 &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ 0 &0 &0 &\cdots &-1 \end{pmatrix}_{n \times n}.$$ Hence $$L_A(S_n^2) = L_{11}(S_n^2)+ L_{12}(S_n^2) = \begin{pmatrix} n-1 &-1 &-1 &\cdots &-1 \\ -1 &1 &0 &\cdots &0 \\ -1 &0 &1 &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ -1 &0 &0 &\cdots &1 \end{pmatrix}_{n \times n},$$ \noindent and we easily have $Sp(L_A(S_n^2)) = \{0, 1^{n-2}, n \}$, where $a^k$ denotes $k$ successive $a$'s. Similarly, we have $$L_S(S_n^2) = L_{11}(S_n^2) - L_{12}(S_n^2) = \begin{pmatrix} n+1 &-1 &-1 &\cdots &-1 \\ -1 &3 &0 &\cdots &0 \\ -1 &0 &3 &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ -1 &0 &0 &\cdots &3 \end{pmatrix}_{n \times n},$$ \noindent and get $Sp(L_S(S_n^2)) = \{2, 3^{n-2}, n+2 \}$. Hence $Sp(L(S_n^2)) = \{0, 1^{n-2}, 2, 3^{n-2}, n, n+2 \}$ from Lemma 2.1, and we get the following result. \noindent \textbf{Theorem 3.1.} Let $S_n^2 = S_n \times K_2$. Then (1) $Kf(S_n^2) = \frac{8n^3 +3n^2 -14n +12}{3n +6}$; (2) $\tau(S_n^2) = (n+2)\cdot 3^{n-2}$; (3) $\lim\limits_{n \rightarrow +\infty} \frac{Kf(S_n^2)}{W(S_n^2)} = \frac{8}{15}$.
\noindent \textbf{Proof.} From Lemma 1.1 we have $$Kf(S_n^2) = 2n \left[(n-2) + \frac{1}{2} + \frac{n-2}{3} + \frac{1}{n} + \frac{1}{n+2} \right] = \frac{8n^3 +3n^2 -14n +12}{3(n+2)}.$$ From Lemma 2.2 we immediately have $$\tau(S_n^2) = \frac{1}{2n}\cdot 2 \cdot 3^{n-2} \cdot n \cdot (n+2) = (n+2)\cdot 3^{n-2}.$$ Finally, we end the proof by confirming that $W(S_n^2) =5n^2 -8n +4$. Let $w_i = \sum_{j \in V(S_n^2)} d_{ij}$. Obviously, $w_i = 1 \cdot n + 2(n-1) = 3n-2 $ if $i = 1, 1'$, and $w_i = 1 +1 +2(n-1) +3(n-2) = 5n -6$ otherwise. Hence $$ W(S_n^2) = \frac{1}{2} \sum\limits_{i \in V(S_n^2)} w_i = \frac{1}{2} \left[ 2(3n-2) +(2n-2)(5n-6) \right] = 5n^2 -8n +4.~\blacksquare$$ \subsection{On $Kf^*(S_n^2)$} Next, we determine $Kf^*(S_n^2)$. Obviously, $$\mathcal{L}_{11}(S_n^2) =\begin{pmatrix} 1 &-\frac{1}{\sqrt{2n}} &-\frac{1}{\sqrt{2n}} &\cdots &-\frac{1}{\sqrt{2n}}\\ -\frac{1}{\sqrt{2n}} &1 &0 &\cdots &0\\ -\frac{1}{\sqrt{2n}} &0 &1 &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ -\frac{1}{\sqrt{2n}} &0 &0 &\cdots &1 \end{pmatrix}_{n \times n}, $$ \noindent and $$\mathcal{L}_{12}(S_n^2) =\begin{pmatrix} -\frac{1}{n} &0 &0 &\cdots &0 \\ 0 &-\frac{1}{2} &0 &\cdots &0\\ 0 &0 &-\frac{1}{2} &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ 0 &0 &0 &\cdots &-\frac{1}{2} \end{pmatrix}_{n \times n}. $$ \noindent Hence $$\mathcal{L}_A(S_n^2) = \mathcal{L}_{11}(S_n^2) + \mathcal{L}_{12}(S_n^2) = \begin{pmatrix} \frac{n-1}{n} &-\frac{1}{\sqrt{2n}} &-\frac{1}{\sqrt{2n}} &\cdots &-\frac{1}{\sqrt{2n}}\\ -\frac{1}{\sqrt{2n}} &\frac{1}{2} &0 &\cdots &0\\ -\frac{1}{\sqrt{2n}} &0 &\frac{1}{2} &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ -\frac{1}{\sqrt{2n}} &0 &0 &\cdots &\frac{1}{2} \end{pmatrix}_{n \times n}, $$ \noindent and we easily have $Sp(\mathcal{L}_A(S_n^2)) = \{0, \left(\frac{1}{2} \right)^{n-2}, \frac{3n-2}{2n} \}$. Similarly, we have $$\mathcal{L}_S(S_n^2) = \mathcal{L}_{11}(S_n^2) - \mathcal{L}_{12}(S_n^2) = \begin{pmatrix} \frac{n+1}{n} &-\frac{1}{\sqrt{2n}} &-\frac{1}{\sqrt{2n}} &\cdots &-\frac{1}{\sqrt{2n}}\\ -\frac{1}{\sqrt{2n}} &\frac{3}{2} &0 &\cdots &0\\ -\frac{1}{\sqrt{2n}} &0 &\frac{3}{2} &\cdots &0\\ \cdots &\cdots &\cdots &\cdots &\cdots\\ -\frac{1}{\sqrt{2n}} &0 &0 &\cdots &\frac{3}{2} \end{pmatrix}_{n \times n}, $$ \noindent and get $Sp(\mathcal{L}_S(S_n^2)) = \{2, \left(\frac{3}{2} \right)^{n-2}, \frac{n+2}{2n} \}$. Hence $Sp(\mathcal{L}(S_n^2)) = \{0, \left(\frac{1}{2} \right)^{n-2}, \frac{n+2}{2n}, \frac{3n-2}{2n}, \left(\frac{3}{2} \right)^{n-2}, 2\}$ from Lemma 2.1, and we immediately have the following result. \noindent \textbf{Theorem 3.2.} Let $S_n^2 = S_n \times K_2$. Then (1) $Kf^*(S_n^2) = \frac{48n^3 +25 n^2 -180n +116}{3n+6}$; (2) $\lim\limits_{n \rightarrow +\infty} \frac{Kf^*(S_n^2)}{Gut(S_n^2)} = \frac{16}{33}$. \noindent \textbf{Proof.} From Lemma 1.2 it is easily confirmed that \begin{align*} Kf^*(S_n^2) &= 2(3n-2) \left[ 2n-4 + \frac{2n}{n+2} + \frac{2n}{3n-2}+ \frac{2(n-2)}{3} +\frac{1}{2} \right] \\ &= \frac{48n^3 +25 n^2 -180n +116}{3n+6}. \end{align*} Now, let $g_i =\sum_{j \in V(S_n^2)} d_i d_j d_{ij}$.
Obviously, if $i = 1, 1'$, then $$g_i = n \cdot 2 \cdot 1 + n \cdot 2 \cdot 1 \cdot (n-1) + n \cdot 2 \cdot 2 \cdot (n-1) = 7n^2 -6n,$$ and otherwise $$ g_i = 2 \cdot n \cdot 1 + 2 \cdot 2 \cdot 1 + 2 \cdot n \cdot 2 + 2 \cdot 2 \cdot 2 \cdot (n-2) + 2 \cdot 2 \cdot 3 \cdot (n-2) = 26n -36.$$ Hence $$Gut(S_n^2) = \frac{1}{2}\sum\limits_{i \in V(S_n^2)} g_i = \frac{1}{2} \left[ 2(7n^2 -6n) + (26n -36)(2n-2) \right] = 33n^2 -68n +36,$$ and it follows that $$\lim\limits_{n \rightarrow +\infty} \frac{Kf^*(S_n^2)}{Gut(S_n^2)} = \lim\limits_{n \rightarrow +\infty} \frac{48n^3 +25 n^2 -180n +116}{(3n+6)(33n^2 -68n +36)} =\frac{16}{33},$$ which completes the proof. $\blacksquare$ \section{Results for graphs in $\mathcal{S}_{n,r}^2$} Let $S_{n,r}^2$ be any graph in $\mathcal{S}_{n,r}^2$, $1 \leq r \leq n-1$. We will determine $Kf(S_{n,r}^2)$ and $\tau(S_{n,r}^2)$ in this section. Let $d_i$ be the degree of vertex $i$ in $S_{n,r}^2$. Then $d_i = n$ or $n-1$ if $i = 1, 1'$, and $d_i =1$ or $2$ otherwise. We will compute $Sp(L(S_{n,r}^2))$ in the following two cases. \textbf{Case 1.} Edge $11' \notin E(S_{n,r}^2)$. Then $$L_{11}(S_{n,r}^2) =\begin{pmatrix} n-1 &-1 &-1 &\cdots &-1 \\ -1 &d_2 &0 &\cdots &0 \\ -1 &0 &d_3 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ -1 &0 &0 &\cdots &d_n \end{pmatrix},~ L_{12}(S_{n,r}^2) =\begin{pmatrix} 0 &0 &0 &\cdots &0 \\ 0 &t_2 &0 &\cdots &0 \\ 0 &0 &t_3 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ 0 &0 &0 &\cdots &t_n \end{pmatrix}, $$ where $t_i =0$ if $d_i =1$, and $t_i =-1$ if $d_i =2$, $2 \leq i \leq n$. Hence $$L_A(S_{n,r}^2) = L_{11}(S_{n,r}^2) + L_{12}(S_{n,r}^2) =\begin{pmatrix} n-1 &-1 &-1 &\cdots &-1 \\ -1 &1 &0 &\cdots &0 \\ -1 &0 &1 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ -1 &0 &0 &\cdots &1 \end{pmatrix}_{n \times n}, $$ and $Sp(L_A(S_{n,r}^2)) = \{0, 1^{n-2}, n \}$. On the other hand, $$L_S(S_{n,r}^2) = L_{11}(S_{n,r}^2) - L_{12}(S_{n,r}^2) =\begin{pmatrix} n-1 &-1 &-1 &\cdots &-1 \\ -1 &d_2 -t_2 &0 &\cdots &0 \\ -1 &0 &d_3 -t_3 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ -1 &0 &0 &\cdots &d_n -t_n \end{pmatrix}, $$ where $d_i -t_i =1$ if $d_i =1$, and $d_i -t_i =3$ if $d_i =2$, $2 \leq i \leq n$. We will compute $Sp(L_S(S_{n,r}^2))$ in the following cases. \textbf{Case 1.1.} $r=1$. Then $d_i -t_i =3$, $2 \leq i \leq n$, and we easily have $$ Sp(L_S(S_{n,r}^2))= \left\{ 3^{n-2}, \frac{n+2 +\sqrt{n^2 -4n +12}}{2}, \frac{n+2 -\sqrt{n^2 -4n +12}}{2} \right\}.$$ \textbf{Case 1.2.} $r \geq 2$. By direct calculations, we have $$\Phi( L_S(S_{n,r}^2), \lambda) =\left[\lambda^3 -(n+3)\lambda^2 +3n\lambda +2r -2n \right] (\lambda -1)^{r-2} (\lambda -3)^{n-r-1}.$$ Let $\lambda_1, \lambda_2, \lambda_3$ be the three roots of $\lambda^3 -(n+3)\lambda^2 +3n\lambda +2r -2n = 0$. Then $Sp(L_S(S_{n,r}^2)) =\{1^{r-2}, 3^{n-r-1}, \lambda_1, \lambda_2, \lambda_3 \}$, and it holds that $\lambda_1 \lambda_2 \lambda_3 =2n -2r$ and $$\frac{1}{\lambda_1} +\frac{1}{\lambda_2} +\frac{1}{\lambda_3} =\frac{\lambda_1 \lambda_2 +\lambda_1 \lambda_3 +\lambda_2 \lambda_3}{\lambda_1 \lambda_2 \lambda_3} =\frac{3n}{2n-2r}$$ from Vieta's theorem. \textbf{Case 2.} $11' \in E(S_{n,r}^2)$.
Then $$L_{11}(S_{n,r}^2) =\begin{pmatrix} n &-1 &-1 &\cdots &-1 \\ -1 &d_2 &0 &\cdots &0 \\ -1 &0 &d_3 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ -1 &0 &0 &\cdots &d_n \end{pmatrix},~ L_{12}(S_{n,r}^2) =\begin{pmatrix} -1 &0 &0 &\cdots &0 \\ 0 &t_2 &0 &\cdots &0 \\ 0 &0 &t_3 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ 0 &0 &0 &\cdots &t_n \end{pmatrix}, $$ where $t_i =0$ if $d_i =1$, and $t_i =-1$ if $d_i =2$, $2 \leq i \leq n$. Hence $$L_A(S_{n,r}^2) = L_{11}(S_{n,r}^2) + L_{12}(S_{n,r}^2) =\begin{pmatrix} n-1 &-1 &-1 &\cdots &-1 \\ -1 &1 &0 &\cdots &0 \\ -1 &0 &1 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ -1 &0 &0 &\cdots &1 \end{pmatrix}_{n \times n}, $$ and $Sp(L_A(S_{n,r}^2)) = \{0, 1^{n-2}, n \}$. On the other hand, $$L_S(S_{n,r}^2) = L_{11}(S_{n,r}^2) - L_{12}(S_{n,r}^2) =\begin{pmatrix} n+1 &-1 &-1 &\cdots &-1 \\ -1 &d_2 -t_2 &0 &\cdots &0 \\ -1 &0 &d_3 -t_3 &\cdots &0 \\ \cdots &\cdots &\cdots &\cdots &\cdots \\ -1 &0 &0 &\cdots &d_n -t_n \end{pmatrix}, $$ where $d_i -t_i =1$ if $d_i =1$, and $d_i -t_i =3$ if $d_i =2$, $2 \leq i \leq n$. By direct calculations, we have $$\Phi( L_S(S_{n,r}^2), \lambda) =\left[\lambda^3 -(n+5)\lambda^2 +(3n+8)\lambda +2r -2n -4 \right] (\lambda -1)^{r-1} (\lambda -3)^{n-r-2}.$$ Let $\lambda_1, \lambda_2, \lambda_3$ be the three roots of $\lambda^3 -(n+5)\lambda^2 +(3n+8)\lambda +2r -2n -4 = 0$. Then $Sp(L_S(S_{n,r}^2)) =\{1^{r-1}, 3^{n-r-2}, \lambda_1, \lambda_2, \lambda_3 \}$, and it holds that $\lambda_1 \lambda_2 \lambda_3 =2n -2r +4$ and $$\frac{1}{\lambda_1} +\frac{1}{\lambda_2} +\frac{1}{\lambda_3} =\frac{\lambda_1 \lambda_2 +\lambda_1 \lambda_3 +\lambda_2 \lambda_3}{\lambda_1 \lambda_2 \lambda_3} =\frac{3n+8}{2n-2r+4}$$ from Vieta's theorem. Now, we are able to give the main result of this section. \noindent \textbf{Theorem 4.1.} If $S_{n,r}^2 \in \mathcal{S}_{n,r}^2$, $0 \leq r \leq n-1$, then (1) $Kf(S_{n,r}^2) = \begin{cases} \frac{8n^3 -(4r+17)n^2 -(4r^2 -26r -6)n -6r}{3(n-r)}, &\text{if}~11' \notin E(S_{n,r}^2) \\ \frac{8n^3 -(4r-3)n^2 -(4r^2 -30r +14)n +12 -6r}{3(n-r+2)}, &\text{if}~11' \in E(S_{n,r}^2) \end{cases} $; (2) $\tau(S_{n,r}^2) = \begin{cases} (n-r) \cdot 3^{n-r-1}, &\text{if}~11' \notin E(S_{n,r}^2) \\ (n-r+2) \cdot 3^{n-r-2}, &\text{if}~11' \in E(S_{n,r}^2) \end{cases} $; (3) for each fixed $r$, $\lim\limits_{n \rightarrow +\infty} \frac{Kf(S_{n,r}^2)}{W(S_{n,r}^2)} = \frac{8}{15}$. \noindent \textbf{Proof.} If $r=0$, then $S_{n,r}^2 \cong S_{n}^2$, and the conclusion holds from Theorem 3.1. Hence assume $r \geq 1$. We distinguish the following two cases. \textbf{Case 1.} Edge $11' \notin E(S_{n,r}^2)$. \textbf{Case 1.1.} $r=1$. Then $$ Sp(L(S_{n,r}^2)) =\{0, 1^{n-2}, n, 3^{n-2}, \frac{n+2 -\sqrt{n^2 -4n +12}}{2}, \frac{n+2 +\sqrt{n^2 -4n +12}}{2} \}.$$ From Lemma 1.1 we have \begin{align*} Kf(S_{n,r}^2) &= 2n \left[n-2 +\frac{1}{n} +\frac{n-2}{3} +\frac{2}{n+2 -\sqrt{n^2 -4n +12}} +\frac{2}{n+2 +\sqrt{n^2 -4n +12}} \right] \\ &= \frac{8n^3 -21n^2 +28n -6}{3(n-1)}\\ &= \frac{8n^3 -(4r+17)n^2 -(4r^2 -26r -6)n -6r}{3(n-r)}. \end{align*} Then from Lemma 2.2, we have \begin{align*} \tau(S_{n,r}^2) &= \frac{1}{2n} \left[n \cdot 3^{n-2} \cdot \frac{n+2 -\sqrt{n^2 -4n +12}}{2} \cdot \frac{n+2 +\sqrt{n^2 -4n +12}}{2} \right] \\ &= (n-1) \cdot 3^{n-2}\\ &= (n-r) \cdot 3^{n-r-1}. \end{align*} \textbf{Case 1.2.} $r \geq 2$.
Then $ Sp(L(S_{n,r}^2)) =\{0, 1^{n+r-4}, n, 3^{n-r-1}, \lambda_1, \lambda_2, \lambda_3 \}$, where $\lambda_1 \lambda_2 \lambda_3 = 2n-2r$ and $1/\lambda_1 + 1/\lambda_2 + 1/\lambda_3 = 3n/(2n-2r)$. From Lemma 1.1 we have \begin{align*} Kf(S_{n,r}^2) &= 2n \left[n+r-4 +\frac{1}{n} +\frac{n-r-1}{3} + \frac{3n}{2n-2r} \right] \\ &= \frac{8n^3 -(4r+17)n^2 -(4r^2 -26r -6)n -6r}{3(n-r)}. \end{align*} Then from Lemma 2.2, we have $$ \tau(S_{n,r}^2) = \frac{n \cdot 3^{n-r-1} \cdot \lambda_1 \cdot \lambda_2 \cdot \lambda_3}{2n} = \frac{n \cdot 3^{n-r-1} \cdot (2n-2r)}{2n} = (n-r) \cdot 3^{n-r-1}. $$ \textbf{Case 2.} Edge $11' \in E(S_{n,r}^2)$. Then $ Sp(L(S_{n,r}^2)) =\{0, 1^{n+r-3}, n, 3^{n-r-2}, \lambda_1, \lambda_2, \lambda_3 \}$, where $\lambda_1 \lambda_2 \lambda_3 = 2n-2r+4$ and $1/\lambda_1 + 1/\lambda_2 + 1/\lambda_3 = (3n+8)/(2n-2r+4)$. From Lemma 1.1 we have \begin{align*} Kf(S_{n,r}^2) &= 2n \left[n+r-3 +\frac{1}{n} +\frac{n-r-2}{3} + \frac{3n+8}{2n-2r+4} \right] \\ &= \frac{8n^3 -(4r-3)n^2 -(4r^2 -30r +14)n +12 -6r}{3(n-r+2)}. \end{align*} Then from Lemma 2.2, we have $$ \tau(S_{n,r}^2) = \frac{n \cdot 3^{n-r-2} \cdot \lambda_1 \cdot \lambda_2 \cdot \lambda_3}{2n} = \frac{n \cdot 3^{n-r-2} \cdot (2n-2r+4)}{2n} = (n-r+2) \cdot 3^{n-r-2}. $$ Finally, deleting a vertical edge increases each affected distance by a bounded amount (for instance, if only $ii'$ is deleted for a leaf $i$, then $d_{ii'}$ increases from $1$ to $3$), and only $O(r^2)$ pairs of vertices are affected, so $W(S_{n,r}^2) = W(S_{n}^2) + O(r^2) = 5n^2 -8n +4 +O(r^2)$. Hence, in both cases, for each fixed $r$ it holds that $$\lim\limits_{n \rightarrow +\infty} \frac{Kf(S_{n,r}^2)}{W(S_{n,r}^2)} = \frac{8}{15}.~\blacksquare$$
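The closed-form expressions above are easy to check numerically. The following Python script (an illustrative check, not part of the proofs) builds $L(S_n \times K_2)$ directly and verifies the spectrum, the Kirchhoff index, and the number of spanning trees from Theorem 3.1 for small $n$:
\begin{verbatim}
# Brute-force check of Theorem 3.1 on L(S_n x K_2) for small n.
import numpy as np

def laplacian(n):
    # Vertices 0..n-1: first copy of S_n (vertex 0 is the center);
    # vertices n..2n-1: second copy; i -- n+i are the vertical edges.
    A = np.zeros((2 * n, 2 * n))
    for j in range(1, n):
        A[0, j] = A[j, 0] = 1            # star edges, first copy
        A[n, n + j] = A[n + j, n] = 1    # star edges, second copy
    for i in range(n):
        A[i, n + i] = A[n + i, i] = 1    # vertical edges ii'
    return np.diag(A.sum(1)) - A

for n in range(2, 8):
    mu = np.sort(np.linalg.eigvalsh(laplacian(n)))
    target = np.sort([0, 2, n, n + 2] + [1]*(n - 2) + [3]*(n - 2))
    assert np.allclose(mu, target)                  # Sp(L(S_n^2))
    kf = 2 * n * np.sum(1.0 / mu[1:])               # Lemma 1.1, 2n vertices
    assert np.isclose(kf, (8*n**3 + 3*n**2 - 14*n + 12) / (3*n + 6))
    tau = np.prod(mu[1:]) / (2 * n)                 # Lemma 2.2
    assert np.isclose(tau, (n + 2) * 3**(n - 2))
print("Theorem 3.1 verified for n = 2, ..., 7")
\end{verbatim}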
\section{Introduction} To estimate a regression when the errors have a non-identity covariance matrix, we usually turn first to generalized least squares (GLS). Somewhat surprisingly, GLS proves to be computationally challenging in the very simple setting of the unbalanced crossed random effects models that we study here. For that problem, the cost to compute the GLS estimate on $N$ data points grows at best like $O(N^{3/2})$ under the usual algorithms. If we additionally assume Gaussian errors, then \cite{crelin} show that even evaluating the likelihood one time costs at least a multiple of $N^{3/2}$. These costs make the usual algorithms for GLS infeasible for large data sets such as those arising in electronic commerce. In this paper, we present an iterative algorithm based on a backfitting approach from \cite{buja:hast:tibs:1989}. This algorithm is known to converge to the GLS solution. The cost of each iteration is $O(N)$ and so we also study how the number of iterations grows with~$N$. The crossed random effects model we consider has \begin{equation}\label{eq:refmodel} Y_{ij} =x_{ij}^\mathsf{T}\beta+a_i+b_j+e_{ij},\quad 1\le i\le R,\quad 1\le j\le C \end{equation} for random effects $a_i$ and $b_{j}$ and an error $e_{ij}$ with a fixed effects regression parameter $\beta\in\mathbb{R}^p$ for the covariates $x_{ij}\in\mathbb{R}^p$. We assume that $a_i\stackrel{\mathrm{iid}}{\sim} (0,\sigma^2_A)$, $b_j\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_B)$, and $e_{ij}\stackrel{\mathrm{iid}}{\sim}(0,\sigma^2_E)$ are all independent. It is thus a mixed effects model in which the random portion has a crossed structure. The GLS estimate is also the maximum likelihood estimate (MLE), when $a_i$, $b_{j}$ and $e_{ij}$ are Gaussian. Because we assume that $p$ is fixed as $N$ grows, we often leave $p$ out of our cost estimates, giving instead the complexity in $N$. The GLS estimate $\hat\beta_\mathrm{GLS}$ for crossed random effects can be efficiently computed if all $R\times C$ values are available. Our motivating examples involve ratings data where $R$ people rate $C$ items and then it is usual that the data are very unbalanced with a haphazard observational pattern in which only $N\ll R\times C$ of the $(x_{ij},Y_{ij})$ pairs are observed. The crossed random effects setting is significantly more difficult than a hierarchical model with just $a_i+e_{ij}$ but no $b_{j}$ term. Then the observations for index $j$ are `nested within' those for each level of index $i$. The result is that the covariance matrix of all observed $Y_{ij}$ values has a block diagonal structure allowing GLS to be computed in $O(N)$ time. Hierarchical models are very well suited to Bayesian computation \citep{gelm:hill:2006}. Crossed random effects are a much greater challenge. \cite{GO17} find that the Gibbs sampler can take $O(N^{1/2})$ iterations to converge to stationarity, with each iteration costing $O(N)$, leading once again to $O(N^{3/2})$ cost. For more examples where the costs of solving equations versus sampling from a covariance attain the same rate see \cite{good:soka:1989} and \cite{RS97}. As further evidence of the difficulty of this problem, the Gibbs sampler was one of nine MCMC algorithms that \cite{GO17} found to be unsatisfactory.
Furthermore, \cite{lme4} removed the {\tt mcmcsamp} function from the R package lme4 because it was considered unreliable even for the problem of sampling the posterior distribution of the parameters from previously fitted models, and even for those with random effects variances near zero. \cite{papa:robe:zane:2020} present an exception to the high cost of a Bayesian approach for crossed random effects. They propose a collapsed Gibbs sampler that can potentially mix in $O(1)$ iterations. To prove this rate, they make an extremely stringent assumption that every index $i=1,\dots,R$ appears in the same number $N/C$ of observed data points and similarly every $j=1,\dots,C$ appears in $N/R$ data points. Such a condition is tantamount to requiring a designed experiment for the data and it is much stronger than what their algorithm seems to need in practice. Under that condition their mixing rate asymptotes to a quantity $\rho_{\mathrm{aux}}$, described in our discussion section, that in favorable circumstances is $O(1)$. They find empirically that their sampler has a cost that scales well in many data sets where their balance condition does not hold. In this paper we study an iterative linear operation, known as backfitting, for GLS. Each iteration costs $O(N)$. The speed of convergence depends on a certain matrix norm of that iteration, which we exhibit below. If the norm remains bounded strictly below $1$ as $N\to\infty$, then the number of iterations to convergence is $O(1)$. We are able to show that the matrix norm is $O(1)$ with probability tending to one, under conditions where the number of observations per row (or per column) is random and even the expected row or column counts may vary, though in a narrow range. While this is a substantial weakening of the conditions in \cite{papa:robe:zane:2020}, it still fails to cover many interesting cases. Like them, we find empirically that our algorithm scales much more broadly than under the conditions for which scaling is proved. We suspect that the computational infeasibility of GLS leads many users to use ordinary least squares (OLS) instead. OLS has two severe problems. First, it is \myemph{inefficient} with $\var(\hat\beta_\mathrm{OLS})$ larger than $\var(\hat\beta_\mathrm{GLS})$. This is equivalent to OLS ignoring some possibly large fraction of the information in the data. Perhaps more seriously, OLS is \myemph{naive}. It produces an estimate of $\var(\hat\beta_\mathrm{OLS})$ that can be too small by a large factor. That amounts to overestimating the quantity of information behind $\hat\beta_\mathrm{OLS}$, also by a potentially large factor. The naivete of OLS can be countered by using better variance estimates. One can bootstrap it by resampling the row and column entities as in \cite{pbs}. There is also a version of Huber-White variance estimation for this case in econometrics. See for instance \cite{came:gelb:mill:2011}. While these methods counter the naivete of OLS, the inefficiency of OLS remains. The method of moments algorithm in \cite{crelin} gets consistent asymptotically normal estimates of $\beta$, $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$. It produces a GLS estimate $\hat\beta$ that is more efficient than OLS but still not fully efficient because it accounts for correlations due to only one of the two crossed random effects. While inefficient, it is not naive because its estimate of $\var(\hat\beta)$ properly accounts for variance due to $a_i$, $b_{j}$ and $e_{ij}$. 
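To make the variance correction for OLS concrete, here is a minimal numpy sketch of a two-way cluster-robust variance for $\hat\beta_\mathrm{OLS}$ in the spirit of \cite{came:gelb:mill:2011}; finite-sample adjustment factors are omitted, and the residual vector \texttt{e} and the row and column index arrays are assumed inputs.
\begin{verbatim}
# Two-way (row and column) cluster-robust variance for OLS:
# V = V_rows + V_cols - V_intersection, where the intersection
# clustering has one observation per observed (i, j) pair.
import numpy as np

def cgm_variance(X, e, rows, cols):
    bread = np.linalg.inv(X.T @ X)
    def meat(groups):
        M = np.zeros((X.shape[1], X.shape[1]))
        for g in np.unique(groups):
            v = X[groups == g].T @ e[groups == g]
            M += np.outer(v, v)
        return M
    pair = rows * (cols.max() + 1) + cols   # unique (i, j) labels
    M = meat(rows) + meat(cols) - meat(pair)
    return bread @ M @ bread
\end{verbatim}
An estimate of this kind counters the naivete of OLS, but not its inefficiency.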
In this paper we get a GLS estimate $\hat\beta$ that takes account of all three variance components, making it efficient. We also provide an estimate of $\var(\hat\beta)$ that accounts for all three components, so our estimate is not naive. Our algorithm requires consistent estimates of the variance components $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ in computing $\hat\beta$ and $\widehat\var(\hat\beta)$. We use the method of moments estimators from \cite{GO17} that can be computed in $O(N)$ work. By \citet[Theorem 4.2]{GO17}, these estimates of $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ are asymptotically uncorrelated and each of them has the same asymptotic variance it would have had were the other two variance components equal to zero. It is not known whether these estimators are optimal, much less optimal subject to an $O(N)$ cost constraint. The variance component estimates are known to be asymptotically normal \citep{gao:thesis}.

The rest of this paper is organized as follows. Section~\ref{sec:missing} introduces our notation and assumptions for missing data. Section~\ref{sec:backfitting} presents the backfitting algorithm from \cite{buja:hast:tibs:1989}. That algorithm was defined for smoothers, but we are able to cast the estimation of random effect parameters as a special kind of smoother. Section~\ref{sec:normconvergence} proves our result that backfitting converges with probability tending to one as the problem size increases. Section~\ref{sec:empiricalnorms} shows numerical measures of the matrix norm of the backfitting operator. It remains bounded strictly below one under broader conditions than our theory covers. We find that even one iteration of the lmer function in the lme4 package \citep{lme4} has a cost that grows like $N^{3/2}$ in one setting and like $N^{2.1}$ in another, sparser one. The backfitting algorithm has cost $O(N)$ in both of these cases. Section~\ref{sec:stitch} illustrates our GLS algorithm on some data provided to us by Stitch Fix. These are customer ratings of items of clothing on a ten point scale. Section~\ref{sec:discussion} has a discussion of these results. An appendix contains some regression output for the Stitch Fix data.

\section{Missingness}\label{sec:missing}

We adopt the notation from \cite{crelin}. We let $Z_{ij}\in\{0,1\}$ take the value $1$ if $(x_{ij},Y_{ij})$ is observed and $0$ otherwise, for $i=1,\dots,R$ and $j=1,\dots,C$. In many of the contexts we consider, the missingness is not at random and is potentially informative. Handling such problems is outside the scope of this paper, apart from a brief discussion in Section~\ref{sec:discussion}. It is already a sufficient challenge to work without informative missingness.

The matrix $Z\in\{0,1\}^{R\times C}$, with elements $Z_{ij}$, has $N_{i\sumdot} =\sum_{j=1}^CZ_{ij}$ observations in `row $i$' and $N_{\sumdot j}=\sum_{i=1}^RZ_{ij}$ observations in `column $j$'. We often drop the limits of summation, so that $i$ is always summed over $1,\dots,R$ and $j$ over $1,\dots,C$. When we need additional symbols for row and column indices we use $r$ for rows and $s$ for columns. The total sample size is $N=\sum_i\sum_jZ_{ij} =\sum_iN_{i\sumdot} = \sum_jN_{\sumdot j}$. There are two co-observation matrices, $Z^\mathsf{T} Z$ and $ZZ^\mathsf{T}$.
Here $(Z^\mathsf{T} Z)_{js}=\sum_iZ_{ij}Z_{is}$ gives the number of rows in which data from both columns $j$ and $s$ were observed, while $(ZZ^\mathsf{T})_{ir}=\sum_jZ_{ij}Z_{rj}$ gives the number of columns in which data from both rows $i$ and $r$ were observed.

In our regression models, we treat $Z_{ij}$ as nonrandom. We are conditioning on the actual pattern of observations in our data. When we study the rate at which our backfitting algorithm converges, we consider $Z_{ij}$ drawn at random. That is, the analyst is solving a GLS problem conditionally on the pattern of observations and missingness, while we study the convergence rates that the analyst will see for data drawn from a missingness mechanism defined in Section~\ref{sec:modelz}.

If we place all of the $Y_{ij}$ into a vector $\mathcal{Y}\in\mathbb{R}^N$ and $x_{ij}$ compatibly into a matrix $\mathcal{X}\in\mathbb{R}^{N\times p}$, then the naive and inefficient OLS estimator is \begin{align}\label{eq:bhatols} \hat\beta_\mathrm{OLS} = (\mathcal{X}^\mathsf{T} \mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{Y}. \end{align} This can be computed in $O(Np^2)$ work. We prefer to use the GLS estimator \begin{align}\label{eq:bhatgls}\hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{V}^{-1}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{V}^{-1}\mathcal{Y}, \end{align} where $\mathcal{V}\in\mathbb{R}^{N\times N}$ contains all of the $\cov(Y_{ij},Y_{rs})$ in an ordering compatible with $\mathcal{X}$ and $\mathcal{Y}$.

A naive algorithm costs $O(N^3)$ to solve for $\hat\beta_\mathrm{GLS}$. It can instead be computed through a Cholesky decomposition of an $(R+C)\times (R+C)$ matrix \citep{sear:case:mccu:1992}. That has cost $O(R^3+C^3)$. Now $N\le RC$, with equality only for completely observed data. Therefore $\max(R,C)\ge \sqrt{N}$, and so $R^3+C^3\ge N^{3/2}$. When the data are sparsely enough observed it is possible that $\min(R,C)$ grows more rapidly than $N^{1/2}$. In a numerical example in Section~\ref{sec:empiricalnorms} we have $\min(R,C)$ growing like $N^{0.70}$. In a hierarchical model, with $a_i$ but no $b_{j}$, we would find $\mathcal{V}$ to be block diagonal and then $\hat\beta_\mathrm{GLS}$ could be computed in $O(N)$ work. A reviewer reminds us that it has been known since \cite{stra:1969} that systems of equations can be solved in subcubic time. Despite that, current software is still dominated by cubic time algorithms. Also, none of the known algorithms is quadratic, so in our setting the cost would be at least a multiple of $(R+C)^{2+\gamma}$ for some $\gamma>0$ and hence not $O(N)$.

We can write our crossed effects model as \begin{align}\label{eq:cemodelviaz} \mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} + \mathcal{Z}_B\boldsymbol{b} + \boldsymbol{e} \end{align} for matrices $\mathcal{Z}_A\in\{0,1\}^{N\times R}$ and $\mathcal{Z}_B\in\{0,1\}^{N\times C}$. The $i$th column of $\mathcal{Z}_A$ has ones for all of the $N$ observations that come from row $i$ and zeroes elsewhere. The definition of $\mathcal{Z}_B$ is analogous. The observation matrix can be written $Z = \mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B$. The vector $\boldsymbol{e}$ has all $N$ values of $e_{ij}$ in compatible order. Vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ contain the row and column random effects $a_i$ and $b_{j}$.
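For concreteness, the following minimal R sketch (ours, with illustrative sizes and a synthetic observation pattern; it is not the released code of Section~\ref{sec:wholeshebang}) builds sparse versions of $\mathcal{Z}_A$, $\mathcal{Z}_B$ and $Z$ from the two factors that record the row and column of each observation.
\begin{verbatim}
# Illustrative: sparse Z_A (N x R), Z_B (N x C) and Z (R x C)
# from the row/column factors of N observed (i, j) cells.
library(Matrix)
R <- 5; C <- 4; N <- 10
cells <- sample(R * C, N)                          # N distinct observed cells
fA <- factor((cells - 1) %/% C + 1, levels = 1:R)  # row of each observation
fB <- factor((cells - 1) %%  C + 1, levels = 1:C)  # column of each observation
ZA <- sparseMatrix(i = seq_len(N), j = as.integer(fA), x = 1, dims = c(N, R))
ZB <- sparseMatrix(i = seq_len(N), j = as.integer(fB), x = 1, dims = c(N, C))
Z  <- t(ZA) %*% ZB               # 0/1 observation matrix for distinct cells
rowSums(Z); colSums(Z)           # the counts N_i. and N_.j
\end{verbatim}
As we note below for the one-factor case, the algorithm never forms these matrices explicitly; the factors {\tt fA} and {\tt fB} suffice.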
In this notation
\begin{equation} \label{eq:Vee} \mathcal{V} = \mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\sigma^2_A + \mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}\sigma^2_B + I_N\sigma^2_E, \end{equation}
where $I_N$ is the $N \times N$ identity matrix. Our main computational problem is to get a value for $\mathcal{U}=\mathcal{V}^{-1}\mathcal{X}\in\mathbb{R}^{N\times p}$. To do that we iterate towards a solution $\boldsymbol{u}\in\mathbb{R}^N$ of $\mathcal{V} \boldsymbol{u}=\boldsymbol{x}$, where $\boldsymbol{x}\in\mathbb{R}^N$ is one of the $p$ columns of $\mathcal{X}$. After that, finding \begin{equation} \label{eq:betahat} \hat\beta_\mathrm{GLS} = (\mathcal{X}^\mathsf{T} \mathcal{U})^{-1}(\mathcal{Y}^\mathsf{T}\mathcal{U})^\mathsf{T} \end{equation} is not expensive, because $\mathcal{X}^\mathsf{T}\mathcal{U}\in\mathbb{R}^{p\times p}$ and we suppose that $p$ is not large.

If the data ordering in $\mathcal{Y}$ and elsewhere sorts by index $i$, breaking ties by index $j$, then $\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}\in\{0,1\}^{N\times N}$ is a block matrix with $R$ blocks of ones of size $N_{i\sumdot}\times N_{i\sumdot}$ along the diagonal and zeroes elsewhere. The matrix $\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}$ will not be block diagonal in that ordering. Instead $P\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T} P^\mathsf{T}$ will be block diagonal with $N_{\sumdot j}\times N_{\sumdot j}$ blocks of ones on the diagonal, for a suitable $N\times N$ permutation matrix $P$.

\section{Backfitting algorithms}\label{sec:backfitting}

Our first goal is to develop computationally efficient ways to solve the GLS problem \eqref{eq:betahat} for the linear mixed model~\eqref{eq:cemodelviaz}. We use the backfitting algorithm that \cite{hast:tibs:1990} and \cite{buja:hast:tibs:1989} use to fit additive models. We write $\mathcal{V}$ in (\ref{eq:Vee}) as $\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B +I_N\right)$ with $\lambda_A=\sigma^2_E/\sigma^2_A$ and $\lambda_B=\sigma^2_E/\sigma^2_B$, and define $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$. Then the GLS estimate of $\beta$ is \begin{align} \hat\beta_{\mathrm{GLS}}&=\arg\min_\beta (\mathcal{Y}-\mathcal{X}\beta)^\mathsf{T}\mathcal{W}(\mathcal{Y}-\mathcal{X}\beta) = (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{Y}\label{eq:betahatw} \end{align} and $\cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E (\mathcal{X}^\mathsf{T}\mathcal{W}\mathcal{X})^{-1}$. It is well known (e.g., \cite{robinson91:_that_blup}) that we can obtain $\hat\beta_{\mathrm{GLS}}$ by solving the following penalized least-squares problem \begin{align}\label{eq:minboth} \min_{\beta,\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 +\lambda_A\Vert\boldsymbol{a}\Vert^2 +\lambda_B\Vert\boldsymbol{b}\Vert^2. \end{align} Then $\hat\beta=\hat\beta_{\mathrm{GLS}}$, and $\hat \boldsymbol{a}$ and $\hat \boldsymbol{b}$ are the best linear unbiased prediction (BLUP) estimates of the random effects. This derivation works for any number of factors, but it is instructive to carry it through initially for one.

\subsection{One factor}\label{sec:one-factor}

For a single factor, we simply drop the $\mathcal{Z}_B\boldsymbol{b}$ term from \eqref{eq:cemodelviaz} to get \begin{equation*} \mathcal{Y} = \mathcal{X}\beta + \mathcal{Z}_A\boldsymbol{a} +\boldsymbol{e}.
\end{equation*} Then $\mathcal{V}=\cov(\mathcal{Z}_A\boldsymbol{a}+\boldsymbol{e})= \sigma^2_A\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T} +\sigma^2_E I_N$, and $\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$ as before. The penalized least squares problem is to solve \begin{align}\label{eq:equivmina} \min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2. \end{align} We show the details as we need them for a later derivation. The normal equations from~\eqref{eq:equivmina} yield \begin{align} \boldsymbol{0} & = \mathcal{X}^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a}),\quad\text{and}\label{eq:normbeta}\\ \boldsymbol{0} & = \mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta-\mathcal{Z}_A\hat\boldsymbol{a}) -\lambda_A\hat\boldsymbol{a}.\label{eq:normbsa} \end{align} Solving~\eqref{eq:normbsa} for $\hat\boldsymbol{a}$ and multiplying the solution by $\mathcal{Z}_A$ yields $$ \mathcal{Z}_A\hat\boldsymbol{a} = \mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}(\mathcal{Y}-\mathcal{X}\hat\beta) \equiv \mathcal{S}_A(\mathcal{Y}-\mathcal{X}\hat\beta), $$ for an $N\times N$ ridge regression ``smoother matrix'' $\mathcal{S}_A$. As we explain below, this smoother matrix implements shrunken within-group means. Then substituting $\mathcal{Z}_A\hat\boldsymbol{a}$ into equation~\eqref{eq:normbeta} yields \begin{equation} \label{eq:onefactor} \hat\beta = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_A)\mathcal{Y}. \end{equation} Using the Sherman-Morrison-Woodbury (SMW) identity, one can show that $\mathcal{W}=I_N-\mathcal{S}_A$ and hence $\hat\beta$ above equals $\hat\beta_\mathrm{GLS}$ from~\eqref{eq:betahatw}. This is not in itself a new discovery; see for example \cite{robinson91:_that_blup} or \citet[Section 5.3.3]{hast:tibs:1990}.

To compute the solution in (\ref{eq:onefactor}), we need to compute $\mathcal{S}_A \mathcal{Y}$ and $\mathcal{S}_A\mathcal{X}$. The heart of the computation in $\mathcal{S}_A \mathcal{Y}$ is $(\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}\mathcal{Y}$. But $\mathcal{Z}_A^\mathsf{T} \mathcal{Z}_A=\mathrm{diag}(N_{1\sumdot},N_{2\sumdot},\ldots,N_{R\sumdot})$ and we see that all we are doing is computing an $R$-vector of shrunken means of the elements of $\mathcal{Y}$ at each level of the factor $A$; the $i$th element is $\sum_jZ_{ij} Y_{ij}/(N_{i\sumdot}+\lambda_A)$. This involves a single pass through the $N$ elements of $\mathcal{Y}$, accumulating the sums into $R$ registers, followed by an elementwise scaling of the $R$ components. Then pre-multiplication by $\mathcal{Z}_A$ simply puts these $R$ shrunken means back into an $N$-vector in the appropriate positions. The total cost is $O(N)$. Likewise $\mathcal{S}_A\mathcal{X}$ does the same separately for each of the columns of $\mathcal{X}$. Hence the entire computational cost for \eqref{eq:onefactor} is $O(Np^2)$, the same order as regression on $\mathcal{X}$. What is also clear is that the indicator matrix $\mathcal{Z}_A$ is not actually needed here; instead all we need to carry out these computations is the factor vector $f_A$ that records the level of factor $A$ for each of the $N$ observations.
In the R language \citep{R:lang:2015} the following pair of operations does the computation:
\begin{verbatim}
# shrunken means of y within the levels of fA
hat_a = tapply(y,fA,sum)/(table(fA)+lambdaA)
# expand the R shrunken means back into an N-vector
hat_y = hat_a[fA]
\end{verbatim}
where {\tt fA} is a categorical variable (factor) $f_A$ of length $N$ containing the row indices $i$ in an order compatible with $\mathcal{Y}\in\mathbb{R}^N$ (represented as {\tt y}) and {\tt lambdaA} is $\lambda_A=\sigma^2_E/\sigma^2_A$.

\subsection{Two factors}\label{sec:two-factors}

With two factors we face the problem of incompatible block diagonal matrices discussed in Section~\ref{sec:missing}. Define $\mathcal{Z}_G=(\mathcal{Z}_A\!:\!\mathcal{Z}_B)$ ($R+C$ columns), $\mathcal{D}_\lambda=\mathrm{diag}(\lambda_AI_R,\lambda_BI_C)$, and $\boldsymbol{g}^\mathsf{T}=(\boldsymbol{a}^\mathsf{T},\boldsymbol{b}^\mathsf{T})$. Then solving \eqref{eq:minboth} is equivalent to \begin{align}\label{eq:ming} \min_{\beta,\boldsymbol{g}}\Vert \mathcal{Y}-\mathcal{X}\beta-\mathcal{Z}_G\boldsymbol{g}\Vert^2 +\boldsymbol{g}^\mathsf{T}\mathcal{D}_\lambda\boldsymbol{g}. \end{align} A derivation similar to that used in the one-factor case gives \begin{equation} \label{eq:gfactor} \hat\beta = H_\mathrm{GLS}\mathcal{Y}\quad\text{for}\quad H_\mathrm{GLS} = (\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G), \end{equation} where the hat matrix $H_\mathrm{GLS}$ is written in terms of a smoother matrix \begin{equation} \label{eq:defcsg} \mathcal{S}_G=\mathcal{Z}_G(\mathcal{Z}_G^\mathsf{T} \mathcal{Z}_G + \mathcal{D}_\lambda)^{-1}\mathcal{Z}_G^\mathsf{T}. \end{equation} We can again use SMW to show that $I_N-\mathcal{S}_G=\mathcal{W}$ and hence the solution $\hat\beta$ equals $\hat\beta_{\mathrm{GLS}}$ in \eqref{eq:betahatw}. But in applying $\mathcal{S}_G$ we do not enjoy the computational simplifications that occurred in the one-factor case, because \begin{equation*} \mathcal{Z}_G^\mathsf{T}\mathcal{Z}_G= \left( \begin{array}{cc} \mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_B\\[0.25ex] \mathcal{Z}_B^\mathsf{T}\mathcal{Z}_A&\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B \end{array} \right) =\begin{pmatrix} \mathrm{diag}(N_{i\sumdot}) & Z\\ Z^\mathsf{T} & \mathrm{diag}(N_{\sumdot j}) \end{pmatrix}, \end{equation*} where $Z\in\{0,1\}^{R\times C}$ is the observation matrix, which has no special structure. Therefore we need to invert an $(R+C)\times (R+C)$ matrix to apply $\mathcal{S}_G$ and hence to solve \eqref{eq:gfactor}, at a cost of at least $O(N^{3/2})$ (see Section~\ref{sec:missing}).

Rather than group $\mathcal{Z}_A$ and $\mathcal{Z}_B$, we keep them separate, and develop an algorithm to apply the operator $\mathcal{S}_G$ efficiently. Consider a generic response vector $\mathcal{R}$ (such as $\mathcal{Y}$ or a column of $\mathcal{X}$) and the optimization problem \begin{align}\label{eq:minab} \min_{\boldsymbol{a},\boldsymbol{b}}\Vert \mathcal{R}-\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 +\lambda_A\|\boldsymbol{a}\|^2+\lambda_B\|\boldsymbol{b}\|^2. \end{align} Using $\mathcal{S}_G$ defined at~\eqref{eq:defcsg} in terms of the indicator variables $\mathcal{Z}_G\in\{0,1\}^{N\times (R+C)}$, it is clear that the fitted values are given by $\widehat\mathcal{R}=\mathcal{S}_G\mathcal{R}$. Solving (\ref{eq:minab}) would result in two blocks of estimating equations similar to equations \eqref{eq:normbeta} and \eqref{eq:normbsa}.
These can be written \begin{align}\label{eq:backfit} \begin{split} \mathcal{Z}_A\hat\boldsymbol{a} & = \mathcal{S}_A(\mathcal{R}-\mathcal{Z}_B\hat\boldsymbol{b}),\quad\text{and}\\ \mathcal{Z}_B\hat\boldsymbol{b} & = \mathcal{S}_B(\mathcal{R}-\mathcal{Z}_A\hat\boldsymbol{a}), \end{split} \end{align} where $\mathcal{S}_A=\mathcal{Z}_A(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A + \lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$ is again the ridge regression smoothing matrix for row effects and similarly $\mathcal{S}_B=\mathcal{Z}_B(\mathcal{Z}_B^\mathsf{T}\mathcal{Z}_B + \lambda_BI_C)^{-1}\mathcal{Z}_B^\mathsf{T}$ is the smoothing matrix for column effects. We solve these equations iteratively by block coordinate descent, also known as backfitting. The iterations converge to the solution of~\eqref{eq:minab} \citep{buja:hast:tibs:1989, hast:tibs:1990}. It is evident that $\mathcal{S}_A,\mathcal{S}_B\in\mathbb{R}^{N\times N}$ are both symmetric matrices. It follows that the limiting smoother $\mathcal{S}_G$ formed by combining them is also symmetric. See \citet[page 120]{hast:tibs:1990}. We will need this result later for an important computational shortcut.

Here the simplifications we enjoyed in the one-factor case once again apply. Each step applies its operator to a vector (the terms in parentheses on the right hand side of (\ref{eq:backfit})). For both $\mathcal{S}_A$ and $\mathcal{S}_B$ these are simply the shrunken-mean operations described for the one-factor case, separately for factors $A$ and $B$ each time. As before, we do not need to actually construct $\mathcal{Z}_B$, but simply use a factor $f_B$ that records the level of factor $B$ for each of the $N$ observations. The above description holds for a generic response $\mathcal{R}$; we apply that algorithm (in parallel) to $\mathcal{Y}$ and each column of $\mathcal{X}$ to obtain the quantities $\mathcal{S}_G\mathcal{X}$ and $\mathcal{S}_G\mathcal{Y}$ that we need to compute $H_{\mathrm{GLS}}\mathcal{Y}$ in \eqref{eq:gfactor}. Now solving (\ref{eq:gfactor}) is $O(Np^2)$ plus a negligible $O(p^3)$ cost. These computations deliver $\hat\beta_{\mathrm{GLS}}$; if the BLUP estimates $\hat\boldsymbol{a}$ and $\hat{\boldsymbol{b}}$ are also required, the same algorithm can be applied to the response $\mathcal{Y}-\mathcal{X}\hat\beta_{\mathrm{GLS}}$, retaining $\hat\boldsymbol{a}$ and $\hat{\boldsymbol{b}}$ from the final iteration. We can also write \begin{equation}\label{eq:covbhat} \cov(\hat\beta_{\mathrm{GLS}})=\sigma^2_E(\mathcal{X}^\mathsf{T}(I_N-\mathcal{S}_G)\mathcal{X})^{-1}. \end{equation} It is also clear that we can trivially extend this approach to accommodate any number of factors.

\subsection{Centered operators} \label{sec:centered-operators}

The matrices $\mathcal{Z}_A$ and $\mathcal{Z}_B$ both have all row sums equal to one, since they are factor indicator matrices (``one-hot encoders''). This creates a nontrivial intersection between their column spaces and that of $\mathcal{X}$ (we always include an intercept), which can cause backfitting to converge slowly. In this section we show how to counter this intersection of column spaces to speed convergence. We work with the two-factor model \begin{align}\label{eq:equivmina1} \min_{\beta,\boldsymbol{a},\boldsymbol{b}} \Vert \mathcal{Y} - \mathcal{X}\beta -\mathcal{Z}_A\boldsymbol{a}-\mathcal{Z}_B\boldsymbol{b}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2+\lambda_B\Vert\boldsymbol{b}\Vert^2.
\end{align}

\begin{lemma} If $\mathcal{X}$ in model~\eqref{eq:equivmina1} includes a column of ones (intercept), and $\lambda_A>0$ and $\lambda_B>0$, then the solutions for $\boldsymbol{a}$ and $\boldsymbol{b}$ satisfy $\sum_{i=1}^R a_i=0$ and $\sum_{j=1}^C b_j=0$. \end{lemma}

\begin{proof} It suffices to show this for one factor and with $\mathcal{X}=\mathbf{1}$. The objective is now \begin{align}\label{eq:equivsimp} \min_{\beta,\boldsymbol{a}} \Vert \mathcal{Y} - \mathbf{1}\beta -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2. \end{align} Notice that for any candidate solution $(\beta,\{a_i\}_1^R)$, the alternative solution $(\beta+c,\{a_i-c\}_1^R)$ leaves the loss part of \eqref{eq:equivsimp} unchanged, since the row sums of $\mathcal{Z}_A$ are all one. Hence if $\lambda_A>0$, we would always improve the objective by picking $c$ to minimize the penalty term $\sum_{i=1}^R(a_i-c)^2$, namely $c=(1/R)\sum_{i=1}^Ra_i$. At a minimum this optimal $c$ must equal zero, and so $\sum_{i=1}^Ra_i=0$. \end{proof}

It is natural then to solve for $\boldsymbol{a}$ and $\boldsymbol{b}$ with these constraints enforced, instead of waiting for them to simply emerge in the process of iteration.

\begin{theorem}\label{thm:smartcenter} Consider the generic optimization problem \begin{align}\label{eq:equivsimp2} \min_{\boldsymbol{a}} \Vert \mathcal{R} -\mathcal{Z}_A\boldsymbol{a}\Vert^2 + \lambda_A \Vert\boldsymbol{a}\Vert^2\quad \mbox{subject to } \sum_{i=1}^Ra_i=0. \end{align} Define the partial sum vector $\mathcal{R}^+ = \mathcal{Z}_A^\mathsf{T}\mathcal{R}$ with components $\mathcal{R}^+_{i} = \sum_jZ_{ij}\mathcal{R}_{ij}$, and let $$w_i=\frac{(N_{i\sumdot}+\lambda_A)^{-1}}{\sum_{r}(N_{r\sumdot}+\lambda_A)^{-1}}.$$ Then the solution $\hat \boldsymbol{a}$ is given by \begin{align}\label{eq:ahatsoln} \hat a_i=\frac{\mathcal{R}^+_{i}-\sum_{r}w_r\mathcal{R}^+_{r}}{N_{i\sumdot}+\lambda_A}, \quad i=1,\ldots,R. \end{align} Moreover, the fit is given by $$\mathcal{Z}_A\hat\boldsymbol{a}=\tilde\mathcal{S}_A\mathcal{R},$$ where $\tilde \mathcal{S}_A$ is a symmetric operator. \end{theorem}

The computations are a simple modification of the non-centered case.

\begin{proof} Let $M$ be an $R\times R$ orthogonal matrix with first column $\mathbf{1}/\sqrt{R}$. Then $\mathcal{Z}_A\boldsymbol{a}=\mathcal{Z}_AMM^\mathsf{T}\boldsymbol{a}=\tilde \mathcal{G}\tilde\boldsymbol{\gamma}$ for $\tilde\mathcal{G}=\mathcal{Z}_AM$ and $\tilde\boldsymbol{\gamma}=M^\mathsf{T}\boldsymbol{a}$. Reparametrizing in this way leads to the equivalent problem \begin{align}\label{eq:equivsimp2b} \min_{\tilde\boldsymbol{\gamma}} \Vert \mathcal{R} -\tilde\mathcal{G}\tilde\boldsymbol{\gamma}\Vert^2 + \lambda_A \Vert\tilde\boldsymbol{\gamma}\Vert^2,\quad \mbox{subject to } \tilde\gamma_1=0. \end{align} To solve (\ref{eq:equivsimp2b}), we simply drop the first column of $\tilde \mathcal{G}$. Let $\mathcal{G}=\mathcal{Z}_AQ$, where $Q$ is the matrix $M$ omitting the first column, and let $\boldsymbol{\gamma}$ be the corresponding subvector of $\tilde\boldsymbol{\gamma}$ having $R-1$ components. We now solve \begin{align}\label{eq:equivsimp3} \min_{\boldsymbol{\gamma}} \Vert \mathcal{R} -\mathcal{G}\boldsymbol{\gamma}\Vert^2 + \lambda_A \Vert\boldsymbol{\gamma}\Vert^2 \end{align} with no constraints, and the solution is $\hat\boldsymbol{\gamma}=(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}$.
The fit is given by $\mathcal{G}\hat\boldsymbol{\gamma}=\mathcal{G}(\mathcal{G}^\mathsf{T}\mathcal{G}+\lambda_A I_{R-1})^{-1}\mathcal{G}^\mathsf{T}\mathcal{R}=\tilde \mathcal{S}_A\mathcal{R}$, and $\tilde \mathcal{S}_A$ is clearly a symmetric operator. To obtain the simplified expression for $\hat\boldsymbol{a}$, we write \begin{align} \mathcal{G}\hat\boldsymbol{\gamma}&=\mathcal{Z}_AQ(Q^\mathsf{T}\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T} \mathcal{Z}_A^\mathsf{T}\mathcal{R}\nonumber\\ &=\mathcal{Z}_AQ(Q^\mathsf{T} D Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T} \mathcal{R}^+\label{eq:tosimplify}\\ &=\mathcal{Z}_A\hat\boldsymbol{a},\nonumber \end{align} with $D=\mathrm{diag}(N_{i\sumdot})$. We write $H=Q(Q^\mathsf{T} D Q+\lambda_A I_{R-1})^{-1}Q^\mathsf{T}$ and $\tilde Q=(D+\lambda_A I_R)^{\frac12}Q$, and let \begin{align} \tilde H&= (D+\lambda_A I_R)^{\frac12} H (D+\lambda_A I_R)^{\frac12} = \tilde Q(\tilde Q^\mathsf{T}\tilde Q)^{-1}\tilde Q^\mathsf{T}.\label{eq:Qproj} \end{align} Now (\ref{eq:Qproj}) is a projection matrix in $\mathbb{R}^R$ onto an $(R-1)$-dimensional subspace. Let $\tilde q = (D+\lambda_A I_R)^{-\frac12}\mathbf{1}.$ Then $\tilde q^\mathsf{T} \tilde Q={\boldsymbol{0}}$, and so $$\tilde H=I_R-\frac{\tilde q\tilde q^\mathsf{T}}{\Vert \tilde q\Vert^2}.$$ Unraveling this expression we get $$ H=(D+\lambda_AI_R)^{-1} -(D+\lambda_AI_R)^{-1}\frac{\mathbf{1}\mathbf{1}^\mathsf{T}}{\mathbf{1}^\mathsf{T}(D+\lambda_AI_R)^{-1}\mathbf{1}}(D+\lambda_AI_R)^{-1}.$$ With $\hat\boldsymbol{a}=H\mathcal{R}^+$ in (\ref{eq:tosimplify}), this gives the expressions for each $\hat a_i$ in~\eqref{eq:ahatsoln}. Finally, $\tilde \mathcal{S}_A = \mathcal{Z}_A H\mathcal{Z}_A^\mathsf{T}$ is symmetric. \end{proof}

\subsection{Covariance matrix for $\hat\beta_{\mathrm{GLS}}$ with centered operators} \label{sec:covar-matr-hatb}

In Section~\ref{sec:two-factors} we saw in (\ref{eq:covbhat}) that we get a simple expression for $\cov(\hat\beta_{\mathrm{GLS}})$. This simplicity relies on the fact that $I_N-\mathcal{S}_G=\mathcal{W}=\sigma^2_E\mathcal{V}^{-1}$, and the usual cancellation occurs when we use the sandwich formula to compute this covariance. When we backfit with our centered smoothers we get a modified residual operator $I_N-\widetilde \mathcal{S}_G$ such that the analog of (\ref{eq:gfactor}) still gives us the required coefficient estimate: \begin{equation} \label{eq:gfactorc} \hat\beta_{\mathrm{GLS}} = (\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{Y}. \end{equation} However, $I_N-\widetilde\mathcal{S}_G\neq \sigma^2_E\mathcal{V}^{-1}$, and so now we need to resort to the sandwich formula $ \cov(\hat\beta_{\mathrm{GLS}})=H_\mathrm{GLS} \mathcal{V} H_\mathrm{GLS}^\mathsf{T}$, with $H_\mathrm{GLS}$ defined as in \eqref{eq:gfactor} but with $\widetilde\mathcal{S}_G$ in place of $\mathcal{S}_G$, as in \eqref{eq:gfactorc}. Expanding this we find that $\cov(\hat\beta_{\mathrm{GLS}})$ equals \begin{align*} (\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G) \cdot \mathcal{V}\cdot (I_N-\widetilde\mathcal{S}_G)\mathcal{X}(\mathcal{X}^\mathsf{T}(I_N-\widetilde\mathcal{S}_G)\mathcal{X})^{-1}. \end{align*} While this expression might appear daunting, the computations are simple. Note first that while $\hat\beta_{\mathrm{GLS}}$ can be computed via $\tilde\mathcal{S}_G\mathcal{X}$ and $\tilde\mathcal{S}_G\mathcal{Y}$, this expression for $\cov(\hat\beta_{\mathrm{GLS}})$ also involves $\mathcal{X}^\mathsf{T} \tilde\mathcal{S}_G$.
When we use the centered operator from Theorem~\ref{thm:smartcenter} we get a symmetric matrix $\tilde \mathcal{S}_G$. Let $\widetilde \mathcal{X}=(I_N-\widetilde\mathcal{S}_G)\mathcal{X}$, the residual matrix after backfitting each column of $\mathcal{X}$ using these centered operators. Then because $\widetilde\mathcal{S}_G$ is symmetric, we have \begin{align} \hat\beta_{\mathrm{GLS}}&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\mathcal{Y},\quad\text{and} \notag\\ \cov(\hat\beta_{\mathrm{GLS}})&=(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}\widetilde\mathcal{X}^\mathsf{T}\cdot\mathcal{V}\cdot\widetilde\mathcal{X}(\mathcal{X}^\mathsf{T}\widetilde\mathcal{X})^{-1}.\label{eq:covbhatgls} \end{align} Since $\mathcal{V}=\sigma^2_E\left(\mathcal{Z}_A\mathcal{Z}_A^\mathsf{T}/\lambda_A+\mathcal{Z}_B\mathcal{Z}_B^\mathsf{T}/\lambda_B +I_N\right)$ (two low-rank matrices plus the identity), we can compute $\mathcal{V}\cdot \widetilde\mathcal{X}$ very efficiently, and hence also the covariance matrix in~\eqref{eq:covbhatgls}. The entire algorithm is summarized in Section~\ref{sec:wholeshebang}.

\section{Convergence of the matrix norm}\label{sec:normconvergence}

In this section we prove a bound on the norm of the matrix that implements backfitting for our random effects $\boldsymbol{a}$ and $\boldsymbol{b}$ and show how this controls the number of iterations required. In our algorithm, backfitting is applied to $\mathcal{Y}$ as well as to each non-intercept column of $\mathcal{X}$, so we do not need to consider the updates for $\mathcal{X}\hat\beta$. It is useful to take account of intercept adjustments in backfitting, via the centerings described in Section~\ref{sec:backfitting}, because the space spanned by the columns for $a_1,\dots,a_R$ intersects the space spanned by the columns for $b_1,\dots,b_C$: both include the intercept column of ones.

In backfitting we alternate between adjusting $\boldsymbol{a}$ given $\boldsymbol{b}$ and $\boldsymbol{b}$ given $\boldsymbol{a}$. At any iteration, the new $\boldsymbol{a}$ is an affine function of the previous $\boldsymbol{b}$ and then the new $\boldsymbol{b}$ is an affine function of the new $\boldsymbol{a}$. This makes the new $\boldsymbol{b}$ an affine function of the previous $\boldsymbol{b}$. We will study that affine function to find conditions under which the updates converge. If the $\boldsymbol{b}$ updates converge, then so must the $\boldsymbol{a}$ updates. Because the updates are affine they can be written in the form $$ \boldsymbol{b} \gets M\boldsymbol{b} + \eta $$ for $M\in\mathbb{R}^{C\times C}$ and $\eta\in\mathbb{R}^C$. We iterate this update and it is convenient to start with $\boldsymbol{b} = \boldsymbol{0}$. We already know from \cite{buja:hast:tibs:1989} that this backfitting will converge. However, we want more. We want to avoid having the number of iterations required grow with $N$. We can write the solution $\boldsymbol{b}$ as $$ \boldsymbol{b} = \eta +\sum_{k=1}^\infty M^k\eta, $$ and in computations we truncate this sum after $K$ steps, producing an error $\sum_{k>K}M^k\eta$. We want $\sup_{\eta\ne0}\Vert \sum_{k>K}M^k\eta\Vert/\Vert\eta\Vert<\epsilon$ to hold with probability tending to one as the sample size increases, for any $\epsilon>0$, given a sufficiently large $K$. For this it suffices to have the spectral radius $\lambda_{\max}(M)<1-\delta$ hold with probability tending to one for some $\delta>0$.
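To make the role of the norm concrete, here is a toy R sketch (ours; in our algorithm the matrix $M$ is never formed explicitly) that iterates the affine update for a synthetic $M$ with $\Vert M\Vert_1<1$ and checks that it reaches the fixed point of $(I-M)\boldsymbol{b}=\eta$ in a few iterations.
\begin{verbatim}
# Toy illustration: b <- M b + eta converges geometrically
# when a norm of M is below one; M and eta are synthetic here.
set.seed(1)
Cdim <- 6
M   <- matrix(rnorm(Cdim^2, sd = 0.05), Cdim, Cdim)
eta <- rnorm(Cdim)
max(colSums(abs(M)))               # the L1 norm ||M||_1, below 1 here
b <- rep(0, Cdim)
for (k in 1:100) {
  b_old <- b
  b <- drop(M %*% b) + eta
  if (sum(abs(b - b_old)) < 1e-12) break
}
max(abs(b - solve(diag(Cdim) - M, eta)))   # agrees with the fixed point
\end{verbatim}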
Now for any $1\le p\le\infty$ we have $$ \lambda_{\max}(M)\le \Vert M\Vert_{p} \equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}} \frac{\Vert M\boldsymbol{x}\Vert_p}{\Vert \boldsymbol{x}\Vert_p}. $$ The explicit formula $$ \Vert M\Vert_{1} \equiv \sup_{\boldsymbol{x}\in \mathbb{R}^C\setminus\{\boldsymbol{0}\}} \frac{\Vert M\boldsymbol{x}\Vert_1}{\Vert \boldsymbol{x}\Vert_1} = \max_{1\le s\le C}\sum_{j=1}^C | M_{js}| $$ makes the $L_1$ matrix norm very tractable theoretically, and so that is the one we study. We look at this and some other measures numerically in Section~\ref{sec:empiricalnorms}.

\subsection{Updates}

Recall that $Z\in\{0,1\}^{R\times C}$ describes the pattern of observations. In a model with no intercept, alternately taking shrunken means of the current residuals as in \eqref{eq:backfit} would yield these updates \begin{align*} a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A}\quad\text{and}\quad b_j \gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}. \end{align*} The update from the old $\boldsymbol{b}$ to the new $\boldsymbol{a}$ and then to the new $\boldsymbol{b}$ takes the form $\boldsymbol{b}\gets M\boldsymbol{b}+\eta$ for $M=M^{(0)}$ where $$ M^{(0)}_{js} = \frac1{N_{\sumdot j}+\lambda_B}\sum_i \frac{Z_{is}Z_{ij}}{N_{i\sumdot}+\lambda_A}.$$ This update $M^{(0)}$ alternates shrinkage estimates for $\boldsymbol{a}$ and $\boldsymbol{b}$ but does no centering. We do not exhibit $\eta$ because it does not affect the convergence speed.

In the presence of an intercept, we know that $\sum_ia_i=0$ should hold at the solution, and we can impose this simply and very directly by centering the $a_i$, taking \begin{align*} a_i &\gets \frac{\sum_s Z_{is}(Y_{is}-b_s)}{N_{i\sumdot}+\lambda_A} -\frac1R\sum_{r=1}^R\frac{\sum_s Z_{rs}(Y_{rs}-b_s)}{N_{r\sumdot}+\lambda_A}, \quad\text{and}\\ b_j &\gets \frac{\sum_i Z_{ij}(Y_{ij}-a_i)}{N_{\sumdot j}+\lambda_B}. \end{align*} The intercept estimate will then be $\hat\beta_0=(1/C)\sum_jb_j$, which we can subtract from the $b_j$ upon convergence. This iteration has the update matrix $M^{(1)}$ with \begin{align}\label{eq:monejs} M^{(1)}_{js} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rs}(Z_{rj}-N_{\sumdot j}/R)}{N_{r\sumdot}+\lambda_A} \end{align} after replacing a sum over $i$ by an equivalent one over $r$.

In practice, we prefer to use the weighted centering from Section~\ref{sec:centered-operators} to center the $a_i$, because it provides a symmetric smoother $\tilde\mathcal{S}_G$ that supports computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$. While it is more complicated to analyze, it is easily computable and it satisfies the optimality condition in Theorem~\ref{thm:smartcenter}. The algorithm is for a generic response $\mathcal{R}\in\mathbb{R}^N$ such as $\mathcal{Y}$ or a column of $\mathcal{X}$. Let us illustrate it for the case $\mathcal{R}=\mathcal{Y}$. We begin with the vector of $N$ values $Y_{ij}-b_{j}$, so that $Y^+_i = \sum_sZ_{is}(Y_{is}-b_s).$ Then $w_i = (N_{i\sumdot}+\lambda_A)^{-1}/\sum_r(N_{r\sumdot}+\lambda_A)^{-1}$ and the updated $a_r$ is \begin{align*} \frac{Y^+_r-\sum_iw_i Y^+_i}{N_{r\sumdot}+\lambda_A} &= \frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i \sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A}. \end{align*} Using shrunken averages of $Y_{ij}-a_i$, the new $b_{j}$ are \begin{align*} b_{j} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_rZ_{rj} \biggl(Y_{rj}- \frac{\sum_sZ_{rs}(Y_{rs}-b_s)-\sum_iw_i \sum_sZ_{is}(Y_{is}-b_s)}{N_{r\sumdot}+\lambda_A} \biggr).
\end{align*} Now $\boldsymbol{b} \gets M\boldsymbol{b}+\eta$ for $M=M^{(2)}$, where \begin{align}\label{eq:mtwojs} M^{(2)}_{js} &=\frac1{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} \biggl(Z_{rs} - \frac{\sum_{i}\frac{Z_{is}}{N_{i\sumdot}+\lambda_{A}}}{\sum_i{\frac{1}{N_{i\sumdot}+\lambda_{A}}}}\biggr). \end{align} Our preferred algorithm applies the optimal update from Theorem~\ref{thm:smartcenter} to both the $\boldsymbol{a}$ and $\boldsymbol{b}$ updates. With that choice we do not need to decide beforehand which random effects to center and which to leave uncentered to contain the intercept. We call the corresponding matrix $M^{(3)}$. Our theory below analyzes $\Vert M^{(1)}\Vert_1$ and $\Vert M^{(2)}\Vert_1$, which have simpler expressions than $\Vert M^{(3)}\Vert_1$.

Update $M^{(0)}$ uses symmetric smoothers for $A$ and $B$. Both are shrunken averages. The naive centering update $M^{(1)}$ uses a non-symmetric smoother $\mathcal{Z}_A(I_R-\mathbf{1}_R\mathbf{1}_R^\mathsf{T}/R)(\mathcal{Z}_A^\mathsf{T}\mathcal{Z}_A+\lambda_AI_R)^{-1}\mathcal{Z}_A^\mathsf{T}$ on the $a_i$ with a symmetric smoother on the $b_{j}$, and hence it does not generally produce the symmetric smoother needed for efficient computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$. The update $M^{(2)}$ uses two symmetric smoothers, one optimal and one a simple shrunken mean. The update $M^{(3)}$ takes the optimal smoother for both $A$ and $B$. Thus both $M^{(2)}$ and $M^{(3)}$ support efficient computation of $\widehat\cov(\hat\beta_{\mathrm{GLS}})$. A subtle point is that these symmetric smoothers are matrices in $\mathbb{R}^{N\times N}$ while the matrices $M^{(k)}\in\mathbb{R}^{C\times C}$ are not symmetric.

\subsection{Model for $Z_{ij}$}\label{sec:modelz}

We will state conditions on $Z_{ij}$ under which both $\Vert M^{(1)}\Vert_1$ and $\Vert M^{(2)}\Vert_1$ are bounded below $1$ with probability tending to one, as the problem size grows. We need the following exponential inequalities.

\begin{lemma}\label{lem:hoeff} If $X\sim\mathrm{Bin}(n,p)$, then for any $t\ge0$, \begin{align*} \Pr( X\ge np+t ) &\le \exp( -2t^2/n ),\quad\text{and}\\ \Pr( X\le np-t ) &\le \exp( -2t^2/n ). \end{align*} \end{lemma} \begin{proof} This follows from Hoeffding's theorem. \end{proof}

\begin{lemma}\label{lem:binounionbound} Let $X_i\sim\mathrm{Bin}(n,p)$ for $i=1,\dots,m$, not necessarily independent. Then for any $t\ge0$, \begin{align*} \Pr\Bigl( \max_{1\le i\le m} X_{i} \ge np+t \Bigr) &\le m\exp( -2t^2/n ) ,\quad\text{and}\\ \Pr\Bigl( \min_{1\le i\le m} X_{i} \le np-t \Bigr) &\le m\exp( -2t^2/n ). \end{align*} \end{lemma} \begin{proof} This is from the union bound applied to Lemma~\ref{lem:hoeff}. \end{proof}

Here is our sampling model. We index the size of our problem by $S\to\infty$. The sample size $N$ will satisfy $\mathbb{E}(N)\ge S$. The numbers of rows and columns in the data set are $$R = S^\rho\quad\text{and}\quad C=S^\kappa$$ respectively, for positive numbers $\rho$ and $\kappa$. Because our application domain has $N\ll RC$, we assume that $\rho+\kappa>1$. We ignore that $R$ and $C$ above are not necessarily integers. In our model, $Z_{ij}\sim\mathrm{Bern}(p_{ij})$ independently with \begin{align}\label{eq:defab} \frac{S}{RC} \le p_{ij} \le \Upsilon\frac{S}{RC} \quad\text{for some fixed}\quad 1\le\Upsilon<\infty. \end{align} That is, $1\le p_{ij} S^{\rho+\kappa-1}\le\Upsilon$. Letting $p_{ij}$ depend on $i$ and $j$ allows the probability model to capture stylistic preferences affecting the missingness pattern in the ratings data.
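The following R sketch (ours; variable names are illustrative) draws one $Z$ from this model, using the uniform choice of $p_{ij}$ described in Section~\ref{sec:empiricalnorms}.
\begin{verbatim}
# Sketch of the sampling model: Z_ij ~ Bern(p_ij) with
# S/(RC) <= p_ij <= Upsilon S/(RC), via p_ij = U_ij S^(1-rho-kappa).
set.seed(2)
S <- 10000; rho <- 4/5; kappa <- 2/5
Upsilon <- sqrt((1 + sqrt(5)) / 2)    # largest Upsilon^2 - Upsilon^(-2) <= 1
R <- ceiling(S^rho); C <- ceiling(S^kappa)
p <- matrix(runif(R * C, 1, Upsilon) * S^(1 - rho - kappa), R, C)
p <- pmin(p, 1)                       # guard against p_ij > 1 at small S
Z <- matrix(rbinom(R * C, 1, p), R, C)
sum(Z)                                # realized N; E(N) >= S by construction
range(rowSums(Z)); range(colSums(Z))  # concentration of N_i. and N_.j
\end{verbatim}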
\subsection{Bounds for row and column size}

Letting $X \preccurlyeq Y$ mean that $X$ is stochastically smaller than $Y$, we know that \begin{align*} \mathrm{Bin}(R, S^{1-\rho-\kappa}) &\preccurlyeq N_{\sumdot j} \preccurlyeq \mathrm{Bin}( R, \Upsilon S^{1-\rho-\kappa}),\quad\text{and}\\ \mathrm{Bin}(C,S^{1-\rho-\kappa}) &\preccurlyeq N_{i\sumdot} \preccurlyeq \mathrm{Bin}( C, \Upsilon S^{1-\rho-\kappa}). \end{align*} By Lemma \ref{lem:hoeff}, if $t\ge0$, then \begin{align*} \Pr( N_{i\sumdot} \ge S^{1-\rho}(\Upsilon+t)) &\le \Pr\bigl( \mathrm{Bin}(C,\Upsilon S^{1-\rho-\kappa}) \ge S^{1-\rho}(\Upsilon+t)\bigr)\\ &\le \exp(-2(S^{1-\rho}t)^2/C)\\ &= \exp(-2S^{2-\kappa-2\rho}t^2). \end{align*} Therefore if $2\rho+\kappa<2$, we find using Lemma~\ref{lem:binounionbound} that \begin{align*} &\Pr\bigl( \max_iN_{i\sumdot} \ge S^{1-\rho}(\Upsilon+\epsilon)\bigr) \le S^\rho\exp(-2S^{2-\kappa-2\rho}\epsilon^2)\to0 \end{align*} for any $\epsilon>0$. Combining this with an analogous lower bound, \begin{align}\label{eq:boundnid} \lim_{S\to\infty}\Pr\bigl( (1-\epsilon) S^{1-\rho}\le \min_i N_{i\sumdot} \le \max_i N_{i\sumdot} \le (\Upsilon+\epsilon) S^{1-\rho}\bigr)=1. \end{align} Likewise, if $\rho+2\kappa<2$, then for any $\epsilon>0$, \begin{align}\label{eq:boundndj} \lim_{S\to\infty}\Pr\bigl( (1-\epsilon)S^{1-\kappa}\le \min_j N_{\sumdot j} \le \max_j N_{\sumdot j} \le (\Upsilon+\epsilon) S^{1-\kappa}\bigr)=1. \end{align}

\subsection{Interval arithmetic}

We will replace $N_{i\sumdot}$ and other quantities by intervals that contain them with probability tending to one, and then use interval arithmetic in order to streamline some of the steps in our proofs. For instance, $$N_{i\sumdot}\in [(1-\epsilon)S^{1-\rho},(\Upsilon+\epsilon)S^{1-\rho}] = [1-\epsilon,\Upsilon+\epsilon]\times S^{1-\rho} = [1-\epsilon,\Upsilon+\epsilon]\times \frac{S}{R}$$ holds simultaneously for all $1\le i\le R$ with probability tending to one as $S\to\infty$. In interval arithmetic, $$[A,B]+[a,b]=[a+A,b+B]\quad\text{and}\quad [A,B]-[a,b]=[A-b,B-a].$$ If $0<a\le b<\infty$ and $0<A\le B<\infty$, then $$[A,B]\times[a,b] = [Aa,Bb]\quad\text{and}\quad [A,B]/[a,b] = [A/b,B/a].$$ Similarly, if $a<0<b$ and $X\in[a,b]$, then $|X|\in[0,\max(|a|,|b|)]$. Our arithmetic operations on intervals yield new intervals guaranteed to contain the results obtained using any members of the original intervals. We do not necessarily use the smallest such interval.

\subsection{Co-observation}

Recall the co-observation matrices $Z^\mathsf{T} Z\in\{0,1,2,\dots\}^{C\times C}$ and $ZZ^\mathsf{T}\in\{0,1,2,\dots\}^{R\times R}$. If $s\ne j$, then $$ \mathrm{Bin}\Bigl( R,\frac{S^2}{R^2C^2}\Bigr) \preccurlyeq (Z^\mathsf{T} Z)_{sj}\preccurlyeq \mathrm{Bin}\Bigl( R,\frac{\Upsilon^2S^2}{R^2C^2}\Bigr). $$ That is, $\mathrm{Bin}(S^\rho, S^{2-2\rho-2\kappa}) \preccurlyeq (Z^\mathsf{T} Z)_{sj} \preccurlyeq \mathrm{Bin}(S^\rho, \Upsilon^2S^{2-2\rho-2\kappa}). $ For $t\ge0$, \begin{align*} \Pr\Bigl( \max_s\max_{j\ne s}(Z^\mathsf{T} Z)_{sj}\ge (\Upsilon^2+t)S^{2-\rho-2\kappa}\Bigr) &\le \frac{C^2}2\exp( -(tS^{2-\rho-2\kappa})^2/R)\\ &= \frac{C^2}2\exp( -t^2 S^{4-3\rho-4\kappa}). \end{align*} If $3\rho+4\kappa<4$ then \begin{align*} &\Pr\Bigl( \max_s\max_{j\ne s} \,(Z^\mathsf{T} Z)_{sj} \ge (\Upsilon^2+\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0, \quad\text{and}\\ &\Pr\Bigl( \min_s\min_{j\ne s} \,(Z^\mathsf{T} Z)_{sj} \le (1-\epsilon)S^{2-\rho-2\kappa}\Bigr)\to0, \end{align*} for any $\epsilon>0$.
\subsection{Asymptotic bounds for $\Vert M\Vert_1$}

Here we prove upper bounds for $\Vert M^{(k)}\Vert_1$ for $k=1,2$ of equations~\eqref{eq:monejs} and~\eqref{eq:mtwojs}, respectively. The bounds depend on $\Upsilon$ and there are values of $\Upsilon>1$ for which these norms are bounded strictly below one, with probability tending to one.

\begin{theorem}\label{thm:m1norm1} Let $Z_{ij}$ follow the model from Section~\ref{sec:modelz} with $\rho,\kappa\in(0,1)$ that satisfy $\rho+\kappa>1$, $2\rho+\kappa<2$ and $3\rho+4\kappa<4$. Then for any $\epsilon>0$, \begin{align}\label{eq:claim1} & \Pr\bigl( \Vert M^{(1)} \Vert_1\le \Upsilon^2-\Upsilon^{-2}+\epsilon \bigr)\to1 ,\quad\text{and}\\ &\Pr\bigl( \Vert M^{(2)}\Vert_1\le \Upsilon^2-\Upsilon^{-2}+\epsilon\bigr)\to1 \label{eq:claim2} \end{align} as $S\to\infty$. \end{theorem}

\begin{figure}[t!] \centering \includegraphics[width=.8\hsize]{figdomain2} \caption{ \label{fig:domainofinterest} The large shaded triangle is the domain of interest $\mathcal{D}$ for Theorem~\ref{thm:m1norm1}. The smaller shaded triangle shows a region where the analogous update to $\boldsymbol{a}$ would have acceptable norm. The points marked are the ones we look at numerically, including $(0.88,0.57)$ which corresponds to the Stitch Fix data in Section~\ref{sec:stitch}. } \end{figure}

\begin{proof} Without loss of generality we assume that $\epsilon<1$. We begin with~\eqref{eq:claim2}. Let $M=M^{(2)}$. When $j\ne s$, \begin{align*} M_{js}&=\frac1{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} (Z_{rs} -\bar Z_{\sumdot s}),\quad\text{for}\\ \bar Z_{\sumdot s}&= \sum_i \frac{Z_{is}}{N_{i\sumdot}+\lambda_A} \Bigm/ {\sum_{i}\frac{1}{N_{i\sumdot}+\lambda_{A}}}. \end{align*} Although $|Z_{rs}-\bar Z_{\sumdot s}|\le1$, replacing $Z_{rs}-\bar Z_{\sumdot s}$ by one does not prove to be sharp enough for our purposes. Every $N_{r\sumdot}+\lambda_A\in S^{1-\rho} [1-\epsilon, \Upsilon+\epsilon]$ with probability tending to one, and so \begin{align*} \frac{\bar Z_{\sumdot s}}{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} &\in \frac{\bar Z_{\sumdot s}}{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{[1-\epsilon,\Upsilon+\epsilon]S^{1-\rho}}\\ &\subseteq [1-\epsilon,\Upsilon+\epsilon]^{-1}\bar Z_{\sumdot s} S^{\rho-1}. \end{align*} Similarly \begin{align*} \bar Z_{\sumdot s} &\in \frac{\sum_iZ_{is}[1-\epsilon,\Upsilon+\epsilon]^{-1}} {R[1-\epsilon,\Upsilon+\epsilon]^{-1}} \subseteq\frac{N_{\sumdot s}}{R}[1-\epsilon,\Upsilon+\epsilon][1-\epsilon,\Upsilon+\epsilon]^{-1}\\ &\subseteq S^{1-\rho-\kappa} [1-\epsilon,\Upsilon+\epsilon]^2[1-\epsilon,\Upsilon+\epsilon]^{-1} \end{align*} and so \begin{align}\label{eq:zrsbarpart} \frac{\bar Z_{\sumdot s}}{N_{\sumdot j}+\lambda_B}\sum_r \frac{Z_{rj}}{N_{r\sumdot}+\lambda_A} \in S^{-\kappa} \frac{[1-\epsilon,\Upsilon+\epsilon]^2}{[1-\epsilon,\Upsilon+\epsilon]^2} \subseteq \frac1C \Bigl[ \Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2 , \Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2 \Bigr]. \end{align} Next, using bounds on the co-observation counts, \begin{align}\label{eq:zrspart} \frac1{N_{\sumdot j}+\lambda_B}\sum_r\frac{Z_{rj}Z_{rs}}{N_{r\sumdot}+\lambda_A} \in \frac{S^{\rho+\kappa-2}(Z^\mathsf{T} Z)_{sj}}{[1-\epsilon,\Upsilon+\epsilon]^2} \subseteq \frac1C \frac{[1-\epsilon,\Upsilon^2+\epsilon]}{[1-\epsilon,\Upsilon+\epsilon]^2}.
\end{align} Combining~\eqref{eq:zrsbarpart} and~\eqref{eq:zrspart} gives \begin{align*} M_{js} \in & \frac1C \Bigl[ \frac{1-\epsilon}{(\Upsilon+\epsilon)^2}- \Bigl(\frac{\Upsilon+\epsilon}{1-\epsilon}\Bigr)^2 , \frac{\Upsilon^2+\epsilon}{1-\epsilon} -\Bigl(\frac{1-\epsilon}{\Upsilon+\epsilon}\Bigr)^2 \Bigr]. \end{align*} For any $\epsilon'>0$ we can choose $\epsilon$ small enough that $$M_{js} \in C^{-1}[\Upsilon^{-2}-\Upsilon^2-\epsilon', \Upsilon^2-\Upsilon^{-2}+{\epsilon'}] $$ and then $|M_{js}|\le (\Upsilon^2-\Upsilon^{-2}+\epsilon')/C$. Next, arguments like the preceding give $|M_{jj}|\le (1-\epsilon')^{-2}(\Upsilon+\epsilon')S^{\rho-1}\to0$. Then with probability tending to one, $$ \sum_j|M_{js}| \le\Upsilon^2-\Upsilon^{-2} +2\epsilon'. $$ This bound holds for all $s\in\{1,2,\dots,C\}$, establishing~\eqref{eq:claim2}. The proof of~\eqref{eq:claim1} is similar. The quantity $\bar Z_{\sumdot s}$ is replaced by $(1/R)\sum_iZ_{is}/(N_{i\sumdot}+\lambda_A)$. \end{proof}

It is interesting to find the largest $\Upsilon$ with $\Upsilon^2-\Upsilon^{-2}\le1$. It is $((1+5^{1/2})/2)^{1/2}\doteq 1.27$.

\section{Convergence and computation}\label{sec:empiricalnorms}

In this section we make some computations on synthetic data following the probability model from Section~\ref{sec:normconvergence}. First we study the norms of our update matrix $M^{(2)}$, which affect the number of iterations to convergence. In addition to $\Vert\cdot\Vert_1$ covered in Theorem~\ref{thm:m1norm1} we also consider $\Vert\cdot\Vert_2$, $\Vert\cdot\Vert_\infty$ and $\lambda_{\max}(\cdot)$. Then we compare the cost to compute $\hat\beta_\mathrm{GLS}$ by our backfitting method with that of lmer \citep{lme4}.

The problem size is indexed by $S$. Indices $i$ go from $1$ to $R=\lceil S^\rho\rceil$ and indices $j$ go from $1$ to $C=\lceil S^\kappa\rceil$. Reasonable parameter values have $\rho,\kappa\in(0,1)$ with $\rho+\kappa>1$. Theorem~\ref{thm:m1norm1} applies when $2\rho+\kappa<2$ and $3\rho+4\kappa<4$. Figure~\ref{fig:domainofinterest} depicts this triangular domain of interest $\mathcal{D}$. There is another triangle $\mathcal{D}'$ where a corresponding update for $\boldsymbol{a}$ would satisfy the conditions of Theorem~\ref{thm:m1norm1}. Then $\mathcal{D}\cup\mathcal{D}'$ is a five-sided non-convex polygon. Figure~\ref{fig:domainofinterest} also shows $\mathcal{D}'\setminus\mathcal{D}$ as a second triangular region. For points $(\rho,\kappa)$ near the line $\rho+\kappa=1$, the matrix $Z$ will be mostly ones unless $S$ is very large. For points $(\rho,\kappa)$ near the upper corner $(1,1)$, the matrix $Z$ will be extremely sparse, with each $N_{i\sumdot}$ and $N_{\sumdot j}$ having nearly a Poisson distribution with mean between $1$ and $\Upsilon$. The fraction of potential values that have been observed is $O(S^{1-\rho-\kappa})$.

Given $p_{ij}$, we generate our observation matrix via $Z_{ij} \stackrel{\mathrm{ind}}{\sim}\mathrm{Bern}(p_{ij})$. These probabilities are first generated via $p_{ij}= U_{ij}S^{1-\rho-\kappa}$, where $U_{ij}\stackrel{\mathrm{iid}}{\sim}\mathbb{U}[1,\Upsilon]$ and $\Upsilon$ is the largest value for which $\Upsilon^2-\Upsilon^{-2}\le1$. For small $S$ and $\rho+\kappa$ near $1$ we can get some values $p_{ij}>1$, and in that case we take $p_{ij}=1$.

The following $(\rho,\kappa)$ combinations are of interest. First, $(4/5,2/5)$ is the closest vertex of the domain of interest to the point $(1,1)$.
Second, $(2/5,4/5)$ is outside the domain of interest for the $\boldsymbol{b}$ update but within the domain for the analogous $\boldsymbol{a}$ update. Third, among points with $\rho=\kappa$, the value $(4/7,4/7)$ is the farthest one from the origin that is in the domain of interest. We also look at some points on the $45$ degree line that are outside the domain of interest, because the sufficient conditions in Theorem~\ref{thm:m1norm1} might not be necessary.

In our matrix norm computations we took $\lambda_A=\lambda_B=0$. This completely removes shrinkage and will make it harder for the algorithm to converge than would be the case for the positive $\lambda_A$ and $\lambda_B$ that hold in real data. The values of $\lambda_A$ and $\lambda_B$ appear in the expressions $N_{i\sumdot}+\lambda_A$ and $N_{\sumdot j}+\lambda_B$, where their contribution is asymptotically negligible, so conservatively setting them to zero will nonetheless be realistic for large data sets.

\begin{figure} \centering \includegraphics[width=.8\hsize]{norm_n_log_xy_with_lines_revised} \caption{\label{fig:1normvsn} Norm $\Vert M^{(2)}\Vert_1$ of the centered update matrix versus problem size $S$ for different $(\rho, \kappa)$. } \end{figure}

\noindent We sample from the model multiple times at various values of $S$ and plot $\Vert M^{(2)}\Vert_1$ versus $S$ on a logarithmic scale. Figure~\ref{fig:1normvsn} shows the results. We observe that $\Vert M^{(2)}\Vert_1$ is below $1$ and decreasing with $S$ for all the examples $(\rho,\kappa)\in\mathcal{D}$. This holds also for $(\rho,\kappa)=(0.60,0.60)\not\in\mathcal{D}$. We chose that point because it is on the convex hull of $\mathcal{D}\cup\mathcal{D}'$. The point $(\rho,\kappa)=(0.40,0.80)$ is also outside $\mathcal{D}$. Figure~\ref{fig:1normvsn} shows large values of $\Vert M^{(2)}\Vert_1$ for this case. Those values increase with $S$, but remain below $1$ in the range considered. This is a case where the update from $\boldsymbol{a}$ to $\boldsymbol{a}$ would have norm well below $1$ and decreasing with $S$, so backfitting would converge. We do not know whether $\Vert M^{(2)}\Vert_1>1$ will occur for larger $S$.

The point $(\rho,\kappa)=(0.70,0.70)$ is not in the domain $\mathcal{D}$ covered by Theorem~\ref{thm:m1norm1} and we see that $\Vert M^{(2)}\Vert_1>1$ and generally increasing with $S$, as shown in Figure~\ref{fig:7070norms}. This does not mean that backfitting must fail to converge. Here we find that $\Vert M^{(2)}\Vert_2<1$ and generally decreases as $S$ increases. This is a strong indication that the number of backfitting iterations required will not grow with $S$ for this $(\rho,\kappa)$ combination. We cannot tell whether $\Vert M^{(2)}\Vert_2$ will decrease to zero, but that is what appears to happen. We consistently find in our computations that $\lambda_{\max}(M^{(2)})\le \Vert M^{(2)}\Vert_2\le\Vert M^{(2)}\Vert_1$. The first of these inequalities must necessarily hold. For a symmetric matrix $M$ we know that $\lambda_{\max}(M)=\Vert M\Vert_2$, which is then necessarily no larger than $\Vert M\Vert_1$. Our update matrices are nearly symmetric but not perfectly so. We believe that explains why their $L_2$ norms are close to their spectral radii and also smaller than their $L_1$ norms. While the $L_2$ norms are empirically more favorable than the $L_1$ norms, they are not amenable to our theoretical treatment.
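For modest problem sizes the update matrix can also be formed explicitly. The following R sketch (ours) evaluates $M^{(2)}$ of equation~\eqref{eq:mtwojs} and the norms discussed above, given an observation matrix {\tt Z} such as the one simulated in Section~\ref{sec:modelz}; it assumes every row and column of {\tt Z} has at least one observation when $\lambda_A=\lambda_B=0$.
\begin{verbatim}
# Form M^(2) explicitly and evaluate its norms. Feasible only for
# modest R and C; the backfitting algorithm itself never forms it.
lamA <- 0; lamB <- 0                    # no shrinkage, as in the text
w <- 1 / (rowSums(Z) + lamA)            # 1 / (N_i. + lambda_A)
d <- 1 / (colSums(Z) + lamB)            # 1 / (N_.j + lambda_B)
tZw  <- drop(crossprod(Z, w))           # column sums of Z weighted by w
Zbar <- tZw / sum(w)                    # weighted column means of Z
M2 <- d * (crossprod(Z, w * Z) - outer(tZw, Zbar))
max(colSums(abs(M2)))                           # ||M^(2)||_1
svd(M2, nu = 0, nv = 0)$d[1]                    # ||M^(2)||_2
max(Mod(eigen(M2, only.values = TRUE)$values))  # spectral radius
\end{verbatim}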
\begin{figure} \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.4]{norm_vs_S_with_lines_70_L1_written_norm_logxy} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.4]{norm_vs_S_with_lines_70_L2_written_norm_logxy_main_correct} \end{subfigure} \caption{\label{fig:7070norms} The left panel shows $\Vert M^{(2)}\Vert_1$ versus $S$. The right panel shows $\Vert M^{(2)}\Vert_2$ versus $S$ with a logarithmic vertical scale. Both have $(\rho,\kappa)=(0.7,0.7)$. } \end{figure}

We believe that backfitting will have a spectral radius well below $1$ for more cases than we can as yet prove. In addition to the previous figures showing matrix norms as $S$ increases for certain special values of $(\rho,\kappa)$, we have computed contour maps of those norms over $(\rho,\kappa)\in[0,1]^2$ for $S=10{,}000$. See Figure~\ref{fig:contours}.

To compare the computation times of the algorithms we generated $Z_{ij}$ as above and also took $x_{ij}\stackrel{\mathrm{iid}}{\sim}\mathcal{N}(0,I_7)$ plus an intercept, making $p=8$ fixed effect parameters. Although backfitting can run with $\lambda_A=\lambda_B=0$, lmer cannot do so for numerical reasons. So we took $\sigma^2_A=\sigma^2_B=1$ and $\sigma^2_E=1$, corresponding to $\lambda_A=\lambda_B=1$. The cost per iteration does not depend on $Y_{ij}$ and hence not on $\beta$ either. We used $\beta=0$.

Figure~\ref{fig:comptimes} shows computation times for a single iteration when $(\rho,\kappa)=(0.52,0.52)$ and when $(\rho,\kappa)=(0.70,0.70)$. The time to do one iteration in lmer grows roughly like $N^{3/2}$ in the first case. For the second case, it appears to grow at the even faster rate of $N^{2.1}$. Solving a system of $S^\kappa\times S^\kappa$ equations would cost $S^{3\kappa} = S^{2.1} = O(N^{2.1})$, which explains the observed rate. This analysis would predict $O(N^{1.56})$ for $\rho=\kappa=0.52$, but that is only minimally different from $O(N^{3/2})$. These experiments were carried out in R on a computer with the macOS operating system, 16 GB of memory and an Intel i7 processor.

Each backfitting iteration entails solving \eqref{eq:backfit} along with the fixed effects. The cost per iteration for backfitting closely follows the $O(N)$ rate predicted by the theory. OLS takes only one iteration and it is also of $O(N)$ cost. In both of these cases $\Vert M^{(2)}\Vert_2$ is bounded away from one, so the number of backfitting iterations does not grow with $S$. For $\rho=\kappa=0.52$, backfitting took $4$ iterations to converge for the smaller values of $S$ and $3$ iterations for the larger ones. For $\rho=\kappa=0.70$, backfitting took $6$ iterations for smaller $S$ and $4$ or $5$ iterations for larger $S$. In each case our convergence criterion was a relative change of $10^{-8}$, as described in Section~\ref{sec:wholeshebang}. Further backfitting to compute the BLUPs $\hat\boldsymbol{a}$ and $\hat\boldsymbol{b}$ given $\hat\beta_{\mathrm{GLS}}$ took at most $5$ iterations for $\rho=\kappa=0.52$ and at most $10$ iterations for $\rho=\kappa=0.7$. In the second example, lme4 did not reach convergence in our time window, so we ran it for just $4$ iterations to measure its cost per iteration.
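For reference, the lmer fit that we timed takes the following form in R; the covariate names {\tt x1,\dots,x7}, the factors {\tt fA}, {\tt fB} and the data frame {\tt df} are illustrative.
\begin{verbatim}
# Two crossed random intercepts plus fixed effects, as in the model
# of the introduction, fitted by maximum likelihood with lme4.
library(lme4)
fit <- lmer(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + (1 | fA) + (1 | fB),
            data = df)
fixef(fit)    # fixed effect estimates, comparable to the GLS solution
VarCorr(fit)  # estimated variance components
\end{verbatim}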
\begin{figure}[!t] \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.28]{one_norm_reshaped.png} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[scale=.28]{infinity_norm_reshaped.png} \end{subfigure} \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[height = 5.2cm, width = 5.5cm]{two_norm_reshaped.png} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[height = 5.2cm, width = 5.44cm]{spectral_radius_reshaped.png} \end{subfigure} \caption{\label{fig:contours} Numerically computed matrix norms for $M^{(2)}$ using $S=10{,}000$. The color code varies with the subfigures. } \end{figure}

\begin{figure} \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=1\linewidth]{time_per_iter_vs_n_last_point_1_point_2716_reference_slope_at_end_52_52_review.pdf} \caption{$(\rho, \kappa) = (0.52,0.52)$} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=1\linewidth]{backfitting_lmer_time_total} \caption{$(\rho, \kappa) = (0.70,0.70)$} \end{subfigure} \caption{\label{fig:comptimes} Time for one iteration versus the number of observations $N$, at two points $(\rho,\kappa)$. The cost for lmer is roughly $O(N^{3/2})$ in panel (a) and $O(N^{2.1})$ in panel (b). The costs for OLS and backfitting are $O(N)$ in both. } \end{figure}

\section{Example: ratings from Stitch Fix}\label{sec:stitch}

We illustrate backfitting for GLS on some data from Stitch Fix. Stitch Fix sells clothing. They mail their customers a sample of items. The customers may keep and purchase any of those items that they want, while returning the others. It is valuable to predict the extent to which a customer will like an item, not just whether they will purchase it. Stitch Fix has provided us with some of their client ratings data. It was anonymized, void of personally identifying information, and as a sample it does not reflect their total numbers of clients or items at the time they provided it. It is also from 2015. While it does not describe their current business, it is a valuable data set for illustrative purposes.

The sample sizes for these data are as follows. We received $N=5{,}000{,}000$ ratings by $R=762{,}752$ customers on $C=6{,}318$ items. These values of $R$ and $C$ correspond to the point $(0.88,0.57)$ in Figure~\ref{fig:domainofinterest}. Thus $C/N\doteq 0.00126$ and $R/N\doteq 0.153$. The data are not dominated by a single row or column because $\max_iN_{i\sumdot}/N\doteq 9\times 10^{-6}$ and $\max_jN_{\sumdot j}/N\doteq 0.0143$. The data are sparse because $N/(RC)\doteq 0.001$.

\subsection{An illustrative linear model}

The response $Y_{ij}$ is a rating on a ten point scale of the satisfaction of customer $i$ with item $j$. The data come with features about the clients and items. In a business setting one would fit and compare possibly dozens of different regression models to understand the data. Our purpose here is to study large scale GLS and compare it to ordinary least squares (OLS), and so we use just one model, not necessarily one that we would have settled on. For that purpose we use the same model that was used in \cite{crelin}. It is not chosen to make OLS look as bad as possible. Instead it is potentially the first model one might look at in a data analysis.
For client $i$ and item $j$, \begin{align} Y_{ij}& = \beta_0+\beta_1\mathrm{match}_{ij}+\beta_2\mathbb{I}\{\mathrm{client\ edgy}\}_i+\beta_3\mathbb{I}\{\mathrm{item\ edgy}\}_j \notag \\ &\phe + \beta_4\mathbb{I}\{\mathrm{client\ edgy}\}_i*\mathbb{I}\{\mathrm{item\ edgy}\}_j+\beta_5\mathbb{I}\{\mathrm{client\ boho}\}_i \notag \\ &\phe + \beta_6\mathbb{I}\{\mathrm{item\ boho}\}_j+\beta_7\mathbb{I}\{\mathrm{client\ boho}\}_i*\mathbb{I}\{\mathrm{item\ boho}\}_j \notag \\ &\phe + \beta_8\mathrm{material}_{ij}+a_i+b_j+e_{ij}. \notag \end{align} Here $\mathrm{material}_{ij}$ is a categorical variable that is implemented via indicator variables for each type of material other than the baseline. Following \cite{crelin}, we chose `Polyester', the most common material, as the baseline. Some customers and some items were given the adjective `edgy' in the data set. Another adjective was `boho', short for `Bohemian'. The variable $\mathrm{match}_{ij}\in[0,1]$ is an estimate of the probability that the customer keeps the item, made before the item was sent. The match score is a prediction from a baseline model and is not representative of all algorithms used at Stitch Fix. All told, the model has $p=30$ parameters.

\subsection{Estimating the variance parameters}\label{sec:estim-vari-param}

We use the method of moments approach from \cite{crelin} to estimate $\theta^\mathsf{T}=(\sigma^2_A, \sigma^2_B, \sigma^2_E)$ in $O(N)$ computation. That is in turn based on the method that \cite{GO17} use in the intercept-only model where $Y_{ij} = \mu+a_i+b_{j}+e_{ij}$. For that model they set \begin{align*} U_{A} &= \sum_{i} \sum_{j} Z_{ij} \Bigl( Y_{ij}-\frac{1}{N_{i\sumdot}}\sum_{j^{\prime}}Z_{ij'} Y_{ij^{\prime}}\Bigr)^{2}, \\ U_{B} &= \sum_{j}\sum_{i} Z_{ij} \Bigl(Y_{ij}-\frac{1}{N_{\sumdot j}}\sum_{i^{\prime}}Z_{i'j} Y_{i^{\prime}j}\Bigr)^{2}, \quad\text{and}\\ U_{E} &= N\sum_{i j} Z_{i j} \Bigl(Y_{i j}-\frac{1}{N}\sum_{i^{\prime} j^{\prime}}Z_{i'j'} Y_{i^{\prime} j^{\prime}}\Bigr)^{2}. \end{align*} These are, respectively, sums of within-row sums of squares, sums of within-column sums of squares, and a scaled overall sum of squares. Straightforward calculations show that \begin{align*} \mathbb{E}(U_{A})&=\bigl(\sigma^2_B+\sigma^2_E\bigr)(N-R), \\ \mathbb{E}(U_{B})&=\bigl(\sigma^2_A+\sigma^2_E \bigr)(N-C), \quad\text{and}\\ \mathbb{E}(U_{E})&=\sigma^2_A\Bigl(N^{2}-\sum_{i} N_{i\sumdot}^{2}\Bigr)+\sigma^2_B\Bigl(N^{2}-\sum_{j} N_{\sumdot j}^{2}\Bigr)+\sigma^2_E(N^{2}-N). \end{align*} By matching moments, we can estimate $\theta$ by solving the $3 \times 3$ linear system $$\begin{pmatrix} 0& N-R & N-R \\[.25ex] N-C & 0 & N-C \\[.25ex] N^{2}-\sum_i N_{i\sumdot}^{2} & N^{2}-\sum_j N_{\sumdot j}^{2} & N^{2}-N \end{pmatrix} \begin{pmatrix} \sigma^2_A \\[.25ex] \sigma^2_B \\[.25ex] \sigma^2_E\end{pmatrix} =\begin{pmatrix} U_{A}\\[.25ex] U_{B} \\[.25ex] U_{E}\end{pmatrix} $$ for $\theta$. Following \cite{GO17} we note that $\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\beta = a_i+b_{j}+e_{ij}$ has the same parameter $\theta$ as $Y_{ij}$ does. We then take $\hat\beta_{\mathrm{OLS}}$, which \cite{GO17} show is consistent for $\beta$, and define $\hat\eta_{ij} =Y_{ij}-x_{ij}^\mathsf{T}\hat\beta_\mathrm{OLS}$. We then estimate $\theta$ by the above method after replacing $Y_{ij}$ by $\hat\eta_{ij}$. For the Stitch Fix data we obtained $\hat{\sigma}_{A}^{2} = 1.14$ (customers), $\hat{\sigma}^{2}_{B} = 0.11$ (items) and $\hat{\sigma}^{2}_{E} = 4.47$.
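For readers who wish to reproduce this estimate, the moment equations above take only a few lines of code. The sketch below is our own numpy rendering of the method, not the released R implementation; the function and argument names are ours. It takes the residuals in sparse triplet form and does $O(N)$ work overall.

\begin{verbatim}
import numpy as np

def mom_variances(i_idx, j_idx, eta, R, C):
    # i_idx, j_idx: 0-based customer and item labels per rating;
    # eta: OLS residuals, one entry per observed rating.
    N = len(eta)
    Ni = np.bincount(i_idx, minlength=R)              # N_{i.}
    Nj = np.bincount(j_idx, minlength=C)              # N_{.j}
    rmean = np.bincount(i_idx, weights=eta, minlength=R) / np.maximum(Ni, 1)
    cmean = np.bincount(j_idx, weights=eta, minlength=C) / np.maximum(Nj, 1)
    UA = np.sum((eta - rmean[i_idx]) ** 2)            # within-row SS
    UB = np.sum((eta - cmean[j_idx]) ** 2)            # within-column SS
    UE = N * np.sum((eta - eta.mean()) ** 2)          # scaled overall SS
    Nif, Njf = Ni.astype(float), Nj.astype(float)
    M = np.array([[0.0,   N - R, N - R],
                  [N - C, 0.0,   N - C],
                  [N**2 - np.sum(Nif**2), N**2 - np.sum(Njf**2), N**2 - N]])
    # Solve the 3x3 moment system for (sigma^2_A, sigma^2_B, sigma^2_E).
    return np.linalg.solve(M, np.array([UA, UB, UE]))
\end{verbatim}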
\subsection{Computing $\hat\beta_\mathrm{GLS}$}\label{sec:wholeshebang}

The estimated coefficients $\hat\beta_\mathrm{GLS}$ and their standard errors are presented in a table in the appendix. Open-source R code at \url{https://github.com/G28Sw/backfit_code} does these computations. Here is a concise description of the algorithm we used: \begin{compactenum}[\quad 1)] \item Compute $\hat\beta_\mathrm{OLS}$ via \eqref{eq:bhatols}. \item Get residuals $\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{OLS}}$. \item Compute $\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ by the method of moments on $\hat\eta_{ij}$. \item Compute $\widetilde{\mathcal{X}}=(I_N-\widetilde{\mathcal{S}}_G)\mathcal{X}$ using doubly centered backfitting $M^{(3)}$. \item Compute $\hat\beta_{\mathrm{GLS}}$ by~\eqref{eq:covbhatgls}. \item If we want BLUPs $\hat{\boldsymbol{a}}$ and $\hat{\boldsymbol{b}}$, backfit $\mathcal{Y} -\mathcal{X}\hat\beta_{\mathrm{GLS}}$ to get them. \item Compute $\widehat\cov(\hat\beta_{\mathrm{GLS}})$ by plugging $\hat\sigma^2_A$, $\hat\sigma^2_B$ and $\hat\sigma^2_E$ into $\mathcal{V}$ at~\eqref{eq:covbhatgls}. \end{compactenum} \smallskip

Stage $k$ of backfitting provides $(\widetilde{\mathcal{S}}_G\mathcal{X})^{(k)}$. We iterate until $$ \frac{\Vert (\widetilde{\mathcal{S}}_G\mathcal{X})^{(k+1)}-(\widetilde{\mathcal{S}}_G\mathcal{X})^{(k)}\Vert^2_F}{\Vert (\widetilde{\mathcal{S}}_G\mathcal{X})^{(k)}\Vert^2_F} < \epsilon $$ where $\Vert \cdot \Vert_F$ is the Frobenius norm (the square root of the sum of squared elements). Our numerical results use $\epsilon =10^{-8}$.

When we want $\widehat\cov(\hat\beta_{\mathrm{GLS}})$ we need to use a backfitting strategy with a symmetric smoother $\widetilde{\mathcal{S}}_G$. This holds for $M^{(0)}$, $M^{(2)}$ and $M^{(3)}$ but not $M^{(1)}$. After computing $\hat\beta_{\mathrm{GLS}}$ one can return to step 2, form new residuals $\hat\eta_{ij} =Y_{ij} -x_{ij}^\mathsf{T}\hat\beta_{\mathrm{GLS}}$ and continue through steps 3--7. We have seen small differences from doing this.

\subsection{Quantifying inefficiency and naivete of OLS}

In the introduction we mentioned two serious problems with the use of OLS on crossed random effects data. The first is that OLS is naive about correlations in the data, and this can lead it to severely underestimate the variance of $\hat\beta$. The second is that OLS is inefficient compared to GLS by the Gauss-Markov theorem. Let $\hat\beta_\mathrm{OLS}$ and $\hat\beta_\mathrm{GLS}$ be the OLS and GLS estimates of $\beta$, respectively. We can compute their corresponding variance estimates $\widehat\cov_\mathrm{OLS}(\hat\beta_\mathrm{OLS})$ and $\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{GLS})$. We can also find $\widehat\cov_\mathrm{GLS}(\hat\beta_\mathrm{OLS})$, the variance under our GLS model of the linear combination of $Y_{ij}$ values that OLS uses. This section explores them graphically.

We can quantify the naivete of OLS via the ratios $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$ for $j=1,\dots,p$. Figure~\ref{fig:OLSisnaive} plots these values. They range from $1.75$ to $345.28$ and can be interpreted as factors by which OLS naively overestimates its sample size. The largest and second largest ratios are for material indicators corresponding to `Modal' and `Tencel', respectively. These appear to be two names for the same product, with Tencel being a trademarked name for Modal fibers (made from wood).
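Given the two $p\times p$ matrices $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})$ and $\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}})$, the per-coefficient ratios in Figure~\ref{fig:OLSisnaive} and the worst-case ratio discussed next take only a few lines to compute. A minimal sketch follows; the variable names and the random stand-in matrices are ours, used only to make the snippet self-contained.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

p = 30
rng = np.random.default_rng(0)
A = rng.standard_normal((p, p)); cov_ols = A @ A.T + p * np.eye(p)
B = rng.standard_normal((p, p)); cov_gls = B @ B.T + p * np.eye(p)

# Per-coefficient naivete ratios, as in the naivete figure.
naivete = np.diag(cov_gls) / np.diag(cov_ols)

# Worst-case linear combination: the largest generalized eigenvalue
# of cov_ols^{-1} cov_gls, i.e. max over x of x'cov_gls x / x'cov_ols x.
worst = eigh(cov_gls, cov_ols, eigvals_only=True).max()
\end{verbatim}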
We can also identify the linear combination of $\hat\beta_\mathrm{OLS}$ for which OLS is most naive. We maximize the ratio $x^\mathsf{T}\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})x/x^\mathsf{T}\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}})x$ over $x\ne0$. The resulting maximal ratio is the largest eigenvalue of $$\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS}}) ^{-1} \widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS}})$$ and it is about $361$ for the Stitch Fix data.

\begin{figure} \centering \includegraphics[width=.9\hsize]{figOLSisnaive_katelyn_interaction_polyester_reference} \caption{\label{fig:OLSisnaive} OLS naivete $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{OLS}}(\hat\beta_{\mathrm{OLS},j})$ for coefficients $\beta_j$ in the Stitch Fix data. } \end{figure}

We can quantify the inefficiency of OLS via the ratios $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$ for $j=1,\dots,p$. Figure~\ref{fig:OLSisinefficient} plots these values. They range from just over $1$ to $50.6$ and can be interpreted as factors by which using OLS reduces the effective sample size. There is a clear outlier: the coefficient of the match variable is very inefficiently estimated by OLS. The second largest inefficiency factor is for the intercept term. The most inefficient linear combination of $\hat\beta$ reaches a variance ratio of $52.6$, only slightly more inefficient than the match coefficient alone.

\begin{figure} \centering \includegraphics[width=.9\hsize]{figOLSisinefficient_katelyn_interaction_polyester_reference} \caption{\label{fig:OLSisinefficient} OLS inefficiency $\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{OLS},j})/\widehat\cov_{\mathrm{GLS}}(\hat\beta_{\mathrm{GLS},j})$ for coefficients $\beta_j$ in the Stitch Fix data. } \end{figure}

The variables for which OLS is more naive tend to also be the variables for which it is most inefficient. Figure~\ref{fig:naivevsinefficient} plots these quantities against each other for the $30$ coefficients in our model.

\begin{figure}[t] \centering \includegraphics[width=.8\hsize]{fignaivevsinefficient_katelyn_interaction_polyester_reference} \caption{\label{fig:naivevsinefficient} Inefficiency vs naivete for OLS coefficients in the Stitch Fix data. } \end{figure}

\subsection{Convergence speed of backfitting}

The Stitch Fix data have row and column sample sizes that are much more uneven than our sampling model for $Z$ allows. Accordingly we cannot rely on Theorem~\ref{thm:m1norm1} to show that backfitting must converge rapidly for them. The sufficient conditions in that theorem may not be necessary, however, and we can compute our norms and the spectral radius of the update matrices for the Stitch Fix data using some sparse matrix computations. Here $Z\in\{0,1\}^{762{,}752\times 6{,}318}$, so $M^{(k)}\in\mathbb{R}^{6318\times 6318}$ for $k \in \{0,1,2,3\}$.
The results are $$ \begin{pmatrix} \Vert M^{(0)}\Vert_1 \ & \ \Vert M^{(0)}\Vert_2 \ & \ |\lambda_{\max}(M^{(0)})|\\[.25ex] \Vert M^{(1)}\Vert_1 \ & \ \Vert M^{(1)}\Vert_2 \ & \ |\lambda_{\max}(M^{(1)})|\\[.25ex] \Vert M^{(2)}\Vert_1 \ & \ \Vert M^{(2)}\Vert_2 \ & \ |\lambda_{\max}(M^{(2)})|\\[.25ex] \Vert M^{(3)}\Vert_1 \ & \ \Vert M^{(3)}\Vert_2 \ & \ |\lambda_{\max}(M^{(3)})| \end{pmatrix} =\begin{pmatrix} 31.9525 \ & \ 1.4051 \ & \ 0.64027 \\[.75ex] 11.2191 \ & \ 0.4512 \ & \ 0.33386\\[.75ex] \phz8.9178 \ & \ 0.4541 \ & \ 0.33407\\[.75ex] \phz9.2143 \ & \ 0.4546 \ & \ 0.33377\\ \end{pmatrix}. $$ All the updates have spectral radius comfortably below one. The centered updates have $L_2$ norm below one but the uncentered update does not. Their $L_2$ norms are somewhat larger than their spectral radii because those matrices are not quite symmetric. The two largest eigenvalue moduli for $M^{(0)}$ are $0.6403$ and $0.3337$, and the centered updates have spectral radii close to the second largest eigenvalue modulus of $M^{(0)}$. This is consistent with an intuitive explanation: the space spanned by a column of $N$ ones, which is common to the column spaces of $\mathcal{Z}_A$ and $\mathcal{Z}_B$, is the biggest impediment to convergence of $M^{(0)}$, and all three centering strategies essentially remove it. The best spectral radius is for $M^{(3)}$, which employs two principled centerings, although in this data set the choice of centering made little difference. Our backfitting algorithm took $8$ iterations when applied to $\mathcal{X}$ and $12$ more to compute the BLUPs. We used a convergence threshold of $10^{-8}$.

\section{Discussion}\label{sec:discussion}

We have shown that the cost of our backfitting algorithm is $O(N)$ under strict conditions that are nonetheless much more general than having $N_{i\sumdot} = N/R$ for all $i=1,\dots,R$ and $N_{\sumdot j} = N/C$ for all $j=1,\dots,C$ as in \cite{papa:robe:zane:2020}. As in their setting, the backfitting algorithm scales empirically to much more general problems than those for which rapid convergence can be proved. Our contour map of the spectral radius of the update matrix $M$ shows that this radius is well below $1$ over many more $(\rho,\kappa)$ pairs than our theorem covers. The difficulty in extending our approach to those settings is that the spectral radius is a much more complicated function of the observation matrix $Z$ than the $L_1$ norm is.

Theorem 4 of \cite{papa:robe:zane:2020} gives the rate of convergence of their collapsed Gibbs sampler for balanced data. It involves an auxiliary convergence rate $\rho_{\mathrm{aux}}$ defined as follows. Consider the Gibbs sampler on $(i,j)$ pairs where given $i$ a random $j$ is chosen with probability $Z_{ij}/N_{i\sumdot}$, and given $j$ a random $i$ is chosen with probability $Z_{ij}/N_{\sumdot j}$. That Markov chain has invariant distribution $Z_{ij}/N$ on $(i,j)$ pairs, and $\rho_{\mathrm{aux}}$ is the rate at which the chain converges. In our notation $$ \rho_{\mathrm{PRZ}} = \frac{N\sigma^2_A}{N\sigma^2_A+R\sigma^2_E}\times\frac{N\sigma^2_B}{N\sigma^2_B+C\sigma^2_E}\times\rho_{\mathrm{aux}}. $$ In sparse data $\rho_{\mathrm{PRZ}}\approx\rho_{\mathrm{aux}}$, and under our asymptotic setting $|\rho_{\mathrm{aux}}-\rho_{\mathrm{PRZ}}|\to0$. \cite{papa:robe:zane:2020} remark that $\rho_{\mathrm{aux}}$ tends to decrease as the amount of data increases. When it does, their algorithm takes $O(1)$ iterations and costs $O(N)$.
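For a given observation matrix $Z$ the rate $\rho_{\mathrm{aux}}$ can be examined numerically. The alternating $i\to j\to i$ chain is driven by the normalized matrix $\mathrm{diag}(N_{i\sumdot})^{-1/2}Z\,\mathrm{diag}(N_{\sumdot j})^{-1/2}$, whose largest singular value is $1$; its second-largest singular value governs the convergence, with a full $i\to j\to i$ sweep contracting by the square of that value. The sketch below is our own construction, not code from \cite{papa:robe:zane:2020}, and whether one reports the singular value or its square is a bookkeeping choice (one step versus one full sweep).

\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def aux_rate(Z):
    # Z: sparse R x C binary observation matrix.
    Ni = np.asarray(Z.sum(axis=1)).ravel()   # row counts N_{i.}
    Nj = np.asarray(Z.sum(axis=0)).ravel()   # column counts N_{.j}
    W = sp.diags(1.0 / np.sqrt(Ni)) @ Z @ sp.diags(1.0 / np.sqrt(Nj))
    # Two largest singular values; the top one equals 1.
    s = svds(W, k=2, return_singular_vectors=False)
    return np.sort(s)[0]                     # second-largest singular value
\end{verbatim}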
They explain that $\rho_{\mathrm{aux}}$ should decrease as the data set grows because the auxiliary process then gets greater connectivity. That connectivity increases for bounded $R$ and $C$ with increasing $N$, and from their notation, which allows multiple observations per $(i,j)$ pair, it seems that they have this sort of infill asymptotics in mind. For sparse data from electronic commerce we think that an asymptotic regime like the one we study, where $R$, $C$ and $N$ all grow, is a better description. It would be interesting to see how $\rho_{\mathrm{aux}}$ develops under such a model.

In Section 5.3, \cite{papa:robe:zane:2020} state that the convergence rate of the collapsed Gibbs sampler is $O(1)$ regardless of the asymptotic regime. That section is about a more stringent `balanced cells' condition where every $(i,j)$ combination is observed the same number of times, so it does not describe the `balanced levels' setting where $N_{i\sumdot}=N/R$ and $N_{\sumdot j}=N/C$. Indeed they provide a counterexample in which there are two disjoint communities of users and two disjoint sets of items, and each user in the first community has rated every item in the first item set (and no others) while each user in the second community has rated every item in the second item set (and no others). That configuration leads to an unbounded mixing time for collapsed Gibbs. It is also one where backfitting takes an increasing number of iterations as the sample size grows. There are interesting parallels between methods to sample a high dimensional Gaussian distribution with covariance matrix $\Sigma$ and iterative solvers for the system $\Sigma \boldsymbol{x} = \boldsymbol{b}$. See \cite{good:soka:1989} and \cite{RS97} for more on how the convergence rates for these two problems coincide.

We found that backfitting with one or both updates centered worked much better than uncentered backfitting. \cite{papa:robe:zane:2020} used a collapsed sampler that analytically integrated out the global mean of their model in each update of a block of random effects. Our approach treats $\sigma^2_A$, $\sigma^2_B$ and $\sigma^2_E$ as nuisance parameters. We plug in a consistent method-of-moments estimator of them in order to focus on the backfitting iterations. In Bayesian computations, maximum a posteriori estimators of variance components under non-informative priors can be problematic for hierarchical models \cite{gelm:2006}, and so perhaps maximum likelihood estimation of these variance components would also have been challenging.

Whether one prefers a GLS estimate or a Bayesian one depends on context and goals. We believe that there is a strong computational advantage to GLS for large data sets. The cost of one backfitting iteration is comparable to the cost to generate one more sample in the MCMC. We may well find that only a dozen or so iterations are required for convergence of the GLS. A Bayesian analysis requires a much larger number of draws from the posterior distribution than that. For instance, \cite{gelm:shir:2011} recommend an effective sample size of about $100$ posterior draws, with autocorrelations requiring a larger actual sample size. \cite{vats:fleg:jone:2019} advocate even greater effective sample sizes.

It is usually reasonable to assume that there is a selection bias underlying which data points are observed. Accounting for any such selection bias must necessarily involve using information or assumptions from outside the data set at hand.
We expect that any approach to take proper account of informative missingness must also make use of solutions to GLS, perhaps after reweighting the observations. Before one develops any such methods, it is necessary to first be able to solve GLS without regard to missingness. Many of the problems in electronic commerce involve categorical outcomes, especially binary ones, such as whether an item was purchased or not. Generalized linear mixed models are then appropriate ways to handle crossed random effects, and we expect that the progress made here will be useful for those problems.

\section*{Acknowledgements} This work was supported by the U.S.\ National Science Foundation under grant IIS-1837931. We are grateful to Brad Klingenberg and Stitch Fix for sharing some test data with us. We thank the reviewers for remarks that have helped us improve the paper.

\bibliographystyle{imsart-nameyear}
\section{Introduction} A fundamental goal in computer vision is to build representations of visual data that can be used towards tasks such as object classification, detection, segmentation, tracking, and action recognition \cite{ImageNet,pascalch06,soomro2012ucf101,hmdb51}. In the past decades, a lot of research has focused on learning directly from single images and has done so with remarkable success \cite{redmon2016you,he2017mask,he2016deep}. Single images carry crucial information about a scene. However, when we observe a temporal sequence of image frames, \emph{i.e.}, a video, it is possible to understand much more about the objects and the scene. In fact, by moving, objects reveal their shape (through a change in the occlusions), their behavior (how they move due to the laws of Physics or their inner mechanisms), and their interaction with other objects (how they deform, break, clamp etc.). However, learning such information is nontrivial. Even when labels related to motion categories are available (such as in action recognition), there is no guarantee that the trained model will learn the desired information, and will not instead simply focus on a single iconic frame and recognize a key pose or some notable features strongly correlated to the action \cite{schindler2008action}.

To build representations of videos that capture more than the information contained in a single frame, we pose the task of learning an accurate model of motion as that of learning to distinguish an unprocessed video from a temporally-transformed one. Since similar frames are present in both the unprocessed and the transformed sequence, the only piece of information that allows their discrimination is their temporal evolution. This idea has been exploited in the past \cite{fernando2017self,lee2017unsupervised,li2019joint,misra2016shuffle,wei2018learning} and is also related to work in time-series analysis, where dynamic time warping is used as a distance for temporal sequences \cite{KenjiIwanaU2020}. In this paper, we analyze different temporal transformations and evaluate how learning to distinguish them yields a representation that is useful to classify videos into meaningful action categories. Our main finding is that the most effective temporal distortions are those that can be identified only by observing the largest number of frames. For instance, substituting the second half of a video with its first half in reverse order can already be detected by comparing just the 3 frames around the point of temporal symmetry. In contrast, distinguishing when a video is played backwards from when it is played forward \cite{wei2018learning} may require observing many frames. Thus, one can achieve powerful video representations by using as pseudo-task the classification of temporal distortions that differ in their long-range motion dynamics.
\begin{figure}[t] \centering (a) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/1.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/2.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/3.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/4.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/5.png} \vspace{-.3cm} \caption*{\tiny \phantom{0}0\hspace{0.14\linewidth}1\hspace{0.14\linewidth}\phantom{0}2\hspace{0.14\linewidth}\phantom{0}3\hspace{0.14\linewidth}\phantom{0}4\hspace{0.14\linewidth}\phantom{0}5} \vspace{.05cm} \end{subfigure}\\ (b) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/2.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/4.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/6.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/8.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/10.png} \vspace{-.3cm} \caption*{\tiny \phantom{0}0\hspace{0.14\linewidth}\phantom{0}2\hspace{0.14\linewidth}\phantom{0}4\hspace{0.14\linewidth}\phantom{0}6\hspace{0.14\linewidth}8\hspace{0.14\linewidth}10} \vspace{.05cm} \end{subfigure}\\ (c) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/4.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/8.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/12.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/16.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/20.png} \vspace{-.3cm} \caption*{\tiny \phantom{0}0\hspace{0.14\linewidth}\phantom{0}4\hspace{0.14\linewidth}8\hspace{0.14\linewidth}12\hspace{0.14\linewidth}16\hspace{0.14\linewidth}20} \vspace{.05cm} \end{subfigure}\\ (d) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/8.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/16.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/24.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/32.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/35.png} \vspace{-.3cm} \caption*{\tiny \phantom{0}0\hspace{0.14\linewidth}\phantom{0}8\hspace{0.14\linewidth}16\hspace{0.14\linewidth}24\hspace{0.14\linewidth}32\hspace{0.14\linewidth}40} \vspace{.05cm} \end{subfigure}\\ (e) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, 
clip]{figures/sequence_sampling/8.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/3.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/4.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/10.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/6.png} \vspace{-.3cm} \caption*{\tiny 8\hspace{0.14\linewidth}\phantom{0}0\hspace{0.14\linewidth}3\hspace{0.14\linewidth}\phantom{0}4\hspace{0.14\linewidth}10\hspace{0.14\linewidth}6} \vspace{.05cm} \end{subfigure}\\ (f) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/8.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/16.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/24.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/18.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/10.png} \vspace{-.3cm} \caption*{\tiny \phantom{0}0\hspace{0.14\linewidth}\phantom{0}8\hspace{0.14\linewidth}16\hspace{0.14\linewidth}24\hspace{0.14\linewidth}18\hspace{0.14\linewidth}\phantom{0}10} \vspace{.05cm} \end{subfigure}\\ (g) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/5.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/13.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/18.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/22.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/24.png} \vspace{-.3cm} \caption*{\tiny \phantom{0}0\hspace{0.14\linewidth}\phantom{0}5\hspace{0.14\linewidth}13\hspace{0.14\linewidth}18\hspace{0.14\linewidth}22\hspace{0.14\linewidth}24} \vspace{.05cm} \end{subfigure} \caption{\textbf{Learning from Temporal Transformations}. The frame number is indicated below each image. (a)-(d) \texttt{Speed} transformation by skipping: (a) 0 frames, (b) 1 frame, (c) 3 frames, and (d) 7 frames. (e) \texttt{Random}: frame permutation (no frame is skipped). (f) \texttt{Periodic}: forward-backward motion (at the selected speed). 
(g) \texttt{Warp}: variable frame skipping while preserving the order.} \label{fig:time-warps} \end{figure}%

Towards this goal, we investigate 4 different temporal transformations of a video, which are illustrated in Fig.~\ref{fig:time-warps}: \begin{enumerate} \item \textbf{Speed}: Select a subset of frames with uniform sub-sampling (\emph{i.e.}, with a fixed number of frames in between every pair of selected frames), while preserving the order in the original sequence; \item \textbf{Random}: Select a random permutation of the frame sequence; \item \textbf{Periodic}: Select a random subset of frames in their natural (forward) temporal order and then a random subset in the backward order; \item \textbf{Warp}: Select a subset of frames with a random sub-sampling (\emph{i.e.}, with a random number of frames in between every pair of selected frames), while preserving the natural (forward) order in the original sequence. \end{enumerate} We use these transformations to verify and illustrate the hypothesis that learning to distinguish them from one another (and from the original sequence) is useful to build a representation of videos for action recognition. For simplicity, we train a neural network that takes as input videos of the same duration and outputs two probabilities: one indicating which of the above temporal transformations the input sequence belongs to, and one identifying the playback speed of a \textbf{speed}-transformed sequence. In the Experiments section, we transfer features of standard 3D-CNN architectures (C3D \cite{tran2015learning}, 3D-ResNet \cite{hara2018can}, and R(2+1)D \cite{tran2018closer}) pre-trained through the above pseudo-task to standard action recognition data sets such as UCF101 and HMDB51, with improved performance compared to prior works. We also show that features learned through our proposed pseudo-task capture long-range motion better than features obtained through supervised learning. Our project page \texttt{\url{https://sjenni.github.io/temporal-ssl}} provides code and additional experiments.

Our contributions can be summarized as follows: 1) We introduce a novel self-supervised learning task to learn video representations by distinguishing temporal transformations; 2) We study the discrimination of the following novel temporal transformations: \textbf{speed}, \textbf{periodic} and \textbf{warp}; 3) We show that our features are a better representation of motion than features obtained through supervised learning and achieve state-of-the-art transfer learning performance on action recognition benchmarks.

\section{Prior Work} Because it requires no manual annotation, our method belongs to self-supervised learning. Self-supervised learning appeared in the machine learning literature more than 2 decades ago \cite{caruana1996promoting,ando2005predictive} and has been reformulated recently in the context of visual data with new insights that make it a promising method for representation learning \cite{Carl2015}. This learning strategy is a recent variation on the unsupervised learning theme, which exploits labeling that comes for ``free'' with the data.
Labels could be easily accessible and associated with a non-visual signal (for example, ego-motion \cite{agrawalCM15}, audio \cite{Owens_2018_ECCV}, text and so on), but could also be obtained from the structure of the data (\emph{e.g.}, the location of tiles \cite{Carl2015,noroozi2016unsupervised}, the color of an image \cite{zhang2016colorful,zhang2016split,larsson2017colorproxy}) or through transformations of the data \cite{gidaris2018unsupervised,jenni2018self,jenni2020steering}. Several works have adapted self-supervised feature learning methods from domains such as images or natural language to videos: rotation prediction \cite{jing2018self} and Dense Predictive Coding \cite{han2019video} are two examples, and \cite{sun2019contrastive} adapts the BERT language model \cite{devlin2018bert} to sequences of frame feature vectors. In the case of videos, we identify three groups of self-supervised approaches: 1) Methods that learn from videos to represent videos; 2) Methods that learn from videos to represent images; 3) Methods that learn from videos and auxiliary signals to represent both videos and the auxiliary signals (\emph{e.g.}, audio).

\noindent\textbf{Temporal ordering methods.} Prior work has explored the temporal ordering of the video frames as a supervisory signal. For example, Misra \emph{et al.}~\cite{misra2016shuffle} showed that learning to distinguish a real triplet of frames from a shuffled one yields a representation with temporally varying information (\emph{e.g.}, human pose). This idea has been extended to longer sequences for posture and behavior analysis by using Long Short-Term Memories \cite{brattoli2017lstm}. The above approaches classify the correctness of a temporal order directly from one sequence. An alternative is to feed several sequences, some of which are modified, and ask the network to tell them apart \cite{fernando2017self}. Other recent work predicts the permutation of a sequence of frames \cite{lee2017unsupervised} or both the spatial and temporal ordering of frame patches \cite{buchler2018improving,kim2019self}. Another recent work focuses on solely predicting the arrow of time in videos \cite{wei2018learning}. Three concurrent publications also exploit the playback speed as a self-supervision signal \cite{epstein2020oops,benaim2020speednet,yao2020video}. In contrast, our work studies a wider range of temporal transformations. Moreover, we show empirically that the temporal extent (in frames) of the statistics captured by our features correlates with the transfer learning performance in action recognition. \\

\noindent\textbf{Methods based on visual tracking.} The method of Wang and Gupta \cite{wang2015unsupervised} builds a metric to define similarity between patches. Three patches are used as input, where two patches are matched via tracking in a video and the third one is arbitrarily chosen. Tracking can also be directly solved during training, as shown in \cite{vondrick2018tracking}, where color is used as a supervisory signal. By solving the task of coloring a grey-scale video (in a coherent manner across time), one can automatically learn how to track objects. Visual correspondences can also be learned by exploiting cycle-consistency in time \cite{wang2019learning} or by jointly performing region-level localization and fine-grained matching \cite{li2019joint}.
However, although trained on videos, these methods have not been used to build video representations or evaluated on action recognition.\\

\noindent\textbf{Methods based on auxiliary signals.} Supervision can also come from additional signals recorded with images. For example, videos also come with audio. The fact that the sounds are synchronized with the motion of objects in a video already provides a weak supervision signal: One knows the set of possible sounds of visible objects, but not precisely their correspondence. Owens \emph{et al.}~\cite{owens2016ambient} show that, through the process of predicting a summary of ambient sound in video frames, a neural network learns a useful representation of objects and scenes. Another way to learn a similar representation is via classification \cite{arandjelovic2017look}: A network is given an image-sound pair as input and must classify whether they match or not. Korbar \emph{et al.}~\cite{korbar2018cooperative} build audio and video representations by learning to synchronize audio and video signals using a contrastive loss. Recently, \cite{patrick2020multi} also use multi-modal data from videos in a contrastive learning framework. Several methods use optical flow as a supervision signal. For example, Wang \emph{et al.}~\cite{wang2019self} extract motion and appearance statistics. Luo \emph{et al.}~\cite{luo2017unsupervised} predict future atomic 3D flows given an input sequence, and Gan \emph{et al.}~\cite{gan2018geometry} use geometry in the form of flow fields and disparity maps on synthetic and 3D movies. Optical flow is also used as input for video representation learning or filtering of the data \cite{wei2018learning}. In contrast, we do not make use of any auxiliary signals and learn video representations solely from the raw RGB frames.

\section{Learning Video Dynamics}

\begin{figure*}[t!] \centering \includegraphics[width=0.9\linewidth]{figures/sequence_sampling/time_warps_v3.pdf} \caption{\textbf{Training a 3D-CNN to distinguish temporal transformations.} In each mini-batch we select a video speed (out of 4 possible choices), \emph{i.e.}, how many frames are skipped in the original video. Then, the 3D-CNN receives as input mini-batch a mixture of 4 possible transformed sequences: \texttt{speed} (with the chosen frame skipping), \texttt{random}, \texttt{periodic} and \texttt{warp}. The network outputs the probability of which motion type a sequence belongs to and the probability of which speed type the speed-transformed sequence has.} \label{fig:transforms} \end{figure*}

Recent work \cite{wang2019self} showed how a careful learning of motion statistics led to a video representation with excellent transfer performance on several tasks and data sets. The learning of motion statistics was made explicit by extracting optical flow between frame pairs, by computing flow changes, and then by identifying the region where a number of key attributes (\emph{e.g.}, maximum magnitude and orientation) of the time-averaged flow-change occurred. In this work, we also aim to learn from motion statistics, but we focus our attention entirely on the temporal evolution without specifying motion attributes of interest or defining a task based on appearance statistics. We hypothesize that these important aspects could be implicitly learned and exploited by the neural network to solve the sole task of discriminating temporal transformations of a video.
Our objective is to encourage the neural network to represent well those motion statistics that require a long-range observation (in the temporal domain). To do so, we train the network to discriminate videos where the image content has been preserved, but not the temporal domain. For example, we ask the network to distinguish a video at the original frame rate from when it is played 4 times faster. Due to the laws of Physics, one can expect that, in general, \emph{executing} the same task at different speeds leads to different motion dynamics compared to when a video is just \emph{played} at different speeds (\emph{e.g.}, compare marching vs.\ walking played at a higher speed). Capturing the subtleties of the dynamics of these motions requires more than estimating motion between 2 or 3 frames. Moreover, these subtleties are specific to the moving object, and thus they require object detection and recognition.

In our approach, we transform videos by sampling frames according to different schemes, which we call \emph{temporal transformations}. To support our learning hypothesis, we analyze transformations that require short-range (\emph{i.e.}, temporally local) and long-range (\emph{i.e.}, temporally global) video understanding. As will be illustrated in the Experiments section, short-range transformations yield representations that transfer to action recognition with a lower performance than long-range ones.

\subsection{Transformations of Time} Fig.~\ref{fig:transforms} illustrates how we train our neural network (a 3D-CNN \cite{tran2015learning}) to build a video representation (with 16 frames). In this section, we focus on the inputs to the network. As mentioned above, our approach is based on distinguishing different temporal transformations. We consider 4 fundamental types of transformations: speed changes, random temporal permutations, periodic motions, and temporal warp changes. Each of these transformations boils down to picking a sequence of temporal indices to sample the videos in our data set. ${\cal V}_{\kappa}^\tau\subset \{0,1,2,\dots\}$ denotes the chosen subset of indices of a video based on the transformation $\tau\in\{0,1,2,3\}$ and with speed $\kappa$.\\

\noindent\textbf{Speed ($\tau=0$): } In this first type we artificially change the video frame rate, \emph{i.e.}, its playing speed. We achieve that by skipping a different number of frames. We consider 4 cases, \textbf{Speed 0,~1,~2,~3} corresponding to $\kappa=0,~1,~2,~3$ respectively, where we skip $2^\kappa-1$ frames. The resulting playback speed of \textbf{Speed $\bm{\kappa}$} is therefore $2^\kappa$ times the original speed. In the generation of samples for the training of the neural network we first uniformly sample $\kappa \in \{0,1,2,3\}$, the playback speed, and then use this parameter to define the other transformations. This sequence is used in all experiments as one of the categories, either against other speeds or against one of the other transformations below. The index sequence ${\cal V}^0_\kappa$ is thus $\rho+[0,1\cdot2^\kappa,2\cdot2^\kappa,\dots,15\cdot2^\kappa]$, where $\rho$ is a random initial index.\\

\noindent\textbf{Random ($\tau=1$): } In this second temporal transformation we randomly permute the indices of a sequence without skipping frames. We fix $\kappa=0$ to ensure that the maximum frame skip between two consecutive frames is not too dissimilar to other transformations. This case is used as a reference, as random permutations can often be detected by observing only a few nearby frames.
Indeed, in the Experiments section one can see that this transformation yields a low transfer performance. The index sequence ${\cal V}_0^\text{1}$ is thus $\rho+ \text{permutation}([0,1,2,\dots,15])$. This transformation is similar to that of the pseudo-task of Misra \emph{et al.}\@ \cite{misra2016shuffle}. \\

\noindent\textbf{Periodic ($\tau=2$): } This transformation synthesizes motions that exhibit approximate periodicity. To create such artificial cases we first pick a point $2\cdot2^\kappa<s<13\cdot2^\kappa$ where the playback direction switches. Then, we compose a sequence with the following index sequence: $0$ to $s$ and then from $s-1$ to $2s-15\cdot2^\kappa$. Finally, we sub-sample this sequence by skipping $2^\kappa-1$ frames. Notice that the randomization of the midpoint $s$ in the case of $\kappa>0$ yields pseudo-periodic sequences, where the frames in the second half of the generated sequence often do not match the frames in the first half of the sequence. The index sequence ${\cal V}^\text{2}_\kappa$ is thus $\rho+[0,1\cdot2^\kappa,2\cdot2^\kappa,\dots,\Bar{s}\cdot2^\kappa,(\Bar{s}-1)\cdot2^\kappa+\delta,\dots,(2\Bar{s}-15)\cdot2^\kappa+\delta]$, where $\Bar{s}=\lfloor s/2^\kappa \rfloor$, $\delta=s-\Bar{s}\cdot2^\kappa$, and $\rho=\max(0, (15-2\Bar{s})\cdot2^\kappa-\delta)$.\\

\noindent\textbf{Warp ($\tau=3$): } In this transformation, we pick a set of $16$ ordered indices with a non-uniform number of skipped frames between them (we consider sampling any frame, so we let $\kappa=0$). In other words, between any two consecutive frames in the generated sequence we have a random number of skipped frames, each chosen independently from the set $\{0,\ldots,7\}$. This transformation creates a warping of the temporal dimension by varying the playback speed from frame to frame. To construct the index sequence ${\cal V}^\text{3}_0$ we first sample the frame skips $s_j\in\{0,\ldots,7\}$ for $j=1,\ldots,15$ and set ${\cal V}^\text{3}_0$ to $\rho+[0,s_1, s_1+s_2, \dots,\sum_{j=1}^{15}s_j]$.
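To make the four sampling schemes concrete, the following sketch generates the index sequences ${\cal V}^\tau_\kappa$ in numpy. It is a minimal reconstruction from the definitions above, not the released training code; in particular, for \texttt{warp} we read a skip of $s_j$ frames as a step of $s_j+1$ between selected frames, so that consecutive selected frames are distinct.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L = 16  # frames per input clip

def speed(kappa, rho=0):
    # V^0_kappa: uniform sub-sampling, skipping 2^kappa - 1 frames.
    return rho + (2 ** kappa) * np.arange(L)

def random_perm(rho=0):
    # V^1_0: random permutation of 16 consecutive frame indices.
    return rho + rng.permutation(L)

def periodic(kappa):
    # V^2_kappa: forward up to the switch point s, then backward.
    step = 2 ** kappa
    s = int(rng.integers(2 * step + 1, 13 * step))   # 2*2^k < s < 13*2^k
    sbar, delta = s // step, s % step
    rho = max(0, (15 - 2 * sbar) * step - delta)     # keep indices >= 0
    fwd = step * np.arange(sbar + 1)
    bwd = step * np.arange(sbar - 1, 2 * sbar - 16, -1) + delta
    return rho + np.concatenate([fwd, bwd])          # 16 indices in total

def warp(rho=0):
    # V^3_0: s_j frames skipped between selections, s_j in {0,...,7}.
    steps = 1 + rng.integers(0, 8, size=L - 1)       # step = skip + 1
    return rho + np.concatenate([[0], np.cumsum(steps)])
\end{verbatim}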
\subsection{Training} Let $\phi$ denote our network, and let us denote with $\phi^\text{m}$ (\texttt{motion}) and $\phi^\text{s}$ (\texttt{speed}) its two softmax outputs (see Fig.~\ref{fig:transforms}). To train $\phi$ we optimize the following loss \begin{align} -&\text{E}_{\kappa\sim{\cal U}[0,3],p\in{\cal V}^0_\kappa,q\in{\cal V}^1_0,s\in{\cal V}^2_\kappa,t\in{\cal V}^3_0, x}\Big[ \log\big(\phi_0^\text{m}\left(x_p\right)\phi_1^\text{m}\left(x_q\right)\phi_2^\text{m}\left(x_s\right)\phi_3^\text{m}\left(x_t\right)\big) \Big]\\ -&\text{E}_{\kappa\sim{\cal U}[0,3],p\in{\cal V}^0_\kappa,x}\big[ \log\left(\phi_\kappa^\text{s}\left(x_p\right)\right)\big]\nonumber \end{align} where $x$ is a video sample and the sub-index denotes the selected set of frames. This loss is the cross entropy both for \texttt{motion} and \texttt{speed} classification (see Fig.~\ref{fig:transforms}).

\subsection{Implementation} Following prior work \cite{wang2019self}, we use the smaller variant of the C3D architecture \cite{tran2015learning} for the 3D-CNN transformation classifier in most of our experiments. Training was performed using the AdamW optimizer \cite{loshchilov2018decoupled} with parameters $\beta_1=0.9, \beta_2=0.99$ and a weight decay of $10^{-4}$. The initial learning rate was set to $3\cdot10^{-4}$ during pre-training and $5\cdot10^{-5}$ during transfer learning. The learning rate was decayed by a factor of $10^{-3}$ over the course of training using cosine annealing \cite{loshchilov2016sgdr}, both during pre-training and transfer learning. We use batch-normalization \cite{ioffe2015batch} in all but the last layer. Mini-batches are constructed such that all the different transformation types are included for each sampled training video. The batch size is set to 28 examples (including all the transformed sequences). The speed type is uniformly sampled from all the considered speed types. Since not all the videos allow a sampling of all speed types (due to their short duration), we limit the speed type range to the maximal possible speed type in those examples. We use the standard pre-processing for the C3D network. In practice, video frames are first resized to $128\times 171$ pixels, from which we extract random crops of size $112 \times 112$ pixels. We also apply random horizontal flipping of the video frames during training. We use only the raw unfiltered RGB video frames as input to the motion classifier and do not make use of optical flow or other auxiliary signals.

\begin{table}[t] \centering \caption{\textbf{Ablation experiments.} We train a 3D-CNN to distinguish different sets of temporal transformations. The quality of the learned features is evaluated through transfer learning for action recognition on UCF101 (with frozen convolutional layers) and HMDB51 (with fine-tuning of the whole network). } \label{tab:ablation} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}l@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{}} \toprule & \texttt{speed} & \textbf{UCF101} & \textbf{HMDB51} \\ \textbf{Pre-Training Signal} & loss & (conv frozen) & (conv fine-tuned) \\ \midrule Action Labels UCF101 & - & 60.7\% & 28.8\% \\ \midrule Speed & YES & 49.3\% & 32.5\% \\ \midrule Speed + Random & NO & 44.5\% & 31.7\% \\ Speed + Periodic & NO & 40.6\% & 29.5\% \\ Speed + Warp & NO & 43.5\% & 32.6\% \\ Speed + Random & YES & 55.1\% & 33.2\% \\ Speed + Periodic & YES & 56.5\% & 36.1\% \\ Speed + Warp & YES & 55.8\% & 36.9\% \\ \midrule Speed + Random + Periodic & NO & 47.4\% & 30.1\% \\ Speed + Random + Warp & NO & 54.8\% & 36.6\% \\ Speed + Periodic + Warp & NO & 50.6\% & 36.4\% \\ Speed + Random + Periodic & YES & 60.0\% & 37.1\% \\ Speed + Random + Warp & YES & 60.4\% & 39.2\% \\ Speed + Periodic + Warp & YES & 59.5\% & 39.0\% \\ \midrule Speed + Random + Periodic + Warp & NO & 54.2\% & 34.9\% \\ Speed + Random + Periodic + Warp & YES & 60.6\% & 38.0\% \\ \bottomrule \end{tabular}% } \end{table}

\section{Experiments} \noindent\textbf{Datasets and Evaluation.} In our experiments we consider three datasets. Kinetics \cite{zisserman2017kinetics} is a large human action dataset consisting of around 500K videos. Video clips are collected from YouTube and span 600 human action classes. We use the training split for self-supervised pre-training. UCF101 \cite{soomro2012ucf101} contains around 13K video clips spanning 101 human action classes. HMDB51 \cite{hmdb51} contains around 5K videos belonging to 51 action classes. Both UCF101 and HMDB51 come with three pre-defined train and test splits. We report the average performance over all splits for transfer learning experiments. We use UCF101 train split 1 for self-supervised pre-training. For transfer learning experiments we skip 3 frames, corresponding to transformation \textbf{Speed 2}.
For the evaluation of action recognition classifiers in transfer experiments we use as prediction the maximum class probability averaged over all center-cropped sub-sequences of each test video. More details are provided in the supplementary material.\\

\noindent\textbf{Understanding the Impact of the Temporal Transformations.} We perform ablation experiments on UCF101 and HMDB51 where we vary the number of different temporal transformations the 3D-CNN is trained to distinguish. The 3D-CNN is pre-trained for 50 epochs on UCF101 with our self-supervised learning task. We then perform transfer learning for action recognition on UCF101 and HMDB51. On UCF101 we freeze the weights of the convolutional layers and train three randomly initialized fully-connected layers for action recognition. This experiment treats the transformation classifier as a fixed video feature extractor. On HMDB51 we fine-tune the whole network, including convolutional layers, on the target task. This experiment therefore measures the quality of the network initialization obtained through self-supervised pre-training. In both cases we again train for 50 epochs on the action recognition task. The results of the ablations are summarized in Table~\ref{tab:ablation}. For reference we also report the performance of network weights learned through supervised pre-training on UCF101.

We observe that when considering the impact of a single transformation across different cases, the types \textbf{Warp} and \textbf{Speed} achieve the best transfer performance. With the same analysis, the transformation \textbf{Random} leads to the worst transfer performance on average. We observe that \textbf{Random} is also the easiest transformation to detect (based on training performance -- not reported). As can be seen in Fig.~\ref{fig:time-warps} (e), this transformation can lead to drastic differences between consecutive frames. Such examples can therefore be easily detected by only comparing pairs of adjacent frames. In contrast, the motion type \textbf{Warp} cannot be distinguished based solely on two adjacent frames and requires modelling long-range dynamics. We also observe that distinguishing a larger number of transformations generally leads to an increase in the transfer performance.

The effect of the \textbf{speed} type classification is quite noticeable. It leads to a very significant transfer performance increase in all cases. This is also the most difficult pseudo-task (based on the training performance -- not reported). Recognizing the speed of an action is indeed challenging, since different action classes naturally exhibit widely different motion speeds (\emph{e.g.}, ``applying make-up'' vs. ``biking''). This task might often require a deeper understanding of the physics and objects involved in the video.

\begin{table}[t] \centering \caption{\textbf{Comparison to prior work on self-supervised video representation learning.} Whenever possible we compare to results reported with the same data modality we used, \emph{i.e.}, unprocessed RGB input frames. * are our reimplementations.
} \label{tab:comparison} \resizebox{\linewidth}{!}{% \begin{tabular}{@{}l@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c} \toprule \textbf{Method} & \textbf{Ref} & \textbf{Network} & \textbf{Train Dataset} & \textbf{UCF101} & \textbf{HMDB51} \\ \midrule Shuffle\&Learn \cite{misra2016shuffle} & \cite{misra2016shuffle} & AlexNet & UCF101 & 50.2\% & 18.1\% \\ O3N \cite{fernando2017self} & \cite{fernando2017self} & AlexNet & UCF101 & 60.3\% & 32.5\% \\ AoT \cite{wei2018learning} & \cite{wei2018learning} & VGG-16 & UCF101 & 78.1\% & - \\ OPN \cite{lee2017unsupervised} & \cite{lee2017unsupervised} & VGG-M-2048 & UCF101 & 59.8\% & 23.8\% \\ DPC \cite{han2019video} & \cite{han2019video} & 3D-ResNet34 & Kinetics & 75.7\% & 35.7\% \\ SpeedNet \cite{benaim2020speednet} & \cite{benaim2020speednet} & S3D-G & Kinetics & 81.1\% & 48.8\% \\ AVTS \cite{korbar2018cooperative} (RGB+audio) & \cite{korbar2018cooperative} & MC3 & Kinetics & 85.8\% & 56.9\% \\ \midrule Shuffle\&Learn \cite{misra2016shuffle}* & - & C3D & UCF101 & 55.8\% & 25.4\% \\ 3D-RotNet \cite{jing2018self}* & - & C3D & UCF101 & 60.6\% & 27.3\% \\ Clip Order \cite{xu2019self} & \cite{xu2019self} & C3D & UCF101 & 65.6\% & 28.4\% \\ Spatio-Temp \cite{wang2019self} & \cite{wang2019self} & C3D & UCF101 & 58.8\% & 32.6\% \\ Spatio-Temp \cite{wang2019self} & \cite{wang2019self} & C3D & Kinetics & 61.2\% & 33.4\% \\ 3D ST-puzzle \cite{kim2019self} & \cite{kim2019self} & C3D & Kinetics & 60.6\% & 28.3\% \\ Ours & - & C3D & UCF101 & \underline{68.3\%} & \underline{38.4\%} \\ Ours & - & C3D & Kinetics & \textbf{69.9\%} & \textbf{39.6\%} \\ \midrule 3D ST-puzzle \cite{kim2019self} & \cite{kim2019self} & 3D-ResNet18 & Kinetics & 65.8\% & 33.7\% \\ 3D RotNet \cite{jing2018self} & \cite{jing2018self} & 3D-ResNet18 & Kinetics & 66.0\% & 37.1\% \\ DPC \cite{han2019video} & \cite{han2019video} & 3D-ResNet18 & Kinetics & 68.2\% & 34.5\% \\ Ours & - & 3D-ResNet18 & UCF101 & \underline{77.3\%} & \underline{47.5\%} \\ Ours & - & 3D-ResNet18 & Kinetics & \textbf{79.3\%} & \textbf{49.8\%} \\ \midrule Clip Order \cite{xu2019self} & \cite{xu2019self} & R(2+1)D & UCF101 & \underline{72.4\%} & 30.9\% \\ PRP \cite{yao2020video} & \cite{yao2020video} & R(2+1)D & UCF101 & 72.1\% & \underline{35.0\%} \\ Ours & - & R(2+1)D & UCF101 & \textbf{81.6\%} & \textbf{46.4\%} \\ \bottomrule \end{tabular}} \end{table}

Notice also that our pre-training strategy leads to a better transfer performance on HMDB51 than supervised pre-training using action labels. This suggests that the video dynamics learned through our pre-training generalize well to action recognition, and that such dynamics are not well captured through supervised action recognition alone. \\

\noindent\textbf{Transfer to UCF101 and HMDB51. } We compare to prior work on self-supervised video representation learning in Table~\ref{tab:comparison}. A fair comparison to much of the prior work is difficult due to differences in network architectures, training procedures, and transfer settings. We opted to compare with some commonly used network architectures (\emph{i.e.}, C3D, 3D-ResNet, and R(2+1)D) and re-implemented two prior works \cite{misra2016shuffle} and \cite{jing2018self} using C3D. We performed self-supervised pre-training on UCF101 and Kinetics. C3D is pre-trained for 100 epochs on UCF101 and for 15 epochs on Kinetics. 3D-ResNet and R(2+1)D are both pre-trained for 200 epochs on UCF101 and for 15 epochs on Kinetics. We fine-tune all the layers for action recognition.
Fine-tuning is performed for 75 epochs using C3D and for 150 epochs with the other architectures. When pre-training on UCF101 our features outperform prior work on the same network architectures. Pre-training on Kinetics leads to an improvement in transfer in all cases. \\

\begin{figure}[t] \centering (a) \begin{subfigure}{0.95\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/1.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/2.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/3.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/4.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/5.png}\\ \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/3.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/4.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/5.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/6.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/7.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/8.png} \end{subfigure}\\ \vspace{.2cm} (b) \begin{subfigure}{0.95\textwidth} \centering \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/0.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/1.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/2.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/3.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/4.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/5.png}\\ \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/7.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/8.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/9.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/10.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/11.png} \includegraphics[width=0.15\linewidth,trim=0 1cm 0 2cm, clip]{figures/sequence_sampling/12.png} \end{subfigure} \caption{\textbf{Time-Related Pseudo-Tasks}. (a) Synchronization problem: The network is given two sequences with a time delay (3 frames in the example) and a classifier is trained to determine the delay. (b) The before-after problem: The network is given two non-overlapping sequences, and it needs to determine which comes first (the bottom sequence comes after the top one in the example). } \label{fig:timerelatedtasks} \end{figure}%

\noindent\textbf{Long-Range vs Short-Range Temporal Statistics.} To illustrate how well our video representations capture motion, we transfer them to other pseudo-tasks that focus on the temporal evolution of a video. One task is the classification of the synchronization of video pairs, \emph{i.e.}, how many frames one video is delayed with respect to the other.
A second task is to classify which of two videos comes first temporally. These two tasks are illustrated in Fig.~\ref{fig:timerelatedtasks}. In the same spirit, we also evaluate our features on other tasks and data sets and we report the results at our project page \texttt{\url{https://sjenni.github.io/temporal-ssl}}.

For the synchronization task, two temporally overlapping video sequences $x_1$ and $x_2$ are separately fed to the pre-trained C3D network to extract features $\psi(x_1)$ and $\psi(x_2)$ at the \texttt{conv5} layer. These features are then fused through $\psi(x_1)-\psi(x_2)$ and fed as input to a randomly initialized classifier consisting of three fully-connected layers trained to classify the offset between the two sequences. We consider random offsets between the two video sequences in the range $-6$ to $+6$. For the second task we construct a single input sequence by sampling two non-overlapping 8-frame sub-sequences $x_{i1}$ and $x_{i2}$, where $x_{i1}$ comes before $x_{i2}$. The network inputs are then either $(x_{i1}, x_{i2})$ for the class ``before'' or $(x_{i2}, x_{i1})$ for the class ``after''. We reinitialize the fully-connected layers in this case as well. A minimal sketch of this setup is given below.

\begin{table}[t]
\centering
\caption{\textbf{Time-Related Pseudo-Tasks.} We examine how well features from different pre-training strategies can be transferred to time-related tasks on videos. As tasks we consider the synchronization of two overlapping videos and the temporal ordering of two non-overlapping videos. We report the accuracy on both tasks on the UCF101 test set and also report the Mean Absolute Error (MAE) for the synchronization task. * denotes our re-implementations.
\label{tab:timetask}}
\begin{tabular}{@{}l@{\hspace{2em}}c@{\hspace{1em}}c@{\hspace{2em}}c@{}}
\toprule
& \multicolumn{2}{c}{\textbf{Sync.}} & \textbf{Before-After} \\
\textbf{Method} & Accuracy & MAE & Accuracy \\ \midrule
Action Labels (UCF101) & 36.7\% & \underline{1.85} & 66.6\% \\
3D-RotNet \cite{jing2018self}* & 28.0\% & 2.84 & 57.8\% \\
Shuffle\&Learn \cite{misra2016shuffle}* & \underline{39.0\%} & 1.89 & \underline{69.8\%} \\
Ours & \textbf{42.4\%} & \textbf{1.61} & \textbf{76.9\%} \\ \bottomrule
\end{tabular}%
\end{table}

\begin{figure*}[!ht]
\centering
(a) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/gradviz1.pdf} \end{subfigure}\\ \vspace{.2cm}
(b) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/gradviz2.pdf} \end{subfigure}\\ \vspace{.2cm}
(c) \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/gradviz3.pdf} \end{subfigure}
\caption{\textbf{Visualization of active pixels}. The first row in each block corresponds to the input video. Rows two and three show the output of our adaptation of Guided Backpropagation \cite{springenberg2014striving} when applied to a network trained through self-supervised and supervised learning, respectively. In all three cases we observe that the self-supervised network focuses on image regions of moving objects or persons. In (a) we can also observe how long-range dynamics are detected by the self-supervised model. The supervised model, on the other hand, focuses heavily on static frame features in the background.}
\label{fig:grad_viz}
\end{figure*}
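A minimal sketch of the synchronization setup follows. The hidden-layer widths, the clip length, and the \texttt{encode} helper (standing in for the pre-trained network up to \texttt{conv5}) are illustrative assumptions; the feature difference, the three fully-connected layers, and the 13 offset classes follow the description above.

\begin{verbatim}
import torch
import torch.nn as nn

NUM_OFFSETS = 13  # synchronization offsets in [-6, +6]

class OffsetClassifier(nn.Module):
    """Three fully-connected layers on fused conv5 features.
    Hidden sizes (256, 128) are illustrative assumptions."""
    def __init__(self, feat_dim=512, num_classes=NUM_OFFSETS):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes))

    def forward(self, f1, f2):
        # Features of the two clips are fused through their difference.
        return self.fc(f1 - f2)

def make_sync_pair(encode, video, clip_len=16):
    """Sample one training example for the synchronization task.
    `encode` maps a clip (C, T, H, W) to a flat conv5 feature vector
    (hypothetical helper); `video` has shape (C, T_total, H, W)."""
    offset = int(torch.randint(-6, 7, (1,)))      # delay in frames
    t0 = 6                                        # room for negative offsets
    clip_a = video[:, t0:t0 + clip_len]
    clip_b = video[:, t0 + offset:t0 + offset + clip_len]
    label = offset + 6                            # map [-6, 6] to class index
    return encode(clip_a), encode(clip_b), label
\end{verbatim}

For the before-after task, the two non-overlapping 8-frame sub-sequences are instead concatenated into a single input in either order, and a reinitialized head performs the binary classification.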
In Table~\ref{tab:timetask} we compare the performance of different pre-training strategies on the time-related pseudo-tasks. We see that our self-supervised features perform better at these tasks than supervised features and other self-supervised features, thus showing that they capture the temporal dynamics in the videos well.\\
\noindent\textbf{Visualization.} What are the attributes, factors, or features of the videos that self-supervised and supervised models are extracting to perform the final classification? To examine what the self-supervised and supervised models focus on, we apply Guided Backpropagation \cite{springenberg2014striving}. This method allows us to visualize which part of the input has the most impact on the final decision of the model. We slightly modify the procedure by subtracting the median values from every frame of the gradient video and by taking the absolute value of the result. We visualize the pre-trained self-supervised and supervised models on several test samples from UCF101. As one can see in Fig.~\ref{fig:grad_viz}, a model pre-trained on our self-supervised task tends to ignore the background and focuses on persons performing an action and on moving objects. Models trained with supervised learning, on the other hand, tend to focus more on the appearance of foreground and background. Another observation we make is that the self-supervised model identifies the location of moving objects/people in past and future frames. This is visible in row number 2 of blocks \textit{(a)} and \textit{(c)} of Fig.~\ref{fig:grad_viz}, where the network tracks the possible locations of the moving ping-pong and billiard balls respectively. A possible explanation for this observation is that our self-supervised task only encourages the learning of dynamics. The appearance of non-moving objects or static backgrounds is not useful for solving the pretext task and is thus ignored. \\
\noindent\textbf{Learning Dynamics vs. Frame Features.} The visualizations in Fig.~\ref{fig:grad_viz} indicate that features learned through motion discrimination focus on the dynamics in videos and not so much on static content present in single frames (\emph{e.g.}, background) when compared to supervised features. To further investigate how much the features learned through the two pre-training strategies rely on motion, we performed experiments in which we removed all dynamics from the videos. To this end, we created input videos by replicating a single frame 16 times (resulting in a still video) and trained the three fully-connected layers on \texttt{conv5} features for action classification on UCF101. Features obtained through supervised pre-training achieve an accuracy of 18.5\% (vs. 56.5\% with dynamics) and features from our self-supervised task achieve 1.0\% (vs. 58.1\%). Although the setup in this experiment is somewhat contrived (since the input domain is altered), it still illustrates that our features rely almost exclusively on motion instead of features present in single frames. This can be advantageous since motion features might generalize better to variations in the background appearance in many cases. \\
\noindent\textbf{Nearest-Neighbor Evaluation.} We perform an additional quantitative evaluation of the learned video representations via nearest-neighbor retrieval. The features are obtained by training a 3D-ResNet18 network on Kinetics with our pseudo-task and are chosen as the output of the global average pooling layer, which corresponds to a vector of size 512. For each video we extract and average features of 10 temporal crops. To perform the nearest-neighbor retrieval, we first normalize the features using the training set statistics. Cosine similarity is used as the metric to determine the nearest neighbors. We follow the evaluation proposed by \cite{buchler2018improving} on UCF101. Query videos are taken from test split 1 and all the videos of train split 1 are considered as retrieval targets. A query is considered correctly classified if the $k$-nearest neighbors contain at least one video of the correct class (\emph{i.e.}, same class as the query). We report the mean accuracy for different values of $k$ and compare to prior work in Table~\ref{tab:nn}. Our features achieve state-of-the-art performance. \\
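A compact sketch of this retrieval evaluation is given below. The use of per-dimension mean and standard deviation as the training-set statistics is an assumption, since the normalization is only described in general terms above.

\begin{verbatim}
import numpy as np

def topk_retrieval_accuracy(train_feats, train_labels, test_feats,
                            test_labels, ks=(1, 5, 10, 20, 50)):
    """k-NN retrieval accuracy: a query counts as correct at k if any
    of its top-k training-set neighbors shares the query's class."""
    # Normalize with training-set statistics (assumed mean/std here).
    mu, sigma = train_feats.mean(0), train_feats.std(0) + 1e-8
    tr = (train_feats - mu) / sigma
    te = (test_feats - mu) / sigma
    # Cosine similarity = dot product of L2-normalized vectors.
    tr /= np.linalg.norm(tr, axis=1, keepdims=True)
    te /= np.linalg.norm(te, axis=1, keepdims=True)
    sims = te @ tr.T                          # (num_test, num_train)
    order = np.argsort(-sims, axis=1)         # neighbors, best first
    accs = {}
    for k in ks:
        neighbor_labels = train_labels[order[:, :k]]
        hit = (neighbor_labels == test_labels[:, None]).any(axis=1)
        accs[k] = hit.mean()
    return accs
\end{verbatim}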
\begin{table}[t]
\centering
\caption{\textbf{Video Retrieval Performance on UCF101.} We compare to prior work in terms of $k$-nearest neighbor retrieval accuracy. Query videos are taken from test split 1 and retrievals are computed on train split 1. A query is correctly classified if the query class is present in the top-$k$ retrievals. We report mean retrieval accuracy for different values of $k$.}
\label{tab:nn}
\begin{tabular}{@{}l@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c}
\toprule
\textbf{Method} & \textbf{Network} & \textbf{Top1} & \textbf{Top5} & \textbf{Top10} & \textbf{Top20} & \textbf{Top50} \\ \midrule
Jigsaw \cite{noroozi2016unsupervised} & AlexNet & 19.7 & 28.5 & 33.5 & 40.0 & 49.4 \\
OPN \cite{lee2017unsupervised} & AlexNet & 19.9 & 28.7 & 34.0 & 40.6 & 51.6 \\
B\"uchler \emph{et al.}~\cite{buchler2018improving} & AlexNet & \underline{25.7} & 36.2 & 42.2 & 49.2 & 59.5 \\
Clip Order \cite{xu2019self} & R3D & 14.1 & 30.3 & 40.0 & 51.1 & 66.5 \\
SpeedNet \cite{benaim2020speednet} & S3D-G & 13.0 & 28.1 & 37.5 & 49.5 & 65.0 \\
PRP \cite{yao2020video} & R3D & 22.8 & \underline{38.5} & \underline{46.7} & \underline{55.2} & \underline{69.1} \\
Ours & 3D-ResNet18 & \textbf{26.1} & \textbf{48.5} & \textbf{59.1} & \textbf{69.6} & \textbf{82.8} \\ \bottomrule
\end{tabular}
\end{table}
\noindent\textbf{Qualitative Nearest-Neighbor Results.} We show some examples of nearest neighbor retrievals in Fig.~\ref{fig:nn}. Frames from the query test video are shown in the leftmost block of three columns. The second and third blocks of three columns show the top two nearest neighbors from the training set. We observe that the retrieved examples often capture the semantics of the query well. This is the case even when the action classes do not agree (\emph{e.g.}, last row).
\begin{figure*}[t!]
\centering
\includegraphics[width=0.95\linewidth]{figures/nn_pic.pdf}
\caption{\textbf{Examples of Retrievals in UCF101.} Leftmost block of 3 columns: Frames from the query sequences. Second and third blocks of 3 columns: Frames from the two nearest neighbors.}
\label{fig:nn}
\end{figure*}
\section{Conclusions}
We have introduced a novel task for the self-supervised learning of video representations by distinguishing between different types of temporal transformations. This learning task is based on the principle that recognizing a transformation of time requires an accurate model of the underlying natural video dynamics. This idea is supported by experiments demonstrating that features learned by distinguishing time transformations capture video dynamics better than supervised learning and that such features generalize well to classic vision tasks such as action recognition or time-related tasks such as video synchronization.
\\
\noindent \textbf{Acknowledgements.} This work was supported by grants 169622 and 165845 of the Swiss National Science Foundation.
\clearpage
\bibliographystyle{splncs04}
\subsection*{Acknowledgments} We acknowledge the contributions of the processor development and fabrication teams at D-Wave Systems, without whom this work would not be possible. The work of ALB, EDD and CN was carried out under the auspices of the U.S.~DoE through the Los Alamos National Laboratory, operated by Triad National Security, LLC (Contract No. 892333218NCA000001). We thank ISTI at Los Alamos National Laboratory for financial support; work was also supported by the Institute for Materials Science (IMS) at Los Alamos under the program of ``IMS Rapid Response''. AK wishes to thank Arnab Banerjee (Purdue) and CN wishes to thank Peter Schiffer (Yale) for in-depth discussions. \subsection*{Author Contributions} ALB and EDD conceived the project. CN, ADK, ALB, and EDD contributed to experimental design. EDD realized the embedding. ADK performed the QA experiments and data analysis. GPL calibrated the QA processor and performed supporting measurements. CN performed theoretical analysis and drafted the manuscript with ADK. All authors contributed to the final version of the manuscript. \clearpage \section*{Supplementary Materials} \subsection{Materials and Methods} \subsubsection{QA methods.} The QA system used in this study was a D-Wave 2000Q system that uses radio-frequency superconducting quantum interference device (rf-SQUID) flux qubits to realize controllable spin-1/2 particles in a transverse-field Ising model \cite{Harris2010a,Bunyk2014,Johnson2011}. In this particular system, 2041 of the 2048 fabricated qubits and 5974 of 6016 couplers were operational. The qubit connectivity lattice is a ``Chimera'' graph consisting of a $16\times 16$ grid of eight-qubit unit cells~\cite{Bunyk2014}. To implement the checkerboard Ising lattice used for our artificial square ice, we use strong FM couplings to bind together four-qubit chains; each chain then represents a single logical spin of the checkerboard lattice as depicted in Fig.~\ref{fig:1} (see Fig.~\ref{fig:embeddings} for a depiction in which qubits are represented by points and couplers are represented by lines). The available programmable range for each coupler is $[-2,1]$ (in units of Ising energy scale $\mathcal J(s)$, as discussed later). Negative values indicate FM coupling, and in a four-qubit chain we program all three couplers to $-2\mathcal J$. The $J_\perp$ and $J_\parallel$ couplers within an ice vertex are implemented using AFM couplers in a Chimera unit cell; each $J_\perp$ and $J_\parallel$ coupler is implemented using two couplers in the QA system (Fig.~\ref{fig:schedule}A). The difference in these two coupling geometries leads to a slight lifting of degeneracy between Type-I and Type-II configurations (cf.~\cite{Harris2018}). We compensate for the lifting of degeneracy by tuning the AFM couplers individually by between one and four percent, in the process of calibrating the degeneracy point. At $J=J_{\text{MAX}}=1.92\mathcal J$ we set each of these AFM couplers to be $0.96$ before the calibration refinement, to leave space within the $[-2,1]$ coupling range for fine tuning. This allows a total AFM coupling between two four-qubit chains of $J_{\text{MAX}}=1.92\mathcal J$. With this energy scale in mind, AFM couplers are programmed with an exchange strength depending on (1) the desired overall coupling strength $J/J_{\text{MAX}}$, (2) bias between $J_\parallel$ and $J_\perp$, and (3) fine-tuning of the calibration as detailed below. 
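As a schematic illustration of this programming step, the following sketch assembles the coupler values for one four-qubit FM chain and for one logical AFM coupling realized by two physical couplers, in the dimensionless units of the $[-2,1]$ programmable range. The qubit indices are hypothetical placeholders, and the one-to-four-percent fine tuning is deferred to the calibration refinement described below.

\begin{verbatim}
# Schematic coupler programming (units of the programmable range [-2, 1]).

def chain_couplers(qubits):
    """Strong FM couplers (-2) binding four qubits into one logical spin."""
    assert len(qubits) == 4
    return {(a, b): -2.0 for a, b in zip(qubits, qubits[1:])}

def logical_afm_coupler(physical_pairs, strength=1.0):
    """One logical J_perp/J_par coupling is realized by two physical AFM
    couplers set to 0.96 each, so J_MAX = 1.92 J(s) at full strength.
    `strength` is the desired J/J_MAX in [0, 1]."""
    return {pair: 0.96 * strength for pair in physical_pairs}

couplers = {}
couplers.update(chain_couplers((0, 1, 2, 3)))            # one logical spin
couplers.update(logical_afm_coupler([(3, 8), (3, 12)]))  # one in-vertex coupling
\end{verbatim}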
Since the four-qubit chains impinging on an ice vertex extend by one unit cell in each direction from the unit cell containing the in-vertex coupling terms (Fig.~\ref{fig:1}), the $16\times 16$ grid of unit cells gives us a $14\times 14$ grid of ice vertices. In the presence of a defective qubit or coupler, we modify the problem by first removing a set of four-qubit chains so that no remaining chain contains or is incident to a defective device, and second by removing (by setting to zero) all remaining AFM coupling terms within each ice vertex that has fewer than four of its chains remaining. The resulting lattice is shown in Fig.~\ref{fig:3}. The QA system has a small amount of unwanted disorder in the effective coupling terms; we minimize its impact by averaging over multiple realizations and by compensating for the disorder via calibration refinement. Each experiment uses 20 randomly generated embeddings of the simulated Ising model (with the same set of vacancies; see Fig.~\ref{fig:embeddings}). Thus we run experiments using different sets of qubits and couplers to implement the same system, and present average statistics. Within each embedding, we refine the calibration of the qubits and couplers via iterative fine-tuning. This type of refinement has emerged as an important ingredient in quantum annealing of degenerate systems \cite{King2018, King2019, Kairys2020}. In this work we tune the AFM couplers such that the spin-spin correlations across $J_\parallel$ couplers are homogeneous, and the spin-spin correlations across $J_\perp$ couplers are homogeneous. We also tune each qubit toward its degeneracy point with zero average magnetization using flux-bias offsets. When multiple boundary conditions are presented, we perform the refinement on the open boundary condition, and use the refinement for all boundary conditions. The experiments for the boundary conditions are performed iteratively, wherein each iteration includes an annealing experiment for each boundary condition. \subsubsection{Reverse anneal chains.} Here we describe the annealing protocol by which our spin ice system is relaxed. Each experiment described in this work consists of many repeated calls to the QA system. The QA realizes the TFIM Hamiltonian (\ref{eq:ham}) in a parameterized form using an annealing parameter $0\leq s\leq 1$: \begin{equation}\label{eq:qaham} \mathcal H = \mathcal J(s) \bigg(\sum_{\langle ij \rangle}J_{ij}\hat \sigma_i^z \hat\sigma_j^z + \sum_i h_i \hat \sigma_i^z\bigg) - \Gamma(s)\sum_i \hat \sigma_i^x. \end{equation} For each call, the QA system is programmed with the terms $J_{ij}$ and $h_i$, initialized in a random classical spin state, then evolved by modulation of $s$. A chain of 128 classical output states is generated; between one output state and the next, quantum and thermal fluctuations are turned on (by reducing $s$ from $1$ to a value $s^*$ that gives the desired parameters $\mathcal J(s^*)$ and $\Gamma(s^*)$), held for a pause duration $t_p$, then turned off (by increasing $s$ from $s^*$ to $1$). For Fig.~\ref{fig:2} and Fig.~\ref{fig:4}, $t_p=\SI{512}{\micro s}$. For Fig.~\ref{fig:3}, $t_p=\SI{1}{\micro s}$. The rate of change of $s$ is denoted $ds/dt = t_q$, which throughout this work is $\SI{1}{\micro s}^{-1}$, the fastest allowed by the system. This cycle of turning quantum and thermal fluctuations on and off is shown in Fig.~\ref{fig:protocol}. Between each reverse anneal is a readout taking $\SI{0.2}{ms}$ and a pause of $\SI{0.5}{ms}$. 
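The cycle above can be summarized as a short list of piecewise-linear waypoints in $(t, s)$; the sketch below constructs them for a single reverse anneal. The list-of-pairs format mirrors the anneal-schedule argument accepted by D-Wave samplers, although the exact programming interface is not specified here and the sketch is illustrative only.

\begin{verbatim}
def reverse_anneal_schedule(s_target=0.38, t_pause=512.0, ramp_rate=1.0):
    """Piecewise-linear (time [us], s) waypoints for one reverse anneal.

    The anneal starts and ends at s = 1 (classical), dwells at s* for the
    pause duration t_p, and ramps at |ds/dt| = 1/us, the fastest allowed."""
    t_ramp = (1.0 - s_target) / ramp_rate
    return [
        [0.0, 1.0],                       # start in a classical state
        [t_ramp, s_target],               # fluctuations turned on
        [t_ramp + t_pause, s_target],     # pause: quantum + thermal mixing
        [2 * t_ramp + t_pause, 1.0],      # quench back to a classical state
    ]

# t_p = 512 us for Figs. 2 and 4; t_p = 1 us for Fig. 3.
schedule = reverse_anneal_schedule(s_target=0.38, t_pause=512.0)
\end{verbatim}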
Note that quantum fluctuations are induced by the transverse field $\Gamma$, whereas thermal fluctuations arise from the system's coupling to the environment; experiments are run at a fixed temperature of $T=\SI{10}{mK}$. Thus the strengths of the quantum and thermal fluctuations relative to the Ising energy scale are given by $\Gamma/J$ and $T/J$, respectively. We repeat the 128-step QA chain many times for each experiment, reporting average statistics. The main results (Figs.~\ref{fig:2} and \ref{fig:4}) reflect 200 repetitions using 20 lattice embeddings; the first 16 steps of each 128-step chain are discarded as burn-in. This gives a total of $448,000$ samples for each choice of parameters.
\subsubsection{Estimation of effective Ising parameters.}
Since the rf-SQUID flux qubits have multiple energy levels and provide an imperfect approximation to spins in a transverse field Ising model, we follow methods set out in Ref.~\cite{King2019} to estimate the effective coupling and tunneling terms in the transverse-field Ising model (TFIM) Hamiltonians investigated. There are three relevant systems:
\begin{enumerate}
\item The QA system is made up of rf-SQUID flux qubits arranged in a Chimera topology (Fig.~\ref{fig:embeddings}), in which four-qubit chains are bound together using strong FM couplings.
\item The qubits provide an approximate physical realization of a TFIM Hamiltonian in the same Chimera topology, in which each degree of freedom is an ideal two-level spin-1/2 particle.
\item Finally, the Chimera TFIM is used to approximately realize the Ising square ice system on the checkerboard lattice (Fig.~\ref{fig:1}-C), using each four-qubit FM chain of Chimera spins to represent a single logical ice spin.
\end{enumerate}
Here we describe the extraction of effective TFIM parameters from the physical qubit parameters. We follow the methods of Ref.~\cite{King2019} (SM Section 3) across a range of annealing parameters $s$; the experiments described in the main body of this work were performed with $s=0.38$. In the Chimera system, $J_\perp$ and $J_\parallel$ are realized with different geometry (Fig.~\ref{fig:schedule}-A), similar to the situation in \cite{Harris2018} (SM Fig.~S10). Tuning of the $J_\perp = J_\parallel$ degeneracy point indicates that the effective difference is small (around $3\%$), so for the extraction of TFIM Hamiltonian parameters we average out the two geometries. The AFM and FM couplings in the gadget have strength $0.06J_{\mathrm{MAFM}}$ and $-2J_{\mathrm{MAFM}}$ respectively, where $J_{\mathrm{MAFM}}$ is the maximum AFM coupling between two flux qubits. These values are chosen so that chain-intact states (in which the strong FM couplers are respected), which are the most important to the experiment, dominate the low-energy spectrum. We diagonalize the rf-SQUID flux qubit Hamiltonian of the 12-qubit gadget shown in Fig.~\ref{fig:schedule}-B according to the independently measured qubit parameters (Fig.~\ref{fig:schedule}-D, black lines). Our aim is to determine TFIM energy scales $\mathcal J_{12}(s)$ and $\Gamma_{12}(s)$ such that the qubit Hamiltonian and the TFIM Hamiltonian
\begin{equation}
\mathcal H_{12}(s) = \mathcal J_{12}(s)\bigg(\sum_{i,j}J_{ij}\hat\sigma_i^z\hat\sigma_j^z + \sum_i h_i \hat\sigma_i^z \bigg) - \Gamma_{12}(s)\sum_{i}\hat\sigma_i^x
\end{equation}
have similar eigenspectra. Specifically, we search for values for which the first seven eigengaps are almost identical (purple circles).
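This eigengap matching can be illustrated with a few lines of dense diagonalization. The sketch below computes TFIM eigengaps for the three-spin logical gadget introduced next, in which every pair of spins is coupled at $0.48J_{\mathrm{MAFM}}$; the $\mathcal J$ and $\Gamma$ values passed in are placeholders rather than the fitted scales.

\begin{verbatim}
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def embed(op1, site, n):
    """Embed a single-spin operator at `site` in an n-spin Hilbert space."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op1 if k == site else I2)
    return out

def tfim_eigengaps(J, Gamma, couplings, n, num_gaps=7):
    """Eigengaps E_i - E_0 of H = J sum J_ij sz_i sz_j - Gamma sum sx_i."""
    H = np.zeros((2 ** n, 2 ** n))
    for (i, j), Jij in couplings.items():
        H += J * Jij * embed(sz, i, n) @ embed(sz, j, n)
    for i in range(n):
        H -= Gamma * embed(sx, i, n)
    E = np.linalg.eigvalsh(H)
    return E[1:num_gaps + 1] - E[0]

# Three-spin gadget: every pair coupled AFM at 0.48 (units of J_MAFM);
# the J and Gamma energy scales below are placeholders, not fitted values.
gaps = tfim_eigengaps(J=1.0, Gamma=0.3, n=3,
                      couplings={(0, 1): 0.48, (0, 2): 0.48, (1, 2): 0.48})
print(gaps)
\end{verbatim}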
Using a best fit we extract $\mathcal J_{12}(s)$ and $\Gamma_{12}(s)$ (Fig.~\ref{fig:schedule}-E); we also show the total inter-chain coupling $J_{\text{MAX}}(s) = 1.92\mathcal J_{12}(s)$ that reflects the maximum energy scale in our experiments. We perform the same mapping to the three-spin TFIM Hamiltonian (Fig.~\ref{fig:schedule}-C)
\begin{equation}
\mathcal H_{3}(s) = \mathcal J_{3}(s)\sum_{i,j}J_{ij}\hat\sigma_i^z\hat\sigma_j^z - \Gamma_3(s)\sum_{i}\hat\sigma_i^x,
\end{equation}
where each pair of spins is coupled with strength $0.48J_{\text{MAFM}}$. Again the eigengaps are shown to provide a good match in Fig.~\ref{fig:schedule}-D, and the extracted parameters are shown in Fig.~\ref{fig:schedule}-F.
\subsubsection{Magnetic structure factor.}
Magnetic structure factors were computed using the vector-based approach presented in, for example, Refs.~\cite{perrin2016extensive} and \cite{farhan2019emergent} (as opposed to the scalar-based approach presented in Ref.~\cite{Henry2014}).
\subsection{Effect of monopoles and quantum fluctuations on dynamics}
Under the conditions of the main experiments, with $s=0.38$ and $T\approx \SI{10}{mK}$ (measured as in \cite{Johnson2011} SM p.~8), the relevant energy scales are $T/J_{\text{MAX}} = 0.083$ and $\Gamma/J_{\text{MAX}} = 0.34$ for the Chimera TFIM. These parameters yield an approximate realization of the checkerboard Ising system with parameters $T/J_{\text{MAX}} = 0.089$ and $\Gamma/J_{\text{MAX}} = 0.010$. This indicates that the Ising coupling strength is not strongly affected by the embedding of logical ice spins into four-qubit chains, but the tunneling is suppressed by over an order of magnitude. The ratio $T/\Gamma$ in these experiments is therefore too high to reach the quantum Coulomb phase of the checkerboard Ising system (see \cite{Henry2014} Fig.~2). Despite this, the quantum fluctuations accelerate dynamics in the system. Fig.~\ref{fig:methodsdynamics} compares mixing rates for several values of $\Gamma/J$ with fixed $J/T$. This is achieved by modulating the annealing parameter $s$ between $0.36$ and $0.41$ and tuning the programmable terms $J_{ij}$ and $h_i$ so that the classical part of $\mathcal H_{12}$, i.e.,
\begin{equation}
\mathcal J_{12}(s)\bigg(\sum_{i,j}J_{ij}\hat\sigma^z_i\hat\sigma^z_j+\sum_ih_i\hat\sigma^z_i\bigg)
\end{equation}
remains constant for each value of $s$. As $\Gamma/J$ is increased, the sample-to-sample difference resulting from each exposure to fluctuations increases, with no accompanying increase in monopole count. The data shown in Fig.~\ref{fig:methodsdynamics} correspond to the three closed-boundary configurations studied in Fig.~\ref{fig:4}, but show exposures of only $\SI{1}{\micro s}$, as in Fig.~\ref{fig:3}. Additionally, the itinerant monopoles influence the mixing of the disordered ice system, since spins in the vicinity of a monopole can be flipped individually without changing the energy of the system. This is not the case in the absence of a monopole, where either cotunneling or excitation is required to move the system away from its current state. For exposures of $\SI{1}{\micro s}$ the boundary conditions have a significant impact: closed boundaries rely on excitations to drive mixing, evidenced by the fact that conditions C and D, with itinerant monopoles, mix considerably faster than condition B. These boundary conditions also lead to lower populations of {\em surplus} monopoles, i.e., those that are not forced by the boundary condition (1 for condition C, 2 for condition D). This is what one would expect: if a monopole pair appears spontaneously in the presence of an itinerant confined monopole, the result is, for example, a negatively charged monopole that can now annihilate with one of {\em two} positively charged monopoles rather than just one.
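Before turning to additional data, we note that the vector-based structure factor computation referenced above admits a compact sketch: each Ising moment is projected transverse to the wave vector before the Fourier sum. The overall normalization is an assumption, as conventions differ between the cited references.

\begin{verbatim}
import numpy as np

def structure_factor(positions, spins, qs):
    """Vector-based magnetic structure factor S(q) for Ising moments.
    `positions`: (N, 2) site coordinates; `spins`: (N, 2) moment vectors
    (Ising value times the local easy axis); `qs`: (M, 2) wave vectors.
    Only the spin components transverse to each q contribute."""
    out = np.empty(len(qs))
    for m, q in enumerate(qs):
        qn = np.linalg.norm(q)
        qhat = q / qn if qn > 0 else np.zeros(2)
        # Project out the longitudinal component of each moment.
        s_perp = spins - np.outer(spins @ qhat, qhat)
        phase = np.exp(1j * (positions @ q))
        f = (s_perp * phase[:, None]).sum(axis=0)
        out[m] = np.real(f @ np.conj(f)) / len(spins)
    return out
\end{verbatim}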
\subsection{Additional data}
\subsubsection{Vacancy-free lattices}
Fig.~\ref{fig:histograms_backmatter} presents data analogous to Fig.~\ref{fig:4}, produced using a fully-yielded sublattice with no vacancies. This allows us to suppress the effect of unwanted disorder and statistical noise by averaging over lattice symmetries (bottom half).
\subsubsection{Video animations}
Supplementary video files show examples of ice states for a chain of 128 samples separated by $\SI{1}{\micro s}$ exposures; Fig.~\ref{fig:3} shows four states from such a sequence. The examples correspond to open boundaries with varying $J_{\perp}/J_{\parallel}$ bias as in Fig.~\ref{fig:2}, and the degenerate case with varying boundary conditions as in Fig.~\ref{fig:4}. All videos correspond to experiments with $J=J_{\mathrm{MAX}}$.
\newpage
\onecolumngrid
\begin{figure*}
\includegraphics[width=10cm]{embeddings_meta.pdf}
\caption{{\bf Two embeddings of the square ice system.} Red and blue couplings are AFM and FM, respectively. Both embeddings realize the couplings of the same Ising model using different physical qubits and couplers in the quantum annealer. Although the embeddings differ locally, they share the same general structure: a $14\times 14$ grid of ice vertices, whose internal AFM couplings are in the unit cells of the ``Chimera'' qubit topology. Each ice vertex intersects with four four-qubit FM chains. Both embeddings have the same pattern of vacant sites. For each experiment we present average statistics over 20 randomly generated embeddings.}\label{fig:embeddings}
\end{figure*}
\begin{figure}
\includegraphics[width=18cm]{schedule_meta.pdf}
\caption{{\bf Extracting effective TFIM parameters:} The Ising square ice is realized by mapping each spin in the checkerboard lattice (Fig.~1B, Fig.~3) to a ferromagnetically-coupled chain of four qubits (Fig.~1A, Fig.~\ref{fig:embeddings}). In this embedding of the square ice into the ``Chimera'' qubit arrangement, $J_\perp$ couplings and $J_\parallel$ couplings are realized with slightly different geometry ({\bf A}). To extract effective parameters of the Chimera TFIM Hamiltonian, we study a 12-variable gadget whose couplings reflect the relevant embedding geometry ({\bf B}). Using this gadget we compute the spectrum of 12 rf-SQUID flux qubits and 12 Ising spins ({\bf D}), and determine the Ising tunneling and coupling parameters $\Gamma$ and $J_{\text{MAX}}$ ({\bf E}) using a best fit on the lowest eight eigengaps. To estimate the difference in energy scales between the embedded Chimera TFIM and the logical square ice TFIM, we also map the 12-qubit (3-chain) gadget to a 3-spin gadget ({\bf C}) and extract the effective Hamiltonian ({\bf F}).}\label{fig:schedule}
\end{figure}
\begin{figure}
\includegraphics[width=12cm]{protocol_backmatter_multi.pdf}
\caption{{\bf Quantum annealing control:} Diagram (not to scale) of annealing parameter $s$ versus time in the QA protocol. The system follows a cycle of 128 reverse anneals per programming. Each reverse anneal starts with a classical state and ends with a classical state.
The system is exposed to quantum and thermal fluctuations for a fixed pause time $t_p$ before the fluctuations are quenched and the state is destructively projected to the computational ($\sigma^z$) basis for readout.}\label{fig:protocol}
\end{figure}
\begin{figure}
\includegraphics[scale=1]{sm_mixing.pdf}
\caption{{\bf Effect of monopoles on mixing.} Average surplus monopole count (excess over the ground state, left) and sample-to-sample difference (in spins, right) for QA steps with varying values of $\Gamma/J$. The three closed boundary conditions (Fig.~\ref{fig:4}B--D) are studied. The two boundary cases with itinerant monopoles show fewer surplus monopoles and faster mixing, compared to the case with no monopoles in the ground state (orange squares). As $\Gamma/J$ increases the system mixes faster without significant addition of monopole excitations, indicating dynamics driven by quantum fluctuations. Error bars are 95\% statistical bootstrap confidence intervals on the mean across 50 QA calls.}\label{fig:methodsdynamics}
\end{figure}
\begin{figure}
\includegraphics[width=15cm]{histograms_backmatter.pdf}
\caption{Monopole populations on a subgrid with no vacancies, analogous to Fig.~4. Restricting the artificial spin ice to a $9\times 9$ subgrid allows us to study a system that has no defects. Consequently, the square grid has eight symmetries over which we can average the monopole population, further suppressing experimental variation (bottom). This makes the entropic screening, in particular its relation to the parity of the grid distance from the forced monopole, clearer.}\label{fig:histograms_backmatter}
\end{figure}
\end{document}
\section{Introduction}
High throughput phenotyping (HTP) is a limiting factor facing modern agriculture. Because the tasks involved are labor intensive, phenotyping crops to identify color, stand count, leaf count, plant height, etc. is severely limited. This ``phenotyping bottleneck'' restricts our capability to examine how phenotypes interact with the plant's genetic factors as well as environmental factors \citep{furbank2011phenomics}. For a farmer who manages upwards of 10,000 acres, it is infeasible to inspect each crop individually to identify its characteristics. In an ideal scenario, HTP results in the collection, annotation, and labeling of massive amounts of data for analysis that is vital for advancing plant breeding. Unlike in other domains, live, in-field data can only be collected during a specific period of a plant's growth cycle. If this time is missed, then a farmer or breeder must wait until the next growing period to collect more data, which, in some cases, may be one year later. To mitigate this issue, agronomists have turned to image-based capturing techniques (such as phones and drones) as a means of data collection and storage. Through these images, farmers are no longer bound by a plant's growth cycle and can thus phenotype a crop at a later date. However, with the influx of a massive amount of image-based data, farmers and breeders now face a similar but new challenge: analyzing massive amounts of images quickly. To effectively perform image-based HTP, tools must be made available to farmers and breeders so they can make real-time decisions to manage their crops against pests, disease, drought, etc. to maximize growth and, ultimately, yield.

Modern agriculture has evolved to encompass drone, satellite, and cellphone imagery as a method for data storage and collection. The purpose of these technologies is to capture still images so that plant characteristics can be identified and analyzed at a later date. Although this helps mitigate the data collection phase of HTP, it creates a new problem: being able to accurately and efficiently analyze the captured images to obtain the desired information. Recent years have seen advancements at the intersection of plant phenotyping and traditional machine learning approaches to identify stress, coloring, and head count in soybeans, wheat, and sorghum \citep{singh2016machine, naik2017real, yuan2018wheat, guo2018aerial}. These studies showcase and emphasize the impact that HTP can have on advancing our knowledge of plants and their interaction with the environment, as well as how to make effective real-time decisions to protect a crop during its growing season. Although traditional machine learning approaches have seen success in agriculture, the current state of the art in HTP is the application of deep learning.

Deep learning techniques in agriculture are used for image classification, object detection, and counting. Common deep learning architectures such as AlexNet \citep{krizhevsky2017imagenet}, LeNet \citep{lecun1998gradient}, and VGG-16 \citep{simonyan2014deep} have been applied to classify diseases in tomatoes, cherries, apples, peppers, and olives \citep{mohanty2016using, cruz2017x, wang2017automatic}. Various papers have also utilized standard architectures such as ResNet50 to count leaves and sorghum heads \citep{giuffrida2018pheno, mosley2020image}.
Outside of existing frameworks, \cite{lu2017tasselnet} created a novel approach to identify corn tassels by combining convolutional neural networks (CNNs) and local counts regression into what they refer to as TasselNet. In addition to various applications, numerous annotated datasets across different crops have been made publicly available to researchers for classification and detection problems \citep{zheng2019cropdeep, sudars2020dataset, haug2014crop}. Indeed, these works show how rapidly the combination of deep learning and plant breeding is growing and provide hope for mitigating the bottleneck facing the analysis of crop images. For a comprehensive overview of image-based HTP we refer the reader to a paper by \cite{jiang2020convolutional}.

Although there is vast literature covering various crops and object detection approaches, there are few studies that perform HTP on commercial corn (\textit{Zea mays} L.). Previous studies proposed deep learning based methods to accurately predict corn yield based on factors such as genotype, weather, soil, and satellite imagery, but none of these studies constitute HTP on commercial corn \citep{khaki2019classification, russello2018convolutional, khaki2019cnn, khaki2019crop, khaki2020predicting}. Due to the number of food and industrial products that depend on it, corn is widely regarded as one of the world's most vital crops \citep{berardi2019flooding}. Not only is corn used to create products such as starch, flour, and ethanol, it is also the primary feed for livestock (pigs, cows, cattle, etc.) due to being rich in nutrients and proteins \citep{nazli2018potential}. In 2019, corn was the United States' (U.S.) largest grown crop, accounting for more than 90 million acres of land and adding more than \$140 billion to the U.S. economy \citep{usda2019}. By 2050, the world's population is expected to reach 9.1 billion \citep{stephenson2010population}. With the world's population increasing and the amount of arable land not increasing, changes must occur to maximize corn yield while maintaining the same (or fewer) input parameters.

A practical approach to increasing corn yield is to provide agronomists and farmers with a precise, real-time mechanism to estimate yield during the growing season. Such a mechanism would enable farmers to make real-time grain management decisions to maximize yield and minimize potential losses of profitability. With an estimate of yield in hand, farmers are able to decide on optimal management practices (when to apply fungicide, nitrogen, fertilizer, etc.) to aid the yield potential of corn. Currently, the process of estimating in-season yield relies on manually counting corn kernels and using certain formulas to get an approximate yield per acre. However, a healthy ear of corn contains 650--800 kernels. Therefore, individually counting each kernel on an ear is a labor-intensive, time-consuming, and error-prone task. Moreover, to provide an accurate representation of a field's true yield, a farmer would need to count kernels on multiple ears, further adding to the labor and time requirements. To aid in solving this HTP bottleneck, \cite{khaki2020convolutional} developed a sliding-window CNN approach to count corn kernels from an image of a corn ear. However, their approach did not estimate yield. Moreover, their proposed approach required a fixed distance between ears and camera and was not invariant to ear orientation or kernel color.
Their sliding window approach also increased the inference time. Because of these limitations, their approach is not suitable for large-scale deployment. Given the need to effectively and efficiently count corn kernels to estimate yield, we present a novel approach to solve this problem by utilizing a new deep learning architecture, which we name DeepCorn. The problem of counting kernels is similar to counting dense groups of people in crowds due to the compactness and the varying angles and sizes of kernels. As such, we will be comparing our method to commonly used crowd counting methods in the literature, which are applicable to other counting studies. Specifically, the novelties of our approach include:
\begin{enumerate}
\item A robust on-ear kernel counting method that is invariant to image scale variation, ear shape, size, orientation, lighting conditions, and color
\item A new deep learning architecture that is superior to commonly used crowd counting models
\item The utilization of a semi-supervised learning approach to further improve the performance of the proposed method. To the best of our knowledge, our paper is the first to generate pseudo-density maps of unlabeled images to improve the counting accuracy in the crowd counting literature \citep{gao2020cnn}.
\item An approach to effectively and efficiently estimate corn yield based on the output of our proposed method
\item A kernel counting dataset to benchmark our proposed method
\end{enumerate}
To achieve this goal, in Section \ref{method} we provide an overview of our deep learning architecture. Section \ref{experiments} provides the details about our experimental setup, dataset and annotations, evaluation metrics, and benchmark models. Analysis is performed in Section 4 to study the robustness of our framework and a procedure for estimating yield. Lastly, Section 5 concludes with our key results and findings.

Our proposed method is motivated by density estimation-based crowd counting methods, since both aim to count a large number of densely packed objects in an image. Crowd counting methods aim to estimate the number of people in an image, which is challenging due to large image scale variations, background clutter, and occlusions \citep{gao2020cnn, cao2018scale}. Recently, CNN-based density map estimation methods have shown great promise in the crowd counting task \citep{liu2018crowd, liu2019context, ma2019bayesian, liu2019crowd, liu2020efficient}. For example, \cite{liu2020efficient} proposed a novel structured knowledge transfer framework which uses the structured knowledge of a trained teacher network to generate a lightweight student network for crowd counting. \cite{liu2019crowd} designed a deep structured scale integration network to counter image scale variations using structured feature representation learning and hierarchically structured loss function optimization. Recent state-of-the-art methods have also proposed context-aware models and Bayesian losses for crowd counting, which proved to be highly successful \citep{ma2019bayesian, liu2019context}.

\section{Methodology}\label{method}
The goal of this study is to count corn kernels in an image of corn ears taken in uncontrolled lighting conditions and, ultimately, use the number of counted kernels to estimate yield. In this paper, we propose a deep learning based method, DeepCorn, to count all the corn kernels given a single 180-degree image of a corn ear.
With this single-angle image, we aim to estimate the number of kernels on the entire corn ear. Although, as shown later in our study, having multiple images from different sides of the ear is beneficial, there are time and automation considerations. From the perspective of the farmer or plant breeder, multiple images may be taken at the expense of a few more seconds of image capturing. However, from an automation perspective, where ears are on a conveyor belt before kernels are shelled off the ear, a single image is all that is manageable. Given these considerations and our goal of creating a robust, generalizable model that is applicable in different processes, we focus our attention on the case where only a single image at a single angle is given for an ear of corn.

Image-based corn kernel counting is challenging due to multiple factors: (1) large variation in kernel shapes and colors (yellow, red, white, etc.), (2) very small distances between kernels, and (3) different orientations, lighting conditions, and scale variations of the ears. Figure \ref{fig:8_ears} displays eight genetically different corn ears, which illustrates these factors. Our proposed model is inspired by crowd counting models because both aim to count a large number of densely packed objects in an image \citep{gao2020cnn}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.05]{5ears.jpg}
\caption{Eight genetically different corn ears. The images indicate the scale variations and the genetic differences among ears.}
\label{fig:8_ears}
\end{figure}
\subsection{Network Architecture}
Corn ear images contain kernels of various pixel sizes due to image scale variations and genetically different corn kernel shapes and colors. As such, the proposed method should be able to counter scale variations and learn both high-level semantics (ears, background, etc.) and low-level patterns (kernel edges, colors, etc.). Figure \ref{fig:diagram} outlines the architecture of the proposed method. The proposed network is used to estimate the image density map, whose integral over any region gives the count of kernels within that region. Various state-of-the-art approaches utilize a density map construction approach to count dense objects in crowds and have been shown to be highly successful \citep{liu2018crowd, liu2019context, ma2019bayesian, liu2019crowd, liu2020efficient}. We use a density estimation-based approach rather than detection-based or regression-based approaches for the following reasons. Detection-based approaches usually apply an object detection method like a sliding window \citep{khaki2020convolutional} or more advanced methods such as YOLO \citep{redmon2016you}, SSD \citep{liu2016ssd}, and Fast R-CNN \citep{girshick2015fast} to first detect objects and subsequently count them. However, their performance is unsatisfactory in dense object counting \citep{gao2020cnn} and they require a huge amount of annotated images, which may not be publicly available for corn kernel counting. Regression-based approaches \citep{wang2015deep, chan2008privacy, idrees2013multi, chan2009bayesian} are trained to map an image patch directly to the count. Despite successfully handling problems such as occlusion and background clutter, these methods have a tendency to ignore spatial information \citep{gao2020cnn}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.22]{diagram.png}
\caption{Outline of the DeepCorn architecture.
The parameters of the convolutional layers are denoted as ``Conv-(kernel size)-(number of filters)''. The stride is 1 for all layers except those marked ``S2'', for which we use a stride of 2. The padding type is `same' for all convolutional layers except the last layer before the concatenation, which has `valid' padding. \textcircled{\raisebox{-0.8pt}{c}} and $\Sigma$ denote matrix concatenation and summation over the density map, respectively.}
\label{fig:diagram}
\end{figure}
Our proposed network uses VGG-16 \citep{simonyan2014deep} as a backbone for feature extraction. Originally proposed for image classification, the VGG-16 network stacks convolutional layers with a fixed kernel size of $3\times3$, which usually generalizes well to other vision tasks including object counting and detection \citep{shi2018multiscale, boominathan2016crowdnet, gao2020counting, sang2019improved, valloli2019w, liu2016ssd, kumar2019mtcnet}. We exclude the last max-pooling layer and all fully connected layers from the VGG network. Even though the VGG-16 network was originally designed to process input images of size $224\times224$, we use input images of size $300\times300$ because the proposed network can potentially learn more fine-grained patterns with higher resolution input images \citep{tan2019efficientnet}. To make the proposed model robust against scale variations and perspective changes in images, we merge feature maps from multiple scales of the network. As such, the model can easily adapt to scale and perspective changes. Similar scale-adaptive CNN approaches have also been used in other counting and detection studies \citep{zhang2018crowd, sang2019improved, liu2016ssd}. Compared to other studies \citep{zeng2017multi, cao2018scale, boominathan2016crowdnet, sam2017switching, zhang2016single, deb2018aggregated} that used multi-column architectures with different filter sizes to deal with scale variations, our proposed network has fewer parameters due to parameter sharing, which can accelerate the training process. Moreover, these skip connections can prevent the vanishing gradient problem and further accelerate the training process, as recommended in \citep{he2015deep}. To concatenate the feature maps from multiple scales, we increase the spatial dimensions of the smaller feature maps to the size of the largest feature map (the first feature map) using zero padding, which is the concatenation approach recommended in \citep{he2015deep}. After concatenation, the feature maps are passed to two convolutional layers. We use $1\times1$ convolutional layers throughout our network to increase or decrease its depth without a significant performance penalty \citep{szegedy2015going}. Due to having four max-pooling layers in our network, the spatial resolution of the last convolutional layer of the network is $1/8$ of the input image. Finally, we up-sample the output of the last convolutional layer to the size of the input image using bi-linear interpolation to estimate the density map. The total count of kernels in the image can be obtained by summation over the estimated density map. We also put a threshold on the estimated density map to zero out regions where the value of the density map is very small. Then, we combine the thresholded density map with the input image, which results in segmented corn ears.
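A minimal TensorFlow sketch of a network in this spirit is shown below. The particular VGG-16 stages merged, the $1\times1$ channel widths, and the upsampling factor are illustrative assumptions; the exact configuration is specified in Figure \ref{fig:diagram}.

\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def pad_to(x, target_hw):
    """Zero-pad a feature map up to a target spatial size (the merging
    strategy described above)."""
    dh = target_hw[0] - x.shape[1]
    dw = target_hw[1] - x.shape[2]
    return layers.ZeroPadding2D(((dh // 2, dh - dh // 2),
                                 (dw // 2, dw - dw // 2)))(x)

def build_density_net(input_shape=(300, 300, 3)):
    vgg = tf.keras.applications.VGG16(include_top=False, weights=None,
                                      input_shape=input_shape)
    # Feature maps at three depths (keras layer names for VGG-16);
    # the choice of merge points is an assumption.
    f1 = vgg.get_layer("block3_conv3").output   # 75 x 75
    f2 = vgg.get_layer("block4_conv3").output   # 37 x 37
    f3 = vgg.get_layer("block5_conv3").output   # 18 x 18
    target = (f1.shape[1], f1.shape[2])
    merged = layers.Concatenate()([f1, pad_to(f2, target), pad_to(f3, target)])
    x = layers.Conv2D(256, 1, activation="relu")(merged)  # 1x1 mixing convs
    x = layers.Conv2D(1, 1, activation="relu")(x)         # non-negative density
    # Bilinear upsampling back to the input resolution (75 * 4 = 300).
    density = layers.UpSampling2D(size=4, interpolation="bilinear")(x)
    return tf.keras.Model(vgg.input, density)

model = build_density_net()
# The kernel count estimate is the sum over the predicted density map.
\end{verbatim}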
\subsection{Kernel Counting Using the Proposed Model}
The ground truth density maps are generated in our study such that the summation over the density map equals the total number of kernels in the image. This property enables the proposed model to count the number of kernels in an image as an auxiliary task while learning how much each region of the image contributes to the total count. As a result, the proposed method is trained end-to-end to predict the density maps, and the number of kernels can be easily estimated by summation over the predicted density map at inference time.
\subsection{Network Loss}
We employ the Euclidean loss as shown in Equation \eqref{eq:eq_loss} to train the network. The Euclidean loss is a popular loss in the crowd counting literature due to enhancing the quality of the estimated density map \citep{gao2020counting, boominathan2016crowdnet, shi2018multiscale, wang2019removing}:
\begin{eqnarray}
L(\Theta)=\frac{1}{N}\sum_{i=1}^{N}\|F(X_i,\Theta)-D_i\|_{2}^{2}\label{eq:eq_loss}
\end{eqnarray}
\noindent where $F(X_i,\Theta)$, $\Theta$, $X_i$, $D_i$, and $N$ denote the predicted density map of the $i$th input image, the parameters of the network, the $i$th input image, the $i$th ground truth density map, and the number of images, respectively. The Euclidean loss computes the difference between the predicted and ground truth density maps at the pixel level and then sums over all pixels to obtain the loss for each image.
\section{Experiment and Results}\label{experiments}
In this section, we present the dataset used for our experiments, the evaluation metrics, the training hyper-parameters, and final results. We conducted all experiments in TensorFlow \citep{abadi2016tensorflow} on an NVIDIA Tesla V100 GPU.
\subsection{Dataset}
\subsubsection{Ground Truth Density Maps}
We follow the procedure in \citep{boominathan2016crowdnet} to generate ground truth density maps. If we assume the center of one corn kernel is located at pixel $x_i$, then the kernel can be represented by a delta function $\delta(x-x_i)$. As such, the ground truth for an image with $M$ kernel center annotations can be represented as follows:
\begin{eqnarray}
H(x)=\sum_{i=1}^{M}\delta(x-x_i)\label{eq:eq_1}
\end{eqnarray}
\noindent Then, $H(x)$ is convolved with a Gaussian kernel to generate the density map $D(x)$: \\
\begin{eqnarray}
D(x)=\sum_{i=1}^{M}\delta(x-x_i) \ast G_\sigma(x)\label{eq:eq_2}
\end{eqnarray}
\noindent where $\sigma$ denotes the standard deviation. The parameter $\sigma$ is determined based on the average distance to the $k$ nearest neighboring kernel annotations. The summation over the density map is the same as the total number of kernels in the image. Using such a density map can be extremely beneficial as it helps the CNN learn how much each region contributes to the total count. A minimal sketch of this construction is given below.
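The following sketch illustrates the ground-truth construction above. The proportionality constant applied to the $k$-nearest-neighbor distance is an assumption, since only the use of the average $k$-NN distance is stated.

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import cKDTree

def density_map(points, shape, k=3, beta=1.0):
    """Ground-truth density map from kernel-center annotations.
    `points` holds (x, y) pixel coordinates (at least k+1 of them); the
    returned map sums to the number of annotations, up to border losses."""
    H, W = shape
    pts = np.asarray(points, dtype=np.float64)
    D = np.zeros((H, W))
    # Distances to the k nearest annotations; index 0 is the point itself.
    dists, _ = cKDTree(pts).query(pts, k=k + 1)
    for (x, y), d in zip(pts, dists):
        sigma = beta * d[1:].mean()       # adaptive sigma from k-NN spacing
        delta = np.zeros((H, W))
        delta[int(round(y)), int(round(x))] = 1.0
        D += gaussian_filter(delta, sigma, mode="constant")
    return D

dmap = density_map([(50, 60), (55, 62), (120, 200), (130, 210)], (300, 300))
print(dmap.sum())   # ~= 4 annotated kernels
\end{verbatim}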
\subsubsection{Input Images and Data Augmentation}\label{sec:data}
We perform the following procedure to prepare the training data for our proposed CNN model. We use 109 corn ear images with a fixed size of $1024\times768$ as our training data. Table \ref{tab:data_stat} shows the statistics of the dataset. The statistics reported in Table \ref{tab:data_stat} are for the side of each corn ear that was captured in the image.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Number of Images & Resolution & Min & Max & Avg & Total \\ \hline
109 & $1024\times768$ & 16 & 1,116 & 182.02 & 19,848 \\ \hline
\end{tabular}
\caption{The statistics of the dataset used in this study. Min, Max, Avg, and Total denote the minimum, maximum, average, and total kernel numbers, respectively.}
\label{tab:data_stat}
\end{table}
To better train the proposed CNN model, we augment the dataset in the following way. First, we construct a multi-scale pyramidal representation of each training image as in \cite{boominathan2016crowdnet} by considering scales of 0.4 to 1.6, incremented in steps of 0.1, times the original image resolution. Then, we randomly crop 80 patches of size $300\times300$ pixels from each scale of the image pyramid. Such augmentation makes the proposed model robust against scale variations. In order to make the proposed model robust against ear orientation and lighting conditions, we perform extensive augmentations such as random rotation, flipping, adding Gaussian noise, and modifying brightness and contrast on 40\% of the randomly selected cropped image patches.
\subsection{Evaluation Metrics}
To evaluate the performance of our proposed model, we use the mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE) metrics, which are defined as follows:
\begin{eqnarray}
MAE=\frac{1}{N}\sum_{i=1}^{N}|C_{i}^{GT}-C_{i}^{pred}|\label{eq:MAE_eq}
\end{eqnarray}
\begin{eqnarray}
RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}|C_{i}^{GT}-C_{i}^{pred}|^2}\label{eq:RMSE_eq}
\end{eqnarray}
\begin{eqnarray}
MAPE=\frac{1}{N}\sum_{i=1}^{N}\bigg|\frac{C_{i}^{GT}-C_{i}^{pred}}{C_{i}^{GT}}\bigg|\times 100\label{eq:MAPE_eq}
\end{eqnarray}
\noindent where $N$, $C_{i}^{pred}$, and $C_{i}^{GT}$ denote the number of test images, the predicted count for the $i$th image, and the ground truth count for the $i$th image, respectively.
\subsection{Semi-supervised Learning}
Semi-supervised learning utilizes both labeled and unlabeled data during the training step. The amount of labeled data is often scarce in many learning tasks. As a result, semi-supervised learning methods can leverage unlabeled data to improve the learning accuracy. For example, \cite{caron2018deep} proposed a clustering method called DeepCluster which jointly learns the parameters of the convolutional neural network and the cluster assignments. Their proposed method uses the cluster assignments as pseudo-labels to learn the weights of the network. \cite{wu2019progressive} developed a progressive framework for person re-identification based on only one labeled example. Their proposed framework generates pseudo-labeled data and uses them along with the original labeled data to train the CNN network. In another study, \cite{xie2020self} proposed a self-training method which generates pseudo-labeled images and uses them with labeled images during training to further improve the learning accuracy. To improve the performance of our proposed method, we adopt a semi-supervised learning approach to generate pseudo-density maps and use them to train our proposed method. We use the noisy student training algorithm proposed by \cite{xie2020self}, which is as follows:
\begin{enumerate}
\item We train the proposed model, called the teacher, on the labeled dataset $\{(X_i,D_i), \, i=1,...,n\}$, where $X_i$, $D_i$, and $n$ are the $i$th labeled image, the $i$th ground truth density map, and the number of labeled images, respectively.
\item We use the trained teacher model, denoted as $F$, to generate pseudo-density maps, denoted as $\tilde{D}$, for the unlabeled dataset $\{\tilde{X_j}, j=1,...,m\}$, where $\tilde{X_j}$ and $m$ are the $j$th unlabeled image and the number of unlabeled images, respectively: $\tilde{D_j}=F(\tilde{X_j}), \, j=1,...,m$
\item Finally, we retrain the proposed model with noise, called the student, using both labeled and pseudo-labeled images.
\end{enumerate}
In all, we use this semi-supervised learning approach to perform a two-level training in which the teacher learns on actual labeled images and is then employed to generate pseudo-labeled images. In the end, the student model learns on both labeled and pseudo-labeled images with the intent that the student becomes better than the teacher.
\subsection{Training Details}
We train our proposed model end-to-end using the following training parameters. We apply the data augmentation described in Section \ref{sec:data} to our dataset, which results in 154,169 image patches. We randomly take 90\% of the image patches as training data and use the rest as validation data to monitor the training process. We initialize the weights of the network with the Xavier initialization \citep{glorot2010understanding}. The loss function defined in Equation \eqref{eq:eq_loss} is minimized with mini-batch gradient descent using the Adam optimizer \citep{kingma2014adam} and a mini-batch size of 16. We set the learning rate to 3e-4, which is gradually decayed to 2.5e-5 during training. The model was trained for 90,000 iterations. Figure \ref{fig:loss_plot} shows the plot of training and validation losses during the training process. For the semi-supervised learning part, we first used our trained model as a teacher and generated pseudo-density maps for 1,000 unlabeled images of corn ears. Then, we added input noise, including random rotation, flipping, and color augmentations, to make a noisy dataset of 30,000 pseudo-labeled images. We trained the student model with a mini-batch size of 16, which includes 2 samples from the pseudo-labeled images and 14 samples from the original labeled dataset. We trained the student model for 200,000 iterations. A sketch of this batch composition is given below.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Loss_plot.JPG}
\caption{Plot of training and validation losses of the DeepCorn model during the training process.}
\label{fig:loss_plot}
\end{figure}
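The mixed mini-batches used for the student can be sketched as a simple generator. The datasets are assumed to be indexable collections of (image, density map) pairs, and the input-noise augmentations are omitted for brevity.

\begin{verbatim}
import numpy as np

def student_batches(labeled, pseudo, batch_size=16, n_pseudo=2, iters=200000):
    """Yield mixed mini-batches for noisy-student training: n_pseudo
    pseudo-labeled and (batch_size - n_pseudo) labeled pairs per batch."""
    rng = np.random.default_rng(seed=0)
    for _ in range(iters):
        idx_l = rng.choice(len(labeled), size=batch_size - n_pseudo,
                           replace=False)
        idx_p = rng.choice(len(pseudo), size=n_pseudo, replace=False)
        yield [labeled[i] for i in idx_l] + [pseudo[j] for j in idx_p]
\end{verbatim}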
\subsection{Training Details}
We train our proposed model end-to-end using the following training parameters. We apply the data augmentation described in Section \ref{sec:data} to our dataset, which results in 154,169 image patches. We randomly take 90\% of the image patches as training data and use the rest as validation data to monitor the training process. We initialize the weights of the network with the Xavier initialization \citep{glorot2010understanding}. We use the Adam optimizer \citep{kingma2014adam} with a mini-batch size of 16 to minimize the loss function defined in Equation \eqref{eq:eq_loss}. We set the learning rate to $3\times10^{-4}$ and gradually decay it to $2.5\times10^{-5}$ during training. The model was trained for 90,000 iterations. Figure \ref{fig:loss_plot} shows the plot of the training and validation losses during the training process.
For the semi-supervised learning part, we first used our trained model as a teacher and generated pseudo-density maps for 1,000 unlabeled images of corn ears. Then, we added input noise, including random rotations, flips, and color augmentations, to make a noisy dataset of 30,000 pseudo-labeled images. We trained the student model with a mini-batch size of 16 which includes 2 samples from the pseudo-labeled images and 14 samples from the original labeled dataset. We trained the student model for 200,000 iterations.
\begin{figure}[H] \centering \includegraphics[scale=0.4]{Loss_plot.JPG} \caption{Plot of the training and validation losses of the DeepCorn model during the training process.} \label{fig:loss_plot} \end{figure}
\subsection{Design of Experiments}
To evaluate the counting performance of our proposed model, we compare it with ten state-of-the-art models originally proposed for crowd counting, but applicable to other counting problems with dense objects, which are as follows:\\
\noindent \textbf{DensityCNN:} proposed by \cite{jiang2020density}, this model uses a density-aware CNN which utilizes high-level semantic information to provide guidance and constraints when generating density maps. Their proposed method adopts a multi-task group-based CNN structure to jointly learn density-level classification and density map estimation.
\noindent \textbf{SCAR:} proposed by \cite{gao2019scar}, this model utilizes a CNN with spatial-/channel-wise attention modules to estimate density maps. The spatial-wise attention module of their proposed model encodes the pixel-wise context of the entire image to more accurately predict density maps at the pixel level, while the channel-wise attention module extracts more discriminative features among different channels.
\noindent \textbf{ACSPNet:} proposed by \cite{ma2019atrous}, this model employs an atrous convolution spatial pyramid network for density estimation. Their proposed CNN model uses atrous convolutions to enlarge the receptive field and maintain the resolution of the extracted features. Their proposed model also uses atrous spatial pyramid pooling to merge features at different scales to counter image scale variations.
\noindent \textbf{ASD:} proposed by \cite{wu2019adaptive}, this model introduces an adaptive scenario discovery framework for density estimation. Their proposed model has two parallel sub-networks that are trained with different sizes of the receptive field to represent different scales and object densities. Their proposed model also adaptively assigns weights to the outputs of the two sub-networks by discovering and modeling the dynamic scenarios implicitly.
\noindent \textbf{SPN:} proposed by \cite{chen2019scale}, this model uses a CNN with a scale pyramid network structure which uses a shared single deep column structure to extract multi-scale information in high layers via a scale pyramid module. Their proposed model adopts different rates of dilated convolutions in parallel in the scale pyramid module to make the model robust against image scale variations.
\noindent \textbf{SaCNN:} proposed by \cite{zhang2018crowd}, this model uses a scale-adaptive CNN architecture with a VGG backbone to estimate the density map. The SaCNN model merges feature maps from two different scales of the network to make the proposed network robust against scale variation.
\noindent \textbf{CSRNet:} proposed by \cite{li2018csrnet}, this model includes a VGG backbone as the front-end for 2D feature extraction and a dilated CNN as the back-end. CSRNet adopts dilated convolutional layers to enlarge receptive fields and replace pooling operations. In the end, the output of the network is upsampled to estimate the density map.
\noindent \textbf{MSCNN:} proposed by \cite{zeng2017multi}, this model uses inception-like modules to extract scale-relevant features in its CNN architecture and estimate the density map. The inception-like module is composed of multiple filters with different kernel sizes to make the model robust against scale variation.
\noindent \textbf{CrowdNet:} proposed by \cite{boominathan2016crowdnet}, this model combines a deep CNN and a shallow CNN to predict the density map. CrowdNet uses VGG as a deep network to extract high-level semantic features and a three-layer shallow network to extract low-level features. This network design makes the model robust against scale variation.
\noindent \textbf{DeepCrowd:} proposed by \cite{wang2015deep}, this model is a regression-based approach which directly maps the input image to the count. DeepCrowd uses a custom CNN architecture which includes five convolutional layers and two fully connected layers at the end.
\subsection{Final Results}
Having trained our proposed model, we can now evaluate its performance in counting corn kernels. To estimate the number of kernels on a whole corn ear using a 180-degree image, we count the number of kernels on one side of the ear and then double the count to estimate the total number of kernels on the ear; owing to physiological factors, farmers and agronomists commonly assume that corn ears are symmetric \citep{bennetzen2008handbook}.
However, we empirically found that a multiplier coefficient of 2.10 works best for approximating the number of kernels on both sides of an ear from a 180-degree image. To evaluate the performance of our proposed method, we manually counted the total number of kernels on 291 different corn ears. Specifically, for our evaluation, the ground truth for these 291 corn ears, and for the remainder of this paper, is the number of kernels on the entire ear. We use our proposed model along with the models described in the design of experiments section to predict the number of kernels on these corn ears given a single image from a single side of the ear. The competing models are trained on our corn dataset described in Section \ref{sec:data}, and we report the results on the testing set. The test data include many difficult test images such as non-symmetric corn ears and uncontrolled lighting conditions. Table \ref{tab:result1} compares the performances of the competing methods with respect to the evaluation metrics. We report the performance of two versions of our proposed model, namely the teacher and student models. The teacher model is trained on the original labeled data, while the student model is trained on both the labeled and pseudo-labeled data.
\begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|} \hline Method & MAE & RMSE & MAPE &\begin{tabular}{c} Number of \\ Parameters (M)\\ \end{tabular}\\ \hline DensityCNN \citep{jiang2020density} & 50.76&67.98 &52.83& 18.26\\ \hline SCAR \citep{gao2019scar}&47.95&68.02 & 35.40&16.27\\ \hline ACSPNet \citep{ma2019atrous} & 55.13&75.04&43.93&1.81\\ \hline ASD \citep{wu2019adaptive}& 50.09&67.84&38.45&50.86\\ \hline SPN \citep{chen2019scale} & 50.21 &64.21&35.82&32.41\\ \hline SaCNN \citep{zhang2018crowd} & 49.14 &68.30&50.34&25.07 \\ \hline CSRNet \citep{li2018csrnet} & 56.52& 74.21&54.65&16.26\\ \hline MSCNN \citep{zeng2017multi} & 56.07& 74.94&60.61&3.08 \\ \hline CrowdNet \citep{boominathan2016crowdnet} &90.50 &120.21&44.20&14.75 \\ \hline DeepCrowd \citep{wang2015deep} & 93.11 &112.56 &87.73&71.92 \\ \hline Our proposed (teacher) & 44.91 & 65.92&37.87&26.61 \\ \hline Our proposed (student) & \textbf{41.36} & \textbf{60.27}&\textbf{35.38}&26.61 \\ \hline \end{tabular} \caption{The comparison of the competing methods on kernel counting performance on the 291 corn ears.}\label{tab:result1} \end{table}
As shown in Table \ref{tab:result1}, the proposed method outperformed the other methods to varying extents. The student model outperformed the teacher model due to using semi-supervised learning, which helps the proposed model generalize better to the test dataset. The SaCNN performed better than the other methods except our proposed method and SCAR due to using a scale-adaptive architecture and merging feature maps from two different scales. Such an architecture makes the SaCNN more robust against scale variation. CSRNet, ACSPNet, and MSCNN had comparable performances and outperformed the CrowdNet and DeepCrowd methods. The SCAR method outperformed the other methods except our proposed method with respect to MAE due to using spatial-/channel-wise attention modules in its network, which help extract more discriminative features. The DensityCNN, ASD, SaCNN, and SPN had similar performances; however, SaCNN had a lower MAE compared to these methods. The proposed method performed considerably better than the other methods due to its deep architecture and its merging of multiple feature maps from different depths of the network, which make the model significantly robust against scale variation.
The DeepCrowd method, as a regression-based method, had a performance similar to that of the CrowdNet method. At test time, we have to crop a test image into non-overlapping patches and feed them as input to all methods except for our proposed method and CrowdNet. Then, we assemble the corresponding estimated density maps to obtain the final total count for the image. Otherwise, the performances of these methods are not satisfactory when the whole image is fed to them. In contrast, our proposed method and CrowdNet take the whole image as input and output the corresponding density map in a single forward pass. Therefore, the inference times of DeepCorn and CrowdNet are significantly smaller than those of the other methods. The highest inference time belongs to the ACSPNet method because it does not use any downsampling in its network architecture and performs convolution operations at high resolution throughout the network. The inference time of the proposed method is 0.65 seconds on an Intel i5-6300U CPU at 2.4 GHz. Therefore, our proposed method achieves the highest prediction accuracy while having a considerably small inference time and a moderate number of parameters.
Figure \ref{fig:vis_result} visualizes a sample of the results, including original images, estimated density maps, and segmented ear images for our proposed method. As shown in Figure \ref{fig:vis_result}, the estimated and ground truth counts are close, and the estimates are invariant to size, angle, orientation, background, lighting conditions, and the number of ears per image. The prediction accuracy of the proposed model slightly decreased for the completely white ear (image (d) in Figure \ref{fig:vis_result}), mainly due to the fact that most of the training data (more than 95\%) of the proposed model consist of yellow corn ears. As a result, the model may confuse actual kernels with the corn cob itself because they have the same color. This problem can be solved by adding more white corn ears to the training data. It is worth mentioning that predicting the total number of kernels on both sides of an ear using a 180-degree image is difficult because corn ears are not 100\% symmetric, and, thus, we have a non-reducible error in our estimation.
\begin{figure}[H] \centering \includegraphics[scale=0.20]{visual_res.png} \caption{Visual results of our proposed method. The first, second, and third rows are input images, estimated density maps, and segmented ears, respectively. GT and Pred stand for the ground truth number of kernels and the predicted number of kernels, respectively.} \label{fig:vis_result} \end{figure}
\section{Analysis}
\subsection{Robustness and Sensitivity Analysis}
To estimate the total number of kernels on an ear using a 180-degree image, we count the number of kernels on one side of the ear and then multiply the count by 2.10 to estimate the total number of kernels on the entire ear. To evaluate the robustness and sensitivity of our proposed method in using a 180-degree image for kernel counting on an entire corn ear, we perform the following analysis. For 257 test ears, we take an image of one side of the ear and then flip the ear to the backside and take another image. As in the previous section, the ground truth is the number of kernels on the entire ear. For this section, the number of test ears changes from 291 to 257 because only 257 out of the 291 test corn ears have both frontside and backside images, which are needed for this experiment.
We estimate the total number of kernels on a corn ear using the following scenarios:\\
\noindent \textbf{Frontside estimation:} We estimate the number of kernels using the front-side image of the ear, and then multiply the counted kernels by the empirical coefficient of 2.10 to account for the back side of the ear.
\noindent \textbf{Backside estimation:} We rotate the ear 180 degrees and estimate the number of kernels using the back-side image of the ear. Then, we multiply the counted kernels by the empirical coefficient of 2.10 to also account for the front side of the ear.
\noindent \textbf{Both side estimation:} We estimate the total number of kernels on an ear using images of both sides of the ear. As such, the total number of kernels is equal to the sum of the estimated kernels on the front and back sides of the ear.
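In code, the three estimators are one-liners; each side count comes from summing the predicted density map of the corresponding image, and the function names are ours.
\begin{verbatim}
SYMMETRY_COEF = 2.10  # empirical front/back multiplier

def frontside_estimate(front_count):
    return SYMMETRY_COEF * front_count

def backside_estimate(back_count):
    return SYMMETRY_COEF * back_count

def bothside_estimate(front_count, back_count):
    return front_count + back_count
\end{verbatim}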
Table \ref{tab:result_rotate} shows the kernel counting performances of our proposed method (DeepCorn) and the SaCNN method in the above-mentioned experiment. As shown in Table \ref{tab:result_rotate}, the proposed method outperformed the other method in all three scenarios. The results indicate that our proposed method can estimate the number of kernels with reasonable accuracy using 180-degree images and is robust no matter on which side the ear image is captured. If we compare the frontside and backside performances of DeepCorn and SaCNN, we see that DeepCorn shows more sensitivity for two main reasons: (1) as shown in Figure \ref{fig:two_sides}, there are some ears in the test dataset for which the density of kernels on the front and back sides is significantly different, which makes the estimation of kernels based on only a 180-degree image difficult, and (2) our proposed method is more accurate than the SaCNN method, which makes its predictions more sensitive for ears that have different kernel densities on the front and back sides. As shown in Table \ref{tab:result_rotate}, the proposed method is most accurate when using images of both sides of the ear, mainly because corn ears are not 100\% symmetric, and we recommend using images of both sides for ears with a heterogeneous kernel distribution.
\begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Method & MAE & RMSE &MAPE \\ \hline SaCNN (frontside) & 50.81 &70.54&54.02\\ \hline DeepCorn (frontside) & \textbf{41.62} & \textbf{60.24}& \textbf{37.40}\\ \hline SaCNN (backside) & 50.45 &71.26&49.73\\ \hline DeepCorn (backside) & \textbf{44.71} & \textbf{67.07}&\textbf{36.72} \\ \hline SaCNN (bothside) & 47.61 &66.91&50.15\\ \hline DeepCorn (bothside) & \textbf{33.03} & \textbf{52.11}&\textbf{29.72} \\ \hline \end{tabular} \caption{The comparison of the DeepCorn and SaCNN methods on kernel counting performance on the 257 test corn ears. We use images of the frontside, the backside, and both sides of the ear to estimate the number of kernels on the entire ear. DeepCorn here refers to the teacher model version.} \label{tab:result_rotate} \end{table}
\begin{figure}[H] \centering \includegraphics[scale=0.06]{two_sides_ear.jpg} \caption{Images (a) and (b) show the frontside and backside of a test ear with a heterogeneous kernel distribution, respectively. The frontside, backside, and both side kernel estimates for this ear using the DeepCorn method are 144, 227, and 177, respectively. The ground truth number of kernels for this ear is 157. As a result, we recommend using images of both sides for ears with a heterogeneous kernel distribution.} \label{fig:two_sides} \end{figure}
To further compare the counting performances of the SaCNN and our proposed models, we report the total numbers of miscounted and correctly counted kernels of these models based on both-side kernel estimation on the 257 test corn ears.
\begin{table}[H] \centering \begin{tabular}{|c|c|c|} \hline Method & Miscounted Kernels &Correctly Counted Kernels \\ \hline SaCNN & 12,238 &42,379\\ \hline DeepCorn &8,489&46,128\\ \hline \end{tabular} \caption{The total numbers of miscounted and correctly counted kernels. The total number of all existing kernels in the 257 test corn ears is 54,617.} \label{tab:result_subset} \end{table}
It is worth mentioning that the 257-ear test dataset used in our experiment is considered a difficult test set for the following reason. Our training data consist mostly of yellow corn ears, while the majority of the test ears are white, a color which occurs in some corn breeds and in corn ears harvested early, before reaching full maturity. Such white ears can cause the model to confuse kernels with the cob itself because they share the same color. This problem can be solved by adding more white ears to the training data. As a result, we also report the kernel counting performances of the SaCNN and the proposed model on the subset of the test dataset which only includes yellow corn ears. As shown in Table \ref{tab:result_sub_yellow}, the overall prediction error of the proposed model is $9.5\%$ on yellow corn ears, which indicates that the miscounted kernels are mostly attributable to the white corn ears, owing to the scarcity of such ears in the training data. Table \ref{tab:yellow_part} also shows the performances of our proposed method and the SaCNN on the subset of the test dataset which only includes yellow corn ears with respect to all evaluation metrics.
\begin{table}[H] \centering \begin{tabular}{|c|c|c|} \hline Method & Miscounted Kernels &Correctly Counted Kernels \\ \hline SaCNN & 1,609 &10,895\\ \hline DeepCorn &1,190&11,314\\ \hline \end{tabular} \caption{The total numbers of miscounted and correctly counted kernels in the subset of the test data which only includes yellow corn ears. The total number of all existing kernels in the 54 yellow test corn ears is 12,504.} \label{tab:result_sub_yellow} \end{table}
\begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Method & MAE & RMSE &MAPE \\ \hline SaCNN (frontside) & 30.44 &39.66&20.99\\ \hline DeepCorn (frontside) & \textbf{21.19} & \textbf{25.91}& \textbf{11.21}\\ \hline SaCNN (backside) & 38.52 &\textbf{50.15}&23.52\\ \hline DeepCorn (backside) & \textbf{34.96} & 50.82&\textbf{19.31} \\ \hline SaCNN (bothside) & 29.80 &37.86&19.33\\ \hline DeepCorn (bothside) & \textbf{22.04} & \textbf{30.29}&\textbf{11.16} \\ \hline \end{tabular} \caption{The comparison of the DeepCorn and SaCNN methods on kernel counting performance in the subset of the test data which only includes 54 yellow corn ears. We use images of the frontside, the backside, and both sides of the ear to estimate the number of kernels on the entire ear. DeepCorn here refers to the teacher model version.} \label{tab:yellow_part} \end{table}
To further analyze the effect of kernel distribution on kernel estimation using a single 180-degree image, we use five normal ears with a homogeneous kernel distribution and estimate their total number of kernels four times based on images taken at angles of 0, 90, 180, and 270 degrees using our proposed method.
That is, we simply rotate the ear and take an image from each side with the hope that, no matter which side, we achieve a consistent kernel count estimate. Figure \ref{fig:barplot} displays the bar plot of the estimated kernels for these five ears at the four different angles. As shown in Figure \ref{fig:barplot}, the estimates at different angles are close to each other, which indicates that our proposed method is robust against the image angle for ears with a homogeneous kernel distribution.
\begin{figure}[H] \centering \includegraphics[scale=0.28]{barplot_2.png} \caption{Bar plot of the estimated kernels for five ears at four different angles (0, 90, 180, and 270 degrees) using our proposed method. GT stands for the ground truth number of kernels.}\label{fig:barplot} \end{figure}
\subsection{Yield Estimation}
As previously mentioned, the main application of our corn kernel counting methodology is to estimate in-season corn yield (i.e., before harvest). Having an accurate, efficient yield estimator enables farmers and agronomists to make real-time grain management decisions to maximize yield and minimize potential losses of profitability. This method, which is often called the yield component method, enables farmers and agronomists to estimate corn yields accurately within $\pm 20$ bushels per acre \citep{licht2017estimating}. This procedure requires taking a representative sample of ears from the field and manually counting the kernels to determine the average number of kernels per ear. However, because a healthy ear of corn contains 650--800 kernels, manually counting individual kernels is time-consuming, labor-intensive, and subject to human error. Additionally, a farmer would need a large number of ears to achieve an accurate estimation of yield, adding to an already time-consuming and exhausting task. To remedy this bottleneck, our proposed kernel counting method can be used to count multiple ears in a short time period to accelerate the yield estimation process, allowing for real-time in-field yield estimation.
\cite{licht2017estimating} provides a classical way to estimate corn yield based on kernel counts. The formula is based on the following components:
\noindent \textbf{Stand counts (plants per acre):} This factor is the number of plants per acre and is usually determined by the number of seeds planted, seed quality, and other environmental factors. The number of ears per plant is considered to be one because most corn hybrids have one dominant ear which produces kernels.
\noindent \textbf{Kernel weight:} A kernel can weigh in the range of 0.26--0.27 grams. The variation in kernel weight is a response to environmental stress; higher levels of environmental stress decrease kernel weight, while lower levels of stress have the inverse effect. Accurately measuring kernel weights is difficult and time-consuming, so estimates are used to provide an evaluation of potential yield for farmers and agronomists. The number of kernels per bushel can range from 65,000 to 100,000. Ninety thousand kernels per bushel is the industry default when conducting yield estimations. Outside of extreme weather conditions, the kernel count per bushel is on average between 85,000 and 90,000 \citep{ngoune2020estimation}.
\noindent \textbf{Average number of kernels per ear:} This factor indicates the average number of kernels per ear, determined by taking a representative sample of ears from the field and counting their kernels.
Combining these factors, \cite{licht2017estimating} provides a way to estimate corn yield in bushels per acre using Equation \ref{eq:yield_stimation}.
\begin{eqnarray} \textit{Corn Yield}=\frac{\frac{\textit{Ears}}{\textit{Acre}}\times \frac{\textit{Average Kernels}}{\textit{Ear}}}{\frac{\textit{90,000 Kernels}}{\textit{Bushel}}}\label{eq:yield_stimation} \end{eqnarray}
Figure \ref{fig:diagram_yield} shows the diagram of the yield estimation procedure.
\begin{figure}[H] \centering \includegraphics[scale=0.15]{yeild_estimation_diagram.png} \caption{The diagram of the yield estimation procedure.} \label{fig:diagram_yield} \end{figure}
To show how the proposed yield estimation method can be used, we perform the following analysis. Let us assume that three different corn seeds have been planted in three different fields, and that these seeds can be categorized as low, medium, and high yielding, as determined by the size of the produced corn ears. Let us also assume a stand count of 32,000 plants per acre in all three fields. Table \ref{tab:yield_estimation} shows the yield estimation results. As shown in Table \ref{tab:yield_estimation}, the size of the ear has a significant effect on the final estimated yield. To reduce the yield estimation error, we recommend using a batch of ears that represents the overall condition of the field well.
\begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|} \hline Input Image & \begin{tabular}{c} Seed\\ Type \end{tabular} & \begin{tabular}{c} Estimated \\ Kernels \end{tabular}& \begin{tabular}{c} Stand\\ Counts \end{tabular} & \begin{tabular}{c} Estimated \\Yield (bu/ac) \end{tabular} \\ \hline \raisebox{-\totalheight}{\includegraphics[width=45mm, height=30mm]{IMG_5918.JPG}} &High Yielding & 646&32,000&229.69\\ \hline \raisebox{-\totalheight}{\includegraphics[width=45mm, height=30mm]{IMG_5900.JPG}} &Medium Yielding & 541&32,000&192.36\\ \hline \raisebox{-\totalheight}{\includegraphics[width=45mm, height=30mm]{IMG_5913.JPG}} &Low Yielding & 300&32,000&106.67\\ \hline \end{tabular} \caption{The yield estimation results based on 3 different ears. bu/ac stands for bushels per acre.} \label{tab:yield_estimation} \end{table}
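As a minimal illustration, the following helper evaluates Equation \ref{eq:yield_stimation}; the function name is ours, and the default of 90,000 kernels per bushel is the industry default mentioned above.
\begin{verbatim}
def estimate_corn_yield(stand_count, avg_kernels_per_ear,
                        kernels_per_bushel=90000):
    # Yield in bushels per acre; stand_count is ears (plants)
    # per acre, assuming one dominant ear per plant.
    return stand_count * avg_kernels_per_ear / kernels_per_bushel

# Reproduces the first row of the yield table:
# estimate_corn_yield(32000, 646) -> 229.69 bu/ac (rounded)
\end{verbatim}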
\section{Conclusion}
In this paper, we presented a deep learning based method named DeepCorn for the corn kernel counting problem. The proposed model uses VGG-16 as a backbone for feature extraction. To make the proposed model robust against scale variations, the network merges feature maps from multiple scales of the network using skip connections. In the end, we upsample the output of the last layer of the network using bilinear interpolation to estimate the density map. To further improve the performance of the proposed method, we used a semi-supervised learning approach to generate pseudo-density maps; these pseudo-density maps, along with the original density maps, were then used to train our proposed method. Our extensive experimental results demonstrate that our proposed method can successfully count the corn kernels on corn ears regardless of their orientation and the lighting conditions, as well as outperform other state-of-the-art methods commonly used in similar tasks. The results also indicated that the semi-supervised learning approach improves the accuracy of the proposed method. Our experimental results demonstrated that our proposed method can predict the number of kernels accurately using a 180-degree image and is robust no matter on which side the ear image is captured. However, if a corn ear is significantly non-symmetric, it is best to estimate the number of kernels using both frontside and backside images of the ear. Our proposed method can be integrated with yield estimation methods to perform real-time in-season yield estimation. Yield estimation methods rely on taking a representative sample of ears from the field and manually counting the kernels to determine the average number of kernels per ear. As a result, our proposed corn kernel counting method can be used to count multiple ears in a short time period to accelerate the yield estimation process. We hope that our work leads to the advancement of high throughput phenotyping to benefit plant science as a whole.
\section*{Conflicts of Interest}
The authors declare no conflict of interest.
\section*{Acknowledgement}
This work was partially supported by the National Science Foundation under the LEAP HI and GOALI programs (grant number 1830478) and under the EAGER program (grant number 1842097). Additionally, this work was partially supported by Syngenta.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
The lack of high throughput phenotyping (HTP) is a limiting factor facing modern agriculture. Because the underlying tasks are labor intensive, phenotyping crops to identify color, stand count, leaf count, plant height, etc. is severely limited. This ``phenotyping bottleneck'' restricts our capability to examine how phenotypes interact with the plant's genetic factors as well as environmental factors \citep{furbank2011phenomics}. For a farmer who manages upwards of 10,000 acres, it is infeasible to inspect each plant individually to identify its characteristics. In an ideal scenario, HTP results in the collection, annotation, and labeling of massive amounts of data for analysis that is vital for advancing plant breeding. Unlike in other domains, live, in-field data can only be collected during a specific period of a plant's growth cycle. If this time is missed, then a farmer or breeder must wait until the next growing period to collect more data, which, in some cases, may be one year later. To mitigate this issue, agronomists have turned to image-based capturing techniques (such as phones and drones) as a means of data collection and storage. Through these images, farmers are no longer bound by a plant's growth cycle and can thus phenotype a crop at a later date. However, with the influx of a massive amount of image-based data, farmers and breeders are now faced with a similar but new challenge: analyzing massive amounts of images quickly. To effectively perform image-based HTP, tools must be made available to farmers and breeders to make real-time decisions to manage their crops against pests, disease, drought, etc. to maximize growth and, ultimately, yield.
Modern agriculture has evolved to encompass drone, satellite, and cellphone imagery as methods for data collection and storage. The purpose of these technologies is to capture still images so that plant characteristics can be identified and analyzed at a later date. Although this helps mitigate the data collection phase of HTP, it creates a new problem: accurately and efficiently analyzing the captured images to obtain the desired information. Recent years have seen advancements at the intersection of plant phenotyping and traditional machine learning approaches to identify stress, coloring, and head count in soybeans, wheat, and sorghum \citep{singh2016machine, naik2017real, yuan2018wheat, guo2018aerial}.
These studies showcase and emphasize the impact HTP can have on advancing our knowledge of plants and their interaction with the environment, as well as on making effective real-time decisions to protect a crop during its growing season. Although traditional machine learning approaches have seen success in agriculture, the current state of the art in HTP is the application of deep learning. Deep learning techniques in agriculture are used for image classification, object detection, and counting. Common deep learning frameworks such as AlexNet \citep{krizhevsky2017imagenet}, LeNet \citep{lecun1998gradient}, and VGG-16 \citep{simonyan2014deep} have been applied to classify diseases in tomatoes, cherries, apples, peppers, and olives \citep{mohanty2016using, cruz2017x,wang2017automatic}. Various papers have also utilized CNN architectures such as ResNet50 to count leaves and sorghum heads \citep{giuffrida2018pheno, mosley2020image}. Outside of existing frameworks, \cite{lu2017tasselnet} created a novel approach to identify corn tassels by combining convolutional neural networks (CNN) and local counts regression into what they refer to as TasselNet. In addition to these various applications, numerous annotated datasets across different crops have been made publicly available to researchers for classification and detection problems \citep{zheng2019cropdeep,sudars2020dataset,haug2014crop}. Indeed, these works show how rapidly the combination of deep learning and plant breeding is growing and provide hope for mitigating the bottleneck facing the analysis of crop images. For a comprehensive overview of image-based HTP, we refer the reader to the paper by \cite{jiang2020convolutional}.
Although there is vast literature covering various crops and object detection approaches, there are few studies that perform HTP on commercial corn (\textit{Zea mays} L.). Previous studies proposed deep learning based methods to accurately predict corn yield based on factors such as genotype, weather, soil, and satellite imagery, but none of these studies are considered HTP on commercial corn \citep{khaki2019classification,russello2018convolutional,khaki2019cnn,khaki2019crop,khaki2020predicting}. Due to the number of food and industrial products that depend on it, corn is widely regarded as one of the world's most vital crops \citep{berardi2019flooding}. Not only is corn used to create products such as starch, flour, and ethanol, it is also the primary feed for livestock (pigs, cows, cattle, etc.) due to being rich in nutrients and proteins \citep{nazli2018potential}. In 2019, corn was the United States' (U.S.) largest grown crop, accounting for more than 90 million acres of land and adding more than \$140 billion to the U.S. economy \citep{usda2019}. By 2050, the world's population is expected to reach 9.1 billion \citep{stephenson2010population}. With the world's population increasing and the amount of arable land not increasing, changes must occur to maximize corn yield while maintaining the same (or fewer) input parameters.
A practical approach to increasing corn yield is to provide agronomists and farmers with a precise, real-time mechanism for estimating yield during the growing season. Such a mechanism would enable farmers to make real-time grain management decisions to maximize yield and minimize potential losses of profitability. By having an estimate of yield, farmers are able to decide on optimal management practices (when to apply fungicide, nitrogen, fertilizer, etc.)
to aid the yield potential of corn. Currently, the process of estimating in-season yield relies on manually counting corn kernels and using certain formulas to get an approximate yield per acre. However, a healthy ear of corn contains 650--800 kernels. Therefore, individually counting each kernel on an ear is a labor-intensive, time-consuming, and error-prone task. Moreover, to provide an accurate representation of a field's true yield, a farmer would need to count kernels on multiple ears, further adding to the labor and time requirements. To aid in solving this HTP bottleneck, \cite{khaki2020convolutional} developed a sliding window convolutional neural network (CNN) approach to count corn kernels from an image of a corn ear. However, their approach did not estimate yield. Moreover, their proposed approach required a fixed distance between the ears and the camera and was not invariant to the ear orientation or the kernel color. Their sliding window approach also increased the inference time. Because of these limitations, their approach is not suitable for large-scale deployment.
Given the need to effectively and efficiently count corn kernels to estimate yield, we present a novel approach to solve this problem by utilizing a new deep learning architecture, which we name DeepCorn. The problem of counting kernels is similar to counting dense groups of people in crowds due to the compactness and the varying angles and sizes of kernels. As such, we compare our method to commonly used crowd counting methods in the literature, which are applicable to other counting studies. Specifically, the novelties of our approach include:
\begin{enumerate}
\item A robust on-ear kernel counting method that is invariant to image scale variation, ear shape, size, orientation, lighting conditions, and color
\item A new deep learning architecture that is superior to commonly used crowd counting models
\item The utilization of a semi-supervised learning approach to further improve the performance of the proposed method. To the best of our knowledge, our paper is the first in the crowd counting literature to generate pseudo-density maps for unlabeled images to improve counting accuracy \citep{gao2020cnn}.
\item An approach to effectively and efficiently estimate corn yield based on the output of our proposed method
\item A kernel counting dataset to benchmark our proposed method
\end{enumerate}
To achieve this goal, in Section \ref{method} we provide an overview of our deep learning architecture. Section \ref{experiments} provides the details about our experimental setup, dataset and annotations, evaluation metrics, and benchmark models. Analysis is performed in Section 4 to study the robustness of our framework and a procedure for estimating yield. Lastly, Section 5 concludes with our key results and findings.
Our proposed method is motivated by density estimation-based crowd counting methods, since both aim to count a large number of densely packed objects in an image. Crowd counting methods aim to estimate the number of people in an image, which is challenging due to large image scale variations, background clutter, and occlusions \citep{gao2020cnn, cao2018scale}. Recently, CNN-based density map estimation methods have shown great promise in the crowd counting task \citep{liu2018crowd,liu2019context,ma2019bayesian,liu2019crowd,liu2020efficient}.
For example, \cite{liu2020efficient} proposed a novel structured knowledge transfer framework which uses the structured knowledge of a trained teacher network to generate a lightweight student network for crowd counting. \cite{liu2019crowd} designed a deep structured scale integration network to counter image scale variations using structured feature representation learning and hierarchically structured loss function optimization. Recent state-of-the-art crowd counting methods have also proposed context-aware and Bayesian losses, which proved to be highly successful \citep{ma2019bayesian, liu2019context}.
\section{Methodology}\label{method}
The goal of this study is to count corn kernels in an image of corn ears taken under uncontrolled lighting conditions and, ultimately, to use the number of counted kernels to estimate yield. In this paper, we propose a deep learning based method, DeepCorn, to count all the corn kernels given a single 180-degree image of a corn ear. With this single-angle image, we aim to estimate the number of kernels on the entire corn ear. Although, as shown later in our study, having multiple images from different sides of the ear is beneficial, there are time and automation considerations. From the perspective of a farmer or plant breeder, multiple images may be taken at the expense of a few more seconds of image capturing. However, from an automation perspective, where ears are on a conveyor belt before kernels are shelled off the ear, a single image is all that is manageable. Given these considerations and our goal of creating a robust, generalizable model that is applicable in different processes, we focus our attention on the case where only a single image at a single angle is given for an ear of corn.
Image-based corn kernel counting is challenging due to multiple factors: (1) large variation in kernel shapes and colors (yellow, red, white, etc.), (2) very small distances between kernels, and (3) different orientations, lighting conditions, and scale variations of the ears. Figure \ref{fig:8_ears} displays eight genetically different corn ears which illustrate these factors. Our proposed model is inspired by crowd counting models because both aim to count a large number of densely packed objects in an image \citep{gao2020cnn}.
\begin{figure}[H] \centering \includegraphics[scale=0.05]{5ears.jpg} \caption{Eight genetically different corn ears. The images indicate the scale variations and the genetic differences among ears.} \label{fig:8_ears} \end{figure}
\subsection{Network Architecture}
Corn ear images include various sizes of kernel pixels due to image scale variations and genetically different corn kernel shapes and colors. As such, the proposed method should be able to counter scale variations and learn both high-level semantics (ears, background, etc.) and low-level patterns (kernel edges, colors, etc.). Figure \ref{fig:diagram} outlines the architecture of the proposed method. The proposed network is used to estimate the image density map, whose integral over any region of the density map gives the count of kernels within that region. Various state-of-the-art approaches utilize a density map construction approach to count dense objects in crowds and have been shown to be highly successful \citep{liu2018crowd,liu2019context,ma2019bayesian, liu2019crowd,liu2020efficient}. We use a density estimation-based approach rather than detection-based or regression-based approaches for the following reasons.
Detection-based approaches usually apply an object detection method like a sliding window \citep{khaki2020convolutional} or more advanced methods such as YOLO \citep{redmon2016you}, SSD \citep{liu2016ssd}, and fast R-CNN \citep{girshick2015fast} to first detect objects and subsequently count them. However, their performance is unsatisfactory in dense object counting \citep{gao2020cnn}, and they also require a huge amount of annotated images, which may not be publicly available for corn kernel counting. Regression-based approaches \citep{wang2015deep,chan2008privacy,idrees2013multi,chan2009bayesian} are trained to map an image patch directly to the count. Despite successfully handling problems such as occlusion and background clutter, these methods have a tendency to ignore spatial information \citep{gao2020cnn}.
\begin{figure}[H] \centering \includegraphics[scale=0.22]{diagram.png} \caption{Outline of the DeepCorn architecture. The parameters of the convolutional layers are denoted as ``Conv-(kernel size)-(number of filters)''. The amount of stride for all layers is 1 except for the layers with the ``S2'' notation, for which we use a stride of 2. The padding type is `same' for all convolutional layers except the last layer before the concatenation, which has `valid' padding. \textcircled{\raisebox{-0.8pt}{c}} and $\Sigma$ denote matrix concatenation and summation over the density map, respectively.} \label{fig:diagram} \end{figure}
Our proposed network uses VGG-16 \citep{simonyan2014deep} as a backbone for feature extraction. Originally proposed for image classification, the VGG-16 network stacks convolutional layers with a fixed kernel size of $3\times3$, which usually generalizes well to other vision tasks including object counting and detection \citep{shi2018multiscale,boominathan2016crowdnet,gao2020counting,sang2019improved,valloli2019w,liu2016ssd,kumar2019mtcnet}. We exclude the last max-pooling layer and all fully connected layers from the VGG network. Even though the VGG-16 network was originally designed to process input images of size $224\times224$, we use input images of size $300\times300$ because the proposed network can potentially learn more fine-grained patterns with higher-resolution input images \citep{tan2019efficientnet}.
To make the proposed model robust against scale variations and perspective changes in images, we merge feature maps from multiple scales of the network. As such, the model can easily adapt to scale and perspective changes. Similar scale-adaptive CNN approaches have also been used in other counting and detection studies \citep{zhang2018crowd,sang2019improved,liu2016ssd}. Compared to other studies \citep{zeng2017multi,cao2018scale,boominathan2016crowdnet,sam2017switching,zhang2016single,deb2018aggregated} that used multi-column architectures with different filter sizes to deal with scale variations, our proposed network has fewer parameters due to parameter sharing, which can accelerate the training process. Moreover, these skip connections can prevent the vanishing gradient problem and further accelerate the training process, as recommended in \cite{he2015deep}. To concatenate the feature maps from multiple scales, we increase the spatial dimensions of the feature maps to the size of the largest feature map (the first feature map) using zero padding, which is the concatenation approach recommended in \cite{he2015deep}. After concatenation, the feature maps are passed to two convolutional layers.
We use $1\times1$ convolutional layers throughout our network to increase or decrease its depth without a significant performance penalty \citep{szegedy2015going}. Due to having four max-pooling layers in our network, the spatial resolution of the last convolutional layer of the network is $1/8$ of the input image. Finally, we upsample the output of the last convolutional layer to the size of the input image using bilinear interpolation to estimate the density map. The total count of kernels in the image can be obtained by a summation over the estimated density map. In addition, we put a threshold on the estimated density map to zero out regions where the value of the density map is very small; combining the thresholded density map with the input image results in segmented corn ears.
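The following Keras sketch conveys the overall design under stated assumptions: the tapped VGG-16 layers, the filter widths, and the use of bilinear resizing (rather than the zero padding described above) to match feature-map sizes before concatenation are our illustrative choices, not the exact DeepCorn configuration.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_deepcorn_sketch(input_shape=(300, 300, 3)):
    # VGG-16 backbone without its classification head; Keras's
    # default Glorot (Xavier) initialization matches the training
    # details reported earlier.
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights=None, input_shape=input_shape)

    # Tap feature maps at several depths (scales) of the backbone.
    taps = ["block2_conv2", "block3_conv3",
            "block4_conv3", "block5_conv3"]
    feats = [backbone.get_layer(n).output for n in taps]

    # Bring every map to the size of the largest tapped map before
    # concatenation (bilinear resizing stands in for the paper's
    # zero-padding-based size matching).
    h, w = feats[0].shape[1], feats[0].shape[2]
    feats = [layers.Lambda(lambda t: tf.image.resize(t, (h, w)))(f)
             for f in feats]
    x = layers.Concatenate()(feats)

    # 1x1 convolutions adjust depth cheaply; the final one-channel
    # map is bilinearly upsampled to input resolution.
    x = layers.Conv2D(256, 1, activation="relu")(x)
    x = layers.Conv2D(64, 1, activation="relu")(x)
    x = layers.Conv2D(1, 1, activation="relu")(x)
    density = layers.Lambda(
        lambda t: tf.image.resize(t, input_shape[:2]))(x)

    model = tf.keras.Model(backbone.input, density)
    # Pixel-wise squared error plays the role of the Euclidean loss.
    model.compile(optimizer=tf.keras.optimizers.Adam(3e-4),
                  loss="mse")
    return model

# Count for one image = sum over its predicted density map:
# count = model.predict(img[None])[0].sum()
\end{verbatim}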
\subsection{Kernel Counting Using the Proposed Model}
The ground truth density maps are generated in our study in such a way that the summation over the density map equals the total number of kernels in the image. Such a property enables the proposed model to count the number of kernels in an image as an auxiliary task while learning how much each region of the image contributes to the total count. As a result, the proposed method is trained end-to-end to predict the density maps, and the number of kernels can be easily estimated by a summation over the predicted density map at inference time.
\subsection{Network Loss}
We employ the Euclidean loss, as shown in Equation \eqref{eq:eq_loss}, to train the network. The Euclidean loss is a popular loss in the crowd counting literature due to enhancing the quality of the estimated density map \citep{gao2020counting,boominathan2016crowdnet,shi2018multiscale,wang2019removing}:
\begin{eqnarray} L(\Theta)=\frac{1}{N}\sum_{i=1}^{N}\|F(X_i,\Theta)-D_i\|_{2}^{2}\label{eq:eq_loss} \end{eqnarray}
\noindent where $F(X_i,\Theta)$, $\Theta$, $X_i$, $D_i$, and $N$ denote the predicted density map of the $i$th input image, the parameters of the network, the $i$th input image, the $i$th ground truth density map, and the number of images, respectively. The Euclidean loss measures the distance between the estimated density map and the ground truth: it computes the difference between the predicted and ground truth density maps at the pixel level and then sums over all pixels to compute the loss for each image.
\section{Experiment and Results}\label{experiments}
In this section, we present the dataset used for our experiments, the evaluation metrics, the training hyper-parameters, and the final results. We conducted all experiments in Tensorflow \citep{abadi2016tensorflow} on an NVIDIA Tesla V100 GPU.
\subsection{Dataset}
\subsubsection{Ground Truth Density Maps}
We follow the procedure in \cite{boominathan2016crowdnet} to generate the ground truth density maps. If we assume the center of one corn kernel is located at pixel $x_i$, then the kernel can be represented by a delta function $\delta(x-x_i)$. As such, the ground truth for an image with $M$ kernel center annotations can be represented as follows:
\begin{eqnarray} H(x)=\sum_{i=1}^{M}\delta(x-x_i)\label{eq:eq_1} \end{eqnarray}
\noindent Then, $H(x)$ is convolved with a Gaussian kernel to generate the density map $D(x)$:
\begin{eqnarray} D(x)=\sum_{i=1}^{M}\delta(x-x_i) \ast G_\sigma(x)\label{eq:eq_2} \end{eqnarray}
\noindent where $\sigma$ denotes the standard deviation. The parameter $\sigma$ is determined based on the average distance to the $k$ nearest neighboring kernel annotations. The summation over the density map is the same as the total number of kernels in the image. Using such a density map can be extremely beneficial as it helps the CNN learn how much each region contributes to the total count.
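As a minimal sketch of Equations \eqref{eq:eq_1} and \eqref{eq:eq_2}, the snippet below builds a ground truth density map from kernel-center annotations; for simplicity it uses a fixed $\sigma$, whereas we adapt $\sigma$ to the average distance of the $k$ nearest annotations.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    # H(x): a unit impulse at each annotated kernel center.
    H = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        H[int(r), int(c)] += 1.0
    # D(x): convolution with a Gaussian kernel G_sigma.
    return gaussian_filter(H, sigma)

# Up to border truncation, the map integrates to the number of
# annotations: density_map(pts, (768, 1024)).sum() ~= len(pts)
\end{verbatim}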
In another study, \cite{xie2020self} proposed a self-training method which generates pseudo labeled images and uses them with labeled images during the training to further improves the learning accuracy. To improve the performance of our proposed method, we adopt a semi-supervised learning approach to generate pseudo-density maps and use them to train our proposed method. We use the noisy student training algorithm proposed by \cite{xie2020self} which is as follows: \begin{enumerate} \item We train the proposed model, called teacher, on the labeled dataset $\{(X_i,D_i), \, i=1,...,n\}$, where $X_i$, $D_i$, and $n$ are the $i$th labeled image, the $i$th ground truth density map, and the number of labeled images, respectively. \item We use the trained teacher model denoted as $F$ to generate pseudo-density maps, denoted as $\tilde{D}$, for the unlabeled dataset $\{\tilde{X_j}, j=1,...,m\}$, where $\tilde{X_j}$ and $m$ are the $j$th unlabeled image and the number of unlabeled images, respectively. $\tilde{D_j}=F(\tilde{X_j}), \, j=1,...,m$ \item Finally, we retrain the proposed model with noise, called student, using both labeled and pseudo-labeled images. \end{enumerate} In all, we use this semi-supervised learning approach to do a two-level training where the teacher learns on actual labelled images and then is employed to generate pseudo-labeled images. In the end, the student model learns on both labeled and pseudo-labeled images with the intent that the student model is better than the teacher. \subsection{Training Details} We train our proposed model end-to-end using the following training parameters. We apply the data augmentation described in Section \ref{sec:data} on our dataset, which resulted in 154,169 image patches. We randomly take 90\% of image patches as training data and use the rest as validation data to monitor the training process. We initialize the weights of network with the Xavier initialization \citep{glorot2010understanding}. Stochastic gradient descent (SGD) is used with a mini-batch size of 16 to optimize the loss function defined in Equation \eqref{eq:eq_loss} using Adam optimizer \citep{kingma2014adam}. We set the learning rate to be 3e-4 which was gradually decayed to 2.5e-5 during training. The model was trained for 90,000 iterations. Figure \ref{fig:loss_plot} shows the plot of training and validation losses during the training process. For the semi-supervised Learning part, we first used our trained model as a teacher and generated pseudo density maps for 1000 unlabeled images of corn ears. Then, we added input noises including random rotation, flip and color augmentations to make a noisy dataset of 30,000 pseudo labeled images. We trained the student model with a mini-batch size of 16 which includes 2 samples from pseudo labeled images and 14 samples from original labeled dataset. We trained the student model for 200,000 iterations. 
\begin{figure}[H] \centering \includegraphics[scale=0.4]{Loss_plot.JPG} \caption{Plot of training and validation losses of the DeepCorn model during training process.} \label{fig:loss_plot} \end{figure} \subsection{Design of Experiments} To evaluate the counting performance of our proposed model, we compare our proposed model with ten state-of-the-art models originally proposed for crowd counting, but applicable to other counting problems with dense objects, which are as follows:\\ \noindent \textbf{DensityCNN:} proposed by \cite{jiang2020density}, this model uses a density-aware CNN which utilizes high-level semantic information to provide guidance and constraint when generating density maps. Their proposed method adopts a multi-task group-based CNN structure to jointly learn density-level classification and density map estimation. \noindent \textbf{SCAR:} proposed by \cite{gao2019scar}, this model utilizes a CNN with spatial/channel-wise attention modules to estimate the density maps. Spatial-wise attention module of their proposed model encodes the pixel-wise context of the entire image to more accurately predict density maps at the pixel level while the channel-wise attention module extracts more discriminative features among different channels. \noindent \textbf{ACSPNet:} proposed by \cite{ma2019atrous}, this model employs an atrous convolutions spatial pyramid network for density estimation. Their proposed CNN model uses atrous convolutions to exaggerate the receptive field and maintain the resolution of extracted features. Their proposed model also uses atrous spatial pyramid pooling to merge features at different scales to counter image scale variations. \noindent \textbf{ASD:} proposed by \cite{wu2019adaptive}, this model proposes an adaptive scenario discovery framework for density estimation. Their proposed model has two parallel sub-networks that are trained with different sizes of the receptive field to represent different scales and object densities. Their proposed model also adaptively assigns weights to the output of two sub-networks' responses by discovering and modeling the dynamic scenarios implicitly. \noindent \textbf{SPN:} proposed by \cite{chen2019scale}, this model uses a CNN with scale pyramid network structure which uses a shared single deep column structure to extract multi-scale information in high layers by a scale pyramid module. Their proposed model adopts different rates of dilated convolutions in parallel in scale pyramid module to make their model robust against image scale variations. \noindent \textbf{SaCNN:} proposed by \cite{zhang2018crowd}, this model uses a scale-adaptive CNN architecture with VGG backbone to estimate the density map. The SaCNN model merges feature maps from two different scales of the network to make the proposed network robust against the scale variation. \noindent \textbf{CSRNet:} proposed by \cite{li2018csrnet}, this model includes VGG backbone as the front-end for 2D feature extraction and and a dilated CNN for the back-end. CSRNet adopts dilated convolutional layers to enlarge receptive fields to replace pooling operations. In the end, the output of the network is upsampled to estimate the density map. \noindent \textbf{MSCNN:} proposed by \cite{zeng2017multi}, this model uses inception-like modules to extract scale-relevant features in their CNN network architecture and estimate the density map. 
The inception-like module is composed of multiple filters with different kernel size to make the model robust against scale variation. \noindent \textbf{CrowdNet:} proposed by \cite{boominathan2016crowdnet}, this model combines a deep CNN and a shallow CNN to predict the density map. The CrowdNet uses VGG as a deep network to extract high-level semantics features and a three-layer shallow network to extract low-level features. This network design makes the model robust against the scale variation. \noindent \textbf{DeepCrowd:} proposed by \cite{wang2015deep}, this model is a regression based approach which directly maps the input image to the count. The DeepCrowd uses a custom CNN architecture which includes five convolutional layers and two fully connected layers at the end. \subsection{Final Results} Having trained our proposed model, we can now evaluate the performance of our proposed model to count corn kernels. To estimate the number of kernels on a whole corn ear using a 180-degree image, we count the number of kernels on one side of the ear, and then double it to estimate the total number of corn kernels on a corn ear, because of physiological factors, farmers and agronomists assume that corn ears are symmetric \citep{bennetzen2008handbook}. However, we empirically found that the multiplier coefficient of 2.10 works best for approximating the kernels on the both side of an ear from a 180 degree image. To evaluate the performance of our proposed method, we manually counted the entire number of kernels on 291 different corn ears. Specifically, for our evaluation, the ground truth for our 291 corn ears and the remainder of this paper is the number of kernels on the entire ear. We use our proposed model along with the models described in the design of experiment section to predict the number of kernels on these corn ears given a single image from a single side of the ear. The competing models are trained on our corn dataset described in \ref{sec:data} and we report the results on the testing set. The test data include many difficult test images such as non-symmetric corn ears and uncontrolled lighting conditions. Table \ref{tab:result1} compares the performances of the competing methods with respect to the evaluation metrics. We report the performance of two versions of our proposed model, namely teacher and student models. The teacher model is trained on the original labeled data while the student model is trained on the both labeled and pseudo labeled data. 
\begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|} \hline Method & MAE & RMSE & MAPE &\begin{tabular}{c} Number of \\ Parameters (M)\\ \end{tabular}\\ \hline DensityCNN \citep{jiang2020density} & 50.76&67.98 &52.83& 18.26\\ \hline SCAR \citep{gao2019scar}&47.95&68.02 & 35.40&16.27\\ \hline ACSPNet \citep{ma2019atrous} & 55.13&75.04&43.93&1.81\\ \hline ASD \citep{wu2019adaptive}& 50.09&67.84&38.45&50.86\\ \hline SPN \citep{chen2019scale} & 50.21 &64.21&35.82&32.41\\ \hline SaCNN \citep{zhang2018crowd} & 49.14 &68.30&50.34&25.07 \\ \hline CSRNet \citep{li2018csrnet} & 56.52& 74.21&54.65&16.26\\ \hline MSCNN \citep{zeng2017multi} & 56.07& 74.94&60.61&3.08 \\ \hline CrowdNet \citep{boominathan2016crowdnet} &90.50 &120.21&44.20&14.75 \\ \hline DeepCrowd \citep{wang2015deep} & 93.11 &112.56 &87.73&71.92 \\ \hline Our proposed (teacher) & 44.91 & 65.92&37.87&26.61 \\ \hline Our proposed (student) & \textbf{41.36} & \textbf{60.27}&\textbf{35.38}&26.61 \\ \hline \end{tabular} \caption{The comparison of the competing methods on kernel counting performance on the 291 corn ears.}\label{tab:result1} \end{table} As shown in Table \ref{tab:result1}, the proposed method outperformed the other methods to varying extents. The student model outperformed the teacher model due to the use of semi-supervised learning, which helps the proposed model generalize better to the test dataset. The SaCNN performed better than the other methods, except our proposed method and SCAR, due to its scale-adaptive architecture and the merging of feature maps from two different scales. Such an architecture makes the SaCNN more robust against scale variation. CSRNet, ACSPNet, and MSCNN had comparable performances and outperformed the CrowdNet and DeepCrowd methods. The SCAR method outperformed the other methods except our proposed method with respect to MAE, due to the spatial/channel-wise attention modules in its network, which help extract more discriminative features. The DensityCNN, ASD, SaCNN, and SPN had similar performances; however, SaCNN had a lower MAE than the other three. The proposed method performed considerably better than the other methods due to its deep architecture and the merging of multiple feature maps from different depths of the network, which make the model significantly robust against scale variation. The DeepCrowd method, as a regression-based method, had a performance similar to the CrowdNet method. At test time, we have to crop a test image into non-overlapping patches and feed them as input to all methods except our proposed method and CrowdNet; we then assemble the corresponding estimated density maps to obtain the final total count for the image. Otherwise, the performances of these methods are not satisfactory when the whole image is fed to them. In contrast, our proposed method and CrowdNet take the whole image as input and output the corresponding density map in a single forward pass. Therefore, the inference times of DeepCorn and CrowdNet are significantly smaller than those of the other methods. The highest inference time belongs to the ACSPNet method because it does not use any downsampling in its network architecture and performs convolution operations on the high-resolution image throughout the network. The inference time of the proposed method is 0.65 seconds on an Intel i5-6300U CPU at 2.4 GHz. Therefore, our proposed method achieves the highest prediction accuracy while having a considerably small inference time and a moderate number of parameters.
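For reference, the evaluation metrics reported in Table \ref{tab:result1} can be computed as in the following sketch, which assumes the conventional definitions of MAE, RMSE, and MAPE (with MAPE expressed as a percentage); minor implementation details may differ.
\begin{verbatim}
import numpy as np

def counting_metrics(pred, gt):
    """Compute MAE, RMSE, and MAPE for predicted vs. ground-truth counts."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    err = pred - gt
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mape = 100.0 * (np.abs(err) / gt).mean()  # assumes gt > 0
    return mae, rmse, mape
\end{verbatim}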
Figure \ref{fig:vis_result} visualizes a sample of the results, including original images, estimated density maps, and segmented ear images for our proposed method. As shown in Figure \ref{fig:vis_result}, the estimated and ground truth counts are close, and the estimation is invariant to size, angle, orientation, background, lighting conditions, and the number of ears per image. The prediction accuracy of the proposed model slightly decreased for the completely white ear (image (d) in Figure \ref{fig:vis_result}), mainly because most of the training data (more than 95\%) of the proposed model consist of yellow corn ears. As a result, the model may confuse the actual kernels with the corn cob itself, since they have the same color. This problem can be solved by adding more white corn ears to the training data. It is worth mentioning that predicting the total number of kernels on both sides of an ear using a 180-degree image is difficult because corn ears are not 100\% symmetric, and, thus, we have an irreducible error in our estimation. \begin{figure}[H] \centering \includegraphics[scale=0.20]{visual_res.png} \caption{Visual results of our proposed method. The first, second, and third rows are input images, estimated density maps, and segmented ears, respectively. GT and Pred stand for the ground truth and predicted numbers of kernels, respectively.} \label{fig:vis_result} \end{figure} \section{Analysis} \subsection{Robustness and Sensitivity Analysis} To estimate the total number of kernels on an ear using a 180-degree image, we count the number of kernels on one side of the ear and then multiply the counted kernels by 2.10 to estimate the total number of kernels on the entire ear. To evaluate the robustness and sensitivity of our proposed method in using a 180-degree image for kernel counting on an entire corn ear, we perform the following analysis. For 257 test ears, we take an image of one side of the ear and then flip the ear to the backside and take another image. As in the previous section, the ground truth is the number of kernels on the entire ear. For this section, the number of test ears changes from 291 to 257 because only 257 of the 291 test corn ears have both frontside and backside images that can be used in this experiment. We estimate the total number of kernels on a corn ear using the following scenarios:\\ \noindent \textbf{Frontside estimation:} We estimate the total number of kernels using the front-side image of the ear, and then multiply the counted kernels by the empirical coefficient of 2.10 to account for the back side of the ear. \noindent \textbf{Backside estimation:} We rotate the ear 180 degrees and estimate the total number of kernels using the back-side image of the ear. Then, we multiply the counted kernels by the empirical coefficient of 2.10 to also account for the front side of the ear. \noindent \textbf{Both side estimation:} We estimate the total number of kernels on an ear using images of both sides of the ear. As such, the total number of kernels is equal to the sum of the estimated kernels on the front and back sides of the ear. Table \ref{tab:result_rotate} shows the kernel counting performances of our proposed method (DeepCorn) and the SaCNN method in the above-mentioned experiment. As shown in Table \ref{tab:result_rotate}, the proposed method outperformed the other method in all three scenarios.
The results indicate that our proposed method can estimate the number of kernels with reasonable accuracy using 180-degree images and is robust no matter on which side the ear image is captured. If we compare the frontside and backside performances of DeepCorn and SaCNN, we see that DeepCorn shows more sensitivity for two main reasons: (1) as shown in Figure \ref{fig:two_sides}, there are some ears in the test dataset for which the densities of kernels on the front and back sides are significantly different, which makes the estimation of kernels based on only a 180-degree image difficult, and (2) our proposed method is more accurate than the SaCNN method, which makes its prediction more sensitive for ears that have different kernel densities on the front and back sides. As shown in Table \ref{tab:result_rotate}, the proposed method is most accurate when using images of both sides of the ears, mainly because corn ears are not 100\% symmetric, and we recommend using images of both sides for ears with heterogeneous kernel distribution. \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Method & MAE & RMSE &MAPE \\ \hline SaCNN (frontside) & 50.81 &70.54&54.02\\ \hline DeepCorn (frontside) & \textbf{41.62} & \textbf{60.24}& \textbf{37.40}\\ \hline SaCNN (backside) & 50.45 &71.26&49.73\\ \hline DeepCorn (backside) & \textbf{44.71} & \textbf{67.07}&\textbf{36.72} \\ \hline SaCNN (bothside) & 47.61 &66.91&50.15\\ \hline DeepCorn (bothside) & \textbf{33.03} & \textbf{52.11}&\textbf{29.72} \\ \hline \end{tabular} \caption{The comparison of the DeepCorn and SaCNN methods on kernel counting performance on the 257 test corn ears. We use images of the frontside, backside, and both sides of the ear to estimate the kernels on both sides of the ear. DeepCorn here is the teacher model version.} \label{tab:result_rotate} \end{table} \begin{figure}[H] \centering \includegraphics[scale=0.06]{two_sides_ear.jpg} \caption{Images (a) and (b) show the frontside and backside of a test ear with heterogeneous kernel distribution, respectively. The frontside, backside, and both-side kernel estimates for this ear using the DeepCorn method are 144, 227, and 177, respectively. The ground truth number of kernels for this ear is 157. As a result, we recommend using images of both sides for ears with heterogeneous kernel distribution.} \label{fig:two_sides} \end{figure} To further compare the counting performances of the SaCNN and our proposed models, we report the total counting errors of these models based on both-side kernel estimation on the 257 test corn ears. \begin{table}[H] \centering \begin{tabular}{|c|c|c|} \hline Method & Miscounted Kernels &Correctly Counted Kernels \\ \hline SaCNN & 12,238 &42,379\\ \hline DeepCorn &8,489&46,128\\ \hline \end{tabular} \caption{The total numbers of miscounted and correctly counted kernels. The total number of kernels in the 257 test corn ears is 54,617.} \label{tab:result_subset} \end{table} It is worth mentioning that the 257-ear test dataset used in our experiment is considered difficult for the following reason. Our training data consist mostly of yellow corn ears, while the majority of the test ears are white, a color which occurs in some corn breeds and in corn ears harvested early, before reaching full maturity. Such white ears can cause the model to confuse the kernels with the cob itself, since they have the same color. This problem can be solved by adding more white ears to the training data.
As a result, we report the kernel counting performances of the SaCNN and the proposed model on the subset of the test dataset which only includes yellow corn ears. As shown in Table \ref{tab:result_sub_yellow}, the overall prediction error of the proposed model is $9.5\%$ on yellow corn ears, which indicates that the miscounted kernels are mostly attributable to the white corn ears, due to the scarcity of such ears in the training data. Table \ref{tab:yellow_part} also shows the performances of our proposed method and the SaCNN on the subset of the test dataset which only includes yellow corn ears with respect to all evaluation metrics. \begin{table}[H] \centering \begin{tabular}{|c|c|c|} \hline Method & Miscounted Kernels &Correctly Counted Kernels \\ \hline SaCNN & 1,609 &10,895\\ \hline DeepCorn &1,190&11,314\\ \hline \end{tabular} \caption{The total numbers of miscounted and correctly counted kernels in the subset of the test data which only includes yellow corn ears. The total number of kernels in the 54 yellow test corn ears is 12,504.} \label{tab:result_sub_yellow} \end{table} \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Method & MAE & RMSE &MAPE \\ \hline SaCNN (frontside) & 30.44 &39.66&20.99\\ \hline DeepCorn (frontside) & \textbf{21.19} & \textbf{25.91}& \textbf{11.21}\\ \hline SaCNN (backside) & 38.52 &\textbf{50.15}&23.52\\ \hline DeepCorn (backside) & \textbf{34.96} & 50.82&\textbf{19.31} \\ \hline SaCNN (bothside) & 29.80 &37.86&19.33\\ \hline DeepCorn (bothside) & \textbf{22.04} & \textbf{30.29}&\textbf{11.16} \\ \hline \end{tabular} \caption{The comparison of the DeepCorn and SaCNN methods on kernel counting performance on the subset of the test data which only includes 54 yellow corn ears. We use images of the frontside, backside, and both sides of the ear to estimate the kernels on both sides of the ear. DeepCorn here is the teacher model version.} \label{tab:yellow_part} \end{table} To further analyze the effect of kernel distribution on kernel estimation using a single 180-degree image, we use five normal ears with homogeneous kernel distribution and estimate their total number of kernels four times, based on images taken at angles of 0, 90, 180, and 270 degrees, using our proposed method. That is, we simply rotate the ear and take an image from each side with the hope that, no matter which side, we achieve a consistent kernel count estimate. Figure \ref{fig:barplot} displays the bar plot of estimated kernels for these five ears at the four different angles. As shown in Figure \ref{fig:barplot}, the estimates at different angles are close to each other, which indicates that our proposed method is robust against the image angle for ears with homogeneous kernel distribution. \begin{figure}[H] \centering \includegraphics[scale=0.28]{barplot_2.png} \caption{Bar plot of estimated kernels for five ears at four different angles (0, 90, 180, and 270 degrees) using our proposed method. GT stands for the ground truth number of kernels.}\label{fig:barplot} \end{figure} \subsection{Yield Estimation} As previously mentioned, the main application of our corn kernel counting methodology is to estimate in-season corn yield (i.e., before harvest). Having an accurate, efficient yield estimator enables farmers and agronomists to make real-time grain management decisions to maximize yield and minimize potential losses of profitability.
This method, often called the yield component method, enables farmers and agronomists to estimate corn yields accurately within $\pm 20$ bushels per acre \citep{licht2017estimating}. This procedure requires taking a representative sample of ears from the field and manually counting the kernels to determine the average number of kernels per ear. However, because a healthy ear of corn contains 650--800 kernels, manually counting individual kernels is time-consuming, labor-intensive, and subject to human error. Additionally, a farmer will need a large number of ears to achieve an accurate estimation of yield, adding to an already time-consuming and exhausting task. To remedy this bottleneck, our proposed kernel counting method can be used to count multiple ears in a short time period to accelerate the yield estimation process, allowing for real-time in-field yield estimation. \cite{licht2017estimating} provides a classical way to estimate corn yield based on kernel counts. The formula is based on the following components: \noindent \textbf{Stand counts (plants per acre):} This factor is the number of plants per acre and is usually determined by the number of seeds planted, seed quality, and other environmental factors. The number of ears per plant is considered to be one because most corn hybrids have one dominant ear which produces kernels. \noindent \textbf{Kernel weight:} A kernel can weigh in the range of 0.26--0.27 grams. The variation in kernel weight is a response to environmental stress: higher levels of environmental stress decrease kernel weight, while lower levels of stress have the inverse effect. Accurately measuring kernel weight is difficult and time-consuming, so estimates are used to provide an evaluation of potential yield for farmers and agronomists. The number of kernels per bushel can range from 65,000 to 100,000. Ninety thousand kernels per bushel is the industry default when conducting yield estimations. Outside of extreme weather conditions, average kernel weight corresponds to between 85,000 and 90,000 kernels per bushel \citep{ngoune2020estimation}. \noindent \textbf{Average number of kernels per ear:} This factor indicates the average number of kernels per ear, determined by taking a representative sample of ears from the field and counting their kernels. Combining these factors, \cite{licht2017estimating} provides a way to estimate corn yield, in bushels per acre, using Equation \ref{eq:yield_stimation}. \begin{eqnarray} \textit{Corn Yield}=\frac{\frac{\textit{Ears}}{\textit{Acre}}\times \frac{\textit{Average Kernels}}{\textit{Ear}}}{\frac{\textit{90,000 Kernels}}{\textit{Bushel}}}\label{eq:yield_stimation} \end{eqnarray} Figure \ref{fig:diagram_yield} shows the diagram of the yield estimation procedure. \begin{figure}[H] \centering \includegraphics[scale=0.15]{yeild_estimation_diagram.png} \caption{The diagram of the yield estimation procedure.} \label{fig:diagram_yield} \end{figure} To show how the proposed yield estimation method can be used, we perform the following analysis. Let us assume that 3 different corn seeds have been planted in 3 different fields, and that these seeds can be categorized as low, medium, and high yielding, as determined by the size of the produced corn ears. Let us also assume that 32,000 corn stands have been planted in all 3 fields. Table \ref{tab:yield_estimation} shows the yield estimation results. As shown in Table \ref{tab:yield_estimation}, the size of the ear has a significant effect on the final estimated yield.
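As a quick check of Equation \ref{eq:yield_stimation}, the high-yielding ear in Table \ref{tab:yield_estimation}, with 646 estimated kernels, 32,000 stands per acre, and one ear per plant, gives \begin{eqnarray*} \textit{Corn Yield}=\frac{32{,}000\times 646}{90{,}000}\approx 229.69 \end{eqnarray*} bushels per acre, matching the first row of the table.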
To reduce the yield estimation error, we recommend using a batch of ears that represents the overall condition of the field well. \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|} \hline Input Image & \begin{tabular}{c} Seed\\ Type \end{tabular} & \begin{tabular}{c} Estimated \\ Kernels \end{tabular}& \begin{tabular}{c} Stand\\ Counts \end{tabular} & \begin{tabular}{c} Estimated \\Yield (bu/ac) \end{tabular} \\ \hline \raisebox{-\totalheight}{\includegraphics[width=45mm, height=30mm]{IMG_5918.JPG}} &High Yielding & 646&32,000&229.69\\ \hline \raisebox{-\totalheight}{\includegraphics[width=45mm, height=30mm]{IMG_5900.JPG}} &Medium Yielding & 541&32,000&192.36\\ \hline \raisebox{-\totalheight}{\includegraphics[width=45mm, height=30mm]{IMG_5913.JPG}} &Low Yielding & 300&32,000&106.67\\ \hline \end{tabular} \caption{The yield estimation results based on 3 different ears. bu/ac stands for bushels per acre.} \label{tab:yield_estimation} \end{table} \section{Conclusion} In this paper, we presented a deep learning based method named DeepCorn for the corn kernel counting problem. The proposed model uses VGG-16 as the backbone for feature extraction. To make the proposed model robust against scale variations, the network merges feature maps from multiple scales of the network using skip connections. In the end, we upsample the output of the last layer of the network using bilinear interpolation to estimate the density map. To further improve the performance of the proposed method, we used a semi-supervised learning approach to generate pseudo density maps. These pseudo density maps, along with the original density maps, were then used to train our proposed method. Our extensive experimental results demonstrate that our proposed method can successfully count the corn kernels on corn ears regardless of their orientation and the lighting conditions, and that it outperforms other state-of-the-art methods commonly used in similar tasks. The results also indicate that the semi-supervised learning approach improves the accuracy of the proposed method. Our experimental results further demonstrate that our proposed method can predict the number of kernels accurately using a 180-degree image and is robust no matter on which side the ear image is captured. However, if a corn ear is significantly non-symmetric, it is best to estimate the number of kernels using both frontside and backside images of the ear. Our proposed method can be integrated with yield estimation methods to perform real-time in-season yield estimation. Yield estimation methods rely on taking a representative sample of ears from the field and manually counting the kernels to determine the average number of kernels per ear. As a result, our proposed corn kernel counting method can be used to count multiple ears in a short time period to accelerate the yield estimation process. We hope that our work leads to the advancement of high-throughput phenotyping to benefit plant science as a whole. \section*{Conflicts of Interest} The authors declare no conflict of interest. \section*{Acknowledgement} This work was partially supported by the National Science Foundation under the LEAP HI and GOALI programs (grant number 1830478) and under the EAGER program (grant number 1842097). Additionally, this work was partially supported by Syngenta. \bibliographystyle{elsarticle-harv}
\section{Introduction} The sensitivity of geographical analyses to the spatial structure of data has been well known since the Modifiable Areal Unit Problem was put forward by \cite{openshaw1984modifiable}. This type of issue has since been generalized to various aspects, including temporal granularity \citep{cheng2014modifiable} or the geographical context more generally \citep{kwan2012uncertain}. When studying geosimulation models \citep{benenson2004geosimulation}, similar issues must be taken into account, extending classical sensitivity analysis methods \citep{saltelli2004sensitivity} to what can be understood as \emph{Spatial Sensitivity Analysis}, as proposed by \cite{raimbault2019space}. Several studies have shown the importance of this approach. For example, in the case of Land-use Transport interaction models, \cite{thomas2018city} show how the delineation of the urban area can significantly impact simulation outcomes. \cite{banos2012network} studies the Schelling segregation model on networks and shows that network structure strongly influences model behavior. The spatial resolution in raster configurations can also change results \citep{singh2007schelling}. On the other hand, the use of spatial synthetic data generation is generally bound to model parametrization without a particular focus on sensitivity analysis, such as in microsimulation models \citep{smith2009improving}, spatialized social networks \citep{barrett2009generation}, or architecture \citep{penn2006synthetic}. \cite{raimbault2019space} however showed that systematically generating synthetic data, with constraints of proximity to real data configurations, can be a powerful tool to evaluate the sensitivity of geosimulation models to the spatial configuration. This contribution describes an initiative to synthesize spatial sensitivity analysis techniques, such as synthetic data generation, real data perturbation, and specific indicators, under a common operational framework. In practice, methods are implemented in the \texttt{spatialdata} scala library, allowing in particular its embedding into the OpenMOLE model exploration platform \citep{reuillon2013openmole}. \section{Spatial sensitivity methods} \paragraph{Generation of spatial synthetic data} Realistic spatial synthetic configurations can be generated for geographical systems at different scales and as different data types. Regarding raster data, (i) at the microscopic scale, raster representations of building configurations (typical scale 500m) are generated using procedural modeling, kernel mixtures, or percolation processes \citep{doi:10.1162/isala00159}; and (ii) at the mesoscopic scale, population density grids (typical scale 50km) are generated using a reaction-diffusion urban morphogenesis model \citep{raimbault2018calibration} or kernel mixtures. Regarding network data, synthetic generators for spatial networks include baseline generators (random planar networks, tree networks) and generators tailored to resemble road networks at a mesoscopic scale, following different heuristics including gravity potential breakdown, cost-benefit link construction, and a bio-inspired (slime mould) network generation model \citep{raimbault2018multi, raimbault2019urban}. Finally, regarding vector data, spatial field generators can be applied at any scale (point distributions following a given probability distribution, or spatial Poisson point processes), while at the macroscopic scale, systems of cities with a spatialized network can be generated \citep{raimbault2020unveiling}.
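To give a flavor of the mesoscopic population density generators, the following is a minimal, self-contained sketch of a reaction-diffusion morphogenesis process in the spirit of \citep{raimbault2018calibration}, alternating preferential-attachment population growth with diffusion steps. This is an illustrative toy implementation in Python, with periodic boundaries and parameter names of our own choosing; it is not the API of the \texttt{spatialdata} library.
\begin{verbatim}
import numpy as np

def reaction_diffusion_grid(size=50, steps=200, growth=100,
                            alpha=1.5, beta=0.05, n_diff=2, seed=0):
    rng = np.random.default_rng(seed)
    pop = np.zeros((size, size))
    pop[size // 2, size // 2] = growth  # seed population in the center
    flat = pop.ravel()                  # view on the same buffer
    for _ in range(steps):
        # Aggregation: allocate new population units preferentially
        # to already dense cells (probability ~ population^alpha).
        w = (flat + 1e-9) ** alpha
        idx = rng.choice(flat.size, size=growth, p=w / w.sum())
        np.add.at(flat, idx, 1.0)
        # Diffusion: each cell spreads a fraction beta of its
        # population evenly to its four neighbours.
        for _ in range(n_diff):
            spread = beta * pop
            pop -= spread
            pop += 0.25 * (np.roll(spread, 1, 0) + np.roll(spread, -1, 0)
                           + np.roll(spread, 1, 1) + np.roll(spread, -1, 1))
    return pop
\end{verbatim}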
\paragraph{Real data perturbation} Real raster data can be loaded with the library and perturbed with random noise or following a Poisson point process. A raster generator at the microscopic scale can be used to load real building configurations from OpenStreetMap. For transportation networks, vector representations can be imported from shapefiles, directly from the OpenStreetMap API, or from a database (MongoDB and PostGIS are supported), and are transformed into a proper graph representation. Network perturbation algorithms include node or link deletion (e.g., for resilience studies) and noise on node coordinates. \paragraph{Indicators} Finally, various indicators are included in the library, which can be used to characterize generated or real configurations and compare them. They include spatial statistics measures (spatial moments, Ripley's K), urban morphology measures at the microscopic and mesoscopic scales, and network measures (basic measures, centralities, efficiency, components, cycles). Network measures can furthermore take congestion effects into account, as basic network loading algorithms (shortest paths and static user equilibrium) are implemented. \paragraph{Implementation and integration in OpenMOLE} The library is implemented in the scala language, which is based on the Java Virtual Machine and can benefit from existing Java libraries, and which couples the robustness of functional programming with the flexibility of object-oriented programming. It can therefore easily be combined with one of the numerous Java simulation frameworks \citep{nikolai2009tools}, such as Repast Simphony for agent-based models \citep{north2013complex}, JAS-mine for microsimulation \citep{richiardi2017jas}, or Matsim for transportation \citep{horni2016multi}. The library is open source under a GNU GPL License and available at \url{https://github.com/openmole/spatialdata/}. A significant part of the library (the synthetic raster generation methods) is integrated into the OpenMOLE model exploration platform \citep{reuillon2013openmole}. This platform is designed to allow seamless model validation and exploration, using workflows that make the numerical experiments fully reproducible \citep{passerat2017reproducible}. It combines (i) model embedding in almost any language; (ii) transparent access to high performance computing infrastructures; and (iii) state-of-the-art methods for model validation (including design of experiments, genetic algorithms for calibration, novelty search, etc.). \cite{reuillon2019fostering} illustrates how this tool can be particularly suited to validate geosimulation models. \section{Applications} Different applications of the library have already been described in the literature. Regarding the generation of synthetic data in itself, \cite{doi:10.1162/isala00159} show that the building configuration generators are complementary and reproduce a large sample of existing configurations in European cities. \cite{raimbault2018calibration} shows that the reaction-diffusion morphogenesis model is flexible enough to capture most existing urban forms of population distributions, also across Europe. \cite{raimbault2019second} shows that it is possible to weakly couple the population density generator with the gravity-breakdown network generator, and that correlations between urban form and network indicators can be modulated this way.
\cite{raimbault2019urban} performs a similar coupling in a dynamic way and shows that the co-evolution between the road network and the population distribution can be modeled in this manner. Regarding the application of the library to spatial sensitivity analysis, \cite{raimbault2019space} apply the population distribution generator to two textbook geosimulation models (the Schelling and Sugarscape models) and show that model outcomes are affected by the spatial configuration not only quantitatively to a considerable extent, but also qualitatively in terms of the behavior of the model phase diagram. \cite{raimbault2020unveiling} shows that the SimpopNet model, introduced by \cite{schmitt2014modelisation} for the co-evolution of cities and transportation networks, is highly sensitive both to the initial population distribution across cities and to the initial transportation network structure. \section{Discussion} Beyond the direct application of the library to study the spatial sensitivity of geosimulation models, several developments can be considered. The inclusion of network and vector generation methods into OpenMOLE is currently being explored, but remains not straightforward, in particular because of the constraint of representing workflow prototypes as primary data structures, to ensure interoperability when embedding different models and languages. More detailed and operational transportation network capabilities are also currently being implemented in the library, including multi-modal transportation network computation and accessibility computation. Specific methods tailored for the validation of Land-use Transport Models are being elaborated, such as correlated noise perturbation across different layers (coupling population and employment, for example), or transportation infrastructure development scenarios. The strong coupling of generators into co-evolutionary models, such as done by \cite{raimbault2019urban}, is being more thoroughly investigated in order to provide such coupled generators as primitives. This library and its integration with the OpenMOLE software should thus foster the development of more thorough geosimulation model validation practices, and thereby strengthen the confidence in the results obtained with such models.
\section{Related Work}\label{sec:related-work} \paragraph*{Global Termination} \emph{Global} termination detection (GTD) is used to determine when \emph{all} processes have terminated \cite{matternAlgorithmsDistributedTermination1987,matochaTaxonomyDistributedTermination1998}. For GTD, it suffices to obtain global message send and receive counts. Most GTD algorithms also assume a fixed process topology. However, Lai gives an algorithm in \cite{laiTerminationDetectionDynamically1986} that supports dynamic topologies such as in the actor model. Lai's algorithm performs termination detection in ``waves'', disseminating control messages along a spanning tree (such as an actor supervisor hierarchy) so as to obtain consistent global message send and receive counts. Venkatasubramanian et al.~take a similar approach to obtain a consistent global snapshot of actor states in a distributed system~\cite{venkatasubramanianScalableDistributedGarbage1992}. However, such an approach does not scale well because it is not incremental: garbage cannot be detected until all nodes in the system have responded. In contrast, DRL does not require a global snapshot, does not require actors to coordinate their local snapshots, and does not require waiting for all nodes before detecting local terminated actors. \paragraph*{Reference Tracking} We say that an idle actor is \emph{simple garbage} if it has no undelivered messages and no other actor has a reference to it. Such actors can be detected with distributed reference counting \cite{watsonEfficientGarbageCollection1987,bevanDistributedGarbageCollection1987,piquerIndirectReferenceCounting1991} or with reference listing \cite{DBLP:conf/iwmm/PlainfosseS95,wangDistributedGarbageCollection2006} techniques. In reference listing algorithms, each actor maintains a partial list of actors that may have references to it. Whenever $A$ sends $B$ a reference to $C$, it also sends an $\InfoMsg$ message informing $C$ about $B$'s reference. Once $B$ no longer needs a reference to $C$, it informs $C$ by sending a $\ReleaseMsg$ message; this message should not be processed by $C$ until all preceding messages from $B$ to $C$ have been delivered. Thus an actor is simple garbage when its reference listing is empty. Our technique uses a form of \emph{deferred reference listing}, in which $A$ may also defer sending $\InfoMsg$ messages to $C$ until it releases its references to $C$. This allows $\InfoMsg$ and $\ReleaseMsg$ messages to be batched together, reducing communication overhead. \paragraph*{Cyclic Garbage} Actors that are transitively acquainted with one another are said to form cycles. Cycles of terminated actors are called \emph{cyclic garbage} and cannot be detected with reference listing alone. Since actors are hosted on nodes and cycles may span across multiple nodes, detecting cyclic garbage requires sharing information between nodes to obtain a consistent view of the global topology. One approach is to compute a global snapshot of the distributed system \cite{kafuraConcurrentDistributedGarbage1995} using the Chandy-Lamport algorithm \cite{chandyDistributedSnapshotsDetermining1985}; but this requires pausing execution of all actors on a node to compute its local snapshot. Another approach is to add edges to the actor reference graph so that actor garbage coincides with passive object garbage \cite{vardhanUsingPassiveObject2003,wangActorGarbageCollection2010}. 
This is convenient because it allows existing algorithms for distributed passive object GC, such as \cite{schelvisIncrementalDistributionTimestamp1989}, to be reused in actor systems. However, such transformations require that actors know when they have undelivered messages, which requires some form of synchronization. To avoid pausing executions, Wang and Varela proposed a reference listing based technique called the \emph{pseudo-root} algorithm. The algorithm computes \emph{approximate} global snapshots and is implemented in the SALSA runtime \cite{wangDistributedGarbageCollection2006,wangConservativeSnapshotbasedActor2011}. The pseudo-root algorithm requires a high number of additional control messages and requires actors to write to shared memory if they migrate or release references during snapshot collection. Our protocol requires fewer control messages and no additional actions between local actor snapshots. Wang and Varela also explicitly address migration of actors, a concern orthogonal to our algorithm. Our technique is inspired by \emph{MAC}, a termination detection algorithm implemented in the Pony runtime \cite{clebschFullyConcurrentGarbage2013}. In MAC, actors send a local snapshot to a designated cycle detector whenever their message queue becomes empty, and send another notification whenever it becomes non-empty. Clebsch and Drossopoulou prove that for systems with causal message delivery, a simple request-reply protocol is sufficient to confirm that the cycle detector's view of the topology is consistent. However, enforcing causal delivery in a distributed system imposes additional space and networking costs \cite{fidge1987timestamps,blessingTreeTopologiesCausal2017}. DRL is similar to MAC, but does not require causal message delivery, supports decentralized termination detection, and actors need not take snapshots each time their message queues become empty. The key insight is that these limitations can be removed by tracking additional information at the actor level. An earlier version of DRL appeared in \cite{plyukhinConcurrentGarbageCollection2018}. In this paper, we formalize the description of the algorithm and prove its safety and liveness. In the process, we discovered that release acknowledgment messages are unnecessary and that termination detection is more flexible than we first thought: it is not necessary for GC to be performed in distinct ``phases'' where every actor takes a snapshot in each phase. In particular, once an idle actor takes a snapshot, it need not take another snapshot until it receives a fresh message. \section{Preliminaries} \label{sec:background} An actor can only receive a message when it is \emph{idle}. Upon receiving a message, it becomes \emph{busy}. A busy actor can perform an unbounded sequence of \emph{actions} before becoming idle. In~\cite{aghaFoundationActorComputation1997}, an action may be to spawn an actor, send a message, or perform a (local) computation. We will also assume that actors can perform effects, such as file I/O. The actions an actor performs in response to a message are dictated by its application-level code, called a \emph{behavior}. Actors can also receive messages from \emph{external} actors (such as the user) by becoming \emph{receptionists}. An actor $A$ becomes a receptionist when its address is exposed to an external actor. Subsequently, any external actor can potentially obtain $A$'s address and send it a message. 
It is not possible for an actor system to determine when all external actors have ``forgotten'' a receptionist's address. We will therefore assume that an actor can never cease to be a receptionist once its address has been exposed. \begin{figure} \centering \tikzfig{contents/diagrams/actor-graph-v2} \caption{A simple actor system. The first configuration leads to the second after $C$ receives the message $m$, which contains a reference to $E$. Notice that an actor can send a message and ``forget'' its reference to the recipient before the message is delivered, as is the case for actor $F$. In both configurations, $E$ is a potential acquaintance of $C$, and $D$ is potentially reachable from $C$. The only terminated actor is $F$ because all other actors are potentially reachable from unblocked actors.} \label{fig:actor-graph-example} \end{figure} An actor is said to be garbage if it can be destroyed without affecting the system's observable behavior. However, without analyzing an actor's code, it is not possible to know whether it will have an effect when it receives a message. We will therefore restrict our attention to actors that can be guaranteed to be garbage without inspecting their behavior. According to this more conservative definition, any actor that might receive a message in the future should not be garbage collected because it could, for instance, write to a log file when it becomes busy. Conversely, any actor that is guaranteed to remain idle indefinitely can safely be garbage collected because it will never have any effects; such an actor is said to be \emph{terminated}. Hence, garbage actors coincide with terminated actors in our model. Terminated actors can be detected by looking at the global state of the system. We say that an actor $B$ is a \emph{potential acquaintance} of $A$ (and $A$ is a \emph{potential inverse acquaintance} of $B$) if $A$ has a reference to $B$ or if there is an undelivered message to $A$ that contains a reference to $B$. We define \emph{potential reachability} to be the reflexive transitive closure of the potential acquaintance relation. If an actor is idle and has no undelivered messages, then it is \emph{blocked}; otherwise it is \emph{unblocked}. We then observe that an actor is terminated when it is only potentially reachable by blocked actors: Such an actor is idle, blocked, and can only potentially be sent a message by other idle blocked actors. Conversely, without analyzing actor code we cannot safely conclude that an actor is terminated if it is potentially reachable by an unblocked actor. Hence, we say that an actor is terminated if and only if it is blocked and all of its potential inverse acquaintances are terminated. \section{Conclusion and Future Work}\label{sec:conclusion}\label{sec:future-work} We have shown how deferred reference listing and message counts can be used to detect termination in actor systems. The technique is provably safe (Theorem~\ref{thm:safety}) and eventually live (Theorem~\ref{thm:liveness}). An implementation in Akka is presently underway. We believe that DRL satisfies our three initial goals: \begin{enumerate} \item \emph{Termination detection does not restrict concurrency in the application.} Actors do not need to coordinate their snapshots or pause execution during garbage collection. \item \emph{Termination detection does not impose high overhead.} The amortized memory overhead of our technique is linear in the number of unreleased refobs.
Besides application messages, the only additional control messages required by the DRL communication protocol are $\InfoMsg$ and $\ReleaseMsg$ messages. These control messages can be batched together and deferred, at the cost of worse termination detection time. \item \emph{Termination detection scales with the number of nodes in the system.} Our algorithm is incremental, decentralized, and does not require synchronization between nodes. \end{enumerate} Since it does not matter what order snapshots are collected in, DRL can be used as a ``building block'' for more sophisticated garbage collection algorithms. One promising direction is to take a \emph{generational} approach \cite{DBLP:journals/cacm/LiebermanH83}, in which long-lived actors take snapshots less frequently than short-lived actors. Different types of actors could also take snapshots at different rates. In another approach, snapshot aggregators could \emph{request} snapshots instead of waiting to receive them. In the presence of faults, DRL remains safe but its liveness properties are affected. If an actor $A$ crashes and its state cannot be recovered, then none of its refobs can be released and the aggregator will never receive its snapshot. Consequently, all actors potentially reachable from $A$ can no longer be garbage collected. However, $A$'s failure does not affect the garbage collection of actors it cannot reach. In particular, network partitions between nodes will not delay node-local garbage collection. The choice of an adequate fault-recovery protocol will likely vary depending on the target actor framework. One option is to use checkpointing or event-sourcing to persist GC state; the resulting overhead may be acceptable in applications that do not frequently spawn actors or create refobs. Another option is to monitor actors for failure and infer which refobs are no longer active; this is a subject for future work. Another issue that can affect liveness is message loss: If any messages along a refob $\Refob x A B$ are dropped, then $B$ can never be garbage collected because it will always appear unblocked. This is, in fact, the desired behavior if one cannot guarantee that the message will not be delivered at some later point. In practice, this problem might be addressed with watermarking. \section{Introduction} The actor model~\cite{books/daglib/0066897,journals/cacm/Agha90} is a foundational model of concurrency that has been widely adopted for its scalability: for example, actor languages have been used to implement services at PayPal~\cite{PayPalBlowsBillion}, Discord~\cite{vishnevskiyHowDiscordScaled2017}, and in the United Kingdom's National Health Service database~\cite{NHSDeployRiak2013}. In the actor model, stateful processes known as \emph{actors} execute concurrently and communicate by sending asynchronous messages to other actors, provided they have a \emph{reference} (also called a \emph{mail address} or \emph{address} in the literature) to the recipient. Actors can also spawn new actors. An actor is said to be \emph{garbage} if it can be destroyed without affecting the system's observable behavior.
Although a number of algorithms for automatic actor GC have been proposed \cite{ clebschFullyConcurrentGarbage2013, kafuraConcurrentDistributedGarbage1995, vardhanUsingPassiveObject2003, venkatasubramanianScalableDistributedGarbage1992, wangConservativeSnapshotbasedActor2011, wangDistributedGarbageCollection2006}, actor languages and frameworks currently popular in industry (such as Akka \cite{Akka}, Erlang \cite{armstrongConcurrentProgrammingERLANG1996}, and Orleans \cite{bykovOrleansCloudComputing2011}) require that programmers garbage collect actors manually. We believe this is because the algorithms proposed thus far are too expensive to implement in distributed systems. In order to find applicability in real-world actor runtimes, we argue that a GC algorithm should satisfy the following properties: \begin{enumerate} \item (\emph{Low latency}) GC should not restrict concurrency in the application. \item (\emph{High throughput}) GC should not impose significant space or message overhead. \item (\emph{Scalability}) GC should scale with the number of actors and nodes in the system. \end{enumerate} To the best of our knowledge, no previous algorithm satisfies all three constraints. The first requirement precludes any global synchronization between actors, a ``stop-the-world'' step, or a requirement for causal order delivery of all messages. The second requirement means that the number of additional ``control'' messages imposed by the algorithm should be minimal. The third requirement precludes algorithms based on global snapshots, since taking a global snapshot of a system with a large number of nodes is infeasible. To address these goals, we have developed a garbage collection technique called \emph{DRL} for \emph{Deferred Reference Listing}. The primary advantage of DRL is that it is decentralized and incremental: local garbage can be collected at one node without communicating with other nodes. Garbage collection can be performed concurrently with the application and imposes no message ordering constraints. We also expect DRL to be reasonably efficient in practice, since it does not require many additional messages or significant actor-local computation. DRL works as follows. The \emph{communication protocol} (\cref{sec:model}) tracks information, such as references and message counts, and stores it in each actor's state. Actors periodically send out copies of their local state (called \emph{snapshots}) to be stored at one or more designated \emph{snapshot aggregator} actors. Each aggregator periodically searches its local store to find a subset of snapshots representing terminated actors (\cref{sec:termination-detection}). Once an actor is determined to have terminated, it can be garbage collected by, for example, sending it a \emph{self-destruct} message. Note that our termination detection algorithm itself is \textit{location transparent}. Since DRL is defined on top of the actor model, it is oblivious to details of a particular implementation (such as how sequential computations are represented). Our technique is therefore applicable to any actor framework and can be implemented as a library. Moreover, it can also be applied to open systems, allowing a garbage-collected actor subsystem to interoperate with an external actor system. The outline of the paper is as follows. We provide a characterization of actor garbage in Section~\ref{sec:background} and discuss related work in Section~\ref{sec:related-work}. 
We then provide a specification of the DRL protocol in Section~\ref{sec:model}. In Section~\ref{sec:chain-lemma}, we describe a key property of DRL called the \emph{Chain Lemma}. This lemma allows us to prove the safety and liveness properties, which are stated in Section~\ref{sec:termination-detection}. We then conclude in Section~\ref{sec:future-work} with some discussion of future work and how DRL may be used in practice. To conserve space, all proofs have been relegated to the Appendix. \section{Termination Detection}\label{sec:termination-detection} In order to detect non-simple terminated garbage, actors periodically send a snapshot of their knowledge set to a snapshot aggregator actor. An aggregator in turn may disseminate snapshots it has to other aggregators. Each aggregator maintains a map data structure, associating each actor's address with its most recent snapshot; in effect, snapshot aggregators maintain an eventually consistent key-value store with addresses as keys and snapshots as values. At any time, an aggregator can scan its local store to find terminated actors and send them a request to self-destruct. Given an arbitrary set of snapshots $Q$, we characterize the \emph{finalized subsets} of $Q$ in this section. We show that the actors that took these finalized snapshots must be terminated. Conversely, the snapshots of any closed set of terminated actors are guaranteed to be finalized. (Recall that the closure of a set of terminated actors is also a terminated set of actors.) Thus, snapshot aggregators can eventually detect all terminated actors by periodically searching their local stores for finalized subsets. Finally, we give an algorithm for obtaining the maximum finalized subset of a set $Q$ by ``pruning away'' the snapshots of actors that appear not to have terminated. Recall that when we speak of a set of snapshots $Q$, we assume each snapshot was taken by a different actor. We will write $\Phi_A \in Q$ to denote $A$'s snapshot in $Q$; we will also write $A \in Q$ if $A$ has a snapshot in $Q$. We will also write $Q \vdash \phi$ if $\Phi \vdash \phi$ for some $\Phi \in Q$. \begin{definition} A set of snapshots $Q$ is \emph{closed} if, whenever $Q \vdash \Unreleased(\Refob x A B)$ and $B \in Q$, then also $A\in Q$ and $\Phi_A \vdash \Activated(\Refob x A B)$. \end{definition} \begin{definition} An actor $B \in Q$ \emph{appears blocked} if, for every $Q \vdash \Unreleased(\Refob x A B)$, then $\Phi_A,\Phi_B \in Q$ and $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$. \end{definition} \begin{definition} A set of snapshots $Q$ is \emph{finalized} if it is closed and every actor in $Q$ appears blocked. \end{definition} This definition corresponds to our characterization in Section~\ref{sec:garbage-defn}: An actor is terminated precisely when it is in a closed set of blocked actors. \begin{restatable}[Safety]{theorem}{Safety}\label{thm:safety} If $Q$ is a finalized set of snapshots at time $t_f$ then the actors in $Q$ are all terminated at $t_f$. \end{restatable} We say that the \emph{final action} of a terminated actor is the last non-snapshot event it performs before becoming terminated. Notice that an actor's final action can only be an \textsc{Idle}, \textsc{Info}, or \textsc{Release} event. Note also that the final action may come \emph{strictly before} an actor becomes terminated, since a blocked actor may only terminate after all of its potential inverse acquaintances become blocked.
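To make the definitions above concrete, the following is a minimal sketch (in Python) of the finalized-subset check, together with the fixed-point pruning used later in this section to extract the maximum finalized subset. The snapshot representation is our own illustration: $\Unreleased$ facts are stored explicitly rather than derived from $\Created$ and $\Released$ facts, and missing message counts default to zero.
\begin{verbatim}
def is_finalized(Q):
    # Q: dict mapping actor -> snapshot. Each snapshot is a dict with
    #   "unreleased": set of refobs (token, owner, target) it derives
    #                 as unreleased,
    #   "activated":  set of tokens the actor owns,
    #   "sent"/"recv": dicts mapping tokens to message counts.
    for snap in Q.values():
        for (x, owner, target) in snap["unreleased"]:
            if target not in Q:
                continue  # no constraint on actors outside Q
            if owner not in Q or x not in Q[owner]["activated"]:
                return False  # Q is not closed
            if Q[owner]["sent"].get(x, 0) != Q[target]["recv"].get(x, 0):
                return False  # target does not appear blocked
    return True

def max_finalized(Q):
    # Repeatedly remove the target of any irrelevant unreleased refob
    # until a fixed point is reached.
    Q = dict(Q)
    while True:
        victim = next(
            (t for snap in Q.values() for (x, o, t) in snap["unreleased"]
             if t in Q and (o not in Q or x not in Q[o]["activated"]
                            or Q[o]["sent"].get(x, 0) != Q[t]["recv"].get(x, 0))),
            None)
        if victim is None:
            return Q
        del Q[victim]
\end{verbatim}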
The following lemma allows us to prove that DRL is eventually live. It also shows that a non-finalized set of snapshots must have an unblocked actor. \begin{restatable}{lemma}{Completeness}\label{lem:terminated-is-complete} Let $S$ be a closed set of terminated actors at time $t_f$. If every actor in $S$ took a snapshot sometime after its final action, then the resulting set of snapshots is finalized. \end{restatable} \begin{theorem}[Liveness]\label{thm:liveness} If every actor eventually takes a snapshot after performing an \textsc{Idle}, \textsc{Info}, or \textsc{Release} event, then every terminated actor is eventually part of a finalized set of snapshots. \end{theorem} \begin{proof} If an actor $A$ is terminated, then the closure $S$ of $\{A\}$ is a terminated set of actors. Since every actor eventually takes a snapshot after taking its final action, \cref{lem:terminated-is-complete} implies that the resulting snapshots of $S$ are finalized. \end{proof} We say that a refob $\Refob x A B$ is \emph{unreleased} in $Q$ if $Q \vdash \Unreleased(x)$. Such a refob is said to be \emph{relevant} when $B \in Q$ implies $A \in Q$ and $\Phi_A \vdash \Activated(x)$ and $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$; intuitively, this indicates that $B$ has no undelivered messages along $x$. Notice that a set $Q$ is finalized if and only if all unreleased refobs in $Q$ are relevant. Observe that if $\Refob x A B$ is unreleased and irrelevant in $Q$, then $B$ cannot be in any finalized subset of $Q$. We can therefore employ a simple iterative algorithm to find the maximum finalized subset of $Q$: for each irrelevant unreleased refob $\Refob x A B$ in $Q$, remove the target $B$ from $Q$. Since this can make another unreleased refob $\Refob y B C$ irrelevant, we must repeat this process until a fixed point is reached. In the resulting subset $Q'$, all unreleased refobs are relevant. Since all actors in $Q \setminus Q'$ are not members of any finalized subset of $Q$, it must be that $Q'$ is the maximum finalized subset of $Q$. \section{Chain Lemma}\label{sec:chain-lemma} To determine if an actor has terminated, one must show that all of its potential inverse acquaintances have terminated. This appears to pose a problem for termination detection, since actors cannot have a complete listing of all their potential inverse acquaintances without some synchronization: actors would need to consult their acquaintances before creating new references to them. In this section, we show that the DRL protocol provides a weaker guarantee that will nevertheless prove sufficient: knowledge about an actor's refobs is \emph{distributed} across the system and there is always a ``path'' from the actor to any of its potential inverse acquaintances. \begin{figure} \centering \tikzfig{contents/diagrams/chain-lemma} \caption{An example of a chain from $B$ to $x_3$.} \label{fig:chain-example} \end{figure} Let us construct a concrete example of such a path, depicted by Fig.~\ref{fig:chain-example}. Suppose that $A_1$ spawns $B$, gaining a refob $\Refob{x_1}{A_1}{B}$. Then $A_1$ may use $x_1$ to create $\Refob{x_2}{A_2}{B}$, which $A_2$ may receive and then use $x_2$ to create $\Refob{x_3}{A_3}{B}$. At this point, there are unreleased refobs owned by $A_2$ and $A_3$ that are not included in $B$'s knowledge set. However, Fig.~\ref{fig:chain-example} shows that the distributed knowledge of $B,A_1,A_2$ creates a ``path'' to all of $B$'s potential inverse acquaintances.
Since $A_1$ spawned $B$, $B$ knows the fact $\Created(x_1)$. Then when $A_1$ created $x_2$, it added the fact $\CreatedUsing(x_1, x_2)$ to its knowledge set, and likewise $A_2$ added the fact $\CreatedUsing(x_2, x_3)$; each fact points to another actor that owns an unreleased refob to $B$ (Fig.~\ref{fig:chain-example} (1)). Since actors can remove $\CreatedUsing$ facts by sending $\InfoMsg$ messages, we also consider (Fig.~\ref{fig:chain-example} (2)) to be a ``path'' from $B$ to $A_3$. But notice that, once $B$ receives the $\InfoMsg$ message, the fact $\Created(x_3)$ will be added to its knowledge set and so there will be a ``direct path'' from $B$ to $A_3$. We formalize this intuition with the notion of a \emph{chain} in a given configuration $\Config{\alpha}{\mu}{\rho}{\chi}$: \begin{definition} A \emph{chain to $\Refob x A B$} is a sequence of unreleased refobs $(\Refob{x_1}{A_1}{B}),\allowbreak \dots,\allowbreak (\Refob{x_n}{A_n}{B})$ such that: \begin{itemize} \item $\alpha(B) \vdash \Created(\Refob{x_1}{A_1}{B})$; \item For all $i < n$, either $\alpha(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$ or the message $\Msg{B}{\InfoMsg(x_i,x_{i+1})}$ is in transit; and \item $A_n = A$ and $x_n = x$. \end{itemize} \end{definition} We say that an actor $B$ is \emph{in the root set} if it is a receptionist or if there is an application message $\AppMsg(x,R)$ in transit to an external actor with $B \in \text{targets}(R)$. Since external actors never release refobs, actors in the root set must never terminate. \begin{restatable}[Chain Lemma]{lemma}{ChainLemma} \label{lem:chain-lemma} Let $B$ be an internal actor in $\kappa$. If $B$ is not in the root set, then there is a chain to every unreleased refob $\Refob x A B$. Otherwise, there is a chain to some refob $\Refob y C B$ where $C$ is an external actor. \end{restatable} \begin{remark*} When $B$ is in the root set, not all of its unreleased refobs are guaranteed to have chains. This is because an external actor may send $B$'s address to other receptionists without sending an $\InfoMsg$ message to $B$. \end{remark*} An immediate application of the Chain Lemma is to allow actors to detect when they are simple garbage. If any actor besides $B$ owns an unreleased refob to $B$, then $B$ must have a fact $\Created(\Refob x A B)$ in its knowledge set where $A \ne B$. Hence, if $B$ has no such facts, then it must have no nontrivial potential inverse acquaintances. Moreover, since actors can only have undelivered messages along unreleased refobs, $B$ also has no undelivered messages from any other actor; it can only have undelivered messages that it sent to itself. This gives us the following result: \begin{theorem} Suppose $B$ is idle with knowledge set $\Phi$, such that: \begin{itemize} \item $\Phi$ does not contain any facts of the form $\Created(\Refob x A B)$ where $A \ne B$; and \item for all facts $\Created(\Refob x B B) \in \Phi$, also $\Phi \vdash \SentCount(x,n) \land \RecvCount(x,n)$ for some $n$. \end{itemize} Then $B$ is simple garbage. \end{theorem} \section{A Two-Level Semantic Model}\label{sec:model} Our computation model is based on the two-level approach to actor semantics \cite{venkatasubramanianReasoningMetaLevel1995}, in which a lower \emph{system-level} transition system interprets the operations performed by a higher, user-facing \emph{application-level} transition system. In this section, we define the DRL communication protocol at the system level.
We do not provide a transition system for the application level computation model, since it is not relevant to garbage collection (see \cite{aghaFoundationActorComputation1997} for how it can be done). What is relevant to us is that corresponding to each application-level action is a system-level transition that tracks references. We will therefore define \emph{system-level configurations} and \emph{transitions on system-level configurations}. We will refer to these, respectively, as configurations and transitions in the rest of the paper. \subsection{Overview} \label{sec:overview} Actors in DRL use \emph{reference objects} (abbreviated \emph{refobs}) to send messages, instead of using plain actor addresses. Refobs are similar to unidirectional channels and can only be used by their designated \emph{owner} to send messages to their \emph{target}; thus in order for $A$ to give $B$ a reference to $C$, it must explicitly create a new refob owned by $B$. Once a refob is no longer needed, it should be \emph{deactivated} by its owner and removed from local state. The DRL communication protocol enriches each actor's state with a list of refobs that it currently owns and associated message counts representing the number of messages sent using each refob. Each actor also maintains a subset of the refobs of which it is the target, together with associated message receive counts. Lastly, actors perform a form of ``contact tracing'' by maintaining a subset of the refobs that they have created for other actors; we provide details about the bookkeeping later in this section. The additional information above allows us to detect termination by inspecting actor snapshots. If a set of snapshots is consistent (in the sense of \cite{chandyDistributedSnapshotsDetermining1985}) then we can use the ``contact tracing'' information to determine whether the set is \emph{closed} under the potential inverse acquaintance relation (see \cref{sec:chain-lemma}). Then, given a consistent and closed set of snapshots, we can use the message counts to determine whether an actor is blocked. We can therefore find all the terminated actors within a consistent set of snapshots. In fact, DRL satisfies a stronger property: any set of snapshots that ``appears terminated'' in the sense above is guaranteed to be consistent. Hence, given an arbitrary closed set of snapshots, it is possible to determine which of the corresponding actors have terminated. This allows a great deal of freedom in how snapshots are aggregated. For instance, actors could place their snapshots in a global eventually consistent store, with a garbage collection thread at each node periodically inspecting the store for local terminated actors. \paragraph*{Reference Objects} \begin{figure} \centering \tikzfig{contents/diagrams/references} \caption{An example showing how refobs are created and destroyed. Below each actor we list all the ``facts'' related to $z$ that are stored in its local state. Although not pictured in the figure, $A$ also obtains facts $\Activated(x)$ and $\Activated(y)$ after spawning actors $B$ and $C$, respectively. Likewise, actors $B,C$ obtain facts $\Created(x),\Created(y)$, respectively, upon being spawned.} \label{fig:refob-example} \end{figure} A refob is a triple $(x,A,B)$, where $A$ is the owner actor's address, $B$ is the target actor's address, and $x$ is a globally unique token. 
An actor can cheaply generate such a token by combining its address with a local sequence number, since actor systems already guarantee that each address is unique. We will stylize a triple $(x,A,B)$ as $\Refob x A B$. We will also sometimes refer to such a refob as simply $x$, since tokens act as unique identifiers. When an actor $A$ spawns an actor $B$ (Fig.~\ref{fig:refob-example} (1, 2)), the DRL protocol creates a new refob $\Refob x A B$ that is stored in both $A$ and $B$'s system-level state, and a refob $\Refob y B B$ in $B$'s state. The refob $x$ allows $A$ to send application-level messages to $B$. These messages are denoted $\AppMsg(x,R)$, where $R$ is the set of refobs contained in the message that $A$ has created for $B$. The refob $y$ corresponds to the \texttt{self} variable present in some actor languages. If $A$ has active refobs $\Refob x A B$ and $\Refob y A C$, then it can create a new refob $\Refob z B C$ by generating a token $z$. In addition to being sent to $B$, this refob must also temporarily be stored in $A$'s system-level state and marked as ``created using $y$'' (Fig.~\ref{fig:refob-example} (3)). Once $B$ receives $z$, it must add the refob to its system-level state and mark it as ``active'' (Fig.~\ref{fig:refob-example} (4)). Note that $B$ can have multiple distinct refobs that reference the same actor in its state; this can be the result of, for example, several actors concurrently sending refobs to $B$. Transition rules for spawning actors and sending messages are given in Section~\ref{sec:standard-actor-operations}. Actor $A$ may remove $z$ from its state once it has sent a (system-level) $\InfoMsg$ message informing $C$ about $z$ (Fig.~\ref{fig:refob-example} (4)). Similarly, when $B$ no longer needs its refob for $C$, it can ``deactivate'' $z$ by removing it from local state and sending $C$ a (system-level) $\ReleaseMsg$ message (Fig.~\ref{fig:refob-example} (5)). Note that if $B$ already has a refob $\Refob z B C$ and then receives another $\Refob {z'} B C$, then it can be more efficient to defer deactivating the extraneous $z'$ until $z$ is also no longer needed; this way, the $\ReleaseMsg$ messages can be batched together. When $C$ receives an $\InfoMsg$ message, it records that the refob has been created, and when $C$ receives a $\ReleaseMsg$ message, it records that the refob has been released (Fig.~\ref{fig:refob-example} (6)). Note that these messages may arrive in any order. Once $C$ has received both, it is permitted to remove all facts about the refob from its local state. Transition rules for these reference listing actions are given in Section~\ref{sec:release-protocol}. Over its lifetime, a refob progresses through four states: pending, active, inactive, and released. A refob $\Refob z B C$ is said to be \emph{pending} until it is received by its owner $B$. Once received, the refob is \emph{active} until it is \emph{deactivated} by its owner, at which point it becomes \emph{inactive}. Finally, once $C$ learns that $z$ has been deactivated, the refob is said to be \emph{released}. A refob that has not yet been released is \emph{unreleased}. Slightly amending the definition we gave in \cref{sec:background}, we say that $B$ is a \emph{potential acquaintance} of $A$ (and $A$ is a \emph{potential inverse acquaintance} of $B$) when there exists an unreleased refob $\Refob x A B$.
Thus, $B$ becomes a potential acquaintance of $A$ as soon as $x$ is created, and only ceases to be a potential acquaintance once it has received a $\ReleaseMsg$ message for every refob $\Refob y A B$ that has been created so far. \begin{figure} \centering \tikzfig{contents/diagrams/message-counts-timelines-simpler} \caption{A time diagram for actors $A,B,C$, demonstrating message counts and consistent snapshots. Dashed arrows represent messages and dotted lines represent consistent cuts. In each cut above, $B$'s message send count agrees with $C$'s message receive count.} \label{fig:message-counts} \end{figure} \paragraph*{Message Counts and Snapshots} For each refob $\Refob x A B$, the owner $A$ counts the number of $\AppMsg$ and $\InfoMsg$ messages sent along $x$; this count can be deleted when $A$ deactivates $x$. Each message is annotated with the refob used to send it. Whenever $B$ receives an $\AppMsg$ or $\InfoMsg$ message along $x$, it correspondingly increments a receive count for $x$; this count can be deleted once $x$ has been released. Thus the memory overhead of message counts is linear in the number of unreleased refobs. A snapshot is a copy of all the facts in an actor's system-level state at some point in time. We will assume throughout the paper that in every set of snapshots $Q$, each snapshot was taken by a different actor. Such a set is also said to form a \emph{cut}. Recall that a cut is consistent if no snapshot in the cut causally precedes any other \cite{chandyDistributedSnapshotsDetermining1985}. Let us also say that $Q$ is a set of \emph{mutually quiescent} snapshots if there are no undelivered messages between actors in the cut. That is, if $A \in Q$ sent a message to $B \in Q$ before taking a snapshot, then the message must have been delivered before $B$ took its snapshot. Notice that if all snapshots in $Q$ are mutually quiescent, then $Q$ is consistent. Notice also that in Fig.~\ref{fig:message-counts}, the snapshots of $B$ and $C$ are mutually quiescent when their send and receive counts agree. This is ensured in part because each refob has a unique token: if actors associated message counts with actor names instead of tokens, then $B$'s snapshots at $t_0$ and $t_3$ would both contain $\SentCount(C,1)$. Thus, $B$'s snapshot at $t_3$ and $C$'s snapshot at $t_0$ would appear mutually quiescent, despite having undelivered messages in the cut. We would like to conclude that snapshots from two actors $A,B$ are mutually quiescent if and only if their send and receive counts agree for every refob $\Refob x A B$ or $\Refob y B A$. Unfortunately, this fails to hold in general for systems with unordered message delivery. It also fails to hold when, for instance, the owner actor takes a snapshot before the refob is activated and the target actor takes a snapshot after the refob is released. In such a case, neither knowledge set includes a message count for the refob and they therefore appear to agree. However, we show that the message counts can nevertheless be used to bound the number of undelivered messages for purposes of our algorithm (\cref{lem:msg-counts}). \paragraph*{Definitions} We use the capital letters $A,B,C,D,E$ to denote actor addresses. Tokens are denoted $x,y,z$, with a special reserved token $\NullToken$ for messages from external actors.
A \emph{fact} is a value that takes one of the following forms: $\Created(x)$, $\Released(x)$, $\CreatedUsing(x,y)$, $\Activated(x)$, $\Unreleased(x)$, $\SentCount(x,n)$, or $\RecvCount(x,n)$ for some refobs $x,y$ and natural number $n$. Each actor's state holds a set of facts about refobs and message counts called its \emph{knowledge set}. We use $\phi,\psi$ to denote facts and $\Phi,\Psi$ to denote finite sets of facts. Each fact may be interpreted as a \emph{predicate} that indicates the occurrence of some past event. Interpreting a set of facts $\Phi$ as a set of axioms, we write $\Phi \vdash \phi$ when $\phi$ is derivable by first-order logic from $\Phi$ with the following additional rules: \begin{itemize} \item If $(\not\exists n \in \mathbb N,\ \SentCount(x,n) \in \Phi)$ then $\Phi \vdash \SentCount(x,0)$ \item If $(\not\exists n \in \mathbb N,\ \RecvCount(x,n) \in \Phi)$ then $\Phi \vdash \RecvCount(x,0)$ \item If $\Phi \vdash \Created(x) \land \lnot \Released(x)$ then $\Phi \vdash \Unreleased(x)$ \item If $\Phi \vdash \CreatedUsing(x,y)$ then $\Phi \vdash \Created(y)$ \end{itemize} For convenience, we define a pair of functions $\IncSent(x,\Phi),\IncRecv(x,\Phi)$ for incrementing message send/receive counts, as follows: If $\SentCount(x,n) \in \Phi$ for some $n$, then $\IncSent(x,\Phi) = (\Phi \setminus \{\SentCount(x,n)\}) \cup \{\SentCount(x,n+1)\}$; otherwise, $\IncSent(x,\Phi) = \Phi \cup \{\SentCount(x,1)\}$. Likewise for $\IncRecv$ and $\RecvCount$. Recall that an actor is either \emph{busy} (processing a message) or \emph{idle} (waiting for a message). An actor with knowledge set $\Phi$ is denoted $[\Phi]$ if it is busy and $(\Phi)$ if it is idle. Our specification includes both \emph{system messages} (also called \emph{control messages}) and \emph{application messages}. The former are automatically generated by the DRL protocol and handled at the system level, whereas the latter are explicitly created and consumed by user-defined behaviors. Application-level messages are denoted $\AppMsg(x,R)$. The argument $x$ is the refob used to send the message. The second argument $R$ is a set of refobs created by the sender to be used by the destination actor. Any remaining application-specific data in the message is omitted in our notation. The DRL communication protocol uses two kinds of system messages. $\InfoMsg(y, z, B)$ is a message sent from an actor $A$ to an actor $C$, informing it that a new refob $\Refob z B C$ was created using $\Refob y A C$. $\ReleaseMsg(x,n)$ is a message sent from an actor $A$ to an actor $B$, informing it that the refob $\Refob x A B$ has been deactivated and should be released. A \emph{configuration} $\Config{\alpha}{\mu}{\rho}{\chi}$ is a quadruple $(\alpha,\mu,\rho,\chi)$ where: $\alpha$ is a mapping from actor addresses to knowledge sets; $\mu$ is a mapping from actor addresses to multisets of messages; and $\rho,\chi$ are sets of actor addresses. Actors in $\dom(\alpha)$ are \emph{internal actors} and actors in $\chi$ are \emph{external actors}; the two sets may not intersect. The mapping $\mu$ associates each actor with undelivered messages to that actor. Actors in $\rho$ are \emph{receptionists}. We will ensure $\rho \subseteq \dom(\alpha)$ remains valid in any configuration that is derived from a configuration where the property holds (referred to as the locality laws in \cite{Baker-Hewitt-laws77}). Configurations are denoted by $\kappa$, $\kappa'$, $\kappa_0$, etc. If an actor address $A$ (resp. 
a token $x$), does not occur in $\kappa$, then the address (resp. the token) is said to be \emph{fresh}. We assume a facility for generating fresh addresses and tokens. In order to express our transition rules in a pattern-matching style, we will employ the following shorthand. Let $\alpha,[\Phi]_A$ refer to a mapping $\alpha'$ where $\alpha'(A) = [\Phi]$ and $\alpha = \alpha'|_{\dom(\alpha') \setminus \{A\}}$. Similarly, let $\mu,\Msg{A}{m}$ refer to a mapping $\mu'$ where $m \in \mu'(A)$ and $\mu = \mu'|_{\dom(\mu') \setminus \{A\}} \cup \{A \mapsto \mu'(A) \setminus \{m\}\}$. Informally, the expression $\alpha,[\Phi]_A$ refers to a set of actors containing both $\alpha$ and the busy actor $A$ (with knowledge set $\Phi$); the expression $\mu, \Msg{A}{m}$ refers to the set of messages containing both $\mu$ and the message $m$ (sent to actor $A$). The rules of our transition system define atomic transitions from one configuration to another. Each transition rule has a label $l$, parameterized by some variables $\vec x$ that occur in the left- and right-hand configurations. Given a configuration $\kappa$, these parameters functionally determine the next configuration $\kappa'$. Given arguments $\vec v$, we write $\kappa \Step{l(\vec v)} \kappa'$ to denote a semantic step from $\kappa$ to $\kappa'$ using rule $l(\vec v)$. We refer to a label with arguments $l(\vec v)$ as an \emph{event}, denoted $e$. A sequence of events is denoted $\pi$. If $\pi = e_1,\dots,e_n$ then we write $\kappa \Step \pi \kappa'$ when $\kappa \Step{e_1} \kappa_1 \Step{e_2} \dots \Step{e_n} \kappa'$. If there exists $\pi$ such that $\kappa \Step \pi \kappa'$, then $\kappa'$ is \emph{derivable} from $\kappa$. An \emph{execution} is a sequence of events $e_1,\dots,e_n$ such that $\kappa_0 \Step{e_1} \kappa_1 \Step{e_2} \dots \Step{e_n} \kappa_n$, where $\kappa_0$ is the initial configuration (Section~\ref{sec:initial-configuration}). We say that a property holds \emph{at time $t$} if it holds in $\kappa_t$. \subsection{Initial Configuration}\label{sec:initial-configuration} The initial configuration $\kappa_0$ consists of a single actor in a busy state: $$\Config{[\Phi]_A}{\emptyset}{\emptyset}{\{E\}},$$ where $\Phi = \{\Activated(\Refob x A E),\ \Created(\Refob y A A),\ \Activated(\Refob y A A)\}$. The actor's knowledge set includes a refob to itself and a refob to an external actor $E$. $A$ can become a receptionist by sending $E$ a refob to itself. Henceforth, we will only consider configurations that are derivable from an initial configuration. 
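Before presenting the transition rules, it may help to see the knowledge-set operations of Section~\ref{sec:overview} in executable form. The following Python sketch encodes facts as tuples (a hypothetical representation chosen purely for illustration; the names below are not part of the DRL specification) and implements the default message counts together with the $\IncSent$/$\IncRecv$ operations.
\begin{verbatim}
# Facts are tuples, e.g. ("SentCount", x, n) or ("CreatedUsing", y, z);
# a knowledge set is a Python set of such tuples.

def get_count(phi, kind, x):
    """Phi |- kind(x, n); n defaults to 0 when no count fact for x
    is present (the first two derivation rules)."""
    for fact in phi:
        if fact[0] == kind and fact[1] == x:
            return fact[2]
    return 0

def inc_count(phi, kind, x):
    """IncSent/IncRecv: replace kind(x, n) by kind(x, n+1),
    inserting kind(x, 1) when no count fact is present."""
    n = get_count(phi, kind, x)
    return (phi - {(kind, x, n)}) | {(kind, x, n + 1)}

def unreleased(phi, x):
    """Phi |- Unreleased(x): Created(x) is derivable, directly or
    via CreatedUsing(_, x), and Released(x) is not."""
    created = ("Created", x) in phi or \
        any(f[0] == "CreatedUsing" and f[2] == x for f in phi)
    return created and ("Released", x) not in phi
\end{verbatim}
Here \texttt{inc\_count(phi, "SentCount", x)} plays the role of $\IncSent(x,\Phi)$, and similarly for $\RecvCount$.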
\subsection{Standard Actor Operations}\label{sec:standard-actor-operations} \begin{figure}[t] $\textsc{Spawn}(x, A, B)$ $$\Config{\alpha, [\Phi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\Phi \cup \{ \Activated(\Refob x A B) \}]_A, [\Psi]_B}{\mu}{\rho}{\chi}$$ \begin{tabular}{ll} where & $x,y,B$ fresh\\ and & $\Psi = \{ \Created(\Refob x A B),\ \Created(\Refob {y} B B),\ \Activated(\Refob y B B) \}$ \end{tabular} \vspace{0.5cm} $\textsc{Send}(x,\vec y, \vec z, A, B,\vec C)$ $$\Config{\alpha, [\Phi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\IncSent(x,\Phi) \cup \Psi]_A}{\mu, \Msg{B}{\AppMsg(x,R)}}{\rho}{\chi}$$ \begin{tabular}{ll} where & $\vec y$ and $\vec z$ fresh and $n = |\vec y| = |\vec z| = |\vec C|$\\ and & $\Phi \vdash \Activated(\Refob x A B)$ and $\forall i \le n,\ \Phi \vdash \Activated(\Refob{y_i}{A}{C_i})$\\ and & $R = \{\Refob{z_i}{B}{C_i}\ |\ i \le n \}$ and $\Psi = \{\CreatedUsing(y_i,z_i)\ |\ i \le n \}$ \end{tabular} \vspace{0.5cm} $\textsc{Receive}(x,B,R)$ $$\Config{\alpha, (\Phi)_B}{\mu, \Msg{B}{\AppMsg(x,R)}}{\rho}{\chi} \InternalStep \Config{\alpha, [\IncRecv(x,\Phi) \cup \Psi]_B}{\mu}{\rho}{\chi}$$ \begin{tabular}{ll} where $\Psi = \{\Activated(z)\ |\ z \in R\}$ \end{tabular} \vspace{0.5cm} $\textsc{Idle}(A)$ $$\Config{\alpha, [\Phi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi)_A}{\mu}{\rho}{\chi}$$ \caption{Rules for standard actor interactions.} \label{rules:actors} \end{figure} Fig.~\ref{rules:actors} gives transition rules for standard actor operations, such as spawning actors and sending messages. Each of these rules corresponds to a rule in the standard operational semantics of actors~\cite{aghaFoundationActorComputation1997}. Note that each rule is atomic, but can just as well be implemented as a sequence of several smaller steps without loss of generality because actors do not share state -- see \cite{aghaFoundationActorComputation1997} for a formal proof. The \textsc{Spawn} event allows a busy actor $A$ to spawn a new actor $B$ and creates two refobs $\Refob x A B,\ \Refob y B B$. $B$ is initialized with knowledge about $x$ and $y$ via the facts $\Created(x),\Created(y)$. The facts $\Activated(x), \Activated(y)$ allow $A$ and $B$ to immediately begin sending messages to $B$. Note that implementing \textsc{Spawn} does not require a synchronization protocol between $A$ and $B$ to construct $\Refob x A B$. The parent $A$ can pass both its address and the freshly generated token $x$ to the constructor for $B$. Since actors typically know their own addresses, this allows $B$ to construct the triple $(x,A,B)$. Since the \texttt{spawn} call typically returns the address of the spawned actor, $A$ can also create the same triple. The \textsc{Send} event allows a busy actor $A$ to send an application-level message to $B$ containing a set of refobs $z_1,\dots,z_n$ to actors $\vec C = C_1,\dots,C_n$ -- it is possible that $B = A$ or $C_i = A$ for some $i$. For each new refob $z_i$, we say that the message \emph{contains $z_i$}. Any other data in the message besides these refobs is irrelevant to termination detection and therefore omitted. To send the message, $A$ must have active refobs both to the target actor $B$ and to every actor $C_1,\dots,C_n$ referenced in the message. For each target $C_i$, $A$ adds a fact $\CreatedUsing(y_i,z_i)$ to its knowledge set; we say that $A$ \emph{created $z_i$ using $y_i$}.
Finally, $A$ must increment its $\SentCount$ count for the refob $x$ used to send the message; we say that the message is sent \emph{along $x$}. The \textsc{Receive} event allows an idle actor $B$ to become busy by consuming an application message sent to $B$. Before performing subsequent actions, $B$ increments the receive count for $x$ and adds all refobs in the message to its knowledge set. Finally, the \textsc{Idle} event puts a busy actor into the idle state, enabling it to consume another message. \subsection{Release Protocol}\label{sec:release-protocol} \begin{figure}[t!] $\textsc{SendInfo}(y,z,A,B,C)$ $$\Config{\alpha, [\Phi \cup \Psi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\IncSent(y,\Phi)]_A}{\mu,\Msg{C}{\InfoMsg(y,z,B)}}{\rho}{\chi}$$ \begin{tabular}{ll} where $\Psi = \{\CreatedUsing(\Refob y A C,\Refob z B C)\}$ \end{tabular} \vspace{0.5cm} $\textsc{Info}(y,z,B,C)$ $$\Config{\alpha, (\Phi)_C}{\mu, \Msg{C}{\InfoMsg(y,z,B)}}{\rho}{\chi} \InternalStep \Config{\alpha, (\IncRecv(y,\Phi) \cup \Psi)_C}{\mu}{\rho}{\chi}$$ \begin{tabular}{ll} where $\Psi = \{\Created(\Refob z B C)\}$ \end{tabular} \vspace{0.5cm} $\textsc{SendRelease}(x,A,B)$ $$\Config{\alpha, [\Phi \cup \Psi]_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, [\Phi]_A}{\mu, \Msg{B}{\ReleaseMsg(x,n)}}{\rho}{\chi}$$ \begin{tabular}{ll} where &$\Psi = \{\Activated(\Refob x A B), \SentCount(x,n)\}$\\ and & $\not\exists y,\ \CreatedUsing(x,y) \in \Phi$ \end{tabular} \vspace{0.5cm} $\textsc{Release}(x,A,B)$ $$\Config{\alpha, (\Phi)_B}{\mu, \Msg{B}{\ReleaseMsg(x,n)}}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi \cup \{\Released(x)\})_B}{\mu}{\rho}{\chi}$$ \begin{tabular}{l} only if $\Phi \vdash \RecvCount(x,n)$ \end{tabular} \vspace{0.5cm} $\textsc{Compaction}(x,B,C)$ $$\Config{\alpha, (\Phi \cup \Psi)_C}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi)_C}{\mu}{\rho}{\chi}$$ \begin{tabular}{ll} where & $\Psi = \{\Created(\Refob x B C), \Released(\Refob x B C), \RecvCount(x,n)\}$ for some $n \in \mathbb N$\\ or & $\Psi = \{\Created(\Refob x B C), \Released(\Refob x B C)\}$ and $\forall n \in \mathbb N,\ \RecvCount(x,n) \not\in \Phi$ \end{tabular} \vspace{0.5cm} $\textsc{Snapshot}(A, \Phi)$ $$\Config{\alpha, (\Phi)_A}{\mu}{\rho}{\chi} \InternalStep \Config{\alpha, (\Phi)_A}{\mu}{\rho}{\chi}$$ \caption{Rules for performing the release protocol.} \label{rules:release} \end{figure} Whenever an actor creates or receives a refob, it adds facts to its knowledge set. To remove these facts when they are no longer needed, actors can perform the \emph{release protocol} defined in Fig.~\ref{rules:release}. None of these rules is present in the standard operational semantics of actors. The \textsc{SendInfo} event allows a busy actor $A$ to inform $C$ about a refob $\Refob z B C$ that it created using $y$; we say that the $\InfoMsg$ message is sent \emph{along $y$} and \emph{contains $z$}. This event allows $A$ to remove the fact $\CreatedUsing(y,z)$ from its knowledge set. It is crucial that $A$ also increments its $\SentCount$ count for $y$ to indicate an undelivered $\InfoMsg$ message sent to $C$: it allows the snapshot aggregator to detect when there are undelivered $\InfoMsg$ messages, which contain refobs. This message is delivered with the \textsc{Info} event, which adds the fact $\Created(\Refob z B C)$ to $C$'s knowledge set and correspondingly increments $C$'s $\RecvCount$ count for $y$.
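To make this bookkeeping concrete, the following Python fragment sketches the effect of \textsc{SendInfo} and \textsc{Info} on the knowledge sets involved, reusing the hypothetical fact encoding and \texttt{inc\_count} helper from the earlier sketch; it is a schematic illustration, not the transition system itself.
\begin{verbatim}
def send_info(phi_A, y, z, B):
    """A drops CreatedUsing(y, z), counts one more message sent
    along y, and emits Info(y, z, B) to the target of y."""
    phi_A = phi_A - {("CreatedUsing", y, z)}
    phi_A = inc_count(phi_A, "SentCount", y)
    return phi_A, ("Info", y, z, B)

def on_info(phi_C, y, z, B):
    """C records that the refob z (owned by B) was created, and
    counts one more message received along y."""
    phi_C = inc_count(phi_C, "RecvCount", y)
    return phi_C | {("Created", z)}
\end{verbatim}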
When an actor $A$ no longer needs $\Refob x A B$ for sending messages, $A$ can deactivate $x$ with the \textsc{SendRelease} event; we say that the $\ReleaseMsg$ is sent \emph{along $x$}. A precondition of this event is that $A$ has already sent messages to inform $B$ about all the refobs it has created using $x$. In practice, an implementation may defer sending any $\InfoMsg$ or $\ReleaseMsg$ messages to a target $B$ until all $A$'s refobs to $B$ are deactivated. This introduces a trade-off between the number of control messages and the rate of simple garbage detection (Section~\ref{sec:chain-lemma}). Each $\ReleaseMsg$ message for a refob $x$ includes a count $n$ of the number of messages sent using $x$. This ensures that $\ReleaseMsg(x,n)$ is only delivered after all the preceding messages sent along $x$ have been delivered. The \textsc{Release} event can then be executed, adding the fact that $x$ has been released to $B$'s knowledge set. Once $C$ has received both an $\InfoMsg$ and $\ReleaseMsg$ message for a refob $x$, it may remove facts about $x$ from its knowledge set using the \textsc{Compaction} event. Finally, the \textsc{Snapshot} event captures an idle actor's knowledge set. For simplicity, we have omitted the process of disseminating snapshots to an aggregator. Although this event does not change the configuration, it allows us to prove properties about snapshot events at different points in time. \subsection{Composition and Effects}\label{sec:actor-composition} \begin{figure} $\textsc{In}(A,R)$ $$\Config{\alpha}{\mu}{\rho}{\chi} \ExternalStep \Config{\alpha}{\mu, \Msg{A}{\AppMsg(\NullToken, R)}}{\rho}{\chi \cup \chi'}$$ \begin{tabular}{ll} where & $A \in \rho$ and $R = \{ \Refob{x_1}{A}{B_1}, \dots, \Refob{x_n}{A}{B_n} \}$ and $x_1,\dots,x_n$ fresh\\ and & $\{B_1,\dots,B_n\} \cap \dom(\alpha) \subseteq \rho$ and $\chi' = \{B_1,\dots,B_n\} \setminus \dom(\alpha)$ \\ \end{tabular} \vspace{0.5cm} $\textsc{Out}(x,B,R)$ $$\Config{\alpha}{\mu,\Msg{B}{\AppMsg(x, R)}}{\rho}{\chi} \ExternalStep \Config{\alpha}{\mu}{\rho \cup \rho'}{\chi}$$ \begin{tabular}{ll} where $B \in \chi$ and $R = \{ \Refob{x_1}{B}{C_1}, \dots, \Refob{x_n}{B}{C_n} \}$ and $\rho' = \{C_1,\dots,C_n\} \cap \dom(\alpha)$ \end{tabular} \vspace{0.5cm} $\textsc{ReleaseOut}(x,B)$ $$\Config{\alpha}{\mu,\Msg{B}{\ReleaseMsg(x,n)}}{\rho}{\chi \cup \{B\}} \ExternalStep \Config{\alpha}{\mu}{\rho}{\chi \cup \{B\}}$$ \vspace{0.2cm} $\textsc{InfoOut}(y,z,B,C)$ $$\Config{\alpha}{\mu,\Msg{C}{\InfoMsg(y,z,B)}}{\rho}{\chi \cup \{C\}} \ExternalStep \Config{\alpha}{\mu}{\rho}{\chi \cup \{C\}}$$ \caption{Rules for interacting with the outside world.} \label{rules:composition} \end{figure} We give rules to dictate how internal actors interact with external actors in Fig.~\ref{rules:composition}. The \textsc{In} and \textsc{Out} rules correspond to similar rules in the standard operational semantics of actors. Since internal garbage collection protocols are not exposed to the outside world, all $\ReleaseMsg$ and $\InfoMsg$ messages sent to external actors are simply dropped by the \textsc{ReleaseOut} and \textsc{InfoOut} events. Likewise, only $\AppMsg$ messages can enter the system. Since we cannot statically determine when a receptionist's address has been forgotten by all external actors, we assume that receptionists are never terminated. The resulting ``black box'' behavior of our system is the same as the actor systems in \cite{aghaFoundationActorComputation1997}.
Hence, in principle DRL can be gradually integrated into a codebase by creating a subsystem for garbage-collected actors. The \textsc{In} event allows an external actor to send an application-level message to a receptionist $A$ containing a set of refobs $R$, all owned by $A$. Since external actors do not use refobs, the message is sent using the special $\NullToken$ token. All targets in $R$ that are not internal actors are added to the set of external actors. The \textsc{Out} event delivers an application-level message to an external actor with a set of refobs $R$. All internal actors referenced in $R$ become receptionists because their addresses have been exposed to the outside world. \subsection{Garbage}\label{sec:garbage-defn} We can now operationally characterize actor garbage in our model. An actor $A$ can \emph{potentially receive a message} in $\kappa$ if there is a sequence of events (possibly of length zero) leading from $\kappa$ to a configuration $\kappa'$ in which $A$ has an undelivered message. We say that an actor is \emph{terminated} if it is idle and cannot potentially receive a message. An actor is \emph{blocked} if it satisfies three conditions: (1) it is idle, (2) it is not a receptionist, and (3) it has no undelivered messages; otherwise, it is \emph{unblocked}. We define \emph{potential reachability} as the reflexive transitive closure of the potential acquaintance relation. That is, $A_1$ can potentially reach $A_n$ if and only if there is a sequence of unreleased refobs $(\Refob {x_1} {A_1} {A_2}), \dots, (\Refob {x_{n-1}} {A_{n-1}} {A_n})$; recall that a refob $\Refob x A B$ is unreleased if its target $B$ has not yet received a $\ReleaseMsg$ message for $x$. Notice that an actor can potentially receive a message if and only if it is potentially reachable from an unblocked actor. Hence an actor is terminated if and only if it is only potentially reachable by blocked actors. A special case of this is \emph{simple garbage}, in which an actor is blocked and has no potential inverse acquaintances besides itself. We say that a set of actors $S$ is \emph{closed} (with respect to the potential inverse acquaintance relation) if, whenever $B \in S$ and there is an unreleased refob $\Refob x A B$, then also $A \in S$. Notice that the closure of a set of terminated actors is also a set of terminated actors. \section{Appendix} \subsection{Basic Properties} \begin{lemma}\label{lem:release-is-final} If $B$ has undelivered messages along $\Refob x A B$, then $x$ is an unreleased refob. \end{lemma} \begin{proof} There are three types of messages: $\AppMsg, \InfoMsg,$ and $\ReleaseMsg$. All three messages can only be sent when $x$ is active. Moreover, the \textsc{Release} rule ensures that they must all be delivered before $x$ can be released. \end{proof} \begin{lemma}\label{lem:facts-remain-until-cancelled} $\ $ \begin{itemize} \item Once $\CreatedUsing(\Refob y A C, \Refob z B C)$ is added to $A$'s knowledge set, it will not be removed until after $A$ has sent an $\InfoMsg$ message containing $z$ to $C$. \item Once $\Created(\Refob z B C)$ is added to $C$'s knowledge set, it will not be removed until after $C$ has received the (unique) $\ReleaseMsg$ message along $z$. \item Once $\Released(\Refob z B C)$ is added to $C$'s knowledge set, it will not be removed until after $C$ has received the (unique) $\InfoMsg$ message containing $z$. \end{itemize} \end{lemma} \begin{proof} Immediate from the transition rules.
\end{proof} \begin{lemma}\label{lem:msg-counts} Consider a refob $\Refob x A B$. Let $t_1, t_2$ be times such that $x$ has not yet been deactivated at $t_1$ and $x$ has not yet been released at $t_2$. In particular, $t_1$ and $t_2$ may be before the creation time of $x$. Suppose that $\alpha_{t_1}(A) \vdash \SentCount(x,n)$ and $\alpha_{t_2}(B) \vdash \RecvCount(x,m)$ and, if $t_1 < t_2$, that $A$ does not send any messages along $x$ during the interval $[t_1,t_2]$. Then the difference $\max(n - m,0)$ is the number of messages sent along $x$ before $t_1$ that were not received before $t_2$. \end{lemma} \begin{proof} Since $x$ is not deactivated at time $t_1$ and unreleased at time $t_2$, the message counts were never reset by the \textsc{SendRelease} or \textsc{Compaction} rules. Hence $n$ is the number of messages $A$ sent along $x$ before $t_1$ and $m$ is the number of messages $B$ received along $x$ before $t_2$. Hence $\max(n - m, 0)$ is the number of messages sent before $t_1$ and \emph{not} received before $t_2$. \end{proof} \subsection{Chain Lemma} \ChainLemma* \begin{proof} We prove that the invariant holds in the initial configuration and at all subsequent times by induction on events $\kappa \Step e \kappa'$, omitting events that do not affect chains. Let $\kappa = \Config{\alpha}{\mu}{\rho}{\chi}$ and $\kappa' = \Config{\alpha'}{\mu'}{\rho'}{\chi'}$. In the initial configuration, the only refob to an internal actor is $\Refob y A A$. Since $A$ knows $\Created(\Refob{y}{A}{A})$, the invariant is satisfied. In the cases below, let $x,y,z,A,B,C$ be free variables, not referencing the variables used in the statement of the lemma. \begin{itemize} \item $\textsc{Spawn}(x,A,B)$ creates a new unreleased refob $\Refob x A B$, which satisfies the invariant because $\alpha'(B) \vdash \Created(\Refob x A B)$. \item $\textsc{Send}(x,\vec y, \vec z, A,B,\vec C)$ creates a set of refobs $R$. Let $(\Refob z B C) \in R$, created using $\Refob y A C$. If $C$ is already in the root set, then the invariant is trivially preserved. Otherwise, there must be a chain $(\Refob{x_1}{A_1}{C}), \dots, (\Refob{x_n}{A_n}{C})$ where $x_n = y$ and $A_n = A$. Then $x_1,\dots,x_n,z$ is a chain in $\kappa'$, since $\alpha'(A_n) \vdash \CreatedUsing(x_n,z)$. If $B$ is an internal actor, then this shows that every unreleased refob to $C$ has a chain in $\kappa'$. Otherwise, $C$ is in the root set in $\kappa'$. To see that the invariant still holds, notice that $\Refob z B C$ is a witness of the desired chain. \item $\textsc{SendInfo}(y,z,A,B,C)$ removes the $\CreatedUsing(y,z)$ fact but also sends $\InfoMsg(y,z,B)$, so chains are unaffected. \item $\textsc{Info}(y,z,B,C)$ delivers $\InfoMsg(y,z,B)$ to $C$ and adds $\Created(\Refob z B C)$ to its knowledge set. Suppose $\Refob z B C$ is part of a chain $(\Refob{x_1}{A_1}{C}), \dots, (\Refob{x_n}{A_n}{C})$, i.e. $x_i = y$ and $x_{i+1} = z$ and $A_{i+1} = B$ for some $i < n$. Since $\alpha'(C) \vdash \Created(\Refob{x_{i+1}}{A_{i+1}}{C})$, we still have a chain $x_{i+1},\dots,x_n$ in $\kappa'$. \item $\textsc{Release}(x,A,B)$ releases the refob $\Refob x A B$. Since external actors never release their refobs, both $A$ and $B$ must be internal actors. Suppose the released refob was part of a chain $(\Refob{x_1}{A_1}{B}), \dots, (\Refob{x_n}{A_n}{B})$, i.e. $x_i = x$ and $A_i = A$ for some $i < n$. We will show that $x_{i+1},\dots,x_n$ is a chain in $\kappa'$.
Before performing $\textsc{SendRelease}(x_i,A_i,B)$, $A_i$ must have performed the $\textsc{SendInfo}(x_i,x_{i+1},A_i,\allowbreak A_{i+1},B)$ event. Since the $\InfoMsg$ message was sent along $x_i$, Lemma~\ref{lem:release-is-final} ensures that the message must have been delivered before the present \textsc{Release} event. Furthermore, since $x_{i+1}$ is an unreleased refob in $\kappa'$, Lemma~\ref{lem:facts-remain-until-cancelled} ensures that $\alpha'(B) \vdash \Created(\Refob{x_{i+1}}{A_{i+1}}{B})$. \item $\textsc{In}(A,R)$ adds a message from an external actor to the internal actor $A$. This event can only create new refobs that point to receptionists, so it preserves the invariant. \item $\textsc{Out}(x,B,R)$ emits a message $\AppMsg(x,R)$ to the external actor $B$. Since all targets in $R$ are already in the root set, the invariant is preserved. \end{itemize} \end{proof} \subsection{Termination Detection} Given a set of snapshots $Q$ taken before some time $t_f$, we write $Q_t$ to denote those snapshots in $Q$ that were taken before time $t < t_f$. If $\Phi_A \in Q$, we denote the time of $A$'s snapshot as $t_A$. \Completeness* Call the resulting set of snapshots $Q$. First, we prove the following lemma. \begin{lemma}\label{lem:completeness-helper} If $Q \vdash \Unreleased(\Refob x A B)$ and $B \in Q$, then $x$ is unreleased at $t_B$. \end{lemma} \begin{proof} By definition, $Q \vdash \Unreleased(\Refob x A B)$ only if $Q \vdash \Created(x) \land \lnot \Released(x)$. Since $Q \not\vdash \Released(x)$, we must also have $\Phi_B \not\vdash \Released(x)$. For $Q \vdash \Created(x)$, there are two cases. Case 1: $\Phi_B \vdash \Created(x)$. Since $\Phi_B \not\vdash \Released(x)$, \cref{lem:facts-remain-until-cancelled} implies that $x$ is unreleased at time $t_B$. Case 2: For some $C \in Q$ and some $y$, $\Phi_C \vdash \CreatedUsing(y,x)$. Since $C$ performed its final action before taking its snapshot, this implies that $C$ will never send the $\InfoMsg$ message containing $x$ to $B$. Suppose then for a contradiction that $x$ is released at time $t_B$. Since $\Phi_B \not\vdash \Released(x)$, \cref{lem:facts-remain-until-cancelled} implies that $B$ received an $\InfoMsg$ message containing $x$ before its snapshot. But this is impossible because $C$ never sends this message. \end{proof} \begin{proof}[Proof (\cref{lem:terminated-is-complete})] By strong induction on time $t$, we show that $Q$ is closed and that every actor appears blocked. \textbf{Induction hypothesis:} For all times $t' < t$, if $B \in Q_{t'}$ and $Q \vdash \Unreleased(\Refob x A B)$, then $A \in Q$, $Q \vdash \Activated(x)$, and $Q \vdash \SentCount(x,n)$ and $Q \vdash \RecvCount(x,n)$ for some $n$. Since $Q_0 = \emptyset$, the induction hypothesis holds trivially in the initial configuration. Now assume the induction hypothesis. Suppose that $B \in Q$ takes its snapshot at time $t$ with $Q \vdash \Unreleased(\Refob x A B)$, which implies $Q \vdash \Created(x) \land \lnot\Released(x)$. $Q \vdash \Created(x)$ implies that $x$ was created before $t_f$. \cref{lem:completeness-helper} implies that $x$ is also unreleased at time $t_f$, since $B$ cannot perform a \textsc{Release} event after its final action. Hence $A$ is in the closure of $\{B\}$ at time $t_f$, so $A \in Q$. Now suppose $\Phi_A \not\vdash \Activated(x)$. Then either $x$ will be activated after $t_A$ or $x$ was deactivated before $t_A$. The former is impossible because $A$ would need to become unblocked to receive $x$.
Since $x$ is unreleased at time $t_f$ and $t_A < t_f$, the latter implies that there is an undelivered $\ReleaseMsg$ message for $x$ at time $t_f$. But this is impossible as well, since $B$ is blocked at $t_f$. Finally, let $n$ be such that $\Phi_B \vdash \RecvCount(x,n)$; we must show that $\Phi_A \vdash \SentCount(x,n)$. By the above arguments, $x$ is active at time $t_A$ and unreleased at time $t_B$. Since both actors performed their final action before their snapshots, all messages sent before $t_A$ must have been delivered before $t_B$. By Lemma~\ref{lem:msg-counts}, this implies $\Phi_A \vdash \SentCount(x,n)$. \end{proof} We now prove the safety theorem, which states that if $Q$ is a finalized set of snapshots, then the corresponding actors of $Q$ are terminated. We do this by showing that at each time $t$, all actors in $Q_t$ are blocked and all of their potential inverse acquaintances are in $Q$. Consider the first actor $B$ in $Q$ to take a snapshot. We show, using the Chain Lemma, that the closure of this actor is in $Q$. Then, since all potential inverse acquaintances of $B$ take snapshots strictly after $t_B$, it is impossible for $B$ to have any undelivered messages without appearing unblocked. For every subsequent actor $B$ to take a snapshot, we make a similar argument with an additional step: if $B$ has any potential inverse acquaintances in $Q_{t_B}$, then they could not have sent $B$ a message without first becoming unblocked. \Safety* \begin{proof} Proof by induction on events. The induction hypothesis consists of two clauses that must both be satisfied at all times $t \le t_f$. \begin{itemize} \item \textbf{IH 1:} If $B \in Q_t$ and $\Refob x A B$ is unreleased, then $Q \vdash \Unreleased(x)$. \item \textbf{IH 2:} The actors of $Q_t$ are all blocked. \end{itemize} \paragraph*{Initial configuration} Since $Q_0 = \emptyset$, the invariant trivially holds. \paragraph*{$\textsc{Snapshot}(B, \Phi_B)$} Suppose $B \in Q$ takes a snapshot at time $t$. We show that if $\Refob x A B$ is unreleased at time $t$, then $Q \vdash \Unreleased(x)$ and there are no undelivered messages along $x$ from $A$ to $B$. We do this with the help of two lemmas. \begin{lemma}\label{lem:complete-ref} If $Q \vdash \Unreleased(\Refob x A B)$, then $x$ is unreleased at time $t$ and there are no undelivered messages along $x$ at time $t$. Moreover, if $t_A > t$, then there are no undelivered messages along $x$ throughout the interval $[t,t_A]$. \end{lemma} \begin{proof}[Proof (Lemma)] Since $Q$ is closed, we have $A \in Q$ and $\Phi_A \vdash \Activated(x)$. Since $B$ appears blocked, we must have $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$. Suppose $t_A > t$. Since $\Phi_A \vdash \Activated(x)$, $x$ is not deactivated and not released at $t_A$ or $t$. Hence, by Lemma~\ref{lem:msg-counts}, every message sent along $x$ before $t_A$ was received before $t$. Since message sends precede receipts, each of those messages was sent before $t$. Hence there are no undelivered messages along $x$ throughout $[t,t_A]$. Now suppose $t_A < t$. Since $\Phi_A \vdash \Activated(x)$, $x$ is not deactivated and not released at $t_A$. By IH 2, $A$ was blocked throughout the interval $[t_A,t]$, so it could not have sent a $\ReleaseMsg$ message. Hence $x$ is not released at $t$. By Lemma~\ref{lem:msg-counts}, all messages sent along $x$ before $t_A$ must have been delivered before $t$. Hence, there are no undelivered messages along $x$ at time $t$.
\end{proof} \begin{lemma}\label{lem:complete-chains} Let $\Refob{x_1}{A_1}{B}, \dots, \Refob{x_n}{A_n}{B}$ be a chain to $\Refob x A B$ at time $t$. Then $Q \vdash \Unreleased(x)$. \end{lemma} \begin{proof}[Proof (Lemma)] Since all refobs in a chain are unreleased, we know $\forall i \le n,\ \Phi_B \not\vdash \Released(x_i)$ and so $Q \not\vdash \Released(x_i)$. It therefore suffices to prove, by induction on the length of the chain, that $\forall i \le n,\ Q \vdash \Created(x_i)$. \textbf{Base case:} By the definition of a chain, $\alpha_t(B) \vdash \Created(x_1)$, so $\Created(x_1) \in \Phi_B$. \textbf{Induction step:} Assume $Q \vdash \Unreleased(x_i)$, which implies $A_i \in Q$. Let $t_i$ be the time of $A_i$'s snapshot. By the definition of a chain, either the message $\Msg{B}{\InfoMsg(x_i,x_{i+1})}$ is in transit at time $t$, or $\alpha_t(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$. But the first case is impossible by Lemma~\ref{lem:complete-ref}, so we only need to consider the latter. Suppose $t_i > t$. Lemma~\ref{lem:complete-ref} implies that $A_i$ cannot perform the $\textsc{SendInfo}(x_i,x_{i+1},A_i,A_{i+1},B)$ event during $[t,t_i]$. Hence $\alpha_{t_i}(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$, so $Q \vdash \Created(x_{i+1})$. Now suppose $t_i < t$. By IH 2, $A_i$ must have been blocked throughout the interval $[t_i,t]$. Hence $A_i$ could not have created any refobs during this interval, so $x_{i+1}$ must have been created before $t_i$. This implies $\alpha_{t_i}(A_i) \vdash \CreatedUsing(x_i,x_{i+1})$ and therefore $Q \vdash \Created(x_{i+1})$. \end{proof} Lemma~\ref{lem:complete-chains} implies that $B$ cannot be in the root set. If it were, then by the Chain Lemma there would be a refob $\Refob y C B$ with a chain where $C$ is an external actor. Since $Q \vdash \Unreleased(y)$, there would need to be a snapshot from $C$ in $Q$ -- but external actors do not take snapshots, so this is impossible. Since $B$ is not in the root set, there must be a chain to every unreleased refob $\Refob x A B$. By Lemma~\ref{lem:complete-chains}, $Q \vdash \Unreleased(x)$. By Lemma~\ref{lem:complete-ref}, there are no undelivered messages to $B$ along $x$ at time $t$. Since $B$ can only have undelivered messages along unreleased refobs (Lemma~\ref{lem:release-is-final}), the actor is indeed blocked. \paragraph*{$\textsc{Send}(x,\vec y, \vec z, A,B,\vec C)$} In order to maintain IH 2, we must show that if $B \in Q_t$ then this event cannot occur. So suppose $B \in Q_t$. By IH 1, we must have $Q \vdash \Unreleased(\Refob x A B)$, so $A \in Q$. By IH 2, we moreover have $A \not\in Q_t$ -- otherwise $A$ would be blocked and unable to send this message. Since $B$ appears blocked in $Q$, we must have $\Phi_A \vdash \SentCount(x,n)$ and $\Phi_B \vdash \RecvCount(x,n)$ for some $n$. Since $x$ is not deactivated at $t_A$ and unreleased at $t_B$, \cref{lem:msg-counts} implies that every message sent before $t_A$ is received before $t_B$. Hence $A$ cannot send this message to $B$ because $t_A > t > t_B$. In order to maintain IH 1, suppose that one of the refobs sent to $B$ in this step is $\Refob z B C$, where $C \in Q_t$. Then in the next configuration, $\CreatedUsing(y,z)$ occurs in $A$'s knowledge set. By the same argument as above, $A \in Q \setminus Q_t$ and $\Phi_A \vdash \SentCount(y,n)$ and $\Phi_C \vdash \RecvCount(y,n)$ for some $n$. Hence $A$ cannot perform the $\textsc{SendInfo}(y,z,A,B,C)$ event before $t_A$, so $\Phi_A \vdash \CreatedUsing(y,z)$ and $Q \vdash \Created(z)$.
\paragraph*{$\textsc{SendInfo}(y,z,A,B,C)$} By the same argument as above, $A \not\in Q_t$ cannot send an $\InfoMsg$ message to $B \in Q_t$ without violating message counts, so IH 2 is preserved. \paragraph*{$\textsc{SendRelease}(x,A,B)$} Suppose that $A \not\in Q_t$ and $B \in Q_t$. By IH 1, $\Refob x A B$ is unreleased at time $t$. Since $Q$ is finalized, $\Phi_A \vdash \Activated(x)$. Hence $A$ cannot deactivate $x$ and IH 2 is preserved. \paragraph*{$\textsc{In}(A,R)$} Since every potential inverse acquaintance of an actor in $Q_t$ is also in $Q$, none of the actors in $Q_t$ is a receptionist. Hence this rule does not affect the invariants. \paragraph*{$\textsc{Out}(x,B,R)$} Suppose $(\Refob y B C) \in R$ where $C \in Q_t$. Then $y$ is unreleased and $Q \vdash \Unreleased(y)$ and $B \in Q$. But this is impossible because external actors do not take snapshots. \end{proof}
\section{Introduction} \label{sec:intro} The potential field source surface (PFSS) model, established in the 1960s \citep{schatten1969,altschuler1969}, remains the baseline against which more sophisticated coronal magnetic field models are compared, holding its own as a first approximation for the heliospheric magnetic structure even in the era of \textit{Parker Solar Probe} \citep{badman2020}. Satisfying both $\nabla\cdot{\bm B}_{\rm p}=0$ and $\nabla\times{\bm B}_{\rm p}=\boldsymbol{0}$, the PFSS field ${\bm B}_{\rm p}$ minimizes magnetic energy in the region $r_0<r<r_1$ among all magnetic fields that match a given distribution of $B_r$ on $r=r_0$ and satisfy $B_\theta=B_\phi=0$ on $r=r_1$ \citep{priest}. Typically $r_0$ is the solar surface and $r_1$ is fixed somewhere between $1.5r_0$ and $3r_0$. By minimizing magnetic energy, the PFSS model describes the simplest coronal magnetic structure consistent with (radial component) magnetogram observations at $r=r_0$. Over the solar cycle, the PFSS corona -- and thus the minimal possible complexity of the real corona -- varies significantly in structure, in turn driving variations in the heliospheric magnetic topology \citep{wang2003,wang2014}. A natural question to ask, therefore, is how to quantify the complexity of this minimal magnetic structure at any given time. One approach is to count topological features such as magnetic null points \citep{cook2009,freed2015,edwards2015b} or separators \citep{platten2014} that divide the corona into regions of differing magnetic connectivity \citep{longcope2005}. Where they extend to the $r_1$ boundary, these connectivity regions determine the origins of different regions of the solar wind, and \citet{scott2019} have recently developed an automated technique for such a partitioning of the so-called S-web \citep{antiochos2011}. The other approach -- pursued here -- is to describe the PFSS topology in terms of magnetic flux linkage, or helicity integrals. For any magnetic field ${\bm B}=\nabla\times{\bm A}$, the magnetic helicity \begin{equation} h(V_t)=\int_{V_t}{\bm A}\cdot{\bm B}\,\mathrm{d}V \label{eqn:h} \end{equation} is well-known to be an ideal-magnetohydrodynamic invariant in any co-moving subvolume $V_t$ bounded by magnetic surfaces with ${\bm B}\cdot{\bm n}=0$ (where ${\bm n}$ is the surface normal). It measures the net linking between magnetic flux within $V_t$, and thus characterises the topology of $V_t$ \citep[\textit{e.g.},][]{pevtsov2014,moffatt}. Because the $r_0$ and $r_1$ boundaries are not magnetic surfaces, the coronal volume $V$ ($r_0<r<r_1$) cannot be divided into magnetically-closed subvolumes. But we can divide it in such a way that the only non-magnetic subvolume boundaries lie on $r_0$ and/or $r_1$. Then the individual helicity $h(V_t)$ of any of the subvolumes $V_t$ can change only by evolution of $B_r$ on $r_0$ and/or $r_1$ \citep{demonulin2009}, not by ideal motions inside $V$, even if the latter deform the internal boundaries between the subvolumes. As it stands, the definition \eqref{eqn:h} does not immediately make sense for magnetically open (sub)volumes because it then depends on the choice of ${\bm A}$. In Section \ref{sec:def} we will make a specific choice ${\bm A}_{\rm p}$ for our potential field that (i) is as small as possible in that it minimises the integral of $|{\bm A}|^2$, and (ii) ensures that $h(V)=0$ over the whole volume. 
The latter is a natural requirement for the minimum-energy field, and accords with the relative helicity that is often used for non-potential fields \citep{berger1984,finn1985,moraitis2018}. The interesting thing, and the premise of this Letter, is that fixing ${\bm A}={\bm A}_{\rm p}$ globally in this way does not imply ${\bm A}_{\rm p}\cdot{\bm B}_{\rm p}=0$ throughout $V$. As we will show, even potential fields may contain subvolumes with $h(V_t)\neq 0$. The lack of electric currents means that these subhelicities cannot arise from local twisting of magnetic field lines, so they must arise from mutual linking between different active regions and/or the overlying large-scale field. We will see the importance of the latter in Sections \ref{sec:bmr} and \ref{sec:cycle}. The presence of distinct connectivity regions is familiar from the aforementioned topological studies, and their mutual linkage -- and therefore subhelicity -- is effectively forced by the distribution of $B_r$ on $r=r_0$ \citep[cf.][]{bourdin2018}. We comment on the possible significance of this minimal helicity content in Section \ref{sec:conclusion}. \section{Definitions} \label{sec:def} \subsection{Vector potential} Any potential field ${\bm B}_{\rm p}$ in $V$ with no net flux through $r=r_0$ (or consequently through $r=r_1$) may be written as ${\bm B}_{\rm p}=\nabla\times{\bm A}_{\rm p}$ where ${\bm A}_{\rm p}$ is the unique vector potential determined by the conditions $\nabla\cdot{\bm A}_{\rm p}=0$ and ${\bm A}_{\rm p}\cdot\hat{\bm r}=0$ throughout $V$ \cite[\textit{e.g.},][]{berger1988}. One way to find this vector potential is to write \begin{equation} {\bm A}_{\rm p}(r,\theta,\phi) = \nabla\times\big[P(r,\theta,\phi)\hat{\bm r}\big] \label{eqn:ap} \end{equation} and find the potential $P(r,\theta,\phi)$ by solving the two-dimensional Poisson equation $\nabla_{\rm h}^2 P = -B_r$ on each surface of constant $r$. Another is to solve the Poisson equation only on $r=r_0$, then integrate radially \citep[the so-called DeVore-Coulomb gauge;][]{amari2013,yeates2016,moraitis2018} to find \begin{equation} {\bm A}_{\rm p}(r,\theta,\phi) = \frac{r_0}{r}{\bm A}_{\rm p}(r_0,\theta,\phi) + \frac{1}{r}\int_{r_0}^r{\bm B}_{\rm p}(r',\theta,\phi)\times\hat{\bm r}\,r'\,\mathrm{d}r'. \label{eqn:devore} \end{equation} This particular choice of vector potential is the ``simplest possible'' in that it minimises $\int_V|{\bm A}|^2\,\mathrm{d}V$ among all possible vector potentials \citep[cf.][]{gubarev2001}, as well as being a minimiser of $\int_{\partial V}|{\bm A}\times\hat{\bm r}|^2\,\mathrm{d}S$ on the boundary \citep[as advocated by][]{yeates2018}. As such, it is appropriate for defining the minimal field line helicity content of a potential field. \subsection{Helicity measures} \label{sec:h} We will use the finest possible subdivision of $V$: infinitesimal magnetic flux tubes surrounding every magnetic field line. Denoting such a tube of radius $\epsilon$ around a field line $L$ by $V_\epsilon(L)$, and the tube's magnetic flux by $\Phi_0(V_\epsilon(L))$, we consider the field line helicity \begin{equation} \mathcal{A}(L) = \lim_{\epsilon\to 0}\frac{\int_{V_\epsilon(L)}{\bm A}_{\rm p}\cdot{\bm B}_{\rm p}\,\mathrm{d}V}{\Phi_0(V_\epsilon(L))}, \label{eqn:lim} \end{equation} where the normalisation is needed to give a finite limit. 
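Before discussing the properties of $\mathcal{A}$, we remark that the construction \eqref{eqn:devore} is straightforward to discretise. The following Python fragment is a schematic sketch of the radial (trapezoidal) integration on a regular $r$ grid; the array layout and function name are illustrative assumptions, and this is not the finite-difference code used for the calculations below.
\begin{verbatim}
import numpy as np

def devore_vector_potential(r, A0_th, A0_ph, B_th, B_ph):
    """Integrate eqn:devore radially.  Using (B x rhat)_theta = B_phi
    and (B x rhat)_phi = -B_theta:
      A_th(r) = ( r0*A0_th + int_{r0}^r B_ph r' dr' ) / r
      A_ph(r) = ( r0*A0_ph - int_{r0}^r B_th r' dr' ) / r
    r: radii (nr,), r[0] = r0;  A0_*: (nth, nph);  B_*: (nr, nth, nph)."""
    A_th, A_ph = np.empty_like(B_th), np.empty_like(B_ph)
    I_th = np.zeros_like(A0_th)
    I_ph = np.zeros_like(A0_ph)
    A_th[0], A_ph[0] = A0_th, A0_ph
    for k in range(1, r.size):
        dr = r[k] - r[k-1]
        I_th += 0.5*dr*(B_ph[k]*r[k] + B_ph[k-1]*r[k-1])
        I_ph -= 0.5*dr*(B_th[k]*r[k] + B_th[k-1]*r[k-1])
        A_th[k] = (r[0]*A0_th + I_th) / r[k]
        A_ph[k] = (r[0]*A0_ph + I_ph) / r[k]
    return A_th, A_ph
\end{verbatim}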
The properties and meaning of field line helicity were first discussed in detail by \citet[][see also \citealp{aly2018}]{berger1988}, who noted \eqref{eqn:lim} as an alternative to the simpler formula \begin{equation} \mathcal{A}(L) = \int_L{\bm A}_{\rm p}\cdot\,\mathrm{d}{\bm l}, \label{eqn:flh} \end{equation} which is more convenient for calculations \citep{yeates2016,moraitis2019}. In a potential field -- which can contain no closed or ergodic field lines -- it follows from \eqref{eqn:lim} that $h(V)$ may be written as a flux-weighted integral of $\mathcal{A}$ over $\partial V$, with \begin{equation} \frac{1}{2}\int_{\partial V}\mathcal{A}|B_r|\,\mathrm{d}S = \int_V{\bm A}_{\rm p}\cdot{\bm B}_{\rm p}\,\mathrm{d}V = 0. \label{eqn:intabr} \end{equation} The factor half arises because each field line hits the boundary twice (here $\partial V$ includes both the inner and outer boundaries), and the second integral vanishes for our choice of ${\bm A}_{\rm p}$. In this sense, $\mathcal{A}$ decomposes the total helicity into a distribution of invariants for each field line. Each one is topologically meaningful because it can change only by motion of the field line endpoints on $\partial V$, or by reconnection within $V$ \citep{berger1988}. In fact, in a highly-conducting plasma, reconnection tends to redistribute $\mathcal{A}$ between field lines, rather than destroy it \citep{russell2015}. As we will see, the individual field line helicities $\mathcal{A}(L)$ do not vanish in general, even though ${\bm B}_{\rm p}$ is a potential field. As an overall measure of the field line helicity content of a given potential field, we will use the ``unsigned helicity'' \begin{equation} \overline{H} = \frac{1}{2}\int_{\partial V}|\mathcal{A}B_r|\,\mathrm{d}S. \label{eqn:absh} \end{equation} \section{Single bipolar magnetic region} \label{sec:bmr} Before studying data-driven potential-field extrapolations, it is instructive to consider the field line helicity of a single bipolar magnetic region (BMR). Figure \ref{fig:bmr3d} shows three PFSS extrapolations, for (a) a dipolar background field, (b) the single BMR, and (c) their superposition. All extrapolations in this paper are computed on a regular grid of $60\times 180\times 360$ points in $(\log(r/r_0), \cos\theta,\phi)$ coordinates, using the author's finite-difference code \citep{yeates2018code}. To calculate $\mathcal{A}$, we first determine ${\bm A}_{\rm p}$ from ${\bm B}_{\rm p}$ using a finite-difference version of \eqref{eqn:devore} and a fast-Poisson solver for $P(r_0,\theta,\phi)$. A second-order Runge-Kutta method is then used to integrate ${\bm A}_{\rm p}$ along magnetic field lines. \begin{figure*} \centering \includegraphics[width=\textwidth]{bmr_3d.pdf} \caption{PFSS extrapolations for (a) a dipolar field $B_r(r_0) \sim\cos^7\theta$; (b) a localised BMR (defined in Appendix \ref{app:bmr}); and (c) a superposition of the two (with $0^\circ$ BMR tilt). The peak field strength of the BMR is $50\,\mathrm{G}$ compared to $5\,\mathrm{G}$ for the dipolar field. Selected magnetic field lines are colored blue/red by $\mathcal{A}|B_r|$ (in units of $10^{22}\,\mathrm{Mx}^2\,\mathrm{cm}^{-2}$ and using the larger of the two endpoint $|B_r|$ values). Panel (c) is the top left (tilt $0^\circ$) case in Figure \ref{fig:bmr2d}. } \label{fig:bmr3d} \end{figure*} On its own, the dipolar background field (Figure \ref{fig:bmr3d}a) has $\mathcal{A}= 0$ on all field lines. 
To see this, note that axisymmetry of ${\bm B}_{\rm p}$ implies that $P$ is also axisymmetric, so that ${\bm A}_{\rm p}$ has only a $\phi$-component by \eqref{eqn:ap}. But since $B_{{\rm p}\phi}=0$ we have ${\bm A}_{\rm p}\cdot{\bm B}_{\rm p}=0$ throughout $V$. With no background field, the BMR (Figure \ref{fig:bmr3d}b) also has vanishing $\mathcal{A}$ on every closed field line, but this is due to symmetry: opposite values of ${\bm A}_{\rm p}\cdot{\bm B}_{\rm p}$ are encountered as such a field line undergoes equal displacement toward the BMR and away from it. The open field lines at either end of the BMR have a net displacement toward (or away from) the BMR, so have non-zero (but small) $\mathcal{A}$. The combined field in Figure \ref{fig:bmr3d}(c), however, has much more field line helicity. Aside from an overall scaling with magnetic field strength, the distribution of $\mathcal{A}$ in the combined field depends primarily on the orientation (tilt angle) of the BMR. This is shown by Figure \ref{fig:bmr2d}, where the distribution of $\mathcal{A}|B_r|$ is plotted for 8 different orientations of a BMR at the same location (central latitude $20^\circ$ North). The ``helicity content'' -- as measured by $\overline{H}$ -- is maximised at either $0^\circ$ tilt (top left) or $180^\circ$ tilt (top right), when the majority of the BMR flux is perpendicular to the overlying dipolar field. It is minimised at $90^\circ$ when the BMR is aligned with the dipolar field, and is only a little larger at $270^\circ$ when it is anti-aligned. The sign of $\mathcal{A}$ in each part of the BMR depends on the direction of the East-West magnetic field component relative to the overlying field. In the tilt $0^\circ$ BMR, for example, the closed field lines connecting the two BMR polarities have positive $\mathcal{A}$, whereas the field lines at the extremities connecting elsewhere have negative $\mathcal{A}$. Changing the polarity of the BMR (tilt $180^\circ$) reverses this pattern. For tilt $90^\circ$ or $270^\circ$ symmetry means that the closed BMR field lines have no net East-West displacement, so no net $\mathcal{A}$. \begin{figure*} \centering \includegraphics[width=\textwidth]{bmr_bg.pdf} \caption{Field line helicity for different orientations of a BMR, labelled by tilt angle in degrees. In each case, the left panel shows $B_r$ on $r=r_0$ (white positive, black negative) with magnetic field lines colored by $\mathcal{A}|B_r|$ (using the larger of the two endpoint $|B_r|$ values). The right panel shows the corresponding distribution of $\mathcal{A}|B_r|$ on $r=r_0$, with $B_r=\pm 1\,\mathrm{G}$ contours in black. Units of $\mathcal{A}|B_r|$ are $10^{22}\,\mathrm{Mx}^2\,\mathrm{cm}^{-2}$. The dipolar background field is always positive to the north and negative to the south.} \label{fig:bmr2d} \end{figure*} In fact, the tilt $0^\circ$ BMR shown in Figure \ref{fig:bmr2d} has a net positive helicity. Defining the BMR by the $\pm 1\,\mathrm{G}$ contour on $r=r_0$, we estimate the signed integral of $\mathcal{A}|B_r|$ over the closed field lines connecting within this region to be $1.7\times 10^{42}\,\mathrm{Mx}^2$, while that over field lines originating in this region and connecting elsewhere (either open or closed) is $-1.4\times 10^{42}\,\mathrm{Mx}^2$. The net helicity from the BMR region (inside the $1\,\mathrm{G}$ contour) is therefore $+0.3\times 10^{42}\,\mathrm{Mx}^2$. 
This net positive helicity is balanced by a net negative contribution from the closed dipolar field lines, but note that it is an order of magnitude smaller than the unsigned helicity $\overline{H}\approx3.5\times 10^{42}\,\mathrm{Mx}^2$. For the combined field, integrated over the BMR and the dipole, we find $\overline{H}\approx4.7\times 10^{42}\,\mathrm{Mx}^2$. (All of these estimates used field lines traced from a grid on $r=r_0$ at twice the resolution of the ${\bm A}_{\rm p}$ and ${\bm B}_{\rm p}$ fields. Computing the boundary integral of $\mathcal{A}|B_r|$ in \eqref{eqn:intabr} by summing over these same field lines, we obtain $\int_V{\bm A}_{\rm p}\cdot{\bm B}_{\rm p}\,\mathrm{d}V=-0.07\times 10^{42}\,\mathrm{Mx}^2$, suggesting that $\overline{H}$ is accurate to about $1\%$.) In summary, significant unsigned helicity requires significant BMR flux perpendicular to the overlying field, and the signed helicity of the BMR is approximately $10\%$ of its unsigned helicity. \section{Solar cycle evolution} \label{sec:cycle} The same numerical code has been used to compute PFSS extrapolations and $\mathcal{A}$ distributions using synoptic line-of-sight magnetogram data from the Helioseismic and Magnetic Imager \citep[HMI,][]{schou2012} on \textit{Solar Dynamics Observatory}. We use the radial-component, pole-filled maps in the \texttt{hmi.synoptic\_mr\_polfil\_720s} series \citep{sun2018}, for Carrington Rotations CR2098 (2010 June) to CR2226 (2020 February). The maps were prepared by (i) applying a smoothing filter of the form $\mathrm{e}^{-b_0l(l+1)}$ to the spherical harmonic coefficients; (ii) mapping to the computational grid using cubic interpolation; and (iii) correcting flux balance. The grid resolution was fixed at $60\times180\times360$ but we tried increasing the smoothing from $b_0=2\times 10^{-5}$ to $1\times10^{-4}$ and $5\times 10^{-4}$. The flux balance was corrected by multiplicative scaling of both the positive and negative regions to their original mean. Before correction, the maps had varying levels of signed flux up to about 5\% of their unsigned flux. However, the signed flux in any given map before correction showed no correlation with the approximately 1\% signed helicity ($H$) found after correction, consistent with the latter arising solely from numerical error in the subsequent calculation. The source surface was fixed at $r_1=2.5\,R_\odot$. The left column of Figure \ref{fig:bfly} shows how the magnetic flux evolves over latitude and time in this PFSS model. Panel (g) shows the total (unsigned) fluxes through the inner and outer boundaries, defined as \begin{equation} \overline{\Phi_0} = \frac12\int_{r=r_0}|B_r|\,\mathrm{d}S, \qquad \overline{\Phi_1} = \frac12\int_{r=r_1}|B_r|\,\mathrm{d}S. \end{equation} Notice in Figure \ref{fig:bfly} that $\overline{\Phi_0}$ depends on the smoothing $b_0$ but the open flux $\overline{\Phi_1}$ does not, as it is controlled by only the lowest few spherical harmonic degrees \citep{wang2014}. This particular solar cycle does not show a sharp peak in $\overline{\Phi_0}$ but there is quite a sharp peak in $\overline{\Phi_1}$ around CR2156-8. \begin{figure*} \centering \includegraphics[width=\textwidth]{bfly_hmi_1em4.pdf} \caption{Results of the Cycle 24 PFSS extrapolations. Panels (a) and (b) show longitudinal averages of $B_r$ and $\mathcal{A}|B_r|$ on $r=r_0$ as functions of time and latitude. Panels (d) and (e) show similar averages for $|B_r|$ and $|\mathcal{A}B_r|$.
Panels (c) and (f) show time averages of (b) and (e), respectively, separately for the periods before and after the dipole reversal time at CR2140 (dashed green line in the other plots). Finally, as functions of time, panel (g) shows the total unsigned fluxes $\overline{\Phi_0}$ and $\overline{\Phi_1}$ and panel (h) shows $\overline{H}$ (for the different levels of magnetogram smoothing -- the other panels show only the $b_0=1\times10^{-4}$ results).} \label{fig:bfly} \end{figure*} The other columns of Figure \ref{fig:bfly} then summarize the resulting field line helicity. The stackplots in (b) and (e) show longitude averages of $\mathcal{A}|B_r|$ and $|\mathcal{A}B_r|$, respectively, and the time averages of these same quantities are shown in (c) and (f). From (f), we see that the unsigned helicity is predominantly located in the active region belts, and is about ten times the size of the signed helicity in (c), as we found for the single BMR in Section \ref{sec:bmr}. In the active region belts -- between latitudes $\pm30^\circ$ -- the time-averaged signed helicity in (c) has opposite sign in each hemisphere, and also has opposite sign before and after reversal of the Sun's polar field (blue versus red curves). This suggests that the dominant contribution is from the linking of active regions with the overlying dipolar field. Figure \ref{fig:bfly}(h) shows the overall $\overline{H}$ as a function of time -- obtained by integrating (e) over latitude. We find that it correlates most strongly not with $\overline{\Phi_0}^2$ or $\overline{\Phi_1}^2$, but with their product $\overline{\Phi_0}\,\overline{\Phi_1}$ (Figure \ref{fig:cor}). This likely arises because $\overline{\Phi_1}$ itself correlates with the amount of overlying dipolar field above active regions, since both are determined by low-order spherical harmonics. So Figure \ref{fig:cor} further supports the idea that the dominant contribution to field line helicity is linking between active regions and the overlying field. \begin{figure*} \centering \includegraphics[width=\textwidth]{hmi_scatter.pdf} \caption{Scatter plots of $\overline{H}$ against (a) $\overline{\Phi_0}^2$, (b) $\overline{\Phi_1}^2$, and (c) $\overline{\Phi_0}\,\overline{\Phi_1}$ for all Cycle 24 extrapolations. Models with different smoothing $b_0$ are shown by different weights (as per Figure \ref{fig:bfly}). There is no direct relationship in (a) or (b), but least-squares linear fits are shown in (c).} \label{fig:cor} \end{figure*}
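The comparison behind these fits is simple to reproduce. In the following schematic sketch, the per-rotation arrays \texttt{Hbar}, \texttt{Phi0}, and \texttt{Phi1} are hypothetical stand-ins for the measured series; the sketch computes the Pearson correlation and least-squares linear fit for each candidate predictor:
\begin{verbatim}
import numpy as np

def compare_predictors(Hbar, Phi0, Phi1):
    # Compare how well Hbar is predicted by Phi0^2, Phi1^2, and
    # Phi0*Phi1, as in the scatter plots above.
    predictors = {"Phi0^2": Phi0**2,
                  "Phi1^2": Phi1**2,
                  "Phi0*Phi1": Phi0 * Phi1}
    for name, x in predictors.items():
        r = np.corrcoef(x, Hbar)[0, 1]             # Pearson correlation
        slope, intercept = np.polyfit(x, Hbar, 1)  # least-squares fit
        print(f"{name}: r = {r:.3f}, slope = {slope:.3e}")
\end{verbatim}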
The substantial peak in $\overline{H}$ around CR2157-8 is interesting: it arises from a single Southern-hemisphere active region -- the ``Great Solar Active Region'' NOAA 12192 \citep{sun2015}, which was the largest since 1990 \citep{nagy2017}. Shown in Figure \ref{fig:cr2157}, this region has net negative helicity in our PFSS model because it emerges after the polar field reversal with positive leading polarity (following Hale's law). Its pattern corresponds roughly to the $135^\circ$ case in Figure \ref{fig:bmr2d}. Although the real region likely had significant free energy not captured in our PFSS model \citep{sun2015}, it did indeed emerge with negative helicity. This is suggested both by the chirality of extreme-ultraviolet loops in the centre of the region and by estimates of current-helicity in the region \citep{mcmaken2017}. It is also suggested by Figure 3 of \citet{pipin2019}, who use HMI vector magnetograms without extrapolation to map the local helicity density ${\bm A}\cdot{\bm B}$ on $r=r_0$. (Incidentally, those authors chose the same gauge for ${\bm A}\times\hat{\bm r}$ on $r=r_0$ as for our ${\bm A}_{\rm p}$, but their ${\bm A}$ has an additional radial component due to the non-potentiality of the real magnetic field.) We reiterate that this peak in $\overline{H}$ in CR2157 does not simply arise because of the peak in $\overline{\Phi_0}$. There is a similar peak in $\overline{\Phi_0}$ in CR2116 of slightly lower magnitude ($1.8\times 10^{23}\,\mathrm{Mx}$ instead of $2.0\times 10^{23}\,\mathrm{Mx}$ for $b_0=1\times 10^{-4}$), but no significant peak in $\overline{H}$ at that time, mainly because $\overline{\Phi_1}$ was weaker than in CR2157 ($0.97\times 10^{22}\,\mathrm{Mx}$ instead of $2.2\times 10^{22}\,\mathrm{Mx}$). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{hmi_2157_1em4.pdf} \caption{PFSS model for CR2157 ($b_0=1\times10^{-4}$), showing the large active region at longitude $250^\circ$. Panel (a) shows $B_r$ on $r=r_0$ in grayscale (white positive, black negative), along with magnetic field lines colored by $\mathcal{A}|B_r|$ (using the larger of the two endpoint $|B_r|$ values). Panel (b) shows $\mathcal{A}|B_r|$ on $r=r_0$ in blue/red. In both cases the blue/red color scale is in units of $10^{22}\,\mathrm{Mx}^2\,\mathrm{cm}^{-2}$; for clarity it is capped at $\pm 5\times 10^{22}$ in (b) but covers the full range in (a). The dashed black line in both panels shows the zero contour of $B_r$ on $r=r_0$. } \label{fig:cr2157} \end{figure} \section{Discussion} \label{sec:conclusion} How do the minimal helicities obtained in Section \ref{sec:cycle} compare to the real corona, which contains non-potential magnetic energy above the minimal PFSS level? It is not yet possible to give a definitive answer, since the non-potential structure of the global coronal field -- particularly outside of active regions -- remains poorly constrained \citep{yeates2018b}. However, we can compare to rough estimates from the literature. The time-averaged $\overline{H}$ over the whole dataset in Section \ref{sec:cycle} (with $b_0=2\times10^{-5}$) is $4.0\times 10^{43}\,\mathrm{Mx}^2$. This is roughly twice the average (relative) helicity content of a significant active region, which observations and data-driven models suggest to be around $2\times 10^{43}\,\mathrm{Mx}^2$ \citep{devore2000,bleybel2002,bobra2008,pevtsov2008,georgoulis2009}. Notice that $2\times 10^{43}\,\mathrm{Mx}^2$ is an order of magnitude larger than the signed helicity of our potential BMR in Section \ref{sec:bmr}. This non-potential helicity can arise from emergence of non-potential magnetic structures, from post-emergence footpoint motions within the active region, or from large-scale shearing of the region by (primarily) differential rotation. The latter acts even on an idealised BMR \citep{devore2000}. As shown by \citet{hawkes2019}, the spatial pattern of helicity injection from differential rotation in a BMR is rather different from the patterns of PFSS field line helicity seen in Figure \ref{fig:bmr2d}, showing instead a characteristic north-south pattern of positive and negative injection \citep[see also][]{pipin2020}. Estimates also exist for the helicity in interplanetary magnetic clouds, which have originated from the corona. Our mean $\overline{H}$ is roughly ten times the helicity of a typical interplanetary magnetic cloud \citep{demoulin2016}, although the Halloween 2003 event was estimated to remove as much as $2\times 10^{44}\,\mathrm{Mx}^2$ from the Sun \citep{lynch2005}.
In a magneto-frictional model, \citet{lowder2017} found that erupting flux ropes removed, on average, $2.6\times 10^{43}\,\mathrm{Mx}^2$, a more substantial fraction of $\overline{H}$ in the model. Putting these estimates together suggests that the unsigned helicity of the PFSS model is not entirely insignificant. Of course, being the minimum energy field, this helicity cannot be released in eruptions unless the pattern of $B_r$ on $r=r_0$ simplifies. This did indeed happen during Cycle 24 after the peak seen in Section \ref{sec:cycle}. We finish by remarking that, even if the unsigned helicity of the PFSS model is modest, its basic field line helicity pattern may well be imprinted in the real non-potential field. For example, numerical simulations show that this minimal helicity can act as a seed for amplification by photospheric shearing motions \citep{yeates2016}, ultimately explaining the pattern of positive and negative helicity observed in highly sheared filament channels \citep{yeates2009}. Even helicity arising from the magnetic structure on small scales will tend to collect in these filament channels \citep{knizhnik2017}, ultimately leading to flares or eruptions. \acknowledgments This work was supported by The Leverhulme Trust (grant PRG-2017-169) and the UK STFC (grant ST/S000321/1), and benefited from the discussions of the ISSI International Team on Magnetic Helicity in Astrophysical Plasmas. The author thanks P. Wyper and two anonymous reviewers for significantly improving the paper. The \textit{SDO} data are courtesy of NASA and the \textit{SDO}/HMI science team. \vspace{5mm} \facilities{SDO (HMI)}
\section{Introduction} The study of the coherent dynamics of spin ensembles in solids has a long history.\cite{NMR} More recent advances allow the study of single spins in mesoscopic and nanoscale devices.\cite{Awschalom99,Sarma04} Physical confinement to low dimensions enhances interaction effects and leads to novel quantum coherent phenomena involving spins such as spin-charge separation in Luttinger liquids\cite{GiamarchiBook} and skyrmions in quantum Hall ferromagnets.\cite{Sondhi93,Barrett95} In zero-dimensional semiconductor quantum dots, spin-dependent effects predominantly arise from the combination of repulsive Coulomb interactions and the Pauli exclusion principle.\cite{Hanson07} Motivated by quantum information applications,\cite{Loss98} there is now increasing interest in the coherent transport of spin in large arrays of tunnel-coupled quantum dots as a means to distribute quantum information, or to realize more efficient spin-readout, across the array.\cite{Taylor05,Friesen07,Baart16,Fujita17,Mills18,Kandel19,Sigillito19b} A proposed method to achieve charge transport in quantum dot arrays is known as coherent transport by adiabatic passage (CTAP).\cite{Greentree04,Rahman10,Huneke13,Ban18,Platero19,Ban19} This protocol uses an electrical analog of the well-known stimulated Raman adiabatic passage (STIRAP) pulse sequence from atomic, molecular, and optical (AMO) physics to move the electron coherently across the array by keeping it in an adiabatic dark state.\cite{Vitanov01,Vitanov17} Charge coherence times in quantum dots are often relatively short ($\sim 1$ ns),\cite{Hayashi03,Petta04,Petersson10} so far preventing the realization of CTAP in practice. However, the elegance of this method motivates the search for spin-based analogs of CTAP (spin-CTAP) that may allow robust spin transport. Single spins confined in semiconductor quantum dots can have long spin-dephasing times ($T_2^* >1~\mu$s) compared to the timescale of exchange-based spin dynamics ($\lesssim 10$ ns),\cite{Petta05,Veldhorst15,Reed16,He19} setting up much more favorable conditions for adiabatic transfer protocols. In this Article, we develop the theoretical framework of spin-CTAP using the Heisenberg exchange interaction in a linear array of quantum dots in a magnetic field gradient. The combination of exchange interactions and a magnetic field gradient leads to an effective Ising interaction.\cite{Meunier11,Russ18,Zajac18,Watson18} By modulating the exchange interaction in time, we can resonantly drive flip-flop transitions of electron spins on neighboring dots of a linear array.\cite{Nichol17,Sigillito19b,Takeda19} As we show here, applying this exchange modulation according to CTAP pulse sequences allows adiabatic spin transfer across large quantum dot arrays. The investigation of spin transport in Heisenberg-coupled spin chains dates back to foundational work on quantum magnetism,\cite{Bloch30} with many studies focused on optimized state transfer for quantum information applications.\cite{Bose03,Landahl04,Osborne04,Murphy10,Yao11,Makin12} Our approach differs in detail from these previous works because of the large magnetic field gradient imposed by a micromagnet and the use of local, time-dependent control of the exchange interaction throughout the array.
For many spin systems, local control of exchange coupling is difficult to realize; however, it is readily achievable in quantum dot arrays through electrical driving of the gates used to form the dots.\cite{Petta05,Veldhorst15,Reed16,He19} Our spin transfer and entanglement generation protocols are immediately applicable to current experiments.\cite{Mills18,Kandel19,Volk19} The overall simplicity and robustness to pulse imperfections make adiabatic spin transfer a promising method for the readout of large quantum dot arrays. Motivated by similar considerations, a related adiabatic transfer scheme was recently implemented experimentally in an array of GaAs quantum dot spin-qubits.\cite{Kandel20} The paper is organized as follows: In Sec.~\ref{sec:arrays}, we introduce our theoretical model for extended arrays of quantum dots based on a Hubbard model. We then briefly review charge-CTAP in a quantum dot array containing a single electron. In Sec.~\ref{sec:spinctap}, we transition to a regime where each site in the quantum dot array is occupied by a single electron. We include the effects of a magnetic field gradient and develop the theory of spin-CTAP for three-dot arrays, specifically considering the fully polarized subspace with a single spin-flip. Varying the tunnel coupling, and therefore the exchange between adjacent sites, along the array shifts subspaces with different numbers of spin flips out of resonance with the transfer protocol. We use this effect to realize a quantum-controlled version of spin-CTAP conditional on the spin state of the middle electron. We benchmark the performance of our spin-CTAP pulses in the presence of a realistic noise model and study the effects of imperfections in the adiabatic pulse sequences. In Sec.~\ref{sec:multispinctap}, spin-CTAP is generalized to arbitrarily large quantum dot arrays. In Sec.~\ref{sec:ghz}, we show how to use quantum-controlled spin-CTAP to generate many-qubit Greenberger-Horne-Zeilinger (GHZ) states.\cite{NielsenChuang} Including the effects of noise, high-fidelity GHZ state preparation is possible for three dots, with persistent entanglement achievable in arrays of up to 11 dots. We present our conclusions in Sec.~\ref{sec:conclusions}. \section{CTAP in Quantum Dot Arrays} \label{sec:arrays} Arrays of quantum dots with more than three independent, electrically controllable sites are now routinely studied in experiment.\cite{Zajac16,Nichol17,Mortemousque18,Mills18,Sigillito19,Kandel19,Volk19,Dehollain19} A common approach to analyze these experiments is to approximate the low-energy Hamiltonian by a single-band Hubbard model \be H = \sum_{i,j,\sigma} t_{c,ij} c_{i\sigma}^\dag c_{j \sigma} + \sum_i \left[ U_i n_i(n_i-1) - \mu_i n_i \right], \end{equation} where $t_{c,ij}$ is a tunnel coupling matrix element between the lowest orbital states on dots $i$ and $j$, $U_i$ is the local Coulomb repulsion on each dot, and $\mu_i$ is the local chemical potential. Here, $c_{i \sigma}$ is a fermionic annihilation operator on dot $i$ with spin $\sigma$ = $\uparrow$ or $\downarrow$, and $n_i = \sum_{\sigma} c_{i \sigma}^\dag c_{i \sigma}$. When there is only a single electron in a fixed spin state in the entire array, the Hamiltonian has a single-particle description \be H = \sum_{i,j} t_{c,ij} \ket{i}\bra{j} - \sum_i \mu_i \ket{i}\bra{i}, \end{equation} where $\ket{i} = c_{i \downarrow}^\dag \ket{0}$ is the electronic state with a single excess electron in dot $i$ in a spin-down state.
For a linear three-dot array with uniform chemical potentials, this Hamiltonian has the representation in the basis $\{\ket{1},\ket{2},\ket{3}\}$ as \be \label{eqn:hc} H = \left( \begin{array}{c c c} 0 & t_{c,12}(t) & 0 \\ t_{c,12}^*(t) & 0 & t_{c,23}(t) \\ 0 & t_{c,23}^*(t) & 0 \end{array} \right). \end{equation} The idea of CTAP is that the electron charge can be adiabatically transferred from dot 1 to dot 3 by taking advantage of special properties of three-level systems with this Hamiltonian.\cite{Greentree04} In particular, for any value of $t_{c,ij}$ there is a zero-energy eigenstate $\ket{D}$ of $H$ (i.e., $H \ket{D} = 0$) that takes the simple form \be \ket{D} \propto t_{c,23} \ket{1} - t_{c,12}^* \ket{3}. \end{equation} In AMO physics, this zero energy state is called a ``dark state'' because it is a nontrivial superposition state with zero population in the intermediate state $\ket{2}$ of the three-level system. Oftentimes, this intermediate state is an optically excited state that emits photons, which is the origin of the terminology.\cite{QuantumOpticsBook} The dark state is separated from the other two eigenstates of $H$ (often called ``bright states'') by a minimal energy gap \be |\Delta E_{\rm min}| = \sqrt{|t_{c,12}|^2 + |t_{c,23}|^2 }. \end{equation} For a general time-dependent Hamiltonian, the adiabaticity condition to remain in the adiabatic eigenstate $\ket{n}$ takes the form $\sum_{m \ne n}\hbar |\bra{m} \dot{H} \ket{n}|/|E_m - E_n|^2 \ll 1$. Since the adiabatic dark state always has a finite gap from the other two adiabatic bright states, any sufficiently slowly evolving pulse sequence $ \dot{t}_{c,ij} \ll |\Delta E_{\rm min}|^2/\hbar$ will satisfy the adiabaticity condition and maintain population in the dark state. State transfer is achieved for pulse sequences that start with $t_{c,12}(t) \ll t_{c,23}(t) $ and end with $t_{c,12} \gg t_{c,23}$ such that $\ket{D} $ transforms from $ \ket{1}$ at the beginning of the sequence to $\ket{3}$ at the end. In AMO physics, this adiabatic passage sequence, with its characteristic ``counterintuitive'' ordering, is commonly referred to as stimulated Raman adiabatic passage (STIRAP).\cite{Vitanov01} Applying such a pulse sequence for a single electron in a quantum dot array leads to coherent transport of charge by adiabatic passage (CTAP).\cite{Greentree04} By adiabatically turning on a large tunnel coupling on the middle dots to energetically isolate an extended zero energy state, this three-site CTAP protocol can be directly generalized to arbitrarily large arrays of dots.\cite{Greentree04} \section{Spin-CTAP in Quantum Dot Arrays} \label{sec:spinctap} We now consider the generalization of CTAP to the spin degree of freedom. Instead of working in the limit of a single electron in the quantum dot array, we consider the half-filled case with one electron per dot. Strong Coulomb repulsion ($U \sim 2$ meV) leads to the formation of a Mott insulating state where the only mobile degrees of freedom at low energies are the electron spins [see Fig.~\ref{fig:1}(a)].
Integrating out the double occupancies from a single-band, spinful Hubbard model at half-filling generically leads to an effective Heisenberg Hamiltonian for the spins at lowest order in $t_{c,ij}/U_{k}$ \be H= \sum_i g \mu_B \bm{B}_i^{\rm tot}\cdot \bm{s}_i + \sum_{i,j} J_{ij}(t) (\bm{s}_i \cdot \bm{s}_j - 1/4), \end{equation} where $J_{ij}(t)$ is the exchange interaction between the spins on dots $i$ and $j$, $\bm{B}_i^{\rm tot} = B_{\rm ext} \hat{z} + \bm{B}_i^M$ is the local magnetic field experienced by spin $i$ averaged over the orbital wavefunction, and $s_i^\mu = \frac{1}{2} \sum_{\alpha \beta} c_{i \alpha}^\dag \sigma_{\alpha \beta}^\mu c_{i \beta}$ is the local spin-1/2 operator on dot $i$ for the Pauli matrix $\sigma^\mu$ ($\mu = x,~y,~z)$. The electronic $g$-factor is $g \approx 2$ in silicon. The total field includes contributions from the global external field $ B_{\rm ext}$ and a local field $\bm{B}_i^M$ induced by an on-chip micromagnet.\cite{Russ18} The exchange interaction can be modulated in time by changing the tunnel barriers that separate the quantum dots.\cite{Petta05,Veldhorst15,Reed16,He19} In the regime we consider here, where the Zeeman energy is much greater than the thermal energy, $g \mu_B B_i^{\rm tot} \gg k_B T$, we can initialize the ground state of a single dot using energy-selective tunneling.\cite{Elzerman04} Other sites in the array can then be loaded by shuttling electrons\cite{Baart16,Fujita17,Mills18} or applying pairwise SWAP operations.\cite{Nichol17,Kandel19,Takeda19,Sigillito19b} Readout can also be accomplished through spin transport to dots used for spin-to-charge conversion and charge sensing in the array.\cite{Hanson07} \begin{figure}[tb] \begin{center} \includegraphics[width= 0.49\textwidth]{ConceptFig.pdf} \caption{(a) A quantum dot array realizes a spin-1/2 chain. Driving the tunnel barriers modulates the exchange interaction, allowing an adiabatic spin transport protocol which we refer to as spin-CTAP. (b) Exchange pulse profile for spin-CTAP protocol with three dots. Counterintuitively, $j_{23}$ is turned on before $j_{12}$ to keep the system in an adiabatic dark state. } \label{fig:1} \end{center} \end{figure} Single-spin addressability can be achieved in these systems by applying a magnetic field whose variation across each pair of neighboring sites is larger than the pairwise exchange interaction.\cite{Loss98} In this regime, we can write an effective Hamiltonian in the adiabatic approximation as \be \label{eqn:Heff} H= \sum_i \hbar \omega_i s_i^z + \sum_{i,j} \left\{ \bar{J}_{ij} s_i^z s_j^z +\left[j_{ij}(t) e^{i \omega_{ij}t} s_i^- s_j^+ + h.c.\right] \right\}, \end{equation} where $\bar{J}_{ij}$ is the time-averaged exchange, $s_i^\pm$ are spin raising/lowering operators, $j_{ij}(t)$ is the amplitude of the exchange oscillating at a frequency $\omega_{ij}$ near the difference in Zeeman frequency $\Delta_{ij} =g\mu_B( B_i^{\rm tot} - B_j^{\rm tot})/\hbar$, and $\hbar \omega_i = g \mu_B B_i^{\rm tot} + \sum_{j} \bar{J}_{ij}^2/(2\hbar \Delta_{ij})$ is the local spin frequency including a perturbative correction from the time-averaged dc exchange interaction.\cite{Sigillito19b} The condition for the rotating wave approximation to be valid is that the difference in Zeeman energy between each pair of sites is much larger than the exchange and the detuning from resonance. Otherwise, we do not make any assumptions about the spatial profile of the magnetic field.
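To make the structure of Eq.~(\ref{eqn:Heff}) concrete, the following minimal sketch (with $\hbar=1$ and nearest-neighbor couplings only) builds the corresponding $8\times 8$ matrix for three spins; the frequency, exchange, and envelope arrays are placeholders to be supplied by the user:
\begin{verbatim}
import numpy as np

sz = np.diag([0.5, -0.5]).astype(complex)   # s^z
sm = np.array([[0, 0], [1, 0]], complex)    # s^- (lowering operator)
I2 = np.eye(2, dtype=complex)

def embed(op1, site, N):
    # Place a single-site operator at `site` in an N-spin product space.
    mats = [I2] * N
    mats[site] = op1
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def H_eff(t, omega, Jbar, j, omega_d, N=3):
    # Effective Hamiltonian with hbar = 1: Zeeman terms, Ising terms,
    # and ac-driven flip-flop terms for nearest neighbors (i, i+1).
    # omega[i]: local spin frequencies; Jbar[i], j[i](t), omega_d[i]:
    # dc exchange, ac envelope, and drive frequency of the pair (i, i+1).
    H = sum(omega[i] * embed(sz, i, N) for i in range(N))
    for i in range(N - 1):
        H += Jbar[i] * embed(sz, i, N) @ embed(sz, i + 1, N)
        flip = j[i](t) * np.exp(1j * omega_d[i] * t) \
               * embed(sm, i, N) @ embed(sm, i + 1, N).conj().T
        H += flip + flip.conj().T
    return H
\end{verbatim}
Here \texttt{j} is a list of envelope functions; restricting this matrix to the fully polarized subspace with a single spin flip reproduces the three-level structure used below.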
Several recent experiments have operated in the same regime studied here with a large magnetic field gradient and ac exchange driving to realize spin transport or entangling gates.\cite{Nichol17,Takeda19,Sigillito19b} The effective Hamiltonian $H$ conserves $S_z^{\rm tot} = \sum_i s_i^z$, which implies that, when restricted to the fully polarized subspace with a single spin-flip, the many-body dynamics has a single-particle description. In analogy to a particle in a discrete lattice, the transverse exchange interactions act as tunneling terms, while the longitudinal exchange interactions and magnetic fields act as local potentials. We exploit this simplified description to design spin-CTAP pulse sequences. Building on this, we then take advantage of the many-body interacting nature of the problem to realize a form of quantum-controlled spin-CTAP that can be used to generate GHZ states in quantum dot arrays. In the subsections below, we consider a linear array of three silicon quantum dots and show how to achieve state transfer $\ket{\uparrow \downarrow \downarrow} \to \ket{ \downarrow \downarrow \uparrow}$. In Sec.~\ref{sec:multispinctap}, we show how to generalize our results to arbitrarily large one-dimensional arrays. The basic control sequence is illustrated in Fig.~\ref{fig:1}(b). This pulse sequence has the ``counter-intuitive'' ordering that $j_{23}$ is turned on before $j_{12}$, which, we show below, ensures that the system remains adiabatically in the dark state of the three-level system without ever directly exciting the intermediate state $\ket{\downarrow \uparrow \downarrow}$.\cite{Vitanov01,Greentree04,Vitanov17} We first study state transfer for idealized Gaussian pulses \begin{align} \label{eqn:ctap1} j_{12}(t) &= j_{0} \exp\bigg[ -\bigg( t- \frac{t_{0}+2\sigma}{2}\bigg)^2/2\sigma^2\bigg] , \\ \label{eqn:ctap2} j_{23}(t) &= j_{0} \exp\bigg[ -\bigg( t- \frac{t_{0}-2\sigma}{2}\bigg)^2/2\sigma^2\bigg] , \end{align} where $j_0$ is the peak amplitude, the two pulses are centered at $t_0/2 \pm \sigma$ (so that $t_0/2$ is their mean center), and $\sigma$ is the pulse width, which is set equal to the offset of each pulse from this mean center. For $t<0$ we set $j_{12}=j_{23} = 0$ and define a maximal cutoff time $t_{\max}$ such that $j_{12}=j_{23}=0$ for $t>t_{\max}$. In practice, it may be difficult to realize ideal Gaussian pulses; however, the adiabatic transfer protocol only relies on the existence of a well-defined dark state that satisfies the adiabaticity condition. As a result, it is robust to small pulse imperfections, as we describe in more detail in Sec.~\ref{sec:pulse}. \subsection{Resonantly Driven Spin Subspace} We now consider the transfer of the spin state across a three-dot array.
Restricting to the $S_z^{\rm tot} =-1/2$ subspace and moving into a rotating frame $H \to U^\dag H U - i \hbar U^\dag dU/dt$ with $U = e^{- i \sum_{j=1}^{N-1} \delta_j s_j^z t}$ and $\delta_j =\sum_{k \ge j } \omega_{k k+1}$, the Hamiltonian in the basis $\{\ket{\uparrow \downarrow \downarrow},\ket{ \downarrow \uparrow \downarrow},\ket{ \downarrow \downarrow \uparrow}\}$ takes the form [see Fig.~\ref{fig:3dot}(a) for the level diagram] \be \label{eqn:h0} H_{0} = \left( \begin{array}{c c c} \eta_2^0 & j_{12}(t) & 0 \\ j_{12}^*(t) & \eta_1^0 & j_{23}(t) \\ 0 & j_{23}^*(t) & 0 \end{array} \right), \end{equation} where the ``two-photon'' energy detuning (terminology is taken from quantum optics, e.g., Ref.~\onlinecite{QuantumOpticsBook}) is $ \eta_2^{0} = E_1^0-E_3^0 -\hbar (\omega_{12} +\omega_{23})$, the ``single-photon'' energy detuning is $ \eta_1^0 = E_2^0-E_3^0- \hbar \omega_{23}$, the bare energies are $E_i^0 =E_0+ \hbar \omega_i - \sum_{j} \bar{J}_{ij}/2$, and $E_0 = -\sum_i \hbar \omega_i/2$ is an energy offset. The phase of $j_{ij}$ is set by the phase of the ac exchange drive.\cite{Sigillito19b} For illustrative purposes, we have chosen a magnetic field gradient profile with $B_1^{\rm tot} < B_3^{\rm tot} < B_2^{\rm tot} $, so that the level diagram in the $S_z^{\rm tot} = \pm 1/2$ subspace maps to a canonical $\Lambda/V$ system. This assumption is not required, and our numerical simulations below are performed for the more natural profile $B_1^{\rm tot} < B_2^{\rm tot} < B_3^{\rm tot} $.\cite{Zajac18} Similar to Eq.~(\ref{eqn:hc}), we can write down the adiabatic dark state of $H_0$ for $\eta_2^0$ = 0 and any value of $\eta_1^0$ \be \ket{D_0 } \propto j_{23}(t) \ket{\uparrow \downarrow \downarrow} - j_{12}^* (t) \ket{\downarrow \downarrow \uparrow}, \end{equation} which satisfies $H_0(t) \ket{D_0(t)} = 0$ for all times $t$. This state is separated from the other two adiabatic eigenstates (the bright states) by a minimal energy gap \be |\Delta E_{\rm min}| = \sqrt{|j_{12}(t)|^2 + |j_{23}(t)|^2 + \eta_1^{0\,2}/4} - |\eta_1^0|/2. \end{equation} Thus, by choosing a sufficiently slowly varying exchange $\hbar \dot{j}_{ij}/|\Delta E_{\rm min}|^2 \ll 1$, we can ensure that the adiabaticity condition is satisfied. In this limit, the system will remain in the adiabatic eigenstates during the evolution. Note that the precise values of $\bar{J}_{ij}$ are not relevant to the design of the pulse sequence because these values only enter into the resonance conditions for the ac driving fields. In the next section, however, we will show that when the $S_z^{\rm tot} = -1/2$ subspace is tuned into resonance, then the behavior of the $S_z^{\rm tot} = 1/2$ subspace sensitively depends on the relative values of $\bar{J}_{12}$ and $\bar{J}_{23}$. \begin{figure}[bt] \begin{center} \includegraphics[width= .49\textwidth]{3dotSpinCTAP.pdf} \caption{(a) Level diagram in the $S_z^{\rm tot} = -1/2$ subspace realizes a canonical three-level system. For illustrative purposes we took $B_1^z < B_3^z < B_2^z$ to realize a $\Lambda$ system, but our analysis does not rely on this condition. (b) Spin-up population $p_{i \uparrow} = 1/2+\mean{s_i^z}$ on dots 1 and 3 during the spin-CTAP pulse sequence, illustrating adiabatic transfer of the spin across the array.
In these simulations, we took a gradient profile with $B_1^z < B_2^z < B_3^z$, $\Delta_{i i+1}/2\pi = -150~$MHz, $\bar{J}_{12/23}/h = 20/40$ MHz, $j_0/h = 3 $ MHz, $\omega_{12/23}/2\pi = -190/100$~MHz, $t_{\rm max} =20 \hbar \pi/j_0$, and $\sigma = t_{\rm max}/8$. } \label{fig:3dot} \end{center} \end{figure} As an example of the spin-CTAP performance, we show the population dynamics of the two spin states under this driving protocol in Fig.~\ref{fig:3dot}(b). When the initial state is $\ket{\psi_0} = \ket{\uparrow \downarrow \downarrow}$, it evolves adiabatically into the state $\ket{ \downarrow \downarrow \uparrow}$ with high fidelity $>99\%$. Finally, we remark that when the system is initialized in the state $\ket{\downarrow \downarrow\uparrow}$, the left-to-right spin-CTAP pulse sequence has the ``intuitive'' ordering and can still transfer the spin-up state across the array from right to left. There is an important difference, though: this right-to-left process is mediated by the two adiabatic bright states instead of the dark state. As a result, this ``backwards'' right-to-left transfer process generally has a lower fidelity than the left-to-right transfer process. \subsection{Blockaded Spin Subspace} We next describe how to realize a quantum-controlled version of spin-CTAP that is conditioned on the spin state of the middle electron. In the $S_z^{\rm tot}= 1/2$ subspace, the Hamiltonian in the basis $\{\ket{\downarrow \uparrow \uparrow},\ket{ \uparrow \downarrow \uparrow},\ket{\uparrow \uparrow \downarrow}\}$ takes the same form as Eq.~(\ref{eqn:h0}) with $j_{ij}(t) \to j_{ij}^*(t)$, $\omega_{ij} \to -\omega_{ij}$, and the shifted energies $E_i^1 = -E_0 - \hbar \omega_i - \sum_{j} \bar{J}_{ij}/2$ [see Fig.~\ref{fig:3dot2}(a)]. The complex conjugation can be understood as arising from a time-reversal operation associated with switching to this subspace. These modifications imply that if we set $\eta_1^0 = \eta_2^0 = 0$, then the $S_z^{\rm tot}=1/2$ sector will have a finite one- and two-photon detuning $ \eta_1^1 = - \bar{J}_{12}$ and $ \eta_2^1 = \bar{J}_{23}-\bar{J}_{12}$, respectively. As a result, for a finite exchange gradient $\delta J = \bar{J}_{23}-\bar{J}_{12}$, the two-photon detuning $\eta_2^1$ becomes nonzero. \begin{figure}[bt] \begin{center} \includegraphics[width= .49\textwidth]{3dotSpinCTAP_blockade.pdf} \caption{(a) Level diagram in the $S_z^{\rm tot} = +1/2$ subspace realizes a $V$ system for the same gradient profile as Fig.~\ref{fig:3dot}(a). When the system is tuned for spin-CTAP in the $S_z^{\rm tot} = -1/2$ subspace, but $\bar{J}_{12} \ne \bar{J}_{23}$, then transport in the $S_z^{\rm tot} = 1/2$ subspace is blocked because the adiabatic dark state begins and ends on one side of the array. This blockade effect can be used to generate GHZ states. (b) Spin-up population $p_{i \uparrow} = 1/2+\mean{s_i^z}$ in the blockaded subspace. The spin-up electron in dot 2 blocks spin-CTAP because the adiabatic dark state remains localized in dot 1. We took parameters as in Fig.~\ref{fig:3dot}(b).} \label{fig:3dot2} \end{center} \end{figure} Despite the different effective Hamiltonians, when $\bar{J}_{12} = \bar{J}_{23}$ the $S_z^{\rm tot} = 1/2$ subspace still undergoes a transfer process from the state $\ket{\uparrow \uparrow \downarrow}$ to $\ket{\downarrow \uparrow \uparrow }$. This transfer proceeds through a different mechanism, however, because it is effectively driving the transfer from right to left (3 to 1) instead of left to right (1 to 3).
As we mentioned in the previous subsection, in the adiabatic limit, this reversed state transfer process is mediated by the two bright states, but the transfer fidelity still converges to one in the ideal limit. Thus, for $\bar{J}_{12} = \bar{J}_{23}$, the ideal transfer process will effectively map the spin population across the array in both subspaces. On the other hand, when $\bar{J}_{12}\ne \bar{J}_{23}$ and the system is tuned for spin-CTAP in the $S_z^{\rm tot} = -1/2$ subspace, we now show that the $S_z^{\rm tot} = 1/2$ subspace is blocked from adiabatic transport. Starting from the state $\ket{\uparrow \uparrow \downarrow}$ with $j_{12}=j_{23}=0$, we can calculate the associated adiabatic eigenstate for finite $j_{ij}$ in the limit $|j_{23}(t)| \ll |\eta_1^1|$ and $|j_{12}(t)j_{23}(t)/ \eta_1^1| \ll \eta_2^1$ \be \ket{D_1} \approx \bigg[ 1 - \frac{|j_{23}(t)|^2}{2 \eta_1^{1\,2}} \bigg] \ket{\uparrow \uparrow \downarrow} + \frac{j_{12}^* j_{23}^*}{ \eta_2^1 \eta_1^1} \ket{\downarrow \uparrow \uparrow} - \frac{j_{23}^*}{ \eta_1^1} \ket{\uparrow \downarrow \uparrow}. \end{equation} As a result, the adiabatic spin-state configuration in this subspace remains localized during the spin-CTAP pulse sequence. This implies that we can realize a quantum-controlled version of spin-CTAP where the spin state of the middle electron acts as the control qubit. As we show in Fig.~\ref{fig:3dot2}(b), when the middle spin is pointing up, $\ket{\psi_0} = \ket{\uparrow \uparrow \downarrow}$, the spin population returns to dot 1 at the end of the pulse sequence. For the transfer process to be adiabatic, we require the pulse width $ \sigma$ and overall length $t_{\rm max}$ to be large compared to $\hbar j_0^{-1}$ and $\hbar \delta J^{-1}$. In Fig.~\ref{fig:3dot}(b) and Fig.~\ref{fig:3dot2}(b), we took $\delta J/j_0 = 6.67 $, $t_{\rm max} = 20 \pi \hbar/j_0$ and $\sigma = t_{\rm max}/8$. These values satisfy both these constraints for the experimentally relevant parameters of $J_{12/23}/h = 20/40$ MHz and $t_{\rm max} = 3.33~\mu$s.\cite{Zajac18,Watson18} An interesting subject for future work will be to consider ``shortcuts to adiabaticity'' to speed up this transfer process without reducing the fidelity.\cite{Oh13,Torrontegui13,Li18,Ban19} \subsection{Effect of Noise} To characterize the performance of spin-CTAP under more realistic conditions, we numerically simulate the protocol in the presence of noise in both the local magnetic field on each dot and the exchange interaction. For illustrative purposes, we focus on the simplest realization of spin-CTAP with three quantum dots in the resonantly driven $S_z^{\rm tot} = -1/2$ subspace. We use a noise model, described in more detail in our recent work,\cite{Gullans19} which is parameterized by the coherence time $T_{2i}^{*}$ on each dot and a quality factor $Q_{e,ij}$ that determines the envelope decay rate for exchange oscillations between dots $i$ and $j$. The $T_2^*$ decoherence processes are modeled by adding $1/f$ noise to the $\omega_i$ parameter, while the $Q_{e,ij}$ decoherence is modeled by coupling the same $1/f$ noise field to the parameters $\bar{J}_{ij}$ and $j_{ij}$:
\begin{align} \omega_i(t) &= \omega_i^0 + \omega^n_{i} v_{i}(t) , \\ J_{ij}(t) & = J_{ij}^0 \{1 + \delta J_{ij}^n [v_i(t) + v_j(t)] \}, \\ j_{ij}(t) & = j_{ij}^0 \{1 + \delta J_{ij}^n [v_i(t) + v_j(t)] \}, \end{align} where the amplitude of the noise on each dot satisfies $\mean{v_i(t) v_j(t)} = \delta_{ij} v_0^2$ with $ v_0 = \sqrt{ 2 A \log(f_c/f_\ell)}$, where $A$ is the amplitude of the $1/f$ noise in eV$^2$/Hz, $f_{c/\ell}$ are the high/low frequency cutoffs, $\omega_i^n = (v_0 T_{2,i}^*)^{-1}$, and $\delta J_{ij}^n = (\sqrt{2} v_0 Q_{e,ij})^{-1}$. We make the simplifying assumptions that the noise is quasistatic over the relevant timescales and that $T_{2i}^{*}$ and $Q_{e,ij}$ do not vary throughout the array. In Fig.~\ref{fig:noise}(a), we show that, when transferring spin eigenstates, spin-CTAP becomes robust against noise already at relatively modest values of $Q_e > 20$ and $T_2^* > 1~\mu$s, as quantified by the projection fidelity $F_p = 1/2+\mean{s_3^z}$. Under these conditions, we find that the main source of decoherence arises from charge noise that leads to a finite $Q_e$. We see very little change when increasing $T_2^*$ from 1 to 10~$\mu$s. \begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{NoiseSim.pdf} \caption{ (a) Projection fidelity $F_p = 1/2+\mean{s_3^z}$ for three-dot spin-CTAP in the presence of quasi-static noise. The maximal fidelity is limited by nonadiabatic corrections to $\sim$95$\%$ for these parameters: $\Delta_{i i+1}/2\pi = -150~$MHz, $J_{12/23}/h = 20/40$ MHz, $j_0/h = 3 $ MHz, $\omega_{12/23}/2\pi = -190/100$~MHz, $\sqrt{A} = 0.5~\mu$eV$/\sqrt{\rm Hz}$,\cite{Yoneda18,Mi18b} $f_\ell = 0.16~$mHz, $f_c = 100$~kHz, $t_{\rm max} =10 \hbar \pi/j_0$, and $\sigma = t_{\rm max}/8$. We chose a relatively fast transfer time to balance effects from noise with nonadiabatic corrections. $Q_e$ and $T_2^*$ are taken to be uniform across the array. (inset) The average gate fidelity $F_g$ rapidly converges to one with increasing $t_{\max}$. (b) $F_g$ for parameters as in (a) with a maximal fidelity of $\sim 98\%$. Error bars denote one standard deviation due to fluctuations in noise realizations.} \label{fig:noise} \end{center} \end{figure} It is also of interest to consider the performance of the transfer protocol for more general quantum states. We characterize this fidelity by treating the spin-CTAP transfer process \be \ket{\psi} \otimes \ket{\downarrow \downarrow} \to \ket{\downarrow \downarrow} \otimes \ket{\psi} \end{equation} as a quantum channel $\mathcal{E}$ that maps an arbitrary quantum state on the first site to the last site and traces over the remaining sites in the system. In the ideal case, this channel acts as an identity operation (up to a deterministic $z$-rotation that we correct) on the single-qubit Hilbert space of the transferred site. As a result, we can use the average gate fidelity to characterize the performance of the transfer protocol\cite{NielsenChuang} \be F_g = \int d \psi \bra{\psi} \mathcal{E}(\ket{\psi} \bra{\psi}) \ket{\psi}, \end{equation} where $d \psi$ is the Haar measure over the quantum states of a single qubit. In the inset to Fig.~\ref{fig:noise}(a) we show $F_g$ vs. $t_{\max}$ in the limit of zero noise, which illustrates that the ideal fidelity rapidly converges to one. The results for $F_g$ including noise as a function of $Q_e$ are shown in Fig.~\ref{fig:noise}(b). Interestingly, the fidelity first plateaus near $2/3$ before increasing towards the noiseless limit at large values of $Q_e > 200$.
The initial plateau coincides with the convergence of the projection fidelity, while the slower increase with $Q_e$ arises because the transfer of superposition states is sensitive to phase fluctuations in the wavefunction that vary from shot to shot due to the noise. A related feature observed in the fidelity is the much stronger dependence on $T_2^*$. When the total transfer time [$t_{\max} = 1.67~\mu$s in Fig.~\ref{fig:noise}(b)] becomes comparable to $T_2^*$, the fidelity substantially decreases from the noiseless limit due to shot-to-shot variations in the phase accumulation during the transfer process. This behavior is in sharp contrast to what was observed for $F_p$, which is insensitive to phase fluctuations even when $t_{\max} \sim T_2^*$. Finally, we remark that the average gate fidelities calculated here are comparable to measured fidelities for SWAP gates under similar conditions.\cite{Nichol17,Takeda19,Sigillito19b} Thus, we conclude that, under some conditions, spin-CTAP is a viable alternative to sequential SWAP gates for transferring spin states in the array. \subsection{Imperfections in AC Exchange Driving} \label{sec:pulse} A central requirement of our proposal is the ability to simultaneously turn on exchange between every neighboring pair of sites across the array. Achieving this regime can be challenging and often leads to a nonlinear dependence of the exchange on the external gate voltages.\cite{Qiao20,Pan20} As a result, it may be difficult in practice to realize the ideally shaped Gaussian pulses considered in the previous section. Fortunately, the adiabatic nature of the control scheme renders spin-CTAP largely insensitive to these effects. Another source of non-idealities is the potential for crosstalk between gates.\cite{vanderWiel03,Hensgens17,Mills18,Volk19} In the context of our work, one needs to avoid an effect whereby modulating the exchange on one pair of dots induces non-negligible ac exchange driving on neighboring pairs. Provided the magnetic field gradient between sites is non-uniform across the array, which is typical in devices where the gradient is produced by a proximal micromagnet,\cite{Sigillito19} this ac exchange driving will be off-resonant. As a result, these cross-driving effects can be neglected for the weakly driven limit considered here. For example, for an ac exchange driving of 10 MHz and a gate crosstalk of 10\% or less, the variation or disorder in the magnetic field gradient should be much greater than 40~$\mu$T to avoid cross-driving effects. To study the impact of pulse distortions more quantitatively, we use a simple model for the exchange interaction described in Ref.~\onlinecite{Zajac18}. In a single-band Fermi-Hubbard model for a quantum dot array, the exchange has the scaling $J \sim |t_c|^2/U$, where $t_c\sim1-100~\mu$eV is the tunneling between the two dots and $U \sim 5~$meV is the on-site interaction (estimates are for Si/SiGe quantum dots \cite{Zajac18}). By modeling the barrier between the two quantum dots as a square well and using the WKB approximation, one can derive a functional form for the exchange \be \label{eqn:jvb} J \propto |t_c|^2 \propto \frac{16 E(V-E) }{V^2} \exp\big( - 2 W \sqrt{2 m|V-E|}/\hbar \big), \end{equation} where $V$ and $W$ are the potential barrier height and width, $E$ is the energy of the unperturbed states, and $m$ is the electron mass.
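As a rough numerical illustration of this exponential sensitivity, the following sketch evaluates Eq.~(\ref{eqn:jvb}) up to an overall constant; the parameter values are assumed for illustration, and the bare electron mass is used (the appropriate effective mass in Si/SiGe would rescale the decay constant):
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34      # J s
M_E  = 9.1093837015e-31     # kg (bare electron mass; an assumption)
EV   = 1.602176634e-19      # J per eV

def exchange_profile(V_eV, E_eV=1.0e-3, W_nm=30.0):
    # WKB form of Eq. (jvb), up to an overall constant:
    # J ~ [16 E (V - E) / V^2] * exp(-2 W sqrt(2 m |V - E|) / hbar).
    V, E, W = V_eV * EV, E_eV * EV, W_nm * 1e-9
    kappa = np.sqrt(2.0 * M_E * np.abs(V - E)) / HBAR   # decay constant
    return 16.0 * E * (V - E) / V**2 * np.exp(-2.0 * W * kappa)

# Near-exponential suppression with increasing barrier height:
for V in (1.5e-3, 2.0e-3, 2.5e-3):   # barrier heights in eV (assumed)
    print(V, exchange_profile(V))
\end{verbatim}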
Using the approximation $V \propto - V_B(t) + {\rm offset}$, where $V_B(t)$ is the voltage on the barrier separating the two dots, we obtain a precise prediction for the dependence of $J[V_B(t)]$ on the barrier gate voltage, which provides a good match to experimental data.\cite{Zajac18} \begin{figure}[bt] \begin{center} \includegraphics[width= .49\textwidth]{3dotSpinCTAPDist.pdf} \caption{(a) Exchange pulse profile for spin-CTAP including pulse distortions from Eq.~(\ref{eqn:pulsedist}). We took a larger value of $j_0/h = 15~$MHz with other parameters as in Fig.~\ref{fig:3dot} to amplify the effect of the shift in the dc exchange and the ac exchange pulse distortions. (b) Spin-up population $p_{i \uparrow} = 1/2+\mean{s_i^z}$ on dots 1 and 3 during the spin-CTAP pulse sequence. We see that even these large pulse distortions do not spoil the state-transfer fidelity. } \label{fig:3dotdist} \end{center} \end{figure} Our spin-CTAP proposal can be realized by modulating the barrier gate voltages between dots $i$ and $j$ as $V_{B,ij}(t) = V_{B0,ij} + v_{ij}(t) \cos \omega_{ij} t$, where $v_{ij}(t)$ is a slowly-varying envelope for the ac modulation term. Assuming $v_{ij}$ is a weak perturbation, we can expand the exchange as \be \begin{split} J_{ij}[V_{B0,ij} &+ v_{ij} \cos \omega_{ij} t] = \bar{J}_{ij}^0 + J_{ij}^{(1)} v_{ij} \cos \omega_{ij} t \\ & + \frac{J_{ij}^{(2)}}{2} v_{ij}^2 \cos^2 \omega_{ij} t +\frac{J_{ij}^{(3)}}{6} v_{ij}^3 \cos^3 \omega_{ij} t, \end{split} \end{equation} where $J_{ij}^{(n)} = d^n J_{ij}/d V_{B,ij}^n|_{V_{B0,ij}}$ are the derivatives of the exchange profile. In the rotating wave approximation we only need to account for the dc exchange term and the term that oscillates near the difference in Zeeman energies between the two dots. As a result, we can regroup the terms to arrive at the expression \be \begin{split} J_{ij}[V_{B,ij}( t)] &\approx \bar{J}_{ij}^0 + \frac{J_{ij}^{(2)}}{J_{ij}^{(1) 2}} [j_{ij}^{0}(t)]^2 \\ &+ \bigg(1+ \frac{J_{ij}^{(3)} [j_{ij}^0(t)]^2}{2 J_{ij}^{(1)3} }\bigg) 2 j_{ij}^0(t) \cos \omega_{ij} t, \end{split} \end{equation} where we defined $j_{ij}^0(t) = J_{ij}^{(1)} v_{ij}(t)/2$ and the first term corresponds to a slowly varying shift in the dc exchange due to the ac driving. For the dependence on $V_{B,ij}$ given by Eq.~(\ref{eqn:jvb}), we can calculate the leading order correction to the dc and ac exchange profile by approximating the dependence of the exchange on barrier gate voltage by a pure exponential $J_{ij}[V_{B0,ij}+v] \approx \bar{J}_{ij}^0 e^{\alpha v}$. This approximation leads to particularly simple expressions for the slowly-varying parameters \begin{align} \bar{J}_{ij}(t) &= \bigg(1 + \frac{[j_{ij}^0(t)]^2}{[\bar{J}_{ij}^{0}]^2} \bigg) \bar{J}_{ij}^0, \\ \label{eqn:pulsedist} j_{ij}(t)& = \bigg( 1 + \frac{[j_{ij}^{0}(t)]^2}{2 [\bar{J}_{ij}^0]^2} \bigg) j_{ij}^0(t). \end{align} Since $j_{ij}^0$ is directly proportional to the ac amplitude on the middle barrier voltage, this shows that the dc/ac exchange amplitude has a quadratic/cubic nonlinear correction in $v_{ij}(t)$. It is most natural in experiments to design a Gaussian envelope directly for the middle barrier voltage $v_{ij}$, which does not account for these nonlinear corrections. In Fig.~\ref{fig:3dotdist}(a), we show the exchange pulse profile for this control strategy, including the nonlinear correction from Eq.~(\ref{eqn:pulsedist}).
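A short sketch of how such distorted envelopes can be tabulated, combining the ideal Gaussians of Eqs.~(\ref{eqn:ctap1})--(\ref{eqn:ctap2}) with the corrections of Eq.~(\ref{eqn:pulsedist}); frequencies in MHz and times in $\mu$s are assumed, and the numbers loosely follow the figure:
\begin{verbatim}
import numpy as np

def gaussian_envelopes(t, j0, t0, sigma):
    # Ideal spin-CTAP envelopes, Eqs. (ctap1)-(ctap2), with the
    # counterintuitive ordering (j23 peaks before j12).
    j12 = j0 * np.exp(-(t - (t0 + 2 * sigma) / 2) ** 2 / (2 * sigma ** 2))
    j23 = j0 * np.exp(-(t - (t0 - 2 * sigma) / 2) ** 2 / (2 * sigma ** 2))
    return j12, j23

def distorted(j_ac, Jbar0):
    # Nonlinear corrections for an exponential J(V_B): shifted dc
    # exchange Jbar(t) and distorted ac envelope j(t), Eq. (pulsedist).
    Jbar = Jbar0 * (1.0 + (j_ac / Jbar0) ** 2)
    j = j_ac * (1.0 + j_ac ** 2 / (2.0 * Jbar0 ** 2))
    return Jbar, j

# Example values loosely following the figure (MHz and microseconds):
t = np.linspace(0.0, 3.33, 1000)
j12, j23 = gaussian_envelopes(t, j0=15.0, t0=3.33, sigma=3.33 / 8)
Jbar12_t, j12_dist = distorted(j12, Jbar0=20.0)
\end{verbatim}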
We took parameters similar to those in Fig.~\ref{fig:3dot}, but with a five times larger peak ac exchange $j_0/h = 15$~MHz, to amplify the effect of the shift in the dc exchange and the ac exchange pulse distortions. In Fig.~\ref{fig:3dotdist}(b), we show the performance of spin-CTAP and blockaded spin-CTAP in the presence of these pulse imperfections. Although the intermediate dynamics has slight distortions compared to the ideal case, the fidelity for state transfer is nearly identical. This result is expected based on the intrinsic robustness of these transfer schemes to pulse imperfections and slowly varying perturbations, provided one chooses an adiabatic pulse that starts with $j_{12} \ll j_{23} $ and ends with $j_{12} \gg j_{23}$. \section{Multidot spin-CTAP} \label{sec:multispinctap} The long-range transfer of spin states in extended arrays is a long-standing goal for quantum-dot-based spin qubits.\cite{Taylor05,Friesen07,Baart16,Fujita17,Mills18,Kandel19,Sigillito19b} In the context of charge-based transport, Greentree \textit{et al.\ }showed that a natural generalization of CTAP from three dots to arbitrarily large one-dimensional arrays of odd numbers of dots can be obtained by modulating a large tunnel coupling in the middle of the array.\cite{Greentree04} Partially motivated by recent experimental work in large quantum dot arrays,\cite{Zajac16,Mortemousque18,Mills18,Sigillito19,Kandel19,Volk19,Dehollain19} we now consider the multidot generalization of spin-CTAP. By applying a large ac exchange field on the middle $N-2$ dots for odd $N$, we can effectively isolate a single many-body spin state in the middle of the array that is coupled to the outer two spins by weaker driving of the ac exchange [see Fig.~\ref{fig:Ndot1}(a)]. For even $N$, adiabatic transfer is still possible, but it does not proceed through a zero energy dark state, which generally reduces the efficiency and transfer fidelities of the protocol.\cite{Greentree04} At a qualitative level, our approach is reminiscent of other methods for long-range coupling of spin qubits using intermediate states.\cite{Mehl14,Srinivasa15,Baart17,Croot18,Malinowski18} \begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{MultidotCTAP_v3.pdf} \caption{ (a) Spin-CTAP protocol for extended arrays with an odd number of sites. The middle spins are taken to be strongly coupled via exchange to effectively create a single zero energy state in the middle of the array. (b) Pulse profile for multidot spin-CTAP. The primary difference from the three-dot case is the large ac exchange interaction that is turned on in the middle region during the transfer.} \label{fig:Ndot1} \end{center} \end{figure} To better understand the dynamics in this limit, we study the resonantly driven Hamiltonian in the rotating frame in the basis of states $\{ \sigma_i^+ \ket{\downarrow \cdots \downarrow}:i =1,\ldots,N\}$ \be H_0 = \left(\begin{array}{c c c c c c c} 0 & j_{12} & 0 & \cdots & 0 & 0 &0 \\ j_{12} & 0 & j_{M}& \cdots & 0 & 0 & 0\\ 0 & j_{M} & 0 & \cdots & 0 & 0 & 0\\ \vdots & \vdots& \vdots& \ddots & \vdots &\vdots& \vdots \\ 0 & 0 & 0 & \cdots & 0 & j_M & 0 \\ 0 & 0 & 0 & \cdots & j_M & 0 & j_{N-1N} \\ 0 & 0 & 0 & \cdots & 0 & j_{N-1N} & 0 \end{array} \right), \end{equation} where $j_M$ is the ac exchange interaction in the middle of the array (assumed to be uniform).
Setting $j_{12} = j_{N-1N} = 0$, for odd $N$ there is a zero energy state \be \ket{0} = \frac{1}{\sqrt{(N-1)/2}} \sum_{n=1}^{(N-1)/2} (-1)^n \sigma_{2n}^{+} \ket{\downarrow \cdots \downarrow}. \end{equation} Denoting the energy eigenstates for the delocalized spin states as $\ket{-(N-3)/2},\ldots, \ket{(N-3)/2}$, the energy gaps $|E_n - E_{n+1}|$ between neighboring levels all scale as $j_M/N$. As a result, for sufficiently large $j_M$, we can reduce the problem to a three-level system in the basis $\{ \ket{\uparrow \cdots \downarrow}, \ket{0}, \ket{\downarrow \cdots \uparrow} \}$ \be \label{eqn:h0eff} H_{0} = \left( \begin{array}{c c c} 0 & j_{1}(t) & 0 \\ j_{1}(t) & 0 & j_{2}(t) \\ 0 & j_{2}(t) & 0 \end{array} \right), \end{equation} where $j_1 = - j_{12}/\sqrt{(N-1)/2}$ and $j_2 = (-1)^{(N-1)/2} j_{N-1 N}/\sqrt{(N-1)/2}$. Applying the spin-CTAP pulse sequence for $j_{1/2}$ given by Eqs.~(\ref{eqn:ctap1})-(\ref{eqn:ctap2}) now achieves spin transport across the entire array of $N$ dots. To achieve the multidot transfer process in an adiabatic manner, we also pulse on the exchange in the middle of the array. This approach is inspired by the original CTAP proposal.\cite{Greentree04} In particular, as illustrated in Fig.~\ref{fig:Ndot1}(b), we use an additional Gaussian ac exchange pulse on the middle spins \begin{align} \label{eqn:mult1} j_{ii+1}(t) &= j_{M} \exp\bigg[ -\bigg( t- \frac{t_{0}}{2}\bigg)^2/4\sigma^2\bigg] , \end{align} for $2\le i \le N-2$, with $j_{12}(t)$ and $j_{N-1N}(t)$ given by Eqs.~(\ref{eqn:ctap1})-(\ref{eqn:ctap2}). A schematic level diagram for the multidot spin-CTAP protocol is shown in Figs.~\ref{fig:Ndot}(a--b). For our perturbative description above to be valid we require that $|j_i| = |j_{12,N-1N}|/\sqrt{N} \ll j_M/N$. Since the transfer time scales as $t_{\max} \sim \hbar/j_{i,\max}$, this implies that $t_{\max} \gg N\hbar/j_M$. As a result, $j_M$ has to scale linearly with $N$ and the maximum value of $j_{12,N-1N}$ has to scale as $\sqrt{N}$ to keep a constant transfer time in the large $N$ limit. We remark that the scaling for $j_M$ is expected from general bounds on the speed of information spreading in local Hamiltonian systems.\cite{Lieb72} \begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{MultidotCTAP_v2.pdf} \caption{ (a-b) Level diagram for the $S_z^{\rm tot} = -(N-2)/2$ subspace in the energy eigenbasis with $j_{12,N-1,N}=0$, illustrating how the multidot system reduces to an effective three-level state transfer problem. (c) Nine-dot spin-CTAP projection fidelity $F_p = 1/2+\mean{s_9^z}$ vs. $t_{\rm max}$ without noise for realistic pulse parameters. We took $j_0/h = 5~$MHz, $j_M = 10 j_0$, $\sigma= t_{\rm max}/8$, $\bar{J}_{12}/h=\bar{J}_{N-1N}/h = 30~$MHz, $\bar{J}_{M}/h = 60$~MHz, $\Delta_{ii+1}/2\pi = -1.5$ GHz, and $\omega_{ij}= \Delta_{ij} - \sum_k (\bar{J}_{ik}-\bar{J}_{jk})/2\hbar$.} \label{fig:Ndot} \end{center} \end{figure} An example of the multidot spin-CTAP performance is shown in Fig.~\ref{fig:Ndot}(c) for nine dots in a linear array.\cite{Zajac16} We observe projection fidelities for transferring spin eigenstates that exceed $99\%$ for sufficiently long pulse times. As we noted above, the adiabaticity condition becomes more difficult to satisfy for large $N$ because of decreasing gaps between the dark state and other nearby eigenstates. In principle, this can be overcome by increasing the drive parameter $j_M$ on the middle dots; however, this becomes difficult to realize in practice.
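These spectral properties are easy to check numerically; a minimal sketch (in units where $j_M=1$ and $\hbar=1$) that diagonalizes the middle block and confirms both the zero-energy eigenstate and the shrinking gap:
\begin{verbatim}
import numpy as np

def middle_block(N, jM=1.0):
    # Tridiagonal Hamiltonian of the N-2 middle sites in the
    # single-spin-flip sector, with j12 = j_{N-1,N} set to zero.
    M = N - 2
    H = np.zeros((M, M))
    for n in range(M - 1):
        H[n, n + 1] = H[n + 1, n] = jM
    return H

for N in (5, 9, 13):                          # odd N
    w = np.linalg.eigvalsh(middle_block(N))   # sorted eigenvalues
    i0 = np.argmin(np.abs(w))                 # index of the zero mode
    gap = min(w[i0] - w[i0 - 1], w[i0 + 1] - w[i0])
    print(N, w[i0], gap)   # zero energy; gap ~ jM/N up to an O(1) factor
\end{verbatim}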
As a result, the requisite pulse time $t_{\rm max}$ will generally increase with $N$. \section{GHZ State Generation} \label{sec:ghz} We now show how to extend the pulse sequences described above to generate multipartite entanglement of the spins. The blockaded version of spin-CTAP for a linear array of three quantum dots can be realized whenever there is a difference in the dc exchange for each adjacent pair of dots in the array. Under these conditions, there is a natural method to generate entangled GHZ states by applying the spin-CTAP protocol to the state \be \ket{\psi} = \frac{1}{\sqrt{2}} (\ket{\uparrow\downarrow\downarrow} + \ket{\uparrow\uparrow\downarrow}) \to \frac{1}{\sqrt{2}} (e^{i \phi} \ket{\downarrow\downarrow\uparrow} + \ket{\uparrow\uparrow\downarrow}), \end{equation} where $\phi$ is a phase that will vary with the pulse profile and external noise. \begin{figure}[tb] \begin{center} \includegraphics[width= .49\textwidth]{GHZGen.pdf} \caption{(a) GHZ state fidelity for spin-CTAP protocol with $t_{\rm max} = 10 \hbar \pi/j_0$ computed using full simulations of the spin dynamics. The noiseless fidelity, limited by nonadiabatic corrections from a finite $t_{\max}$, is $\sim 98\%$. We took other parameters as in Fig.~\ref{fig:Ndot}(b). (b) Fidelity for GHZ state preparation using repeated spin-CTAP vs. $Q_e$. We took $j_0/h = 3$~MHz, $j_M=10j_0$, $t_{\rm max}= (N-1)10 \hbar \pi/j_0$, $\Delta_{ii+1}/2\pi = -150~$MHz, $T_2^* = 10~\mu$s and other parameters as in Fig.~\ref{fig:Ndot}(b). Error bars denote one standard deviation due to fluctuations in noise realizations.} \label{fig:ghz} \end{center} \end{figure} Applying a $\pi$ pulse on spin three, we arrive at the state \be \ket{\psi} = \frac{1}{\sqrt{2}} (e^{i \phi} \ket{\downarrow\downarrow\downarrow} + \ket{\uparrow\uparrow\uparrow}), \end{equation} which is equal to a GHZ state $\ket{{\rm GHZ}} = 1/\sqrt{2}(\ket{\downarrow\downarrow\downarrow} + \ket{\uparrow\uparrow\uparrow})$ up to a single-qubit $Z$ rotation. In Fig.~\ref{fig:ghz}(a), we show the state fidelity $F = |\langle {\rm GHZ} | \psi \rangle|^2$ in the presence of noise after correcting the random phase $\phi$. We see that the GHZ state fidelity is comparable to the fidelity for transferring spin eigenstates. The noiseless limit is higher in this case than $F_p$ shown in Fig.~\ref{fig:noise}(a) because the $\ket{\downarrow \downarrow \downarrow}$ state comprises half the amplitude of the GHZ state and incurs no errors in our model for the spin-CTAP process. To spectroscopically determine the phase $\phi$ and directly measure the state fidelity in experiment, one can perform a measurement of the parity operator $P = \prod_i \sigma_i^x$.\cite{NielsenChuang} Similar to the three-dot case, we can realize a type of quantum-controlled multidot spin-CTAP by taking the value of the time-averaged exchange in the middle of the array, $\bar{J}_{i i+1} = \bar{J}_M$ for $2\le i\le N-2$, to be different from the two ends $\bar{J}_{12}$ and $\bar{J}_{N-1N}$. Under these conditions, we can extend the GHZ state generation scheme to arbitrarily large arrays by sequentially growing the size of the GHZ state by two qubits in each time step as follows: assume we are given an $N-2$ GHZ state on the middle qubits \be \ket{\psi} = \frac{1}{\sqrt{2}} \ket{\downarrow} \otimes \big( \ket{\uparrow\ldots \uparrow} + \ket{\downarrow\ldots \downarrow} \big) \otimes \ket{\downarrow}.
\end{equation}
We next flip spin one into an up state and then apply the pulse sequences from Eq.~(\ref{eqn:ctap1}) and Eq.~(\ref{eqn:mult1}). Under ideal conditions, this operation will transform the state
\be
\begin{split}
\ket{\psi} \to \frac{1}{\sqrt{2}} ( \ket{\uparrow\uparrow\ldots \uparrow\downarrow} + e^{i\phi} \ket{\downarrow\downarrow\ldots \downarrow\uparrow}) ,
\end{split}
\end{equation}
which is equal to a GHZ state up to a single-qubit $Z$ rotation and a $\pi$ pulse on the rightmost dot
\be
\ket{{\rm GHZ}} = \frac{1}{\sqrt{2}} \big( \ket{\uparrow\uparrow\ldots \uparrow\uparrow} + \ket{\downarrow\downarrow\ldots \downarrow\downarrow} \big).
\end{equation}
The main challenge in applying this GHZ state preparation scheme is the long transfer time associated with each step in the operation, which makes the protocol sensitive to noise. In Fig.~\ref{fig:ghz}(b), we show the performance of this GHZ state generation scheme for characteristic parameters up to 11 dots obtained from full numerical simulations of the multidot spin dynamics. Although we can successfully generate 11-qubit entanglement with this approach, achieving the highest fidelities requires much larger values of $Q_e$ compared to the three-dot case. Furthermore, the transfer times become comparable to $T_2^*$ for $N>5$, which begins to limit the achievable fidelities. A more practical GHZ state preparation scheme for $N>3$ likely involves local CNOT gates applied to the two ends to sequentially grow the GHZ state.\cite{NielsenChuang} This method has the advantage over our proposal of not requiring full state transfer in each step.
\section{Conclusions}
\label{sec:conclusions}
We have introduced an adiabatic protocol for spin transfer across arbitrarily large arrays of quantum dots that we refer to as spin-CTAP. The spin transfer protocol is realized in the one-excitation subspace above the ground state of a spin-1/2 chain of Heisenberg exchange coupled spins in the presence of a large magnetic field gradient. Our approach is based on time-dependent modulation of the exchange interaction near the resonance frequency for nearest-neighbor flip-flops in the array. By controlling the static exchange profile across the array, we can also realize a quantum-controlled version of spin-CTAP, whereby the presence of spin flips in the middle of the array blocks the spin transfer protocol. Quantum-controlled spin-CTAP can be used to generate large GHZ states.
Spin-CTAP has several applications to quantum information processing with quantum dot spin qubits. In particular, high-fidelity transfer of spin eigenstates is feasible even in the presence of modest amounts of noise in the spin sector. Thus, this approach may find immediate use in scaling up spin readout in two-dimensional arrays where the central spins cannot be directly coupled to a nearby charge sensor. The simplicity of the control sequence may have advantages for achieving high-fidelity state transfer for some applications. The adiabatic nature of the protocol makes it highly robust to pulse imperfections, but leads to relatively slow transfer times, making it more difficult to transfer superposition states than spin eigenstates. Reducing the strength of the noise by an additional order of magnitude would allow high-fidelity transfer of superposition states. Such a coherent transfer process could be used to distribute long-range entanglement across the array to implement nonlocal quantum gates.
\begin{acknowledgements}
We thank T. Ladd, G. Platero, A. Sigillito, and C.
Tahan for helpful discussions. Funded by DARPA grant No.\ D18AC0025, Army Research Office grant No.\ W911NF-15-1-0149, and the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF4535. \end{acknowledgements} \bibliographystyle{apsrev-nourl-title-PRX}
\section{Introduction}
One of the most significant incentives for recent research on movement assessment is the availability of affordable 3D skeleton recognition devices, such as Kinect, which redefine the target audience of applications that are based on user pose and movement. Addressing this problem is considered a hard task, as it requires paying attention to timings, performances and low-level details. In the recent decade, different solutions have been proposed for dealing with automatic assessment of movements, based on machine-learning algorithms. In this work, we review these solutions and compare them. We divide the assessment problem into two typical problems: detecting abnormalities in repetitive movements and predicting scores for structured movements. We then list the existing works, their features, and the available public datasets. We elaborate on the secondary problems that take part in the algorithmic flow of typical movement assessment solutions and list the methods and algorithms used by the different works. Finally, we discuss the findings at a high level.
The outline of this review is as follows. In the next section, we first present the main types of movement assessment problems, list the features of existing works and list the used public datasets. In addition, we elaborate on the secondary problems and list the methods that were implemented to solve them. The last two sections include a discussion and conclusions, respectively.
\section{Movement Assessment}
\label{lbl:movementAssessment}
There are generally two main types of movement assessment solutions. The first type focuses on detecting abnormalities in relatively long, repetitive movements~\cite{paiement2014online,jun2020feature,chaaraoui2015abnormal,nguyen2016skeleton,devanne2016learning,baptista2018deformation,nguyen2018estimating,nguyen2018skeleton,khokhlova2018kinematic}, such as gait, as visualized in Figure~\ref{fig:gait}. The second type, on the other hand, focuses on assessing structured movements~\cite{parisi2016human,su2013personal,eichler20183d,eichler2018non,hakim2019mal,hakim2019malf,masalha2020predicting,dressler2019data,dressler2020towards,hu2014real,capecci2016physical,capecci2018hidden,osgouei2018objective,al2019quantifying,cao2019novel,williams2019assessment,yu2019dynamic,lei2020learning}, such as movements from the Fugl-Meyer Assessment (FMA)~\cite{fugl1975post} or Berg Balance Scale (BBS)~\cite{bbs} medical assessments, as visualized in Figure~\ref{fig:fma_and_bbs}, which usually have clear definitions of starting positions, ending positions, objectives and constraints.
\begin{figure}[]
\centering
\includegraphics[width=0.75\linewidth,keepaspectratio]{images/gait.png}
\caption[]{A walking-up-stairs movement with 3D skeleton joints detected from a Kinect RGB-D video~\cite{paiement2014online}.}
\label{fig:gait}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=0.75\linewidth,keepaspectratio]{images/fma_and_bbs.png}
\caption[]{An FMA assessment~\cite{eichler2018non} and a BBS assessment~\cite{masalha2020predicting}.}
\label{fig:fma_and_bbs}
\end{figure}
While most of the works deal with assessing a known-in-advance, limited range of movement types, only a few works try to provide general solutions, which aim to be adaptive to new types of movements.
Such works, which were evaluated on multiple types of movements~\cite{parisi2016human,su2013personal,eichler20183d,eichler2018non,hakim2019mal,hakim2019malf,masalha2020predicting,hu2014real,capecci2016physical,capecci2018hidden,al2019quantifying,cao2019novel,williams2019assessment,lei2020learning}, may therefore assume no prior knowledge of a learned movement type, such that they may need to automatically extract its most important properties from the training set, or use learning algorithms that are adaptive in nature.
A typical movement assessment algorithm will need to address the following fundamental problems: capturing or detecting human skeleton joint positions, geometric normalization, temporal alignment, feature extraction, score prediction and feedback generation. In this section, we review the solutions that existing works implemented for each of these problems.
\subsection{Movement Domains and Solution Features}
Most of the works that deal with structured movements mainly deal with predicting the quality of a performance and sometimes producing feedback. On the contrary, most of the works that deal with repetitive movements, such as gait, focus more on detecting abnormalities and computing scores that are based on similarity to normal movements.
Table~\ref{tbl:features} summarizes the features of each of the works that deal with structured movements. When a solution produces a quality score on a continuous scale, then we consider the numerical score feature as existing. When a solution classifies performances into a discrete scale of qualities, then we consider the quality classification feature as existing. When a solution produces unbound textual feedback or presents describable features that can be directly translated into textual feedback, then we consider the unbound feedback feature as existing. When a training set that only consists of proper performances is sufficient for a solution to work, then we consider the trains-on-proper-movements feature as existing.
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\textbf{} & \textbf{Movement} & \textbf{No.
Movement} & \textbf{Numerical} & \textbf{Quality} & \textbf{Unbound} & \textbf{Trains on} \\ \textbf{Work} & \textbf{Domain} & \textbf{Types Evaluated} & \textbf{Score} & \textbf{Classification} & \textbf{Feedback} & \textbf{Proper Movements} \\ \hline \cite{parisi2016human} & Powerlifting & 3 & \checkmark & & \checkmark & \checkmark \\ \hline \cite{su2013personal} & Rehabilitation & - & \checkmark & \checkmark & & \checkmark \\ \hline \cite{eichler20183d,eichler2018non} & FMA & 2 & & \checkmark & & \\ \hline \cite{hakim2019mal,hakim2019malf} & FMA & 3 & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \cite{masalha2020predicting} & BBS & 14 & & \checkmark & & \\ \hline \cite{dressler2019data,dressler2020towards} & Deep Squats & 1 & \checkmark & & - & \checkmark\\ \hline \cite{hu2014real} & Qigong+others & 4+6 & \checkmark & & & \checkmark\\ \hline \cite{capecci2016physical,capecci2018hidden} & Physiotherapy & 5 & \checkmark & & & \checkmark\\ \hline \cite{osgouei2018objective} & Shoulder Abduction & 1 & \checkmark & & & \\ \hline \cite{al2019quantifying} & General & 3 & & \checkmark & & \checkmark \\ \hline \cite{cao2019novel} & Brunnstrom Scale & 9 & & \checkmark & & \\ \hline \cite{williams2019assessment} & Rehabilitation & 2 & \checkmark & & & \\ \hline \cite{yu2019dynamic} & Tai Chi & 1 & \checkmark & & & \checkmark \\ \hline \cite{lei2020learning} & Olympic Sports & 9 & \checkmark & & & \\ \hline \end{tabular}} \\ \caption[]{Features of works that deal with assessing structured movements. The minus sign represents missing information.} \label{tbl:features} \end{table} \subsection{Public Datasets} Many of the works used existing public datasets for evaluating their solutions, while others created their own datasets, for different assessment tasks. The used datasets have either been kept private or made public~\cite{paiement2014online,nguyen2018estimating,nguyen2018skeleton,chaaraoui2015abnormal}. Some of the works used both public and private datasets. Table~\ref{tbl:datasets} lists the public datasets used by existing works. \begin{table} \centering \resizebox{0.9\linewidth}{!}{% \begin{tabular}{ |c|c|c| } \hline \textbf{Dataset} & \textbf{Movement Types} & \textbf{Used by} \\ \hline SPHERE-staircase 2014,2015~\cite{paiement2014online} & Gait on stairs & \cite{paiement2014online,chaaraoui2015abnormal,devanne2016learning,baptista2018deformation,khokhlova2018kinematic} \\ \hline DGD: DAI gait dataset~\cite{chaaraoui2015abnormal} & Gait & \cite{chaaraoui2015abnormal,devanne2016learning,khokhlova2018kinematic} \\ \hline Walking gait dataset~\cite{nguyen2018walking} & Gait, under 9 different conditions & \cite{jun2020feature,nguyen2018estimating,nguyen2018skeleton,khokhlova2018kinematic} \\ \hline UPCV Gait K2~\cite{kastaniotis2016pose} & Gait - normal walking & \cite{khokhlova2018kinematic} \\ \hline Eyes. 
Mocap data~\cite{eyesmocapdata} & Gait captured by a Mocap system & \cite{nguyen2016skeleton} \\ \hline
HMRA~\cite{hmra} & Qigong and others & \cite{hu2014real} \\ \hline
UI-PRMD~\cite{vakanski2018data} & Physical therapy & \cite{williams2019assessment} \\ \hline
MIT Olympic Scoring Dataset~\cite{mitolympic} & Olympic scoring on RGB videos & \cite{lei2020learning} \\ \hline
UNLV Olympic Scoring Dataset~\cite{unlvoplymic} & Olympic scoring on RGB videos & \cite{lei2020learning} \\ \hline
\end{tabular}} \\
\caption[]{Public movement assessment datasets.}
\label{tbl:datasets}
\end{table}
\subsection{Methods and Algorithms}
\subsubsection{Skeleton Detection.}
The majority of the works use 3D cameras, such as Kinect1 or Kinect2, with the Microsoft Kinect SDK~\cite{shotton2011real} or OpenNI for detection of 3D skeletons. Sometimes, marker-based motion-capture (Mocap) systems are used~\cite{nguyen2016skeleton,al2019quantifying,williams2019assessment}. Lei \textit{et al.}~\cite{lei2020learning} used 2D skeletons that were extracted from RGB videos, using OpenPose~\cite{cao2017realtime}, as visualized in Figure~\ref{fig:openpose}.
\begin{figure}[]
\centering
\includegraphics[width=0.2\linewidth,keepaspectratio]{images/openpose.png}
\caption[]{An OpenPose 2D skeleton~\cite{lei2020learning}.}
\label{fig:openpose}
\end{figure}
\subsubsection{Geometric Normalization.}
People perform movements at different distances and angles with respect to the 3D camera that captures their motion. Additionally, different people have different body dimensions, which have to be addressed by either pre-normalizing the skeleton dimensions and coordinates, as demonstrated in Figure~\ref{fig:geometric}, or extracting features that are inherently invariant to the camera location and body dimensions, such as joint angles. This step, therefore, may be considered either an independent or an integral part of the feature-extraction process. Table~\ref{tbl:geometric} summarizes the geometric normalization methods used by existing works.
\begin{figure}[]
\centering
\includegraphics[width=0.35\linewidth,keepaspectratio]{images/geometric.png}
\caption[]{A geometric normalization step~\cite{hakim2019mal,hakim2019malf}.}
\label{fig:geometric}
\end{figure}
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\ \hline
\cite{paiement2014online} & Translation, rotation and scaling due to varying heights of the subjects. \\ \hline
\cite{jun2020feature} & Implementing the method from~\cite{paiement2014online}. \\ \hline
\cite{chaaraoui2015abnormal} & Translation, rotation by shoulder and hip joints, scaling.\\ \hline
\cite{nguyen2016skeleton} & Using features that are invariant to camera location and angle. \\ \hline
\cite{devanne2016learning} & - \\ \hline
\cite{baptista2018deformation} & Projection on the main direction of the motion variation. \\ \hline
\cite{nguyen2018estimating,nguyen2018skeleton} & Scaling the coordinates to the range between 0 and 1. \\ \hline
\cite{khokhlova2018kinematic} & Using features that are invariant to camera location and angle. \\ \hline
\cite{parisi2016human} & Translation. \\ \hline
\cite{su2013personal} & Geometric calibration as a system initialization step, before capturing skeleton videos. \\ \hline
\cite{eichler20183d,eichler2018non} & Using features that are invariant to camera location and angle. \\ \hline
\cite{hakim2019mal,hakim2019malf} & Projection on spine-shoulders plane, translation and equalizing skeleton edge lengths.
\\ \hline
\cite{masalha2020predicting} & Using features that are invariant to camera location and angle. \\ \hline
\cite{dressler2019data,dressler2020towards} & Using features that are invariant to camera location and angle. \\ \hline
\cite{hu2014real} & Using features that are invariant to camera location and angle. \\ \hline
\cite{capecci2016physical,capecci2018hidden} & Using features that are invariant to camera location and angle. \\ \hline
\cite{osgouei2018objective} & Using features that are invariant to camera location and angle. \\ \hline
\cite{al2019quantifying} & Using features that are invariant to camera location and angle. \\ \hline
\cite{cao2019novel} & - \\ \hline
\cite{williams2019assessment} & Using features that are invariant to camera location and angle. \\ \hline
\cite{yu2019dynamic} & Projection on arm-leg-based coordinate system. \\ \hline
\cite{lei2020learning} & Scaling of the 2D human body. \\ \hline
\end{tabular}}
\caption[]{Geometric normalization methods.}
\label{tbl:geometric}
\end{table}
\subsubsection{Temporal Alignment.}
In order to produce reliable assessment outputs, a tested movement, which is a temporal sequence of data, usually has to be well-aligned in time with movements it will be compared to. For that purpose, most works either use models that inherently deal with sequences, such as HMMs and RNNs, as illustrated in Figure~\ref{fig:hmm}, or use temporal alignment algorithms, such as the DTW algorithm or its variants, as illustrated in Figure~\ref{fig:dtw}.
\begin{figure}[]
\centering
\includegraphics[width=0.45\linewidth,keepaspectratio]{images/hmm.png}
\caption[]{A Hidden Markov Model (HMM), which defines states, observations and probabilities of state transitions and observations~\cite{nguyen2016skeleton}.}
\label{fig:hmm}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=0.65\linewidth,keepaspectratio]{images/dtw.png}
\caption[]{Dynamic Time Warping (DTW) for alignment of two series of scalars, by matching pairs of indices~\cite{simpledtw}.}
\label{fig:dtw}
\end{figure}
Hakim and Shimshoni~\cite{hakim2019mal,hakim2019malf} introduced a novel warping algorithm, which was based on the detection of temporal points-of-interest (PoIs) and on linearly warping the sequences between them, as illustrated in Figure~\ref{fig:warp}. Dressler \textit{et al.}~\cite{dressler2019data,dressler2020towards} introduced a novel DTW variation with skips, similarly to Hu \textit{et al.}~\cite{hu2014real}. Other novel approaches were introduced by Devanne \textit{et al.}~\cite{devanne2016learning}, by Baptista \textit{et al.}~\cite{baptista2018deformation} and by Yu and Xiong~\cite{yu2019dynamic}. Another less mentioned algorithm is the Correlation Optimized Warping (COW) algorithm~\cite{tomasi2004correlation}. Palma \textit{et al.}~\cite{palma2016hmm} and Hagelb\"{a}ck \textit{et al.}~\cite{hagelback2019variants} elaborated more on the topic of temporal alignment algorithms in the context of movement assessment. Table~\ref{tbl:temporal} summarizes the alignment methods used by existing works.
\begin{figure}[]
\centering
\includegraphics[width=0.8\linewidth,keepaspectratio]{images/warp.png}
\caption[]{A continuous warping by scaling between detected pairs of temporal points-of-interest~\cite{hakim2019mal,hakim2019malf}.}
\label{fig:warp}
\end{figure}
\begin{table}
\centering
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\ \hline
\cite{paiement2014online} & Inherently solved by the choice to use an HMM statistical model. \\ \hline
\cite{jun2020feature} & Inherently solved by the choice to use an RNN Autoencoder. \\ \hline
\cite{chaaraoui2015abnormal} & Discrete warping using the Dynamic Time Warping (DTW) algorithm. \\ \hline
\cite{nguyen2016skeleton} & Inherently solved by the choice to use an HMM statistical model. \\ \hline
\cite{devanne2016learning} & Riemannian shape analysis of legs shape evolution within a sliding window. \\ \hline
\cite{baptista2018deformation} & Key-point detection with deformation-based curve alignment~\cite{demisse2017deformation}. \\ \hline
\cite{nguyen2018estimating,nguyen2018skeleton} & Inherently solved by the choice to use a recurrent neural network.\\ \hline
\cite{khokhlova2018kinematic} & - \\ \hline
\cite{parisi2016human} & Inherently solved by the choice to use a recurrent neural network. \\ \hline
\cite{su2013personal} & Discrete warping using the Dynamic Time Warping (DTW) algorithm. \\ \hline
\cite{eichler20183d,eichler2018non} & - \\ \hline
\cite{hakim2019mal,hakim2019malf} & Detecting mutual temporal PoIs and continuously warping between them. \\ \hline
\cite{masalha2020predicting} & - \\ \hline
\cite{dressler2019data,dressler2020towards} & A novel DTW variant, with skips. \\ \hline
\cite{hu2014real} & A novel DTW variant with tolerance to editing. \\ \hline
\cite{capecci2016physical,capecci2018hidden} & DTW and Hidden Semi-Markov Models (HSMM). \\ \hline
\cite{osgouei2018objective} & DTW and HMM. \\ \hline
\cite{al2019quantifying} & - \\ \hline
\cite{cao2019novel} & Inherently solved by the choice to use a recurrent neural network. \\ \hline
\cite{williams2019assessment} & - \\ \hline
\cite{yu2019dynamic} & A novel DTW variant that minimizes angles between pairs of vectors. \\ \hline
\cite{lei2020learning} & - \\ \hline
\end{tabular}}
\caption[]{Temporal alignment methods.}
\label{tbl:temporal}
\end{table}
\subsubsection{Feature Extraction.}
The assessment of different types of movements requires paying attention to different details, which may include joint angles, pairwise joint distances, joint positions, joint velocities and event timings. Many of the feature extraction methods are invariant to the subject's skeleton scale and to the camera location and angle, as illustrated in Figure~\ref{fig:feature}, while others are usually preceded by a geometric normalization step. In recent years, some works used deep features, which are automatically learned and opaque, rather than using explainable handcrafted features. It is notable that while some works were designed for specific domains of movements and exploited their prior knowledge to choose their features, other works were designed to be more versatile and adaptive to many movement domains, and therefore used general features. Table~\ref{tbl:feature} summarizes the feature extraction methods used by existing works.
\begin{figure}[]
\centering
\includegraphics[width=0.6\linewidth,keepaspectratio]{images/features.png}
\caption[]{Angles as extracted skeleton features, which are invariant to the camera location and to body dimension differences~\cite{nguyen2016skeleton}.}
\label{fig:feature}
\end{figure}
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\ \hline
\cite{paiement2014online} & Applying Diffusion Maps~\cite{coifman2006diffusion} on the normalized 3D skeleton joint positions. \\ \hline
\cite{jun2020feature} & Deep features learned by training RNN Autoencoders. \\ \hline
\cite{chaaraoui2015abnormal} & Joint Motion History (JMH): spatio-temporal joint 3D positions. \\ \hline
\cite{nguyen2016skeleton} & Lower-body joint angles and the angle between the two feet. \\ \hline
\cite{devanne2016learning} & Square-root-velocity function (SRVF)~\cite{joshi2007novel} on temporal sequences of joint positions. \\ \hline
\cite{baptista2018deformation} & Distances between the projections of the two knees on the movement direction. \\ \hline
\cite{nguyen2018estimating,nguyen2018skeleton} & Deep features learned by Autoencoders. \\ \hline
\cite{khokhlova2018kinematic} & Covariance matrices of hip and knee flexion angles. \\ \hline
\cite{parisi2016human} & 13 joint 3D positions and velocities. \\ \hline
\cite{su2013personal} & Joint 3D positions and velocities. \\ \hline
\cite{eichler20183d,eichler2018non} & Joint angles, distances and heights from the ground. \\ \hline
\cite{hakim2019mal,hakim2019malf} & Joint 3D positions and velocities, distances and edge angles. Sequence timings. \\ \hline
\cite{masalha2020predicting} & Relative joint positions, joint distances, angles and height of joints from the ground. \\ \hline
\cite{dressler2019data,dressler2020towards} & Joint positions and NASM features (a list of selected skeleton angles). \\ \hline
\cite{hu2014real} & Torso direction and joint relative position represented by elevation and azimuth. \\ \hline
\cite{capecci2016physical,capecci2018hidden} & Selected features varying between movement types. \\ \hline
\cite{osgouei2018objective} & Shoulder and arm angles. \\ \hline
\cite{al2019quantifying} & Autoencoder embeddings of manually-extracted attributes. \\ \hline
\cite{cao2019novel} & Raw skeleton 3D data. \\ \hline
\cite{williams2019assessment} & GMM encoding of Autoencoder dimensionality-reduced joint angle data. \\ \hline
\cite{yu2019dynamic} & Angles of selected joints. \\ \hline
\cite{lei2020learning} & Self-similarity descriptors of joint trajectories and a joint displacement sequence.\\ \hline
\end{tabular}}
\caption[]{Feature extraction methods.}
\label{tbl:feature}
\end{table}
\subsubsection{Score Prediction.}
The prediction of an assessment score refers to one or more of the following cases:
\begin{enumerate}
\item Classifying a performance into a class from a predefined set of discrete quality classes.
\item Performing a regression that will map a performance into a number on a predefined continuous scale.
\item Producing scores that reflect the similarity between a given model and a performance, without being bound to ground truth or predefined scales.
\end{enumerate}
\noindent The first two types of scoring capabilities are mainly essential for formal assessments, such as medical assessments or Olympic performance judgements. The third type of scoring is mainly useful for comparing performances, either of a single subject whose progress is monitored over time, or of different subjects who compete. Table~\ref{tbl:score} lists the algorithms used to produce quality scores.
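To make the third type of scoring concrete, the following Python sketch (a toy illustration, not taken from any of the reviewed works) aligns a test sequence of joint-angle features to a reference performance with a basic DTW and maps the length-normalized alignment cost to an unbound similarity score; the feature choice and the exponential cost-to-score mapping are illustrative assumptions.
\begin{verbatim}
import numpy as np

def dtw_cost(ref, test):
    """Basic DTW alignment cost between two feature sequences
    (frames x features), with Euclidean frame distance."""
    n, m = len(ref), len(test)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])
            acc[i, j] = d + min(acc[i - 1, j],
                                acc[i, j - 1],
                                acc[i - 1, j - 1])
    return acc[n, m] / (n + m)  # length-normalized cost

def similarity_score(ref, test, scale=1.0):
    """Map the alignment cost to an unbound (0, 1] score."""
    return float(np.exp(-dtw_cost(ref, test) / scale))

# Toy joint-angle sequences: a reference and a slower, noisier test.
t_ref = np.linspace(0, 1, 50)
ref = np.stack([np.sin(2 * np.pi * t_ref),
                np.cos(2 * np.pi * t_ref)], axis=1)
t_test = np.linspace(0, 1, 70)
test = np.stack([np.sin(2 * np.pi * t_test),
                 np.cos(2 * np.pi * t_test)], axis=1)
test += 0.05 * np.random.default_rng(0).normal(size=test.shape)

print("score:", similarity_score(ref, test))
\end{verbatim}
Such a score is only meaningful relative to other performances scored against the same reference, which is exactly the comparative use case described above.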
It is notable that score prediction was not implemented in many works, as it was irrelevant for them, since they only addressed normal/abnormal binary classification.
\begin{table}
\centering
\resizebox{0.98\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\ \hline
\cite{paiement2014online} & Pose and dynamics log likelihoods. \\ \hline
\cite{jun2020feature} & - \\ \hline
\cite{chaaraoui2015abnormal} & - \\ \hline
\cite{nguyen2016skeleton} & - \\ \hline
\cite{devanne2016learning} & Mean log-probability over the segments of the test sequence. \\ \hline
\cite{baptista2018deformation} & Distance between time-aligned feature sequences with reflection of time-variations. \\ \hline
\cite{nguyen2018estimating,nguyen2018skeleton} & - \\ \hline
\cite{khokhlova2018kinematic} & - \\ \hline
\cite{parisi2016human} & Difference between actual and RNN-predicted next frames. \\ \hline
\cite{su2013personal} & Handcrafted classification using Fuzzy Logic~\cite{zadeh1965fuzzy}. \\ \hline
\cite{eichler20183d,eichler2018non} & SVM, Decision Tree and Random Forest quality classification using handcrafted features. \\ \hline
\cite{hakim2019mal,hakim2019malf} & Thresholded weighted sum of normalized, time-filtered active/inactive joint and timing scores. \\ \hline
\cite{masalha2020predicting} & SVM and Random Forest quality classification using handcrafted features. \\ \hline
\cite{dressler2019data,dressler2020towards} & Weighted sum of selected feature differences. \\ \hline
\cite{hu2014real} & Average of frame cross-correlations. \\ \hline
\cite{capecci2016physical,capecci2018hidden} & Normalized log-likelihoods or DTW distances. \\ \hline
\cite{osgouei2018objective} & Difference from proper performance divided by difference between worst and proper performances. \\ \hline
\cite{al2019quantifying} & Classification using One-Class SVM. \\ \hline
\cite{cao2019novel} & Classification using a hybrid LSTM-CNN model. \\ \hline
\cite{williams2019assessment} & Normalized log-likelihoods. \\ \hline
\cite{yu2019dynamic} & DTW similarity. \\ \hline
\cite{lei2020learning} & Regression based on high-level features combined with joint trajectories and displacements. \\ \hline
\end{tabular}}
\caption[]{Score prediction methods.}
\label{tbl:score}
\end{table}
\subsubsection{Feedback Generation.}
There are two main types of feedback that can be generated: bound feedback and unbound feedback. Feedback is bound when it can only consist of predefined mistakes or abnormalities that can be detected. Feedback is unbound when it is a generated natural language text that can describe any type of mistake, deviation or abnormality. The generation of unbound feedback usually requires the usage of describable low-level features, so that when a performance is not proper, it will be possible to indicate the most significant features that reduced the score and translate them to natural language, such that the user can use the feedback to learn how to improve their next performance. Such feedback may include temporal sequences that deviate similarly, as visualized in Figure~\ref{fig:ParameterTimeSegmentation}. Table~\ref{tbl:feedback} summarizes the types of feedback and generation methods used by the works. It is notable that: 1) most of the works do not generate feedback; and 2) no work produces feedback without also predicting quality scores, for the same reason of focusing only on binary detection of abnormalities.
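Since unbound feedback generation amounts to selecting and verbalizing the worst-deviating describable features, a minimal sketch is easy to state. The following toy Python example (the feature names, tolerance threshold and deviation values are hypothetical, not taken from any reviewed work) reports the most significant deviation and the frames in which it occurs:
\begin{verbatim}
import numpy as np

# Hypothetical describable features and toy per-frame deviations
# (test minus reference, after temporal alignment), in degrees.
features = ["right elbow angle", "left knee angle", "torso lean"]
deviations = np.array([[ 5.0, 2.0, 1.0],
                       [25.0, 3.0, 1.5],
                       [30.0, 2.5, 0.5]])  # frames x features

THRESHOLD = 10.0  # illustrative tolerance in degrees

def generate_feedback(deviations, features, threshold=THRESHOLD):
    """Report the worst-deviating feature and where it deviates."""
    mean_dev = np.abs(deviations).mean(axis=0)
    worst = int(np.argmax(mean_dev))
    if mean_dev[worst] < threshold:
        return "Well done, the performance matches the reference."
    frames = np.nonzero(np.abs(deviations[:, worst]) >= threshold)[0]
    return (f"Pay attention to your {features[worst]}: it deviates "
            f"by {mean_dev[worst]:.0f} degrees on average "
            f"(frames {frames.min()}-{frames.max()}).")

print(generate_feedback(deviations, features))
\end{verbatim}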
\begin{figure}[]
\centering
\includegraphics[width=0.75\linewidth,keepaspectratio]{images/segmentation.png}
\caption[]{Temporal segmentation of parameter deviations for feedback generation~\cite{hakim2019mal,hakim2019malf}.}
\label{fig:ParameterTimeSegmentation}
\end{figure}
\begin{table}
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{ |c|l| }
\hline
\textbf{Work} & \textbf{Implementation} \\ \hline
\cite{paiement2014online} & - \\ \hline
\cite{jun2020feature} & - \\ \hline
\cite{chaaraoui2015abnormal} & - \\ \hline
\cite{nguyen2016skeleton} & - \\ \hline
\cite{devanne2016learning} & Visualizing the deviations of the body parts. \\ \hline
\cite{baptista2018deformation} & - \\ \hline
\cite{nguyen2018estimating,nguyen2018skeleton} & - \\ \hline
\cite{khokhlova2018kinematic} & - \\ \hline
\cite{parisi2016human} & Sequences of parameter deviations and detection of predefined typical mistakes. \\ \hline
\cite{su2013personal} & Three quality classifications indicating trajectory similarity and right speed. \\ \hline
\cite{eichler20183d,eichler2018non} & - \\ \hline
\cite{hakim2019mal,hakim2019malf} & Translation of worst class-segmented parameter temporal deviations into text. \\ \hline
\cite{masalha2020predicting} & - \\ \hline
\cite{dressler2019data,dressler2020towards} & Indication of weak links according to angle differences. \\ \hline
\cite{hu2014real} & - \\ \hline
\cite{capecci2016physical,capecci2018hidden} & - \\ \hline
\cite{osgouei2018objective} & - \\ \hline
\cite{al2019quantifying} & - \\ \hline
\cite{cao2019novel} & - \\ \hline
\cite{williams2019assessment} & - \\ \hline
\cite{yu2019dynamic} & - \\ \hline
\cite{lei2020learning} & - \\ \hline
\end{tabular}}
\caption[]{Feedback generation methods.}
\label{tbl:feedback}
\end{table}
\section{Discussion}
From the reviewed works, we can learn that a typical movement assessment solution may deal with detecting abnormal events or predicting quality scores, using classification, regression or computation of a normalized similarity measure. The task of detecting abnormal events is usually associated with repetitive movements, while the task of predicting scores is usually associated with structured movements. We can learn that while public skeleton datasets exist and are used by some of the works, most of the works use private datasets that were acquired for the sake of a specific work. It is notable that while many novelties are proposed in the different works, the absence of common datasets and evaluation metrics leads to different works dealing with different problems, evaluating themselves on different datasets of different movement domains, using different metrics. It is notable that temporal alignment is a key-problem in movement assessment. From the reviewed works, we can learn that around half of the works base their temporal alignment solutions on models that are designed for sequential inputs, such as Hidden Markov Models and recurrent neural networks, while others use either the Dynamic Time Warping algorithm, sometimes with novel improvements, or other novel warping and alignment approaches. We can learn that while a few works use features that were automatically learned by neural networks, most of the works make use of handcrafted skeleton features. In many of those works, the used features are invariant to the camera location and angle and to the body dimensions of the performing subjects.
Other works that make use of handcrafted features usually have to apply a geometric normalization step before continuing to the next steps. It is worth mentioning that while some of the works were designed to deal with a specific type of movement, other works were designed to be adaptive and deal with many types of movements, a choice that is usually clearly reflected in the feature extraction step. We can learn that a quality score is produced by most of the works. While works that deal with medical assessments mainly focus on classification into predefined discrete scoring scales, other works predict scores on continuous scales. Such scores are rarely learned as a regression problem and are usually based on a normalized similarity measure. Finally, we can learn that only a few works deal with producing feedback, which can be bound or unbound. In the future, the formation of a large, general public dataset and a common evaluation metric may help define the state of the art and boost the research on the topic of movement assessment. In addition, the improvement of mobile-device cameras, as well as computer vision algorithms that detect skeletons in RGB-D or even RGB videos, may raise the interest in researching this topic.
\section{Conclusions}
We have provided a review of the existing works in the domain of movement assessment from skeletal data, which gives a high-level picture of the problems addressed and approaches implemented by existing solutions. We divided the types of assessment tasks into two main categories, which were detection of abnormalities in long, repetitive movements and scoring structured movements, sometimes while generating textual feedback. The objectives and challenges of the assessment task were discussed and the ways they were addressed by each of the works were listed, including skeleton joint detection, geometric normalization, temporal alignment, feature extraction, score prediction and feedback generation. The existing public datasets and evaluated movement domains were listed. Finally, a high-level discussion was provided. We hope that this review will provide a good starting point for new researchers.
\bibliographystyle{splncs}
\subsection{Model complexity}
\paragraph{Number of parameters.}
As shown below, using multiple modalities does not impact the number of parameters significantly. Interestingly, the majority of the parameters correspond to the BERT caption encoding module. We also note that the difference in the video encoder comes from the projections. The number of parameters of a transformer encoder is independent of the number of input embeddings, just as the number of parameters of a CNN is independent of the image size. Our cross-modal architecture using 7 modalities has: 133.3M parameters, including caption encoder: 112.9M, video encoder: 20.4M (Projections: 3.3M, MMT: 17.1M). Our cross-modal architecture using 2 modalities has: 127.3M parameters, including caption encoder: 109.6M (decrease compared to 7 modalities due to using fewer gated embedding modules), video encoder: 17.7M (Projections: 0.6M, MMT: 17.1M).
\paragraph{Training and inference times.}
Training our full cross-modal architecture from scratch on MSRVTT takes about 4 hours on a single V100 16GB GPU. If we replace our multi-modal transformer by collaborative gating~\cite{liu2019use}, we reduce the number of parameters from 133.3M to 123.9M. However, the gain in inference time is minimal, from 1.1s to 0.8s, and is negligible compared to feature extraction, as detailed below. Inference time for 1k videos and 1k text queries from MSRVTT on a single V100 GPU is as follows: approximately 3000s to extract features of 7 experts on 1k videos (480s just for S3D motion features), 1.1s to process videos with MMT, 0.9s to process 1k captions with BERT+gated embedding modules, 0.05s to compute similarities and rank the video candidates for the 1k queries.
\subsection{Results on additional metrics}
Here, we report our results for the additional metrics R@1, R@10, and R@50. Table~\ref{table:MSRVTT_results_supp} complements the results reported for the MSRVTT~\cite{xu2016msrvtt} dataset in Table~\ref{table:MSRVTT_results} of the main paper. Similarly, Table~\ref{table:ANet_results_supp} and Table~\ref{table:LSMDC_results_supp} report the additional evaluations for Table~\ref{table:ANet_results} and Table~\ref{table:LSMDC_results} of the main paper on the ActivityNet~\cite{krishna2017activitynet} and LSMDC~\cite{Rohrbach2015LSMDC} datasets, respectively. We observe that the results on these additional metrics are in line with the conclusions of the main paper.
\begin{table}[h!]
\begin{center}
\caption{Retrieval performance on the MSRVTT dataset.
1k-A and 1k-B denote test sets of 1000 randomly sampled caption-video pairs used in~\cite{Yu2018JSFusion} and~\cite{miech2018learning} resp.} \label{table:MSRVTT_results_supp} \scriptsize \scalebox{0.90}{ \begin{tabular}{l | l| @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{5}{c}{\textit{Text $\longrightarrow$ Video}} & \multicolumn{5}{c}{\textit{Video $\longrightarrow$ Text}} \\ Method & Split & R@1$\uparrow$ & R@5$\uparrow$ & R@10$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ & R@1$\uparrow$ & R@5$\uparrow$ & R@10$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline Random baseline & 1k-A & 0.1 & 0.5 & 1.0 & 500.0 & 500.0 & 0.1 & 0.5 & 1.0 & 500.0 & 500.0 \\ JSFusion~\cite{Yu2018JSFusion} & 1k-A & 10.2 & 31.2 & 43.2 & 13 & - & - & - & - & - & - \\ HT~\cite{miech19howto100m} & 1k-A & 12.1 & 35.0 & 48.0 & 12 & - & - & - & - & - & - \\ CE~\cite{liu2019use} & 1k-A & $\hspace{1.5em}20.9_{\!\pm\!1.2}$ & $\hspace{1.5em}48.8_{\!\pm\!0.6}$ & $\hspace{1.5em}62.4_{\!\pm\!0.8}$ & $\hspace{1.5em}6.0_{\!\pm\!0.0}$ & $\hspace{1.5em}28.2_{\!\pm\!0.8}$ & $\hspace{1.5em}20.6_{\!\pm\!0.6}$ & $\hspace{1.5em}50.3_{\!\pm\!0.5}$ & $\hspace{1.5em}64.0_{\!\pm\!0.2}$ & $\hspace{1.5em}5.3_{\!\pm\!0.6}$ & $\hspace{1.5em}25.1_{\!\pm\!0.8}$ \\ Ours & 1k-A & $\hspace{1.5em}\textbf{24.6}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{54.0}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{67.1}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.9}$ & $\hspace{1.5em}\textbf{24.4}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{56.0}_{\!\pm\!0.9}$ & $\hspace{1.5em}\textbf{67.8}_{\!\pm\!0.3}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{23.6}_{\!\pm\!1.0}$ \\ \hline HT-pretrained~\cite{miech19howto100m} & 1k-A & 14.9 & 40.2 & 52.8 & 9 & - & - & - & - & - & - \\ Ours-pretrained & 1k-A & $\hspace{1.5em}\textbf{26.6}_{\!\pm\!1.0}$ & $\hspace{1.5em}\textbf{57.1}_{\!\pm\!1.0}$ & $\hspace{1.5em}\textbf{69.6}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{24.0}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{27.0}_{\!\pm\!0.6}$ & $\hspace{1.5em}\textbf{57.5}_{\!\pm\!0.6}$ & $\hspace{1.5em}\textbf{69.7}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{3.7}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{21.3}_{\!\pm\!0.6}$ \\ \hline\hline Random baseline & 1k-B & 0.1 & 0.5 & 1.0 & 500.0 & 500.0 & 0.1 & 0.5 & 1.0 & 500.0 & 500.0 \\ MEE~\cite{miech2018learning} & 1k-B & 13.6 & 37.9 & 51.0 & 10.0 & - & - & - & - & - & - \\ JPose~\cite{wray2019finegrained} & 1k-B & 14.3 & 38.1 & 53.0 & 9 & - & 16.4 & 41.3 & 54.4 & 8.7 & - \\ MEE-COCO~\cite{miech2018learning} & 1k-B & 14.2 & 39.2 & 53.8 & 9.0 & - & - & - & - & - & - \\ CE~\cite{liu2019use} & 1k-B & $\hspace{1.5em}18.2_{\!\pm\!0.7}$ & $\hspace{1.5em}46.0_{\!\pm\!0.4}$ & $\hspace{1.5em}60.7_{\!\pm\!0.2}$ & $\hspace{1.5em}7.0_{\!\pm\!0.0}$ & $\hspace{1.5em}35.3_{\!\pm\!1.1}$ & $\hspace{1.5em}18.0_{\!\pm\!0.8}$ & $\hspace{1.5em}46.0_{\!\pm\!0.5}$ & $\hspace{1.5em}60.3_{\!\pm\!0.5}$ & $\hspace{1.5em}6.5_{\!\pm\!0.5}$ & $\hspace{1.5em}30.6_{\!\pm\!1.2}$ \\ Ours & 1k-B & $\hspace{1.5em}\textbf{20.3}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{49.1}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{63.9}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{6.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{29.5}_{\!\pm\!1.6}$ & 
$\hspace{1.5em}\textbf{21.1}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{49.4}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{63.2}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{6.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{24.5}_{\!\pm\!1.8}$ \\ \hline \end{tabular} } \end{center} \vspace{-4mm} \end{table} \begin{table}[h!] \begin{center} \caption{Retrieval performance on the ActivityNet dataset.} \label{table:ANet_results_supp} \scriptsize \scalebox{0.90}{ \begin{tabular}{l | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{5}{c}{\textit{Text $\longrightarrow$ Video}} & \multicolumn{5}{c}{\textit{Video $\longrightarrow$ Text}} \\ Method & R@1$\uparrow$ & R@5$\uparrow$ & R@50$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ & R@1$\uparrow$ & R@5$\uparrow$ & R@50$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline Random baseline & 0.02 & 0.1 & 1.02 & 2458.5 & 2458.5 & 0.02 & 0.1 & 1.02 & 2458.5 & 2458.5 \\ FSE~\cite{zhang2018HSE} & $\hspace{1.5em}18.2_{\!\pm\!0.2}$ & $\hspace{1.5em}44.8_{\!\pm\!0.4}$ & $\hspace{1.5em}89.1_{\!\pm\!0.3}$ & 7 & - & $\hspace{1.5em}16.7_{\!\pm\!0.8}$ & $\hspace{1.5em}43.1_{\!\pm\!1.1}$ & $\hspace{1.5em}88.4_{\!\pm\!0.3}$ & 7 & - \\ CE~\cite{liu2019use} & $\hspace{1.5em}18.2_{\!\pm\!0.3}$ & $\hspace{1.5em}47.7_{\!\pm\!0.6}$ & $\hspace{1.5em}91.4_{\!\pm\!0.4}$ & $\hspace{1.5em}6.0_{\!\pm\!0.0}$ & $\hspace{1.5em}23.1_{\!\pm\!0.5}$ & $\hspace{1.5em}17.7_{\!\pm\!0.6}$ & $\hspace{1.5em}46.6_{\!\pm\!0.7}$ & $\hspace{1.5em}90.9_{\!\pm\!0.2}$ & $\hspace{1.5em}6.0_{\!\pm\!0.0}$ & $\hspace{1.5em}24.4_{\!\pm\!0.5}$ \\ HSE~\cite{zhang2018HSE}& 20.5 & 49.3 & - & - & - & 18.7 & 48.1 & - & - & - \\ Ours & $\hspace{1.5em}\textbf{22.7}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{54.2}_{\!\pm\!1.0}$ & $\hspace{1.5em}\textbf{93.2}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{5.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{20.8}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{22.9}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{54.8}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{93.1}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.3}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{21.2}_{\!\pm\!0.5}$ \\ \hline Ours-pretrained & $\hspace{1.5em}\textbf{28.7}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{61.4}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{94.5}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{3.3}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{16.0}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{28.9}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{61.1}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{94.3}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{17.1}_{\!\pm\!0.5}$ \\ \hline \end{tabular} } \end{center} \vspace{-4mm} \end{table} \begin{table}[h!] 
\begin{center} \caption{Retrieval performance on the LSMDC dataset.} \label{table:LSMDC_results_supp} \scriptsize \scalebox{0.90}{ \begin{tabular}{l | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{5}{c}{\textit{Text $\longrightarrow$ Video}} & \multicolumn{5}{c}{\textit{Video $\longrightarrow$ Text}} \\ Method & R@1$\uparrow$ & R@5$\uparrow$ & R@10$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ & R@1$\uparrow$ & R@5$\uparrow$ & R@10$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline Random baseline & 0.1 & 0.5 & 1.0 & 500.0 & 500.0 & 0.1 & 0.5 & 1.0 & 500.0 & 500.0 \\ CT-SAN~\cite{Yu2016CT-SAN} & 5.1 & 16.3 & 25.2 & 46 & - & - & - & - & - & - \\ JSFusion~\cite{Yu2018JSFusion} & 9.1 & 21.2 & 34.1 & 36 & - & - & - & - & - & - \\ CCA~\cite{Klein2015CCA} (rep. by~\cite{miech2018learning}) & 7.5 & 21.7 & 31.0 & 33 & - & - & - & - & - & - \\ MEE~\cite{miech2018learning} & 9.3 & 25.1 & 33.4 & 27 & - & - & - & - & - & - \\ MEE-COCO~\cite{miech2018learning} & 10.1 & 25.6 & 34.6 & 27 & - & - & - & - & - & - \\ CE~\cite{liu2019use} & $\hspace{1.5em}11.2_{\!\pm\!0.4}$ & $\hspace{1.5em}26.9_{\!\pm\!1.1}$ & $\hspace{1.5em}34.8_{\!\pm\!2.0}$ & $\hspace{1.5em}25.3_{\!\pm\!3.1}$ & - & - & - & - & - & - \\ Ours & $\hspace{1.5em}\textbf{13.2}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{29.2}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{38.8}_{\!\pm\!0.9}$ & $\hspace{1.5em}\textbf{21.0}_{\!\pm\!1.4}$ & $\hspace{1.5em}\textbf{76.3}_{\!\pm\!1.9}$ & $\hspace{1.5em}\textbf{12.1}_{\!\pm\!0.1}$ & $\hspace{1.5em}\textbf{29.3}_{\!\pm\!1.1}$ & $\hspace{1.5em}\textbf{37.9}_{\!\pm\!1.1}$ & $\hspace{1.5em}\textbf{22.5}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{77.1}_{\!\pm\!2.6}$ \\ \hline Ours-pretrained & $\hspace{1.5em}\textbf{12.9}_{\!\pm\!0.1}$ & $\hspace{1.5em}\textbf{29.9}_{\!\pm\!0.7}$ & $\hspace{1.5em}\textbf{40.1}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{19.3}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{75.0}_{\!\pm\!1.2}$ & $\hspace{1.5em}\textbf{12.3}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{28.6}_{\!\pm\!0.3}$ & $\hspace{1.5em}\textbf{38.9}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{20.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{76.0}_{\!\pm\!0.8}$ \\ \hline \end{tabular} } \end{center} \vspace{-4mm} \end{table} \section{Summary} We introduced multi-modal transformer, a transformer-based architecture capable of attending multiple features extracted at different moments, and from different modalities in video. This leverages both temporal and cross-modal cues, which are crucial for accurate video representation. We incorporate this video encoder along with a caption encoder in a cross-modal framework to perform caption-video matching and obtain state-of-the-art results for video retrieval. As future work, we would like to improve temporal encoding for video and text. \paragraph{Acknowledgments.} We thank the authors of~\cite{liu2019use} for sharing their codebase and features, and Samuel Albanie, in particular, for his help with implementation details. This work was supported in part by the ANR project AVENUE. \section{Experiments} \label{section:experiments} \subsection{Datasets and Metrics} \noindent\textbf{HowTo100M~\cite{miech19howto100m}}. It is composed of more than 1 million YouTube instructional videos, along with automatically-extracted speech transcriptions, which form the captions. 
These captions are naturally noisy, and often do not describe the visual content accurately or are temporally misaligned with it. We use this dataset only for pre-training.
\noindent\textbf{MSRVTT~\cite{xu2016msrvtt}.} This dataset is composed of 10K YouTube videos, collected using 257 queries from a commercial video search engine. Each video is 10 to 30s long, and is paired with 20 natural sentences describing it, obtained from Amazon Mechanical Turk workers. We use this dataset for training from scratch and also for fine-tuning. We report results on the train/test split introduced in~\cite{Yu2018JSFusion}, which uses 9000 videos for training and 1000 for testing. We refer to this split as ``1k-A''. We also report results on the train/test split in~\cite{miech2018learning} that we refer to as ``1k-B''. Unless otherwise specified, our MSRVTT results are with ``1k-A''.
\noindent\textbf{ActivityNet Captions~\cite{krishna2017activitynet}.} It consists of 20K YouTube videos temporally annotated with sentence descriptions. We follow the approach of~\cite{zhang2018HSE}, where all the descriptions of a video are concatenated to form a paragraph. The training set has 10009 videos. We evaluate our video-paragraph retrieval on the ``val1'' split (4917 videos). We use ActivityNet for training from scratch and fine-tuning.
\noindent\textbf{LSMDC~\cite{Rohrbach2015LSMDC}.} It contains 118,081 short video clips ($\sim$ 4–5s) extracted from 202 movies. Each clip is annotated with a caption, extracted from either the movie script or the audio description. The test set is composed of 1000 videos, from movies not present in the training set. We use LSMDC for training from scratch and also fine-tuning.
\noindent\textbf{Metrics.} We evaluate the performance of our model with standard retrieval metrics: recall at rank $N$ (R@$N$, higher is better), median rank (MdR, lower is better) and mean rank (MnR, lower is better). For each metric, we report the mean and the standard deviation over experiments with 3 random seeds. In the main paper, we only report recall@5, median and mean ranks, and refer the reader to the supplementary material for additional metrics.
\subsection{Implementation details}
\label{section:experts}
\noindent\textbf{Pre-trained experts.} Recall that our video encoder uses pre-trained expert models for extracting features from each video modality. We use the following seven experts. \textbf{Motion} features are extracted from S3D~\cite{Xie2017S3D} trained on the Kinetics action recognition dataset. \textbf{Audio} features are extracted using the VGGish model~\cite{Hershey2017VGGish} trained on YT8M. \textbf{Scene} embeddings are extracted from DenseNet-161~\cite{Huang2016DenselyCC} trained for image classification on the Places365 dataset~\cite{Zhou2018PlacesA1}. \textbf{OCR} features are obtained in three stages. Overlaid text is first detected using the pixel link text detection model. The detected boxes are then passed through a text recognition model trained on the Synth90K dataset. Finally, each character sequence is encoded with word2vec~\cite{Mikolov2013Word2Vec} embeddings. \textbf{Face} features are extracted in two stages. An SSD face detector is used to extract bounding boxes, which are then passed through a ResNet50 trained for face classification on the VGGFace2 dataset. \textbf{Speech} transcripts are extracted using the Google Cloud Speech to Text API, with the language set to English. The detected words are then encoded with word2vec.
\textbf{Appearance} features are extracted from the final global average pooling layer of SENet-154~\cite{Hu2017SeNet} trained for classification on ImageNet. For scene, OCR, face, speech and appearance, we use the features publicly released by~\cite{liu2019use}, and compute the other features ourselves.\\ \noindent\textbf{Training.} For each dataset, we run a grid search on the corresponding validation set to estimate the hyperparameters. We use the Adam optimizer for all our experiments, and set the margin of the bidirectional max-margin ranking loss to 0.05. We also freeze our pre-trained expert models. When pre-training on HowTo100M, we use a batch size of 64 video-caption pairs, an initial learning rate of 5e-5, which we decay by a multiplicative factor 0.98 every 10K optimisation steps, and train for 2 million steps. Given the long duration of most of the HowTo100M videos, we randomly sample 100 consecutive words in the caption, and keep 100 consecutive seconds of video data, closest in time to the selected words. When training from scratch or finetuning on MSRVTT or LSMDC, we use a batch size of 32 video-caption pairs, an initial learning rate of 5e-5, which we decay by a multiplicative factor 0.95 every 1K optimisation steps. We train for 50K steps. We use the same settings when training from scratch or finetuning on ActivityNet, except for 0.90 as the multiplicative factor. To compute our caption representation $h(c)$, we use the ``BERT-base-cased'' checkpoint of the BERT model and finetune it with a dropout probability of 10\%. To compute our video representation $\Psi_{agg}(v)$, we use MMT with 4 layers and 4 attention heads, a dropout probability of 10\%, a hidden size $d_{model}$ of 512, and an intermediate size of 3072. For datasets with short videos (MSRVTT and LSMDC), we use all the 7 experts and limit video input to 30 features per expert, and BERT input to the first 30 wordpieces. For datasets containing longer videos (HowTo100M and ActivityNet), we only use motion and audio experts, and limit our video input to 100 features per expert and our BERT input to the first 100 wordpieces. In cases where an expert is unavailable for a given video, e.g., no speech was detected, we set the aggregated feature $F_{agg}^n$ to a zero vector. We refer the reader to the supplementary material for a study of the model complexity. \subsection{Ablation studies and comparisons} We will first show the advantage of pretraining our model on a large-scale, uncurated dataset. We will then perform ablations on the architecture used for our language and video encoders. Finally, we will present the relative importance of the pretrained experts used in this work, and compare with related methods. \noindent\textbf{Pretraining.} Table~\ref{table:remove_stop_words} shows the advantage of pretraining on HowTo100M, before finetuning on the target dataset (MSRVTT in this case). We also evaluated the impact of pretraining on ActivityNet and LSMDC; see Table~\ref{table:ANet_results} and Table~\ref{table:LSMDC_results}. \begin{table}[t] \begin{center} \caption{Advantage of pretraining on HowTo100M then finetuning on MSRVTT. Impact of removing the stop words. 
Performance reported on MSRVTT.}
\label{table:remove_stop_words}
\scriptsize
\begin{tabular}{l | c | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c}
\hline
\multicolumn{2}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}}\\
Method & Caption & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$\\
\hline
\multirow{2}{*}{Pretraining without finetuning (zero-shot setting)} & all words & $\hspace{1.5em}6.9$ & $\hspace{1.5em}160.0$ & $\hspace{1.5em}240.2$ \\
& w/o stop words & $\hspace{1.5em}\textbf{14.4}$ & $\hspace{1.5em}\textbf{66.0}$ & $\hspace{1.5em}\textbf{148.1}$ \\ \hline
\multirow{2}{*}{Training from scratch on MSRVTT} & all words & $\hspace{1.5em}\textbf{54.0}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.9}$ \\
& w/o stop words & $\hspace{1.5em}50.0_{\!\pm\!0.6}$ & $\hspace{1.5em}5.3_{\!\pm\!0.5}$ & $\hspace{1.5em}28.5_{\!\pm\!0.9}$ \\ \hline
\multirow{2}{*}{Pretraining then finetuning on MSRVTT} & all words & $\hspace{1.5em}\textbf{57.1}_{\!\pm\!1.0}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{24.0}_{\!\pm\!0.8}$ \\
& w/o stop words & $\hspace{1.5em}55.0_{\!\pm\!0.7}$ & $\hspace{1.5em}4.3_{\!\pm\!0.5}$ & $\hspace{1.5em}24.3_{\!\pm\!0.3}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\noindent\textbf{Language encoder.} We evaluated several architectures for caption representation, as shown in Table~\ref{table:language_model}. Similar to the observation made in~\cite{burns2019language}, we obtain poor results from a frozen, pretrained BERT. Using the [CLS] output from a pretrained and frozen BERT model in fact gives the worst result. We suppose this is because the output was not trained for caption representation, but for a very different task: next sentence prediction. Finetuning BERT greatly improves performance; it gives the best result. We also compare with GrOVLE~\cite{burns2019language} embeddings, frozen or finetuned, aggregated with a max-pooling operation or a 1-layer LSTM and a fully-connected layer. We show that pretrained BERT embeddings aggregated by a max-pooling operation perform better than GrOVLE embeddings processed by an LSTM (best results from~\cite{burns2019language} for the text-to-clip task).
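For illustration, the two BERT-based aggregations compared in Table~\ref{table:language_model} can be sketched as follows. This minimal Python example assumes the HuggingFace \texttt{transformers} library, which is not necessarily the implementation used in our experiments:
\begin{verbatim}
import torch
from transformers import BertModel, BertTokenizer

# Assumes the HuggingFace `transformers` library and the
# "bert-base-cased" checkpoint used in our implementation details.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

caption = "a man is cooking pasta in a kitchen"
inputs = tokenizer(caption, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs)
tokens = out.last_hidden_state  # (1, num_wordpieces, 768)

# Two caption aggregations compared in the table:
cls_embedding = tokens[:, 0]                  # [CLS] output
maxpool_embedding = tokens.max(dim=1).values  # max-pool wordpieces

print(cls_embedding.shape, maxpool_embedding.shape)
\end{verbatim}
In our best configuration, the BERT weights are additionally finetuned end-to-end with the retrieval loss rather than kept frozen as in this sketch.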
\begin{table}[t]
\begin{center}
\caption{Comparison of different architectures for caption embedding when training from scratch on MSRVTT.}
\label{table:language_model}
\scriptsize
\begin{tabular}{l | l | l| @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c}
\hline
\multicolumn{2}{c}{} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}} \\
\multicolumn{2}{c|}{Word embeddings} & Aggregation & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\
\hline
\multirow{4}{*}{GrOVLE} & \multirow{2}{*}{frozen} & maxpool & $\hspace{1.5em}31.8_{\!\pm\!0.4}$ & $\hspace{1.5em}14.7_{\!\pm\!0.5}$ & $\hspace{1.5em}63.1_{\!\pm\!1.3}$ \\
& & LSTM & $\hspace{1.5em}36.4_{\!\pm\!0.8}$ & $\hspace{1.5em}10.3_{\!\pm\!0.9}$ & $\hspace{1.5em}44.2_{\!\pm\!0.1}$ \\ \cline{2-6}
& \multirow{2}{*}{finetuned} & maxpool & $\hspace{1.5em}34.6_{\!\pm\!0.1}$ & $\hspace{1.5em}12.0_{\!\pm\!0.0}$ & $\hspace{1.5em}52.3_{\!\pm\!0.8}$ \\
& & LSTM & $\hspace{1.5em}40.3_{\!\pm\!0.5}$ & $\hspace{1.5em}8.7_{\!\pm\!0.5}$ & $\hspace{1.5em}38.1_{\!\pm\!0.7}$ \\
\hline
\multirow{6}{*}{BERT} & \multirow{2}{*}{frozen} & maxpool & $\hspace{1.5em}39.4_{\!\pm\!0.8}$ & $\hspace{1.5em}9.7_{\!\pm\!0.5}$ & $\hspace{1.5em}46.5_{\!\pm\!0.2}$ \\
& & LSTM & $\hspace{1.5em}36.4_{\!\pm\!1.8}$ & $\hspace{1.5em}10.7_{\!\pm\!0.5}$ & $\hspace{1.5em}42.2_{\!\pm\!0.6}$ \\ \cline{2-6}
& \multirow{2}{*}{finetuned} & maxpool & $\hspace{1.5em}44.2_{\!\pm\!1.2}$ & $\hspace{1.5em}7.3_{\!\pm\!0.5}$ & $\hspace{1.5em}35.6_{\!\pm\!0.4}$ \\
& & LSTM & $\hspace{1.5em}40.1_{\!\pm\!1.0}$ & $\hspace{1.5em}8.7_{\!\pm\!0.5}$ & $\hspace{1.5em}37.4_{\!\pm\!0.5}$ \\ \cline{2-6}
& frozen & BERT-frozen & $\hspace{1.5em}17.1_{\!\pm\!0.2}$ & $\hspace{1.5em}34.7_{\!\pm\!1.2}$ & $\hspace{1.5em}98.8_{\!\pm\!0.8}$ \\
& finetuned & BERT-finetuned & $\hspace{1.5em}\textbf{54.0}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.9}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
We also analysed the impact of removing stop words from the captions in Table~\ref{table:remove_stop_words}. In a zero-shot setting, i.e., trained on HowTo100M, evaluated on MSRVTT without finetuning, removing the stop words helps generalize, by bridging the domain gap---HowTo100M speech is very different from MSRVTT captions. This approach was adopted in~\cite{miech2019MIL-NCE}. However, we observe that when finetuning, it is better to keep all the words, as they contribute to the semantics of the caption.
\noindent\textbf{Video encoder.} We evaluated the influence of different architectures for computing video embeddings on the MSRVTT 1k-A test split.
\begin{table}[t]
\caption{Ablation studies on the video encoder of our framework with MSRVTT. \textbf{(a) Influence of the architecture and input.} With max-pooled features as input, we compare our transformer architecture (MMT) with the variant not using an encoder (NONE) and the one with Collaborative Gating~\cite{liu2019use} (COLL). We also show that MMT can attend to all extracted features, as detailed in the text. %
\textbf{(b) Importance of initializing $F_{agg}^n$ features.} We compare zero-vector initialisation, mean pooling and max pooling of the expert features.
\textbf{(c) Influence of the size of the multi-modal transformer.} We compare different values for number-of-layers $\times$ number-of-attention-heads.} \begin{subtable}[t]{1.\linewidth} \centering \caption{Encoder architecture and input} \begin{tabular}{l | l | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}} \\ Encoder & Input & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline NONE & max pool & $\hspace{1.5em}50.9_{\!\pm\!1.5}$ & $\hspace{1.5em}5.3_{\!\pm\!0.5}$ & $\hspace{1.5em}28.6_{\!\pm\!0.5}$ \\ COLL & max pool & $\hspace{1.5em}51.3_{\!\pm\!0.8}$ & $\hspace{1.5em}5.0_{\!\pm\!0.0}$ & $\hspace{1.5em}29.5_{\!\pm\!1.8}$ \\ MMT & max pool & $\hspace{1.5em}52.5_{\!\pm\!0.7}$ & $\hspace{1.5em}5.0_{\!\pm\!0.0}$ & $\hspace{1.5em}27.2_{\!\pm\!0.7}$ \\ MMT & shuffled feats & $\hspace{1.5em}53.3_{\!\pm\!0.2}$ & $\hspace{1.5em}5.0_{\!\pm\!0.0}$ & $\hspace{1.5em}27.4_{\!\pm\!0.7}$ \\ MMT & ordered feats & $\hspace{1.5em}\textbf{54.0}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.9}$ \\ \hline \end{tabular} \label{table:video_model-cross_modal} \end{subtable}\par\bigskip \begin{subtable}[t]{.5\linewidth} \centering \caption{$F_{agg}^n$ initialisation} \begin{tabular}{l | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}} \\ $F_{agg}^n$ init & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline zero & $\hspace{1.5em}50.2_{\!\pm\!0.9}$ & $\hspace{1.5em}5.7_{\!\pm\!0.5}$ & $\hspace{1.5em}28.5_{\!\pm\!1.3}$ \\ mean pool & $\hspace{1.5em}\textbf{54.2}_{\!\pm\!0.3}$ & $\hspace{1.5em}5.0_{\!\pm\!0.0}$ & $\hspace{1.5em}27.1_{\!\pm\!0.9}$ \\ max pool & $\hspace{1.5em}54.0_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.9}$ \\ \hline \end{tabular} \label{table:video_model-initialisation} \end{subtable} \begin{subtable}[t]{.5\linewidth} \centering \caption{Model size} \begin{tabular}{l | l | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}} \\ Layers & Heads & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline 2 & 2 & $\hspace{1.5em}53.2_{\!\pm\!0.4}$ & $\hspace{1.5em}5.0_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.4}$ \\ 4 & 4 & $\hspace{1.5em}\textbf{54.0}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.9}$ \\ 8 & 8 & $\hspace{1.5em}53.9_{\!\pm\!0.3}$ & $\hspace{1.5em}4.7_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.7}$ \\ \hline \end{tabular} \label{table:video_model-model_size} \end{subtable} \label{table:video_model-ablations} \end{table} In Table~\ref{table:video_model-cross_modal}, we evaluate variants of our encoder architecture and its input. Similar to~\cite{miech2018learning}, we experiment with directly computing the caption-video similarities on the max-pooled features of each expert, i.e., no video encoder (NONE in the table). We compare this with the collaborative gating architecture (COLL)~\cite{liu2019use} and our MMT variant using only the aggregated features as input. For the first two variants without MMT, we adopt the approach of~\cite{miech2018learning} to deal with missing modalities by re-weighting $w_i(c)$.
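For reference, here is a minimal sketch (NumPy; the function and variable names are ours) of our reading of this missing-modality re-weighting: the mixture weights $w_i(c)$ of experts that are absent from a given video are zeroed and the remaining weights are renormalised so that they still sum to one.

```python
import numpy as np

def reweight(w, available):
    """w: (N,) softmax mixture weights for the N experts;
    available: (N,) boolean mask, False where the video lacks that modality.
    Returns weights that are zero for missing experts and sum to one."""
    w = np.where(available, w, 0.0)
    return w / w.sum()

w = np.array([0.4, 0.3, 0.2, 0.1])               # e.g. appearance, motion, audio, OCR
available = np.array([True, True, False, True])  # this video has no audio track
print(reweight(w, available))                    # -> [0.5, 0.375, 0.0, 0.125]
```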
We also show the superior performance of our multi-modal transformer in contextualising the different modality embeddings compared to the collaborative gating approach. We argue that our MMT is able to extract cross-modal information in a multi-stage architecture, whereas collaborative gating is limited to modulating the input embeddings. Table~\ref{table:video_model-cross_modal} also highlights the advantage of providing MMT with \textbf{all} the extracted features, instead of only aggregated ones. Temporally aggregating each expert's features ignores information about multiple events occurring in the same video (see the last three rows). As shown by the influence of ordered and randomly shuffled features on the performance, MMT has the capacity to make sense of the relative ordering of events in a video. Table~\ref{table:video_model-initialisation} shows the importance of initialising the expert aggregation feature $F_{agg}^n$. Since the output of our video encoder is extracted from the ``agg'' columns, it is important to initialise them with an appropriate representation of the experts' features. Since the transformer is a residual network architecture, initializing the $F_{agg}^n$ input embeddings with a zero vector leads to low performance. Initializing with a max-pooling aggregation of each expert's features performs better than mean pooling. Finally, we analyze the impact of the size of our multi-modal transformer model in Table~\ref{table:video_model-model_size}. A model with 4 layers and 4 attention heads outperforms both a smaller model (2 layers and 2 attention heads) and a larger model (8 layers and 8 attention heads). \noindent\textbf{Comparison of the different experts.} In Figure~\ref{fig:experts_ablation}, we show an ablation study when training our model on MSRVTT using only one expert (left), using all experts but one (middle), or gradually adding experts by greedy search (right). In the case of using only one expert, we note that the motion expert provides the best results. We attribute the poor performance of OCR, speech and face to the fact that they are absent from many videos, thus resulting in a zero-vector input to our video encoder. While the scene expert shows decent performance when used alone, it does not contribute when used alongside other experts, perhaps because the semantics it encodes are already captured by other experts such as appearance or motion. On the contrary, the audio expert alone does not perform well, but it contributes the most when used in conjunction with the others, most likely due to the complementary cues it provides compared to the other experts. \begin{figure}[t] \centering \includegraphics[width=.32\linewidth]{img/one_mod.png} \includegraphics[width=.32\linewidth]{img/all_but_one_mod.png} \includegraphics[width=.32\linewidth]{img/adding_mods.png} \caption{MSRVTT performance (mean rank; lower is better) after training from scratch, when using only one expert (left), when using all experts but one (middle), and when gradually adding experts by greedy search (right).} \label{fig:experts_ablation} \end{figure} \noindent\textbf{Comparison to prior state of the art.} We compare our method on three datasets: MSRVTT (Table~\ref{table:MSRVTT_results}), ActivityNet (Table~\ref{table:ANet_results}) and LSMDC (Table~\ref{table:LSMDC_results}).
While MSRVTT and LSMDC contain short video-caption pairs (average video duration of 13s for MSRVTT, one-sentence captions), ActivityNet contains much longer videos (several minutes) and each video is captioned with multiple sentences. We consider the concatenation of all these sentences as the caption. We show that our method obtains state-of-the-art results on all three datasets. The gains obtained through MMT's long-term temporal encoding are particularly noticeable on the long videos of ActivityNet. \begin{table}[h!] \begin{center} \caption{Retrieval performance on the MSRVTT dataset. 1k-A and 1k-B denote test sets of 1000 randomly sampled caption-video pairs used in~\cite{Yu2018JSFusion} and~\cite{miech2018learning}, respectively.} \label{table:MSRVTT_results} \scriptsize \begin{tabular}{l | l| @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}} & \multicolumn{3}{c}{\textit{Video $\longrightarrow$ Text}} \\ Method & Split & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline Random baseline & 1k-A & 0.5 & 500.0 & 500.0 & 0.5 & 500.0 & 500.0 \\ JSFusion~\cite{Yu2018JSFusion} & 1k-A & 31.2 & 13 & - & - & - & - \\ HT~\cite{miech19howto100m} & 1k-A & 35.0 & 12 & - & - & - & - \\ CE~\cite{liu2019use} & 1k-A & $\hspace{1.5em}48.8_{\!\pm\!0.6}$ & $\hspace{1.5em}6.0_{\!\pm\!0.0}$ & $\hspace{1.5em}28.2_{\!\pm\!0.8}$ & $\hspace{1.5em}50.3_{\!\pm\!0.5}$ & $\hspace{1.5em}5.3_{\!\pm\!0.6}$ & $\hspace{1.5em}25.1_{\!\pm\!0.8}$ \\ Ours & 1k-A & $\hspace{1.5em}\textbf{54.0}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{26.7}_{\!\pm\!0.9}$ & $\hspace{1.5em}\textbf{56.0}_{\!\pm\!0.9}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{23.6}_{\!\pm\!1.0}$ \\ \hline HT-pretrained~\cite{miech19howto100m} & 1k-A & 40.2 & 9 & - & - & - & - \\ Ours-pretrained & 1k-A & $\hspace{1.5em}\textbf{57.1}_{\!\pm\!1.0}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{24.0}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{57.5}_{\!\pm\!0.6}$ & $\hspace{1.5em}\textbf{3.7}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{21.3}_{\!\pm\!0.6}$ \\ \hline\hline Random baseline & 1k-B & 0.5 & 500.0 & 500.0 & 0.5 & 500.0 & 500.0 \\ MEE~\cite{miech2018learning} & 1k-B & 37.9 & 10.0 & - & - & - & - \\ JPose~\cite{wray2019finegrained} & 1k-B & 38.1 & 9 & - & 41.3 & 8.7 & - \\ MEE-COCO~\cite{miech2018learning} & 1k-B & 39.2 & 9.0 & - & - & - & - \\ CE~\cite{liu2019use} & 1k-B & $\hspace{1.5em}46.0_{\!\pm\!0.4}$ & $\hspace{1.5em}7.0_{\!\pm\!0.0}$ & $\hspace{1.5em}35.3_{\!\pm\!1.1}$ & $\hspace{1.5em}46.0_{\!\pm\!0.5}$ & $\hspace{1.5em}6.5_{\!\pm\!0.5}$ & $\hspace{1.5em}30.6_{\!\pm\!1.2}$ \\ Ours & 1k-B & $\hspace{1.5em}\textbf{49.1}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{6.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{29.5}_{\!\pm\!1.6}$ & $\hspace{1.5em}\textbf{49.4}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{6.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{24.5}_{\!\pm\!1.8}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h!]
\begin{center} \caption{Retrieval performance on the ActivityNet dataset.} \label{table:ANet_results} \scriptsize \begin{tabular}{l | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}} & \multicolumn{3}{c}{\textit{Video $\longrightarrow$ Text}} \\ Method & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline Random baseline & 0.1 & 2458.5 & 2458.5 & 0.1 & 2458.5 & 2458.5 \\ FSE~\cite{zhang2018HSE} & $\hspace{1.5em}44.8_{\!\pm\!0.4}$ & 7 & - & $\hspace{1.5em}43.1_{\!\pm\!1.1}$ & 7 & - \\ CE~\cite{liu2019use} & $\hspace{1.5em}47.7_{\!\pm\!0.6}$ & $\hspace{1.5em}6.0_{\!\pm\!0.0}$ & $\hspace{1.5em}23.1_{\!\pm\!0.5}$ & $\hspace{1.5em}46.6_{\!\pm\!0.7}$ & $\hspace{1.5em}6.0_{\!\pm\!0.0}$ & $\hspace{1.5em}24.4_{\!\pm\!0.5}$ \\ HSE~\cite{zhang2018HSE}& 49.3 & - & - & 48.1 & - & - \\ Ours & $\hspace{1.5em}\textbf{54.2}_{\!\pm\!1.0}$ & $\hspace{1.5em}\textbf{5.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{20.8}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{54.8}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{4.3}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{21.2}_{\!\pm\!0.5}$ \\ \hline Ours-pretrained & $\hspace{1.5em}\textbf{61.4}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{3.3}_{\!\pm\!0.5}$ & $\hspace{1.5em}\textbf{16.0}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{61.1}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{4.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{17.1}_{\!\pm\!0.5}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h!] \begin{center} \caption{Retrieval performance on the LSMDC dataset.} \label{table:LSMDC_results} \scriptsize \begin{tabular}{l | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c | @{\hskip -0.35cm}c @{\hskip -0.35cm}c @{\hskip -0.35cm}c} \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textit{Text $\longrightarrow$ Video}} & \multicolumn{3}{c}{\textit{Video $\longrightarrow$ Text}} \\ Method & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ & R@5$\uparrow$ & MdR$\downarrow$ & MnR$\downarrow$ \\ \hline Random baseline & 0.5 & 500.0 & 500.0 & 0.5 & 500.0 & 500.0 \\ CT-SAN~\cite{Yu2016CT-SAN} & 16.3 & 46 & - & - & - & - \\ JSFusion~\cite{Yu2018JSFusion} & 21.2 & 36 & - & - & - & - \\ CCA~\cite{Klein2015CCA} (rep. by~\cite{miech2018learning}) & 21.7 & 33 & - & - & - & - \\ MEE~\cite{miech2018learning} & 25.1 & 27 & - & - & - & - \\ MEE-COCO~\cite{miech2018learning} & 25.6 & 27 & - & - & - & - \\ CE~\cite{liu2019use} & $\hspace{1.5em}26.9_{\!\pm\!1.1}$ & $\hspace{1.5em}25.3_{\!\pm\!3.1}$ & - & - & - & - \\ Ours & $\hspace{1.5em}\textbf{29.2}_{\!\pm\!0.8}$ & $\hspace{1.5em}\textbf{21.0}_{\!\pm\!1.4}$ & $\hspace{1.5em}\textbf{76.3}_{\!\pm\!1.9}$ & $\hspace{1.5em}\textbf{29.3}_{\!\pm\!1.1}$ & $\hspace{1.5em}\textbf{22.5}_{\!\pm\!0.4}$ & $\hspace{1.5em}\textbf{77.1}_{\!\pm\!2.6}$ \\ \hline Ours-pretrained & $\hspace{1.5em}\textbf{29.9}_{\!\pm\!0.7}$ & $\hspace{1.5em}\textbf{19.3}_{\!\pm\!0.2}$ & $\hspace{1.5em}\textbf{75.0}_{\!\pm\!1.2}$ & $\hspace{1.5em}\textbf{28.6}_{\!\pm\!0.3}$ & $\hspace{1.5em}\textbf{20.0}_{\!\pm\!0.0}$ & $\hspace{1.5em}\textbf{76.0}_{\!\pm\!0.8}$ \\ \hline \end{tabular} \end{center} \end{table} \section{Introduction} Video is one of the most popular forms of media due to its ability to capture dynamic events and its natural appeal to our visual and auditory senses. Online video platforms are playing a major role in promoting this form of media. 
However, the billions of hours of video available on such platforms are unusable if we cannot access them effectively, for example, by retrieving relevant content through queries. In this paper, we tackle the tasks of caption-to-video and video-to-caption retrieval. In the first task of caption-to-video retrieval, we are given a query in the form of a caption (e.g., ``How to build a house'') and the goal is to retrieve the videos best described by it (i.e., videos explaining how to build a house). In practice, given a test set of caption-video pairs, our aim is to provide, for each caption query, a ranking of all the video candidates such that the video associated with the caption query is ranked as high as possible. On the other hand, the task of video-to-caption retrieval focuses on finding, among a collection of caption candidates, the ones that best describe the query video. A common approach to the retrieval problem is similarity learning~\cite{Xing2003distancemetric}, where we learn a function of two elements (a query and a candidate) that best describes their similarity. All the candidates can then be ranked according to their similarity with the query. In order to perform this ranking, the captions as well as the videos are represented in a common multi-dimensional embedding space, wherein similarities can be computed as a dot product of their corresponding representations. The critical question here is how to learn accurate representations of both caption and video on which to base our similarity estimation. \begin{figure*}[t] \begin{center} \includegraphics[clip, trim={1.3cm 9cm 1.2cm 3.1cm}, width=\textwidth]{img/FirstFig.png} \end{center} \caption{When matching a text query with videos, the inherent cross-modal and temporal information in videos needs to be leveraged effectively, for example, with a video encoder that handles all the constituent modalities (appearance, audio, speech) jointly across the entire duration of the video. In this example, a video encoder will be able to distinguish between ``someone walking \textit{to}'' and ``someone walking \textit{away}'' only if it exploits the temporal information of events occurring in the video (red arrows). Also, in order to understand that a ``motorbike failed to start'', it needs to use cross-modal information (e.g., absence of noise after someone tried to start the engine, orange arrow).} \label{fig:video_signal} \end{figure*} The problem of learning representations of text has been extensively studied, leading to various methods~\cite{Zhang2010BagOfWords,Mikolov2013Word2Vec,vaswani2017transformer,Hochreiter1997lstm,devlin2018bert}, which can be used to encode captions. In contrast to these advances, learning effective video representations continues to be a challenge, and forms the focus of our work. This is in part due to the multimodal and temporal nature of video. Video data varies not only in appearance, but also in motion, audio, overlaid text, speech, etc. Leveraging cross-modal relations is thus key to building effective video representations. As illustrated in Fig.~\ref{fig:video_signal}, cues jointly extracted from all the constituent modalities are more informative than handling each modality independently. Hearing a motor sound right after seeing someone starting a bike tells us that the running bike is the visible one and not a background one.
Another example is a video of ``a crowd listening to a talk'': neither the ``appearance'' nor the ``audio'' modality alone can fully describe the scene, but when they are processed together, higher-level semantics can be obtained. Recent work on video retrieval does not fully exploit such cross-modal high-level semantics: existing methods either ignore the multi-modal signal~\cite{miech2019MIL-NCE}, treat modalities separately~\cite{miech2018learning}, or only use a gating mechanism to modulate certain modality dimensions~\cite{liu2019use}. Another challenge in representing video is its temporality. Due to the difficulty of handling the variable duration of videos, current approaches~\cite{miech2018learning,liu2019use} discard long-term temporal information by aggregating descriptors extracted at different moments in the video. We argue that this temporal information can be important to the task of video retrieval. As shown in Fig.~\ref{fig:video_signal}, a video of ``someone walking \textit{to} an object'' and one of ``someone walking \textit{away} from an object'' will have the same representation once pooled temporally; however, the movement of the person relative to the object is potentially important to the query. We address the temporal and multi-modal challenges posed by video data by introducing our multi-modal transformer. It processes features extracted from different modalities at different moments in a video and aggregates them into a compact representation. Building on the transformer architecture~\cite{vaswani2017transformer}, our multi-modal transformer exploits the self-attention mechanism to gather valuable cross-modal and temporal cues about events occurring in a video. We integrate our multi-modal transformer in a cross-modal framework, as illustrated in Fig.~\ref{fig:architecture}, which leverages both captions and videos, and estimates their similarity. \begin{figure*}[t] \begin{center} \includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=1.0\textwidth]{img/architecture.pdf} \end{center} \caption{Our cross-modal framework for similarity estimation. We use our Multi-modal Transformer (MMT, right) to encode video, and BERT (left) for text.} \label{fig:architecture} \end{figure*} \paragraph{Contributions.} In this work, we make the following three contributions: (i) We introduce a novel video encoder architecture for retrieval: our multi-modal transformer effectively processes features from multiple modalities extracted at different times. (ii) We thoroughly investigate different architectures for language embedding, and show the superiority of the BERT model for the task of video retrieval. (iii) By leveraging our novel cross-modal framework, we outperform prior state of the art for the task of video retrieval on the MSRVTT~\cite{xu2016msrvtt}, ActivityNet~\cite{krishna2017activitynet} and LSMDC~\cite{Rohrbach2015LSMDC} datasets. Our framework is also the winning solution in the CVPR 2020 Video Pentathlon Challenge~\cite{Gabeur2020Challenge}. \section{Methodology} Our overall method relies on learning a function $s$ that computes the similarity between two elements, text and video, as shown in Fig.~\ref{fig:architecture}. We then rank all the videos (or captions) in the dataset according to their similarity with the query caption (or video) in the case of text-to-video (or video-to-text) retrieval.
In other words, given a dataset of $n$ video-caption pairs $\{(v_1,c_1), ..., (v_n,c_n)\}$, the goal of the learnt similarity function $s(v_i,c_j)$, between video $v_i$ and caption $c_j$, is to provide a high value if $i = j$, and a low one if $i \ne j$. Estimating this similarity (described in Section~\ref{section:similarity_estimation}) requires accurate representations for the video as well as the caption. Fig.~\ref{fig:architecture} shows the two parts of our cross-modal framework focused on producing these representations (presented in Sections~\ref{sec:videorep} and~\ref{sec:captionrep}, respectively). \subsection{Video representation} \label{sec:videorep} The video-level representation is computed by our proposed multi-modal transformer (MMT). MMT follows the architecture of the transformer encoder presented in~\cite{vaswani2017transformer}. It consists of stacked self-attention layers and fully connected layers. MMT's input $\Omega(v)$ is a set of embeddings, all of the same dimension $d_{model}$. Each of them embeds the semantics of a feature, its modality, and the time in the video when the feature was extracted. This input is given by: \begin{equation} \label{eq:transformer_input} \Omega(v) = F(v) + E(v) + T(v). \end{equation} In the following, we describe these three components. \noindent\textbf{Features $F$.} In order to learn an effective representation from the different modalities inherent in video data, we begin with video feature extractors called ``experts''~\cite{mithun2018learning,Yu2018JSFusion,miech2018learning,liu2019use}. In contrast to previous methods, we learn a joint representation leveraging both cross-modal and long-term temporal relationships among the experts. We use $N$ pretrained experts $\{F^n\}_{n=1}^N$. Each expert is a model trained for a particular task that is then used to extract features from video. For a video $v$, each expert extracts a sequence $F^n(v) = [F^n_1, ..., F^n_K]$ of $K$ features. The features extracted by our experts encode the semantics of the video. Each expert $F^n$ outputs features in $\mathbb{R}^{d_n}$; in order to bring the different expert features to a common dimension $d_{model}$, we learn $N$ linear layers (one per expert) that project all the features into $\mathbb{R}^{d_{model}}$. A transformer encoder produces an embedding for each of its feature inputs, resulting in several embeddings for an expert. In order to obtain a unique embedding for each expert, we define an aggregated embedding $F^n_{agg}$ that will collect and contextualize the expert's information. We initialize this embedding with a max-pooling aggregation of all the corresponding expert's features, i.e., $F^n_{agg} = \mathrm{maxpool}(\{F^n_k\}_{k=1}^K)$. The sequence of input features to our video encoder then takes the form: \begin{equation} \label{eq:features} F(v) = [F^1_{agg}, F^1_1, ..., F^1_K, ..., F^N_{agg}, F^N_1, ..., F^N_K]. \end{equation} \begin{figure*}[t] \begin{center} \includegraphics[clip, trim=1.1cm 10cm 1cm 8cm, width=1.0\textwidth]{img/MMT_input.pdf} \end{center} \caption{Inputs to our multi-modal transformer. We combine feature semantics $F$, expert information $E$, and temporal cues $T$ to form our video embeddings $\Omega(v)$, which are input to MMT.} \label{fig:vid_transformer_input} \end{figure*} \noindent\textbf{Expert embeddings $E$.} In order to process cross-modality information, our MMT needs to identify which expert it is attending to.
We learn $N$ embeddings $\{E_1, ..., E_N\}$ of dimension $d_{model}$ to distinguish between embeddings of different experts. Thus, the sequence of expert embeddings to our video encoder takes the form: \begin{equation} \label{eq:expert_embeddings} E(v) = [E_1, E_1, ..., E_1, ..., E_N, E_N, ..., E_N]. \end{equation} \noindent\textbf{Temporal embeddings $T$.} These provide our multi-modal transformer with information about the time in the video at which each feature was extracted. Considering videos of a maximum duration of $t_{max}$ seconds, we learn $D = \abs{t_{max}}$ embeddings $\{T_1, ..., T_D\}$ of dimension $d_{model}$. Each expert feature that has been extracted in the time range $[t,t+1)$ is temporally embedded with $T_{t+1}$; for example, a feature extracted at 7.4s in the video is temporally encoded with the temporal embedding $T_8$. We learn two additional temporal embeddings, $T_{agg}$ and $T_{unk}$, which encode aggregated features and features with unknown temporal information (for experts whose temporal information is unavailable), respectively. The sequence of temporal embeddings of our video encoder then takes the form: \begin{equation} \label{eq:temporal_embeddings} T(v) = [T_{agg}, T_1, ..., T_D, ..., T_{agg}, T_1, ..., T_D]. \end{equation} \noindent\textbf{Multi-modal Transformer.} The video embeddings $\Omega(v)$, defined in (\ref{eq:transformer_input}) as the sum of feature, expert and temporal embeddings and shown in Fig.~\ref{fig:vid_transformer_input}, are input to the transformer: $\Omega(v) = F(v) + E(v) + T(v) = [\omega^1_{agg}, \omega^1_1, ..., \omega^1_K, ..., \omega^N_{agg}, \omega^N_1, ..., \omega^N_K].$ MMT contextualises its input $\Omega(v)$ and produces the video representation $\Psi_{agg}(v)$. As illustrated in Fig.~\ref{fig:architecture}, we only keep the aggregated embedding per expert. Thus, our video representation $\Psi_{agg}(v)$ consists of the output embeddings corresponding to the aggregated features, i.e., \begin{equation} \label{eq:MMT processing} \Psi_{agg}(v) = MMT(\Omega(v)) = [\psi^1_{agg}, ..., \psi^N_{agg}]. \end{equation} The advantage of our MMT over the state-of-the-art collaborative gating mechanism~\cite{liu2019use} is two-fold: First, the input embeddings are not simply modulated in a single step but iteratively refined through several layers featuring multiple attention heads. Second, we do not limit our video encoder to a temporally aggregated feature for each expert, but instead provide all the extracted features, along with a temporal encoding describing at what moment of the video they were extracted. Thanks to its self-attention modules, each layer of our multi-modal transformer is able to attend to all its input embeddings, thus extracting the semantics of events occurring in the video over several modalities. \subsection{Caption representation} \label{sec:captionrep} We compute our caption representation $\Phi(c)$ in two stages: first, we obtain an embedding $h(c)$ of the caption, and then project it with a function $g$ into $N$ different spaces, i.e., $\Phi = g \circ h$. For the embedding function $h$, we use a pretrained BERT model~\cite{devlin2018bert}. Specifically, we extract our single caption embedding $h(c)$ from the [CLS] output of BERT. In order to match the size of this caption representation with that of the video, we learn, for the function $g$, as many gated embedding modules~\cite{miech2018learning} as there are video experts.
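Concretely, here is a minimal PyTorch sketch of one such gated embedding module, following our reading of the gated embedding unit of~\cite{miech2018learning}; the dimensions, the number of experts and all names are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedEmbeddingUnit(nn.Module):
    """Projects the caption embedding h(c) into one expert's space:
    y1 = W1 x + b1;  y2 = y1 * sigmoid(W2 y1 + b2);  output = y2 / ||y2||."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.gate = nn.Linear(out_dim, out_dim)

    def forward(self, x):
        y = self.fc(x)
        y = y * torch.sigmoid(self.gate(y))
        return F.normalize(y, dim=-1)

# One module per expert: phi^i = g_i(h(c)), i = 1..N (here N = 7, assumed)
h_c = torch.randn(1, 768)  # BERT [CLS] embedding
g = nn.ModuleList(GatedEmbeddingUnit(768, 512) for _ in range(7))
phi = [g_i(h_c) for g_i in g]
```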
Our caption representation then consists of $N$ embeddings, represented by $\Phi(c) = \{\phi^i\}_{i=1}^N$. \subsection{Similarity estimation} \label{section:similarity_estimation} We compute our final video-caption similarity $s$ as a weighted sum of each expert $i$'s video-caption similarity $\inner{\phi^i}{\psi^i_{agg}}$. It is given by: \begin{equation} \label{eq:cosine_similarity} s(v,c) = \sum_{i = 1}^{N} w_i(c)\inner{\phi^i}{\psi^i_{agg}}, \end{equation} where $w_i(c)$ represents the weight for the $i$th expert. To obtain these mixture weights, we follow~\cite{miech2018learning} and process our caption representation $h(c)$ through a linear layer, followed by a softmax operation, i.e., \begin{equation} \label{eq:weights} w_i(c) = \frac{e^{h(c)^{\top} a_{i}}}{\sum_{j=1}^{N} e^{h(c)^{\top} a_{j}}}, \end{equation} where $(a_1, ..., a_N)$ are the weights of the linear layer. The intuition behind using a weighted sum is that a caption may not describe all the modalities inherent in a video uniformly. For example, in the case of a video of a person in a red dress singing opera, the caption ``a person in a red dress'' provides no information relevant to audio; on the contrary, the caption ``someone is singing'' should focus on the audio modality for computing similarity. Note that $w_i(c)$, $\phi^i$ and $\psi^i_{agg}$ can all be precomputed offline for each caption and for each video; the retrieval operation therefore only involves dot product operations. \subsection{Training} We train our model with the bi-directional max-margin ranking loss~\cite{Karpathy2014DeepFE}: \begin{equation} \label{eq:loss} \mathcal{L} = \frac{1}{B}\sum_{i=1}^{B} \sum_{j \neq i} \Big[ \max(0, s_{ij} - s_{ii} + m) + \max(0, s_{ji} - s_{ii} + m)\Big], \end{equation} where $B$ is the batch size, $s_{ij} = s(v_{i},c_{j})$ is the similarity score between video $v_i$ and caption $c_j$, and $m$ is the margin. This loss enforces the similarity of true video-caption pairs, $s_{ii}$, to be higher than the similarity of negative samples, $s_{ij}$ or $s_{ji}$ for all $i \neq j$, by at least $m$. \section{Related work} We present previous work on language and video representation learning, as well as on visual-language retrieval. \vspace{0.2cm} \noindent \textbf{Language representations.} Earlier work on language representations includes bag of words~\cite{Zhang2010BagOfWords} and Word2Vec~\cite{Mikolov2013Word2Vec}. A limitation of these representations is their inability to capture the sequential properties of a sentence. The LSTM~\cite{Hochreiter1997lstm} was one of the first successful deep learning models to handle this. More recently, the transformer~\cite{vaswani2017transformer} architecture has shown impressive results for text representation by implementing a self-attention mechanism in which each word (or wordpiece~\cite{Wu2016Wordpiece}) of the sentence can attend to all the others. The transformer architecture, consisting of self-attention layers alternately stacked with fully-connected layers, forms the basis of the popular language modeling network BERT~\cite{devlin2018bert}. Burns et al.~\cite{burns2019language} perform an analysis of the different word embeddings and language models (Word2Vec~\cite{Mikolov2013Word2Vec}, LSTM~\cite{Hochreiter1997lstm}, BERT~\cite{devlin2018bert}, etc.) used in vision-language tasks. They show that a pretrained and frozen BERT model~\cite{devlin2018bert} performs relatively poorly compared to an LSTM or even a simpler average embedding model.
In this work, we show that for video retrieval, a pretrained BERT outperforms other language models, but it needs to be finetuned. \vspace{0.2cm} \noindent \textbf{Video representations.} With a two-stream network, Simonyan et al.~\cite{Simonyan2014TwoStream} used complementary information from still frames and motion between frames to perform action recognition in videos. Carreira et al.~\cite{Carreira2017Conv3D} incorporated 3D convolutions in a two-stream network to better capture the temporal structure of the signal. S3D~\cite{Xie2017S3D} is an alternative approach, which replaces the expensive 3D spatio-temporal convolutions with separable 2D and 1D convolutions. More recently, transformer-based methods, which leverage BERT pretraining~\cite{devlin2018bert}, have been applied to S3D features in VideoBERT~\cite{Sun2019VideoBERT} and CBT~\cite{Sun2019cbt}. While these works focus on visual signals, they have not studied how to encode other multi-modal semantics, such as audio signals. \vspace{0.2cm} \noindent \textbf{Visual-language retrieval.} Harwath et al.~\cite{Harwath2018} perform image and audio-caption retrieval by embedding audio segments and image regions in the same space and requiring high similarity between each audio segment and its corresponding image region. The method presented in~\cite{lee2018stacked} takes a similar approach for image-text retrieval by embedding image regions and words in a joint space: a high similarity is obtained for images that have matching words and image regions. For videos, JSFusion~\cite{Yu2018JSFusion} estimates video-caption similarity through dense pairwise comparisons between each word of the caption and each frame of the video. In this work, we instead estimate both a video embedding and a caption embedding and then compute the similarity between them. Zhang et al.~\cite{zhang2018HSE} perform paragraph-to-video retrieval by assuming a hierarchical decomposition of the video and paragraph. Our method does not assume that the video can be decomposed into clips that align with sentences of the caption. A recent alternative is to create separate embedding spaces for different parts of speech (e.g., noun or verb)~\cite{wray2019finegrained}. In contrast to this method, we do not pre-process the sentences but encode them directly through BERT. Another work~\cite{miech19howto100m} leverages the large number of instructional videos in the HowTo100M dataset, but does not fully exploit their temporal relations. Our work instead relies on longer segments extracted from HowTo100M videos in order to learn temporal dependencies and address the problem of misalignment between speech and visual features. Mithun et al.~\cite{mithun2018learning,mithun2019joint} use three experts (Object, Activity and Place) to compute three corresponding text-video similarities. These experts, however, do not collaborate, as their respective similarities are simply summed. A related approach~\cite{miech2018learning} uses precomputed features from experts for text-to-video retrieval, where the overall similarity is obtained as a weighted sum of each expert's similarity. A recent extension~\cite{liu2019use} of this mixture-of-experts model uses a collaborative gating mechanism to modulate each expert feature according to the other experts. However, this collaborative gating mechanism only strengthens (or weakens) some dimensions of the input signal in a single step, and is therefore not able to capture high-level inter-modality information.
Our multi-modal transformer overcomes this limitation by attending to all available modalities over multiple self-attention layers.
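To make this difference concrete, the sketch below assembles the MMT input $\Omega(v) = F(v) + E(v) + T(v)$ of Section~\ref{sec:videorep} and runs it through a stock transformer encoder (PyTorch). All shapes, layer counts and names here are illustrative assumptions for exposition, not our released implementation; in particular, $T_{unk}$ and the handling of missing features are omitted, and one feature per second is assumed.

```python
import torch
import torch.nn as nn

d_model, N, K, D = 512, 7, 30, 30  # illustrative sizes: 7 experts, K features, t_max = 30 s

# F(v): per-expert features already projected to d_model, with the "agg" slot
# of each expert initialised by max pooling over that expert's K features.
feats = torch.randn(N, K, d_model)
F_agg = feats.max(dim=1, keepdim=True).values
F = torch.cat([F_agg, feats], dim=1).reshape(1, N * (K + 1), d_model)

# E(v): one learned embedding per expert, repeated along its K + 1 slots.
E_table = nn.Embedding(N, d_model)
E = E_table.weight.repeat_interleave(K + 1, dim=0).unsqueeze(0)

# T(v): learned temporal embeddings, with index 0 playing the role of T_agg.
T_table = nn.Embedding(D + 1, d_model)
t_idx = torch.cat([torch.zeros(1, dtype=torch.long), torch.arange(1, K + 1)])
T = T_table(t_idx.repeat(N)).unsqueeze(0)

omega = F + E + T                   # Omega(v), shape (1, N*(K+1), d_model)

# A 4-layer, 4-head encoder, the best configuration in our model-size ablation.
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
mmt = nn.TransformerEncoder(layer, num_layers=4)
psi_agg = mmt(omega)[:, ::K + 1]    # keep the N aggregated outputs psi^n_agg
```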
\section{Introduction} The bent-core liquid crystals (BLCs) are a novel class of liquid crystal (LC) mesogens that manifest various unique and exciting properties such as chirality, ferroelectricity and biaxiality \cite{Photinos_biaxial_JMC,Takezoe_BLC_JJAP,Jakli_BLC_LCR,Francescangeli_cybo_SM,Punjani_Golam,Keith_NBLC_SM}. They are known to form several exotic mesophases such as the twist-bend nematic (N$_{tb}$) phase, the blue phase (BP) and the banana (B1-B7) phases \cite{Takezoe_BLC_JJAP,Cestari_Ntb_PRE,V_Borshch_Ntb_NatCom,Jakli_BLC_doped_JMC}. The nematic (N) phase of BLCs itself manifests a few of these distinct features, such as a ferroelectric response, fast switching and macroscopic biaxiality \cite{Shankar_Cybo_CPC,Shankar_Cybo_AFM,Ghosh_BLC_Ferro_JMCC,Francescangeli_Ferro_AFM,Photinos_biaxial_JMC,Francescangeli_cybo_SM}. The main reason behind these extraordinary features is the presence of locally polar cybotactic clusters formed by BLC molecules in their N phase \cite{Francescangeli_cybo_SM,Punjani_Golam,Keith_NBLC_SM, Shankar_Cybo_CPC,Shankar_Cybo_AFM,Ghosh_BLC_Ferro_JMCC,Francescangeli_Ferro_AFM}. Due to their bent molecular shape and the lack of translational symmetry, the BLC molecules in their N phase experience steric hindrance. This causes stacking of the BLC molecules in smectic layers (clusters) \cite{Scaramuzza_BLC_JAP,Jakli_Rheological_SM}. These stacks of molecules are termed `cybotactic' clusters because they are more ordered than the surrounding molecules. The clusters and the other BLC molecules together constitute the macroscopic N phase \cite{Francescangeli_cybo_SM}. Recent reports, aided by various experimental techniques, have established the existence of cybotactic clusters in the nematic, smectic and even the isotropic phases \cite{Kashima_PolarBLC_JMCC,Alaasar_Cluster_JMCC,Ghosh_BLC_Ferro_JMCC,Jakli_BLC_mixture_PRE,Domenici_NMR_SM,Goodby_Unusual_SM}. Although studied extensively, the origins of cluster formation and the effects of external factors (e.g. nanoparticle doping, electric field) on these clusters remain an open problem. Further studies are required for the manipulation and successful tailoring of cybotactic clusters for applications in science and technology, including novel BLC-driven devices. \\ Suspension of nanoparticles (NPs) in the LC matrix to improve or selectively modify the physical properties of LCs is a widely used technique in today's liquid crystal science. Studies have shown that the dispersion of nanoparticles in LCs can improve the electro-optic properties, modify the elastic anisotropy and the dielectric constants, and reduce the transition temperatures \cite{Takatoh_LCNP_JJAP, NAClark_LCCNT_APL, WLee_TNLC_APL, Ghosh_BLCCNT_JML, JitendraK_QDBLC_JML}. The incorporation of NPs can also affect the orientation of LCs and induce a homeotropic alignment \cite{Hegmann_NPLC_LC}. Varying the size and shape of the dopant NPs also has a profound effect on the physical properties of LCs \cite{Orlandi_LCNP_PCCP, Mirzaei_LCQD_composite_JMC, Kinkead_QDLC_JMC}. Recently, a new class of semiconductor NPs, called quantum dots (QDs), has emerged.
Incorporation of these QDs in the LC matrix may also alter the physical properties of LCs, such as a reduction in the dielectric anisotropy, faster response times, changes in the phase transition temperatures and altered boundary conditions \cite{Mirzaei_LCQD_composite_JMC, Kinkead_QDLC_JMC,Mirzaei_QDLC_dopant_JMC,Zhang_LCQD_JJAP,Urbanski_NPLC_Bulk_CPC,JitendraK_QDBLC_JML}. Changes in the dielectric anisotropy ($\Delta\epsilon$) provide an indirect measure of changes in the order parameter ($S$), because $\Delta\epsilon \propto S$ \cite{JitendraK_QDBLC_JML, maier_orderparameter}. The QDs are usually capped with functionalized ligands that prevent aggregation; in particular, this makes QDs good candidates for stable dilute suspensions in LC doping or dispersion experiments. To date, there has been work on QDs dispersed in calamitic nematic LCs (NLCs), while their effect on bent-core NLCs remains relatively unexplored \cite{JitendraK_QDBLC_JML}. In particular, little is known about the effect of QDs, or doping in general, on the cybotactic clusters in bent-core NLCs, and in the absence of systematic experimental and theoretical studies along these lines, doped bent-core NLC systems cannot meet their full potential. \\ We study a dilute homogeneous suspension of a QD-doped thermotropic BLC (details in the next section), confined in a planar cell with fixed boundary conditions on both cell surfaces. In particular, the undoped counterpart exhibits cybotactic clusters. Our primary investigations concern comparisons between the doped and undoped systems, which give quantitative and qualitative insight into the effects of doping, the interplay between doping and cluster formation, and how these effects can be tailored by temperature and external stimuli. This paper builds on our first paper \cite{patranabish2019one}, wherein we focussed on a one-dimensional theoretical study of the N phase of a BLC, confined in a planar cell, within a phenomenological Landau-de Gennes (LdG) framework inspired by previous insightful modelling in \cite{madhusudana2017two}. This model is based on the premise that the N phase of a BLC is characterized by two order parameters: $S_g$, which measures the ordering of the ground-state molecules (outside the clusters), and $S_c$, which measures the ordering within the smectic-like cybotactic clusters, with the coupling between the two effects captured by an empirical parameter $\gamma$. In \cite{patranabish2019one}, we theoretically study the effects of spatial inhomogeneities, confinement and the coupling parameter, $\gamma$, on $S_g$ and $S_c$. Little is known about the material-dependent values of $\gamma$ or indeed how it could be experimentally controlled. Our theoretical studies showed that larger values of $\gamma$ substantially increase the interior values of $S_g$ and $S_c$, i.e. $\gamma$ promotes order both outside and within the clusters of the N phase of the BLC, the effects being more pronounced at lower temperatures. The coupling also enhances $S_g$ above the nematic-isotropic transition temperature, i.e. the bent-core NLC can exhibit nematic order at temperatures for which a purely calamitic N phase would not.
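As a simple illustration of how such a coupling term acts, the sketch below numerically minimises a spatially homogeneous two-order-parameter Landau free energy of the same general flavour. The functional form and all coefficients here are made-up illustrative choices, not the fitted expressions or material parameters of \cite{madhusudana2017two} or \cite{patranabish2019one}:

```python
import numpy as np
from scipy.optimize import minimize

def free_energy(s, t_red, gamma):
    """Illustrative homogeneous free energy density: two coupled Landau
    expansions for the ground-state order S_g and the cluster order S_c,
    with a bilinear coupling -gamma*S_g*S_c. All coefficients are made up."""
    sg, sc = s
    f_g = t_red * sg**2 - sg**3 + sg**4          # ground-state molecules
    f_c = (t_red - 0.1) * sc**2 - sc**3 + sc**4  # clusters, assumed to order at a higher temperature
    return f_g + f_c - gamma * sg * sc

for gamma in (0.0, 0.2, 0.5):
    res = minimize(free_energy, x0=[0.3, 0.5], args=(0.05, gamma))
    print(f"gamma = {gamma}: S_g = {res.x[0]:.3f}, S_c = {res.x[1]:.3f}")
```

Increasing gamma in this toy minimisation raises both equilibrium order parameters, mirroring the qualitative trend reported above.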
The model in \cite{patranabish2019one} is simplified in many ways, yet it sheds qualitative insight into the powerful prospects offered by cybotactic clusters in BLCs and how they can be used to manipulate nematic order and phase transitions for tailor-made applications.\\ In this paper, we report a combined experimental and theoretical analysis of a QDs-dispersed bent-core nematic LC, 14-2M-CH$_3$, in the dilute regime. The dilute regime applies to systems of nano-scale QDs (much smaller than the system size) at low concentration, uniformly dispersed without any aggregation effects. We perform optical texture observations, dielectric measurements, optical birefringence measurements and orientational order parameter calculations on the pristine BLC and its QDs-dispersed counterpart. The N phase of 14-2M-CH$_3$ contains cybotactic clusters, as already reported in our earlier work \cite{Patranabish_JML}. We find that the N phase of the QDs-dispersed counterpart also contains cybotactic clusters, albeit with modified properties. We report a number of interesting experimental results for the QDs-dispersed BLC system: the optical birefringence ($\Delta n$) and the macroscopic order parameter ($S$) are lowered compared to the undoped counterpart at a given temperature; the activation energy ($E_a$) increases compared to the undoped counterpart; and, based on the measurements of the relaxation frequencies ($f_R$) and activation energies, we deduce that the size of the cybotactic clusters decreases with QDs doping. We complement our experiments with a theoretical LdG-type model for the N phase of the QD-doped BLC, using the framework developed in \cite{canevari2019design}. This framework is not specific to QDs or to BLCs but applies to generic dilute doped LC systems, and it effectively captures the effects of the homogeneously suspended inclusions (in this case QDs) in terms of an additional contribution to the free energy. Hence, we apply this approach to the LdG free energy of a BLC system proposed in \cite{patranabish2019one} and \cite{madhusudana2017two} and qualitatively capture the effects of the QDs by means of suitable novel additional energetic terms. These additional terms, in principle, depend on the properties of the QDs, e.g. size, shape, anchoring and preferred order. We introduce a weighted mean scalar order parameter, $S_m$, the theoretical analogue of the experimentally measured scalar order parameter. This simple approach does capture the doping-induced reduction in the mean order parameter $S_m$, which in turn qualitatively explains the reduction in birefringence and dielectric anisotropy. We present our experimental results in three parts below, followed by the mathematical model, numerical results and perspectives for future work. \section{Experimental} \begin{table}[b] \caption{Phase sequence and transition temperatures observed in this study (using POM) during slow cooling.} \begin{ruledtabular} \begin{tabular}{lc} Compound & \makecell{Phase sequence and transition \\ temperatures ($^\circ$C)}\\ \hline 14-2M-CH$_3$ & Iso 134 N$_{Cyb}$ 106 Cryst. \\ 14-2M-CH$_3$ + 0.5 wt\% QDs & Iso 134 N$_{Cyb}$ 104 Cryst. \\ \end{tabular} \end{ruledtabular} \end{table} A thermotropic bent-core nematic liquid crystal (LC), 14-2M-CH$_3$, was used for the experimental study and also as the host for the studied LC nanocomposite. The LC material was obtained from Prof. N.V.S. Rao's group at the Department of Chemistry, Assam University, Silchar, Assam, India.
The molecular formula of 14-2M-CH$_3$ and details of its synthetic scheme are available in our earlier paper \cite{Patranabish_JML}. The CdSe/ZnS core-shell type quantum dots (QDs) of diameter 5.6 nm (core diameter 2.8 nm + shell thickness 1.4 nm) were procured from Sigma-Aldrich, Merck (USA) for preparing the LC nanocomposites. The spherical QDs are stabilized by encapsulation with octadecylamine ligands and, as specified by the manufacturer, have absorption maxima in the range 510 to 540 nm and emission wavelengths in the range 530 to 540 nm. The sequence of experimental steps performed is as follows: preparation of the QDs-dispersed LC nanocomposite, optical texture observation and evaluation of transition temperatures, orientational order parameter determination \textit{via} optical birefringence measurements, and dielectric characterization. All the experimental measurements were carried out while slowly cooling the sample from the isotropic liquid. \\ \begin{figure}[t] \centering \includegraphics[width = 0.9 \linewidth]{"Fig_1_LCQD_colloid".pdf} \caption{Visibly homogeneous solutions of (a) 14-2M-CH$_3$ (b) CdSe/ZnS QDs and (c) nanocomposite (14-2M-CH$_3$ + 0.5 wt\% CdSe/ZnS QD) in Chloroform (CHCl$_3$).} \label{Fig1} \end{figure} To prepare the LC nanocomposite, CdSe/ZnS QDs were taken at 0.5 wt\% concentration and mixed with the LC compound 14-2M-CH$_3$. To obtain a homogeneous dispersion of the quantum dots in the LC matrix, chloroform was added to the mixture, and the mixture was ultrasonicated until a visibly homogeneous dispersion was achieved (Figure 1). The mixture was kept at $\sim$ 60 $^\circ$C for 2-3 hours and was then left overnight at room temperature for the slow evaporation of chloroform \cite{pradeepkumar}. Once the chloroform was completely evaporated, the 0.5 wt\% QDs-dispersed LC nanocomposite was obtained. It was checked visually through a polarizing optical microscope several times, but no aggregation of QDs was noticed.\\ \begin{figure*} \centering \includegraphics[width = \linewidth]{"Fig_2_textures".pdf} \caption{Birefringent textural colour variation with temperature of (a-e) the bent-core LC 14-2M-CH$_3$ and (f-j) the 0.5 wt\% CdSe/ZnS QDs-dispersed 14-2M-CH$_3$, during cooling, respectively. In each image, \textbf{\textit{r}} indicates the rubbing direction and the scale-bar indicates 100 $\mu \mathrm{m}$. The periodic white spots in the background of the images are features of the LC cell (Instec Inc., USA) caused by PI printing fabric in the production line.} \label{Fig2} \end{figure*} Indium Tin Oxide (ITO) coated 5 $\mu$m planar (homogeneous alignment) cells (Instec Inc., USA) were used for the experiments. Two different cells of this type were used for the pristine LC and the LC nanocomposite, respectively. The LCs were filled into the cells \textit{via} capillary action around 10 $^\circ$C above the clearing temperature. During measurements, the cells were kept inside an Instec HCS302 hot-stage and the temperature was maintained using an Instec MK1000 temperature controller with an accuracy of $\pm$ 0.01 $^{\circ}$C. The liquid crystalline textures were recorded using an OLYMPUS BX-51P polarizing optical microscope (POM) attached to a computer, with the sample placed between two crossed polarizers. \\ The phase behaviour and transition temperatures of the LC 14-2M-CH$_3$ and its nanocomposite were determined using the POM while slowly cooling from the isotropic liquid (0.5 $^\circ$C/min) \cite{JitendraK_QDBLC_JML, pradeepkumar}.
The transition temperatures of the pristine bent-core LC 14-2M-CH$_3$ were also determined previously using differential scanning calorimetry (DSC) at a scan rate of 5 $^\circ$C/min (reported elsewhere) \cite{Patranabish_JML}. The transition temperatures of the pristine LC and its nanocomposite, as obtained from the POM observations, are summarized in Table 1. The dielectric measurements were carried out in the frequency range of 20 Hz - 2 MHz using an Agilent E4980A precision LCR meter. The measuring voltage was kept at $V_{rms} = 0.2$ V. For transmission-dependent birefringence measurements and the related order parameter calculations, the sample was placed between two crossed Glan-Thompson polarizers (GTH10M, Thorlabs, Inc.) and perpendicularly illuminated with a He-Ne laser ($\sim$ 633 nm) \cite{susantaPRE, susantaJMC}. The rubbing direction $\vec{r}$ (\textit{i.e.} the LC director $\widehat{n}$) of the planar LC cell was kept at 45$^\circ$ with respect to the polarizer (P)/analyzer (A) pass axes. The transmitted power at the output end was measured using a Gentec PH100-Si-HA-OD1 photo-detector attached to a Gentec Maestro power meter. \\ \section{Results and discussion} \subsection{Polarizing optical microscopy} The LC material was introduced into a 5 $\mu$m planar LC cell \textit{via} capillary action around 10 $^\circ$C above the isotropic-nematic transition temperature, and the textures were recorded between crossed polarizers. The textures recorded for the LC 14-2M-CH$_3$ and its 0.5 wt$\%$ QDs-dispersed nanocomposite, during slow cooling from the isotropic liquid, are shown in Figure 2. The textures of the LC nanocomposite exhibited fairly homogeneous colours (and hence alignment) similar to those of the pristine LC. This indicates a good, homogeneous dispersion of QDs in the LC matrix without any aggregation \cite{JitendraK_QDBLC_JML}. Close to the isotropic-nematic (Iso-N) transition temperature, we observe a sharp colour change owing to the development of nematic order in these systems (see Figures 2(a) and 2(f)). As the temperature is further lowered, uniform marble textures, typical of the nematic phase, appear with colours varying with temperature \cite{Patranabish_JML}. The isotropic-nematic transition temperature remains nearly unaltered after the incorporation of QDs. In the N phase, the emergent colours change with decreasing temperature, which indicates that the birefringence ($\Delta n$) also changes with temperature. A qualitative estimate of this change in birefringence can be made by matching the colours with the Michel-Levy chart for a given thickness \cite{Michel_Levy}. From this mapping, we deduce that $\Delta n$ increases with decreasing temperature. The change in $\Delta n$ with temperature is also found to be quite large ($\sim$ 0.06), which is suggestive of highly ordered microstructures in the N phase of the BLC compound \cite{Keith_NBLC_SM,Nafees_RSCAdv_2015}. Moreover, Figure 2 clearly shows that the temperature-dependent textural colour sequence shifts after incorporation of the QDs. With the help of the Michel-Levy chart, we qualitatively deduce that the $\Delta n$ values, at a fixed temperature, are lowered on incorporation of the QDs, implying a reduction in the corresponding nematic order parameter $S$, since $\Delta n \propto S$ \cite{pradeepkumar}. Experimentally, $\Delta n$ measurements and the associated order parameter ($S$) calculations have also been performed; they are discussed in detail in subsection III-B.
\subsection{Optical birefringence measurements and orientational order parameter calculations} The birefringence ($\Delta n$) measurements of the LC sample and its nanocomposite, as a function of temperature, have been performed with the optical transmission technique. The planar LC sample is perpendicularly illuminated with a He-Ne laser ($\lambda$ $\sim$ 633 nm) and placed between two crossed Glan-Thompson polarizers such that the optic axis makes an angle $\varphi = 45^{\circ}$ with the polarizer/analyzer pass axis. The power at the output end is measured with a photodetector. The transmitted light intensity is then given in terms of the phase retardation ($\delta$) as \cite{dierking_LCtextures}, \begin{equation} I = I_0 \sin^2 2 \varphi \, \sin^2 \frac{\delta}{2}, \end{equation} which for $\varphi = 45^{\circ}$ reduces to $I = \frac{I_0}{2} (1 - \cos \delta)$. \begin{figure}[b] \centering \includegraphics[width = 0.8\linewidth]{"Fig_3_birefringence_fit".pdf} \caption{Experimental values of birefringence ($\Delta n$) for the LC 14-2M-CH$_3$ (half-filled squares) and its nanocomposite (half-filled circles); the solid lines (pure LC: red, LC nanocomposite: blue) represent the four-parameter fit to the experimental data using Equation (3). The related fitting parameter values are shown in the figure. The goodness-of-fit measures $\chi^2$ and $R^2$ are generated by the fitting algorithm; $\chi^2 \sim 0$ and $R^2 \sim 1$ indicate good fits.} \label{Fig3} \end{figure} Here, $\delta = \frac{2 \pi }{\lambda} \Delta n d$ is the phase retardation, $I$ is the transmitted light intensity, $I_0$ is the incident light intensity, $\varphi$ (= 45$^\circ$) is the azimuthal angle, i.e., the angle made by the optic axis with the polarizer/analyzer pass axis, $\lambda$ is the incident light wavelength, $\Delta n = n_e - n_o$ is the birefringence, $n_e$ and $n_o$ are the extraordinary and ordinary refractive indices of the LC, respectively, and $d$ is the thickness of the LC cell. The birefringence, $\Delta n$, is obtained directly from the measured transmission using equation (1), as a function of temperature. In Figure 3, we plot the experimentally measured birefringence ($\Delta n$) values for pure 14-2M-CH$_3$ (half-filled squares) and its nanocomposite (half-filled circles) at different temperatures. In both cases, on cooling from the isotropic liquid, $\Delta n$ manifests a sharp increase following the Isotropic-N phase transition, essentially due to an enhancement of the nematic order. On further cooling, $\Delta n$ continues to increase, though more slowly. It is to be noted that the birefringence values decrease appreciably on incorporation of QDs over the entire mesophase range.\\ \begin{figure}[t] \centering \includegraphics[width = 0.8\linewidth]{"Fig_4_OOP".pdf} \caption{Orientational order parameter ($S$) as a function of temperature for the bent-core LC 14-2M-CH$_3$ and its 0.5 wt\% CdSe/ZnS QDs-incorporated nanocomposite.} \label{Fig4} \end{figure} For a precise determination of the temperature dependence of the nematic order parameter (\textit{S}), we resort to the four-parameter power-law expression, which is in agreement with the mean-field theory of weakly first-order transitions \cite{four_parameter,susantaPRE}, \begin{equation} S(T) = S^{**} + A \left\lvert\left(1 - \frac{T}{T^{**}}\right)\right\rvert^\beta, \end{equation} Here, $T$ is the absolute temperature and $T^{**}$ is the absolute superheating limit of the nematic phase; at $T=T^{**}$, $S(T^{**})=S^{**}$, $\beta$ is the critical exponent and $A$ is a constant.
At $T=0$, $S(0)=1$, which implies $1 = S^{**}+A$. The birefringence ($\Delta n$) can then be expressed as \cite{susantaPRE}, \begin{equation} \Delta n = \xi\left[S^{**} + (1-S^{**}) \left\lvert\left(1 - \frac{T}{T^{**}}\right)\right\rvert^\beta\right], \end{equation} where $\xi=(\Delta\alpha/\langle\alpha\rangle)[(n_I^2-1)/2n_I]$, $\Delta\alpha$ is the molecular polarizability anisotropy, $\langle\alpha\rangle$ is the mean polarizability and $n_I$ is the refractive index in the isotropic phase just above the Isotropic-N transition temperature. The experimental birefringence ($\Delta n$) data have been well fitted with equation (3), which involves the four fit parameters $\xi$, $S^{**}$, $\beta$ and $T^{**}$. The obtained fitting plots (pure LC: red solid line, LC nanocomposite: blue solid line), along with the fit parameter values, are shown in Figure 3. The four-parameter fitting is considered superior to Haller's method, which involves a smaller number of fit parameters \cite{Haller_HallerFit,susantaPRE,four_parameter}. We obtain $\xi = 0.324$, $S^{**} = 0.109$, $T^{**} = 134.232 ^{\circ}C$ and $\beta = 0.251$ for the pure LC. For the LC nanocomposite, we obtain $\xi = 0.317$, $S^{**} = 0.059$, $T^{**} = 134.199 ^{\circ}C$ and $\beta = 0.253$. The fit parameter values remain almost unaltered after the incorporation of QDs, except for the value of $S^{**}$, which is reduced almost by a factor of $\frac{1}{2}$. This indicates that the QDs have a significant effect on the nematic order in the LC mesophase. The value of the critical exponent $\beta$ is around 0.25 in both cases, which is in excellent agreement with the theoretically predicted values for the nematic phase \cite{susantaPRE,four_parameter}. The temperature-dependent macroscopic orientational order parameter ($S$) is calculated using equation (2) with the parameter values obtained from the fittings. The obtained temperature-dependent profiles of $S$, for both cases, are shown in Figure 4. The order parameter $S$ decreases appreciably after the incorporation of QDs. This decrease can be ascribed to the reduction of the cybotactic cluster size after QDs incorporation, as will be discussed in the dielectric studies section. The nematic phase range, as observed from the birefringence measurements, was found to be around 134-106 $^\circ$C for the pure LC and around 134-104 $^\circ$C for the QDs-incorporated LC, consistent with the POM observations.\\ \subsection{Dielectric Studies} \begin{figure}[b] \centering \includegraphics[width = \linewidth]{Fig_5_dielectric_spectroscopy.pdf} \caption{Frequency-dependent real ($\epsilon'$) and imaginary ($\epsilon ''$) parts of dielectric permittivity of (a-b) pristine LC (14-2M-CH$_3$) and (c-d) QDs-dispersed LC nanocomposite (14-2M-CH$_{3}$ + 0.5 wt\% CdSe/ZnS QDs), at different temperatures, during cooling. ($f$ in Hz)} \label{Fig5} \end{figure} Dielectric measurements have been carried out in a frequency range of 20 Hz $-$ 2 MHz (measuring voltage V$_{rms}$ = 0.2 V) and at different temperatures during the cooling cycle. The complex dielectric permittivity ($\epsilon^*$) of LCs, in the frequency domain, is expressed as $\epsilon^*(f) = \epsilon'(f) - i\epsilon''(f)$ \cite{Haase_relaxation}. Here, $\epsilon'$ and $\epsilon''$ are the real and the imaginary parts of the complex dielectric permittivity, respectively. The dielectric spectra of $\epsilon'$ and $\epsilon''$, obtained from experiments, for the LC 14-2M-CH$_3$ and its QDs-dispersed nanocomposite are shown in Figure 5.
The maximum experimental error for the dielectric measurements lies within $\pm 1\%$. From Figure 5(a), we can see that the value of $\epsilon'$ at lower frequencies is $\sim$ 110 for the LC 14-2M-CH$_3$. Such high values of $\epsilon'$ have recently been observed in bent-core LCs containing cybotactic clusters \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC}. The dielectric absorption spectra unveil the associated relaxation processes in the medium. The absorption spectra of 14-2M-CH$_3$ are depicted in Figure 5(b). At any temperature, two distinct relaxation peaks (or modes) can be identified: a low-frequency mode (M$_1$) and a high-frequency mode (M$_2$). The two modes represent different relaxation processes present in the LC medium. Collective relaxation processes (due to cybotactic clusters) are known to give rise to low-frequency absorption peaks similar to M$_1$, and they are widely encountered in the N phases of bent-core LCs \cite{Haase_relaxation, Ghosh_BLC_Ferro_JMCC, Shankar_Cybo_AFM, Shankar_Cybo_CPC, Scaramuzza_BLC_JAP, Jakli_Cybo_DirectObs_PRL}. The relaxation frequencies ($f_R$) associated with cybotactic clusters can vary in the range of a few tens of Hz to a few hundred Hz \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC, Ghosh_BLC_Ferro_JMCC, Scaramuzza_BLC_JAP}. Therefore, the mode M$_1$ is attributed to collective relaxation processes originating from cybotactic clusters present in the N phase of the LC. These clusters occupy only a fraction of the volume, and not all molecules form these clusters \cite{madhusudana2017two,patranabish2019one}. Experiments show that the clusters can also exist in the isotropic phase, and their size does not change significantly across the I-N transition - a unique property of BLCs that warrants further investigation \cite{Ghosh_BLC_Ferro_JMCC, Patranabish_JML, madhusudana2017two, Panarin_Vij_Cybo_ratio_BJN, Wiant_BLC_PRE}. As reported in \cite{Patranabish_JML}, based on detailed small-angle X-ray scattering (SAXS) and dielectric measurements, the N phase of the pristine LC 14-2M-CH$_3$ is cybotactic in nature, \textit{i.e.}, it contains smectic-like cybotactic clusters. Also, the mode M$_1$ is not associated with ionic impurities, because no ionic polarization response could be detected for either the pure or the doped LC (applied voltage up to 80 $V_{pp}$, frequencies in the mHz to kHz range) \cite{Jakli_BLC_LCR, Patranabish_JML}. The high-frequency mode M$_2$ represents reorientation of the LC molecular short-axis (around the long molecular axis), subject to planar anchoring conditions \cite{Ghosh_BLC_Ferro_JMCC, Patranabish_JML}. On entering the crystalline phase, M$_2$ is no longer visible, which signifies that M$_2$ is a feature of the LC phase itself and is not related to the cell's ITO electrodes. Further, the strength of M$_2$ decreases with increasing temperature. This suggests that in the isotropic phase, at temperatures much higher than the isotropic-nematic transition, the mode M$_2$ will be completely absent. Therefore, we attribute M$_2$ to the reorientation of the LC molecular short-axis.
Similar high-frequency modes were observed in the N phase of a 5-ring bent-core LC CNRbis12OBB, and they were attributed to the independent rotation of dipolar groups (around the long axis) \cite{tadapatri2010permittivity}.\\ \begin{figure}[b] \centering \includegraphics[width = \linewidth]{"Fig_6_DC_bias".pdf} \caption{DC bias suppression of the low-frequency relaxation mode (M$_1$) in (a) pure 14-2M-CH$_3$ at 129 $^\circ$C and (b) 0.5 wt\% QDs incorporated 14-2M-CH$_3$ at 121 $^\circ$C. ($f$ in Hz)} \label{Fig6} \end{figure} \begin{figure}[t] \centering \includegraphics[width = 0.6\linewidth]{"Fig_7_Static".pdf} \caption{Dielectric permittivity of 14-2M-CH$_3$ and its nanocomposite at 10 kHz, as a function of temperature.} \label{Fig7} \end{figure} For the 0.5 wt\% QDs incorporated 14-2M-CH$_3$, the dispersion curve is shown in Figure 5(c). Similar to the pristine LC 14-2M-CH$_3$, the value of $\epsilon'$ at lower frequencies is large ($\sim$ 110). The absorption spectra of the LC nanocomposite are depicted in Figure 5(d). At any temperature, two distinct relaxation peaks (or modes) can be identified: a low-frequency mode (M$_1$) and a high-frequency mode (M$_2$). After the incorporation of QDs, a relative change in the associated relaxation frequencies ($f_R$) of the modes can be observed, compared to the pristine LC. The values of $f_R$ have been evaluated from the experimental data and are discussed in detail later in this section. The $f_R$ associated with M$_1$ is denoted by $f_{R1}$ and that of M$_2$ by $f_{R2}$. By comparison with the results obtained for the pristine LC 14-2M-CH$_3$, it is evident that the collective processes (and hence the cybotactic clusters) survive in the QDs dispersed LC nanocomposite. However, to establish this firmly, additional DC bias measurements have been performed. For collective processes, when a DC bias voltage is applied across an LC cell, the relaxation process ceases to exist. As a result, the dielectric relaxation modes get suppressed, and at high voltages they become extinct \cite{douali_dielectric, Ghosh_BLC_Ferro_JMCC, Haase_relaxation}. A DC bias voltage of amplitude up to 20 V was applied across the LC cell and the dielectric measurements were performed (Figure 6). For the pure LC 14-2M-CH$_3$, a continuous and gradual suppression of mode M$_1$ with the applied DC bias voltage is observed (Figure 6(a)). This is confirmatory proof of collective relaxations, and hence of the presence of cybotactic clusters in the N phase of the LC \cite{Ghosh_BLC_Ferro_JMCC,Patranabish_JML}. Similar to the pristine LC, we observe that the mode M$_1$ of the LC nanocomposite becomes suppressed (Figure 6(b)) and then completely absent at higher voltages ($\sim$ 20 V). This observation further confirms the collective behaviour of M$_1$ \cite{Ghosh_BLC_Ferro_JMCC,Patranabish_JML}, and hence corroborates the retention of the cybotactic nematic phase (N$_{Cyb}$) in the QDs dispersed LC nanocomposite. The high-frequency mode M$_2$, however, does not show any change on DC bias application and hence represents reorientation of the LC molecular short axis. Moreover, we note that in the doped LC, similar to the pristine LC, M$_2$ is absent in the crystalline state and its strength decreases with increasing temperature.\\ The permittivity ($\epsilon'$) values at $f =$ 10 kHz have been evaluated as a function of temperature (Figure 7). On incorporation of QDs, the permittivity ($\epsilon'$) increases appreciably.
In a planar configuration, $\epsilon'$ represents $\epsilon_{\perp}$. The dielectric anisotropy ($\Delta \epsilon$) is defined as $\Delta \epsilon = \epsilon_{||} - \epsilon_{\perp}$. Therefore, an increase in $\epsilon'$ implies a decrease in $\Delta \epsilon$. Further, a reduction in the dielectric anisotropy is indicative of a decreasing macroscopic order parameter (since $\Delta \epsilon \propto S$) \cite{JitendraK_QDBLC_JML, maier_orderparameter}. This agrees well with the observations made in Sections III-A and III-B.\\ To analyze the dielectric modes and the effects of the incorporation of QDs, the associated dielectric parameters (e.g., the dielectric strength ($\delta \epsilon$) and the relaxation frequency ($f_R$)) have been evaluated by fitting the experimental dielectric data (both $\epsilon'$ and $\epsilon''$, simultaneously) using the well-known Havriliak-Negami fit function. The frequency-dependent complex dielectric permittivity, $\epsilon^{*}(f)$, can be described by the modified Havriliak-Negami (H-N) equation \cite{Havriliak_1966, Havriliak_1967,Ghosh_HNFit_JML,Ghosh_HNFit_LC,susantaJMC}, which also includes contributions from the dc conductivity ($\sigma_0$): \begin{equation} \epsilon^*(f) = \epsilon_{\infty} + \sum_{k=1}^N \frac{\delta \epsilon_k}{[1 + (i 2\pi f\tau_k)^{\alpha_k}]^{\beta_k}} - \frac{i \sigma_0}{\epsilon_0 (2 \pi f)^s} \end{equation} \begin{figure}[b] \centering \includegraphics[width = \linewidth]{"Fig_8_HN_fit".pdf} \caption{Simultaneous fitting of the real ($\epsilon'$) and the imaginary ($\epsilon''$) parts of the complex dielectric permittivity (in \textit{log} scale) using the Havriliak-Negami (H-N) equations in (a) pure 14-2M-CH$_3$ and (b) 0.5 wt\% QDs incorporated 14-2M-CH$_3$ ($f$ in Hz). Experimental data: the green squares represent $\epsilon'$ and the hollow circles represent $\epsilon''$. Fit data: the red solid line represents the fit to $\epsilon'$ and the blue solid line represents the fit to $\epsilon''$. The goodness-of-fit statistics $\chi^2$ and $R^2$ are generated by the fitting algorithm; values $\chi^2 \sim 0$ and $R^2 \sim 1$ indicate good fits.} \label{Fig8} \end{figure} The last term on the right-hand side of equation (4) describes the motion of free-charge carriers in the sample.
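As an aside, Equation (4) is straightforward to evaluate with complex arithmetic. The following minimal Python sketch (our illustration, assuming \texttt{numpy}; it is not the analysis code used here) implements it; the real and imaginary parts can then be fitted simultaneously with any standard least-squares routine by stacking their residuals.
\begin{verbatim}
import numpy as np

EPS0 = 8.854e-12  # free-space permittivity, F/m

def hn_permittivity(f, eps_inf, modes, sigma0, s):
    # Modified Havriliak-Negami model of Equation (4).
    # modes: list of (d_eps, tau, alpha, beta), one tuple per mode.
    f = np.asarray(f, dtype=float)
    eps = np.full(f.shape, eps_inf, dtype=complex)
    for d_eps, tau, alpha, beta in modes:
        eps += d_eps / (1.0 + (2j * np.pi * f * tau) ** alpha) ** beta
    eps -= 1j * sigma0 / (EPS0 * (2.0 * np.pi * f) ** s)
    return eps  # eps' = eps.real and eps'' = -eps.imag

# Two modes, as in our spectra (parameter values illustrative only)
f = np.logspace(1.3, 6.3, 200)
eps = hn_permittivity(
    f, eps_inf=5.0, sigma0=1e-8, s=1.0,
    modes=[(100.0, 1.0 / (2 * np.pi * 250.0), 0.98, 0.95),
           (5.0, 1.0 / (2 * np.pi * 2.0e5), 0.98, 0.95)])
\end{verbatim}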
The characteristic dielectric parameters, such as the relaxation frequency ($f_R$) and the dielectric strength ($\delta \epsilon$), are obtained by fitting the experimental dielectric permittivity ($\epsilon'$) and dielectric loss ($\epsilon''$) data simultaneously to the real and the imaginary parts of equation (4), given by \cite{Ghosh_BLC_Ferro_JMCC,Ghosh_HNFit_JML,Ghosh_HNFit_LC,Golam_Hockey_BLC_ACSOmega,susantaJMC}, \small{ \begin{equation} \begin{aligned} & \epsilon' = \epsilon_{\infty} + \\ & \sum_{k=1}^N \frac{\delta \epsilon_k \cos (\beta_k \theta)}{[1+(2\pi f \tau_k)^{2\alpha_k} + 2(2\pi f\tau_k)^{\alpha_k} \cos (\alpha_k \pi / 2)]^{\beta_k/2}} \\ \end{aligned} \end{equation} } \small{ \begin{equation} \begin{aligned} & \epsilon'' = \frac{\sigma_0}{\epsilon_0 (2 \pi f)^s} + \\ & \sum_{k=1}^N \frac{\delta \epsilon_k \sin (\beta_k \theta)}{[1+(2\pi f \tau_k)^{2\alpha_k} + 2(2\pi f\tau_k)^{\alpha_k} \cos (\alpha_k \pi / 2)]^{\beta_k/2}} \\ \end{aligned} \end{equation} } Here, \begin{equation} \theta= \tan^{-1} \left[ \frac{(2 \pi f \tau_k)^{\alpha_k}\sin(\alpha_k \pi/2)}{1+ (2 \pi f \tau_k)^{\alpha_k} \cos(\alpha_k \pi/2)} \right] \end{equation} \begin{figure}[b] \centering \includegraphics[width = \linewidth]{"Fig_9_strength_fR".pdf} \caption{Temperature-dependent variation of the relaxation frequency ($f_R$) and the dielectric strength ($\delta \epsilon$) corresponding to M$_1$ and M$_2$ of (a-b) the pristine LC and (c-d) 0.5 wt $\%$ QDs incorporated LC.} \label{Fig9} \end{figure} Here, $f$ is the frequency, $\epsilon_{\infty}$ is the high-frequency limit of the permittivity, $\delta \epsilon_k$ is the dielectric strength of the $k$-th relaxation process, $\sigma_0$ is the dc conductivity, $\epsilon_0$ is the free-space permittivity ($8.854 \times 10^{-12}$ F/m), $s$ is a fitting parameter responsible for the nonlinearity in the dc conductivity part (for ohmic behaviour, $s$ = 1), $N$ is the number of relaxation processes indexed by $k$, $\tau_k$ ($= 1/(2\pi f_k)$) is the relaxation time of the $k$-th relaxation process, and $\alpha_k$ and $\beta_k$ are the empirical fit parameters that describe the symmetric and non-symmetric broadening, respectively, of the $k$-th relaxation peak. In our case, the absorption curve contains two relaxation peaks, and hence $N = 2$ ($k = 1, 2$). Representative Havriliak-Negami (H-N) fits are shown in Figure 8. The values of $\alpha_1$ and $\alpha_2$ lie in the range 0.97 $-$ 1, while the values of $\beta_1$ and $\beta_2$ lie in the range 0.93 $-$ 1 (the fitting is performed over the temperature range $106^{\circ}$C$-134^{\circ}$C, and the quoted ranges of $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$ refer to this interval). In the study of a 5-ring bent-core LC C1Pbis10BB and its mixtures with a calamitic nematic LC 6OO8, the authors reported a Debye-type low-frequency relaxation mode $B_{||1}$ \cite{salamon2010dielectric}. They also note that smectic-like clusters can induce a Debye-type relaxation in the low-frequency region of the dielectric spectrum. Our dielectric results also indicate that M$_1$ is a near Debye-like relaxation process, and the associated relaxation frequencies overlap with those of the mode $B_{||1}$ reported in \cite{salamon2010dielectric}. The variations of the relaxation frequency ($f_R$) and the dielectric strength ($\delta \epsilon$) of modes M$_1$ and M$_2$ with temperature, as obtained from the fitting, are shown in Figure 9. The results show that $\delta \epsilon_1$ ($i.e.$ corresponding to M$_1$) for 14-2M-CH$_3$ increases slightly, from $\sim$ 98 to $\sim$ 104, with increasing temperature.
Similarly, $\delta \epsilon_1$ for the QDs dispersed nanocomposite increases slightly, from $\sim$ 100 to $\sim$ 105, with increasing temperature. Thus, the dielectric strength $\delta \epsilon_1$ is largely unaffected by doping. Again, the value of $\delta \epsilon_1$ is quite large and similar to that of other bent-core LCs with cybotactic clusters \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC, Ghosh_BLC_Ferro_JMCC}. The dielectric strength $\delta \epsilon_2$ associated with M$_2$ is found to be very small and increases with decreasing temperature - from $\sim$ 4.5 to 5.5 for both 14-2M-CH$_3$ and its nanocomposite.\\ The relaxation frequency ($f_{R1}$) associated with M$_1$ lies in the range of $\sim$ 170 Hz to $\sim$ 320 Hz for the pure LC, similar to several other bent-core LCs with cybotactic clusters \cite{Shankar_Cybo_AFM, Shankar_Cybo_CPC, Ghosh_BLC_Ferro_JMCC}. The incorporation of QDs causes $f_{R1}$ to shift to higher frequencies ($\sim$ 220 Hz to $\sim$ 430 Hz). This indicates an apparent reduction in the size of the smectic-like cybotactic clusters \cite{Panarin_Vij_Cybo_ratio_BJN}. This reduction can be estimated qualitatively by taking the ratio of the relaxation frequency $f_{R1}$ of the pristine LC to that of its doped counterpart. The ratio has an average value $\sim$ 0.67, where the average is taken over two sets of measurements and a range of temperatures. This ratio signifies a relative change in the average number of molecules ($N_c$) present in each cluster (in accordance with our earlier theoretical model and the experiments) \cite{patranabish2019one,Panarin_Vij_Cybo_ratio_BJN}. The decrease in the measured order parameter ($S$) on doping can be ascribed to reduced cluster sizes on QDs incorporation. In our earlier theoretical work on bent-core nematic LCs, we took $N_c$ = 50 \cite{patranabish2019one}. Recent observations have shown that the typical size of smectic-like cybotactic clusters lies in the range of a few tens of nanometres to around a hundred nanometres \cite{Jakli_Cybo_DirectObs_PRL}. Again, the typical dimension of a bent-core LC molecule is around 2$-$3 nanometres. Therefore, the number $N_c$ = 50 is justified in the case of pure (undoped) bent-core LCs with cybotactic clusters. For the QDs dispersed bent-core nematic LCs, we can take $N_c$ $\sim$ 33 ($= 50 \times 0.67$) as a reasonable value. The relaxation frequency $f_{R1}$ manifests a gradual decrease with decreasing temperature, revealing an Arrhenius-type behaviour ($f_R=f_0 \exp(-E_a/k_B T)$; $f_0$ is a temperature-independent constant, $E_a$ is the activation energy, $k_B$ is the Boltzmann constant and $T$ is the absolute temperature). $f_{R2}$ also demonstrates an Arrhenius-like behaviour. \\ \begin{figure}[t] \centering \includegraphics[width =\linewidth] {"Fig_10_E_a".pdf} \caption{Arrhenius plot of the M$_1$ relaxation frequency ($f_{R1}$) in the nematic (N) phase of (a) pristine and (b) 0.5wt\% CdSe/ZnS QDs incorporated 14-2M-CH$_3$. The activation energy ($E_a$) is calculated from the slope of the linear fit, represented by the solid red line.} \label{Fig10} \end{figure} The activation energy ($E_a$) associated with a relaxation process encodes the minimum amount of energy required for that process to take place \cite{Haase_relaxation}. The value of $E_a$ associated with the relaxation processes can be obtained by plotting $f_R$ as a function of $1/T$, using the relation $f_R=f_0 \exp(-E_a/k_B T)$. The Arrhenius plots of $\ln (f_{R1})$ vs.
$1/T$ (for M$_1$) for the two compounds are shown in Figure 10. The activation energy ($E_a$), associated with M$_1$, is evaluated from the slope of the linear fit, as shown in Figure 10. The value of $E_a$ ($\sim$ 29.12 kJ/mol) increases significantly after the incorporation of QDs ($\sim$ 37.68 kJ/mol). For a smaller cluster, the cluster's dipole moment ($\mu$) is also smaller; hence, more energy is required for it to interact with an external electric field. Therefore, an increased value of $E_a$ for M$_1$ after the incorporation of CdSe/ZnS QDs implies a decrease in the size of the cybotactic clusters. This concurs with our earlier observations. The activation energy associated with M$_2$ has rather small values ($\sim$ 8 kJ/mol) and does not change significantly after the incorporation of QDs. \iffalse \begin{figure*}[t] \includegraphics[width = \linewidth]{Res_1.png} \caption{$\gamma = 5$ (in $10^7/4$ cgs units) with different $W$ and $N_c$}\label{Res_1} \end{figure*} \fi \section{Mathematical Model} In this section, we propose a simple mathematical model that describes the QD doping-induced reduction of the nematic scalar order parameter for dilute suspensions, which could subsequently be improved to describe novel features such as ferroelectricity, chirality, biaxiality and transition pathways between multiple stable states. Since the experimental domain is a simple planar cell, denoted by $\Omega \subset \mathbb{R}^3$, with height about 5 microns ($5 \times 10^{-6} \mathrm{m}$), we assume a characteristic length of the system \begin{equation} x_s = 5 \times 10^{-8} \mathrm{m}, \end{equation} as in \cite{patranabish2019one}, so that the cell thickness is $100$ units of $x_s$. The cross-sectional dimensions of the cell are much larger than the cell height, so it is reasonable to assume that structural variations only occur across the cell height, i.e., this is a one-dimensional problem. We assume that the QDs are spherical in shape, with an average radius of $2.8$ nanometres; the size of the QDs is much smaller than the typical separation between them, and the total volume occupied by the QDs is small. Let $R$ denote the radius of a QD ($2.8$ nanometres as reported in these experiments), and define a small parameter $\epsilon$ so that $$ \epsilon^{\alpha} = \frac{R}{x_s} = R_0 = 0.056$$ for some $1<\alpha < \frac{3}{2}$. The definition of $\epsilon$ is not unique, provided it is a small parameter relevant for \emph{dilute uniform suspensions of QDs} \cite{canevari2019design}. In particular, our mathematical model is \emph{restricted} to dilute suspensions and will need to be modified for non-dilute systems. In \cite{madhusudana2017two}, N. V. Madhusudana proposes a Landau-de Gennes (LdG) type two-state model for the N phase of BLCs, accounting for cybotactic clusters. This two-state model is a phenomenological model based on the premise that the N phase of the BLC comprises two different types of molecules: ground state (GS) molecules and excited state (ES) molecules. The ES molecules define the smectic-like cybotactic clusters and the GS molecules are located outside the clusters. The generic LdG theory models a single-component system, e.g., the GS molecules, typically assumed to be rod-like molecules that tend to align with each other, yielding long-range orientational order \cite{ravnik2009landau}.
Madhusudana's model is a two-component model (the GS and ES molecules) with additional coupling effects. In \cite{patranabish2019one}, we describe the N phase of the BLC by two macroscopic tensor order parameters (with a number of simplifying assumptions) \begin{equation} \begin{aligned} \Q_g = \sqrt{\frac{3}{2} } S_g \left( \n_g \otimes \n_g - \frac{1}{3} \mathrm{I} \right)\\ \quad \Q_c = \sqrt{\frac{3}{2} } S_c \left( \n_c \otimes \n_c - \frac{1}{3} \mathrm{I} \right) \end{aligned} \end{equation} respectively, where $\Q_g$ is the LdG order parameter for the GS molecules, and $\Q_c$ is the LdG order parameter associated with the ES molecules. In both cases, we assume that $\Q_g$ and $\Q_c$ have uniaxial symmetry, i.e., the GS (ES) molecules align along a single averaged distinguished direction $\n_g$ (respectively $\n_c$), and assume that $\n_g$ and $\n_c$ are constant unit-vectors or directions, \textbf{so that there are no director distortions or defects}. There are two scalar order parameters, $S_c$ and $S_g$, corresponding to the ES (excited state) and GS (ground state) molecules respectively. As is standard with variational approaches, the experimentally observed configurations are described by local or global minimizers of a suitably defined LdG-type energy in terms of $S_c$, $S_g$ and the coupling between them. In \citet{patranabish2019one}, the authors theoretically study stable $(S_g, S_c)$ profiles in a simple planar cell geometry (as experimentally studied in this manuscript), as a function of the temperature, in terms of minimizers of the following LdG-type energy (heavily inspired by the work in \cite{madhusudana2017two}, with additional elastic effects that account for spatial inhomogeneities): \begin{equation}\label{Energy1} \begin{aligned} \mathcal{F} & = \int_{\Omega} (1 - a_x) \left( \frac{a_g}{2}(T - T^{*})S_g^2 - \frac{B_g}{3} S_g^3 + \frac{C_g}{4} S_g^4 - E_{el} S_g \right) \\ & + \frac{a_x}{N_c} \left( - (1 - a_x) \gamma S_g S_c + \frac{\alpha_c}{2} S_c^2 + \frac{\beta_c}{4} S_c^4 \right) \\ &- a_xJ E_{el} S_c + K_g |\nabla S_g|^2 + K_c |\nabla S_c|^2 \dd \x \\ \end{aligned} \end{equation} Here, the subscripts $g$ and $c$ denote the GS and the ES molecules, respectively, and the clusters are essentially formed by the ES molecules. They work with a one-constant approximation, and $K_g$ and $K_c$ are the elastic constants of the GS and ES molecules respectively. $a_g$, $B_g$, $C_g$ are the material-dependent parameters in the LdG free energy, and $T^*$ is the nematic supercooling temperature such that the isotropic phase of the GS phase is unstable for $T<T^*$. The parameter $\gamma$ is the coupling parameter between the GS molecules and the clusters \cite{madhusudana2017two}. The coefficients $\alpha_c$ and $\beta_c$ are saturation parameters that ensure $|S_c| < 1$ in physically relevant parameter regimes, $N_c$ is the number of ES molecules in each cluster, and $a_x$ is the mole fraction of the ES molecules. $J$ accounts for the shape anisotropy of the ES molecules. $E_{el}$ is the electric field energy ($\frac{1}{2}\epsilon_0 \Delta \epsilon E^2$), where $\epsilon_0$ is the free-space permittivity, $\Delta \epsilon$ is the dielectric anisotropy, and $E$ is the applied electric field. \iffalse 1.2,. This yields A = 0.045 × 10$^7$,B = 0.3825 × 10$^7$,C = 1.0125 × 10$^7$,D = 0.00225 × 10$^7$, E = 1800,M = 0.0001 × 10$^7$,N = 0.002 × 10$^7$, and P = 240 (in respective cgs units).
Therefore we have C1 = 0.31142,C2 = 0.05 (for γ = 5), C3 = 0.034,C4 = 0.00222,C5 = 0.000615, and C6 = 0.00453 \fi The mathematically and physically pertinent question is: how is the energy (\ref{Energy1}) modified by the uniformly suspended QDs in the dilute limit? Following the elegant homogenisation framework developed in \cite{canevari2019design}, the additional doping can be described by an effective field in the dilute limit, and \textbf{this effective field strongly depends on the shape of, and the anchoring conditions on, the QDs, but not on the size of the QDs in the dilute limit (as will be made clearer below; the size will matter in the non-dilute limit).} We assume the QDs are spherical in shape, as stated above, and impose preferred alignment tensors on the QD surfaces, $\Q_v^g$ ($\Q_v^c$), for QDs outside (inside) the clusters, respectively. We assume that $\Q_v^g$ and $\Q_v^c$ are constant tensors, given by \begin{equation} \Q_v^g = \sqrt{\frac{3}{2} } S_g^b \left( \n_g \otimes \n_g - \frac{1}{3} \mathrm{I} \right) \end{equation} and \begin{equation} \Q_v^c = \sqrt{\frac{3}{2} } S_c^b \left( \n_c \otimes \n_c - \frac{1}{3} \mathrm{I} \right), \end{equation} for some fixed $S_g^b, S_c^b > 0$. There is no clear argument for the choice of $S_g^b$ and $S_c^b$ in this model, but we make reasonable choices below. \textbf{Further, we assume that $\n_g$ ($\n_c$) is the same for $\Q_g$ and $\Q_v^g$ (likewise for $\Q_c$ and $\Q_v^c$), so that there is no director distortion at the QD interfaces.} Assuming a Rapini-Papoular surface anchoring energy on the QD surfaces, the QD surface energy density is given by \begin{equation}\label{surface_reduced} f_s^g(S_g, S_c, S_g^b, S_c^b) = W_g^0 |S_g - S_g^b|^2 + W_c^0 |S_c - S_c^b|^2, \end{equation} where $W_g^0$ and $W_c^0$ are the anchoring coefficients on the QD-GS and QD-ES interfaces, respectively. For relatively strong anchoring on the QD interfaces, $W_c^0$ and $W_g^0$ can be taken to be approximately $1 \times 10^{-2} \mathrm{J/m}^2$ \cite{ravnik2009landau}. In particular, $W_c^0$ is zero for QDs outside the clusters and $W_g^0$ is zero for QDs inside the clusters. Next, we follow the paradigm in \cite{canevari2019design} to describe the collective effects of a uniform dilute suspension of QDs, with surface energies as in (\ref{surface_reduced}), in terms of an additional \emph{homogenised} effective term in the free energy (\ref{Energy1}). As in \cite{patranabish2019one}, we let $A = (1 - a_x) a_g (T - T^{*}),~~ B = (1 - a_x) B_g,~~C = (1 - a_{x}) C_g, D = a_x(1 - a_x) \gamma / N_c, E = (1 - a_x) E_{el}, \quad M = \alpha_c a_{x} / N_c, N = \beta_c a_x / N_c, \quad P = J E_{el} a_x, \quad W_g = (1 - a_x) W_g^0$ and $W_c = \frac{a_x}{N_c} W_c^0$, where $a_x$ is the fixed mole fraction of the ES molecules. Moreover, we assume that $E_{el} = 0$ throughout this paper. In agreement with the parameter values used in \cite{patranabish2019one}, we use the fixed values $K_g = K_c = K = 15$ pN; $a_g = 0.04, B_g = 1.7, C_g = 4.5, \alpha_c = 0.22, \beta_c = 4.0$ ($a_g, B_g, C_g, \alpha_c$ and $\beta_c$ in $10^6/4$ SI units). Following the methods in \cite{canevari2019design}, we describe the dilute QD-doped BLC system by means of the total free energy below, without rigorous justification but rather as a phenomenological model to describe salient features of novel nanocomposites.
\begin{equation}\label{Energy2} \begin{aligned} \mathcal{F} & = \int_{\Omega_\epsilon} \left( \frac{A}{2}S_g^2 - \frac{B}{3} S_g^3 + \frac{C}{4} S_g^4 \right) \\ & + \left( - D S_g S_c + \frac{M}{2} S_c^2 + \frac{N}{4} S_c^4 \right) + K_g |\nabla S_g|^2 + K_c |\nabla S_c|^2 \dd \x \\ & + \epsilon^{3 - 2\alpha}\int_{\pp \mathcal{P}} W_g |S_g - S_g^b|^2 \dd S + \epsilon^{3 - 2\alpha}\int_{\pp \mathcal{P}} W_c |S_c - S_c^b|^2 \dd S, \\ \end{aligned} \end{equation} \\ where $\mathcal{P}$ is the collection of the QDs in the suspension and $1 < \alpha< \frac{3}{2}$, so that $\epsilon^{ 3- 2\alpha} \to 0$ as $\epsilon \to 0$. The pre-factor of $\epsilon^{3 - 2\alpha}$ is specific to dilute systems. \textbf{The main novelty is the surface energy term originating from the QD-GS and QD-ES interfaces, and the homogenized effective field is derived in the $\epsilon \to 0$ limit, as will be discussed below.} We non-dimensionalize the free energy (\ref{Energy2}) by letting $\bar{\x} = \x/x_s, \quad \bar{S_g} = \sqrt{\frac{27C^2}{12 B^2}}S_g, \quad \bar{S_c} = \sqrt{\frac{27 C^2}{12 B^2}}S_c, \quad \bar{\mathcal{F}} = \frac{27^2 C^3}{72 B^4 x_s^3}\mathcal{F}$. Dropping all \emph{bars} for convenience (so that $S_g$ and $S_c$ denote the scaled order parameters), the dimensionless energy is (we take $E_{el} = 0$) \begin{equation} \begin{aligned} \label{EnergyH} \mathcal{F} & = \int_{\Omega_\epsilon} \left( \frac{t}{2}S_g^2 - S_g^3 + \frac{1}{2} S_g^4 \right) + \left( - C_1 S_g S_c + C_2 S_c^2 + C_3 S_c^4 \right) \\ & + \kappa_g \left(\frac{d S_g}{dx} \right)^2 + \kappa_c \left(\frac{d S_c}{dx} \right)^2 \dd \x \\ & + \epsilon^{3 - 2\alpha}\int_{\pp \mathcal{P}} w_g |S_g - S_g^b|^2 \dd S + \epsilon^{3 - 2 \alpha} \int_{\pp \mathcal{P}} w_c |S_c - S_c^b|^2 \dd S, \\ \end{aligned} \end{equation} where $\Omega_\epsilon$ is the three-dimensional planar cell with the QDs removed, $\mathcal{P}$ is the collection of the three-dimensional spherical QDs with re-scaled radius $\epsilon^{\alpha}$ for $1 < \alpha < \frac{3}{2}$ (see the definition of $\epsilon$ above), and \begin{equation} \begin{aligned} & t = \frac{27 AC}{6 B^2}, \quad C_1 = \frac{27 CD}{6 B^2}, \quad C_2 = \frac{27 C M}{12 B^2}, \quad C_3 = \frac{N}{2C},\\ & \kappa_g = \frac{27 C K_g }{6 B^2 x_s^2}, \quad \kappa_c = \frac{27 C K_c}{6 B^2 x_s^2}, \\ & w_g = \frac{27 C W_g}{6 B^2 x_s}, \quad w_c = \frac{27 C W_c}{6 B^2 x_s} . \\ \end{aligned} \end{equation} Note that $|\nabla S_g|^2 = \left(\frac{d S_g}{dx} \right)^2$ since we assume that structural variations in $S_g$ and $S_c$ only occur across the cell height, $0\leq x \leq 100$ (recall the choice of $x_s$ above). In \cite{canevari2019design}, the authors study minimizers of free energies of the form \begin{equation} \label{eq:homnew} \int_{\Omega_\epsilon} f_{el}(\nabla \Q ) + f_b (\Q) \, dV + \epsilon^{3 - 2\alpha} \int_{\pp \mathcal{P}} f_s\left(\Q, \nu \right) \, dA, \end{equation} with $1 < \alpha < \frac{3}{2}$, where $f_{el}(\nabla \Q )$ is a general convex function of the gradient of an arbitrary order parameter $\Q$, $f_b$ is a polynomial function of the scalar invariants of $\Q$, and $f_s$ is an arbitrary surface energy on the QD interfaces.
The dilute limit is described by the $\epsilon \to 0$ limit, and minimizers of (\ref{eq:homnew}) converge to minimizers of the following homogenized energy, as $\epsilon \to 0$, \begin{equation} \label{eq:homnew2} \mathcal{F}_h (\Q) = \int_{\Omega} f_{el}(\nabla \Q) + f_b(\Q) + f_{hom}(\Q) dV, \end{equation} where $f_{hom} = \int_{\partial \omega} f_s\left(\Q, \nu \right) dS$, $\omega$ is a representative QD and $\nu$ is the outward normal to $\partial \omega$. \textbf{In particular, the shape, anchoring conditions and material properties, including encapsulation properties, of the QD inclusions are absorbed in the definition of $f_{hom}$. The distortion effects around the QDs are also described by $f_{hom}$ for dilute systems.} In our case, the QDs are spherical inclusions and, applying the results in \cite{canevari2019design}, we have $f_{hom} = \int_{\pp B(\mathbf{0}, 1)} f_s(\Q, \nu) dA$, where $B(\mathbf{0}, 1) \subset \mathbb{R}^3$ is a generic three-dimensional unit ball and $f_s$ is the surface energy (\ref{surface_reduced}). We apply this result to calculate the homogenized potential corresponding to (\ref{surface_reduced}), as given below: \begin{equation} \label{eq:fhom} f_{hom}(S_g, S_c ) = w_{g}^{(1)} S_g^2 - w_{g}^{(2)} S_g + w_{c}^{(1)} S_c^2 - w_{c}^{(2)} S_c, \end{equation} where \begin{equation} w_{g}^{(1)} = 4 \pi w_g, \quad w_{g}^{(2)} = 8 \pi S_g^b w_g \end{equation} and \begin{equation} w_{c}^{(1)} = 4 \pi w_c, \quad w_{c}^{(2)} = 8 \pi S_c^b w_c. \end{equation} Hence, the total non-dimensionalized \emph{homogenized} free energy is given by \begin{equation}\label{bulk_energy} \begin{aligned} \mathcal{F} = \int_{\Omega} & \left( \left( \frac{t}{2} + w_g^{(1)} \right)S_g^2 - S_g^3 + \frac{1}{2} S_g^4 \right) \\ & + \left( - C_1 S_g S_c + (C_2 + w_{c}^{(1)} ) S_c^2 + C_3 S_c^4 \right) \\ & + \kappa_g \left(\frac{d S_g}{dx}\right)^2 + \kappa_c \left( \frac{d S_c}{dx} \right)^2 - w_{g}^{(2)} S_g - w_{c}^{(2)} S_c \dd \x. \\ \end{aligned} \end{equation} For the parameter values as stated before, we have $C_1 = 0.0700692$, $C_2 = 0.0017$ and $C_3 = 0.0040$. \iffalse \begin{figure}[!h] \centering \begin{overpic} [width = 0.9 \linewidth]{gamma_fixed_Nc_5_W.eps} \put(-5, 72){\large (a)} \end{overpic} \vspace{2em} \begin{overpic}[width = 0.9\linewidth]{Nc_fixed_gamma_5_W.eps} \put(-5, 72){\large (b)} \end{overpic} \caption{(a) Bulk mean order parameter as a function of $\gamma$ for fixed $N_c = 50$ and $T = 379$ with $W = 0.01$ and $W = 0$. (b) Bulk mean order parameter as a function of $N_c$ for fixed $\gamma = 5$ and $T = 379$ with $W = 0.01$ and $W = 0$.}\label{Nc} \end{figure} \fi Then the equilibrium/physically observable $(S_g, S_c)$ profiles are solutions of the Euler-Lagrange equations corresponding to (\ref{bulk_energy}): \begin{equation}\label{EL_bulk} \begin{cases} & 2 \kappa_g \frac{d^2 S_g}{dx^2} = 2 S_g^3 - 3 S_g^2 + (t + 2 w_g^{(1)}) S_g - C_1 S_c - w_g^{(2)} \\ & 2 \kappa_c \frac{d^2 S_c}{dx^2} = 4 C_3 S_c^3 + (2 C_2 + 2 w_c^{(1)}) S_c - C_1 S_g - w_c^{(2)} .\\ \end{cases} \end{equation} These equations need to be complemented by boundary conditions for $S_g$ and $S_c$; we fix Dirichlet boundary conditions for the scalar order parameters on the bottom ($x=0$) and top ($x=100$) of the planar cell, i.e., \begin{equation} \label{dirichletbcs} S_g = \frac{3 + \sqrt{9 - 8t}}{4}, \quad S_c = 0 ~ \textrm{on $x=0$ and $x=100$,} \end{equation} which corresponds to the absence of clusters on the planar cell boundaries.
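To indicate how the boundary-value problem defined by (\ref{EL_bulk}) and (\ref{dirichletbcs}) can be solved in practice, we give a minimal Python sketch (our illustration, assuming \texttt{scipy}; the values of $t$, $\kappa_g$, $\kappa_c$ and the homogenized coefficients below are placeholders, not those used to produce Figure \ref{Sm_T}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

# Dimensionless parameters (illustrative values only)
t, C1, C2, C3 = -0.5, 0.0700692, 0.0017, 0.0040
kg = kc = 0.01                            # kappa_g, kappa_c (placeholders)
wg1, wg2, wc1, wc2 = 0.0, 0.0, 0.0, 0.0   # homogenized terms; 0 = undoped

def rhs(x, y):
    # y = (S_g, S_g', S_c, S_c'); the Euler-Lagrange system (EL_bulk)
    Sg, dSg, Sc, dSc = y
    return np.vstack([
        dSg,
        (2*Sg**3 - 3*Sg**2 + (t + 2*wg1)*Sg - C1*Sc - wg2) / (2*kg),
        dSc,
        (4*C3*Sc**3 + (2*C2 + 2*wc1)*Sc - C1*Sg - wc2) / (2*kc),
    ])

Sb = (3 + np.sqrt(9 - 8*t)) / 4           # Dirichlet value for S_g

def bc(ya, yb):
    # S_g = Sb and S_c = 0 on x = 0 and x = 100
    return np.array([ya[0] - Sb, ya[2], yb[0] - Sb, yb[2]])

x = np.linspace(0.0, 100.0, 201)
y0 = np.vstack([np.full_like(x, Sb), np.zeros_like(x),
                np.zeros_like(x), np.zeros_like(x)])
sol = solve_bvp(rhs, bc, x, y0)   # sol.sol(x) returns (S_g, S_g', S_c, S_c')
\end{verbatim}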
The boundary conditions (\ref{dirichletbcs}) are not special, and we believe that our qualitative conclusions would hold for other choices of the Dirichlet boundary conditions too. We assume that $\mathbf{n}_g$ and $\mathbf{n}_c$ are constant unit vectors, and our analysis is independent of the choice of $\mathbf{n}_g$ and $\mathbf{n}_c$, provided they are constant vectors. We also need to specify $S_g^b$ and $S_c^b$ to determine $w_g^{(2)}$ and $w_c^{(2)}$ above, and we choose \begin{equation} S_g^b = \frac{3 + \sqrt{9 - 8t}}{4}, \quad S_c^b = 0, \end{equation} with $W_g = W_c = W$. Next, we numerically solve the coupled equations in (\ref{EL_bulk}) to compute the equilibrium profiles of $(S_g, S_c)$ as a function of temperature, for different values of $W$. The parameters $N_c$ and $\gamma$ are coupled, i.e., larger clusters are likely to have larger values of $N_c$ and $\gamma$, and we expect $N_c$ and $\gamma$ to be smaller for the doped system compared to its undoped counterpart, based on the experimental results that suggest smaller cybotactic clusters in QD-doped BLCs compared to their undoped counterparts. We define the bulk mean order parameter $S_m$, which is a weighted scalar order parameter as shown below: \begin{equation} S_m = (1 - a_x) S_g + a_x S_c. \end{equation} \textbf{The weighted scalar order parameter, $S_m$, is the theoretical analogue of the measured order parameters from experimental birefringence measurements.} We use the value of $S_m$ at room temperature (293 K) with $(N_c, W, \gamma) = (50, 0, 5)$ to normalize $S_m$. Recall that $N_c=50$ and $\gamma = 5$ have been used to study the undoped BLC system in \cite{patranabish2019one}. In Figure \ref{Sm_T}, we plot $S_m$ as a function of temperature for undoped and doped systems, for different values of $W$. For the undoped system, $(N_c, \gamma, W) = (50, 5, 0)$ by analogy with the values used in \cite{patranabish2019one}. \textbf{For QD-doped systems, the experimental results suggest that the clusters are shrunk by a factor of $0.67$ (qualitatively deduced by the ratio of the relaxation frequencies) and hence we take $N_c = 50 \times 0.67 =33.5$ and $\gamma = 5 \times 0.67 =3.35$ for doped systems.} We plot the resulting $S_m$-profiles for three doped systems - $(N_c, \gamma, W) = (33.5, 3.35, 0.01)$, $(N_c, \gamma, W) = (33.5, 3.35, 0.001)$, and $(N_c, \gamma, W) = (33.5, 3.35, 0.0001)$ in Figure \ref{Sm_T}. \begin{figure}[!h] \centering \includegraphics[width = \linewidth]{dope_un_dope.eps} \caption{Bulk mean order parameter as a function of temperature for the undoped (green circle) and the doped (red square, blue triangle, purple diamond) cases (Temperature `$T$' is in K).}\label{Sm_T} \end{figure} This simple model clearly captures the doping-induced reduction in the values of $S_m$, consistent with the experimental results in Figure~$4$. $S_m$ also decreases with increasing $T$, as expected. The numerically computed values of $S_m$ for the doped systems in Figure~\ref{Sm_T} are lower than the experimentally reported values in Figure~$4$, and we think an exact fitting between the numerical and the experimental results is unlikely at this stage. Further, the numerical results are very sensitive to the values of the anchoring coefficient, and we simply do not have reliable estimates of the anchoring coefficients for QDs.
In fact, the numerically computed $S_m$'s, if fitted to the experimental data, could provide an ingenious method for estimating the surface energy coefficients for QD-doped BLC systems. It is clear that the predictions of the doped model approach the predictions of the undoped model as $W \to 0$, as expected. We do not pursue this further in this manuscript. This simple model provides a clear explanation of why the QD-doping reduces $S_m$ in experiments: the interaction/anchoring between the QDs and the host BLC matrix increases the effective temperature (the coefficients of $S_g^2$ and $S_c^2$ in (\ref{bulk_energy})) of both the GS molecules and the cybotactic clusters (ES state), and hence at a given temperature, the QD-doped system experiences a shifted higher temperature; the shift is directly determined by the homogenized potential, $f_{hom}$, which in turn is determined by the anchoring and shape of the dispersed QDs. This QD-induced effective temperature shift necessarily reduces $S_m$ compared to its undoped counterpart. A doping-induced reduced $S_m$ qualitatively explains the experimentally observed reduced dielectric anisotropy and birefringence in doped systems, compared to undoped systems. \textbf{However, several open questions remain, particularly as to how we ascribe values to the cluster parameters, $N_c$ and $\gamma$, and how to describe the effects of QD-doping on these cluster parameters. Nevertheless, this is the first attempt to mathematically model dilute suspensions of QDs in the nematic phase of BLC materials, with cybotactic clusters, providing qualitative agreement with experiments.} \section{Conclusion} We perform experimental and theoretical studies of a QDs-dispersed BLC 14-2M-CH$_{3}$ inside a planar cell. We believe that QDs are attractive nano-inclusions for stable suspensions, without aggregation effects. We present experimental optical profiles for the pristine LC and the QDs incorporated LC nanocomposite systems, tracking the textural colour changes with temperature. We perform experimental measurements of the dielectric permittivity, including the dielectric dispersion and absorption spectra, and use fitting algorithms to calculate relaxation frequencies and dielectric strengths, which are used to validate the existence of cybotactic clusters in the doped and undoped systems, the reduction of cluster sizes in doped systems and the corresponding increase in activation energies. We also present experimental measurements of the optical birefringence and the orientational order parameters of the doped and undoped systems. All the experiments demonstrate a doping-induced reduction of orientational order and cluster sizes, which manifests in a doping-induced reduced birefringence and a reduced dielectric anisotropy (qualitatively) at a fixed temperature. In terms of future experiments, we would like to investigate biaxiality and chirality in these QDs-dispersed BLC systems, and prototype devices based on such simple planar cell geometries. For example, we could treat the planar cell as having conflicting boundary conditions on the two cell surfaces, naturally inducing inhomogeneous director profiles. We support some of our experimental findings with a homogenized Landau-de Gennes type model for a doped BLC system, with two scalar order parameters, $S_g$ and $S_c$, and constant director profiles. In particular, we capture the doping-induced reduction in the mean scalar order parameter, which is an informative and illuminating first step.
The theory can be embellished in many ways to make it physically realistic, e.g., elastic anisotropy involving additional terms in the elastic energy density, non-constant director profiles captured by non-constant $\n_g$ and $\n_c$, and an understanding of how the QDs affect the cybotactic clusters. This could be done by using a general two-tensor model, $\Q_g$ and $\Q_c$, without the additional assumptions of uniaxial symmetry and constant directors made in our mathematical model in this paper. However, it will be a challenge to describe the cybotactic cluster-mediated coupling between $\Q_g$ and $\Q_c$ without these restrictive assumptions, and some of these directions will be pursued in future work. \section*{Credit taxonomy} S. Patranabish and A. Sinha conducted the experiments and analysed the experimental results. Y. Wang and A. Majumdar performed the modelling and the comparisons between experiments and modelling. \section*{Acknowledgement} The authors would like to thank Prof. N.V.S. Rao, Department of Chemistry, Assam University, Assam, India and Dr. Golam Mohiuddin, Xi'an Jiaotong University, Xi'an, China for providing the liquid crystal samples. The authors thank Dr. Giacomo Canevari for helpful discussions on the homogenised potentials. The authors also thank Anjali Sharma and Ingo Dierking for useful feedback on the experimental sections. S.P. thanks Dr. Susanta Chakraborty for useful discussions on fitting of the experimental data. S.P. acknowledges IIT Delhi for financial support under a Full-time Institute Assistantship. The authors would like to thank DST-UKIERI for generous funding to support the 3-year collaborative project.
\section{Proofs of the main results} \label{App:Proofs} This Appendix provides the proofs of the technical results. In the following proofs, without loss of generality, we proceed as if the sample proportions $ n_{k}/n $ do not depend on $ n $ and equal their limits $ \rho_{k} $. Our results are applicable as long as none of the populations have comparatively very small sample sizes. Also, for the sake of convenience, with a generic function $ f (\boldsymbol{y}) $ we use \[ \frac{\partial f (\boldsymbol{y}^{*}) }{\partial \boldsymbol{y}} = \left. \frac{\partial f (\boldsymbol{y}) }{\partial \boldsymbol{y}} \right \rvert_{\boldsymbol{y} = \boldsymbol{y}^{*}}, ~~~ \frac{\partial^{2} f (\boldsymbol{y}^{*}) }{\partial \boldsymbol{y} \partial \boldsymbol{y}^{\top}} = \left. \frac{\partial^{2} f (\boldsymbol{y}) }{\partial \boldsymbol{y} \partial \boldsymbol{y}^{\top}} \right \rvert_{\boldsymbol{y} = \boldsymbol{y}^{*}}. \] Finally, the DRM parameters $ \btheta $ are arranged in the order \[ ( \theta_{1 1}, \theta_{2 1}, \ldots, \theta_{m 1}, \ldots, \theta_{1 2}, \theta_{2 2}, \ldots, \theta_{m 2}, \ldots, \theta_{1 d}, \theta_{2 d}, \ldots, \theta_{m d} ), \] where $ \theta_{i s} $ is the $ s $th component of the vector-valued parameter $ \btheta_{i} $. This order is needed for the expressions of the second derivative of $ \mathcal {D} (\blambda, \btheta) $ in the proof of Lemma \ref {lem:second_deriv} and for the covariance matrix of the first derivative in the proof of Lemma \ref {lem:first_deriv}. \input{App_Proof_SecondDeriv} \input{App_Proof_FirstDeriv} \input{App_Proof_ParamNormal_method2} \input{App_Proof_Chisq} \endinput \subsection{Proof of Theorem \ref {thm:ELRT_Stat_chisq}} \begin{proof} We notice that, as shown in \citet {cai2017hypothesis}, \[ \sup_{ \btheta, G_{0} } \{ \ell_{n} (\btheta, G_{0}) \} = \sup_{\btheta} \mathcal {D} (\boldsymbol {0}, \btheta) - n \log n. \] From \eqref {dual_relation} we also have \begin{align*} \tilde \ell_{n} (\bxi^{*}) = \mathcal {D} (\hat \blambda, \hat \btheta) - n \log n. \end{align*} These relations lead to \begin{align} R_{n} & = 2 \left [ \sup_{\btheta} \mathcal {D} (\boldsymbol {0}, \btheta) - \mathcal {D} (\hat \blambda, \hat \btheta) \right ] \nonumber \\ &= 2 \left [ \sup_{\btheta} \mathcal {D} (\boldsymbol {0}, \btheta) - \mathcal {D} (\boldsymbol {0}, \btheta^{*}) \right ] - 2 \left [ \mathcal {D} (\hat \blambda, \hat \btheta) - \mathcal {D} (\boldsymbol {0}, \btheta^{*}) \right ]. \label {ELRT_decompose} \end{align} \citet {cai2017hypothesis} show in the proof of their Theorem 1 that \begin{align*} \sup_{\btheta} \mathcal {D} (\boldsymbol {0}, \btheta ) - \mathcal {D} (\boldsymbol {0}, \btheta^{*}) & = - \frac {1} {2} \left [ n^{-1/2} \frac { \partial \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial \btheta } \right ]^{\top} S_{\btheta \btheta}^{-1} \left [ n^{-1/2} \frac { \partial \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial \btheta } \right ] + o_{p} (1), \end{align*} for the same $ S_{\btheta \btheta} $ given in the proof of Lemma \ref {lem:second_deriv}. 
For the second term in \eqref {ELRT_decompose}, utilizing the expansion of $ \hat \blambda $ and $ \hat \btheta - \btheta^{*} $ given in \eqref {eq20e}, we have \begin{align*} & \mathcal {D} (\hat \blambda, \hat \btheta) - \mathcal {D} (\boldsymbol {0}, \btheta^{*}) \\ = & \frac { \partial \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial (\blambda, \btheta) } \begin{pmatrix} \hat \blambda - \boldsymbol {0} \\ \hat \btheta - \btheta^{*} \end{pmatrix} + \frac {1} {2} \begin{pmatrix} \hat \blambda - \boldsymbol {0} \\ \hat \btheta - \btheta^{*} \end{pmatrix}^{\top} \frac { \partial^{2} \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial (\blambda, \btheta) \partial (\blambda, \btheta)^{\top} } \begin{pmatrix} \hat \blambda - \boldsymbol {0} \\ \hat \btheta - \btheta^{*} \end{pmatrix} + o_{p} (1) \\ = & - \frac {1} {2} \left [ n^{-1/2} \frac { \partial \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial (\blambda, \btheta) } \right ]^{\top} S^{-1} \left [ n^{-1/2} \frac { \partial \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial (\blambda, \btheta) } \right ] + o_{p} (1). \end{align*} Let \begin{gather*} \boldsymbol{\nu}_{1} = n^{-1/2} \left [ \frac { \partial \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial \blambda } \right ], \, \, \, \, \, \, \boldsymbol{\nu}_{2} = n^{-1/2} \left [ \frac { \partial \mathcal {D} (\boldsymbol {0}, \btheta^{*}) } { \partial \btheta } \right ], \\ \Lambda = S_{\blambda \blambda} - S_{\blambda \btheta} S_{\btheta \btheta}^{-1} S_{\blambda \btheta}^{\top}, \, \, \, \, \, \, D = \begin{pmatrix} \mathbb {I}, & - S_{\blambda \btheta} S_{\btheta \btheta}^{-1} \end{pmatrix}, \end{gather*} where $ D $ and the identity matrix $ \mathbb {I} $ have the appropriate sizes. We then get \begin{align*} R_{n} & = - \boldsymbol{\nu}_{2}^{\top} S_{\btheta \btheta}^{-1} \boldsymbol{\nu}_{2} + (\boldsymbol{\nu}_{1}^{\top}, \boldsymbol{\nu}_{2}^{\top}) S^{-1} \begin{pmatrix} \boldsymbol{\nu}_{1} \\ \boldsymbol{\nu}_{2} \end{pmatrix} + o_{p} (1) \\ & = \left \{ \boldsymbol{\nu}_{1} - S_{\blambda \btheta} S_{\btheta \btheta}^{-1} \boldsymbol{\nu}_{2} \right \}^{\top} \Lambda^{-1} \left \{ \boldsymbol{\nu}_{1} - S_{\blambda \btheta} S_{\btheta \btheta}^{-1} \boldsymbol{\nu}_{2} \right \} + o_{p} (1) \\ & = \begin{pmatrix} \boldsymbol{\nu}_{1} \\ \boldsymbol{\nu}_{2} \end{pmatrix}^{\top} (D^{\top} \Lambda^{-1} D) \begin{pmatrix} \boldsymbol{\nu}_{1} \\ \boldsymbol{\nu}_{2} \end{pmatrix} + o_{p} (1), \end{align*} where the middle step can be obtained via standard matrix algebra or Theorem 8.5.11 in \citet {harville1997matrix}. As given in the proof of Lemma \ref {lem:first_deriv}, the asymptotic variance of $ (\boldsymbol{\nu}_{1}, \boldsymbol{\nu}_{2}) $ is $ V $. We also have \begin{align*} D V D^{\top} & = D \left [ \begin{pmatrix} S_{\blambda \blambda} & \boldsymbol {0} \\ \boldsymbol {0} & - S_{\btheta \btheta} \\ \end{pmatrix} - S \begin{pmatrix} \boldsymbol {0} & \boldsymbol {0} \\ \boldsymbol {0} & W \\ \end{pmatrix} S \right ] D^{\top} \\ & = D \begin{pmatrix} S_{\blambda \blambda} & \boldsymbol {0} \\ \boldsymbol {0} & - S_{\btheta \btheta} \\ \end{pmatrix} D^{\top} - \boldsymbol {0} \\ & = S_{\blambda \blambda} - S_{\blambda \btheta} S_{\btheta \btheta}^{-1} S_{\blambda \btheta}^{\top} \\ & = \Lambda. \end{align*} Hence, \[ V (D^{\top} \Lambda^{-1} D) V (D^{\top} \Lambda^{-1} D) V = V (D^{\top} \Lambda^{-1} D) V.
\] By the result on quadratic forms of the multivariate normal (Section 3.5 of \citet {serfling2000approximation}), the limiting distribution of $ R_{n} $ is chi-square with the degrees of freedom being the trace of $ (D^{\top} \Lambda^{-1} D) V $, which is $ l $, as claimed in the theorem. This completes the proof. \end{proof} \endinput \section{Definability of the profile log-EL} \label{App:define_profile} Discussions of the properties of the ELRT statistic are not meaningful if the profile log-EL $ \tilde \ell_{n} (\bxi) $ is not well defined. In fact, in some situations, the constrained maximization has no solution \citep {grendar2009empty}. Such an ``empty-set'' problem can be an issue, but there are methods in the literature to overcome this obstacle \citep {chen2008adjusted, liu2010adjusted, tsao2014extended}. In this Appendix, we show that our $ \tilde \ell_{n} (\bxi) $ does not suffer from the ``empty-set'' problem under two additional mild conditions. The first condition restricts our attention to quantile values $ \{ \xi_{r}: r \in I \} $ in the range \[ \min_{j} x_{r j} < \xi_{r} < \max_{j} x_{r j}. \] The second requires one of the components of $ \bq(x) $ to be monotone in $ x $, in addition to a component being $ 1 $. All of our examples satisfy these conditions. To define the profile log-EL $ \tilde \ell_{n} (\bxi) $, we must have some $ p_{k j} > 0 \text { and } \btheta_{r} $ such that \begin{gather*} \sum_{k, j} p_{k j} \exp (\btheta_{r}^{\top} \bq (x_{k j})) = 1, \, \, \, r = 0, 1, \ldots, m, \\ \sum_{k, j} p_{k j} \exp (\btheta_{r}^{\top} \bq (x_{k j})) [\mathbbm{1} (x_{k j} \leq \xi_{r}) - \tau_{r}] = 0, \, \, \, r \in I. \end{gather*} We consider the most general case, where $ I $ contains all populations, and without loss of generality let $ d = 2 $. The above expressions are equivalent to (including $ r = 0 $ and allowing $ \btheta_{0} \neq 0 $) \begin{gather*} \sum_{k, j} p_{k j} \exp (\btheta_{r}^{\top} \bq (x_{k j})) [\mathbbm{1} (x_{k j} \leq \xi_{r})] = \tau_{r}, \\ \sum_{k, j} p_{k j} \exp (\btheta_{r}^{\top} \bq (x_{k j})) [\mathbbm{1} (x_{k j} > \xi_{r})] = 1 - \tau_{r}. \end{gather*} Let \( \btheta_{r}^{\top} = (\theta_{r 1}, \theta_{r 2}), \) and \( \bq^{\top} (x) = (q_{1} (x), q_{2} (x)) \) where $ q_{1} (x) \equiv 1 $ and $ q_{2} (x) $ is monotone in $ x $. We can rewrite the equations as \begin{gather*} \sum_{k, j} p_{k j} \exp \left \{ \theta_{r 1}^{'} + \theta_{r 2} [ q_{2} (x_{k j}) - q_{2} (\xi_{r}) ] \right \} [ \mathbbm{1} (x_{k j} \leq \xi_{r}) ] = \tau_{r}, \\ \sum_{k, j} p_{k j} \exp \left \{ \theta_{r 1}^{'} + \theta_{r 2} [ q_{2} (x_{k j}) - q_{2} (\xi_{r}) ] \right \} [ \mathbbm{1} (x_{k j} > \xi_{r}) ] = 1 - \tau_{r}, \end{gather*} with $ \theta_{r 1}^{'} = \theta_{r 1} + \theta_{r 2} q_{2} (\xi_{r}) $. For notational simplicity, we retain the notation $ \theta_{r 1} $ instead of $ \theta_{r 1}^{'} $ in what follows. Let $ p_{k j}^{*} $ be any set of non-negative values such that \( \sum_{k, j} p_{k j}^{*} = 1 \). Define \begin{align*} A_{r} (\theta_{r 2}) &= \sum_{k, j} p_{k j}^{*} \exp \left \{ \theta_{r 2} [q_{2} (x_{k j}) - q_{2} (\xi_{r})] \right \} [ \mathbbm{1} (x_{k j} \leq \xi_{r}) ] \\ B_{r} (\theta_{r 2}) & = \sum_{k, j} p_{k j}^{*} \exp \left \{ \theta_{r 2} [q_{2} (x_{k j}) - q_{2} (\xi_{r})] \right \} [ \mathbbm{1} (x_{k j} > \xi_{r}) ].
\end{align*} Since $ q_{2} (x) $ is, without loss of generality, a monotone increasing function of $ x $ (the decreasing case is handled symmetrically), $ A_{r} (\theta_{r 2}) $ is decreasing in $ \theta_{r 2} $ and $ B_{r} (\theta_{r 2}) $ is increasing in $ \theta_{r 2} $. Thus, we have \begin{align*} \lim_{\theta_{r 2} \to - \infty} A_{r} (\theta_{r 2}) = \infty, & \, \, \, \lim_{\theta_{r 2} \to \infty} A_{r} (\theta_{r 2}) = 0; \\ \lim_{\theta_{r 2} \to - \infty} B_{r} (\theta_{r 2}) = 0, & \, \, \, \lim_{\theta_{r 2} \to \infty} B_{r} (\theta_{r 2}) = \infty. \end{align*} These imply that the ratio $ A_{r} (\theta_{r 2}) / B_{r} (\theta_{r 2}) $ is decreasing in $ \theta_{r 2} $ and that \begin{align*} \lim_{\theta_{r 2} \to - \infty} A_{r} (\theta_{r 2}) / B_{r} (\theta_{r 2}) = \infty, \, \, \, \lim_{\theta_{r 2} \to \infty} A_{r} (\theta_{r 2}) / B_{r} (\theta_{r 2}) = 0. \end{align*} By the intermediate value theorem, there must exist a value $ \theta_{r 2}^{*} $ such that \begin{align*} A_{r} (\theta_{r 2}^{*}) / B_{r} (\theta_{r 2}^{*}) = \tau_{r}/(1 - \tau_{r}). \end{align*} Let $ \theta_{r 1}^{*} = - \log \left \{ A_{r} (\theta_{r 2}^{*}) + B_{r} (\theta_{r 2}^{*}) \right \} $. We note that $ p_{k j}^{*} $ and $ \btheta_{r}^{*} = (\theta_{r 1}^{*}, \theta_{r 2}^{*})^{\top} $ form a solution to the system. Hence, a solution to the system always exists. We may shift the solution to set $ \btheta_0 = \boldsymbol {0} $ if required. Validity in the general case of $ d > 2 $ is implied by setting the other entries of $ \btheta_{r} $ to the value $ 0 $. Therefore, we have shown that our profile log-EL $ \tilde \ell_{n} (\bxi) $ does not suffer from the ``empty-set'' problem under mild conditions. \endinput \section{Research problem and proposed approach} \label {sec:ELRT} Let $ \{ x_{k j}: 1 \leq j \leq n_{k}, 0 \leq k \leq m \} $ be $ m+1 $ independent i.i.d. samples from a DRM defined by \eqref {DRM}. Denote by $ \xi_{k} $ the $ \tau_{k} $ quantile of the $ k $th population for some $ \tau_{k} \in (0, 1) $ and $ k = 0, 1, \ldots, m $. Let $ \bxi = \{ \xi_{k}: k \in I \} $ be the quantiles at some levels of populations in an index set $ I \subseteq \{ 0, 1, \ldots, m \} $ of size $ l $. We study the ELRT under the DRM for the following hypothesis: \begin{align} H_{0}: \bxi = \bxi^{*} \, \, \, \text { against } \, \, \, H_{1}: \bxi \neq \bxi^{*}, \label {simple_hyp} \end{align} for some given $ \bxi^{*} $ of dimension $ l $. The hypothesis formulated in \eqref {simple_hyp} has many applications. In socio-economic studies, when analyzing the distributions of household disposable incomes, economists and social scientists often divide the collected survey data into five groups. These groups are famously known as quintile groups. The first group consists of the lowest $ 20\% $ of the data, the second group consists of the next 20\%, and so on. Many studies have shown that the quintiles are important for explaining the economy and consumer behaviour \citep {castello2002human, wunder2012income, humphries2014households, corak2018canadian}. In statistics, the cut-off points of these quintile groups are the quantiles of the populations: for example, the $ 20 $th percentile separates the first and second quintile groups. Governments may, therefore, consider this $ 20 $th percentile as key for determining which families should receive a special subsidy to help society's less fortunate. Moreover, when new policies are implemented, the evolution of the quantiles of household income over time may reflect the impact of the policies.
As a consequence, these quantiles are of particular interest to social scientists and politicians as a way to measure the effects of policy changes. In statistical inference, these types of tasks can most appropriately be carried out using a hypothesis testing procedure, which can be naturally extended to the construction of confidence regions. Hence, the research problem we study here is of scientific significance in many applications. In the real-data analysis, we study confidence regions for quantiles of household incomes based on US Consumer Expenditure Surveys. We use an ELRT to test the hypothesis in \eqref {simple_hyp}. Let $ p_{k j} = \mathrm {d} G_{0} (x_{k j}) = P (X = x_{k j}; G_{0}) $ for all applicable $ k, j $. The EL function under the DRM is given by \begin{align} L_{n} (G_{0}, \ldots, G_{m}) = \prod_{k, j} \mathrm {d} G_{k} (x_{k j}) = \big \{ \prod_{k, j} p_{k j} \big \} \big \{ \prod_{k, j} \exp (\btheta_{k}^{\top} \bq (x_{k j})) \big \}. \label {EL} \end{align} For notational convenience, we have dropped the ranges of the indices in the expressions. Observe that the EL in \eqref {EL} is 0 if $ G_{0} $ is a continuous distribution. Surprisingly, this seemingly devastating property does little harm to the usefulness of the EL. Since the EL in \eqref {EL} can also be regarded as a function of the parameters $ \btheta \coloneqq \{ \btheta_{r}: 1 \leq r \leq m \} $ and the base distribution $ G_{0} $, we may write its logarithm as \begin{align*} \ell_{n} ( \btheta, G_{0} ) = \log L_{n} (G_{0}, \ldots, G_{m}) = \sum_{k, j} \log p_{k j} + \sum_{k, j} \btheta_{k}^{\top} \bq (x_{k j}), \end{align*} where we define $ \btheta_{0} = \boldsymbol {0} $ by convention. Let $ \mathbbm {E}_{r} $ be the expectation operation under $ G_{r} $, and let \[ h_{r} (x, \btheta) = \exp (\btheta_{r}^{\top} \bq (x)) \] be the density of $ G_{r} $ with respect to $ G_{0} $ for $ r = 0, 1, \ldots, m $. Clearly, $ h_{0} (x, \btheta) = 1 $. This also implies that \begin{align} \label{auto.eq} \mathbbm {E}_{0} [ h_{r} (X, \btheta) ] = \mathbbm {E}_{0} \left [ \exp (\btheta_{r}^{\top} \bq (X)) \right ] = 1. \end{align} The $ \tau_{r} $ population quantile $ \xi_{r} $ of $ G_{r} $ satisfies or is defined to be a solution of \begin{align} \label{quantile.eq} \mathbbm {E}_{r} \big [ \mathbbm{1} (X \leq \xi_{r}) - \tau_{r} \big ] = \mathbbm {E}_{0} \left [ h_{r} (X, \btheta) \{ \mathbbm {1} (X \leq \xi_{r}) - \tau_{r} \} \right ] = 0. \end{align} Let \[ \varphi_{r} (x, \btheta, \bxi) = h_{r} (x, \btheta) [ \mathbbm {1} (x \leq \xi_{r}) - \tau_{r} ]. \] Following \citet {owen2001empirical} and \citet {qin1994empirical}, we introduce the profile log-EL of the population quantiles $ \bxi $: \begin{align} \label{profile.eq} \tilde \ell_{n} (\bxi) = \sup_{\btheta, G_{0}} \Big \{ \ell_{n} (\btheta, G_{0}) ~ | ~ & \sum_{k, j} p_{k j} h_{r} (x_{k j}, \btheta) = 1, \, r = 0, 1, \ldots, m, \nonumber \\ & \sum_{k, j} p_{k j} \varphi_{r} (x_{k j}, \btheta, \bxi) = 0, \, r \in I \Big \} \end{align} and \[ \sup_{ \btheta, G_{0} } \{ \ell_{n} (\btheta, G_{0}) \} = \sup_{ \btheta, G_{0} } \{ \ell_{n} (\btheta, G_{0}) | \sum_{k, j} p_{k j} h_{r} (x_{k j}, \btheta) = 1, r = 0, 1, \ldots, m \}. \] An ELRT statistic for the hypothesis in \eqref {simple_hyp} is defined as \begin{align*} R_{n} = 2 \left [ \sup_{ \btheta, G_{0} } \{ \ell_{n} (\btheta, G_{0})\} - \tilde \ell_{n} (\bxi^{*}) \right ]. \end{align*} We call $ R_{n} $ the ELRT statistic hereafter. 
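To build intuition for $ R_{n} $, consider the degenerate single-sample case $ m = 0 $ with $ I = \{ 0 \} $, where the density ratio part drops out: the constrained maximization then has a closed form, and $ R_{n} $ reduces to a binomial-type log-likelihood ratio. A minimal Python sketch (our illustration of this special case only, not the general procedure of this paper) is:
\begin{verbatim}
import numpy as np

def elrt_quantile_one_sample(x, xi_star, tau):
    # ELRT statistic for H0: the tau-quantile equals xi_star, in the
    # single-sample case m = 0.  With k = #{x_j <= xi_star}, the
    # constrained optimum puts mass tau/k on each point below xi_star
    # and (1 - tau)/(n - k) on each of the rest.
    x = np.asarray(x)
    n = x.size
    k = int(np.sum(x <= xi_star))
    if k == 0 or k == n:
        return np.inf
    return 2.0 * (k * np.log(k / (n * tau))
                  + (n - k) * np.log((n - k) / (n * (1.0 - tau))))
\end{verbatim}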
Clearly, the larger the value of $ R_{n} $, the stronger the evidence for departure from the null hypothesis in the direction of the alternative hypothesis. We reject $ H_{0} $ when $ R_{n} $ exceeds some critical value determined from the distribution of $ R_{n} $ under $ H_{0} $. The limiting distribution of $ R_{n} $ and other related properties are given in the next section. We observe that the approach needs no change for a set of quantiles from the same population. For notational simplicity, the presentation is given for quantiles from different populations. \endinput \section{Introduction} \label {sec:intro} Suppose we have $ m+1 $ independent random samples from population distributions $ G_{0}, G_{1}, \ldots, G_{m} $. Let their respective density functions with respect to some $ \sigma $-finite measure be $ g_{k} (\cdot) $. If there exist a vector-valued function $ \bq (x) $ and unknown vector-valued parameters $ \btheta_{k} $ such that \begin{align} \label {DRM} g_{k} (x) = \exp \{ \btheta_{k}^{\top} \bq (x) \} g_{0} (x), \end{align} then they define a density ratio model (DRM) as introduced by \citet {anderson1979multivariate}. By convention, we call $ G_{0} $ the base distribution and $ \bq (x) $ the basis function. There is a symmetry in the DRM: any one of $ G_{0}, \ldots, G_{m} $ may serve as the base distribution. We require the first element of $ \bq (x) $ to be 1 so that the corresponding coefficient is a normalization constant, and the elements of $ \bq (x) $ must be linearly independent. The linear independence is a natural assumption: otherwise, some elements of $ \bq (x) $ are redundant. When data are collected from a DRM, the whole data set can be utilized to estimate $ G_{0} $, which leads to an efficiency gain. The nonparametric treatment of $ G_{0} $ in the DRM is unrestrictive. Combined with a moderate-sized $ \bq (x) $, a single DRM contains a broad range of parametric distribution families. Thus, the DRM has a low risk of model misspecification. There is a growing interest in the DRM in statistics \citep {qin1998inferences, fokianos2001semiparametric, de2017bayesian, zhuang2019semiparametric} as well as in the machine learning community \citep {sugiyama2012density}. In this paper, we study the inference problem for population quantiles under the DRM. Population quantiles and their functions are important parameters in many applications. For example, government agencies gauge the overall economic status of a country based on annual surveys of the household income distribution. The trend in the quantiles of the income distribution is an informative economic indicator \citep {berger2003variance, muller2008measurement}. In forestry, the lower quantiles of the mechanical strength of wood products are vital design values \citep {verrill2015simulation}. Other examples include \citet{chen1993smoothed, yang2012bayesian, chen2013quantile, chen2016monitoring, koenker2017handbook, gonccalves2018dynamic, chen2019small}. The data from DRMs are a special type of biased sample \citep{vardi1982nonparametric, vardi1985empirical, qin1998inferences, qin2017connections}. The empirical likelihood (EL) of \citet {owen2001empirical} is an ideal platform for statistical inference under the DRM. The EL retains the effectiveness of likelihood methods and does not impose a restrictive parametric assumption. The ELRT statistic has a neat chi-square limiting distribution, much like the parametric likelihood ratio test given independent and identically distributed (i.i.d.)
observations \citep {owen1988empirical, qin1994empirical}. The EL has already been widely used for data analysis under the DRM \citep {qin1993empirical, qin1997goodness, chen2013quantile, cai2017hypothesis}. However, there has been limited discussion of the ELRT in the biased sampling context. Neither \citet {qin1993empirical} nor \citet {cai2017hypothesis} accommodates additional estimating equations. Although the classical Wald method remains effective for both hypothesis tests and confidence regions \citep{qin1998inferences, chen2013quantile, chen2016monitoring}, it must be aided by a consistent and stable variance estimate. In addition, its confidence regions are elliptical regardless of the shape of the data cloud. Thus, an ELRT has the potential to extend the reach of inference under the DRM considerably. This paper establishes the limiting distribution of the ELRT for quantiles under the DRM: we prove that, under certain conditions, the ELRT statistic has a chi-square limiting distribution. The resulting confidence regions have data-driven shapes, more accurate coverage probabilities, and smaller volumes. In Section \ref {sec:ELRT}, we state the problem of interest and the proposed ELRT under the DRM. In Section \ref {sec:main_results}, we study the limiting distribution of the ELRT statistic and some other useful asymptotic results. We illustrate the superiority of the ELRT and the associated confidence regions through simulated data in Section \ref {sec:simulations} and for real-world data in Section \ref {sec:realdata}. Technical details and the proofs of the main theorems are given in Appendices \ref {App:Proofs} and \ref {App:define_profile}. \endinput \section{Asymptotic properties of $ R_{n} $ and other quantities} \label{sec:main_results} The distributional information of $ R_{n} $ is vital to the implementation of the ELRT in applications. In this section, we show that it is asymptotically chi-square distributed. We also present some secondary but useful asymptotic results. \subsection{A dual function} The profile log-EL function $ \tilde \ell_{n} (\bxi^{*}) $ is defined through a constrained optimization problem that can be solved by the method of Lagrange multipliers. Let $ \boldsymbol {t} = (t_{0}, \ldots, t_{m})$ and $\blambda = \{ \lambda_{r}: r \in I \} $ be Lagrange multipliers. Define a Lagrangian as \begin{align*} \mathcal{L} (\boldsymbol{t}, \blambda, \btheta, G_{0}) = & \ell_{n} (\btheta, G_{0}) + \sum_{r = 0}^{m} t_{r} \big \{ 1 - \sum_{k, j} p_{k j} h_{r} (x_{k j}, \btheta) \big \} \nonumber \\ & - \sum_{r \in I} n \lambda_{r} \big \{ \sum_{k, j} p_{k j} \varphi_{r} (x_{k j}, \btheta, \bxi^{*}) \big \}. \end{align*} In Appendix \ref {App:define_profile}, we will show that under mild conditions that are easy to verify, there always exists some $ \btheta $ such that a solution in $ G_{0} $ to \eqref {auto.eq} and \eqref {quantile.eq} exists. With this guarantee, according to the Karush--Kuhn--Tucker theorem \citep {boyd2004convex}, the solution to the constrained optimization problem in \eqref{profile.eq} satisfies \begin{align*} \frac { \partial \mathcal{L} (\boldsymbol{t}, \blambda, \btheta, G_{0}) } { \partial (\boldsymbol {t}, \blambda, \btheta, p_{k j}) } = \boldsymbol {0}. \end{align*} Let $ (\hat { \boldsymbol {t} }, \hat \blambda, \hat \btheta, \hat p_{k j}) $ be the solution.
Some simple algebra gives $ \hat t_{r} = n_{r} $ and \begin{align*} \hat p_{k j} = n^{-1} \left \{ \sum_{r = 0}^{m} \rho_{r} h_{r} (x_{k j}, \hat \btheta) + \sum_{r \in I} \hat \lambda_{r} \varphi_{r} (x_{k j}, \hat \btheta, \bxi^{*}) \right \}^{-1}, \end{align*} where $ \rho_r = n_{r}/n $. We now introduce another set of notation: \begin{gather*} \bar {h} (x, \btheta) = \sum_{r = 0}^{m} \rho_{r} h_{r} (x, \btheta), \\ \bh (x, \btheta) = (\rho_{1} h_{1} (x, \btheta)/\bar {h} (x, \btheta), \ldots, \rho_{m} h_{m} (x, \btheta)/\bar {h} (x, \btheta))^{\top}, \\ \psi_{r} (x, \btheta) = \varphi_{r} (x, \btheta, \bxi^{*})/\bar {h} (x, \btheta), \\ \boldsymbol{\psi} (x, \btheta) = \{ \psi_{r} (x, \btheta) : r \in I \}. \end{gather*} To aid our memory, we note that $ \bar {h} (x, \btheta) $ is a mixture density with mixing proportions $ \rho_{0}, \ldots, \rho_{m} $; $ \bh (x, \btheta) $ is a vector of density functions with respect to the mixture $ \bar {h} (x, \btheta) $ combined with the mixing proportions; and $ \boldsymbol{\psi} (x, \btheta) $ is a vector of normalized $ \varphi_{r} (x, \btheta, \bxi^{*}) $. With the help of this notation, we define a {\em dual function} \begin{align} \mathcal {D} (\blambda, \btheta) = & \sum_{k, j} \btheta_{k}^{\top} \bq (x_{k j}) - \sum_{k, j} \log \bar {h} (x_{k j}, \btheta) \nonumber \\ & - \sum_{k, j} \log \big \{ 1 + \sum_{r \in I} \lambda_{r} \psi_{r} (x_{k j}, \btheta) \big \}. \label {new_dual1} \end{align} The dual function has some easily verified mathematical properties. We can show that \begin{align} \tilde \ell_{n} (\bxi^{*}) = \mathcal {D} (\hat \blambda, \hat \btheta) - n \log n, \label {dual_relation} \end{align} and that $ (\hat \blambda, \hat \btheta) $ is a saddle point of $ \mathcal {D} (\blambda, \btheta) $ satisfying \begin{align} \frac {\partial \mathcal {D} (\blambda, \btheta)} {\partial (\blambda, \btheta)} = \boldsymbol {0}. \label {joint_solution} \end{align} In the following section, we study some of the properties of $ \tilde \ell_{n} (\bxi^{*}) $ through the dual function $ \mathcal {D} (\blambda, \btheta) $. \subsection{Asymptotic properties} We discuss the asymptotic properties under the following nonrestrictive conditions on the sampling plan and the DRM. \noindent {\bf Conditions:} \begin{enumerate}[(i)] \item \label{Condition.i} The sample proportions $ \rho_{k} = n_{k}/n $ have limits in $ (0, 1) $ as $ n \to \infty $; \item \label{Condition.ii} The matrix $ \mathbbm {E}_{0} [ \bq (X) \bq^{\top} (X) ] $ is positive definite; \item \label{Condition.iii} For each $ k = 0, 1, \ldots, m $ and $ \btheta_{k} $ in a neighbourhood of the true parameter value $ \btheta_{k}^{*} $, we have \[ \mathbbm {E}_{0} \left [ \exp (\btheta_{k}^{\top} \bq (X)) \right ] = \mathbbm {E}_{0} [ h_{k} (X, \btheta) ] < \infty. \] \end{enumerate} Here are some implications of the above conditions. \begin{enumerate} \item Under Condition \eqref{Condition.iii}, the moment generating function of $ \bq (X) $ with respect to $ G_{k} $ exists in a neighbourhood of $ \boldsymbol {0} $. Hence, all finite-order moments of $ \| \bq (X) \| $ are finite. \item When $ n $ is large enough and $ (\blambda, \btheta) $ is in a small neighbourhood of $ (\boldsymbol {0}, \btheta^{*}) $, the derivatives of the dual function $ \mathcal {D} (\blambda, \btheta) $ are all bounded by some polynomials of $ \| \bq (x) \|$. Hence, they are all integrable. 
\item Under Condition \eqref{Condition.ii}, the sample version of $ \mathbbm {E}_{0} [\bq (X) \bq^{\top} (X)] $ is also positive definite when $ n $ is sufficiently large. \end{enumerate} We now state the main results; the proofs are given in Appendix \ref {App:Proofs}. \begin{lem} \label{lem:second_deriv} Under Conditions \eqref{Condition.i} to \eqref{Condition.iii}, as $ n \to \infty $, \[ n^{-1} \left. \frac {\partial^{2} \mathcal {D} (\blambda, \btheta)} {\partial (\blambda, \btheta) \partial (\blambda, \btheta)^{\top}} \right \rvert_{\blambda = \boldsymbol {0}, \btheta = \btheta^*} \to S, \] almost surely for some full-rank square matrix $ S $ of dimension $ (dm+l) $ that has the expression \[ S = \sum_{k = 0}^{m} \rho_{k} \mathbbm {E}_{k} \left [ \frac { \partial^{2} \mathcal {D}_{k} (X, \boldsymbol {0}, \btheta^{*}) } { \partial (\blambda, \btheta) \partial (\blambda, \btheta)^{\top} } \right ]. \] \end{lem} Unlike the Hessian of a usual log-likelihood function, the second derivative of the dual function $ \mathcal {D} (\blambda, \btheta) $ is not negative definite. This is understandable because $ \blambda $ is not a model parameter. However, it has full rank and plays an important role in localizing $ \hat \btheta $. The next result implies that the dual function $ \mathcal {D} (\blambda, \btheta) $ resembles the log-likelihood function under regularity conditions in an important way: its first derivative is an unbiased estimating function. \begin{lem} \label{lem:first_deriv} Under Conditions \eqref{Condition.i} to \eqref{Condition.iii}, we have \[ \mathbbm {E} \left [ \left. \frac{\partial \mathcal {D} (\blambda, \btheta) }{\partial (\blambda, \btheta)} \right \rvert_{\blambda = \boldsymbol {0}, \btheta = \btheta^*} \right ] = \boldsymbol {0}, \] where the expectation is calculated by regarding $ x_{k j} $ as a random variable with distribution $ G_{k} $. Furthermore, as $ n \to \infty $, we have \[ n^{-1/2} \left. \dfrac {\partial \mathcal {D} (\blambda, \btheta)} {\partial (\blambda, \btheta)} \right \rvert_{\blambda = \boldsymbol {0}, \btheta = \btheta^*} \overset {d} {\to} N (\boldsymbol {0}, V), \] where $ V $ is a square matrix of dimension $ (dm+l) $. \end{lem} A key step in the asymptotic study of $ \hat \btheta $ and the ELRT statistic $ R_{n} $ is localization: that is, showing that $ \hat \btheta $ falls in a small neighbourhood of the true value $ \btheta^{*} $ as the sample size $ n $ goes to infinity. The following lemma asserts that $ \hat \btheta $ is almost surely located in the $ O (n^{-1/3}) $-neighbourhood of $ \btheta^{*} $. \begin{lem} \label{lem:para_normality} Under Conditions \eqref{Condition.i} to \eqref{Condition.iii}, as $ n \to \infty $, the saddle point $ (\hat \blambda, \hat \btheta) $ of the dual function $ \mathcal {D} (\blambda, \btheta) $ is in the $ n^{-1/3} $-neighbourhood of $ (\boldsymbol {0}, \btheta^{*}) $ with probability $ 1 $. In addition, $ \sqrt {n} (\hat \blambda, \hat \btheta - \btheta^{*}) $ is asymptotically multivariate normal. \end{lem} The results in the previous lemma shed light on the asymptotic properties of the EL under the DRM. At the same time, they pave the way for the following celebrated conclusion in the EL literature. \begin{thm} \label{thm:ELRT_Stat_chisq} Under Conditions \eqref {Condition.i} to \eqref {Condition.iii} and the null hypothesis \eqref{simple_hyp}, as $ n \to \infty $, the ELRT statistic \[ R_{n} = 2 \left [ \sup_{ \btheta, G_{0} } \{ \ell_{n} (\btheta, G_{0}) \} - \tilde \ell_{n} (\bxi^{*}) \right ] \overset {d} \to \chi_{l}^{2}.
\] \end{thm} Theorem \ref {thm:ELRT_Stat_chisq} enables us to determine an approximate rejection region for the ELRT. We reject the null hypothesis at the significance level $ \alpha $ when the observed value of $ R_{n} $ is larger than the upper $ \alpha $ quantile of the chi-square distribution $ \chi_{l}^{2} $. This also provides a foundation for the construction of confidence regions of $ \bxi $. Let \[ R_{n} (\bxi) = 2 \left [ \sup_{ \btheta, G_{0} } \{ \ell_{n} (\btheta, G_{0}) \} - \tilde \ell_{n} (\bxi) \right ]. \] An ELRT-based $ (1 - \alpha) $ approximate confidence region for $ \bxi $ is \begin{align} \{ \bxi: R_{n} (\bxi) \leq \chi_{l}^{2} (1 - \alpha) \}, \label {ELRT_CR} \end{align} where $ \chi^{2}_{l} (1 - \alpha) $ is the $ (1 - \alpha) $ quantile of $ \chi_{l}^{2} $. \endinput \section{Real-data analysis} \label{sec:realdata} In the previous simulations, we chose the most suitable basis function $ \bq (x) $ in each case because the population distributions were known to us. This is not possible in real-world applications. In this section, we create a simulation population based on the US Consumer Expenditure Surveys data concerning US expenditure, income, and demographics. The data set is available on the US Bureau of Labor Statistics website (\url {https://www.bls.gov/cex/pumd.htm}). The data are collected by the Census Bureau in the form of panel surveys, in which approximately 5000 households are contacted each quarter. After a household has been surveyed, it is dropped from subsequent surveys and replaced by a new household. The response variable is the annual sum of the wages or salary income received by all household members before any deductions. Household income is a good reflection of economic well-being. The data files include some imputed values to replace missing values due to non-response. We study a six-year period from 2013 to 2018, and we log-transform the response values to make the scale more suitable for numerical computation. Note that quantiles are equivariant under monotone transformations, so inference on the log scale translates directly back to the original scale. We exclude households that have no recorded income even after imputation, and there remain 4919, 5304, 4641, 4606, 4475, and 4222 households from 2013 to 2018. The histograms shown in Figure \ref {RealData_hist} indicate that it is difficult to determine a suitable parametric model for these data sets, but a DRM may work well enough. We take the basis function $ \bq (x) = (1, x, x^{2})^{\top} $; it may not be the best choice, but this makes the simulation results for the DRM analysis all the more convincing. \begin{figure}[!ht] \centering \caption{Histograms of log-transformed annual household incomes.} \label{RealData_hist} \includegraphics[height = 10cm, width = \textwidth]{RealData-pdf/Hist.pdf} \end{figure} In this simulation, we form 6 populations based on the yearly data sets. We test hypotheses on the $ 20$th and $ 50$th percentiles based on independent samples of size 100, which are obtained by sampling with replacement from the respective populations. When testing the value of a single quantile of a single population, the limiting distribution of $ R_{n} $ is $ \chi_{1}^{2} $. Figures \ref {RealData_QQ_ELRT_20quan} and \ref {RealData_QQ_ELRT_50quan} contain a few Q-Q plots of $ R_{n} $ versus $ \chi_{1}^{2} $ for $ H_{0}: \xi_{r} = \xi_{r}^{*} $ with $ \tau_{r} = 20\% $ or $ \tau_{r} = 50\% $. In all the plots, the points of $ R_{n} $ are close to the 45-degree line. Thus, the precision of the chi-square approximation is satisfactory.
The plots for other levels or populations are similar and not presented. \begin{figure}[!ht] \centering \caption{Q-Q plots of $ R_{n} $ values against $ \chi_{1}^{2} $, based on real data of equal sample size $ n_{r} = 100 $. Quantile levels are $ 20\% $.} \label{RealData_QQ_ELRT_20quan} \includegraphics[height = 6cm, width = \textwidth]{RealData-pdf/ELRTQQplot_n100_individuals_20quan.pdf} \end{figure} \begin{figure}[!ht] \centering \caption{Q-Q plots of $ R_{n} $ values against $ \chi_{1}^{2} $, based on real data of equal sample size $ n_{r} = 100 $. Quantile levels are $ 50\% $.} \label{RealData_QQ_ELRT_50quan} \includegraphics[height = 6cm, width = \textwidth]{RealData-pdf/ELRTQQplot_n100_individuals_50quan.pdf} \end{figure} The Wald method \eqref {Wald_CR} may be regarded as being derived from an asymptotically $ \chi_{1}^{2} $-distributed statistic: \[ W_{n} = n (\tilde \xi_{r} - \xi_{r}^{*})^{\top} \tilde \Omega^{-1} (\tilde \xi_{r} - \xi_{r}^{*}). \] We also obtain $ W_{n} $ values and construct Q-Q plots, and a selected few are given in Figures \ref {RealData_QQ_Wald_20quan} and \ref {RealData_QQ_Wald_50quan}. These plots show that the chi-square approximation is not as satisfactory. There are many possible explanations, but a major factor could be the unstable variance estimator $ \tilde \Omega $ that the Wald method must rely on, especially for lower quantiles. One of the most valued properties of the likelihood ratio test approach is that there is no need to estimate a scale factor. \begin{figure}[!ht] \centering \caption{Q-Q plots of Wald statistic values against $ \chi_{1}^{2} $, based on real data of equal sample size $ n_{r} = 100 $. Quantile levels are $ 20\% $.} \label{RealData_QQ_Wald_20quan} \includegraphics[height = 6cm, width = \textwidth]{RealData-pdf/WaldQQplot_n100_individuals_20quan.pdf} \end{figure} \begin{figure}[!ht] \centering \caption{Q-Q plots of Wald statistic values against $ \chi_{1}^{2} $, based on real data of equal sample size $ n_{r} = 100 $. Quantile levels are $ 50\% $.} \label{RealData_QQ_Wald_50quan} \includegraphics[height = 6cm, width = \textwidth]{RealData-pdf/WaldQQplot_n100_individuals_50quan.pdf} \end{figure} A direct consequence of the poor chi-square approximation could be undercoverage of the confidence intervals. Table~\ref {RealData_CI_Ava_Ecv1} gives the coverage probabilities and average lengths of the confidence intervals based on three methods: ELRT in \eqref {ELRT_CR}, Wald in \eqref {Wald_CR}, and nonparametric in \eqref {Nonpara_CR}. The improved efficiency of the DRM is best reflected in the average lengths of the confidence intervals. It can be seen that the DRM-based methods achieve on average about $ 15\% $ and $ 25\% $ improvement over the nonparametric method for the $ 20 $th and $ 50 $th percentiles, respectively. Comparing the ELRT and Wald methods, both under the DRM, we find that the ELRT is comparable to the Wald method for the $ 20 $th percentile and clearly more efficient for the $ 50 $th percentile.
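Computationally, $ W_{n} $ is simply a quadratic form referred to a chi-square distribution. A schematic sketch follows; the numbers are hypothetical stand-ins for the MELE, the hypothesized values, and the estimated covariance, which in practice come from the {\tt drmdel} output described in Section \ref{sec:simulations}.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2

n = 100
xi_tilde = np.array([10.42, 10.87])     # hypothetical MELE of two quantiles
xi_star = np.array([10.50, 10.80])      # hypothesized values
Omega_tilde = np.array([[0.30, 0.05],
                        [0.05, 0.25]])  # hypothetical plug-in covariance

diff = xi_tilde - xi_star
W = n * diff @ np.linalg.solve(Omega_tilde, diff)  # Wald quadratic form
print(W, chi2.sf(W, df=len(diff)))                 # statistic and p-value
\end{verbatim}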
\begin{table}[!ht] \centering \caption{Average lengths and empirical coverage probabilities of the individual confidence intervals, based on real data of equal sample size $ n_{r} = 100 $.} \label{RealData_CI_Ava_Ecv1} \begin{tabular}{cccccccc} \toprule \multirow{2}{*}{} & \multirow{2}{*}{Year} & \multicolumn{2}{c}{ELRT} & \multicolumn{2}{c}{Wald} & \multicolumn{2}{c}{Nonparametric} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} & & $ 90\% $ & $ 95\% $ & $ 90\% $ & $ 95\% $ & $ 90\% $ & $ 95\% $ \\ \midrule \multicolumn{8}{c}{\multirow{1}{*}{{\bf Average lengths}}} \\ \midrule \multicolumn{8}{c}{\multirow{2}{*}{quantile levels all $ = 20\% $}} \\ \\ & 2013 & $ 0.465 $ & $ 0.563 $ & $ 0.440 $ & $ 0.524 $ & $ 0.513 $ & $ 0.611 $ \\ & 2014 & $ 0.464 $ & $ 0.559 $ & $ 0.437 $ & $ 0.520 $ & $ 0.528 $ & $ 0.630 $ \\ & 2015 & $ 0.459 $ & $ 0.553 $ & $ 0.432 $ & $ 0.515 $ & $ 0.519 $ & $ 0.619 $ \\ & 2016 & $ 0.461 $ & $ 0.558 $ & $ 0.435 $ & $ 0.519 $ & $ 0.527 $ & $ 0.628 $ \\ & 2017 & $ 0.459 $ & $ 0.557 $ & $ 0.434 $ & $ 0.518 $ & $ 0.539 $ & $ 0.642 $ \\ & 2018 & $ 0.438 $ & $ 0.529 $ & $ 0.416 $ & $ 0.496 $ & $ 0.523 $ & $ 0.623 $ \\ & average & $ 0.458 $ & $ 0.553 $ & $ 0.433 $ & $ 0.515 $ & $ 0.525 $ & $ 0.626 $ \\ \multicolumn{8}{c}{\multirow{2}{*}{quantile levels all $ = 50\% $}} \\ \\ & 2013 & $ 0.307 $ & $ 0.364 $ & $ 0.315 $ & $ 0.376 $ & $ 0.383 $ & $ 0.457 $ \\ & 2014 & $ 0.306 $ & $ 0.366 $ & $ 0.316 $ & $ 0.376 $ & $ 0.379 $ & $ 0.452 $ \\ & 2015 & $ 0.304 $ & $ 0.364 $ & $ 0.314 $ & $ 0.374 $ & $ 0.374 $ & $ 0.446 $ \\ & 2016 & $ 0.305 $ & $ 0.364 $ & $ 0.315 $ & $ 0.375 $ & $ 0.382 $ & $ 0.455 $ \\ & 2017 & $ 0.304 $ & $ 0.364 $ & $ 0.316 $ & $ 0.376 $ & $ 0.390 $ & $ 0.465 $ \\ & 2018 & $ 0.300 $ & $ 0.357 $ & $ 0.311 $ & $ 0.371 $ & $ 0.373 $ & $ 0.444 $ \\ & average & $ 0.304 $ & $ 0.363 $ & $ 0.315 $ & $ 0.375 $ & $ 0.380 $ & $ 0.453 $ \\ \\ \midrule \multicolumn{8}{c}{\multirow{1}{*}{{\bf Empirical coverage probabilities}}} \\ \midrule \multicolumn{8}{c}{\multirow{2}{*}{quantile levels all $ = 20\% $}} \\ \\ & 2013 & $ 88.0\% $ & $ 94.0\% $ & $ 88.7\% $ & $ 93.2\% $ & $ 87.7\% $ & $ 92.3\% $ \\ & 2014 & $ 90.1\% $ & $ 95.1\% $ & $ 88.7\% $ & $ 94.7\% $ & $ 87.9\% $ & $ 92.6\% $ \\ & 2015 & $ 89.8\% $ & $ 94.6\% $ & $ 88.6\% $ & $ 93.6\% $ & $ 89.5\% $ & $ 94.3\% $ \\ & 2016 & $ 89.7\% $ & $ 95.1\% $ & $ 88.6\% $ & $ 94.1\% $ & $ 87.7\% $ & $ 94.2\% $ \\ & 2017 & $ 90.0\% $ & $ 94.6\% $ & $ 87.8\% $ & $ 93.3\% $ & $ 86.6\% $ & $ 91.7\% $ \\ & 2018 & $ 90.4\% $ & $ 95.6\% $ & $ 87.5\% $ & $ 91.7\% $ & $ 89.0\% $ & $ 93.1\% $ \\ & average & $ 89.7\% $ & $ 94.8\% $ & $ 88.3\% $ & $ 93.4\% $ & $ 88.1\% $ & $ 93.0\% $ \\ \multicolumn{8}{c}{\multirow{2}{*}{quantile levels all $ = 50\% $}} \\ \\ & 2013 & $ 89.8\% $ & $ 94.2\% $ & $ 89.3\% $ & $ 95.2\% $ & $ 88.5\% $ & $ 93.3\% $ \\ & 2014 & $ 89.2\% $ & $ 95.3\% $ & $ 90.4\% $ & $ 95.4\% $ & $ 89.4\% $ & $ 94.8\% $ \\ & 2015 & $ 91.7\% $ & $ 96.0\% $ & $ 92.3\% $ & $ 95.7\% $ & $ 92.4\% $ & $ 95.9\% $ \\ & 2016 & $ 90.0\% $ & $ 95.5\% $ & $ 90.9\% $ & $ 95.5\% $ & $ 90.9\% $ & $ 94.9\% $ \\ & 2017 & $ 88.9\% $ & $ 95.2\% $ & $ 90.1\% $ & $ 96.0\% $ & $ 91.7\% $ & $ 95.9\% $ \\ & 2018 & $ 89.6\% $ & $ 94.9\% $ & $ 89.8\% $ & $ 95.4\% $ & $ 90.0\% $ & $ 95.4\% $ \\ & average & $ 89.9\% $ & $ 95.2\% $ & $ 90.5\% $ & $ 95.5\% $ & $ 90.5\% $ & $ 95.0\% $ \\ \bottomrule \end{tabular} \end{table} In the next simulation, we focus on the confidence region of the first quintiles of the household incomes in the years 2013 and 
2018, namely the $ 20$th percentiles for these two years. Figure~\ref {RealData_CR_2013_2018_quan20_20} shows the $ 95\% $ confidence regions using the three methods based on simulated real data of size $ n_{r} = 100 $. Table~\ref {RealData_Ava_Ecv2} gives the average coverages and areas of the three confidence regions, based on 1000 repetitions. The ELRT produces the most satisfactory confidence regions: relative to the Wald regions, their justifiably larger areas achieve more accurate coverage probabilities, and they are much more efficient than the nonparametric confidence regions. \begin{figure}[!hb] \centering \caption{Confidence regions of the $ 20 $th percentiles of years 2013 and 2018 by ELRT (solid), Wald (dashed), and nonparametric (dotted) methods, based on a simulated real data set of an equal sample size $ n_{r} = 100 $. The true quantiles are marked with a diamond. The level of confidence is $ 95\% $.} \label{RealData_CR_2013_2018_quan20_20} \includegraphics[height = 0.8\textwidth, width = 0.8\textwidth]{RealData-pdf/Contours_3methods_2013_2018_quan20_20.pdf} \end{figure} \begin{table}[!ht] \centering \caption{Empirical coverage probabilities and average areas for $ 20 $th percentiles of the years 2013 and 2018, based on real data of an equal sample size.} \label{RealData_Ava_Ecv2} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccc} \toprule \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{90\%} & \multicolumn{2}{c}{95\%}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} & & Coverage probability & Area & Coverage probability & Area \\ \midrule \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 100 $}} \\ \\ & ELRT & $ 89.00\% $ & $ 0.284 $ & $ 94.20\% $ & $ 0.379 $ \\ & Wald & $ 86.30\% $ & $ 0.245 $ & $ 91.80\% $ & $ 0.319 $ \\ & Nonparametric & $ 87.20\% $ & $ 0.358 $ & $ 91.60\% $ & $ 0.466 $ \\ \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 200 $}} \\ \\ & ELRT & $ 88.20\% $ & $ 0.130 $ & $ 93.40\% $ & $ 0.171 $ \\ & Wald & $ 86.10\% $ & $ 0.120 $ & $ 92.30\% $ & $ 0.156 $ \\ & Nonparametric & $ 88.80\% $ & $ 0.183 $ & $ 93.80\% $ & $ 0.238 $ \\ \bottomrule \end{tabular} } \end{table} \endinput \section{Simulation studies} \label{sec:simulations} In this section, we report some simulation results. We conclude that the chi-square approximation to the sampling distribution of $ R_{n} $ is very accurate. The corresponding confidence regions have a data-driven shape and accurate coverage probabilities. In almost all cases considered, the $ R_{n} $-based confidence regions outperform those based on the Wald method in terms of the average areas and coverage probabilities. The DRM markedly improves the statistical efficiency, and the details are as follows. \subsection{Numerical implementation and methods included} Recall that the ELRT statistic $ R_{n} $ is defined to be \[ R_{n} = 2 \left [ \sup_{\btheta, G_{0}} \{ \ell_{n} (\btheta, G_{0}) \} - \tilde \ell_{n} (\bxi^{*}) \right ]. \] In data analysis, we must solve the optimization problem $ \sup_{\btheta, G_{0}} \{ \ell_{n} (\btheta, G_{0}) \} $. As \citet {cai2017hypothesis} suggest, it can be transformed into the optimization of a convex function, which has a simple solution. We further turn this optimization problem into the problem of solving a system of equations that are formed by equating the derivatives of the induced convex function to $ \boldsymbol {0} $.
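To make this root-solving formulation concrete, the following sketch computes $ \sup_{\btheta, G_{0}} \{ \ell_{n} (\btheta, G_{0}) \} $ in the simplest two-sample setting ($ m = 1 $) with basis function $ \bq (x) = (1, x)^{\top} $, by solving the score equations of the dual function \eqref{new_dual1} at $ \blambda = \boldsymbol {0} $ and then applying \eqref{dual_relation} with an empty index set $ I $. It is written in Python with {\tt scipy} purely as an illustration; the implementation used in this paper is the {\tt R} root solver described next.

\begin{verbatim}
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(1)
s0 = rng.normal(0.0, 1.0, size=100)         # sample from G_0
s1 = rng.normal(0.5, 1.0, size=100)         # sample from G_1
x = np.concatenate([s0, s1])
n0, n = len(s0), len(x)
rho1 = len(s1) / n
q = np.column_stack([np.ones_like(x), x])   # basis q(x) = (1, x)

def score(theta1):
    """Gradient in theta_1 of the dual function at lambda = 0."""
    h1 = np.exp(q @ theta1)                 # dG_1/dG_0 at each observation
    hbar = (1 - rho1) + rho1 * h1           # mixture density \bar h
    w = rho1 * h1 / hbar
    return q[n0:].sum(axis=0) - (w[:, None] * q).sum(axis=0)

sol = root(score, np.zeros(2))              # MINPACK Powell-hybrid solver
theta1_hat = sol.x

def dual(theta1):                           # D(0, theta), cf. (new_dual1)
    h1 = np.exp(q @ theta1)
    hbar = (1 - rho1) + rho1 * h1
    return (q[n0:] @ theta1).sum() - np.log(hbar).sum()

sup_loglik = dual(theta1_hat) - n * np.log(n)   # via (dual_relation)
print(sol.success, theta1_hat, sup_loglik)
\end{verbatim}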
The numerical implementation can be efficiently carried out by a root solver in the {\tt R} \citep {R} package {\tt nleqslv} \citep {nleqslv} for nonlinear equations. It uses either the Newton or Broyden iterative algorithms. To compute $ \tilde \ell_{n} (\bxi^{*}) $, we can solve \eqref {joint_solution}, as \eqref {dual_relation} suggests. This leads to a system of $ dm+l $ nonlinear equations in $ (\blambda, \btheta) $, with $ d $ being the dimension of the vector-valued basis function $ \bq (x) $ and $ l $ the number of population quantiles of interest specified in $ \bxi^{*} $. In most applications, a $ \bq (x) $ with dimension 4 or less is suitable. For a system of this size, the root solver in the {\tt R} package {\tt nleqslv} is very effective even when $ m $ is as large as $ 20 $. The existence of the solution to \eqref {auto.eq} and \eqref {quantile.eq} is proved in Appendix \ref {App:define_profile}. Guided by this proof, our choice of the initial $ \blambda $ and $ \btheta $ guarantees numerical success. As is typical for DRM examples, we simulate data from the normal and gamma distributions and examine the ELRT-based hypothesis tests and confidence regions for the population quantiles. For comparison, we include Wald-based and nonparametric inference on the same quantiles. To make the article self-contained, we now briefly review the Wald and nonparametric methods. ~ \\ \noindent {\bf Wald method.} The Wald method for constructing confidence regions of $ \bxi $ was given in \citet {chen2013quantile}. Let $ (\tilde \btheta, \tilde G_{0}) $ be the argument maximizer of $ \sup_{ \btheta, G_{0} } \{ \ell_{n} (\btheta, G_{0}) \} $, and also let \[ \tilde G_{r} (x) = \sum_{k, j} \mathbbm {1} (x_{k j} \leq x) h_{r} (x_{k j}, \tilde \btheta) \mathrm {d} \tilde G_{0} (x_{k j}), \] for $ r = 1, \ldots, m $, where $ \mathrm {d} \tilde G_{0} (x) = \tilde G_{0} (x) - \tilde G_{0} (x_{-}) $. The maximum EL estimator (MELE) of the $ \tau_{r} $ quantile of $ G_{r} $ is then given by \[ \tilde \xi_{r} = \inf \{ x: \tilde G_{r} (x) \geq \tau_{r} \}. \] Let $ \tilde \bxi = \{ \tilde \xi_{r}: r \in I \} $. We have, as $ n \to \infty $, \[ \sqrt {n} (\tilde \bxi - \bxi^{*}) \overset {d} {\to} N (\boldsymbol {0}, \Omega), \] for some matrix $ \Omega $ that is a function of the $ G_{r} $ and $ \btheta $. A plug-in estimate $ \tilde \Omega $ of $ \Omega $ was suggested by \citet {chen2013quantile}, and an {\tt R} package {\tt drmdel} \citep {drmdel} by the authors of \citet {cai2017hypothesis} includes the MELE $ \tilde \bxi $ and $ \tilde \Omega $ in its output. A level $ (1 - \alpha) $ approximate confidence region for $ \bxi $ based on the Wald method is then given by \begin{align} \{ \bxi: n (\tilde \bxi - \bxi)^{\top} \tilde \Omega^{-1} (\tilde \bxi - \bxi) \leq \chi_{l}^{2} (1 - \alpha) \}. \label {Wald_CR} \end{align} The Wald method can also be used for hypothesis tests on quantiles. We refer to the confidence region in \eqref {Wald_CR} as the one based on the {\em Wald method}. ~ \\ \noindent {\bf Nonparametric method.} Suppose $ \hat G_{r} (x) = n_{r}^{-1} \sum_{j = 1}^{n_{r}} \mathbbm{1} (x_{r j} \leq x) $ is the empirical distribution based on a sample from the distribution $ G_{r} $, and $ \hat \xi_{r} $ is the corresponding sample quantile. The sample quantile is asymptotically normal \citep {serfling2000approximation} with asymptotic variance $ {\tau_{r} (1 - \tau_{r})}/{(\rho_{r} g_{r}^{2} (\xi_{r}))} $ as $ n \to \infty $ and $ n_{r}/n \to \rho_{r} $.
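To preview the variance estimation this entails, here is a minimal one-sample sketch (the function name is ours) that plugs a Gaussian-kernel density estimate, with the rule-of-thumb bandwidth specified in what follows, into the asymptotic variance of the sample quantile:

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def nonparametric_quantile_ci(x, tau, alpha=0.05):
    """Normal-approximation CI for the tau-quantile of one sample, using a
    Gaussian-kernel density estimate with Silverman's rule-of-thumb bandwidth."""
    n = len(x)
    xi_hat = np.quantile(x, tau)                        # sample quantile
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    b = 0.9 * min(np.std(x, ddof=1), iqr / 1.34) * n ** (-0.2)
    g_hat = norm.pdf((x - xi_hat) / b).sum() / (n * b)  # KDE at xi_hat
    se = np.sqrt(tau * (1 - tau) / (n * g_hat ** 2))    # plug-in standard error
    z = norm.ppf(1 - alpha / 2)
    return xi_hat - z * se, xi_hat + z * se

rng = np.random.default_rng(2)
print(nonparametric_quantile_ci(rng.normal(size=100), tau=0.2))
\end{verbatim}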
In view of this asymptotic normality, the Wald method remains applicable with the help of a nonparametric consistent density estimator. We follow the literature and let \[ \hat g_{r} (x) = \frac {1} {n_{r} b_{r}} \sum_{j = 1}^{n_{r}} K \left ( \frac {x_{r j} - x} {b_{r}} \right ), \] for some kernel function $ K (\cdot) $ and bandwidth $ b_{r} $. Under mild conditions on $ g_{r} (\cdot) $ and proper choices of $ K (\cdot) $ and $ b_{r} $, $ \hat g_{r} (x) $ is consistent \citep {silverman1986density}. We set $ K (\cdot) $ to the density function of the standard normal distribution, and we use a rule-of-thumb bandwidth suggested by \citet {silverman1986density}: \[ b_{r} = 0.9 \min \{ \hat \sigma_{r}, \widehat {\mathrm {IQR}}_{r}/1.34 \} n_{r}^{-1/5}, \] where $ \hat \sigma_{r}$ is the standard deviation of $ \hat G_{r} $ and $\widehat {\mathrm {IQR}}_{r}$ is the interquartile range. With these, we obtain a plug-in estimate \[ \hat T \coloneqq \mathrm {diag} \{ \tau_{r} (1 - \tau_{r})/(\rho_{r} \hat g_{r}^{2} (\hat \xi_{r})): r \in I \}, \] and subsequently a $ (1 - \alpha) $ approximate confidence region for $ \bxi $: \begin{align} \{ \bxi: n (\hat \bxi - \bxi)^{\top} \hat T^{-1} (\hat \bxi - \bxi) \leq \chi_{l}^{2} (1 - \alpha) \}, \label {Nonpara_CR} \end{align} where $ \hat \bxi = \{ \hat \xi_{r}: r \in I \} $. This nonparametric Wald method can also be employed for hypothesis tests on quantiles. We refer to the confidence region in \eqref {Nonpara_CR} as the one based on the {\em nonparametric method}. When constructing the confidence region in \eqref {Nonpara_CR}, density estimation is required as an intermediate step to obtain the variance estimate $ \hat T $. One may also use the bootstrap as an alternative nonparametric route to confidence regions for quantiles. We do not expect these two nonparametric methods to lead to significantly different results, and hence we use \eqref {Nonpara_CR} as the nonparametric competitor in this article. \subsection{Data generated from normal distributions} Normality is routinely assumed but is unlikely to hold strictly in real-world applications. When multiple samples are available, the DRM with $ \bq (x) = (1, x, x^{2})^{\top} $ encompasses all normal distributions without imposing a normality assumption. In this simulation, we generate data from $ m+1 = 6 $ normal distributions with sample sizes $ n_{r} = 100 $. Their means and standard deviations are chosen to be $ (0, 0, 1, 1, 2, 2) $ and $ (1, 1.2, 1.3, 1.5, 2, 1.5) $. In the simulation experiment, we generate 1000 sets of samples of size $ n_{r} = 100 $ and compute the $ R_{n} $ values for the hypothesis on the medians of $ G_{0} $ and $ G_{5} $: \[ H_{0}: (\xi_{0}, \xi_{5}) = (\xi_{0}^{*}, \xi_{5}^{*}) \, \, \, \text { versus } H_{1}: (\xi_{0}, \xi_{5}) \neq (\xi_{0}^{*}, \xi_{5}^{*}) \] where $ \xi_{0}^{*}, \xi_{5}^{*} $ are the true values. Note that although we simulate data from normal distributions, the parametric information does not play any role in the data analysis. Because $ H_{0} $ is true, $ R_{n} $ has a $ \chi_{2}^{2} $ limiting distribution. Figure~\ref {Normal_QQ} gives a quantile-quantile (Q-Q) plot of the 1000 simulated $ R_{n} $ values against the $ \chi_{2}^{2} $ distribution. Over the range from 0 to 6 that matters in most applications, the points are close to the red 45-degree line. Clearly, the chi-square distribution is a good approximation of the sampling distribution of $ R_{n} $, demonstrating good agreement with Theorem \ref {thm:ELRT_Stat_chisq}.
\begin{figure}[!ht] \centering \caption{Q-Q plot of $ R_{n} $ values against $ \chi_{2}^{2} $ based on normal data of equal sample size $ n_{r} = 100 $.} \label{Normal_QQ} \includegraphics[height = 0.8\textwidth, width = 0.8\textwidth]{Normal-pdf/QQplotNormal_n100.pdf} \end{figure} In Figure \ref {CIs_Normal1}, we depict the $ 95\% $ confidence regions of $ \bxi = (\xi_{0}, \xi_{5}) $ based on the ELRT in \eqref {ELRT_CR}, the Wald method in \eqref {Wald_CR}, and the nonparametric method in \eqref {Nonpara_CR}, using a typical simulated data set with the true $ \bxi^{*} $ marked as a red diamond. The ELRT contour is not smooth because $ R_{n} (\bxi) $ is not smooth at data points. Clearly, the ELRT confidence region has the smallest area and is therefore the most efficient. In Table \ref {Normal_Ava_Ecv1}, we make direct quantitative comparisons between the three methods in terms of the coverage probabilities and areas of the $ 90\% $ and $ 95\% $ confidence regions. Both the ELRT and Wald methods under the DRM have empirical coverage probabilities close to the nominal levels; the nonparametric method has overcoverage. In general, the ELRT outperforms the other two methods. \begin{figure}[!ht] \begin{center} \caption{Confidence regions of $ (\xi_{0}, \xi_{5}) $ by ELRT (solid), Wald (dashed), and nonparametric (dotted) methods, based on a simulated normal data set of equal sample size $ n_{r} = 100 $. The true quantiles are marked with a diamond. The level of confidence is $ 95\% $.} \label{CIs_Normal1} \includegraphics[height = 8cm, width = \textwidth]{Normal-pdf/CIContoursNormal_n100.pdf} \end{center} \end{figure} \begin{table}[!ht] \centering \caption{Empirical coverage probabilities and average areas based on normal data of equal sample size.} \label{Normal_Ava_Ecv1} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccc} \toprule \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{90\%} & \multicolumn{2}{c}{95\%}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} & & Coverage probability & Area & Coverage probability & Area \\ \midrule \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 100 $}} \\ \\ & ELRT & $ 89.1\% $ & $ 0.250 $ & $ 95.8\% $ & $ 0.323 $ \\ & Wald & $ 90.8\% $ & $ 0.266 $ & $ 95.4\% $ & $ 0.347 $ \\ & Nonparametric & $ 91.7\% $ & $ 0.374 $ & $ 95.9\% $ & $ 0.487 $ \\ \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 200 $}} \\ \\ & ELRT & $ 89.7\% $ & $ 0.126 $ & $ 95.0\% $ & $ 0.164 $ \\ & Wald & $ 90.5\% $ & $ 0.132 $ & $ 95.2\% $ & $ 0.171 $ \\ & Nonparametric & $ 90.3\% $ & $ 0.183 $ & $ 95.3\% $ & $ 0.239 $ \\ \bottomrule \end{tabular} } \end{table} In applications, the sample sizes from different populations are unlikely to be equal. Does the superiority of the ELRT require equal sample sizes from these populations? We also simulated data from the same distributions with unequal sample sizes. We set the sizes of populations $ G_{0}, G_{1}, G_{4}, G_{5} $ to 100 and 200, and the sizes of populations $ G_{2}, G_{3} $ to 50 and 100, respectively, in the two sample-size scenarios. We constructed confidence regions for the $ 90$th percentile of $ G_{2} $ and the $ 95 $th percentile of $ G_{3} $, where both populations have the smaller sample sizes. Figure~\ref {CIs_Normal2} shows the three $ 95\% $ confidence regions based on a simulated data set; we see that the ELRT is superior. Admittedly, this is one of the more extreme cases. Table~\ref {Normal_Ava_Ecv2} gives the average areas and empirical coverage probabilities of the three confidence regions, based on 1000 repetitions.
The ELRT confidence regions have the most accurate coverage probabilities, while the other two methods undercover. The ELRT confidence regions have larger average areas, but not excessively so. \begin{figure}[!ht] \begin{center} \caption{Confidence regions of $ (\xi_{2}, \xi_{3}) $ by ELRT (solid), Wald (dashed), and nonparametric (dotted) methods, based on a simulated normal data set of unequal sample sizes. The true quantiles are marked with a diamond. The level of confidence is $ 95\% $.} \label{CIs_Normal2} \includegraphics[height = 8cm, width = \textwidth]{Normal-pdf/CIContoursNormal2_n100.pdf} \end{center} \end{figure} \begin{table}[!ht] \centering \caption{Empirical coverage probabilities and average areas based on normal data of unequal sample sizes.} \label{Normal_Ava_Ecv2} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccc} \toprule \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{90\%} & \multicolumn{2}{c}{95\%}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} & & Coverage probability & Area & Coverage probability & Area \\ \midrule \multicolumn{6}{c}{\multirow{2}{*}{$ n_{2} = n_{3} = 50, n_{0} = n_{1} = n_{4} = n_{5} = 100 $}} \\ \\ & ELRT & $ 90.1\% $ & $ 1.307 $ & $ 94.5\% $ & $ 1.741 $ \\ & Wald & $ 83.7\% $ & $ 1.096 $ & $ 88.9\% $ & $ 1.427 $ \\ & Nonparametric & $ 73.6\% $ & $ 1.439 $ & $ 80.0\% $ & $ 1.873 $ \\ \multicolumn{6}{c}{\multirow{2}{*}{$ n_{2} = n_{3} = 100, n_{0} = n_{1} = n_{4} = n_{5} = 200 $}} \\ \\ & ELRT & $ 90.1\% $ & $ 0.642 $ & $ 94.5\% $ & $ 0.843 $ \\ & Wald & $ 86.7\% $ & $ 0.572 $ & $ 91.8\% $ & $ 0.744 $ \\ & Nonparametric & $ 81.3\% $ & $ 0.804 $ & $ 86.7\% $ & $ 1.046 $ \\ \bottomrule \end{tabular} } \end{table} \subsection{Data generated from gamma distributions} In applications, income, lifetime, expenditure, and strength data are positive and skewed. Gamma or Weibull distributions are often used for statistical inference in such applications. In the presence of multiple samples, replacing the parametric model by a DRM with $ \bq (x) = (1, x, \log x)^{\top} $ is an attractive option to reduce the risk of model misspecification. We generate 1000 sets of $ m+1 = 6 $ independent samples of sizes $ n_{r} = 100 \text { and } 200 $ from gamma distributions with shape parameters $ (5, 5, 6, 6, 7, 7) $ and scale parameters $ (2, 1.9, 1.8, 1.7, 1.6, 1.5) $. We test the hypothesis on the medians of $ G_{1} $ and $ G_{2} $: \[ H_{0}: (\xi_{1}, \xi_{2}) = (\xi_{1}^{*}, \xi_{2}^{*}) \, \, \, \text { versus } H_{1}: (\xi_{1}, \xi_{2}) \neq (\xi_{1}^{*}, \xi_{2}^{*}), \] where $ \xi_{1}^{*}, \xi_{2}^{*} $ are the true medians of $ \mathrm {Gamma} (5, 1.9) $ and $ \mathrm {Gamma} (6, 1.8) $, respectively. Note that although we simulate data from gamma distributions, the parametric information does not play any role in the data analysis. Figure \ref {Gamma_QQ} shows the Q-Q plot based on 1000 $ R_{n} $ values against the theoretical limiting distribution $ \chi_{2}^{2} $. The points in the Q-Q plot are close to (but slightly above) the 45-degree line in the range from 0 to 6. This implies that the corresponding tests will have levels close to the nominal values. Overall, the chi-square approximation is satisfactory.
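For reference, the true medians entering $ H_{0} $ have no closed form but are easily evaluated numerically; a one-line sketch using the gamma quantile function in {\tt scipy} (shape--scale parametrization as above):

\begin{verbatim}
from scipy.stats import gamma

xi1_star = gamma.ppf(0.5, a=5, scale=1.9)   # median of Gamma(5, 1.9)
xi2_star = gamma.ppf(0.5, a=6, scale=1.8)   # median of Gamma(6, 1.8)
print(xi1_star, xi2_star)                   # approximately 8.87 and 10.21
\end{verbatim}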
\begin{figure}[!ht] \centering \caption{Q-Q plot of $ R_{n} $ values against $ \chi_{2}^{2} $ based on gamma data of equal sample size $ n_{r} = 100 $.} \label{Gamma_QQ} \includegraphics[height = 0.8\textwidth, width = 0.8\textwidth]{Gamma-pdf/QQplotGamma_n100.pdf} \end{figure} In Figure \ref {Gamma_CIs1}, we depict the $ 95\% $ confidence regions of $ \bxi = (\xi_{1}, \xi_{2}) $ using the ELRT in \eqref {ELRT_CR}, the Wald method in \eqref {Wald_CR}, and the nonparametric method in \eqref {Nonpara_CR}, based on a typical simulated data set with $ \bxi^{*} $ marked as a red diamond. Clearly, the ELRT-based confidence region has a smaller area and is therefore more efficient. In Table~\ref {Gamma_Ava_Ecv1} we make direct quantitative comparisons of the coverage probabilities and areas. Both the ELRT and Wald methods under the DRM have empirical coverage probabilities very close to the nominal levels. The nonparametric confidence regions have overcoverage and inflated sizes. We again conclude that the ELRT is superior. \begin{figure}[!ht] \centering \caption{Confidence regions of $ (\xi_{1}, \xi_{2}) $ by ELRT (solid), Wald (dashed), and nonparametric (dotted) methods, based on a simulated gamma data set of equal sample size $ n_{r} = 100 $. The true quantiles are marked with a diamond. The level of confidence is $ 95\% $.} \label{Gamma_CIs1} \includegraphics[height = 8cm, width = \textwidth]{Gamma-pdf/CIContoursGamma_n100.pdf} \end{figure} \begin{table}[!ht] \centering \caption{Empirical coverage probabilities and average areas based on gamma data of equal sample size.} \label{Gamma_Ava_Ecv1} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccc} \toprule \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{90\%} & \multicolumn{2}{c}{95\%}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} & & Coverage probability & Area & Coverage probability & Area \\ \midrule \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 100 $}} \\ \\ & ELRT & $ 88.3\% $ & $ 2.808 $ & $ 94.2\% $ & $ 3.665 $ \\ & Wald & $ 89.9\% $ & $ 2.953 $ & $ 95.3\% $ & $ 3.843 $ \\ & Nonparametric & $ 92.1\% $ & $ 4.264 $ & $ 95.2\% $ & $ 5.547 $ \\ \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 200 $}} \\ \\ & ELRT & $ 88.6\% $ & $ 1.395 $ & $ 94.4\% $ & $ 1.822 $ \\ & Wald & $ 89.7\% $ & $ 1.451 $ & $ 95.3\% $ & $ 1.889 $ \\ & Nonparametric & $ 89.3\% $ & $ 2.111 $ & $ 94.3\% $ & $ 2.747 $ \\ \bottomrule \end{tabular} } \end{table} We also study the confidence regions for a pair of lower quantiles: the $ 5 $th percentile of $ G_{4} $ and the $ 10 $th percentile of $ G_{5} $. Figure~\ref {Gamma_CIs2} shows the three $ 95\% $ confidence regions based on a simulated data set. Table~\ref {Gamma_Ava_Ecv2} gives the average areas and coverage probabilities of the three confidence regions, based on 1000 repetitions. The ELRT method is still the most efficient. While maintaining accurate coverage probabilities, the ELRT confidence regions have satisfactory areas comparable to those of the Wald confidence regions. \begin{figure}[!ht] \centering \caption{Confidence regions of $ (\xi_{4}, \xi_{5}) $ by ELRT (solid), Wald (dashed), and nonparametric (dotted) methods, based on a simulated gamma data set of equal sample size $ n_{r} = 100 $. The true quantiles are marked with a diamond.
The level of confidence is $ 95\% $.} \label{Gamma_CIs2} \includegraphics[height = 8cm, width = \textwidth]{Gamma-pdf/CIContoursGamma2_n100.pdf} \end{figure} \begin{table}[!ht] \centering \caption{Empirical coverage probabilities and average areas based on gamma data of equal sample size, for the $ 5 $th percentile of $ G_{4} $ and the $ 10 $th percentile of $ G_{5} $.} \label{Gamma_Ava_Ecv2} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccc} \toprule \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{90\%} & \multicolumn{2}{c}{95\%}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} & & Coverage probability & Area & Coverage probability & Area \\ \midrule \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 100 $}} \\ \\ & ELRT & $ 88.4\% $ & $ 2.312 $ & $ 93.7\% $ & $ 3.022 $ \\ & Wald & $ 86.5\% $ & $ 2.236 $ & $ 92.0\% $ & $ 2.910 $ \\ & Nonparametric & $ 82.8\% $ & $ 3.250 $ & $ 88.7\% $ & $ 4.229 $ \\ \multicolumn{6}{c}{\multirow{2}{*}{$ n_{r} = 200 $}} \\ \\ & ELRT & $ 90.8\% $ & $ 1.139 $ & $ 95.3\% $ & $ 1.486 $ \\ & Wald & $ 90.4\% $ & $ 1.114 $ & $ 95.4\% $ & $ 1.449 $ \\ & Nonparametric & $ 87.0\% $ & $ 1.684 $ & $ 92.6\% $ & $ 2.191 $ \\ \bottomrule \end{tabular} } \end{table} \endinput
\section{Introduction} Up to half of the baryons in the present-day Universe are unaccounted for. We know how many baryons were present in the early Universe from fluctuations in the cosmic microwave background (CMB), and some 2 billion years later at redshift 3, the majority of the baryon budget of the Universe could be found in galaxies, proto-clusters and, mostly, in the Lyman-$\alpha$ forest. In the present-day Universe, however, if we take stock of the known baryon populations we come up short, and this has given rise to the `missing baryon problem' (see e.g\@. \citealp{Nicastro2017} for a review). Cosmological simulations have long pointed to the likely explanation that these baryons reside in a warm-hot intergalactic medium (WHIM) that is distributed in a large-scale filamentary network, the so-called `cosmic web' (e.g\@. \citealp{Cen1999}). However, due to its extremely diffuse nature, intermediate temperature range ($10^5 - 10^7$ K), and highly ionised state, it is very difficult to detect. The low density and intermediate temperature of this medium result in only very weak X-ray emission via thermal free-free radiation; the highly ionised state makes detection via absorption/emission lines difficult; and the low mass, low density environment makes detection via the Sunyaev-Zel'dovich effect problematic (with the exception of bridges connecting close pairs of galaxy clusters). Nonetheless, there have been early attempts to detect the cosmic web by way of some of these mechanisms. For example, \citet{Eckert2015} measured residual X-ray emission as large as \SI{8}{\mega \parsec} in scale around galaxy cluster Abell 2744, implying the existence of a large scale, energetic baryon population. \citet{Nicastro2018} claimed that Oxygen \textsc{vii} absorption features in a distant quasar pointed to the detection of an intervening overdense baryonic region. \citet{Tanimura2019} and \citet{deGraaff2019} have both independently claimed to have made statistical detections of the intercluster medium by way of the Sunyaev-Zel'dovich effect. Most recently, \citet{Macquart2020} have used the dispersion measure of a small number of localised fast radio bursts (FRBs) to measure the electron column density along the line of sight to these events, deriving a baryon count of the Universe that is consistent with the value inferred from CMB measurements. Recently, there has been work to understand the radio emission properties of the cosmic web. Infall accretion shocks along the length of filaments and at the edge of clusters should have high Mach numbers ($\mathcal{M} \approx$ 10-100). These in turn are capable of producing relativistic electrons and, given the presence of background magnetic fields, associated synchrotron emission (e.g\@. \citealp{Wilcots2004}). Such emission would provide not only confirmation of the cosmic web but also a probe of intercluster magnetic field strengths, which until now have been largely unknown. Early detection attempts such as \citet{Brown2017} and \citet{Vernstrom2017} have assumed synchrotron cosmic web emission to be spatially smooth and characteristically large in angular scale, in an effort to distinguish it from the more general extra-galactic synchrotron emission produced by radio galaxies.
In \citet{Vernstrom2017}, for example, low frequency radio images were cross-correlated with galaxy density maps (as tracers of large scale structure), with the expectation that the synchrotron cosmic web would appear as excess radio emission with angular scales larger than the embedded radio galaxy population. More recent work utilising full magneto-hydrodynamic simulations has attempted to directly model the filamentary accretion shocks and from this derive values for their radio luminosity \citep{Vazza2015,Vazza2019}. As is typical of shock-driven synchrotron emission, these simulations suggest radio emission with steep spectral indices of approximately -1 to -1.5, as well as peak radio surface brightnesses on the order of \SI{E-6}{\jansky \per \arcsecond \squared}. Such simulations, however, depend on assumptions about filamentary magnetic field strengths and electron acceleration efficiencies, which are poorly constrained or understood. To date, these attempts at detecting the synchrotron cosmic web have been unsuccessful, with two exceptions. In the first, a small `bridge' between the two interacting clusters Abell 399 and 401 was recently reported to have been detected by \citet{Govoni2019}; however, this emission is primarily the result of a pre-merger cluster-cluster interaction rather than the more general infall accretion shocks we expect to find in the cosmic web. The second, by \citet{Vacca2018} (henceforth VA18), is the focus of this current follow-up study. VA18 reported the detection of 28 candidate, large-scale synchrotron radio sources using the single dish Sardinia Radio Telescope (SRT; \citealp{SRT}) and archival interferometric NRAO VLA\footnote{National Radio Astronomy Observatory Very Large Array} Sky Survey (NVSS; \citealp{Condon1998}) data observed at 1.4 GHz. These sources were observed in an \ang{8} $\times$ \ang{8} region of sky centred at RA \ra{5;0;0} and Dec \ang{5;48;0}. This region of sky contains 43 galaxy clusters, thirteen of which have spectroscopic redshifts, with nine being in the redshift range $0.08 \leq z \leq 0.15$ (see Tables 1 \& 2 in VA18 for the full list). Additionally, some of these clusters have been identified as members of superclusters: \citet{Einasto2002} have catalogued superclusters SCL 061 and SCL 062, and \citet{ChowMartinez2014} have catalogued MSCC 145, which partially overlaps with SCL 062. However, VA18 exclude the possibility that these sources are associated with galaxy cluster cores due to the lack of associated X-ray emission typical of dense cluster environments; indeed, the sources populate a previously empty region of the X-ray luminosity / radio power space ($L_\text{X,0.1-2.4keV}-P_\text{1.4GHz}$). Instead, they have raised the possibility that these new-found synchrotron sources are in fact a detection of radio emission from the intercluster medium, that is, the synchrotron cosmic web. Given the potential significance of these candidate sources and the new population of synchrotron sources they may represent, we here report on lower frequency observations using the Murchison Widefield Array (MWA; \citealp{Tingay2013, Wayth2018}) at 154 MHz and the Australian Square Kilometre Array Pathfinder (ASKAP; \citealp{Hotan2014}) at 887 MHz to verify the candidate sources and measure their spectral properties. These lower frequencies are ideal for detecting synchrotron emission.
The spectral energy distribution (SED) of synchrotron sources can usually be well approximated by a power law, where the spectral flux density $S$ is a function of frequency $\nu$ of the form: \begin{equation} S\left(\nu\right) \propto \nu^\alpha \end{equation} The coefficient $\alpha$ is known as the spectral index. For astronomical synchrotron sources, this coefficient depends, amongst other things, on the electron injection power coupled with the aging dynamics of the electron population. Active radio galaxies (AGN) tend to have a shallower SED with $\alpha \approx -0.7$, whilst as populations of relativistic electrons age, for example in AGN remnants, their SED tends to steepen. Synchrotron shocks tracing the cosmic web should have spectral indices of $-0.7$ or steeper, and most likely $-1$ or steeper \citep{Vazza2015}. This typically negative spectral index ensures that synchrotron sources are brightest at lower radio frequencies. Thus, these lower frequency observations take advantage of the expected brighter emission to corroborate the detections in VA18, and additionally provide us with spectral information that can allow us to infer the emission mechanisms of any confirmed candidate sources. This paper proceeds as follows: in Section 2 we briefly review the observations and data of VA18, before detailing, in Section 3, our own observations with both the MWA and ASKAP, including our point source subtraction method. We measure our surface brightness sensitivity in Section 4, and in Section 5 we present the results of our observations. Finally, in Section 6 we discuss at length all potential corroborating detections, as well as draw on other extant surveys to help classify these emission sources. \section{SRT+NVSS data} VA18 fully document their observations and data processing methods, which we briefly summarize here. The SRT data consisted of 18 hours of observing in the L-band (1.3-1.8 GHz) using the `on-the-fly' mapping strategy, as well as some additional time on specific sub fields. The SRT has a beam size of \SI{13.9 x 12.4}{\arcminute} at 1550 MHz and the resulting images had a noise of \SI{20}{\milli \jansky \per \beam}. In addition to this low-resolution, single-dish data, VA18 also obtained archival NVSS observations of the field that were in two bands centred at 1.4~GHz, and which had a resolution of \SI{45}{\arcsecond} and an average noise of \SI{0.45}{\milli \jansky \per \beam}. The data sets were combined using SCUBE \citep{Murgia2016}, which performs a weighted sum in Fourier space of the power spectra of the single dish and the NVSS data after correcting for any misalignment of overall power on overlapping angular scales. To perform the combination, an SRT image was produced over the same frequency range as the NVSS image. The combined power spectrum was tapered with the NVSS beam and the data back-transformed to obtain the combined image. The resulting combined image was finally convolved to a resolution of \SI{3.5 x 3.5}{\arcminute} to accentuate large-scale emission, producing the `SRT+NVSS' combined map with a noise of \SI{3.7}{\milli \jansky \per \beam}. To differentiate between compact emission and the presumed large-scale emission of the cosmic web, VA18 subtracted point sources from the `SRT+NVSS' map using an image-plane subtraction process.
This is described in full in their paper but briefly: the brightest point source in the map was identified, fitted with a 2D elliptical Gaussian sitting on top of an arbitrarily oriented plane (to account for background emission), and subtracted. The process was repeated by then subtracting the next brightest source, and so on, until a user-defined threshold was reached. This image subtraction process was performed on the SRT+NVSS map prior to convolving the image from its native \SI{45}{\arcsecond} resolution. The final `SRT+NVSS-diffuse' map, at \SI{3.5 x 3.5}{\arcminute} resolution, has a noise of \SI{3.1}{\milli \jansky \per \beam}.\footnote{Note that this is different to the value of \SI{2.5}{\milli \jansky \per \beam} given in VA18, and was calculated independently on the supplied final image. We also note that the overall mean of the image is offset from zero by \SI{-2.1}{\milli \jansky \per \beam}. When calculating detection contours, we offset multiples of our noise value by this global mean. This independent process has resulted in a small difference between the SRT+NVSS-diffuse contours published here and in VA18.} The choice to complement existing NVSS data with the deep, single dish SRT observation arises from the assumption that nearby cosmic web emission will be large-scale, smoothly varying, and highly diffuse. Typical interferometers like the VLA lack very short and `zero spacing' baselines, and as a result are likely to be increasingly insensitive to, and eventually `resolve out', emission on these large angular scales. Single dish observations like the SRT are sensitive to these large angular scale features but typically have such low resolution that unrelated compact radio sources are blended together. By combining the two, VA18 make use of the strengths of each to obtain higher resolution data with excellent sensitivity to diffuse, large scale emission. Finally, all candidate sources were identified from the SRT+NVSS-diffuse image using a threshold three times greater than the calculated map noise ($3 \sigma$). The resulting 35 sources were grouped into ten regions, labelled A through to J. Of these 35, VA18 classify five as likely to be the result of imperfect compact source subtraction, and two as known cluster halos, leaving 28 sources as candidates for large-scale, diffuse synchrotron emission. \section{Radio observations and data processing} To independently investigate the results of VA18 further, these fields were observed with the MWA and ASKAP. \begin{figure*} \centering \includegraphics[width=\linewidth]{baselines.pdf} \caption{A comparison of baseline lengths for each of MWA Phase I (MWA1), MWA Phase 2 extended configuration (MWA2) and ASKAP. The lengths are measured in wavelengths (i.e. $\nicefrac{|b|}{\lambda}$, with $\lambda \approx 1.94$ m for the MWA and $\lambda \approx 0.34$ m for ASKAP), which allows us to compare the baseline coverage despite the different observing frequencies. All plots exclude baselines that were flagged. The dashed line indicates a baseline length that would result in a fringe pattern on the sky with angular periodicity of \SI{3.5}{\arcminute}; baselines shorter than this are sensitive to even larger spatial scales. \textit{Top:} The baseline distribution out to 6000 wavelengths, binned in intervals of 100.
\textit{Bottom:} A zoom of the baselines under 1000 wavelengths, binned in intervals of 25.} \label{fig:baselines} \end{figure*} \begin{table*} \centering \begin{tabular}{lcccccc} \toprule Image Name & Instrument & Duration & Frequency & Briggs Weighting & Resolution & Noise \\ & & [hours] & [\SI{}{\mega \hertz}] & & [arcsecond$^2$] & [\SI{}{\jansky \per \beam}] \\ \midrule MWA-1 & MWA & 2.3 & 154 & 0 & \SI{210 x 210}{} & \SI{8.4E-3}{} \\ MWA-2 & MWA & 6.5 & 154 & 0 & \SI{79 x 62}{} & \SI{2.3E-3}{} \\ MWA-subtracted & MWA & 6.5 & 154 & 0 & \SI{210 x 210}{} & \SI{5.4E-3}{} \\ ASKAP-B+0.5 & ASKAP & 13 & 887 & 0.5 & \SI{21 x 17}{} & \SI{4.3E-5}{} \\ ASKAP-B-1 & ASKAP & 13 & 887 & -1 & \SI{9.6 x 7.6}{} & \SI{5.8E-5}{} \\ ASKAP-subtracted & ASKAP & 13 & 887 & 0.5 & \SI{210 x 210}{} & \SI{7.5E-4}{} \\ SRT+NVSS-diffuse & SRT \& VLA & 18 (SRT) & 1400 & - & \SI{210 x 210}{} & \SI{3.1E-3}{} \\ \bottomrule \end{tabular} \caption{List of images used in this work. Resolution and noise values are given for the centre of the field. Resolution values describe the major and minor axes of an elliptical Gaussian fitted to the synthesised beam. The bandwidth of all MWA images is \SI{30.72}{\mega \hertz} and the bandwidth of all ASKAP images is \SI{288}{\mega \hertz}.} \label{table:images} \end{table*} \subsection{Murchison Widefield Array} The MWA data consists of two distinct datasets that were collected during different configurations of the array, known as `Phase I' and `Phase II', described in detail in \citet{Tingay2013} and \citet{Wayth2018}, respectively. While both configurations consisted of 128 tiles and had identical point source sensitivity, the tiles were arranged differently, resulting in a different set of baselines (see \autoref{fig:baselines}). Phase I had a maximum baseline length of about \SI{2.6}{\kilo \metre} as well as a large number of short baselines, many under \SI{100}{\metre}. These short baselines gave Phase I excellent surface brightness sensitivity at the expense of poor resolution, which at \SI{154}{\mega \hertz} could be several arcminutes depending on the exact baseline weighting scheme used. Phase I is excellent at detecting faint, extended emission; however, the poor resolution often necessitates additional, high resolution observations to discern whether such emission is truly extended or merely the result of blending of nearby sources (e.g.\ \citealp{Hindson14,Zheng18}). Phase II (extended configuration), on the other hand, redistributed the 128 tiles out to a maximum baseline of about \SI{5.4}{\kilo \metre}, with a sparser sampling of baselines under 500 m. Phase II has higher resolution at about \SI{65}{\arcsecond} at \SI{154}{\mega \hertz} and a better behaved synthesised beam (point spread function), but less sensitivity to diffuse emission. In this follow-up, we make use of observations using both Phase I and II configurations so as to leverage their respective strengths. The Phase I configuration data are archival observations that were collected at various times from 2013-2016 and consist of just over 2 hours of observations. The Phase II observations consist of 6 hours of observations at 154 MHz from March 2019, plus an additional 30 minutes of archival observations from the first quarter of 2018. The latter 30 minutes were observed at high elevations at which the MWA is most sensitive, and so contribute a disproportionate amount of signal to the final integration.
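As a quick illustration of the baseline-length/angular-scale relation underlying \autoref{fig:baselines} (this snippet is our own, for illustration only, and is not part of any processing pipeline), a baseline of $\nicefrac{|b|}{\lambda}$ wavelengths produces a fringe pattern on the sky with an angular period of roughly $1/(\nicefrac{|b|}{\lambda})$ radians:
\begin{verbatim}
import numpy as np

# Fringe period (arcmin) of a baseline measured in wavelengths:
def fringe_scale_arcmin(baseline_wavelengths):
    return np.degrees(1.0 / baseline_wavelengths) * 60.0

# Baseline length whose fringe period is 3.5 arcmin, i.e. the
# dashed line in the baseline histograms:
b_35 = 1.0 / np.radians(3.5 / 60.0)
print(b_35)          # ~982 wavelengths
print(b_35 * 1.94)   # ~1.9 km for the MWA at 154 MHz
print(b_35 * 0.34)   # ~334 m for ASKAP at 887 MHz
\end{verbatim}
Baselines shorter than these lengths are sensitive to emission on angular scales larger than \SI{3.5}{\arcminute}, which is why the dense sub-\SI{100}{\metre} sampling of Phase I translates into superior surface brightness sensitivity.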
All MWA observations were made at a central frequency of \SI{154}{\mega \hertz} with a \SI{30.72}{\mega \hertz} bandwidth. The data were originally collected with a \SI{10}{\kilo \hertz} and \SI{0.5}{\second} resolution, and were averaged down to \SI{40}{\kilo \hertz} and \SI{4}{\second} prior to calibration and processing. MWA calibration and imaging workflows operate independently on short `snapshot' observations that are typically about 2 minutes in length; this workflow is necessitated by the complicated MWA primary beam and the stationary, non-tracking array. Snapshots are short enough in duration that we can assume a constant primary beam model, and the MWA, with its more than 8,000 baselines, sufficiently samples the Fourier plane ($uv$ space) such that it is possible to image and deconvolve on time scales as short as two minutes. The downside of such a workflow is that final mosaics are only \clean{}ed down to the noise level of a single snapshot, typically making in-field sidelobe confusion the dominant source of noise, as well as prohibiting jointly imaging the Phase I and Phase II observations. For this follow-up, each snapshot was independently calibrated with an `in-field' sky model using the GLEAM extra-galactic catalogue \citep{HurleyWalker2016} and the internal MWA tool \textsc{calibrate} \citep{Offringa2016}, which calculated full Jones matrix corrections across the band in 120 kHz steps. Additionally, we flagged baselines shorter than 15 wavelengths at the observing frequency, as these baselines tended to pick up significant amounts of nearby Galactic emission on scales larger than several degrees. After the initial sky-model calibration, snapshots were imaged using \textsc{wsclean} \citep{Offringa2014} with a shallow \clean{} and self-calibrated using the \clean{}-component model. A final snapshot image was then produced using a Briggs 0 weighting of the baselines with a $3 \sigma$ mask and $1 \sigma$ threshold. \clean{}ing was configured to use the \textsc{wsclean} multiscale algorithm with default settings as well as joined-channel \clean{}ing with four channels and two Taylor terms (see \citealp{Offringa2017} for a description of the implementation of these algorithms). The final image was primary beam corrected and cross-matched with the GLEAM catalogue to correct the flux scale. Finally, the full set of snapshots was convolved to a common beam size (using the maximum beam size of any single snapshot), regridded onto a common projection and stacked in the image domain to give the full integration. This particular field is problematic due to the presence of a number of bright, extended sources within the large MWA field of view, specifically the Crab Nebula, the Orion Nebula and a number of large-scale supernova remnants. As a result of calibration and beam errors, these bright sources cast artifacts throughout the image and raise the noise level higher than is typical. This is particularly pronounced in the Phase I observations due to the increased power of these extended sources on the shorter baselines. We provide two images, MWA-1 and MWA-2, using the method described here for each of the Phase I and II configurations, respectively. The properties and noise values for each of these images are provided in \autoref{table:images}. In addition, we provide a third image---`MWA-subtracted'---using the Phase II data but with the point source subtraction technique described in Section \ref{section:subtraction} applied.
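The final image-domain stacking step described above can be summarised with a minimal sketch (ours, for illustration only; the snapshots are assumed to have already been convolved to a common beam and regridded onto a common projection, and the inverse-variance weighting is an assumption, as the exact per-snapshot weighting scheme is not specified here):
\begin{verbatim}
import numpy as np

def stack_snapshots(images, noises):
    # images: array of shape (N, H, W), one regridded,
    # common-beam image per snapshot.
    # noises: per-snapshot RMS noise estimates.
    images = np.asarray(images)
    weights = 1.0 / np.asarray(noises) ** 2   # assumed weighting
    return np.tensordot(weights, images, axes=1) / weights.sum()
\end{verbatim}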
\subsection{ASKAP} ASKAP undertook two observations of this field as part of the early testing programme for the newly built array and data processing pipelines. The ASKAP array is situated at the Murchison Radio-astronomy Observatory, alongside the MWA. The array consists of 36 tracking dishes distributed quasi-randomly so as to produce baselines ranging in length from \SI{22}{\metre} through to a maximum of \SI{6.4}{\kilo \metre} (see \autoref{fig:baselines}). This large range of baselines gives ASKAP both high resolution as well as good sensitivity to extended emission, with almost a tenth of the baselines sensitive to emission on angular scales greater than \SI{3.5}{\arcminute} at \SI{887}{\mega \hertz}. Each dish is \SI{12}{\metre} in diameter, and at \SI{887}{\mega \hertz} the resulting primary beam has a full width half maximum (FWHM) of \SI{1.76}{\degree}. Additionally, each ASKAP dish is equipped with a phased array feed (PAF) allowing for 36 beams to be formed at once; depending on the configuration of these beams, this can allow for a much larger area of sky to be observed in a single pointing. The two observations (PI: Vernstrom) occurred on 10 March 2019 and 28 June 2019 for 5 hours and 8 hours, respectively, and were observed at a central frequency of 887 MHz with a bandwidth of 288 MHz. The PAF was configured in the `square6x6' configuration for the first observation and in the `closepack36' configuration for the second \citep{McConnell2019}; both allowed for the simultaneous observation of almost the entire \SI{8 x 8}{\degree} field. Both of these observations were independently processed. The initial bandpass and gain calibration was completed by the automated ASKAPSoft pipelines\footnote{The ASKAPSoft pipeline does not yet have a paper describing its operation, however the current manual is available at \url{https://www.atnf.csiro.au/computing/software/askapsoft/sdp/docs/current/index.html}} using PKS B1934-638 as the primary calibrator, providing both bandpass and phase calibration. Note that secondary phase calibrators are not used by ASKAP, as the instrumental phases are assumed to remain stable throughout the observation. After this initial calibration, the observation was averaged to \SI{1}{\mega \hertz} channels and \SI{10}{\second} intervals. In addition, we applied two rounds of self-calibration for phase gains, and a final round of combined amplitude and phase gains, using \textsc{casa} \citep{CASA2007}. Next, each of the 36 beams was imaged with \textsc{wsclean} using the following \clean{}ing configuration: $3\sigma$ mask, $1\sigma$ threshold, multiscale enabled and joined-channels configured with six channels and two Taylor terms. We were forced to exclude the six baselines under \SI{60}{\metre} in length due to large-scale fringe patterns across the field caused by these baselines; the origin of these fringes remains unclear. The final 36 beam images were primary beam corrected, truncated at their half power radius, and mosaicked using their respective primary beam weights. Finally, the mosaics from each observation were summed and weighted by the mean noise across each image. We provide two separate images, ASKAP-B+0.5 and ASKAP-B-1, imaged with Briggs weightings of 0.5 and -1, respectively. The former has good sensitivity to extended emission with a synthesised beam of about \SI{20}{\arcsecond}, whilst the latter has twice the resolution. Their combination can aid in discerning between the diffuse and compact components of regions of extended emission.
Their respective noise values and other details are provided in \autoref{table:images}. Additionally, we also provide a diffuse emission map, referred to hereafter as `ASKAP-subtracted', with point sources subtracted using the method described in the next section. \subsection{Source subtraction} \label{section:subtraction} From each of the MWA Phase II and ASKAP observations we created additional, lower resolution images with point sources subtracted so as to emphasise diffuse emission. Rather than attempt to fit and subtract point sources from the final, deconvolved images as was done to produce the SRT+NVSS-diffuse image, we took advantage of the \clean{} deconvolution process itself. Recall that \clean{} runs in a loop whereby it finds the brightest peak in the dirty image, models a point source at this position with some fraction of the measured peak value (the `gain' parameter, typically 0.1), and subtracts this from the image (during `minor' cycles) and the visibilities (during `major' cycles). This loop continues, each time searching for a peak in the residual image and subtracting it out, until some stopping condition is met, typically when the brightest remaining peak falls under some threshold. An output of this process is a final residual image, with the \clean{} components fully subtracted. This residual image will be devoid of any bright sources; however, large-scale, faint emission will typically still be present, hidden amongst the noise, and it is this image that we use to construct our diffuse maps. We used \textsc{wsclean} to perform the imaging and deconvolution with stopping conditions controlled by the mask and threshold options. The first of these options constructs a mask so that we only search for peaks within regions brighter than some factor of the noise, and the second stops the \clean{}ing when no peaks greater than this factor of the noise remain within the masked region. For the ASKAP-subtracted image, we set the threshold value to $1.5 \sigma$, which is fairly typical; however, we set the mask value to $8 \sigma$, which is higher than usual. The result of this is that all bright regions of the map (greater than $8 \sigma$) are \clean{}ed all the way down to $1.5 \sigma$, whilst any regions with faint emission beneath this $8 \sigma$ threshold are left in the final residual map. This first round of \clean{}ing was run with \textsc{wsclean} multiscale disabled. Next, we continued to \clean{} the residual map, but with \textsc{wsclean}'s multiscale \clean{} algorithm enabled and with a deeper mask of $3 \sigma$. The \clean{} components found in this second round of deconvolution were not subtracted, and were either very faint point sources or large-scale extended emission. Finally, this image was convolved up to a resolution of \SI{3.5 x 3.5}{\arcminute} so as to emphasise any diffuse emission present whilst suppressing any remaining faint point sources. The final image has a noise of \SI{0.75}{\milli \jansky \per \beam} and identical resolution to the SRT+NVSS-diffuse image. We used a similar process for the MWA-subtracted image. However, since we image and deconvolve each snapshot independently, we use different values for each of the mask and threshold parameters. Typical two minute snapshots have a noise of about \SI{12}{\milli \jansky \per \beam}, whilst the final MWA-2 image has a noise of \SI{2.3}{\milli \jansky \per \beam}.
To obtain the same \clean{}ing thresholds as in ASKAP would require us to \clean{} to a threshold under the noise of the individual snapshots, which is both unstable and unphysical. Instead, we set each of the mask and threshold to their lowest stable values of 3 and 1, respectively, meaning the residual images contain faint emission up to approximately \SI{35}{\milli \jansky \per \beam}. Since we are already \clean{}ing down to these limits, the residual images for each snapshot are not further \clean{}ed using multiscale. Finally, as with the ASKAP-subtracted image, we convolved each snapshot to a resolution of \SI{3.5 x 3.5}{\arcminute} and stacked the images. The final MWA-subtracted image has a noise of \SI{5.4}{\milli \jansky \per \beam}. \section{Surface Brightness Sensitivity} \label{sec:surfacebrightness} \begin{figure*} \centering \begin{subfigure}{0.32\linewidth} \includegraphics[width=1\linewidth,trim={1.4cm 0 0.3cm 0}]{154-sensitivity.pdf} \caption{MWA comparison 154 MHz} \label{fig:sensitivity:a} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=1\linewidth,trim={1.4cm 0 0.3cm 0},clip]{887-sensitivity.pdf} \caption{ASKAP comparison 887 MHz} \label{fig:sensitivity:b} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=1\linewidth,trim={1.4cm 0 0.3cm 0},clip]{mwa-askap-sensitivity.pdf} \caption{MWA \& ASKAP 154 MHz} \label{fig:sensitivity:c} \end{subfigure} \caption{Surface brightness sensitivity values: \textbf{(a)} 154 MHz (MWA-1, MWA-2, and MWA-subtracted); \textbf{(b)} 887 MHz (ASKAP-B+0.5, ASKAP-subtracted). The SRT+NVSS-diffuse values (dashed blue line) are frequency adjusted from 1.4 GHz, and represent the minimum surface brightness required to corroborate candidate sources in VA18 assuming a spectral index of -0.7 or steeper. \textbf{(c)} Direct comparison at 154 MHz of MWA and ASKAP surface brightness sensitivity, where ASKAP has been frequency adjusted from 887 MHz assuming a spectral index range $-1.1 < \alpha < -0.7$, with the solid line at the midpoint $\alpha = -0.9$.} \label{fig:sensitivity} \end{figure*} Surface brightness sensitivity, $\sigma_{\text{SB}}$, measures an interferometer's response to extended emission; specifically, it is the minimum surface brightness that is detectable above the noise. As we are searching for large extended emission---which we assume to be smoothly varying---surface brightness is a more useful measure than the more typically quoted point source sensitivity. In this section we measure and compare the surface brightness sensitivity of each of MWA Phase I, Phase II and ASKAP. An interferometer's sensitivity to extended emission is dependent on the same factors that contribute to point source sensitivity (such as system temperature, effective collecting area, number of antennae and baselines, etc.) but, crucially, also depends on the geometry of the array. In particular, as the angular scale of emission increases, the power spectrum of the source in visibility space shifts towards the zeroth spacing, and therefore short baselines are essential to sample this region. Surface brightness sensitivity varies based on the angular scale of the emission. For sources with an angular scale smaller than the synthesised beam, sensitivity scales approximately with the area of the source, until becoming most sensitive when the scale of the source matches the scale of the synthesised beam.
On the other hand, extended emission above a threshold angular scale will have its power spectrum so condensed around the zeroth spacing that few baselines will properly sample its power, and the sensitivity to sources above this scale will drop as we `resolve out' the source. We attempt to estimate our surface brightness sensitivity in the following way. We inject simulated two-dimensional, circular Gaussian sources with constant peak brightness, $P$ [Jy degree$^{-2}$], and varying FWHM values into the visibilities of the MWA Phase I, Phase II and ASKAP measurement sets. We then produce dirty images of each and measure the peak flux response $S_{\text{peak}}$ [Jy beam$^{-1}$] at the centre of each Gaussian in the resulting image. We estimate the surface brightness sensitivity as: \begin{equation} \sigma_{\text{SB}} = n \sigma_{\text{RMS}} \frac{P}{S_{\text{peak}}} \label{eqn:surfacebrightness} \end{equation} where $\sigma_{\text{RMS}}$ [Jy beam$^{-1}$] is the measured noise of our final images as detailed in \autoref{table:images}, and $n$ is the factor above the noise required for a detection (taken to be 3 in all cases). The fraction $\nicefrac{P}{S_{\text{peak}}}$ measures the response to the simulated surface emission and is solely a function of the shape of the synthesised beam (i.e\@. the PSF); it is constant irrespective of the actual value of the simulated surface emission. Given this constant fraction, \autoref{eqn:surfacebrightness} allows us to calculate just how bright the simulated surface brightness would need to be for the response to rise above the threshold for detection (i.e\@. $n\sigma_{\text{RMS}}$). In this sensitivity estimation we use the dirty image as opposed to the deconvolved image, as this better simulates how very faint sources are processed. At the limits of surface brightness sensitivity, emission in our images is buried amongst the image noise, and \clean{}ing thresholds will result in such emission being at most only partially deconvolved. Moreover, deconvolving a source makes it brighter, and so by using the dirty images we are properly modelling the worst case. To compare these values with the SRT+NVSS-diffuse image, we use their stated beam size of \SI{3.5 x 3.5}{\arcminute} and simply convolve our Gaussian sky models with a Gaussian beam of this size. From the resulting images, we measure the peak flux response. This process assumes perfect and complete $uv$ coverage with no interfering sidelobes, and so is a lower limit (i.e\@. best case) for the surface brightness sensitivity of the SRT+NVSS-diffuse image. There is one further complication. We would like to answer the question: if emission is detectable in the SRT+NVSS-diffuse image at 1.4 GHz, what level of sensitivity is required at 154 MHz and 887 MHz to be able to detect the same emission? To make this comparison, we need to make assumptions about the spectrum of such emission. Shock emission, such as from relics, halos or filamentary accretion shocks, typically has spectral indices of approximately -1 or steeper, while -0.7 is more typical of AGN emission. We choose here to use the more conservative value of -0.7.
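A minimal sketch of this estimate (our own illustration; the function and variable names are ours and not drawn from any pipeline) is:
\begin{verbatim}
def surface_brightness_sensitivity(P, S_peak, sigma_rms, n=3.0):
    # P:         simulated surface brightness [Jy / deg^2]
    # S_peak:    peak response in the dirty image [Jy / beam]
    # sigma_rms: measured image noise [Jy / beam]
    # n:         detection threshold in units of the noise
    return n * sigma_rms * P / S_peak
\end{verbatim}
Since $\nicefrac{P}{S_{\text{peak}}}$ depends only on the synthesised beam, the same measured response can be reused for any assumed source brightness.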
We can then scale the surface brightness sensitivity limits of the SRT+NVSS-diffuse image by this factor for each of the MWA and ASKAP observing frequencies: \begin{equation} \sigma_\text{min} = \left(\frac{\nu}{1.4 \text{ GHz}}\right)^{-0.7} \sigma_\text{SB} \end{equation} This frequency-adjusted limit thus represents the minimum sensitivity required to corroborate the detection of a source at the limit of the SRT+NVSS sensitivity for any sources with a spectral index of -0.7 or steeper. Using this method, \autoref{fig:sensitivity} compares the surface brightness sensitivity of the MWA and ASKAP with the frequency-adjusted surface brightness of the SRT+NVSS-diffuse image. In \autoref{fig:sensitivity:a}, we compare the surface brightness sensitivity of the \SI{154}{\mega \hertz} images of the MWA with the SRT+NVSS-diffuse image. We can see that the MWA-2 image surpasses the surface brightness sensitivity of the SRT+NVSS-diffuse image only out to angular scales of approximately \SI{3}{\arcminute}. Emission on angular scales larger than this, however, is increasingly resolved out. It is interesting to note that this reduction in sensitivity occurs on angular scales much smaller than we would expect just from calculating the fringe patterns of the shortest baselines of MWA Phase II; this discrepancy arises from the weighted addition of each baseline's respective fringe pattern that ultimately forms the shape of the synthesised beam. On the other hand, both MWA-1 and MWA-subtracted have a greater surface brightness sensitivity than the frequency-adjusted SRT+NVSS-diffuse image on all angular scales out to at least \SI{40}{\arcminute}. MWA-1 achieves this by its dense sampling of the inner region of the $uv$-plane, whilst MWA-subtracted achieves this sensitivity as a result of the extra convolution step that decreased the resolution to \SI{3.5 x 3.5}{\arcminute}. In \autoref{fig:sensitivity:b}, we compare the surface brightness sensitivity of ASKAP observing at \SI{887}{\mega \hertz}. The ASKAP-B+0.5 image has greater surface brightness sensitivity than SRT+NVSS-diffuse out to angular scales of approximately \SI{7}{\arcminute}. The ASKAP-subtracted image, on the other hand, is able to exceed the frequency-adjusted limit required to corroborate synchrotron emission out to angular scales of approximately \SI{32}{\arcminute}, which is, again, solely a result of the extra convolution step used in the point source subtraction process. We can conclude that both images have the required sensitivity to detect the kind of large-scale emission reported by VA18. We can also directly compare the surface brightness sensitivity of MWA and ASKAP by frequency adjusting the sensitivity values of ASKAP from \SI{887}{\mega \hertz} down to \SI{154}{\mega \hertz}. As can be seen in \autoref{fig:sensitivity:c}, we use a range of spectral indices, from -0.7 to the steeper -1.1, with a solid line indicating an intermediate spectral index of -0.9. We find that the ASKAP-B+0.5 image is significantly more sensitive than MWA-2 on all angular scales out to approximately \SI{5}{\arcminute}, beyond which the MWA-2 image is more sensitive to those sources with the very steepest spectral indices. ASKAP is more sensitive than MWA-1 on angular scales smaller than approximately \SI{2.5}{\arcminute}; for larger angular scales, the prevalence of short baselines in the MWA Phase I array results in MWA-1 having superior surface brightness sensitivity.
Nonetheless, this suggests a surprising result: ASKAP is ideally suited to the detection of synchrotron emission on scales both small and large, even for sources with moderately steep spectral indices. \section{Results} \begin{table*} \centering \begin{tabular}{cccccccp{4cm}} \toprule Source & RA (J2000) & Dec (J2000) & SRT significance & MWA & ASKAP & HII & Notes \\ & h:m:s & d:m:s & $\sigma$ & detection & detection & region & \\\midrule A1 & 04:59:08.81 & +08:48:52 & 6 & Yes & Yes & No & Radio halo in A523\\ A2 & 04:57:43.81 & +08:47:03 & 3 & No & No & No & \\ A3 & 04:56:23.85 & +09:27:59 & 3 & No & No & No & \\ B1 & 04:49:29.06 & +08:30:16 & 3 & No & No & Yes & \\ B2 & 04:53:19.21 & +07:48:11 & 3 & No & No & No & \\ B3 & 04:51:39.15 & +07:15:01 & 3 & No & No & No & Double-lobed radio galaxy immediately South of source\\ \multirow{2}{*}{C1*} & 05:15:39.81 & +06:51:47 & 5 & No & No & Partly & Excluding Northern zoom \\ & 05:15:31.00 & +06:49:40 & 5 & Yes & Yes & No & Northern zoom only \\ C2 & 05:12:24.80 & +07:25:01 & 3 & No & No & No & \\ C3 & 05:10:39.64 & +07:06:07 & 4 & No & No & No & \\ C4 & 05:12:34.29 & +06:49:01 & 3 & No & No & No & \\ C5 & 05:11:21.76 & +06:49:35 & 3 & No & No & No & \\ C6* & 05:12:26.81 & +06:20:31 & 4 & No & Yes* & Partly & North West contour only, but does not overlap\\ C7 & 05:07:44.04 & +06:26:13 & 4 & No & Yes & Yes & \\ C8 & 05:06:57.73 & +06:21:59 & 3 & No & No & Yes & \\ C9* & 05:05:57.34 & +06:14:45 & 3 & Yes & No & No & \\ C10 & 05:06:19.45 & +06:04:59 & 3 & No & Yes & Yes & \\ D1 & 05:05:00.00 & +06:44:00 & 3 & No & No & No & \\ D2 & 05:01:52.93 & +06:06:57 & 4 & No & No & No & \\ D3 & 05:00:19.57 & +05:44:24 & 3 & No & No & No & \\ E1 & 04:57:26.67 & +06:52:01 & 5 & No & Yes & No & \\ E2* & 04:55:05.24 & +06:17:21 & 4 & Yes & No & No & \\ E3 & 04:57:10.28 & +06:04:15 & 3 & No & No & No & \\ F1 & 05:11:24.89 & +03:46:42 & 3 & Yes & No & No & SRT and MWA contours only partially overlap \\ G1 & 05:02:21.28 & +05:26:12 & 3 & No & No & No & \\ G2 & 04:55:03.01 & +05:33:20 & 3 & No & Yes & Yes & \\ G3 & 05:00:28.92 & +05:03:38 & 3 & No & No & No & \\ G4 & 04:59:12.63 & +05:01:05 & 3 & No* & No & No & SRT contours sit immediately North of large extended emission system in MWA \\ G5 & 04:57:59.36 & +04:58:01 & 3 & No & No & No & \\ G6* & 04:58:34.65 & +04:42:47 & 4 & Yes & No & No & \\ H1 & 04:49:56.16 & +04:48:46 & 3 & No & No & No & \\ H2 & 04:49:28.39 & +04:31:12 & 3 & No & No & No & \\ I1 & 04:54:06.90 & +02:33:02 & 5 & Yes & Yes & No & Radio halo in A520 \\ I2 & 04:55:06.23 & +02:33:02 & 3 & No & N/a* & No & Source beyond ASKAP primary beam \\ I3 & 04:55:06.23 & +02:30:33 & 3 & No & N/a* & No & Source beyond ASKAP primary beam \\ J1 & 04:48:37.81 & +03:00:55 & 3 & No & No & No & \\\bottomrule \end{tabular} \caption{Diffuse large-scale emission regions identified by VA18. An asterisk by the name indicates that VA18 considered it possible that the region was contaminated by residuals from compact source subtraction.} \label{table:results} \end{table*} In \autoref{table:results} we present each of the 35 sources reported by VA18, the maximum significance of their detection in the SRT+NVSS-diffuse image, and whether either MWA (in any of MWA-1, MWA-2 or MWA-subtracted) or ASKAP is able to detect emission in the same region to $3\sigma$ significance. In the Appendix we provide images of every VA18 region. In \autoref{fig:askaps} we show each of the 35 sources as imaged in ASKAP-B+0.5, with contours from the SRT+NVSS-diffuse (blue) and ASKAP-subtracted (red).
In \autoref{fig:mwa-2} we present each of the regions as imaged in MWA-2, with contours again from SRT+NVSS-diffuse (blue) and MWA-subtracted (red). Finally, in \autoref{fig:mwa-1}, we present the full \SI{8 x 8}{\degree} region as imaged in MWA-1, with contours from SRT+NVSS-diffuse. This latter image is scaled such that saturated black represents a $5 \sigma$ detection. \section{Discussion} \begin{figure*} \centering \begin{subfigure}{0.49\linewidth} \includegraphics[width=1\linewidth,trim={0 0.2cm 0 0.2},clip]{radio1.pdf} \caption{RA \ra{5;09;50} Dec \ang{4;20;19}} \label{fig:radiogals:a} \end{subfigure} \begin{subfigure}{0.49\linewidth} \includegraphics[width=1\linewidth,trim={0 0.2cm 0 0.2},clip]{radio2.pdf} \caption{RA \ra{4;47;23.9} Dec \ang{5;18;50}} \label{fig:radiogals:b} \end{subfigure} \caption{Images from ASKAP-B+0.5 at 887 MHz of two radio galaxies in the field mentioned by VA18. The white contours are MWA-2 at 154 MHz, starting at $3\sigma$ and increasing in increments of $+2\sigma$.} \label{fig:radiogals} \end{figure*} The known, large-scale synchrotron sources in this field are detected in all our images with strong statistical significance, and this provides an initial validation of the angular sensitivity of our observations. For example, the radio halo in Abell 523 (source A1) is detected in each of ASKAP-subtracted and MWA-subtracted well above the noise (statistical significance of $20 \sigma$ and $11 \sigma$, respectively), as is the radio halo in Abell 520 (source I1; statistical significance of $8 \sigma$ and $8 \sigma$, respectively). Both are also visible in MWA-2 and MWA-1, though in the latter the more compact emission is blended in with the diffuse components. In addition, the large extended lobes of the radio galaxy that VA18 report in region F are visible in all images, as we show in \autoref{fig:radiogals:a}. The core, on the other hand, is only visible in the higher frequency ASKAP image; this is typical of galactic core emission, which is dominated by free-free mechanisms and thus tends to have a flatter spectrum at low radio frequencies.\footnote{We also identify an optical candidate for the core of this radio galaxy, which is clearly visible in both the Digitized Sky Survey \citep{Blanton2017} and Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; \citealp{Chambers2016}) optical surveys and has previously been catalogued in the infrared as WISEA J050950.55+042021.0. The calculations in VA18 that inferred a minimum size of the radio galaxy from the magnitude limit of the DSS survey are therefore invalid.} Similarly, the lobes of the smaller radio galaxy located at RA \ra{4;47;24} Dec \ang{5;18;50} are also clearly detected in all images, as shown in \autoref{fig:radiogals:b}. Despite our demonstrated ability to detect the known synchrotron sources in this field, 23 of the 35 candidate sources are undetected in any of our direct observations as well as in our `subtracted' treatments. If we assume that these sources are both real and have spectra that are well approximated by a power law over this frequency range ($S \propto \nu^\alpha$), then we can calculate a lower limit for the spectral index of these sources from the ASKAP-subtracted map as $\alpha > 2.5$. The MWA-subtracted map places a less stringent constraint of $\alpha > -0.37$. Such a steep positive spectral index is atypical for synchrotron sources, with the exception of sources that exhibit a turnover due to synchrotron self-absorption or free-free absorption mechanisms.
Both these mechanisms, however, are unusual to observe in this frequency range for large, diffuse systems. We turn now to discuss the sources for which we make a potentially corroborating detection, or which are otherwise noteworthy. \subsection{Source B1} Source B1 appears in the SRT+NVSS-diffuse map as a $3 \sigma$ detection at 04:49:29.06 +08:30:16, for which we find no radio emission in either ASKAP-subtracted or MWA-subtracted. However, in \autoref{fig:regionB1} we present the associated Southern H-alpha Sky Survey Atlas (SHASSA; \citealp{Gaustad2001}) image showing that this is a region of strong H-alpha emission, indicating that this is a Galactic HII region. We propose that source B1 is likely a faint detection of associated thermal free-free emission produced by this Galactic HII region, and that the non-detection by both ASKAP and MWA is due to the typically inverted, blackbody-like spectrum of such sources, placing its surface brightness below the detection levels of our lower-frequency observations. \begin{figure} \centering \includegraphics[width=1\linewidth]{region-B1.pdf} \caption{An H-alpha map of region B1 from SHASSA showing the coincident H-alpha emission. SRT+NVSS-diffuse contours (blue) indicate $3\sigma$, $4\sigma$, $5\sigma$, etc.} \label{fig:regionB1} \end{figure} \subsection{Sources C1, C6, C7, C8, C10} \begin{figure*} \centering \includegraphics[width=1\linewidth]{region-C.pdf} \caption{An H-alpha map of region C from SHASSA. SRT+NVSS-diffuse contours (blue) indicate $3\sigma$, $4\sigma$, $5\sigma$, etc. ASKAP diffuse contours (magenta) indicate $2 \sigma$, $3 \sigma$, $4 \sigma$, etc.} \label{fig:regionC} \end{figure*} VA18 report a very large-scale detection in the vicinity of Abell 539, spanning multiple large-scale islands of emission (C1) as well as numerous small regions of diffuse emission to the West (C2-C10). In \autoref{fig:regionC} we show the SHASSA image for a large section of region C, overlaid with the SRT+NVSS-diffuse contours (blue) and the ASKAP-subtracted contours (magenta). From the contours, we can observe that the ASKAP-subtracted map shows a clearly visible ridge of flux extending approximately 40 arcminutes in a North-Easterly orientation, approximately joining the regions C7, C8 and C10. This ridge has a peak flux of \SI{6.3}{\milli \jansky \per \beam}, whilst it is undetectable in the lower frequency MWA images, suggesting a shallow or inverted spectral index. From the background SHASSA map, we observe that this ridge of emission traces a similarly bright region of Galactic H-alpha emission which extends West from the Galactic equator through C1 and C6, and peaks along the ridge adjoining C7, C8 and C10. The coincident emission of H-alpha and radio strongly suggests that the ridge we are observing is a Galactic HII region, and that we are detecting the thermal free-free component of this region in the radio. Moreover, the lack of radio emission in the MWA observations is consistent with the inverted spectrum of thermal free-free emission. \autoref{fig:regionC} includes, in addition to the bright ridge of emission on the right, regions C1 and C6. We include these regions to suggest the possibility that the Western component of C6 as well as the North-East island of C1 (with the exception of the Northern `C1-zoom') may also be a detection of the extended Galactic HII region. Indeed, despite C1 lying beyond the half-power point of the ASKAP primary beam, we still detect a radio component coincident with a peak in the H-alpha map.
This strongly suggests that the North-East island of C1, which lies closest to the centre of Abell 539, is not extra-galactic in origin. \subsection{Source C1 `zoom'} \begin{figure} \centering \includegraphics[width=1 \linewidth,trim={0 0 1.7cm 0},clip]{c1-zoom.pdf} \caption{The Pan-STARRS three-colour (bands Y, I, G) image of `C1-zoom', showing the presumed optical host indicated by the white arrow. The contours are: ASKAP-B+0.5 (blue) at $1.5 \sigma$ (dashed) and then 2, 3, 4, 5, 6, 7, 10, 20, 30$\sigma$; ASKAP-B-1 (red) at 3, 4, 6, 8, 30, 50, 100, 150$\sigma$; MWA-2 (magenta) at 3, 5, 15, 35, 80, 120$\sigma$.} \label{fig:c1zoom} \end{figure} The C1 Northern zoom, centred at 05:15:31 +06:49:40 and located at the very periphery of Abell 539, contains significant diffuse emission that is detected in the MWA (Phase I \& II) and ASKAP images. In \autoref{fig:c1zoom} we show the three-colour optical image from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; \citealp{Chambers2016}) overlaid with contours from ASKAP-B-1, ASKAP-B+0.5 and MWA-2. The C1 Northern zoom contains a number of bright points of emission. The brightest, located at 5:15:29.52 +6:48:46.23 (lower right), is resolved into two conjoined points in ASKAP-B-1 with no optical association in Pan-STARRS, whilst in ASKAP-B+0.5 it has a faint extension along the same axis; we propose that this source is a pair of radio lobes of a distant, background galaxy and is unrelated to the extended emission in this region. The second brightest source of emission in the C1 Northern zoom is centred at 5:15:33.93 +6:50:33.3 and is surrounded by diffuse radio emission. It is clearly extended in the ASKAP-B+0.5 image with a largest angular scale of approximately \SI{180}{\arcsecond}. A central hotspot is visible in the ASKAP-B-1 image and, in addition, two satellite patches of extended emission appear in ASKAP-B-1 to the South-East and South-West. The source is also visible in all MWA images, and using the MWA-2 and ASKAP-B+0.5 images we can calculate a spectral index for the total integrated flux of -0.97. In the associated Pan-STARRS image we observe a candidate host galaxy, 2MASX J05153393+0650333, indicated by the arrow sitting near the peak of the emission, for which there is unfortunately no currently available redshift information; neither of the satellite regions has any optical candidate. Given the existence of a host galaxy and the hotspots, it seems likely that this is diffuse radio-galaxy emission. The presence of a bright core suggests this is a Fanaroff \& Riley class I (FRI) radio galaxy; however, there are clearly weakly emitting lobes, which would suggest the presence of some environmental pressure. The overall morphology of the source is certainly atypical of normal radio jet structure; however, it is suggestive of a head-tail galaxy. Whilst additional observations may aid in understanding its complex morphology, we feel confident in classifying this diffuse emission as originating from 2MASX J05153393+0650333. In addition, a secondary diffuse radio source is visible in the top left of the image. This appears to be an FRII radio galaxy, with the left lobe significantly brighter than the right, possibly due to relativistic beaming. The left-most lobe is visible in the lowest MWA-2 contour, that is, a $3 \sigma$ detection at 154 MHz. There is no obvious optical candidate visible in Pan-STARRS, suggesting that this system lies in the background of the 2MASX J05153393+0650333 system.
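The two-point spectral indices quoted in this section follow directly from the power-law form $S \propto \nu^\alpha$ introduced earlier; a minimal sketch (our own illustration, not code from this work) is:
\begin{verbatim}
import numpy as np

def spectral_index(S1, nu1, S2, nu2):
    # Two-point spectral index alpha, assuming S ~ nu^alpha.
    return np.log(S1 / S2) / np.log(nu1 / nu2)

# As a verifiable example, the source E1 values discussed below
# (14.5 mJy/beam at 1.4 GHz and 2.7 mJy/beam at 887 MHz) give:
print(spectral_index(14.5, 1400.0, 2.7, 887.0))   # ~ +3.7
\end{verbatim}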
\subsection{Source C9} Source C9 is detected at $3 \sigma$ significance in MWA-subtracted. The ASKAP-B+0.5 image shows five point sources in a small angular area, and MWA-2 detects and resolves at least three of these. However, the brightest of these sources in MWA-2 is just \SI{13}{\milli \jansky \per \beam}, meaning that none of these sources will have been subtracted from individual snapshots; any flux present in the MWA-subtracted image is likely unsubtracted point source emission. In agreement with VA18, source C9 is most likely the result of residual point sources. \subsection{Source E1} There is a trace of a detection at the central peak of E1 in ASKAP-subtracted (peak $3.1 \sigma$), whilst there is nothing in MWA, whether in MWA-1, MWA-2 or MWA-subtracted. In the case of MWA-1, this is a region with no nearby sources that might produce a false positive result due to blending, and given its superb surface brightness sensitivity, the absence of a lower frequency detection strongly argues against this region being synchrotron in origin. SHASSA, however, does not indicate any associated peak in H-alpha emission in this region. Given the low statistical significance of the ASKAP detection, and that the region above the $3 \sigma$ threshold has a maximum angular extent of just \SI{0.8}{\arcminute} (compared to a beam size of \SI{3.5 x 3.5}{\arcminute}), we would be inclined to suggest that this is noise in our own image were it not so clearly aligned with the SRT+NVSS-diffuse peak contour. We measure a peak brightness of \SI{2.7}{\milli \jansky \per \beam} at 887 MHz, and \SI{14.5}{\milli \jansky \per \beam} in the SRT+NVSS-diffuse image at 1.4 GHz, giving a steep positive spectral index of +3.7. Whilst we conclude this is unlikely to be synchrotron, we leave open the possibility that this is emission by some other mechanism with an inverted spectrum. \subsection{Source E2} The MWA-subtracted image detects a small area of diffuse emission at E2, whilst nothing is detected in ASKAP-subtracted. The ASKAP-B+0.5 image resolves five bright radio sources in this small region. At least two of these are very slightly extended in the ASKAP image: the source located at 4:55:07.7 +6:16:31.6 is a star-forming spiral galaxy with a bright compact core, visible in Pan-STARRS, whose spiral arms are also weakly visible in the radio; the source located at 4:54:58.0 +6:17:22.5 is extended in ASKAP-B+0.5, with an extension towards the North and a bright core or hotspot visible in the ASKAP-B-1 image but no obvious optical counterpart. As VA18 suggest, the source E2 is most likely due to a blending of numerous radio sources and not due to diffuse radio emission. \subsection{Source F1} VA18 report a small region of $3 \sigma$ significance located at 05:11:24.89 +03:46:42. The MWA-subtracted image finds a small region of extended emission offset North of this, which encompasses four distinct radio sources in ASKAP-B+0.5. This extended emission signal is almost certainly just the result of blended emission from these point sources, and does not corroborate the F1 candidate region. \subsection{Source G2} \begin{figure} \centering \includegraphics[width=1\linewidth]{region-G2.pdf} \caption{An H-alpha map of region G2 from SHASSA showing the coincident H-alpha emission. SRT+NVSS-diffuse contours (blue) indicate $3\sigma$, $4\sigma$, $5\sigma$, etc.
ASKAP-diffuse contours (magenta) indicate $2 \sigma$, $3 \sigma$, $4 \sigma$, etc.} \label{fig:regionG2} \end{figure} In the SRT+NVSS-diffuse image, source G2 is a small $3 \sigma$ detection. MWA-subtracted makes no detection in this region; however, ASKAP-subtracted makes a similarly weak $3 \sigma$ detection in the same region. In \autoref{fig:regionG2} we show the H-alpha emission in this region from SHASSA with contour overlays from SRT+NVSS-diffuse (blue) and ASKAP-diffuse (magenta). Once again we find a correlation between a peak in the H-alpha emission and the detected diffuse radio emission, suggesting that the radio is free-free emission from a Galactic HII region. Indeed, the ASKAP-diffuse $2 \sigma$ contours appear to trace two additional H-alpha peaks both North and South of G2. \section{Conclusion} We are unable to corroborate the candidate synchrotron sources of VA18. Careful examination of each of the 35 sources suggests five classes: known halo systems (A1, I1); radio galaxies (C1-zoom); HII emission (B1, North-East C1, North-West C6, C7, C8, C10, G2); blended compact sources (C9, E2, F1); and finally one non-synchrotron but otherwise unknown source (E1). The remaining sources are not detected in our observations. The non-detections strongly argue against these sources being synchrotron in origin. Synchrotron sources in general exhibit negative spectral indices, and models suggest the shocked emission from the cosmic web proper to have a spectral index $\alpha \lessapprox -1$. These properties ensure that synchrotron sources are brightest at lower radio frequencies, and given the surface brightness sensitivity of the MWA and ASKAP images, any large-scale synchrotron emission should surely be visible at these lower frequencies. As we have noted, the ASKAP non-detection puts a stringent condition on the candidate sources as having a steep, positive spectral index of $\alpha > 2.5$, and this can only be explained if these are regions exhibiting a turnover due to synchrotron self-absorption or free-free absorption. We suggest three explanations for these non-detections. Firstly, these may be real sources of emission that have positive spectral indices which render them undetectable at lower frequencies, for example thermal free-free emission. However, given the extreme spectral steepness required of such a population, we consider this an unlikely scenario. Secondly, given the low $3 \sigma$ threshold used to identify the candidate sources, some fraction may simply be noise. This may be especially applicable to those regions that were small in angular extent, typically much smaller than the \SI{3.5}{\arcminute} resolution of the SRT+NVSS-diffuse image. Finally, given the significant image processing employed by VA18, which included combining systematics from both SRT and NVSS, as well as a complex and imperfect point source subtraction process, some fraction of these sources may be the result of spurious image artifacts. VA18 acknowledge this possibility but, as detailed in their Appendix C, their own simulations excluded gain fluctuations from within their pipeline as being significant, and Galactic foreground simulations suggested that less than 20\% of the candidate sources could be attributed to this foreground. Whilst this is a disappointing result, we wish to raise the possibility that large-scale, extended emission may be the wrong parameter space in which to search for the synchrotron cosmic web.
There has been an assumption to date that the synchrotron cosmic web would match the spatial scales of the underlying filaments, which is evident in the work of VA18 as well as that of others (see e.g\@. \citealp{Brown2017, Vernstrom2017}). However, the mechanism for synchrotron emission is primarily by way of accretion shocks, which are by definition regions of discontinuity. Such mechanisms may be more likely to produce sharp and smaller scale emission features as opposed to the broad, smooth and extended features that have been assumed to date. Indeed, such compact features can already be observed in simulations \citep{ArayaMelo2012,Vazza2015,Vazza2019}, suggesting that we may have in fact been looking in the wrong place. Future work in this area will be required to properly understand the characteristic spatial scales of this radio emission and constrain the parameter space as we continue to search for evidence of the synchrotron cosmic web. \section{Acknowledgements} This scientific work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre, which is supported by the Western Australian and Australian Governments. The Australian SKA Pathfinder is part of the Australia Telescope National Facility, which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.
\section{Introduction} Recent years have witnessed a rapid development of machine learning and its successful applications, such as computer vision~\cite{ma2018variational,zhu2019image,ma2019shoe}, natural language processing~\cite{ma2013vector,ma2018short} and cross-modality learning~\cite{zhu2019image,xu2018cross}, with many real-world applications~\cite{xie2018mobile,ma2019fine}. Traditional machine learning methods are typically based on the assumption that training and testing datasets are from the same distribution. However, in many real-world applications, this assumption may not hold, and the performance could degrade rapidly if the trained models are deployed to domains different from the training dataset~\cite{ganin2016domain}. More severely, training a high-performance vision system requires a large amount of labelled data, and obtaining such labels may be expensive. Taking a pre-trained robotic vision system as an example, during each deployment task, the robot itself (\emph{e.g.} position and angle), the environment (\emph{e.g.} weather and illumination) and the camera (\emph{e.g.} resolution) may result in different image styles. The cost of annotating enough data for each deployment task could be very high. This kind of problem has been widely addressed by transfer learning (TL)~\cite{zhuang2019comprehensive} and domain adaptation (DA)~\cite{ganin2016domain}. In DA, a learner usually has access to the labelled source data and unlabelled target data, and it is typically trained to align the feature distributions between the source and target domain. However, sometimes we cannot expect the target data to be accessible to the learner. In the robot example, the distribution divergences (different image styles) from the training to the testing domain can only be identified after the model is trained and deployed. In this scenario, it is unrealistic to collect target samples before deployment. This would require a robot to have the ability to handle domain divergence even though the target data is absent. We tackle this kind of problem under the domain generalization (DG) paradigm, under which the learner has access to many source domains (data and corresponding labels) and aims at generalizing to a new (target) domain, where both data and labels are unknown. The goal of DG is to learn a prediction model on training data from the seen source domains so that it can generalize well on the unseen target domain. An underlying assumption behind domain generalization is that there exists a common feature space underlying the multiple known source domains and the unseen target domain. Specifically, we want to learn domain invariant features across these source domains, and then generalize to a new domain. The domain generalization setting is illustrated in Fig.~\ref{fig:DG}. \begin{figure} \centering \includegraphics[width=0.99\textwidth]{figs/DG.pdf} \caption{Domain Generalization: A learner faces a set of labelled datasets from several source domains, and it aims at extracting invariant features across the seen source domains and learning to generalize to an unseen domain. Based on the manifold assumption~\cite{goldberg2009multi}, each domain $i$ is supported by a distribution $\mathcal{D}_i$. The learner can measure the source domain distributions via the source datasets but has no information on the unseen target distribution.
After training on the source domains, the model is then deployed to a new domain $\mathcal{D}_t$ for prediction.} \label{fig:DG} \end{figure} A critical problem in DG and DA involves aligning the domain distributions, which is typically achieved by extracting domain-invariant representations. Previous DA works usually tried to minimize domain discrepancy measures, such as the KL divergence or the Maximum Mean Discrepancy (MMD), via adversarial training to achieve domain distribution alignment. Due to the similar problem setting between DA and DG, many previous approaches directly adopt the same adversarial training techniques for DG. For example, an MMD metric is adopted by~\cite{li2018domain} as a cross-domain regularizer, and KL divergence is adopted to measure the domain shift by~\cite{Li2017dg} for the domain generalization problem. The MMD metric is usually implemented in a kernel space, which scales poorly to large applications, and the KL divergence is unbounded, which makes it insufficient for successfully measuring domain shift~\cite{zhao2019learning}. Besides, previous domain generalization approaches~\cite{ilse2019diva,ghifary2015domain,li2018deep,d2018domain,volpi2018generalizing} mainly focused on applying similar DA techniques to extract the invariant features and on how to stack the learned features from each domain for generalizing to a new domain. These methods usually ignore the label information and will sometimes make the features indistinguishable, with ambiguous classification boundaries, \emph{a.k.a.} the semantic misalignment problem~\cite{8964455}. A successful generalization should guide the learner not only to align the feature distributions across domains but also to ensure that samples of the same class lie close to each other while samples from different classes stay apart, \emph{a.k.a.} feature compactness~\cite{kamnitsas2018semi}. Aiming to solve this, we adopt Optimal Transport (OT) with the Wasserstein distance to align the feature distributions for domain generalization, since it constrains labelled source samples of the same class to remain close during the transportation process~\cite{courty2016optimal}. Moreover, some information-theoretic metrics such as the KL divergence are not capable of measuring the inherent geometric relations among the different domains~\cite{arjovsky2017wasserstein}. In contrast, OT can exactly measure their corresponding geometric properties. Besides, compared with~\cite{ben-david2010}, OT benefits from the advantages of the Wasserstein distance through its gradient property~\cite{arjovsky2017wasserstein} and its promising generalization bound~\cite{redko2017theoretical}. The empirical studies of~\cite{gulrajani2017improved,shen2017wasserstein} also demonstrated the effectiveness of OT for extracting invariant features to align the marginal distributions of different domains. Furthermore, although the optimal transport process could constrain the labelled samples of the same class to stay close to each other, our preliminary results showed that implementing optimal transport alone for domain generalization is not sufficient for a cohesive and separable classification boundary. The model could still suffer from indistinguishable features (see Fig.~\ref{fig:sub-third}). In order to train the model to predict well on all the domains, this separable classification boundary should also be achieved in a domain-agnostic manner.
That is, for a pair of instances, no matter which domains they come from, they should stay close to each other if they are in the same class, and vice-versa. To this end, we further promote metric learning as an auxiliary objective, leveraging the source domain label information for a domain-independent, distinguishable classification boundary. To summarize, we deployed the optimal transport technique with the Wasserstein distance to extract domain invariant features for domain generalization. To avoid ambiguous classification boundaries, we proposed to implement metric learning strategies to achieve a distinguishable feature space. We therefore propose the Wasserstein Adversarial Domain Generalization (\emph{WADG}) algorithm. In order to check the effectiveness of the proposed approach, we tested the algorithm on two benchmarks, comparing with some recent domain generalization baselines. The experimental results showed that our proposed algorithm outperforms most of the baselines, confirming its effectiveness. Furthermore, the ablation studies also demonstrated the contributions of each component of our algorithm. \section{Related Works} \subsection{Domain Generalization}\noindent The goal of DG is to learn a model that can extract common knowledge that is shared across source domains and generalize well on the target domain. Compared with DA, the main challenge of DG is that the target domain data is not available during the learning process. A common framework for DG is to extract the most informative and transferable underlying common features from source instances generated from different distributions and to generalize to an unseen one. This kind of approach rests on the assumption that there exists an underlying invariant feature distribution among all domains, and that consequently such invariant features can generalize well to a target domain.~\cite{muandet2013domain} implemented MMD as a distribution regularizer and proposed the kernel-based \emph{Domain Invariant Component Analysis} (DICA) algorithm. An autoencoder-based model was proposed by~\cite{ghifary2015domain} under a multi-task learning setting to learn domain-invariant features via adversarial training. \cite{li2018deep} proposed an end-to-end deep domain generalization approach by leveraging deep neural networks for domain-invariant representation learning. \cite{Motiian_2017_ICCV} proposed to minimize the semantic alignment loss as well as the separation loss based on deep learning models. \cite{li2018domain} proposed a low-rank Convolutional Neural Network model based on domain shift-robust deep learning methods. There are also some approaches that tackle the domain generalization problem in a meta-learning manner. To the best of our knowledge,~\cite{li2018learning} first proposed to adopt Model-Agnostic Meta-Learning (MAML)~\cite{finn2017model}, which back-propagates the gradients of the ordinary loss function of meta-test tasks. As pointed out by~\cite{dou2019domain}, such an approach might lead to a sub-optimal solution, as it is highly abstracted from the feature representations. ~\cite{balaji2018metareg} proposed the \emph{MetaReg} algorithm, in which a regularization function (\emph{e.g.} a weighted $L_1$ loss) is implemented for the classification layer of the model but not for the feature extractor layers. Then,~\cite{li2019feature} proposed an auxiliary meta loss which is computed based on the feature extractor.
Furthermore, the network architecture of~\cite{li2019feature} is the widely used feature-critic style model, based on a similar model from the domain adversarial training technique~\cite{ganin2016domain}.~\cite{dou2019domain} and~\cite{dg_mmld} also started to implement clustering techniques on the invariant feature space for better classification and showed better performance on the target domain.
\subsection{Metric Learning} \label{sect:related_works_metric_learning} Metric learning aims to learn a discriminative feature embedding where similar samples are closer while dissimilar samples are further apart~\cite{8964455}. \cite{hadsell2006dimensionality} proposed the \emph{siamese network} together with the \emph{contrastive loss} to guide instances with the same labels to stay close to each other in the feature space and to push them apart otherwise. \cite{schroff2015facenet} proposed the \emph{triplet loss}, aiming to learn a feature space where a positive pair has higher similarity than the negative pair when compared against the same anchor, given a margin. \cite{oh2016deep} showed that neither the \emph{contrastive loss} nor the \emph{triplet loss} can efficiently explore the full pair-wise relations between instances under the mini-batch training setting. They further proposed the \emph{lifted structure loss} to fully utilize pair-wise relations across batches. However, it randomly chooses only as many positive pairs as negative ones, so many informative pairs are discarded~\cite{wang2019multi}, which restricts its ability to find the informative pairs. ~\cite{yi2014deep} proposed the binomial deviance loss, which can measure hard pairs. One remarkable work by~\cite{wang2019multi} combines the advantages of both the \emph{lifted structure loss} and the \emph{binomial deviance loss} to leverage pair-similarity. They proposed to leverage not only pair-similarities (between positive or negative pairs) but also self-similarity, which enables the learner to collect and weight informative pairs in an iterative (mining and weighting) manner. For a pair of instances, the self-similarity is computed from the pair itself. Such a multi-similarity scheme has been shown to measure similarity and cluster samples more efficiently and accurately. In the context of domain generalization,~\cite{dou2019domain} proposed to guide the learner to leverage the local similarity in the semantic feature space, which the authors argue may contain essential domain-independent \emph{general knowledge} for domain generalization, and adopted the contrastive loss and triplet loss to encourage clustering. Leveraging the across-domain class similarity information encourages the learner to extract robust semantic features regardless of domains, which is a useful auxiliary signal for the learner. If the learner cannot separate the samples (from different source domains) with domain-independent, class-specific cohesion and separation in the domain-invariant feature space, it will still suffer from ambiguous decision boundaries. Such ambiguous decision boundaries might remain sensitive to the unseen target domain.~\cite{dg_mmld} implemented unsupervised clustering on the source domains and showed better classification performance.
Our work is orthogonal to these previous works: we propose to enforce a more distinguishable invariant feature space via Wasserstein adversarial training and to leverage the label similarity information for a better classification boundary.
\section{Preliminaries and Problem Setup} We start by introducing some preliminaries. \subsection{Notations and Definitions} Following~\cite{redko2017theoretical} and \cite{Li2017dg}, suppose we have $m$ known source domain distributions $\{\mathcal{D}_i\}_{i=1}^m$, where the $i^{th}$ domain contains $N_i$ labeled instances in total, denoted by $\{(\mathbf{x}^{(i)}_j,y^{(i)}_j)\}^{N_i}_{j=1}$, where $\mathbf{x}^{(i)}_j\in\mathbb{R}^{n}$ is the $j^{th}$ instance feature of the $i^{th}$ domain and $y^{(i)}_j\in \{1,\dots,K\}$ is the corresponding label. For a hypothesis class $\mathcal{H}$, the expected risk of a hypothesis $h\in\mathcal{H}$ over a domain distribution $\mathcal{D}_i$ is the probability that $h$ predicts wrongly on the entire distribution $\mathcal{D}_i$: $\epsilon_i(h)=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_i}\ell(h(\mathbf{x}),y)$, where $\ell(\cdot,\cdot)$ is the loss function. The empirical risk is defined by $\hat{\epsilon}_i(h)=\frac{1}{N_i}\sum_{j=1}^{N_i}\ell(h(\mathbf{x}_j),y_j)$. In the setting of domain generalization, we only have access to the seen source domains $\mathcal{D}_i$ but have no information about the target domain. The learner is expected to extract the underlying invariant feature space across the source domains and to generalize to a new target domain.
\subsection{Optimal Transport and Wasserstein Distance} \begin{figure} \centering \includegraphics[width=0.60\textwidth]{figs/OT_for_DG.pdf} \caption{Optimal transport (OT) for domain generalization: directly predicting on the unseen domain (the white dashed arrow) is typically difficult. In order to learn domain-invariant features, we adopt the OT technique to achieve domain alignment, as shown in the direction of the green arrow. After the OT transition, the invariant features can be generalized to the unseen domain.} \label{fig_ot_dg} \end{figure} We follow \cite{redko2017theoretical} and define $c:\mathbb{R}^n\times\mathbb{R}^n \to \mathbb{R}^{+}$ as the cost function for transporting one unit of mass from $\mathbf{x}$ to $\mathbf{x}'$. The Wasserstein distance between $\mathcal{D}_i$ and $\mathcal{D}_j$ can then be computed by
\begin{equation} W_p^p(\mathcal{D}_i,\mathcal{D}_j) = \inf_{\gamma\in \Pi(\mathcal{D}_i,\mathcal{D}_j)} \int_{\mathbb{R}^n\times\mathbb{R}^n}c(\mathbf{x},\mathbf{x}^\prime)^p d\gamma(\mathbf{x},\mathbf{x}^\prime) \end{equation}
where $\Pi(\mathcal{D}_i,\mathcal{D}_j)$ is the set of probability couplings on $\mathbb{R}^n\times\mathbb{R}^n$ with marginals $\mathcal{D}_i$ and $\mathcal{D}_j$, i.e. all possible coupling functions. Throughout this paper, we adopt the Wasserstein-1 distance only ($p=1$). Let $f$ be a Lipschitz-continuous function with $\|f\|_L \leq 1$; according to the \emph{Kantorovich-Rubinstein} theorem, we have
\begin{equation} \label{original_eq_wasserstein} W_1(\mathcal{D}_i,\mathcal{D}_j) = \sup_{\|f\|_L \leq 1} \mathbb{E}_{x\sim\mathcal{D}_i}f(x) - \mathbb{E}_{x^\prime \sim \mathcal{D}_j}f(x^\prime) \end{equation}
Optimal transport theory and the Wasserstein distance have recently been investigated in the context of machine learning \cite{arjovsky2017wasserstein}, especially in the domain adaptation area \cite{courty2016optimal,zhou2020discriminative}.
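For concreteness, the following minimal sketch estimates the empirical Wasserstein-1 distance between feature batches of two source domains. It assumes the third-party POT library (\texttt{ot}); the feature arrays and their sizes are hypothetical placeholders.
\begin{verbatim}
import numpy as np
import ot  # POT: Python Optimal Transport

# Hypothetical feature batches from two source domains,
# e.g. outputs of the feature extractor.
rng = np.random.default_rng(0)
z_i = rng.normal(size=(64, 256))  # N_i x d features from D_i
z_j = rng.normal(size=(64, 256))  # N_j x d features from D_j

# Uniform empirical weights over the samples.
a = np.full(len(z_i), 1.0 / len(z_i))
b = np.full(len(z_j), 1.0 / len(z_j))

# Ground cost c(x, x') = ||x - x'||_2, then the exact W1
# via linear programming on the empirical samples.
M = ot.dist(z_i, z_j, metric='euclidean')
w1 = ot.emd2(a, b, M)
print(f"empirical W1(D_i, D_j) = {w1:.4f}")
\end{verbatim}
In the adversarial setting used later in this paper, this exact linear-programming estimate is replaced by the critic-based estimate of the dual form in Eq.~\ref{original_eq_wasserstein}.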
The general idea of implementing the optimal transport technique for domain generalization across domains is illustrated in Fig.~\ref{fig_ot_dg}. To learn domain-invariant features, the OT technique is implemented to achieve domain alignment. After the OT transition, the invariant features can be generalized to the unseen domain.
\subsection{Metric Learning} For a pair of instances $(\mathbf{x}_i,y_i)$ and $(\mathbf{x}_j,y_j)$, the notion of a \emph{positive pair} usually refers to the condition where the pair $i,j$ has the same label ($y_i=y_j$), while a negative pair usually refers to the condition $y_i\neq y_j$. The central idea of metric learning is to encourage pairs of instances with the same label to be closer, and to push negative pairs apart from each other~\cite{wu2017sampling}. Following the framework of~\cite{wang2019multi}, we show the general pair-weighting process of metric learning. Assume the feature extractor $f$, parameterized by $\boldsymbol{\theta}_f$, projects an instance $\mathbf{x} \in\mathbb{R}^{n}$ to a $d$-dimensional normalized space: $f(\mathbf{x};\boldsymbol{\theta}_f):\; \mathbb{R}^n\to [0,1]^d$. Then, for two samples $\mathbf{x}_i$ and $\mathbf{x}_j$, the similarity between them can be defined as the inner product of the corresponding feature vectors:
\begin{equation} S_{i,j}:=\langle f(\mathbf{x}_i;\boldsymbol{\theta}_f),f(\mathbf{x}_j;\boldsymbol{\theta}_f)\rangle \label{Eq.dotsimilarity} \end{equation}
Leveraging the across-domain class similarity information encourages the learner to extract a classification boundary that holds regardless of domains, which is a useful auxiliary signal for the learner. We elaborate on this in Section~\ref{metric_learning_domain_agnostic}.
\section{Proposed Method} The high-level idea of the WADG algorithm is to learn a domain-invariant feature space together with a domain-agnostic classification boundary. First, we align the marginal distributions of the different source domains via optimal transport, minimizing the Wasserstein distance to achieve a domain-invariant feature space. Then, we adopt a metric learning objective to guide the learner to leverage the class similarity information for a better classification boundary. The general workflow of our method is illustrated in Fig.~\ref{fig:sub-general_workflow}. The model contains three major parts: a feature extractor, a classifier, and a critic function. The feature extractor $F$, parameterized by $\boldsymbol{\theta}_f$, extracts the features from the different source domains. For a set of instances $\mathbf{X}^{(i)}=\{\mathbf{x}_{j}^{(i)}\}_{j=1}^{N_i}$ from domain $\mathcal{D}_i$, we denote the extracted features from domain $i$ by $\mathbf{Z}^{(i)}=F(\mathbf{X}^{(i)})$. The classification function $C$, parameterized by $\boldsymbol{\theta}_c$, is expected to learn to predict the labels of instances from all domains correctly. The critic function $D$, parameterized by $\boldsymbol{\theta}_d$, aims to measure the empirical Wasserstein distance between the features of a pair of source domains. For the target domain, all instances and labels are absent during training. WADG aims to learn domain-agnostic features with a distinguishable classification boundary. During each training round, the network receives the labelled data from all source domains and trains the classifier in a supervised manner with the classification loss $\mathcal{L}_C$.
For the classification process, we use the typical cross-entropy loss over all $m$ source domains:
\begin{equation} \mathcal{L}_{C} = -\sum_{i=1}^m\sum_{j=1}^{N_i} y_j^{(i)}\log(\mathbb{P}(C(F(\textbf{x}_j^{(i)})))) \label{Eq.cls_loss_all_domains} \end{equation}
Through this, the model learns the category information over all domains. The feature extractor $F$ is then trained to minimize the estimated Wasserstein distance in an adversarial manner against the critic $D$ with an objective $\mathcal{L}_D$. We further adopt a metric learning objective (namely, $\mathcal{L}_{MS}$) to leverage the similarities for a better classification boundary. Our full method then solves the joint objective
\begin{equation*} \mathcal{L} = \arg\min_{\theta_f,\theta_c}\max_{\theta_d} \mathcal{L}_{C}+\mathcal{L}_{D} + \mathcal{L}_{MS}, \end{equation*}
where $\mathcal{L}_D$ is the adversarial objective function and $\mathcal{L}_{MS}$ is the metric learning objective function. In the sequel, we elaborate on these two objectives in Section~\ref{wasserstien_over_all_domains} and Section~\ref{metric_learning_domain_agnostic}, respectively.
\subsection{Adversarial Domain Generalization via Optimal Transport} \label{wasserstien_over_all_domains} Since optimal transport constrains labelled source samples of the same class to remain close during the transportation process~\cite{courty2016optimal}, we deploy optimal transport with the Wasserstein distance~\cite{redko2017theoretical,shen2017wasserstein} to align the marginal feature distributions over all source domains. A brief workflow of the optimal transport for a pair of source domains is illustrated in Fig.~\ref{fig:sub-OT alignment}. The critic function $D$ estimates the empirical Wasserstein distance between the source domains from pairs of instances drawn from the empirical sets $\mathbf{x}^{(i)}\in\mathbf{X}^{(i)}$ and $\mathbf{x}^{(j)}\in\mathbf{X}^{(j)}$. In practice~\cite{shen2017wasserstein}, the Wasserstein distance of Eq.~\ref{original_eq_wasserstein} can be computed by
\begin{equation} \begin{split} W_1(\mathbf{X}^{(i)}, \mathbf{X}^{(j)})=\frac{1}{N_{i}}\sum_{\mathbf{x}^{(i)}\in\mathbf{X}^{(i)}}D(F(\mathbf{x}^{(i)})) -\frac{1}{N_{j}}\sum_{\mathbf{x}^{(j)}\in\mathbf{X}^{(j)}}D(F(\mathbf{x}^{(j)})) \label{Eq.Wasserstein} \end{split} \end{equation}
As there usually exist more than two source domains in the domain generalization setting, we sum the empirical Wasserstein distances between each pair of source domains,
\begin{equation} \mathcal{L}_D = \sum_{i=1}^m\sum_{j=i+1}^m\big [ \frac{1}{N_i}\sum_{\mathbf{x}^{(i)}\in\mathbf{X}^{(i)}}D(F(\mathbf{x}^{(i)}))-\frac{1}{N_j}\sum_{\mathbf{x}^{(j)}\in\mathbf{X}^{(j)}}D(F(\mathbf{x}^{(j)}))\big] \label{Eq.mult_domain_Wa} \end{equation}
\begin{figure} \centering \begin{subfigure}{.535\textwidth} \centering \includegraphics[width=\textwidth]{figs/Neuro_Computing_model_framework.pdf} \caption{The overall workflow of the proposed WADG model.} \label{fig:sub-general_workflow} \end{subfigure} \; \begin{subfigure}{.43\textwidth} \centering \includegraphics[width=\textwidth]{figs/adv_alignment.pdf} \caption{Optimal Transport for Feature Alignment.} \label{fig:sub-OT alignment} \end{subfigure}\\ \begin{subfigure}{1.1\textwidth} \centering \includegraphics[width=\textwidth]{figs/metric_learning.pdf} \caption{Metric Learning for the Clustering Process} \label{fig:sub-metric learning} \end{subfigure} \caption{The proposed WADG method.
(a): the general workflow of the WADG method. The model mainly consists of three parts: the feature extractor, the classifier, and the critic function. During training, the model receives data from all source domains. The feature extractor is trained to learn invariant features together with the critic function in an adversarial manner. (b): for each pair of source domains $\mathcal{D}_i$ and $\mathcal{D}_j$, the optimal transport process aligns the features from the different domains. (c): the metric learning process. For a batch of instances from all source domains, we first roughly mine the positive and negative pairs via Eq.~\ref{eq_round_mining}. Then, we compute the corresponding weights via Eq.~\ref{Eq.negative_pair_weight} and Eq.~\ref{Eq.positive_pair_weight} to obtain $\mathcal{L}_{MS}$, which guides the clustering process.} \label{fig:whole workflow} \end{figure}
Through this pair-wise optimal transport process, the learner can extract a domain-invariant feature space. We then propose to apply metric learning approaches to leverage the class label similarity for domain-independent clustering in the feature space. We introduce the metric learning for domain-agnostic clustering in the next section.
\subsection{Metric Learning for Domain Agnostic Classification Boundary} \label{metric_learning_domain_agnostic} As mentioned above, only aligning the marginal feature distributions via adversarial training is not sufficient for DG, since there may still exist an ambiguous decision boundary~\cite{dou2019domain}. When predicting on the target domain, the learner may still suffer from this ambiguous decision boundary. To solve this, we implement metric learning techniques to help cluster the instances and promote a better prediction boundary for better generalization. That is, in addition to the supervised source classification and the alignment of the marginal distributions across domains with the Wasserstein adversarial training defined above, we further encourage robust domain-independent local clustering by leveraging the label information through the metric learning objective. The brief workflow is illustrated in Fig.~\ref{fig:sub-metric learning}. Specifically, we adopt the metric learning objective to require that images, regardless of their domains, follow two principles: 1) images from the same class are semantically similar and should therefore be mapped nearby in the embedding space (semantic clustering), while 2) instances from different classes should be mapped apart from each other in the embedding space. Since the goal of domain generalization is to learn a hypothesis that predicts well on all domains, this clustering should also be achieved in a domain-agnostic manner. To this end, we mix the instances from all source domains together and encourage the clustering of domain-agnostic features via metric learning techniques to achieve a domain-independent clustering decision boundary. For this, during each training iteration, given a batch $\{\mathbf{x}_1^{(i)},y_1^{(i)},\dots,\mathbf{x}^{(i)}_b,y^{(i)}_b\}_{i=1}^m$ from the $m$ source domains with batch size $b$, we mix all the instances from each domain, denoted by $\{(\mathbf{x}_i^B,y_i^B)\}_{i=1}^{m^\prime}$ with total size $m^\prime$. We first measure the relative similarity between the negative and positive pairs, which is introduced in the next sub-section.
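For concreteness, the following is a minimal sketch of the batch mixing and of the similarity matrix of Eq.~\ref{Eq.dotsimilarity}; the array shapes and the number of classes are hypothetical, and the L2 normalization is one possible choice of normalized embedding space.
\begin{verbatim}
import numpy as np

# Hypothetical embeddings: b instances from each of m source
# domains, already mapped by the feature extractor f.
m, b, d = 3, 4, 8
rng = np.random.default_rng(1)
feats = [rng.normal(size=(b, d)) for _ in range(m)]
labels = [rng.integers(0, 7, size=b) for _ in range(m)]

# Mix all source-domain instances into one batch of size m' = m * b.
Z = np.vstack(feats)
y = np.concatenate(labels)

# Normalize so that the inner product is a bounded similarity.
Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)

# Pairwise similarity matrix with S[i, j] = <f(x_i), f(x_j)>.
S = Z @ Z.T

# Positive/negative pair masks (ignoring self-pairs on the
# diagonal), used afterwards for pair mining and weighting.
pos = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
neg = y[:, None] != y[None, :]
\end{verbatim}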
\subsubsection{Pair Similarity Mining} Assume $\mathbf{x}_i^{B}$ is an anchor; a negative pair $\{\mathbf{x}_i^B,\mathbf{x}_j^B \}$ and a positive pair $\{\mathbf{x}_i^B,\mathbf{x}_{j^\prime}^B \}$ are selected if $S_{i,j}$ and $S_{i,{j^\prime}}$ satisfy the negative condition $S_{i,j}^{-}$ and the positive condition $S_{i,j}^{+}$, respectively:
\begin{equation} \label{eq_round_mining} S_{i,j}^{-}\ge \min_{y_i= y_k} S_{i,k}-\epsilon, \; \; \; \; S_{i,j^\prime}^{+}\leq \max_{y_i\neq y_k} S_{i,k}+\epsilon \end{equation}
where $\epsilon$ is a given margin. Through Eq.~\ref{eq_round_mining} with a specific margin $\epsilon$, we obtain a set of negative pairs $\mathcal{N}$ and a set of positive pairs $\mathcal{P}$. This process (Eq.~\ref{eq_round_mining}) roughly clusters the instances around each anchor by selecting informative pairs (inside the margin) and discarding the less informative ones (outside the margin). With such roughly selected informative pairs $\mathcal{N}$ and $\mathcal{P}$, we then assign different weights to the instances. Intuitively, if an instance has a higher similarity with an anchor, it should stay closer to the anchor, and vice-versa. We introduce the weighting process in the next section.
\subsubsection{Pair Weighting}~\label{sect:pair_weighting} For positive pairs, instances that are more similar to the anchor should receive higher weights, while negative pairs should receive higher weights the more dissimilar they are, no matter which domains they come from. Through this process, we can push the instances into several groups by measuring their similarities. For $N$ instances, computing the similarity between each pair results in a similarity matrix $\mathbf{S}\in\mathbb{R}^{N\times N}$. A loss function based on pair similarities can then generally be written as $\mathcal{F}(\mathbf{S},y)$. Let $S_{i,j}$ be the element in the $i^{th}$ row and $j^{th}$ column of the matrix $\mathbf{S}$. The gradient $w.r.t.$ the network parameters can be computed by
\begin{equation} \begin{split} \frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial\boldsymbol{\theta}_f}&= \frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial\mathbf{S}}\frac{\partial \mathbf{S}}{\partial\boldsymbol{\theta}_f}=\sum_{i=1}^N\sum_{j=1}^N\frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}}\frac{\partial S_{i,j}}{\partial \boldsymbol{\theta}_f} \end{split} \label{Eq.dev_theta_f} \end{equation}
Eq.~\ref{Eq.dev_theta_f} can be reformulated into a new loss function $\mathcal{L}_{MS}$ as
\begin{equation} \mathcal{L}_{MS}= \sum_{i=1}^N\sum_{j=1}^N\frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}}S_{i,j} \label{Eq.reformed_lms} \end{equation}
that is, a metric loss defined $w.r.t.$ the similarity matrix $\mathbf{S}$ and the labels $y$ can usually be reformulated as Eq.~\ref{Eq.reformed_lms}. The term $\frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}}$ in Eq.~\ref{Eq.reformed_lms} can be treated as a constant scalar, since it carries no gradient of $\mathcal{L}_{MS}$ $w.r.t.$ $\boldsymbol{\theta}_f$. We then only need to determine the sign of $\frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}}$ for the positive and negative pairs. Since the goal is to encourage positive pairs to be closer, we can assume this term to be non-positive for a positive pair, $i.e.$, $\frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}}\leq 0$. Conversely, for a negative pair, we can assume $\frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}}\geq 0$.
Thus, Eq.~\ref{Eq.reformed_lms} can be rewritten as a summation over all positive pairs ($y_i=y_j$) and negative pairs ($y_i\neq y_j$),
\begin{equation} \begin{split} \mathcal{L}_{MS}&=\sum_{i=1}^N\sum_{j=1}^N\frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}}S_{i,j}\\ &=\sum_{i=1}^N\left(\sum_{j=1,y_j\neq y_i}^N \frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}} S_{i,j}+\sum_{j=1,y_j= y_i}^N \frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}} S_{i,j} \right)\\ &=\sum_{i=1}^N \left(\sum_{j=1,y_j\neq y_i}^N w_{i,j}S_{i,j} - \sum_{j=1,y_j= y_i}^N w_{i,j} S_{i,j} \right) \end{split} \label{L_ms_with_weight} \end{equation}
where $w_{i,j}= \big | \frac{\partial\mathcal{F}(\mathbf{S},y)}{\partial S_{i,j}} \big|$ is regarded as the weight of the similarity $S_{i,j}$. For each pair of instances $i,j$, we can assign a different weight according to the similarity $S_{i,j}$. We denote by $w_{i,j}^+$ and $w_{i,j}^-$ the weight of a positive and a negative pair's similarity, respectively. Previously,~\cite{yi2014deep} and \cite{wang2019multi} applied a soft function to measure the similarity. We consider the similarity of the pair itself ($i.e.$ the self-similarity), the negative similarity, and the positive similarity. The weight of the self-similarity is measured by $\exp({S_{i,j}-\lambda})$ with a small threshold $\lambda$. For a selected negative pair $\{\mathbf{x}^B_i, \mathbf{x}^B_j\}\in \mathcal{N}$, the corresponding weight (see Eq.~\ref{L_ms_with_weight}) is defined by the soft function of the self-similarity together with the negative similarity:
\begin{equation} \begin{split} w_{i,j}^-&=\frac{1}{\exp(\beta(\lambda-S_{i,j}))+\sum_{k\in\mathcal{N}}\exp(\beta(S_{i,k}-S_{i,j}))}\\ &= \frac{\exp(\beta(S_{i,j}-\lambda))}{1+\sum_{k\in\mathcal{N}}\exp(\beta(S_{i,k}-\lambda))} \end{split} \label{Eq.negative_pair_weight} \end{equation}
Similarly, the weight of a positive pair $\{\mathbf{x}^B_i,\mathbf{x}^B_j\}\in\mathcal{P}$ is defined by
\begin{equation} w_{i,j}^+=\frac{1}{\exp(-\alpha(\lambda-S_{i,j}))+\sum_{k\in\mathcal{P}}\exp(-\alpha(S_{i,k}-S_{i,j}))} \label{Eq.positive_pair_weight} \end{equation}
Substituting Eq.~\ref{Eq.negative_pair_weight} and Eq.~\ref{Eq.positive_pair_weight} into Eq.~\ref{L_ms_with_weight} and integrating over the similarities $S_{i,j}$, we obtain the objective function for clustering,
\begin{equation} \begin{split} \mathcal{L}_{MS}= \frac{1}{m^\prime}\sum_{i=1}^{m^\prime}\big\{\frac{1}{\alpha}\log[1+\sum_{k\in\mathcal{P}_i}\exp(-\alpha(S_{i,k}-\lambda))] +\frac{1}{\beta}\log[1+\sum_{k\in\mathcal{N}_i}\exp(\beta(S_{i,k}-\lambda))]\big\} \label{Eq.multi_similarity_loss} \end{split} \end{equation}
where $\lambda$, $\alpha$ and $\beta$ are fixed hyper-parameters, which we elaborate on in Section~\ref{sect:implementation_details}. The whole objective of our proposed method is then
\begin{equation} \mathcal{L} = \arg\min_{\theta_f,\theta_c}\max_{\theta_d} \mathcal{L}_{C}+\lambda_d\mathcal{L}_{D} +\lambda_s \mathcal{L}_{MS} \label{Eq:full_objective} \end{equation}
where $\lambda_d$ and $\lambda_s$ are coefficients regularizing $\mathcal{L}_{D}$ and $\mathcal{L}_{MS}$, respectively. Based on the above, we propose the WADG algorithm in Algorithm~\ref{wadg_algo}. We show the empirical results in the next section. \begin{center} \begin{algorithm}[t!]
\caption{The proposed WADG algorithm (one round)} \begin{algorithmic}[1] \REQUIRE Samples from the $m$ source domains $\{{\mathcal{D}}_i\}_{i=1}^m$ \ENSURE Neural network parameters $\boldsymbol{\theta}_{f}$, $\boldsymbol{\theta}_c$, $\boldsymbol{\theta}_d$ \FOR{each mini-batch of samples $\{(\mathbf{x}^{(i)}_s,y^{(i)}_s)\}$ from the source domains} \STATE Compute the classification loss $\mathcal{L}_C$ over all domains according to Eq.~\ref{Eq.cls_loss_all_domains} \STATE Compute the Wasserstein distance $\mathcal{L}_D$ between each pair of source domains according to Eq.~\ref{Eq.mult_domain_Wa} \STATE Mix the instances from the different domains and compute the similarities by Eq.~\ref{Eq.dotsimilarity} \STATE Roughly select the positive and negative pairs by solving Eq.~\ref{eq_round_mining} \STATE Compute the similarity loss $\mathcal{L}_{MS}$ on all source instances by Eq.~\ref{Eq.multi_similarity_loss} \STATE Update $\boldsymbol{\theta}_f,\boldsymbol{\theta}_c$ and $\boldsymbol{\theta}_d$ by solving Eq.~\ref{Eq:full_objective} with learning rate $\eta$: \begin{equation*} \begin{split} &\boldsymbol{\theta}_f \leftarrow \boldsymbol{\theta}_f - \eta\frac{\partial(\mathcal{L}_{C}+\lambda_d\mathcal{L}_D+\lambda_s\mathcal{L}_{MS})}{\partial\boldsymbol{\theta}_f}, \\ &\boldsymbol{\theta}_c \leftarrow \boldsymbol{\theta}_c - \eta\frac{\partial(\mathcal{L}_{C}+\lambda_d\mathcal{L}_D+\lambda_s\mathcal{L}_{MS})}{\partial\boldsymbol{\theta}_c}, \\ &\boldsymbol{\theta}_d \leftarrow \boldsymbol{\theta}_d + \eta\frac{\partial \mathcal{L}_D}{\partial\boldsymbol{\theta}_d} \end{split} \end{equation*} \ENDFOR \STATE Return the optimal parameters $\boldsymbol{\theta}_f^\star$, $\boldsymbol{\theta}_c^\star$ and $\boldsymbol{\theta}_d^\star$ \end{algorithmic} \label{wadg_algo} \end{algorithm} \end{center}
\section{Experiments and Results} \label{experiments results} \subsection{Datasets} In order to evaluate our proposed approach, we conduct experiments on two commonly used datasets: \textbf{VLCS}~\cite{torralba2011unbiased} and \textbf{PACS}~\cite{Li2017dg}. The VLCS dataset contains images from four different domains: PASCAL VOC2007 (V), LabelMe (L), Caltech (C), and SUN09 (S). Each domain includes five classes: \emph{bird, car, chair, dog} and \emph{person}. The PACS dataset is a recent benchmark dataset for domain generalization. It consists of four domains: Photo (P), Art painting (A), Cartoon (C), and Sketch (S), with objects from seven classes: \emph{dog, elephant, giraffe, guitar, house, horse} and \emph{person}.
\subsection{Baselines and Implementation Details} \label{sect:implementation_details} To show the effectiveness of our proposed approach, we compare our algorithm on the benchmark datasets with the following recent domain generalization methods. \begin{itemize} \item \textbf{\emph{Deep All}}: Following the standard evaluation protocol of domain generalization, a pre-trained AlexNet or ResNet-18 is fine-tuned on the aggregation of all source domains with the classification loss only. \item TF~\cite{li2017deeper}: A low-rank parameterized Convolutional Neural Network model which aims to reduce the total number of model parameters for end-to-end domain generalization training. \item CIDDG~\cite{li2018deep}: Matches the conditional distribution by changing the class prior. \item MLDG~\cite{li2018learning}: The meta-learning approach for domain generalization.
It runs the meta-optimization on simulated meta-train/meta-test sets with domain shift. \item CCSA~\cite{Motiian_2017_ICCV}: Adopts the contrastive semantic alignment loss together with the source classification loss for both the domain adaptation and the domain generalization problem. \item MMD-AAE~\cite{li2018domain}: Adopts an Adversarial Autoencoder together with the Maximum Mean Discrepancy to extract a domain-invariant feature representation for generalization. \item D-SAM~\cite{d2018domain}: Aggregates domain-specific modules and merges general and specific information together for generalization. \item JiGen~\cite{carlucci2019domain}: Achieves domain generalization by solving Jigsaw puzzles as an unsupervised auxiliary task. \item MASF~\cite{dou2019domain}: A meta-learning style method based on MLDG, combined with a contrastive/triplet loss to encourage a domain-independent semantic feature space. \item MMLD~\cite{dg_mmld}: An approach that mixes all source domains by assigning pseudo domain labels in order to extract a domain-independent clustered feature space. \end{itemize}
Following the general evaluation protocol of domain generalization ($e.g.$~\cite{dou2019domain,dg_mmld}) on the PACS and VLCS datasets, we first test our algorithm using an AlexNet~\cite{krizhevsky2012imagenet} backbone with its last layer removed as the feature extractor. For preparing the datasets, we follow the train/val./test split and the data pre-processing protocol of~\cite{dg_mmld}. As the classifier, we initialize a three-layer MLP whose input size matches the output size of the feature extractor and whose output size matches the number of object categories (2048-256-256-$K$), where $K$ is the number of classes. For the critic network, we also adopt a three-layer MLP (2048-1024-1024-1). For the metric learning objective, we use the output of the second layer of the classifier network (of size 256) to compute the similarities. We adopt the ADAM~\cite{kingma2014adam} optimizer for training, with a learning rate ranging from $5e-4$ to $5e-5$ for the whole model and a mini-batch size of $64$. For stable training, we set the coefficient $\lambda_d= \frac{2}{1+\exp(-\delta p)}-1$ to regularize the adversarial loss, where $\delta=10$ and $p$ is the training progress. We empirically set $\lambda_s$ in the range $1e-4$ to $1e-6$ via reverse validation. For the hyper-parameters in $\mathcal{L}_{MS}$ (see Eq.~\ref{eq_round_mining} and Eq.~\ref{Eq.multi_similarity_loss}), we empirically set $\lambda=0.5$, $\epsilon = 1e-5$, $\alpha = 2.0$, $\beta = 40.0$. The experiments are implemented with \emph{PyTorch}~\cite{paszke2019pytorch}. To avoid over-training, we also adopt early stopping.
\begin{table}[] \caption{Empirical results (accuracy $\%$) on the PACS dataset with a pre-trained AlexNet as feature extractor. Each column is named after the target domain of the corresponding generalization task. For example, the third column `Cartoon' refers to the task where the domain \emph{Cartoon} is the target while the model is trained on the remaining three domains.}\label{tb_pacs_dataset} \centering \resizebox{1.0\textwidth}{!}{\begin{tabular}{l|llll|l} \toprule Method & Art & Cartoon & Sketch & Photo & Avg.
\\ \hline Deep All & $63.30$ & $63.13$ & $54.07$ & $87.70$ & $67.05$\\ TF\cite{li2017deeper} & $62.86$ & $66.97$ & $57.51$ & $89.50$ & $69.21$ \\ CIDDG\cite{li2018deep} & $62.70$ & $69.73$ & $64.45$ & $78.65$ & $68.88$ \\ MLDG \cite{li2018learning}& $66.23$ & $66.88$ & $58.96$ & $88.00$ & $70.01$\\ D-SAM\cite{d2018domain} & $63.87$ & $70.70$ & $64.66$ & $85.55$ & $71.20$\\ JiGen\cite{carlucci2019domain} & $67.63$ & $71.71$ & $65.18$ & $89.00$ & $73.38$\\ MASF\cite{dou2019domain} & $\mathbf{70.35}$ & $ 72.46$ & $67.33$ &$\mathbf{90.68}$ & $75.21$\\ MMLD\cite{dg_mmld} & $69.27$ & $\mathbf{72.83}$ & $66.44$ &$88.98$ & $74.38$\\ \hline Ours &$70.21$ & $72.51$ & $\mathbf{70.32}$ & $89.81$ & $\mathbf{75.71}$ \\ \bottomrule \end{tabular}} \end{table}
\subsection{Experimental Results} \begin{table}[] \caption{Empirical results (accuracy $\%$) on the VLCS dataset with a pre-trained AlexNet as feature extractor.}\label{tb_vlcs_dataset} \resizebox{1.0\textwidth}{!}{\begin{tabular}{l|llll|l} \toprule Method & Caltech & LabelMe & Pascal & Sun & Avg. \\ \midrule Deep All & $92.86$ & $63.10$ & $68.67$ & $64.11$ & $72.19$ \\ D-MATE\cite{ghifary2015domain} & $89.05$ & $60.13$ & $63.90$ & $61.33$ & $68.60$ \\ CIDDG\cite{li2018deep} & $88.83$ & $63.06$ & $64.38$ & $62.10$ & $69.59$ \\ CCSA\cite{Motiian_2017_ICCV} & $92.30$ & $62.10$ & $67.10$ & $59.10$ & $70.15$ \\ SLRC\cite{ding2017deep} & $92.76$ & $62.34$ & $65.25$ & $63.54$ & $70.97$ \\ TF\cite{li2017deeper} & $93.63$ & $63.49$ & $69.99$ & $61.32$ & $72.11$ \\ MMD-AAE~\cite{li2018domain} & $94.40$ & $62.60$ & $67.70$ & $64.40$ & $72.28$ \\ D-SAM~\cite{d2018domain} & $91.75$ & $56.95$ & $58.95$ & $60.84$ & $67.03$ \\ MLDG\cite{li2018learning} & $94.4$ & $61.3$ & $67.7$ & $65.9$ & $73.30$ \\ JiGen\cite{carlucci2019domain} & $96.93$ & $60.90$ & $70.62$ & $64.30$ & $73.19$ \\ MASF\cite{dou2019domain} & $94.78$ & $64.90$ & $69.14$ & $67.64$ & $74.11$ \\ MMLD\cite{dg_mmld} & $96.66$ & $58.77$ & $\mathbf{71.96}$ & $\mathbf{68.13}$ & $73.88$ \\ \hline Ours & $\mathbf{97.85}$ & $\mathbf{65.26}$ & $71.47$ & $66.62$ & $\mathbf{75.31}$ \\ \bottomrule \end{tabular}} \end{table}
We first report the empirical results on the PACS and VLCS datasets using AlexNet as the feature extractor. For each generalization task, we train the model on all source domains, test on the target domain, and report the average of the top-5 accuracies. The results on the PACS and VLCS datasets using AlexNet are reported in Table~\ref{tb_pacs_dataset} and Table~\ref{tb_vlcs_dataset}, respectively. In each table, the empirical results refer to the accuracy obtained when training on the source domains and testing on the target domain. From the empirical results, we can see that our method outperforms the baselines on both the PACS and the VLCS dataset, indicating an improvement over the benchmark performances.
\subsection{Further Analysis} \label{further_analysis} To further show the effectiveness of our algorithm, especially with deeper models, we follow~\cite{dou2019domain} and also report the results of our algorithm with a ResNet-18 backbone on the PACS dataset in Table~\ref{pacs_res18}. With the ResNet-18 backbone, the output feature dimension is $512$. From the results, we observe that our method outperforms the baselines on most generalization tasks, with a $+1.6\%$ accuracy improvement on average. Then, we conduct ablation studies on each component of our algorithm.
We report the empirical results of the ablation studies in Table~\ref{table_ablation_study}, covering both the AlexNet and the ResNet-18 backbone. We compare the following ablations: (1)~\emph{Deep All}: train the feature extractor on the source domain datasets with the classification loss only, that is, neither the optimal transport nor the metric learning technique is adopted; (2)~\emph{No $\mathcal{L}_D$}: train the model with the classification loss and the metric learning loss but without the adversarial training component; (3)~\emph{No $\mathcal{L}_{MS}$}: train the model with the classification loss and the adversarial loss but without the metric learning component; (4)~\emph{WADG-All}: train the model with the full objective of Eq.~\ref{Eq:full_objective}. From the results, we observe that once we omit the adversarial training, the accuracy drops rapidly ($\sim3.5\%$ with the AlexNet backbone and $\sim5.8\%$ with the ResNet-18 backbone). The contribution of the metric learning loss is relatively small compared with the adversarial loss: once we omit the metric learning loss, the performance drops by $\sim2.1\%$ and $\sim 2.5\%$ with the AlexNet and the ResNet-18 backbone, respectively.
\begin{figure} \centering \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=\textwidth]{figs/Deep_all.pdf} \caption{Deep All} \label{fig:sub-first} \end{subfigure} \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=\textwidth]{figs/No_dis.pdf} \caption{No $\mathcal{L}_D$} \label{fig:sub-second} \end{subfigure}\\ \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=\textwidth]{figs/NO_MTR.pdf} \caption{No $\mathcal{L}_{MS}$} \label{fig:sub-third} \end{subfigure} \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=\textwidth]{figs/WADG.pdf} \caption{WADG-All} \label{fig:sub-fourth} \end{subfigure} \caption{T-SNE visualization of the ablation studies on the PACS dataset with \emph{Photo} as the target domain. A detailed analysis is presented in Section~\ref{further_analysis}.} \label{fig:tsne} \end{figure}
\begin{table}[] \centering \caption{Empirical results (accuracy $\%$) on the PACS dataset with a pre-trained ResNet-18 as feature extractor.}\label{pacs_res18} \resizebox{1.0\textwidth}{!}{\begin{tabular}{l|llll|l} \toprule Method & Art & Cartoon & Sketch & Photo & Avg. \\ \hline Deep All & $77.87$ & $75.89$ & $69.27$ & $95.19$ & $79.55$ \\ D-SAM\cite{d2018domain} & $77.33$ & $72.43$ & $77.83$ & $95.30$ & $80.72$ \\ JiGen\cite{carlucci2019domain} & $79.42$ & $75.25$ & $71.35$ & $96.03$ & $80.51$ \\ MASF\cite{dou2019domain} & $80.29$ & $77.17$ & $71.69$ & $94.99$ & $81.04$ \\ MMLD\cite{dg_mmld} & $81.28$ & $77.16$ & $72.29$ & $\mathbf{96.09}$ & $81.83$ \\ \hline Ours & $\mathbf{81.56}$ & $\mathbf{78.02}$ & $\mathbf{78.43}$ & $95.82$ & $\mathbf{83.45}$ \\ \bottomrule \end{tabular}} \end{table}
\begin{table}[] \centering \caption{Ablation studies on the PACS dataset for all components of our proposed method, using the AlexNet and ResNet-18 backbones} \label{table_ablation_study} \resizebox{1.0\textwidth}{!}{\begin{tabular}{l|lllllllll|l} \toprule & \multicolumn{5}{c|}{AlexNet} & \multicolumn{5}{c}{ResNet-18} \\ Ablation & Art & Cartoon & Sketch & Photo & \multicolumn{1}{l|}{Avg.} & Art & Cartoon & Sketch & Photo & Avg.
\\ \midrule Deep All & $63.30$ & $63.13$ & $54.07$ & $87.70$ & \multicolumn{1}{l|}{$67.05$} & $77.87$ & $75.89$ & $69.27$ & $95.19$ & $79.55$ \\ No $\mathcal{L}_D$ & $65.80$ & $69.64$ & $63.91$ & $89.53$ & \multicolumn{1}{l|}{$72.22$} & $74.62$ & $73.02$ & $68.67$ & $94.86$ & $77.79$ \\ No $\mathcal{L}_{MS}$ & $66.78$ & $71.47$ & $68.12$ & $88.87$ & \multicolumn{1}{l|}{$73.65$} & $78.25$ & $76.27$ & $73.42$ & $95.68$ & $80.91$ \\ WADG-All & $\mathbf{70.21}$ & $\mathbf{72.51}$ & $\mathbf{70.32}$ & $\mathbf{89.81}$ & \multicolumn{1}{l|}{$\mathbf{75.71}$} & $\mathbf{81.56}$ & $\mathbf{78.02}$ & $\mathbf{78.43}$ & $\mathbf{95.82}$ & $\mathbf{83.45}$ \\ \bottomrule \end{tabular}} \end{table}
Then, to better understand the contribution of each component of our algorithm, the T-SNE visualizations of the ablation studies on the PACS dataset are presented in Fig.~\ref{fig:tsne} for the generalization task with target domain \emph{Photo}. Since our goal is not only to align the feature distributions but also to encourage a cohesive and separable boundary, we report the T-SNE features of all source domains and the target domain to show the feature alignment and the clustering across domains. As we can see, the T-SNE features of \emph{Deep All} neither project the instances from different domains to align with each other nor cluster the features into groups. The T-SNE features of \emph{No $\mathcal{L}_D$} show that the metric learning loss can cluster the features to some extent, but without the adversarial training the features are not aligned well. The T-SNE features of \emph{No $\mathcal{L}_{MS}$} show that the adversarial training helps to align the features from different domains but does not achieve a good clustering performance. The T-SNE features of \emph{WADG-All} show that the full objective helps not only to align the features from different domains but also to cluster them into several groups, which confirms the effectiveness of our algorithm.
\section{Conclusion} In this paper, we proposed the Wasserstein Adversarial Domain Generalization algorithm, which not only aligns the source domain features and transfers to an unseen target domain but also leverages the label information across domains. We first adopt optimal transport with the Wasserstein distance to align the marginal distributions and then adopt a metric learning method to encourage a domain-independent, distinguishable feature space with a clear classification boundary. The experimental results showed that our proposed algorithm outperforms most of the baseline methods on two standard benchmark datasets. Furthermore, the ablation studies and the visualization of the T-SNE features also confirmed the effectiveness of our algorithm. \bibliographystyle{plain}
\section{INTRODUCTION} \label{introduction} In order to safely and efficiently navigate in crowded public spaces, Unmanned Ground Vehicles (UGVs) need to reason about their static and dynamic surroundings and predict the occupancy of space to reliably perform obstacle avoidance\cite{ess2009moving}. Generally, static objects can be avoided with small safety distances, whereas for compliant navigation, dynamic objects need to be avoided by larger distances~\cite{pfeiffer2016predicting}. Moreover, robots should avoid crossing pedestrian paths, which not only requires correctly classifying dynamic objects as such, but also calls for accurate motion estimation and prediction. In addition to humans, other dynamic objects with varying size and speed, such as animals or other vehicles, may appear in the surroundings. Hence, the detection may not be restricted to humans only but needs to generalize.
We identified two major families of state-of-the-art techniques for detecting and tracking objects in crowded scenes. The first group uses point cloud data, often generated from highly accurate LiDAR sensors, which allows for the detection of generic dynamic objects and is typically used for autonomous driving~\cite{kraemer2018lidar, asvadi20163d}. The cost of LiDAR sensors makes most of the algorithms in this first group inapplicable to small, commercially used UGVs, such as delivery robots for hospitals or airports. The second group uses visual information from images and primarily focuses on the detection of a predetermined number of object classes, such as pedestrians and vehicles\cite{jafari2014real, redmon2018yolov3}. However, these approaches often lack the ability to detect generic dynamic objects and do not run in real-time on computationally-constrained platforms~\cite{zhang2018towards}. In order to successfully deploy UGVs at large scale, low-cost sensor setups, such as stereo cameras, should be used along with efficient algorithms.
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{images/title_page_gruen.png} \vspace{-6mm} \caption{Visualization of the output of the proposed dynamic object detection and tracking approach. \textbf{Left:} the input camera image overlaid with the output of a visual people detector, indicating the confidence of the detection and a tracking ID. \textbf{Right:} a resulting occupancy grid, with correctly identified static objects (red voxels) and a detected pedestrian (the yellow point cloud). The red arrow visualizes the pedestrian's estimated velocity, the yellow track shows the past trajectory, and the blue cuboid indicates that the visual people detector recognized this cluster as a person. Gray floor areas indicate high costs in the occupancy grid. The robot's footprint and field of view are shown in green and white, respectively.} \label{fig_title_page} \vspace{-5mm} \end{figure}
We introduce a solution that leverages stereo camera data to reliably and accurately detect and track dynamic objects. To this end, we first present a novel algorithm to detect generic dynamic objects based on their motion. For enhanced perceptual performance in crowded spaces, we use a visual people detector to classify humans motion-independently as a specific class of dynamic objects, as depicted in Figure~\ref{fig_title_page}. Our approach handles short-time occlusions using the estimated velocity of the dynamic objects.
To the best of our knowledge, this is the first work to propose a complete solution that uses stereo cameras for detecting and tracking generic dynamic objects by combining global nearest neighbor searches and a visual people detector. The system relies on the noisy data of a single stereo camera only and is designed to run on computationally-constrained platforms. As shown in the work of Liu \cite{LuciaQiLiu2020navigation}, our perception system has been used for navigating a UGV in real-life crowds. We encourage the reader to consult the supplementary video (\url{https://youtu.be/AYjgeaQR8uQ}) for more visualizations. The contributions of this paper are as follows:
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/pipeline_small_arranged.JPG} \vspace{-5mm} \caption{ Overview of our pipeline: the inputs are stereo images and the estimated pose of the robot from a visual SLAM module. The output is a 2D occupancy grid, which enables planning paths close to static objects and ensures avoidance of dynamic objects by a safety distance.} \label{fig_pipeline} \vspace{-4.3mm} \end{figure*}
\begin{itemize} \item a novel real-time algorithm to detect and track generic dynamic objects based on noisy stereo camera data; \item a method to combine the aforementioned algorithm with a vision-based people detector to improve the detection and tracking performance and handle \mbox{short-time occlusions}; \item an evaluation of our pipeline on challenging datasets, demonstrating its performance and reliability for increased mobile robot safety. \end{itemize}
\section{RELATED WORK} \label{related_work} The task of detecting and tracking dynamic agents has led to a variety of different approaches, as robotic platforms range from small, commercial UGVs to autonomous cars. These approaches differ in computational load, robustness against noise, and required sensor data, namely point clouds, images, or a combination of both.
\textbf{Image based algorithms:} Assuming that a feature-based SLAM system is used, one approach is to utilize outliers of the feature tracking module to detect dynamic objects~\cite{xu2018mid, barsan2018robust}. Song et al. \cite{song2017real} argue that this approach should be favored over the usage of optical flow to estimate the velocities of visible objects \cite{pfeiffer2010efficient}. This technique, however, requires the tracking algorithm to use dense feature points to ensure that outliers point to all dynamic objects. This is not guaranteed for many visual SLAM systems and conflicts with our goal to design a generic dynamic object tracking pipeline. Nonetheless, optical flow methods can be enhanced to produce so-called scene flow, which computes per-pixel 3D velocities. However, this method does not run in real-time on computationally-constrained platforms~\cite{schuster2018combining, DBLP:journals/corr/abs-1808-09297}. Another approach is the specific detection of pedestrians using visual data \cite{nguyen2019confidence, zhang2018towards, dollar2014fast, ess2009moving}, where deep neural networks receive much attention \cite{simon2019complexer, redmon2018yolov3, zhang2016faster}. Segmentation networks \cite{siam2018rtseg, paszke2016enet} are an alternative to detector networks, but do not provide object instances and are hence impractical for differentiating between multiple people in crowded scenes. Object-instance segmentation networks overcome this drawback \cite{he2017mask}.
In this work, we do not limit ourselves to detecting certain object classes only, but pursue a generic dynamic object detection.
\textbf{Point cloud based algorithms:} Detection algorithms based on Iterative Closest Point (ICP) match current segments of a point cloud to a previous point cloud to reveal their motion \cite{jiang2017high, li2017rgb, christie20163d}. This approach works best for rigid objects like cars, and its performance decreases considerably for deforming objects like humans. This property does not match the needs of our system, where we put a special emphasis on human detection and tracking. When using a volumetric occupancy grid, the classification of points as static or dynamic can be based on the consistency of the occupancy of the voxels \cite{asvadi20163d, azim2014layer, broggi2013full}. The performance of this approach depends on the voxel size: smaller voxels allow for a more precise occupancy analysis while requiring more computational effort. A probabilistic metric for the occupancy of the voxels is necessary to handle noise in the data, introducing a time-lag into the classification process. Moreover, there are several algorithms designed for autonomous driving that rely on high-quality point clouds from LiDAR sensors and powerful GPUs to detect cars and pedestrians \cite{kraemer2018lidar, ku2018joint, miller2016dynamic}. In contrast, our target platforms are small UGVs with low computational power and potentially without an expensive LiDAR sensor. Our approach directly compares point clouds across frames and is related to the work of Yoon et al.~\cite{yoon2019mapless} and Shi et al.~\cite{shi2018dynamic}. Yoon et al.~\cite{yoon2019mapless} work with LiDAR data and need to introduce a time-lag of one frame to reliably cope with occlusions. We expand their idea to handle noisy stereo camera data with potentially incomplete depth information and additionally introduce a novel approach to differentiate between dynamic and previously occluded points without the need for a time-lag of one frame. Shi et al.~\cite{shi2018dynamic} use RGB-D data and remove points during dense reconstruction in case they are spatially inconsistent between frames. In contrast, our method is able to classify all points of an object as dynamic, even if only a subset of them shows spatial inconsistency. As a consequence, we obtain a more complete and robust classification of dynamic objects. Compared to the work of Osep et al. \cite{osep2017combined}, we do not limit our system to a set of predefined detectable classes but implement a generic dynamic object detector instead.
\section{METHODOLOGY} \label{methodology} An overview of the proposed stereo camera-based perception approach is given in Figure~\ref{fig_pipeline}, and the remainder of this section details its individual modules. To associate camera-based point clouds with the global frame, we localize the robot using precise Visual Inertial Odometry (VIO). \subsection{Point Cloud Generation} \label{cloud_generation} The first module generates a 3D point cloud from undistorted and rectified stereo images. We designed our approach to be generic regarding the inputs; hence, any algorithm extracting a disparity map from stereo images can be used in this module, of which we consider the well-established block-matching approach and deep neural networks.
\subsubsection{Block-Matching} \label{block_matching} We use semi-global block-matching~\cite{hirschmuller2007stereo} and apply a weighted-least-squares filter \cite{min2014fast} on the resulting disparity map. \subsubsection{Deep Stereo} \label{deep_stereo} Recently, deep neural networks that learn to infer disparity values from stereo images have emerged~\cite{khamis2018stereonet, chang2018pyramid, mayer2016large, wang2018anytime}. We use MADNet \cite{tonioni2018real}, as we found this network to deliver a suitable trade-off between run-time and performance. Figure~\ref{fig_deep} shows an exemplary disparity map generated by both methods.
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{images/triple_deep_stereo.JPG} \caption{Depth representations generated using stereo images. \textbf{Left:} Block-matching~\cite{hirschmuller2007stereo} cannot generate depth information in parts of the low-textured object on the right, or on the shiny surface of the floor. \textbf{Middle:} MADNet \cite{tonioni2018real} captures most parts of the object and the floor. Hence, it delivers more complete depth information than block-matching in this scenario. \textbf{Right:} raw image.} \label{fig_deep} \vspace{-6mm} \end{figure}
\subsection{Point Cloud Filtering} \label{filter_point_cloud} We filter the point cloud generated by the previous module to reduce noise and to down-sample the data for real-time performance. We denote the point cloud after initial cropping as $h^d$ and after filtering as $h^s$. We apply the following sequence: \begin{itemize} \item crop the point cloud at the depth limit $l_{d}$, up to which the measurements can be trusted\footnote{The limit $l_{d}$ must be chosen based on the camera setup and the algorithm generating the disparity map.}; \item crop the point cloud at the heights $l_{g}$ and $l_h$, to remove all ground plane and ceiling points, respectively; \item apply a voxel filter with leaf size $l_{l}$ to reduce the size of the point cloud by an order of magnitude and to ensure an even density of the points in 3D; \item apply a filter to remove all points with fewer than $l_n$ neighbors within a radius $l_r$, to reduce noise points, which occur most notably at the edges of objects. \end{itemize}
\subsection{Clustering and 3D Tracking} \label{clustering_tracking} In this module we identify the individual objects through clustering and track them from frame to frame. \subsubsection{Clustering} \label{clustering} We use DBSCAN~\cite{ester1996density} to cluster the point cloud, resulting in a set of $m$ clusters $C=\{C^1, C^2, ..., C^m\}$. DBSCAN grows clusters from random seeds using dense points only. Compared to Euclidean clustering \cite{douillard2011segmentation}, we found that DBSCAN more precisely separates individual objects in cluttered point clouds, while introducing only a marginal computational overhead. To refine the clusters, we use the bounding-boxes generated by the 2D people detector module. We separate any clusters which are associated with more than one bounding-box to distinguish individual humans. We also separate clusters whose fraction of points lying within the bounding-box is below a threshold, to distinguish between humans and nearby static objects. In Section~\ref{people_detector} we describe how the bounding-boxes are obtained and associated to the clusters.
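As an illustration of this clustering step, the following minimal sketch uses the DBSCAN implementation of scikit-learn; the point coordinates and the density parameters are hypothetical and would in practice depend on the voxel leaf size $l_{l}$.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical filtered point cloud h^s as an (N, 3) array in meters.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0.0, 0.0, 1.0), scale=0.05, size=(200, 3)),
    rng.normal(loc=(2.0, 1.0, 1.0), scale=0.05, size=(150, 3)),
])

# eps and min_samples are the density parameters of DBSCAN;
# the values below are purely illustrative.
labels = DBSCAN(eps=0.15, min_samples=8).fit_predict(points)

# Label -1 marks noise; all other labels index the clusters C^1..C^m.
clusters = [points[labels == k] for k in range(labels.max() + 1)]
centroids = [c.mean(axis=0) for c in clusters]  # used for 3D tracking
\end{verbatim}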
\subsubsection{3D tracking} \label{tracking} First, at time $t$ we compute the centroids $c_t$ of all current clusters $C_t$ in the global frame as the average of all their points. Then, we associate them with their closest centroids $c_{t-\Delta t}^*$ of the clusters $C_{t-\Delta t}$ of the previous frame. Applying this tracking over $k$ frames separated by $\Delta t$ leads to a cluster track $T_{t,k}^i=\{c_{t-k\cdot \Delta t}^*, ..., c_{t-\Delta t}^*, c_{t}^i\}$ for a current cluster $C_t^i$. We mark current centroids $c_t$ that cannot be related to any previous centroid $c_{t-\Delta t}^*$ as newly appeared objects, and non-connected previous centroids as lost objects.
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{images/occlusion_explained_small.JPG} \caption{Occlusion handling during dynamic object detection. \textbf{Left:} the current cluster \textit{C} is excluded from voting, as it was occluded in the previous frame by cluster \textit{A}, which belongs to a different cluster track $T_{t,1}^{B}$. \textbf{Right:} the current cluster \textit{E} is \textit{not} excluded from voting, as in the previous frame it was occluded by cluster \textit{D}, which belongs to the same cluster track $T_{t,1}^{E}$.} \label{fig_occlusion_explained} \vspace{-6mm} \end{figure}
\subsection{Classification as Dynamic or Static} \label{classification} In this module the clusters $C_t^i$ identified in Section~\ref{clustering_tracking} are classified as either static or dynamic based on a voting strategy. First, we let the individual points of a cluster vote for the cluster's class. Then, the classification of the cluster is based on the voting information of all its points. Namely, we classify it as dynamic if the absolute or relative number of votes for being dynamic surpasses the respective threshold $l_{dyn}^{abs}$ or $l_{dyn}^{rel}$. We use two thresholds in order to correctly classify objects at different scales. In case the classification is inconsistent over a short time horizon $\tau$, we mark the cluster as uncertain. Every cluster classified as dynamic is regarded as an individual object. Below, we describe the voting process of an individual point and indicate which points are excluded from voting. \subsubsection{Voting of an individual point} \label{voting} We measure the global nearest neighbor distance $d^k$ from each point $k$ of a cluster $C_t^i$ in the current \textit{filtered} point cloud $h^s$ to a previous \textit{dense, non-filtered} point cloud $h^d_{t-\delta}$. We found that it is key to measure $d^k$ from the \textit{filtered} to the \textit{dense, non-filtered} point cloud $h^d_{t-\delta}$ in order to gain robustness against noise. In order to ensure that points of dynamic objects move substantially more than points corrupted by noise, we compare point clouds from frames that are roughly $\delta$ seconds apart, similarly to the work of Yoon et al. \cite{yoon2019mapless}. For static objects, the measured nearest neighbor distances $d$ will be in the magnitude of the noise. For points at the leading edge of moving objects, however, $d$ will be substantially higher. We convert these distances to velocities and let each point $k$ with $\frac{d^k}{\delta} < l_{NN}$ vote for static, while points with $\frac{d^k}{\delta} \geq l_{NN}$ vote for dynamic. $l_{NN}$ denotes the velocity threshold, which needs to be set marginally higher than the noise level in the filtered point cloud $h^s$.
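A minimal sketch of this voting scheme is given below, assuming SciPy's \texttt{cKDTree} for the global nearest neighbor search; the default thresholds are illustrative placeholders for $l_{dyn}^{abs}$, $l_{dyn}^{rel}$, and $l_{NN}$.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def classify_cluster(cluster_pts, dense_prev_pts, delta, l_nn,
                     l_dyn_abs=20, l_dyn_rel=0.3):
    """Voting-based static/dynamic classification of one cluster.

    cluster_pts:    points of a current cluster C_t^i from the
                    filtered cloud h^s
    dense_prev_pts: dense, non-filtered cloud h^d from ~delta
                    seconds ago
    """
    tree = cKDTree(dense_prev_pts)
    d, _ = tree.query(cluster_pts)       # nearest neighbor distances d^k
    votes_dynamic = (d / delta) >= l_nn  # per-point velocity vote
    n_dyn = int(votes_dynamic.sum())
    return n_dyn >= l_dyn_abs or n_dyn / len(cluster_pts) >= l_dyn_rel
\end{verbatim}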
\subsubsection{Excluding points from voting} \label{exclude_voting} We can only infer knowledge about a point's movement if we can observe it in both frames used for voting. This observability requirement is not fulfilled in two cases, which we detect to improve the voting performance. First, if the Field of View (FOV) of a robot changes between the two frames, points might appear in the area of the current FOV that does not overlap with the previous FOV. As we have observed those points only once, we exclude them from voting. Second, we exclude points of a current cluster $C_t^i$ from voting if they were previously occluded by a \textit{different} object $j$, i.e. by points from a cluster~$C^j_{t-\delta}$ from a different cluster track $T_{t,k}^{j\neq i}$. Specifically, we distinguish between such occlusions and self-occlusions, which happen when objects move away from the camera, as visualized in Figure \ref{fig_occlusion_explained}. Occlusions are identified by first approximating the depth map of the previous frame $g_{t-\delta}$ by projecting randomly-sampled points of the non-filtered, dense point cloud $h^d_{t-\delta}$ onto the previous image plane. We then project a query point $q_t \in C_t^i$ onto $g_{t-\delta}$ and run a 2D nearest neighbor search. If a close nearest neighbor ${n_{t-\delta}^{2D} \in g_{t-\delta}}$ is found, we check for a potential occlusion, that is $depth[n_{t-\delta}] < depth[q_t]$, with $n_{t-\delta}$ being the associated 3D point of $n_{t-\delta}^{2D}$. To identify self-occlusions we associate $n_{t-\delta}$ to the cluster track of its nearest neighbor in $h_{t-\delta}^s$ and check if $T_{t-\delta}^{n_{t-\delta}} \in T_{t}^{q_t}$. In this case, the query point $q_t$ is \textit{not} excluded from voting. \subsection{2D People Detector} \label{people_detector} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{images/multi_detector_two_view_gruen_small.jpg} \caption{\hspace{-1mm}\textbf{Left}: a sample output of the Mobilenet-SSD people detector~\cite{howard2017mobilenets, liu2016ssd}. Every detection is rated by a confidence, shown on top. We assign an ID to every bounding-box being tracked on the image plane. \textbf{Right}: we associate detections to clusters, indicated by the blue cuboids.} \label{fig_detector} \vspace{-5mm} \end{figure} Up to now, the system would classify a standing person as static and would only realize that it is a dynamic object once the person starts walking. By adding Mobilenet-SSD~\cite{howard2017mobilenets, liu2016ssd} as a visual 2D people detector to our pipeline, we achieve a motion-independent detection of pedestrians. We select this network as it delivers a suitable trade-off between run-time and performance on grayscale images. Figure~\ref{fig_detector} shows an exemplary output of the Mobilenet-SSD people detector. We track the bounding-boxes over frames using Intersection over Union (IoU) as a metric. We use tracking by detection, as visual trackers have not yet reached a satisfying performance level while running in real-time on CPUs~\cite{leal2017tracking}. Similar to the cluster tracks in 3D, we generate bounding-box tracks $B_{t,k}$ in the image plane. In the subsequent Section~\ref{fusion}, we use $B_{t,k}$ to make the 3D tracking more robust. The 2D bounding-boxes are associated to the 3D clusters $C$ by linking each detection to the cluster having the highest number of points within the bounding-box.
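A minimal sketch of the IoU metric used for 2D tracking and of this 2D-3D association; the pinhole projection with intrinsics \texttt{K} and the helper names are illustrative assumptions, not our exact camera model:

\begin{verbatim}
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (u0, v0, u1, v1)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(clusters, boxes, K):
    """Link each detection to the cluster with most points in its box.

    clusters: list of (N_i x 3) point arrays in the camera frame,
              assumed to lie in front of the camera
    boxes:    list of (u0, v0, u1, v1) detections
    K:        3x3 pinhole intrinsics (illustrative)
    """
    links = {}
    for j, (u0, v0, u1, v1) in enumerate(boxes):
        counts = []
        for pts in clusters:
            uvw = (K @ pts.T).T
            uv = uvw[:, :2] / uvw[:, 2:3]   # perspective division
            inside = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
                      (uv[:, 1] >= v0) & (uv[:, 1] <= v1))
            counts.append(int(inside.sum()))
        links[j] = int(np.argmax(counts))   # cluster with most points in box j
    return links
\end{verbatim}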
Note that other approaches exist for this 2D-3D association~\cite{shi2018dynamic, ku2018joint, qi2018frustum, lahoud20172d}. \subsection{Fusion of 2D and 3D Detection and Tracking} \label{fusion} The inputs to this module are the 3D cluster tracks $T_{t}$ classified as static or dynamic, and the 2D bounding-box tracks $B_{t}$. For every cluster track $T_{t}^i$, the frequency $f_a^i$ of associated 2D-to-3D detections is computed. If $f_a^i$ is above a certain confidence threshold $\gamma$, we classify all clusters being added to this cluster track as representing a pedestrian, and hence, representing a dynamic object regardless of their initial classification. Furthermore, this module checks if all bounding-boxes of a track $B_{t}^j$ are consistently associated to clusters of the same cluster track $T_{t}^i$, and resets $f_a^i$ otherwise. \subsection{Motion Estimation} \label{motion_estimation} \def\A{ \begin{bmatrix} 1 & 0 & T_s & 0 \\ 0 & 1 & 0 & T_s \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}} Estimating the velocity and predicting the future trajectory of pedestrians is an active research field \cite{pfeiffer2018data, chen2017socially, wulfmeier2017large, pfeiffer2016predicting}. Similarly to the work of Azim et al.~\cite{azim2012detection}, we adopt a conservative motion model to estimate the velocities and short-term future paths of dynamic objects. Assuming that the dynamic objects move on a horizontal plane, we estimate their velocity by using a constant 2D velocity model, based on a Kalman filter (KF). The measurement inputs $\vec{z}_i$ for the KF are the cluster centroids $\vec{c}_i = [c_x, c_y, c_z]_i^\top$ of a cluster track $T_{t}^i$ measured in the x-y plane of the world frame: $\vec{z}_i = [c_x, c_y]_i^\top.$ We define the state vector as: $\vec{x}_i = [x, y, \dot{x}, \dot{y}]_i^\top$. The system dynamics and measurement model are defined as: $$\vec{x}_i[k+1] = A[k] \cdot \vec{x}_i[k] + N(0, Q)$$ $$\vec{z}_i[k] = H[k] \cdot \vec{x}_i[k] + N(0, R)$$ where $Q$ and $R$ model the system noise and measurement noise, respectively. $H$ extracts the first two dimensions of $\vec{x}$ and $A$ is defined as: $$A = \A$$ where $T_s$ denotes the time between two updates. Using the KF, we can bridge short-time occlusions of dynamic objects. Specifically, when an object $i$ is lost during tracking, we keep the KF running and compute for all newly appearing objects $j$ the probability $p(j=i)$ of being the same object as the lost one. We compute this probability as ${p(j=i) = N(\vec{c}_j | \vec{c}_i, C_i(\vec{x}_i))}$, with $C_i(\vec{x}_i)$ being the estimated covariance of the state $\vec{x}_i$. We connect the cluster tracks of objects $i$ and $j$ if $p(j=i)$ surpasses a threshold.
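The filter itself reduces to a handful of matrix operations with the $A$ and $H$ defined above. In the following minimal sketch the noise magnitudes \texttt{q} and \texttt{r} are illustrative placeholders for our tuned $Q$ and $R$:

\begin{verbatim}
import numpy as np

class ConstantVelocityKF:
    """2D constant-velocity KF per cluster track; state [x, y, vx, vy]."""
    def __init__(self, z0, Ts, q=0.5, r=0.05):
        self.x = np.array([z0[0], z0[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.A = np.array([[1, 0, Ts, 0],
                           [0, 1, 0, Ts],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)   # system noise (illustrative magnitude)
        self.R = r * np.eye(2)   # measurement noise (illustrative magnitude)

    def predict(self):
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x

    def update(self, z):
        """z: measured centroid [c_x, c_y] of the associated cluster."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
\end{verbatim}

During an occlusion only the prediction step is executed, so the state covariance $C_i(\vec{x}_i)$ grows over time and the re-identification probability $p(j=i)$ naturally accounts for the increased uncertainty.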
\subsection{2D Occupancy grid} \label{costmap} In order to allow for path planning and obstacle avoidance, we leverage occupancy grid representations~\cite{oleynikova2016voxblox, fankhauser2016universal, hornung2013octomap} and specifically use the computationally efficient Costmap\_2d implementation \cite{lu2014layered}. We create three maps in parallel for static, dynamic and uncertain objects, in which the obstacles are represented as positive costs at their respective location. In the uncertain map, we create only short-lived costs for objects for which no static/dynamic classification has been made yet. In the static map, we use raytracing \cite{marder2010office} to clear free space once it was erroneously occupied. In the dynamic map, we expand the costs in the direction of the estimated velocity of the objects, such that a robot will not collide with them in the near future. Safe paths can then be planned and executed by aggregating the costs of the three aforementioned layers. \section{EVALUATION} \label{evaluation} In this section, we evaluate the performance of the presented obstacle classification and tracking solution. Finally, we present the run-times of the individual modules. \subsection{Experimental Setup} \label{experimental_setup} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{images/datasets_jackal_w_white.JPG} \caption{Sample images used in our evaluation and the platform we used to collect them. Our system relies on the stereo camera mounted in the front, whereas the LiDAR was used for evaluation purposes only.} \label{fig_datasets} \vspace{-5mm} \end{figure} We recorded multiple stereo-vision datasets that include pedestrians, featuring challenging indoor and outdoor scenes, shiny surfaces, low-textured objects, illumination changes, and empty as well as crowded spaces. To record our datasets, we used a Clearpath Robotics Jackal robot equipped with a stereo camera (grayscale, $752\times 480\si{\px}$), and an Ouster OS-1 LiDAR with 16 channels for the ground truth measurements used in Sections~\ref{exactness_completeness} and~\ref{classification_accuracy}. Sample images of our datasets and our robot are shown in Figure~\ref{fig_datasets}. For all experiments, including the timing presented in Section~\ref{timing}, we run our pipeline on an Intel i$7$-$8650$U processor. For our stereo camera setup we set the parameters introduced in Section~\ref{filter_point_cloud} as ${l_{l} = 0.05m}$, ${l_{g} = 0.15m}$, ${l_{h} = 1.8m}$, ${l_{n} = 30}$, ${l_r = 0.5m}$ and $\delta = 0.4s$. Furthermore, we set ${l_{d} = 5m}$, since at this distance a disparity error of $1\si{\px}$ leads to a depth error of $0.5m$, which is the limit we accept. Regarding the parameters of Sections~\ref{clustering_tracking}, \ref{classification}, and \ref{fusion} we set $l_{dyn}^{abs} = 100$, $l_{dyn}^{rel} = 0.8$, ${\tau = 0.4s}$, ${l_{NN} = 0.45m/s}$, and $\gamma = 1.5s^{-1}$. The robot's nominal speed is $1.2m/s$. We verified that our settings are sufficient for the objects to be reliably tracked up to the speed of a jogging person. Note that these parameters were identified using datasets independent of those used in the evaluations. For precise localization of our robot we run a VIO algorithm similar to OKVIS \cite{leutenegger2015keyframe}. For image processing we use OpenCV~\cite{opencv_library}. The people detector Mobilenet-SSD \cite{howard2017mobilenets, liu2016ssd} is implemented in Caffe and the deep stereo network MADNet \cite{tonioni2018real} in Tensorflow. \subsection{Accuracy and Completeness of the Point Clouds} \label{exactness_completeness} \begin{figure}[t] \vspace{-7mm} \centering \hbox{\hspace{1.9mm} \includegraphics[width=1.0\columnwidth]{images/small_dl_histo.png}} \includegraphics[width=0.9\columnwidth]{images/log_completeness.png} \caption{Normalized histograms of nearest neighbor distances~$d$ between point clouds from the LiDAR and the stereo camera to analyze the \textit{accuracy} and \textit{completeness}. \textbf{Top/Accuracy:} $d$ measured from the camera to the LiDAR. \textbf{Bottom/Completeness:} $d$ measured from the LiDAR to the camera. The region above the accuracy-limit $l_c = 0.8m$ indicates points of objects not captured by the stereo camera point clouds.
} \label{fig_exactness_completeness} \vspace{-3mm} \end{figure} In this section, we briefly evaluate the quality of the stereo camera point cloud to identify the accuracy-limit $l_c$, which we will use in Section \ref{classification_accuracy} to evaluate the static object detection precision. In order to evaluate the performance of the two point cloud generation methods identified in Section~\ref{cloud_generation}, we collected ground truth 3D LiDAR data synchronized with the stereo images. To compare both the LiDAR and vision point clouds, we use nearest neighbor distances~$d$ as the metric, that is $d = || p_1 - p_2 ||_2$, for two points $p_1$ and $p_2$. The \textit{accuracy} of our stereo camera point clouds is estimated by computing $d$ from each point of the camera clouds to its nearest neighbor in the LiDAR clouds. Conversely, we measure the \textit{completeness} of our camera clouds by computing $d$ from each point of the LiDAR clouds to its nearest neighbor in the camera clouds. Note that, as the LiDAR clouds do not feature any measurements between beams, there will be non-zero nearest neighbor distances $d$, even in an ideal setting. In our setup\footnote{A 16-channel LiDAR with an opening angle of $30\degree$ and $l_d = 5m$.}, this distance is at most $0.08m$. We evaluate the accuracy and completeness of the camera point cloud on a dataset where we drove down a crowded public sidewalk. In order to achieve real-time capability, we simplify the block-matching process by down-sampling the images by a factor of two. \subsubsection{Accuracy} The histogram depicted in Figure~\ref{fig_exactness_completeness} shows the normalized distribution of the distances $d$. The block-matching point cloud performs best and has an accuracy-limit of $l_c = 0.8m$, as all points are below this value. Clearly, the deep stereo network cannot compete with block-matching. Please refer to MADNet \cite{tonioni2018real} for an in-depth analysis of the performance of this network. \subsubsection{Completeness} The histogram of the normalized distribution of the distances $d$ is shown in Figure~\ref{fig_exactness_completeness}. Measurements substantially larger than $l_c = 0.8m$ indicate points of objects that were not captured by the camera cloud. Deep stereo performs best, as already suggested by Figure~\ref{fig_deep}. Overall, we observe that deep stereo is less exact, but more complete than block-matching. More detailed comparisons of methods for generating dense maps from stereo data can be found in the work of Menze \cite{menze2015object}. For the remaining parts of the evaluation, we use block-matching on downsampled images, due to its real-time capability and accuracy. \subsection{Classification and Tracking Accuracy} \label{classification_accuracy} \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{images/hall_tracks.png} \caption{The tracks of dynamic objects (classified by the module presented in Section \ref{classification}) are shown in yellow, the tracks of our ground truth based on LiDAR data in blue, and the stereo camera point cloud of static objects in red. The tracks are smooth and hence, suggest the applicability of our solution to motion tracking and prediction. Short tracks are caused by the obstacles leaving the limited FOV.
The hall is of size $30\times 10m$.} \label{fig_tracks} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{images/lidar_gt.png} \caption{Our static LiDAR ground truth is shown in blue and the accumulated stereo camera point cloud of static objects in red. For better visualization, we also show the accumulated dynamic camera point cloud in green and the LiDAR ground plane, which we remove for the comparison between clouds.} \label{fig_LiDAR} \vspace{-5mm} \end{figure} To evaluate the classification and tracking accuracy of dynamic objects of the proposed system, we compare the resulting object tracks to manually labeled tracks from LiDAR measurements that serve as our ground truth. We use the metrics MOTP and MOTA, as defined in the work of Bernardin \cite{bernardin2006multiple}. For a robotic system navigating in real environments, a precise mapping of static objects matters. As MOTP and MOTA do not evaluate the detection of static objects, we evaluate the similarity between the stereo-based and ground truth static clouds. \subsubsection{Classification and tracking of dynamic objects} We first generated a map $m_{l}$ of a controlled, completely static environment $D_1$, using the measurements of the 3D LiDAR and an ICP-based algorithm similar to the work of Dub\'{e}~\cite{dube2017online}. Subsequently, we localized our robot within $m_{l}$ and recorded a second dataset $D_2$, this time with dynamic objects present. We created a ground truth of obstacle tracks by first manually excluding from $D_2$ all LiDAR measurements that belong to static objects. We then extracted the upper bodies of the pedestrians in the remaining LiDAR clouds by cropping by height, in order to remove the influence of moving legs and hence, to attain smooth trajectories. Subsequently, we applied Euclidean clustering and tracked the clusters $C^{lidar}$ using closest centroids, in the same manner as presented in Section \ref{clustering_tracking}. We visually inspected and adjusted the resulting LiDAR ground truth tracks ${T_{t, k}^{i, lidar}=\{c_{t-k\cdot \Delta t}^{lidar}, ..., c_{t-\Delta t}^{lidar}, c_{t}^{i, lidar}\}}$ to ensure the absence of false positives, false negatives, or mismatches. $D_2$ is of $4\si{\min}$ length and features $31$ encounters with pedestrians leading to $1267$ ground truth pedestrian positions $c^{lidar}$. Figure \ref{fig_tracks} shows a top-down view of our LiDAR-based ground truth tracks $T^{i, lidar}$ and the camera-based dynamic object tracks $T^{j}$ extracted from $D_2$ by our proposed system. To compute the MOTP and MOTA, we set the threshold $l_T$ for a correct match between object (object from the ground truth LiDAR track) and hypothesis (object from the camera track) to $l_T = 0.4m$, assuming a diameter of $0.4m$ for an average person and hence an incorrect match whenever the object and the hypothesis do not overlap at all. Using this, we reach a MOTP of $0.07 \pm 0.07m$ and a MOTA of $85.3\%$, which is composed of a false negative rate $f_n = 8.3\%$ (covering non-detected dynamic objects and dynamic objects erroneously classified as static or uncertain), a false positive rate $f_p = 3.0\%$ (covering ghost objects or static objects misclassified as dynamic), and a mismatch rate $f_m = 3.3\%$. \subsubsection{Classification of static objects} The map $m_l$ from $D_1$ served as our ground truth for static objects. We then accumulated all stereo camera clusters classified as static in $D_2$, resulting in the point cloud $m_s$.
Figure~\ref{fig_LiDAR} visualizes $m_l$ overlaid with $m_s$. To evaluate the similarity between the stereo-based and ground truth static clouds, we measure the nearest neighbor distances $d$ from points of the camera cloud $m_s$ to the LiDAR cloud $m_l$. Ideally, the static camera cloud $m_s$ coincides with the LiDAR ground truth $m_l$. Figure~\ref{fig_classification_overview} shows the normalized distribution of these distances $d$. Ideally, $d$ is zero for static objects. However, as shown in Section \ref{exactness_completeness}, in the extreme case static points can differ by up to $l_c = 0.8m$ from the static ground truth. As there is no ground truth available for the per-point classification, we estimate correct static classifications of our pipeline using one threshold $l_e$. We chose this threshold $l_e$ to be the average of the two limit cases of $l_c = 0.8m$ and zero, hence $l_e = 0.4m$. We declare all static points of the camera cloud below $l_e$ as correctly classified, reaching a precision of $96.9\%$ for the classification of static objects. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{images/histo_static_only.png} \caption{Precision of static classification: normalized histogram of nearest neighbor distances $d$ from the static part of the camera point cloud to our static LiDAR ground truth point cloud.} \label{fig_classification_overview} \end{figure} \subsection{Run-time evaluation} \label{timing} Table \ref{table_timing} shows the timing of the modules of our pipeline when deployed on the embedded system described in Section \ref{experimental_setup}. The two most significant contributors are the block-matching and the people-detector network. Overall, the pipeline runs at $8.5\si{\Hz}$ with all features. Excluding the people detector, we reach $13.5\si{\Hz}$. The core part of our work, i.e. point cloud filtering, clustering and 3D tracking, classification, and motion estimation, requires $21ms$ per stereo camera frame. \begin{table}[h] \caption{Average run-times of the major modules of our dynamic obstacle detection and tracking pipeline on a standard CPU.} \label{table_timing} \begin{center} \vspace*{-4mm} \begin{tabular}{|c|c|c|} \hline \textbf{Module} & \textbf{Timing [ms]} & \textbf{Portion [\%]}\\ \hline \hline Point Cloud Generation & 55.2 & 46.4 \\ \hline 2D People Detector & 42.4 & 35.6 \\ \hline Point Cloud Filtering & 9.9 & 8.3 \\ \hline Classification as Dynamic or Static & 8.2 & 6.9 \\ \hline Remaining modules & 3.3 & 2.8 \\ \hline \end{tabular} \end{center} \end{table} \section{CONCLUSION} \label{conclusion} In this paper we presented a method that reliably detects and tracks both generic dynamic objects and humans based on stereo images and thus provides accurate perception capabilities enabling compliant navigation in crowded places. Our novel algorithm detects generic dynamic objects based on motion and geometry in parallel to a detector network, which classifies humans as such based on their visual appearance. We handle short-time occlusions by estimating the velocities of the tracked objects and provide a 2D occupancy grid that is suitable for performing obstacle avoidance. We showed that our system is real-time capable on a computationally-constrained platform and that, despite the high noise level of stereo camera data, we achieve a MOTP of $0.07 \pm {0.07m}$ and a MOTA of $85.3\%$ for the detection and tracking of dynamic objects, and a precision of $96.9\%$ for the detection of static objects.
Future work will focus on advanced motion models, ground-plane analysis, and extending the pipeline to use multiple perception sensors simultaneously. \bibliographystyle{plain}
\section{Introduction} The conditional autoregressive (CAR) model popularized by \citet{bym} (BYM) has become ubiquitous in spatial epidemiology and disease mapping. In addition to being used across a wide range of applications, extensions have been made to spatiotemporal \citep{waller:carlin} and general multivariate \citep{gelfand:mcar,b-r:2015} settings. Missing from the literature, however, is a convenient way to quantify the \emph{informativeness} of the BYM framework akin to the concept of ``effective sample size'' in the Bayesian clinical trials literature \citep[e.g.,][]{morita:2008}, perhaps due to the complexity introduced by the conditionally dependent nature of spatial models. The objective of this paper is simple: to provide guidance on how to measure (or alternatively, \emph{control}) the informativeness of the BYM framework. We begin by anchoring our framework in the conjugate Poisson-gamma setting where measuring the informativeness of the prior distribution is trivial. We then propose an approach to obtain the approximate informativeness of a lognormal prior and ultimately the BYM CAR model. After demonstrating the accuracy of this approximation via simulation, we illustrate the potential for oversmoothing using county-level heart disease-related death data. \section{Methods}\label{sec:methods} When modeling rare event and mortality data, we follow the convention set forth by \citet{brillinger} by assuming $y_i \sim \Pois\left(n_i\lambda_i\right)$, where $y_i$ denotes the number of events in region $i$ from a population of size $n_i$ and $\lambda_i$ denotes the underlying event rate, for $i=1,\ldots,I$. Since $\lambda_{i} \sim \Gam\left(a_i,b_i\right)$ is a conjugate prior for the rate parameter in a Poisson likelihood, we can write \begin{align} \lambda_{i} \given y_{i}, a_i,b_i \sim \Gam\left(y_{i} + a_i,n_{i}+b_i\right),\label{eq:poisgam} \end{align} which yields the interpretations of $a_i$ and $b_i$ as the ``prior number of events'' and ``prior sample size'', respectively. While the prior specification used to construct the posterior in~\eqref{eq:poisgam} is convenient for illustrating the effect of prior information, it is more common in the disease mapping literature to consider lognormal prior specifications for $\lambda_i$. Unfortunately, the use of priors like $\lambda_i \sim \LN\left(\mu_i,\sig_i^2\right)$ leads to posterior distributions of an unknown form, \begin{align*} p\left(\lambda_i \given y_i,\mu_i,\sig_i^2\right) &\propto \Pois\left(y_i\given n_i\lambda_i\right) \times \LN\left(\mu_i,\sig_i^2\right)\\ &\propto \exp\left[-n_i\lambda_i\right] \times \exp\left[y_i\log\lambda_i-\frac{\left(\log\lambda_i - \mu_i\right)^2}{2\sig_i^2}\right] \end{align*} obfuscating the effect of prior information on the posterior distribution. Thus, to better elucidate the effect of prior information when using lognormal priors, we may wish to construct a prior $\lambda_i\sim \LN\left(\mu_i,\sig_i^2\right)$ that contains approximately the same information as $\lambda_i \sim \Gam\left(a_i,b_i\right)$.
To achieve this, a natural choice may be to equate the mean and variance of their respective distributions; i.e., \begin{align} E\left[\lambda_i\given a_i,b_i\right] =& E\left[\lambda_i\given \mu_i,\sig_i^2\right] &\implies&& a_i\slash b_i &= \exp\left[\mu_i+\sig_i^2\slash2\right]\label{eq:lognormal}\\ V\left[\lambda_i\given a_i,b_i\right] =& V\left[\lambda_i\given \mu_i,\sig_i^2\right] &\implies&& a_i\slash b_i^2 &= \left(\exp\left[\sig_i^2\right]-1\right) \exp\left[2\mu_i+\sig_i^2\right].\notag \end{align} From these equations, we can then write $\mu_i$ and $\sig_i^2$ as functions of $a_i$ and $b_i$ --- i.e., $\sig_i^2 = \log\left(1\slash a_i +1\right)$ and $\mu_i = \log\left(a_i\slash b_i\right) - \sig_i^2\slash 2$. To evaluate the performance of this approximation, Figure~\ref{fig:compare} compares quantiles of the posterior distribution for $\lambda$ given $y$ resulting from a gamma distribution with $a=8.75$ and $y$ taking values $\left\{1,2,\ldots,20\right\}$ with $a\slash b = y\slash n = \lambda_0$ to that resulting from our lognormal approximation, where $\lambda_0$ corresponds to a rate of 50 events per 100,000. Based on these results, we claim that the prior $\lambda_i \sim \LN\left(\mu_i,\sig_i^2\right)$ is approximately as informative as $\lambda_i \sim \Gam\left(a_i,b_i\right)$ when we define $\mu_i$ and $\sig_i^2$ in this way; further support for this claim is provided via simulation in Section~\ref{sec:sim}. \begin{figure}[t] \begin{center} \includegraphics[width=.55\textwidth]{estimates_all} \end{center} \caption{Comparison of quantiles of the posterior distributions for $\lambda_i$ for gamma, independent lognormal, and \citet{bym}-inspired CAR priors.} \label{fig:compare} \end{figure} While~\eqref{eq:lognormal} measures the informativeness of independent lognormal prior distributions, spatial models such as the BYM framework utilize \emph{conditionally-dependent} prior distributions. Specifically, if we employ model structures which explicitly model the correlation between $\lambda_i$ and the remaining $\lambda_{j}$, $j\ne i$, then the informativeness of our model is not dictated by the \emph{marginal} mean and variance of $\lambda_i$, but instead by the \emph{conditional} mean and variance, denoted $E\left[\lambda_i\given \blambda_{(i)}\right]$ and $V\left[\lambda_i\given \blambda_{(i)}\right]$, respectively, where $\blambda_{(i)}$ denotes the vector $\left(\lambda_1,\ldots,\lambda_I\right)^T$ with the $i$th element removed. In the context of the CAR model of \citet{bym}, we assume \begin{align} \lambda_i \given \bbeta,\bz,\sig^2 \sim \LN\left(\bx_i^T\bbeta + z_i,\sig^2\right), \label{eq:bym} \end{align} where $\bx_i$ denotes a $p\times 1$ vector of region-specific covariates with corresponding regression coefficients, $\bbeta$, and $\bz=\left(z_1,\ldots,z_I\right)^T$ denotes a vector of spatial random effects such that \begin{align} z_i \given \bz_{(i)},\tau^2 \sim \N\left(\sum_{j\sim i} z_j \slash m_i, \tau^2\slash m_i\right),\label{eq:car} \end{align} where $j\sim i$ denotes that regions $i$ and $j$ are neighbors and $m_i$ denotes the number of regions that neighbor region $i$.
As shown in Web Appendix~A, integrating $\bz$ out of~\eqref{eq:bym} leads to a conditional distribution for $\log \lambda_i$ whose precision is bounded below by $1\slash\left(\sig^2+\left[\sig^2+\tau^2\right]\slash m_i\right)$, which we could express in terms of the model's ``informativeness'' as \begin{align} \widehat{a}_0 = 1\slash \left(\exp\left[\sig^2+\left(\sig^2+\tau^2\right)\slash m_0\right] - 1\right), \label{eq:inform} \end{align} based on the approximation in~\eqref{eq:lognormal} for a baseline number of neighbors, $m_0$. The bound in~\eqref{eq:inform} is achieved when a region neighbors all $I-1$ of the remaining regions, and the precision approaches $1\slash \left(\sig^2+\tau^2\slash m_i\right)$ as the posterior estimates for the neighboring $\lambda_j$ become more precise (e.g., by increasing $m_j$ or $y_j$). As a general rule of thumb, we will evaluate~\eqref{eq:inform} for $m_0=3$ neighbors from this point forward. To demonstrate the properties of the \citet{bym}-inspired model from~\eqref{eq:bym} and~\eqref{eq:car}, we considered a scenario consisting of $I=50$ regions where each region neighbors all of the remaining $I-1=49$ regions (i.e., $m_i=49$ for all $i$). We then specified $\sig^2=0.1$ and $\tau^2=0.3$, thus constructing a model with $\Var\left(\theta_i\given \bbeta,\sig^2,\tau^2,\btheta_{(i)}\right)^{-1} = 1\slash\left(\sig^2+\left[\sig^2+\tau^2\right]\slash m_i\right) = 9.25$, where $\theta_i = \log\lambda_i$. Plugging this into the approximation in~\eqref{eq:lognormal}, we obtain $\widehat{a}_0=8.75$. As illustrated in Figure~\ref{fig:compare}, this prior specification results in a posterior distribution that is also nearly identical to the posteriors resulting from gamma and independent lognormal priors designed to have the same level of information. While this scenario --- i.e., $I=50$ regions that all neighbor each other --- is unrealistic, the objective here was simply to illustrate how the expression in~\eqref{eq:inform} can be used to construct priors with the desired properties while avoiding scenarios where spatial models would be inappropriate (e.g., small $I$).
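Both approximations are straightforward to evaluate numerically. The following short sketch (written in Python purely for illustration; our analyses are run in R, and the helper names are ours) implements the moment matching behind~\eqref{eq:lognormal} and the informativeness measure~\eqref{eq:inform}, and reproduces $\widehat{a}_0=8.75$ for the worked example above:

\begin{verbatim}
import numpy as np

def lognormal_from_gamma(a, b):
    """Moment-matched lognormal hyperparameters, cf. eq. (lognormal)."""
    sig2 = np.log(1.0 / a + 1.0)
    mu = np.log(a / b) - sig2 / 2.0
    return mu, sig2

def informativeness(sig2, tau2, m0=3):
    """Approximate 'prior events' contributed by the CAR model, eq. (inform)."""
    return 1.0 / (np.exp(sig2 + (sig2 + tau2) / m0) - 1.0)

# worked example: sig2 = 0.1, tau2 = 0.3 and m_i = 49 neighbors
print(informativeness(0.1, 0.3, m0=49))   # ~8.75
\end{verbatim}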
\section{Simulation Study}\label{sec:sim} While Section~\ref{sec:methods} demonstrates the relationships between the gamma and lognormal prior specifications when the hyperparameters of the lognormal prior specification are fixed and known, we must also demonstrate that these relationships hold when the hyperparameters are \emph{unknown}. To this end, we conducted a simulation study in which data were generated from a Poisson distribution where the underlying rates were sampled from a gamma distribution --- $y_i \sim \Pois\left(n_i\lambda_i\right)$ and $\lambda_i \sim \Gam\left(a,b\right)$ for $i=1,\ldots, I$, where $a=5$, $b=a\slash\lambda_0$, and $\lambda_0$ corresponds to a rate of 50 events per 100,000 and where $n_i=\text{20,000}$ for all $i$ such that $E\left[y_i\given a,b\right]=10$. We modeled these data using both the Poisson-gamma and Poisson-lognormal frameworks with all hyperparameters treated as being unknown. To analyze these data, we compare the following prior specifications: \begin{align} \lambda_i &\sim \Gam\left(a,b\right), &a &\sim \Unif\left(0,10\right), &\lambda_0&\sim \Unif\left(0,10^{-3}\right)\label{eq:sim1prior1}\\ \lambda_i &\sim \LN\left(\mu,1\slash \gamma\right), &\mu &\sim \Unif\left(-20,0\right), &\gamma &\sim \Unif\left(0,10\right)\label{eq:sim1prior2}, \end{align} where $b=a\slash \lambda_0$ and the bounds on the hyperparameters in~\eqref{eq:sim1prior1} and~\eqref{eq:sim1prior2} are intended to restrict the parameters to a similar range of values (e.g., when $a\approx 10$, $\gamma\approx 10$). The primary goal of this simulation study will be to assess the degree to which the lognormal prior specification in~\eqref{eq:sim1prior2} can produce a posterior distribution similar to that from the prior specification in~\eqref{eq:sim1prior1}. As the ability to estimate the hyperparameters in~\eqref{eq:sim1prior1} and~\eqref{eq:sim1prior2} depends on the amount of data observed, we let $I\in\left\{10,25,50,100,200\right\}$; when $I<200$, multiple sets of data are generated to better assess the models' performance (e.g., 20 sets of data for $I=10$). All analyses are based on $L=\text{100,000}$ posterior samples obtained using the {\tt rjags} package \citep{rjags} and thinned by a factor of 10 to reduce autocorrelation. In Figure~\ref{fig:sim1post}, we see that while the informativeness of the gamma prior for $I=200$ is centered around the true value of $a=5$, the lognormal prior yields a slightly less informative posterior. Results for smaller values of $I$ are provided in Web Appendix~B. As would be expected, smaller values of $I$ yield much less precision when measuring the informativeness of the priors. \section{Illustrative Example: Heart Disease-Related Death Data}\label{sec:analysis} We now consider a dataset comprising the number of heart disease-related deaths (ICD-9: 390--398, 402, 404--429) among those aged 35--54 in 1980 from counties in the contiguous United States. These data predate the CDC's data confidentiality protections --- namely, that counts less than 10 are suppressed for data dating back to 1989 \citep{cdc:sharing} --- and thus are publicly available without suppression. And while heart disease was the leading cause of death in 1980, mortality rates in this age bracket were still quite low, resulting in a preponderance of small counts and thus motivating the use of spatial models to produce more reliable estimates. We first consider a case study using the 77 counties of Oklahoma. Here, we begin by fitting the standard BYM CAR model based on~\eqref{eq:bym} and~\eqref{eq:car}. Standard priors were used per \citet{waller:carlin} --- $p\left(\beta_0\right) \propto 1$, $\sig^2 \sim \IG\left(1,1\slash 100\right)$, and $\tau^2 \sim \IG\left(1,1\slash 7\right)$ --- and our MCMC algorithm was run for 50,000 iterations. After fitting the model, we estimate the informativeness of this model, $\widehat{a}_0$, based on~\eqref{eq:inform}. Finally, we refit the model subject to the restriction that $\widehat{a}_0 < 6$ for a county with $m_0=3$ neighbors and explore the implications of this restriction. We then repeat this same analysis on the remaining 47 states in the contiguous United States --- one state at a time --- minus those with fewer than five counties where the use of a spatial model may not be appropriate.
The goal of this second set of analyses is to highlight the heterogeneity in the informativeness of the CAR model of \citet{bym} when analyzing the same outcome (heart disease-related deaths) at the same spatial scale (counties) from different locations (states). \subsection{Case study: Heart disease-related deaths in Oklahoma} While the heart disease-related death rate for those 35--54 in the state of Oklahoma (106.5 deaths per 100,000) was on par with the national average (108.4), Oklahoma's large number of rural counties led to 64 of the 77 counties experiencing fewer than 10 deaths in this age bracket. This leads to the inferential dilemma which motivates this work --- i.e., we want to use models that \emph{leverage} spatial structure to produce more reliable estimates, but we do not want those models to \emph{overwhelm} the information contained in the data. \begin{figure}[t] \begin{center} \subfigure[Simulation Study]{\includegraphics[width=.45\textwidth]{sim1_hist200}\label{fig:sim1post}} \subfigure[Heart Disease in Oklahoma]{\includegraphics[width=.45\textwidth]{OK_hist_info}\label{fig:resinfo}} \end{center} \caption{Posterior distributions of the models' informativeness. Panel~(a) corresponds to the simulation study in Section~\ref{sec:sim} for $I=200$ compared to the true value of $a=5$, while Panel~(b) corresponds to the case study using death data from Oklahoma compared to the restriction that $\widehat{a}_0 < 6$ for a county with $m_0=3$ neighbors.} \label{fig:info} \end{figure} We begin by analyzing the data using the CAR-based model in~\eqref{eq:bym} without restrictions on the informativeness of the prior specification. Using the expression for $\widehat{a}_0$ in~\eqref{eq:inform}, the prior specification for this model is approximately equivalent to an additional 23 deaths for a county with $m_0=3$ neighbors, as indicated in Figure~\ref{fig:resinfo}. To see the effect of such strong prior information, we consider the rate estimates in Figure~\ref{fig:resrates}. In Figure~\ref{fig:resrate1}, we see that the conventional, unrestricted CAR model yields a relatively smooth map of rates, where the rates in more rural parts of the state resemble those in the urban centers of Tulsa and Oklahoma City. In contrast, if we restrict $\sig^2$ and $\tau^2$ such that the model in~\eqref{eq:bym} contributes fewer than $6$ deaths to our estimates, we obtain a less spatially smooth map, thereby allowing counties with high observed rates to differentiate themselves from their neighbors. \begin{figure}[t] \begin{center} \subfigure[Rates from Unrestricted Analysis]{\includegraphics[width=.45\textwidth]{OK_rates_full}\label{fig:resrate1}} \subfigure[Rates from Restricted Analysis]{\includegraphics[width=.45\textwidth]{OK_rates_thres}\label{fig:resrate2}} \end{center} \caption{Posterior medians for the heart disease-related death rates from the unrestricted and restricted analyses, respectively.} \label{fig:resrates} \end{figure} \subsection{Illustration of heterogeneity in informativeness} We now repeat the above unrestricted analysis on the remaining 47 states in the contiguous United States, minus those with fewer than 5 counties. Figure~\ref{fig:stateinfo} displays the estimated informativeness measure from~\eqref{eq:inform} for a county with $m_0=3$ neighbors from each state. Here, we see that the informativeness of the model in~\eqref{eq:bym} varies wildly, ranging from contributing the effect of under 5.5 additional deaths per county in Virginia to over 36 in Ohio. 
While not shown here, there does not appear to be a discernible pattern between the state-specific measures of model informativeness, $\widehat{a}_0$, and the states' respective event rates or other simple summary statistics (e.g., percent of counties with small counts, percent of rural counties, etc.). Thus, it can be difficult to predict \emph{a priori} the CAR model's informativeness and the extent to which this can alter point estimates (Figure~\ref{fig:rateratios}) and their precision. \begin{figure}[t] \begin{center} \subfigure[Informativeness of the CAR Model]{\includegraphics[width=.45\textwidth]{state_info}\label{fig:stateinfo}} \subfigure[Unrestricted vs.\ Restricted]{\includegraphics[width=.45\textwidth]{county_rate_ratios}\label{fig:rateratios}} \end{center} \caption{Results from a state-by-state analysis of the heart disease-related death rates. Panel~(a) displays the estimated informativeness of the \citet{bym} model within each state, while Panel~(b) displays the percent change resulting from the use of an unrestricted prior specification compared to the restricted (i.e., an ``increase'' indicates that the unrestricted analysis yields higher rates than one whose informativeness is restricted).} \label{fig:states} \end{figure} \section{Discussion}\label{sec:disc} The use of spatial models is often motivated by a desire to leverage the spatial structure in the data to improve the precision of estimates from areas with limited data.
While we consider this a perfectly valid rationale, we believe more care should be taken to ensure that these models do not produce estimates that are more precise and more spatially smooth than the data warrant. A review of the literature \citep[e.g.,][]{bernardinelli,waller:carlin} suggests the use of relatively noninformative priors for the variance parameters, $\sig^2$ and $\tau^2$, which may lead users to believe that the BYM framework itself will not be overly informative. As we have illustrated here, this does not appear to be the case. Furthermore, while much research has been done to construct weakly informative priors \citep{gelman2006} or to theoretically derive prior distributions that penalize complexity \citep{simpson2017}, the contribution of this paper is to quantify how informative the \emph{model} is for certain values of $\sig^2$ and $\tau^2$, regardless of the priors used. Thus, our objective is not to prescribe which priors should be used for these parameters, but instead to provide guidance regarding their specification or potential restrictions on the range of values they are allowed to take. Finally, it should be noted that while this work has focused on the CAR model proposed by \citet{bym}, similar methods can (and should) be developed for other popular disease mapping approaches such as the CAR framework of \citet{leroux:car} and the directed acyclic graph autoregressive model of \citet{datta:dagar}. \bibliographystyle{jasa}
\section{Introduction} \medskip The main objective of the paper consists in studying well-posedness and probabilistic representation of the Fokker-Planck PDE with terminal condition \begin{equation} \label{EDPTerm0} \left\{ \begin{array}{lll} \partial_t {\bf u} &=& \frac{1}{2} \displaystyle{\sum_{i,j=1}^d} \partial_{ij}^2 \left( (\sigma \sigma^{\top})_{i,j}(t,x) {\bf u} \right) - div \left( b(t,x) {\bf u} \right)\\ {\bf u}(T) &=& \mu, \end{array} \right. \end{equation} where $\sigma: [0,T] \times \mathbb R^d \rightarrow M_{d,m}(\mathbb R)$, $b: [0,T] \times \mathbb R^d \rightarrow \mathbb R^d$ and $\mu$ is a prescribed finite Borel measure on $\mathbb R^d$. When ${\bf u}(t)$ admits a density for some $t \in [0,T]$ we write ${\bf u}(t) = u(t,x) dx$. This equation is motivated by applications in various domains of physical sciences and engineering, such as heat conduction~\cite{beck1985inverse}, material science~\cite{renardy1987mathematical} or hydrology~\cite{bagtzoglou2003marching}. In particular, \textit{hydraulic inversion} aims at inverting a diffusion process representing the concentration of a pollutant, in order to identify the pollution source location when the final concentration profile is observed. Those models are often formulated as PDE problems which are in general ill-posed because either the solution is not unique or it is not stable. Concerning the first issue, existence is ensured by the fact that the observed contaminant necessarily originated from some place at a given time (as soon as the model is correct). Several authors have handled the lack-of-uniqueness problem by introducing regularization methods that approximate the problem by well-posed PDEs; see typically \cite{tikhonov1977solutions} and \cite{lattes1969method}. A second issue, once the problem is well-approximated by a regularized problem, consists in providing a numerical approximation scheme for the backward diffusion process. In particular, for \eqref{EDPTerm} there are very few results even concerning existence and uniqueness. Our point of view is that a probabilistic representation of \eqref{EDPTerm} can bring new insights to the treatment of the two mentioned issues: well-posedness and numerical approximation. To realize this objective we consider the renormalized PDE \begin{equation} \label{EDPTerm} \left\{ \begin{array}{lll} \partial_t {\bf \bar u} &=& \frac{1}{2} \displaystyle{\sum_{i,j=1}^d} \partial_{ij}^2 \left( (\sigma \sigma^{\top})_{i,j}(t,x) {\bf \bar u} \right) - div \left( b(t,x) {\bf \bar u} \right)\\ {\bf \bar u}(T) &=& \bar \mu, \end{array} \right. \end{equation} where $\bar \mu = \frac{\mu}{\mu(\mathbb R^d)}$ is a probability measure. We remark that the PDEs \eqref{EDPTerm} and \eqref{EDPTerm0} are equivalent in the sense that a solution of \eqref{EDPTerm} (resp. of \eqref{EDPTerm0}) provides a solution to the other one. The program consists in considering the McKean type stochastic differential equation (SDE) \begin{equation}\label{MKIntro} \begin{cases} \displaystyle Y_t = Y_0 - \int^{t}_{0}b\left(T-r,Y_r\right)dr + \int^{t}_{0}\left\{\frac{\mathop{div_y}\left(\Sigma_{i.}\left(T-r,Y_r\right)p_{r} \left(Y_r\right)\right)}{p_{r}\left(Y_r\right)}\right\}_{i\in[\![1,d]\!]}dr + \int^{t}_{0} \sigma\left(T-r,Y_r\right)d\beta_r, \\ p_t \ {\rm density\ of} \ {\bf p}_t := {\rm law\ of} \ Y_t, \ t \in ]0,T[, \\ Y_0 \sim \bar \mu, \end{cases} \end{equation} where $\beta$ is an $m$-dimensional Brownian motion and $\Sigma = \sigma \sigma^\top$, whose solution is the couple $(Y,{\bf p})$.
Indeed an application of It\^o's formula (see Proposition \ref{PProbRep}) shows that whenever $(Y,{\bf p})$ is a solution of \eqref{MKIntro}, then $t \mapsto {\bf p}_{T-t}$ is a solution of \eqref{EDPTerm}. The idea of considering \eqref{MKIntro} comes from the SDE verified by the time-reversal of a diffusion. Time-reversal of Markov processes was explored by several authors: see for instance \cite{haussmann_pardoux} for the diffusion case in finite dimension, \cite{wakolbinger} for the diffusion case in infinite dimension and \cite{jacod_levy} for the jump case. Consider a \textit{forward} diffusion process $X$ solution of \begin{equation} \label{eq:X} X_t=X_0+\int_0^t b(s,X_s)ds+\int^{t}_{0}\sigma(s,X_s)dW_s, \ t \in [0,T], \end{equation} where $\sigma$ and $b$ are Lipschitz coefficients with linear growth and $W$ is a standard Brownian motion on $\mathbb R^m$. $\hat X_t:= X_{T-t}, t \in [0,T],$ will denote the time-reversed process. In \cite{haussmann_pardoux} the authors gave sufficient general conditions on $\sigma, b$ and the marginal laws $p_t$ of $X_t$ so that $Y:= \hat X$ is a solution (in law) of the SDE \begin{equation}\label{IntroPardoux} \displaystyle Y_t = X_T - \int^{t}_{0}b\left(T-r,Y_r\right)dr + \int^{t}_{0}\left\{\frac{\mathop{div_y}\left(\Sigma_{i.}\left(T-r,Y_r\right)p_{T-r} \left(Y_r\right)\right)}{p_{T-r}\left(Y_r\right)}\right\}_{i\in[\![1,d]\!]}dr + \int^{t}_{0} \sigma\left(T-r,Y_r\right)d\beta_r. \end{equation} The key idea to show well-posedness of the McKean SDE \eqref{MKIntro} is the study of uniqueness of the PDE \eqref{EDPTerm} (or \eqref{EDPTerm0}). For instance, the trivial case of the heat equation with terminal condition produces uniqueness. Suppose indeed that ${\bf u} : [0,T] \rightarrow \mathcal{S}'\left(\mathbb R^d\right)$ solves \begin{equation} \label{HeatPDE} \begin{cases} \partial_t{\bf u} = \Delta {\bf u} \\ {\bf u}\left(T\right) = \mu. \end{cases} \end{equation} Then the Fourier transform of ${\bf u}$, $v\left(t,\cdot\right) := \mathcal{F}{\bf u}\left(t,\cdot\right), t \in [0,T]$, solves the ODE (for fixed $\xi \in \mathbb R^d$) \begin{equation} \label{HeatODE} \begin{cases} \frac{d}{dt}v\left(t,\xi\right) = - \left|\xi\right|^2v\left(t,\xi\right), \left(t,\xi\right) \in [0,T]\times\mathbb R^d\\ v\left(T,\cdot\right) = \mathcal{F}\mu. \end{cases} \end{equation} This admits at most one solution: indeed, for each fixed $\xi$, \eqref{HeatODE} integrates to $v\left(t,\xi\right) = e^{\left|\xi\right|^2\left(T-t\right)}\mathcal{F}\mu\left(\xi\right)$, so that setting $\mathcal{F}\mu = 0$ the unique solution of \eqref{HeatODE} is the null function. Another relatively simple situation is described below to study uniqueness among the solutions of \eqref{EDPTerm} starting in the class of Dirac measures. Suppose for a moment that the PDE in the first line of \eqref{EDPTerm}, but equipped with an initial (instead of terminal) condition (see \eqref{Fokker}), is well-posed. Sufficient conditions for this will be provided in Remark \ref{R1}. Let $x \in \mathbb R^d$ and ${\bf u}$ be a solution of \eqref{EDPTerm} such that ${\bf u}(0) = \delta_{x}$. If $X^{x}$ is the solution of \eqref{eq:X} with initial condition $x$, it is well-known that the family of laws of $X^{x}_t, t \in [0,T]$, solves the same PDE with initial value $\delta_x$, i.e. \eqref{Fokker} with $\nu = \delta_x$. By the assumed well-posedness, this family coincides with ${\bf u}(t)$ and in particular $\bar\mu$ is the law of $X^{x}_T$. To conclude we only need to determine $x$. Consider the example when $\sigma$ is continuous, bounded and non-degenerate and the drift $b$ is affine, i.e. $b(s,y) = b_0\left(s\right) + b_1\left(s\right) y, \left(s,y\right) \in [0,T]\times\mathbb R^d$, $b_0$ (resp. $b_1$) being a mapping from $[0,T]$ to $\mathbb R^d$ (resp. to $M_d\left(\mathbb R\right)$).
Taking the expectation in the SDE fulfilled by $X^{x}$, we show that the function $t \mapsto E^x(t) := \mathbb E(X^{x}_t)$ is a solution of $$ E^x(t) = \int_{\mathbb R^d} y \,\bar\mu\left(dy\right) - \int_t^T \left(b_0(s) + b_1(s) E^x(s)\right) ds.$$ This linear ODE clearly has a unique solution; at this point $x = E^x(0)$ is determined. Those examples give a flavor of how to tackle the well-posedness issue. However, generalizing those approaches is far more complicated and constitutes the first part of the present work. The contributions of the paper are twofold. \begin{enumerate} \item We investigate uniqueness for the Fokker-Planck PDE with terminal condition \eqref{EDPTerm}. This is done in Section \ref{S3} in two different situations: the case when the coefficients are bounded and the situation of a PDE associated with an inhomogeneous Ornstein-Uhlenbeck semigroup. In Section \ref{SGP} we show uniqueness when the coefficients are stepwise time-homogeneous. In Theorem \ref{P315} the coefficients are time-homogeneous, bounded and H\"older, with non-degenerate diffusion. Corollary \ref{C313} extends previous results to the case of stepwise time-inhomogeneous coefficients. In Section \ref{S34}, Theorem \ref{BwdOU_Uniq} treats the Ornstein-Uhlenbeck case. In Section \ref{S32} we show uniqueness for bounded continuous coefficients for solutions starting in the class ${\cal C}$ of multiples of Dirac measures. In Proposition \ref{propLip1} we discuss the framework of dimension $d = 1$. Theorem \ref{propLipd} is devoted to the case $d \ge 2$. We distinguish the non-degenerate case from the possibly degenerate case with smooth coefficients: there we prove uniqueness for a small time horizon $T$. \item We study existence and uniqueness in law for the McKean SDE \eqref{MKIntro}, with some specific remarks concerning strong existence and pathwise uniqueness. We differentiate specifically between existence and uniqueness. After some preliminary considerations in Section \ref{Prelim}, Sections \ref{MKEX} and \ref{MKUNIQ} link the well-posedness of the PDE \eqref{EDPTerm} to the well-posedness of the McKean SDE \eqref{MKIntro}. In particular Proposition \ref{MKEx_Prop} (resp. Corollary \ref{Coro}) links the existence (resp. uniqueness) of \eqref{EDPTerm} with \eqref{MKIntro}. In Section \ref{SExamples44}, Proposition \ref{TExUniq} and Theorem \ref{TC313} discuss the case of bounded coefficients. Theorem \ref{MKOU_WellP} in Section \ref{Sex} is devoted to the Ornstein-Uhlenbeck case (with a not necessarily Gaussian terminal condition), where strong existence and pathwise uniqueness are established. \end{enumerate} \section{Notations and preliminaries} \label{SNotations} \setcounter{equation}{0} Let us fix $d,m \in \mathbb{N}^*$, $T > 0$. $\mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$ is the linear space of smooth functions with compact support. For a given $p \in \mathbb N^*$, $[\![1,p]\!]$ denotes the set of all integers between $1$ and $p$ included. $M_{d,m}\left(\mathbb R\right)$ stands for the set of $d\times m$ matrices. If $d = m$, we simply use the notation $M_{d}\left(\mathbb R\right)$. For a given $A \in M_d\left(\mathbb R\right)$, $Tr\left(A\right)$ (resp. $A^{\top}$) symbolizes the trace (resp. the transpose) of the matrix $A$. $\left|\left|A\right|\right|$ denotes the usual Frobenius norm. $\left<,\right>$ denotes the usual scalar product on $\mathbb R^d$, with associated norm $\left|.\right|$.
For a given $f: \mathbb R^p \rightarrow \mathbb R^l$, $p,l \in \mathbb{N}^*$, $\partial_{j}f^i, \left(i,j\right)\in [\![1,l]\!]\times[\![1,p]\!]$, denote the partial derivatives of $f$, defined in the sense of distributions on $\mathbb R^p$ whenever they exist. We also introduce the mapping $Jf$ from $\mathbb R^p$ to $M_{l,p}\left(\mathbb R\right)$ such that $Jf : z \mapsto \left(\partial_{j}f^i\left(z\right)\right)_{\left(i,j\right)\in [\![1,l]\!]\times[\![1,p]\!]}$. Let $\alpha \in ]0,1[, n \in \mathbb N$. ${\cal C}_b(\mathbb R^d)$ (resp. ${\cal C}^n_b(\mathbb R^d)$) indicates the space of bounded continuous functions (resp. bounded functions of class ${\cal C}^n$ such that all the derivatives are bounded). $\mathcal{C}^{\alpha}(\mathbb R^d)$ is the Banach space of bounded $\alpha$-H\"older functions $\mathbb R^d \rightarrow \mathbb R$ equipped with the norm $\left|.\right|_{\alpha} := \left|\left|.\right|\right|_{\infty} + \left[.\right]_{\alpha}$, where $$ \left[f\right]_{\alpha} := \sup_{x,y \in \mathbb R^d, x \neq y} \frac{\left|f(x) - f(y)\right|}{\left|x-y\right|^{\alpha}} < \infty.$$ If $n$ is some integer, $\mathcal{C}^{\alpha+n}(\mathbb R^d)$ is the Banach space of bounded functions $f: \mathbb R^d \rightarrow \mathbb R$ such that all its derivatives up to order $n$ are bounded and such that the derivatives of order $n$ are $\alpha$-H\"older continuous. It is equipped with the norm given by the sum of the ${\cal C}^n_b(\mathbb R^d)$-norm and of the quantities $[g]_\alpha$, where $g$ runs over the derivatives of order $n$ of $f$. For more details, see Section 0.2 of \cite{lunardi_1995}. If $E$ is a linear Banach space, we denote by $\left|\left|.\right|\right|_{E}$ the associated operator norm and by $\mathcal{L}\left(E\right)$ the space of linear bounded operators $E \rightarrow E$. Often in the sequel we will have $E = \mathcal{C}^{2\alpha}(\mathbb R^d)$. ${\cal P}\left(\mathbb{R}^d\right)$ (resp. ${\cal M}_+\left(\mathbb{R}^d\right), {\cal M}_{f}\left(\mathbb{R}^d\right)$) denotes the set of probability measures (resp. of non-negative finite measures, of finite signed measures) on $\left(\mathbb{R}^d,\mathcal{B}\left(\mathbb{R}^d\right)\right)$. We also denote by ${\cal S}\left(\mathbb R^d\right)$ the space of Schwartz functions and by ${\cal S}'\left(\mathbb R^d\right)$ the space of tempered distributions. For all $\phi \in {\cal S}\left(\mathbb R^d\right)$ and $\mu \in {\cal M}_f\left(\mathbb R^d\right)$, we introduce the notations \begin{equation*} {\cal F} \phi : \xi \mapsto \int_{\mathbb R^d}e^{-i\left<\xi,x\right>}\phi\left(x\right)dx, \ {\cal F} \mu : \xi \mapsto \int_{\mathbb R^d}e^{-i\left<\xi,x\right>}\mu\left(dx\right). \end{equation*} \smallbreak \noindent Given a mapping ${\bf u}: [0,T] \rightarrow \mathcal{M}_f\left(\mathbb{R}^d\right)$, we convene that when, for $t \in [0,T]$, ${\bf u}\left(t\right)$ has a density, this is denoted by $u\left(t,\cdot\right)$. We also introduce, for a given $t$ in $[0,T]$, the differential operator \begin{equation} \label{EqOpL} L_t f := \frac{1}{2}\sum^{d}_{i,j=1}\Sigma_{ij}(t,\cdot) \partial_{ij} f + \sum^{d}_{i=1 }b_i\left(t,\cdot\right)\partial_{i} f, \end{equation} $f \in C^2(\mathbb R^d)$, and denote by $L^*_t$ its formal adjoint, which means that for a given signed measure $\eta$ \begin{equation} \label{EqOpL*} L^*_t \eta := \frac{1}{2} \displaystyle{\sum_{i,j=1}^d} \partial_{ij}^2 \left( \Sigma_{i,j}(t,x) \eta \right) - div \left( b(t,x) \eta \right).
\end{equation} With this notation, equation~\eqref{EDPTerm0} can be rewritten as \begin{equation} \label{BackwardFokker} \begin{cases} \partial_t{\bf u} = L^*_t{\bf u} \\ {\bf u}\left(T\right) = \mu. \\ \end{cases} \end{equation} In the sequel we will often make use of the following assumptions. \begin{ass}\label{Lip1d} $b,\sigma$ are Lipschitz in space uniformly in time, with linear growth. \end{ass} \begin{ass} \label{Zvon1} $b$ and $\sigma$ are bounded and $\Sigma$ is continuous. \end{ass} \begin{ass} \label{Zvon3} There exists $\epsilon > 0$ such that for all $t \in [0,T]$, $\xi \in \mathbb R^d$, $x \in \mathbb R^d$ \begin{equation} \left<\Sigma(t,x)\xi,\xi\right> \geq \epsilon \left|\xi\right|^2. \end{equation} \end{ass} For a given random variable $X$ on a probability space $\left(\Omega,{\cal F},\mathbb P\right)$, $\mathcal{L}_{\mathbb P}\left(X\right)$ denotes its law under $\mathbb P$ and $\mathbb{E}_{\mathbb P}\left(X\right)$ its expectation under $\mathbb P$. When self-explanatory, the subscript will be omitted in the sequel. \section{A Fokker-Planck PDE with terminal condition} \label{S3} \setcounter{equation}{0} \subsection{Preliminary results on uniqueness} In this section, we consider a Fokker-Planck type PDE with terminal condition, for which the notion of solution is clarified in the following definition. \begin{defi} \label{Def} \noindent Fix $\mu \in {\cal M}_f\left(\mathbb R^d\right)$. We say that a mapping ${\bf u}$ from $[0,T]$ to ${\cal M}_f\left(\mathbb R^d\right)$ solves the PDE~\eqref{EDPTerm0} if for all $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$ and all $t \in [0,T]$ \begin{equation} \label{weak} \int_{\mathbb{R}^d}\phi\left(y\right){\bf u}\left(t\right)\left(dy\right) = \int_{\mathbb{R}^d}\phi\left(y\right)\mu\left(dy\right) - \int^{T}_{t}\int_{\mathbb{R}^d}L_s\phi\left(y\right){\bf u}\left(s\right)\left(dy\right)ds. \end{equation} \end{defi} We consider the following assumption related to a given class $\mathcal{C} \subseteq {\cal M}_+\left(\mathbb R^d\right)$. \begin{ass} \label{GH1} For all $\nu \in \mathcal{C}$, the PDE \begin{equation}\label{Fokker} \begin{cases} \partial_t{\bf u} = L^*_t{\bf u} \\ {\bf u}\left(0\right) = \nu \end{cases} \end{equation} admits at most one solution ${\bf u}: [0,T] \rightarrow {\cal M}_+\left(\mathbb{R}^d\right)$. \end{ass} We recall that, for a given $\nu \in {\cal M}_f\left(\mathbb R^d\right)$, ${\bf u}: [0,T] \rightarrow {\cal M}_f\left(\mathbb{R}^d\right)$ is a solution of the PDE \eqref{Fokker} if for all $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$ and all $t \in [0,T]$, \begin{equation} \label{weakbis} \int_{\mathbb{R}^d}\phi\left(y\right){\bf u}\left(t\right)\left(dy\right) = \int_{\mathbb{R}^d}\phi\left(y\right)\nu\left(dy\right) + \int^{t}_{0}\int_{\mathbb{R}^d}L_s\phi\left(y\right){\bf u}\left(s\right)\left(dy\right)ds. \end{equation} Suppose that Assumption \ref{GH1} holds with respect to some class ${\cal C}$ and that \eqref{Fokker} admits an ${\cal M}_+\left(\mathbb R^d\right)$-valued solution ${\bf u}$ with $\nu := {\bf u}(0) \in {\cal C}$. Then this solution is unique and will be denoted by ${\bf u}^{\nu}$ in the sequel. We remark that, whenever Assumption \ref{GH1} holds with respect to a given ${\cal C} \subseteq {\cal P}\left(\mathbb R^d\right)$, then \eqref{Fokker} admits at most one ${\cal M}_+\left(\mathbb R^d\right)$-valued solution for any initial value belonging to $\mathbb R^*_+{\cal C} := \left(\alpha\nu\right)_{\alpha > 0,\nu \in {\cal C}}$.
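\smallbreak \noindent The next observation (Proposition \ref{PFundam} below) states that laws of SDE solutions provide solutions of \eqref{Fokker} in the sense of \eqref{weakbis}. The following self-contained sketch, with toy coefficients of our own choosing, checks \eqref{weakbis} by Monte Carlo for a one-dimensional SDE; it is a mere illustration and is not used anywhere in the proofs.
\begin{verbatim}
# Monte Carlo check of the weak identity (weakbis):
#   E[phi(X_t)] = E[phi(X_0)] + int_0^t E[(L_s phi)(X_s)] ds
# for the toy SDE dX = -X dt + sigma dW, X_0 = x0 (hypothetical choices).
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths, sigma, x0 = 1.0, 400, 100_000, 0.7, 1.0
dt = T / n_steps

b = lambda x: -x                       # drift (toy choice)
phi = lambda x: np.exp(-x**2)          # smooth test function

def L_phi(x):                          # L phi = b phi' + 0.5 sigma^2 phi''
    dphi = -2.0 * x * np.exp(-x**2)
    d2phi = (4.0 * x**2 - 2.0) * np.exp(-x**2)
    return b(x) * dphi + 0.5 * sigma**2 * d2phi

X = np.full(n_paths, x0)
rhs = phi(X).mean()                    # E[phi(X_0)]
for _ in range(n_steps):               # Euler-Maruyama scheme
    rhs += L_phi(X).mean() * dt        # accumulate int E[L phi(X_s)] ds
    X += b(X) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print("E[phi(X_T)]          :", phi(X).mean())
print("phi(x0) + int E[Lphi]:", rhs)   # the two numbers should be close
\end{verbatim}
Up to the Euler discretization and Monte Carlo errors, the two printed values agree, which is the content of \eqref{weakbis} for this test function.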
\smallbreak We start with a simple but fundamental observation. \begin{prop} \label{PFundam} Suppose that $\sigma, b$ are locally bounded. Let $\nu$ be a Borel probability measure on $\mathbb R^d$, $\alpha > 0$, and let $\xi$ be a r.v. distributed according to $\nu$. Suppose that there is a solution $X$ of the SDE \begin{equation} \label{EqLin} X_t = \xi + \int^{t}_{0}b\left(r,X_r\right)dr + \int^{t}_{0}\sigma\left(r,X_r\right)dW_r, \ t \in [0,T], \ \mathbb{P}\rm{-a.s.}, \end{equation} where $W$ is an $m$-dimensional standard Brownian motion. Then the ${\cal M}_+\left(\mathbb R^d\right)$-valued function $t \mapsto \alpha\mathcal{L}\left(X_t\right)$ is a solution of \eqref{Fokker} with initial value $\alpha \nu$. \end{prop} \begin{proof} \ One first applies It\^o's formula to $\varphi(X_t)$, where $\varphi$ is a smooth function with compact support, and then takes the expectation. \end{proof} \begin{rem} \label{R1} \begin{enumerate} \item Suppose that the coefficients $b,\Sigma$ are bounded. Assumption \ref{GH1} holds with respect to $\mathcal{C} := {\cal M}_+\left(\mathbb R^d\right)$ as soon as the martingale problem associated with $b,\Sigma$ admits uniqueness for every initial condition of the type $\delta_x, x \in \mathbb R^d$. Indeed, this is a consequence of Lemma 2.3 in \cite{figalli}. \item Suppose that $b$ and $\sigma$ have linear growth. Let $\nu \in {\cal M}_+\left(\mathbb R^d\right)$ be non-vanishing (resp. $\nu \in {\cal P}\left(\mathbb R^d\right)$). The existence of an ${\cal M}_+\left(\mathbb R^d\right)$-valued (resp. ${\cal P}\left(\mathbb R^d\right)$-valued) solution of \eqref{Fokker} (even on $t \ge 0$) is ensured when the martingale problem associated with $b,\Sigma$ admits existence (and consequently when the SDE \eqref{EqLin} admits weak existence) with initial condition $\nu$ (resp. $\frac{\nu}{\Vert\nu\Vert}$). This follows by Proposition \ref{PFundam}. We remark that this happens, for example, when the coefficients $b, \sigma$ are continuous with linear growth: see Theorem 12.2.3 in \cite{stroock} for the case of bounded coefficients; the general case can easily be obtained by truncation. \item The martingale problem associated with $b,\Sigma$ is well-posed for every deterministic initial condition, for instance in the following cases. \begin{itemize} \item When $\Sigma, b$ have linear growth and $\Sigma$ is continuous and non-degenerate, i.e. under Assumption \ref{Zvon3}; see \cite{stroock}, Corollary 7.1.7 and Theorem 10.2.2. \item When $d=1$ and $\sigma$ is bounded and lower bounded by a positive constant on each compact set; see \cite{stroock}, Exercise 7.3.3. \item When $d =2$, $\Sigma$ is non-degenerate and $\sigma$ and $b$ are time-homogeneous and bounded; see \cite{stroock}, Exercise 7.3.4. \item When $\sigma, b$ are Lipschitz with linear growth (with respect to the space variable), in which case we even have strong solutions of the corresponding stochastic differential equation. \end{itemize} \end{enumerate} \end{rem} \begin{lem} \label{LC313} Let $T > 0$ be arbitrary and $\nu \in {\cal P}\left(\mathbb R^d\right)$. We suppose the validity of Assumptions \ref{Zvon1} and \ref{Zvon3}. Then there is a unique ${\cal M}_+\left(\mathbb R^d\right)$-valued solution ${\bf u}^{\nu}$ of \eqref{Fokker} with ${\bf u}^{\nu}(0) = \nu$. Moreover ${\bf u}^\nu$ takes values in ${\cal P}(\mathbb R^d)$. \end{lem} \begin{proof} \ Existence follows from items 2. and 3. of Remark \ref{R1}. Uniqueness is a consequence of items 1. and 3. of the same Remark.
\end{proof} \noindent Below we give two uniqueness results for the PDE \eqref{EDPTerm}. \begin{prop} \label{P1} Suppose Assumption \ref{GH1} holds with respect to a given ${\cal C} \subseteq {\cal M}_+(\mathbb R^d)$. Suppose that for all $\nu \in {\cal C}$ there exists an ${\cal M}_+(\mathbb R^d)$-valued solution of \eqref{Fokker} with initial value $\nu$. Then, the following properties are equivalent. \begin{enumerate} \item The mapping $\nu \mapsto {\bf u}^{\nu}(T)$ from $\mathcal{C}$ to ${\cal M}_+(\mathbb R^d)$ is injective. \item For all $\mu \in {\cal M}_+(\mathbb R^d)$, the PDE \eqref{BackwardFokker} with terminal value $\mu$ admits at most one solution in the sense of Definition \ref{Def} among all ${\cal M}_+\left(\mathbb R^d\right)$-valued solutions starting in the class $\mathcal{C}$. \end{enumerate} \end{prop} \begin{proof} \smallbreak \noindent Concerning the implication 2. $\Rightarrow$ 1., consider $\left(\nu,\nu'\right) \in \mathcal{C}^2$ such that ${\bf u}^{\nu}(T) = {\bf u}^{\nu'}(T) $ and suppose that uniqueness holds for equation \eqref{BackwardFokker} for all terminal values in ${\cal M}_+\left(\mathbb R^d\right)$, in the sense of Definition \ref{Def}, among non-negative measure-valued solutions starting in the class $\mathcal{C}$. We remark that ${\bf u}^{\nu},{\bf u}^{\nu'}$ are such solutions and are associated with the same terminal value. Uniqueness gives ${\bf u}^{\nu} = {\bf u}^{\nu'}$ and in particular $\nu = \nu'$. \smallbreak \noindent Concerning the implication 1. $\Rightarrow$ 2., consider ${\bf u}^1,{\bf u}^2$ two non-negative measure-valued solutions of equation \eqref{EDPTerm} in the sense of Definition \ref{Def}, with the same terminal value in ${\cal M}_+\left(\mathbb R^d\right)$, such that ${\bf u}^i\left(0\right), i \in \left\{1,2\right\},$ belong to $\mathcal{C}$, and suppose that $\nu \mapsto {\bf u}^{\nu}\left(T\right)$ is injective from $\mathcal{C}$ to ${\cal M}_+\left(\mathbb R^d\right)$. Setting $\nu_i := {\bf u}^i\left(0\right)$, we remark that for a given $i \in \left\{1,2\right\}$ \begin{equation} \label{FPBis} \begin{cases} \partial_t{\bf u}^i = L^*_t{\bf u}^i \\ {\bf u}^i\left(0\right) = \nu_i, \\ \end{cases} \end{equation} in the sense of identity \eqref{weakbis}. Then, the fact that ${\bf u}^1\left(T\right) = {\bf u}^2\left(T\right)$ gives $ {\bf u}^{\nu_1}\left(T\right) = {\bf u}^{\nu_2}\left(T\right). $ By injectivity $\nu_1 = \nu_2$ and the result follows by Assumption \ref{GH1}. \end{proof} Proceeding in the same way as for the proof of Proposition \ref{P1} we obtain the following. \begin{prop}\label{P2} Suppose that for all $\nu \in {\cal M}_f\left(\mathbb R^d\right)$, there exists a unique solution ${\bf u}^\nu$ of \eqref{Fokker} with initial value $\nu$. Then, the following properties are equivalent. \begin{enumerate} \item The mapping $\nu \mapsto {\bf u}^{\nu}(T)$ is injective. \item For all $\mu \in {\cal M}_f(\mathbb R^d)$, the PDE \eqref{EDPTerm0} with terminal value $\mu$ admits at most one solution in the sense of Definition \ref{Def}. \end{enumerate} \end{prop} \begin{rem} \label{RP1} \begin{enumerate} \item Suppose that the coefficients $\Sigma, b$ are bounded. Then, any measure-valued solution ${\bf u}:[0,T] \rightarrow {\cal M}_+(\mathbb R^d)$ of \eqref{Fokker} such that ${\bf u}(0) \in {\cal P}(\mathbb R^d)$ takes values in ${\cal P}(\mathbb R^d)$. Indeed, this can be shown by approximating the function $\varphi \equiv 1$ from below with smooth functions with compact support.
\item Replacing ${\cal M}_+(\mathbb R^d)$ with ${\cal P}(\mathbb R^d)$ in Assumption \ref{GH1}, item 2. in Proposition \ref{P1} can also be stated with ${\cal M}_+(\mathbb R^d)$ replaced by ${\cal P}(\mathbb R^d)$. \end{enumerate} \end{rem} \subsection{Uniqueness: the case of Dirac initial conditions} \label{S32} \noindent In this section we give examples of functions $b,\sigma$ for which uniqueness for \eqref{BackwardFokker} among ${\cal M}_+(\mathbb R^d)$-valued solutions is ensured, supposing that Assumption \ref{GH1} is in force with respect to $\mathcal{C} := \left(\alpha\delta_x\right)_{\alpha > 0,x \in \mathbb R^d}$. \smallbreak \begin{rem} \label{Ralpha} Let $\alpha \ge 0$ and $x \in \mathbb R^d$. Suppose that there is a solution $X^x$ of the SDE \eqref{EqLin} with $\xi = x$. \begin{enumerate} \item By Proposition \ref{PFundam}, the ${\cal M}_+\left(\mathbb R^d\right)$-valued mapping $t \mapsto \alpha\mathcal{L}\left(X^x_t\right)$ is a solution of \eqref{Fokker} with initial value $\alpha\delta_x$. \item $t \mapsto \alpha\mathcal{L}\left(X^x_t\right)$ can be identified with ${\bf u}^{\alpha\delta_x}$ and in particular $ \int_{\mathbb R^d} {\bf u}^{\alpha\delta_x}\left(t\right)\left(dy\right) = \alpha, \ \forall t \in [0,T].$ \end{enumerate} \end{rem} If Assumption \ref{Lip1d} holds, $X^x$ denotes the unique solution of equation \eqref{EqLin} with initial value $x \in \mathbb R^d$. \noindent We start with the case of dimension $d = m = 1$. \begin{prop} \label{propLip1} Suppose the validity of Assumption \ref{GH1} with $\mathcal{C} = \left(\alpha\delta_x\right)_{\alpha > 0,x \in \mathbb R}$ and of Assumption \ref{Lip1d} with $d = m = 1$. Then, for all $\mu \in {\cal M}_+\left(\mathbb R\right)$, equation \eqref{EDPTerm} with terminal value $\mu$ admits at most one solution in the sense of Definition \ref{Def} among the ${\cal M}_+\left(\mathbb R\right)$-valued solutions starting in ${\cal C}$. \end{prop} \begin{proof} \noindent Fix $\left(x,y\right) \in \mathbb R^2$ and $\alpha,\beta \geq 0$ such that \begin{equation}\label{identity} {\bf u}^{\alpha\delta_x}\left(T\right) = {\bf u}^{\beta\delta_y}\left(T\right). \end{equation} \noindent Thanks to Proposition \ref{P1}, it suffices to show that $\alpha = \beta$ and $x=y$. \noindent By item 2. of Remark \ref{Ralpha}, we have $\alpha = \beta$ and consequently $\mathcal{L}_{\mathbb{P}}\left(X^x_T\right) = \mathcal{L}_{\mathbb{P}}\left(X^y_T\right)$. In particular $\mathbb{E}\left(X^x_T\right) = \mathbb{E}\left(X^y_T\right)$. Since $b,\sigma$ are Lipschitz in space, they admit bounded space derivatives in the sense of distributions, which we denote by $\partial_xb$ and $\partial_x\sigma$. \smallbreak \noindent Set $Z^{x,y} := X^y - X^x$. We have \begin{equation} \label{EDol} Z^{x,y}_t = \left(y-x\right) + \int^{t}_{0}b^{x,y}_sZ^{x,y}_sds + \int^{t}_{0}\sigma^{x,y}_sZ^{x,y}_sdW_s, \forall t\in [0,T], \end{equation} \noindent where for a given $s \in [0,T]$ \begin{equation*} b^{x,y}_s = \int^{1}_{0}\partial_xb\left(s,aX^y_s + (1-a)X^x_s\right)da ,\ \sigma^{x,y}_s = \int^{1}_{0}\partial_x\sigma\left(s,aX^y_s + (1-a)X^x_s\right)da. \end{equation*} \smallbreak \noindent It is well known that the unique solution of \eqref{EDol} is \begin{equation*} Z^{x,y} = \exp\left(\int^{.}_{0}b^{x,y}_sds\right)\mathcal{E}\left(\int^{.}_{0}\sigma^{x,y}_sdW_s \right)(y-x), \end{equation*} where $\mathcal{E}\left(\cdot\right)$ denotes the Dol\'eans exponential.
Taking expectations at time $T$ and using $\mathbb{E}\left(Z^{x,y}_T\right) = 0$, we finally have \begin{equation*} \mathbb{E}\left(\exp\left(\int^{T}_{0}b^{x,y}_sds\right)\mathcal{E}\left(\int^{.}_{0}\sigma^{x,y}_sdW_s \right)_T\right)\left(y-x\right) = 0. \end{equation*} Since the quantity appearing in the expectation is strictly positive, we conclude that $x = y$. \end{proof} \noindent We now continue with a discussion concerning the multidimensional case $d \geq 2$. The uniqueness result below only holds when the time horizon is small enough. Later, in Section \ref{SGP}, we will present results valid for any time horizon, in a framework of piecewise time-homogeneous coefficients. Theorem \ref{propLipd} distinguishes two cases: the first one with regular, possibly degenerate coefficients, the second one with non-degenerate, possibly irregular coefficients. \begin{thm} \label{propLipd} We suppose Assumption \ref{GH1} with $\mathcal{C} = \left(\alpha\delta_x\right)_{\alpha > 0, x \in \mathbb R^d}$ and the validity of either item (a) or (b) below. \begin{description} \item{(a)} Assumption \ref{Lip1d}. \item{(b)} Assumptions \ref{Zvon1} and \ref{Zvon3}. \end{description} Then, for $T > 0$ small enough, the following holds. For all $\mu \in {\cal M}_+\left(\mathbb R^d\right)$, equation \eqref{EDPTerm} admits at most one solution in the sense of Definition \ref{Def} among the ${\cal M}_+\left(\mathbb R^d\right)$-valued solutions starting in ${\cal C}$. \end{thm} The proof of item (a) of Theorem \ref{propLipd} relies on a basic moment estimation lemma. \begin{lem} \label{Lemma} We suppose Assumption \ref{Lip1d}. Let $\left(x,y\right) \in \mathbb R^d\times\mathbb R^d$. Then, $ \sup_{t\in[0,T]}\mathbb{E} \left(\left|X^x_t - X^y_t \right|^2 \right) \le \left|y-x\right|^2e^{KT},$ with $K := 2K^b + \sum^{m}_{j=1}\left(K^{\sigma,j}\right)^2$, where \begin{equation*} K^b := \sup_{s\in [0,T]}\left|\left| \ \left|\left|Jb\left(s,\cdot\right)\right|\right| \ \right|\right|_{\infty} \end{equation*} and for all $j \in [\![1,m]\!]$ \begin{equation*} K^{\sigma,j} := \sup_{s\in [0,T]}\left|\left| \ \left|\left|J\sigma_{.j}\left(s,\cdot\right)\right|\right| \ \right|\right|_{\infty}. \end{equation*} \end{lem} \begin{prooff} \ (of Lemma \ref{Lemma}). \noindent For a given $\left(x,y\right) \in \mathbb R^d \times\mathbb R^d$ we set \begin{equation*} Z^{x,y}_t := X^{y}_t - X^{x} _t, t\in[0,T]. \end{equation*} We have \begin{equation} \label{EZxy} Z^{x,y}_t = y-x + \int^{t}_{0}B^{x,y}_r Z^{x,y}_rdr +\sum^{m}_{j=1} \int^{t}_{0}C^{x,y,j}_r Z^{x,y}_rdW^j_r, \ t\in[0,T], \end{equation} \noindent with, for all $r\in[0,T]$, \begin{equation*} B^{x,y}_r := \int^{1}_{0}Jb\left(r,aX^{y}_r+(1-a)X^{x}_r\right)da, \quad C^{x,y,j}_r := \int^{1}_{0}J\sigma_{.j}\left(r,aX^{y}_r+(1-a)X^{x}_r\right)da, \forall \ j \in [\![1,m]\!]. \end{equation*} By the classical existence and uniqueness theorem for SDEs with Lipschitz coefficients we know that \begin{equation} \label{SQI} \mathbb{E}(\sup_{s \leq T} \left|X^{z}_s\right|^2) < \infty, \end{equation} for all $z \in \mathbb R^d$. This implies \begin{equation} \label{sup} \mathbb{E}(\sup_{t\in[0,T]}\left| Z^{x,y}_t \right|^2) < \infty.
\end{equation} Now, It\^{o}'s formula gives, for all $t \in [0,T]$, \begin{equation} \label{ItoSquareNorm} \left|Z^{x,y}_t\right|^2 = \left|y-x\right|^2 + 2\int^{t}_{0}\left<B^{x,y}_rZ^{x,y}_r,Z^{x,y}_r\right>dr + \sum^{m}_{j=1}\int^{t}_{0}\left|C^{x,y,j}_rZ^{x,y}_r\right|^2dr + 2\sum^{d}_{i=1}M^{x,y,i}_t, \end{equation} where, for a given $i \in [\![1,d]\!]$, $M^{x,y,i}$ denotes the local martingale $\int^{\cdot}_{0}Z^{x,y,i}_s\sum^{m}_{j=1}\left(C^{x,y,j}_sZ^{x,y}_s\right)_idW^j_s$. \smallbreak \noindent Consequently, for all $i \in [\![1,d]\!]$, we have \begin{align} \label{EMForm} \sqrt{\left[M^{x,y,i}\right]_T} &{}= \sqrt{\sum^{m}_{j=1} \int^{T}_{0}\left(Z^{x,y,i}_r\right)^2\left(C^{x,y,j}_r Z^{x,y}_r\right)^2_idr}, \nonumber \\ &{}\leq \sqrt{\sum^{m}_{j=1} \int^{T}_{0}\left|C^{x,y,j}_r Z^{x,y}_r\right|^2\left|Z^{x,y}_r\right|^2dr}, \\ &{}\leq \sqrt{T\sum^{m}_{j=1} \left(K^{\sigma,j}\right)^2}\sup_{r\in[0,T]}\left|Z^{x,y}_r\right|^2. \nonumber \end{align} By the latter inequality and \eqref{sup}, we know that $\mathbb{E}\left([M^{x,y,i}]_T^{\frac{1}{2}}\right) < \infty$, so for all $i \in [\![1,d]\!]$, $M^{x,y,i}$ is a true martingale. Taking the expectation in identity \eqref{ItoSquareNorm}, we obtain \smallbreak \noindent \begin{equation*} \mathbb{E}\left(\left|Z^{x,y}_t\right|^2\right) = \left|y-x\right|^2 + \int^{t}_{0}\mathbb{E}\left(2\left<B^{x,y}_rZ^{x,y}_r,Z^{x,y}_r\right> + \sum^{m}_{k=1}\left|C^{x,y,k}_rZ^{x,y}_r\right|^2\right)dr. \end{equation*} Hence, thanks to the Cauchy-Schwarz inequality and to the definition of $K^b$ and of $K^{\sigma,j}$ for all $j \in [\![1,m]\!]$, \begin{equation*} \mathbb{E}\left(\left|Z^{x,y}_t\right|^2\right) \leq \left|y-x\right|^2 + K \int^{t}_{0}\mathbb{E}\left(\left|Z^{x,y}_r\right|^2\right)dr, \end{equation*} and we conclude via Gronwall's lemma. \end{prooff} \begin{prooff} \ (of Theorem \ref{propLipd}). Fix $\left(x_1,x_2\right) \in \mathbb R^d\times\mathbb R^d, \alpha,\beta \geq 0$ such that \begin{equation} {\bf u}^{\alpha\delta_{x_1}}\left(T\right) = {\bf u}^{\beta\delta_{x_2}}\left(T\right). \end{equation} Thanks to Proposition \ref{P1}, it suffices to show that $\alpha = \beta$ and $x_1 = x_2$. \smallbreak \begin{enumerate} \item We suppose first Assumption \ref{Lip1d}. Once again, item 2. of Remark \ref{Ralpha} gives $\alpha = \beta$ and \begin{equation} \label{Eequal} \mathbb{E}\left(X^{x_1}_T\right) = \mathbb{E}\left(X^{x_2}_T\right). \end{equation} \smallbreak \noindent Adopting the same notations as in the proof of Lemma \ref{Lemma}, an argument similar to \eqref{EMForm}, together with \eqref{sup}, allows us to show that the local martingale part of $Z^{x_1,x_2} = X^{x_2} - X^{x_1}$ defined in \eqref{EZxy} is a true martingale. So, taking the expectation in \eqref{EZxy} with $x=x_1, y = x_2$, by Lemma \ref{Lemma} we obtain \begin{align*} \left|\mathbb{E} \left(X^{x_2}_T - X^{x_1}_T\right) - (x_2-x_1)\right| &{}\le K^b \int^{T}_{0}\mathbb{E} \vert X^{x_2}_r - X^{x_1}_r \vert dr \\ &{}\leq K^b \int^{T}_{0} \sqrt{\mathbb{E}\left(\vert X^{x_2}_r - X^{x_1}_r \vert^2\right)}dr \\ &{}\leq \frac{K}{2} T e^{\frac{K}{2}T} \left|x_2-x_1\right|. \end{align*} Remembering \eqref{Eequal}, this implies \begin{equation*} \left(1 - \frac{K}{2}Te^{\frac{K}{2}T}\right)\left|x_2-x_1\right| \leq 0. \end{equation*} Choosing $M > 0$ with $Me^M < 1$ and taking $T$ such that $\frac{K}{2}T < M$, we have $1 - \frac{K}{2}Te^{\frac{K}{2}T} > 0$, which implies $ \vert x_2 - x_1 \vert= 0$. \item We suppose here Assumptions \ref{Zvon1} and \ref{Zvon3}. Firstly, point 1. of Theorem 1.
in \cite{z} ensures the existence of probability spaces $\left(\Omega^i, \mathcal{F}^i,\mathbb{P}^i\right), \ i \in \left\{1,2\right\}$, on which are defined respectively two $m$-dimensional Brownian motions $W^1,W^2$ and two processes $X^1,X^2$ such that \begin{equation*} X^i_t = x_i + \int^{t}_{0}b\left(s,X^i_s\right)ds + \int^{t}_{0}\sigma\left(s,X^i_s\right)dW^i_s, \ \mathbb{P}^i\rm{-a.s.}, t \in [0,T]. \end{equation*} \smallbreak \noindent Once again, item 2. of Remark \ref{Ralpha} implies $\alpha = \beta$ and \begin{equation} \label{TermLaw} \mathcal{L}_{\mathbb{P}^1}\left(X^1_T\right) = \mathcal{L}_{\mathbb{P}^2}\left(X^2_T\right). \end{equation} Secondly, point b. of Theorem 3 in \cite{z} shows that for every given bounded $D \subset \mathbb R^d$ and all $\phi: [0,T] \times \mathbb R^d \rightarrow \mathbb R^d$ belonging to $W^{1,2}_p\left([0,T]\times D\right)$ (see the definition of that space in \cite{z}) for a given $p > d+2$, we have for all $t\in [0,T], i \in \left\{1,2\right\}$, \begin{equation}\label{TSDE} \phi\left(t,X^i_t\right) = \phi\left(0,x_i\right) + \int^{t}_{0}\left(\partial_t + L_s\right)\phi\left(s,X^i_s\right)ds + \int^{t}_{0}J\phi\left(s,X^i_s\right)\sigma\left(s,X^i_s\right)dW^i_s, \ \mathbb{P}^i\rm{-a.s.}, \end{equation} where the application of $\partial_t + L_t, t \in [0,T],$ has to be understood componentwise. \smallbreak \noindent Thirdly, Theorem 2. in \cite{z} shows that if $T$ is sufficiently small, then the system of $d$ PDEs \begin{equation}\label{E317} \forall \left(t,x\right) \in [0,T]\times\mathbb R^d, \ \begin{cases} \partial_t\phi\left(t,x\right) + L_t \phi\left(t,x\right) = 0, \\ \phi\left(T, x\right) = x, \end{cases} \end{equation} admits a solution $\phi$ in $W^{1,2}_p\left([0,T]\times D\right)$ for all $p > 1$ and all bounded $D \subset \mathbb R^d$. Moreover the partial derivatives in space of $\phi$ are bounded (in particular $J \phi$ is bounded) and $\phi\left(t,\cdot\right)$ is injective for all $t \in [0,T]$. \smallbreak \noindent Combining now \eqref{E317} with identity \eqref{TSDE}, we observe that $\phi\left(.,X^i\right), i \in \left\{1,2\right\},$ are local martingales. Using additionally the fact that $J\phi$ and $\sigma$ are bounded, it is easy to show that they are true martingales. Taking the expectation in \eqref{TSDE} with respect to $\mathbb P^i, i =1,2$, gives \begin{equation*} \phi\left(0,x_i\right) = \mathbb{E}_{\mathbb{P}^i}\left(\phi\left(T,X^i_T\right)\right), i \in \left\{1,2\right\}. \end{equation*} In parallel, identity \eqref{TermLaw} gives \begin{equation*} \mathbb{E}_{\mathbb{P}^1}\left(\phi\left(T,X^1_T\right)\right) = \mathbb{E}_{\mathbb{P}^2}\left(\phi\left(T,X^2_T\right)\right). \end{equation*} So, $\phi\left(0,x_1\right) =\phi\left(0,x_2\right)$. We conclude that $x_1 = x_2$ since $\phi\left(0,\cdot\right)$ is injective. \end{enumerate} \end{prooff} \subsection{Uniqueness: the case of bounded, non-degenerate coefficients} \label {SGP} In this section we consider the case of time-homogeneous, bounded and H\"{o}lder coefficients in dimension $d \geq 1$. We suppose that Assumption \ref{Zvon3} holds and consider the following additional one. \begin{ass}\label{Lun1} \begin{enumerate} \item $b,\sigma$ are time-homogeneous and bounded. \item For all $\left(i,j\right) \in [\![1,d]\!]^2$, $b_i, \Sigma_{ij} \in \mathcal{C}^{2\alpha}\left(\mathbb R^d\right)$, for a given $\alpha \in ]0,\frac{1}{2}[$. \end{enumerate} \end{ass} We refer to the differential operator $L_t$ defined in \eqref{EqOpL}; since the coefficients are time-homogeneous, we simply write $L \equiv L_t$.
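\smallbreak \noindent Before turning to the rigorous statements, we propose a toy numerical sketch of the injectivity mechanism exploited below: for constant coefficients $b \equiv b_0$, $\sigma \equiv s_0$ (a special case of Assumption \ref{Lun1}), one has ${\cal F}{\bf u}^{\nu}(T)(\xi) = m_T(\xi)\,{\cal F}\nu(\xi)$ with a nowhere vanishing multiplier $m_T$, so the terminal value determines $\nu$. The following script, an illustration of ours which is not used in the proofs, recovers $\nu$ from ${\bf u}^{\nu}(T)$ by Fourier division on a grid.
\begin{verbatim}
# Toy case b = b0, sigma = s0 (constant): F u(T) = m_T * F nu, where
# m_T(xi) = exp(-0.5*s0^2*T*xi^2 - i*b0*T*xi) never vanishes, so nu
# can be recovered from u(T) by dividing in Fourier space.
import numpy as np

N, L = 256, 10.0                       # grid size and half-width
x = np.linspace(-L, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
b0, s0, T = 0.5, 1.0, 0.02             # toy coefficients and horizon

# initial "measure" nu: two narrow bumps (densities on the grid)
nu = np.exp(-(x + 3) ** 2 / 0.02) + 0.5 * np.exp(-(x - 2) ** 2 / 0.02)

m_T = np.exp(-0.5 * s0**2 * T * xi**2 - 1j * b0 * T * xi)

u_T = np.fft.ifft(np.fft.fft(nu) * m_T).real      # forward solution u(T)
nu_rec = np.fft.ifft(np.fft.fft(u_T) / m_T).real  # recover nu from u(T)

print("max reconstruction error:", np.abs(nu_rec - nu).max())
\end{verbatim}
In the genuinely H\"older, variable-coefficient setting the explicit multiplier is no longer available; the analyticity argument of Lemmata \ref{key_1} and \ref{key_2} below plays the role of this explicit Fourier inversion.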
\begin{rem} \label{RPreliminary} Suppose the validity of Assumptions \ref{Zvon3} and \ref{Lun1}. \begin{enumerate} \item Let $T > 0$. Proposition 4.2 in \cite{figalli} implies that for every $\nu \in {\cal M}_f\left(\mathbb R^d\right)$, there exists a unique ${\cal M}_f\left(\mathbb R^d\right)$-valued solution of equation \eqref{Fokker} with initial value $\nu$. This unique solution will be denoted by ${\bf u}^{\nu}$; in the sequel the dependence on $T$ will be omitted from the notation. \item We remark that the uniqueness result mentioned in item 1. is unknown in the case of general bounded coefficients: in that general framework, only a uniqueness result for non-negative solutions is available, see item 1. of Remark \ref{R1}. \item Since $L$ is time-homogeneous, taking into account Assumptions \ref{Zvon3} and \ref{Lun1} and operating a time shift, uniqueness for \eqref{Fokker} also holds when the initial time $0$ is replaced by any other initial time, for every initial value in ${\cal M}_f\left(\mathbb R^d\right)$ and any other maturity. \end{enumerate} \end{rem} \begin{thm} \label{P315} Suppose the validity of Assumptions \ref{Zvon3} and \ref{Lun1}. Then, for all $\mu \in {\cal M}_f\left(\mathbb R^d\right)$, equation \eqref{EDPTerm} with terminal value $\mu$ admits at most one ${\cal M}_f\left(\mathbb R^d\right)$-valued solution in the sense of Definition \ref{Def}. \end{thm} By Theorems 3.1.12, 3.1.14 and Corollary 3.1.16 in \cite{lunardi_1995}, the differential operator $L$ suitably extends to a map ${\cal D}(L) = \mathcal{C}^{2\alpha+2}(\mathbb R^d) \subset \mathcal{C}^{2\alpha}(\mathbb R^d) \rightarrow \mathcal{C}^{2\alpha}\left(\mathbb R^d\right)$, and that extension is sectorial, see Definition 2.0.1 in \cite{lunardi_1995}. We set $E:= \mathcal{C}^{2\alpha}\left(\mathbb R^d\right)$. Following the considerations below that definition, namely (2.0.2) and (2.0.3) therein, one defines $P_t := e^{tL}, P_t: E \rightarrow E, t \geq 0$. By Proposition 2.1.1 in \cite{lunardi_1995}, $(P_t)_{t \geq 0}$ is a semigroup and $t \mapsto P_t$ is analytic on $]0,+\infty[$ with values in $\mathcal{L}\left(E\right)$, with respect to $\left|\left|.\right|\right|_{E}$. \smallbreak \noindent Before proving the theorem, we provide two lemmata. \begin{lem} \label{key_1} \noindent Suppose the validity of Assumptions \ref{Zvon3} and \ref{Lun1}. Then, for all $\phi \in E$ and all $\nu \in {\cal M}_f\left(\mathbb R^d\right)$, the function from $\mathbb R^*_+$ to $\mathbb{R}$ \begin{equation*} t \mapsto \int_{\mathbb R^d}P_t\phi\left(x\right)\nu\left(dx\right) \end{equation*} is analytic. \end{lem} \begin{proof} \ The result can easily be established using the fact that $t \mapsto P_t$, with values in ${\cal L}(E)$, is analytic, and the fact that the map $\psi \mapsto \int_{\mathbb R^d} \psi(x) \nu(dx)$ is linear and bounded. \end{proof} \begin{lem} \label{key_2} Suppose the validity of Assumptions \ref{Zvon3} and \ref{Lun1}. Let $T > 0$. Then for all $\nu \in {\cal M}_f\left(\mathbb R^d\right)$, $t \in [0,T]$ and $\phi \in E$ we have the identity \begin{equation} \label{EL310} \int_{\mathbb R^d}P_{t}\phi\left(x\right)\nu\left(dx\right) = \int_{\mathbb R^d}\phi\left(x\right){\bf u}^{\nu}\left(t\right)\left(dx\right), \end{equation} where ${\bf u}^{\nu}$ was defined in point 1. of Remark \ref{RPreliminary}. \end{lem} \begin{proof} \ \noindent Let $\nu \in {\cal M}_f\left(\mathbb R^d\right)$.
We denote by ${\bf v}^{\nu}$ the mapping from $[0,T]$ to $\mathcal{M}_f\left(\mathbb R^d\right)$ such that $\forall t \in [0,T]$, $\forall \phi \in E$ \begin{equation} \label{ERiesz} \int_{\mathbb R^d}\phi(x){\bf v}^{\nu}\left(t\right)\left(dx\right)= \int_{\mathbb R^d}P_{t}\phi(x)\nu (dx). \end{equation} The previous expression defines the measure ${\bf v}^{\nu}\left(t\right)$ since $\phi \mapsto \int_{\mathbb R^d}P_{t}\phi(x)\nu (dx)$ is continuous with respect to the sup-norm, using $\Vert P_t\phi \Vert_\infty \le \Vert \phi \Vert_\infty$ and the Lebesgue dominated convergence theorem. \noindent By approximating elements of $E$ with elements of $\mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$, it will be enough to prove \eqref{EL310} for $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$. \noindent Our goal is to show that ${\bf v}^{\nu}$ is an ${\cal M}_f\left(\mathbb R^d\right)$-valued solution of \eqref{Fokker} with initial value $\nu$, in order to conclude that ${\bf v}^{\nu} = {\bf u}^{\nu}$ via point 1. of Remark \ref{RPreliminary}, and hence to prove \eqref{EL310} for $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$. \smallbreak \noindent Let $t \in [0,T]$ and $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$. On the one hand, point (i) of Proposition 2.1.1 in \cite{lunardi_1995} gives \begin{equation} \label{LP} LP_t\phi = P_tL\phi, \end{equation} since $\mathcal{C}^{\infty}_c\left(\mathbb R^d\right) \subset \mathcal{D}\left(L\right) = \mathcal{C}^{2\alpha + 2}\left(\mathbb R^d\right)$. On the other hand, for all $s \in [0,t]$, we have \begin{align*} \left|LP_s\phi\right|_{E} &{}= \left|P_sL\phi\right|_{E} \\ &{}\leq \left|\left|P_s\right|\right|_{E}\left|L\phi\right|_{E} \\ &{}\leq M_0e^{\omega s}\left|L\phi\right|_{E}, \end{align*} with $M_0,\omega$ the real parameters appearing in Definition 2.0.1 in \cite{lunardi_1995}, using point (iii) of Proposition 2.1.1 in the same reference. Then the mapping $s \mapsto LP_s\phi$ clearly belongs to $L^1([0,t];E)$ and point (ii) of Proposition 2.1.4 in \cite{lunardi_1995}, combined with identity \eqref{LP}, gives \begin{equation*} P_t\phi = \phi + \int^{t}_{0}P_s L\phi ds. \end{equation*} Back to our main goal, using in particular Fubini's theorem, we have \begin{align*} \int_{\mathbb R^d}P_{t}\phi\left(x\right)\nu\left(dx\right) &{} = \int_{\mathbb R^d}\phi\left(x\right)\nu\left(dx\right) + \int_{\mathbb R^d}\int^{t}_{0}P_sL\phi\left(x\right)ds\nu\left(dx\right) \\ &{}= \int_{\mathbb R^d}\phi\left(x\right)\nu\left(dx\right) + \int^{t}_{0}\int_{\mathbb R^d}P_sL\phi\left(x\right)\nu\left(dx\right)ds \\ &{}= \int_{\mathbb R^d}\phi\left(x\right)\nu\left(dx\right) + \int^{t}_{0}\int_{\mathbb R^d}L\phi\left(x\right){\bf v}^{\nu}\left(s\right)\left(dx\right)ds. \end{align*} This shows that ${\bf v}^{\nu}$ is a solution of equation \eqref{Fokker}. \end{proof} \begin{prooff} \ (of Theorem \ref{P315}). \noindent Let $\nu,\nu' \in {\cal M}_f\left(\mathbb R^d\right)$ be such that \begin{equation*} \mu_T : = {\bf u}^{\nu}\left(T\right) = {\bf u}^{\nu'}\left(T\right). \end{equation*} Thanks to Proposition \ref{P2}, it suffices to show that $\nu = \nu'$, i.e. \begin{equation*} \forall \phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right),\ \int_{\mathbb R^d}\phi\left(x\right)\nu\left(dx\right) = \int_{\mathbb R^d}\phi\left(x\right)\nu'\left(dx\right).
\end{equation*} Since $T > 0$ is arbitrary, by Remark \ref{RPreliminary} we can consider ${\bf u}^{\nu,2T}$ and ${\bf u}^{\nu',2T}$, defined as the corresponding ${\bf u}^{\nu}$ and ${\bf u}^{\nu'}$ functions obtained by replacing the horizon $T$ with $2T$. They are defined on $[0,2T]$ and, by Remark \ref{RPreliminary} 1. (uniqueness on $[0,T]$), they constitute extensions of the initial ${\bf u}^{\nu}$ and ${\bf u}^{\nu'}$. \noindent By Remark \ref{RPreliminary} 3., the uniqueness of an ${\cal M}_f\left(\mathbb R^d\right)$-valued solution of \eqref{Fokker} (for $t \in [T,2T]$, with $T$ as initial time) holds for \begin{equation}\label{FPShift} \begin{cases} \partial_t {\bf u}(\tau) = L^*{\bf u}(\tau), \ T \leq \tau \leq 2T \\ {\bf u}(T) = \mu_T. \\ \end{cases} \end{equation} Now, the functions ${\bf u}^{\nu, 2T}$ and ${\bf u}^{\nu', 2T}$ solve \eqref{FPShift} on $[T,2T]$. This gives in particular \begin{equation} \label{IdLawBis} \forall \tau \geq T, \ \forall \phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right), \ \int_{\mathbb R^d}\phi\left(x\right){\bf u}^{\nu,2T}\left(\tau\right)\left(dx\right) = \int_{\mathbb R^d}\phi\left(x\right){\bf u}^{\nu',2T}\left(\tau\right)\left(dx\right). \end{equation} Fix $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$. Combining now the results of Lemmata \ref{key_1} and \ref{key_2}, we obtain that the function \begin{equation} \label{ETau} \tau \mapsto \int_{\mathbb R^d}\phi\left(x\right){\bf u}^{\nu,2T}\left(\tau\right)\left(dx\right) - \int_{\mathbb R^d}\phi\left(x\right){\bf u}^{\nu',2T}\left(\tau\right)\left(dx\right), \end{equation} \noindent defined on $[0, 2T]$, is zero on $[T,2T]$ and analytic on $]0,2T]$. Hence it is zero on $]0,2T]$. By \eqref{EL310} we obtain \begin{equation} \label{ETaubis} \int_{\mathbb R^d}P_{\tau}\phi\left(x\right) \left(\nu - \nu'\right) \left(dx\right) = 0, \ \forall \tau \in ]0, 2T]. \end{equation} Separating $\nu$ and $\nu'$ into positive and negative components, we can finally apply the dominated convergence theorem in \eqref{ETaubis} to send $\tau$ to $0^+$. This is possible thanks to point (i) of Proposition 2.1.4 and point (iii) of Proposition 2.1.1 in \cite{lunardi_1995}, together with the representation \eqref{EL310}: indeed $P_\tau\phi\left(x\right) \rightarrow \phi\left(x\right)$ for every $\phi \in E, x \in \mathbb R^d$, when $\tau \rightarrow 0^+$. This shows $\nu = \nu'$ and ends the proof. \end{prooff} For the sake of applications it is useful to formulate a piecewise time-homogeneous version of Theorem \ref{P315}. \begin{corro} \label{C313} Let $n \in \mathbb N^*$ and let $ 0 = t_0 < \ldots < t_n = T$ be a partition. For $k \in [\![2,n]\!]$ (resp. $k=1$) we denote $I_k = ]t_{k-1},t_k]$ (resp. $[t_{0},t_1]$). Suppose that the following holds. \begin{enumerate} \item For all $k \in [\![1,n]\!]$, the restriction of $\sigma$ (resp. $b$) to $I_k \times \mathbb R^d$ is a time-homogeneous function $\sigma^k: \mathbb R^d \rightarrow M_{d}(\mathbb R)$ (resp. $b^k: \mathbb R^d \rightarrow \mathbb R^d$). \item Assumption \ref{Zvon3} holds. \item Assumption \ref{Lun1} is verified for each $\sigma^k, b^k$ and $\Sigma^k$, where we have set $\Sigma^k := \sigma^k {\sigma^k}^\top$. \end{enumerate} Then, for all $\mu \in {\cal M}_f\left(\mathbb R^d\right)$, equation \eqref{EDPTerm} with terminal value $\mu$ admits at most one ${\cal M}_f\left(\mathbb R^d\right)$-valued solution in the sense of Definition \ref{Def}.
\end{corro} \begin{proof} \noindent For each given $k \in [\![1,n]\!]$, we introduce the PDE operator $L^k$ defined by \begin{equation} \label{OpLk} L^k := \frac{1}{2}\sum^{d}_{i,j=1}\Sigma^k_{ij}\partial_{ij} + \sum^{d}_{i=1}b^k_i\partial_{i}. \end{equation} Let now ${\bf u}^1, {\bf u}^2$ be two solutions of \eqref{EDPTerm} with the same terminal value $\mu$. \smallbreak \noindent The measure-valued functions ${\bf v}^i := {\bf u}^i\left(\cdot + t_{n-1}\right), i \in \{1,2\},$ defined on $[0,T-t_{n-1}]$, are solutions of \begin{equation} \label{BackwardFokker_k} \begin{cases} \partial_t{\bf v} = \left(L^n\right)^*{\bf v} \\ {\bf v}\left(T-t_{n-1}\right) = \mu, \end{cases} \end{equation} \noindent in the sense of Definition \ref{Def}, with $T$ replaced by $T-t_{n-1}$ and $L$ by $L^n$. Then, Theorem \ref{P315} gives ${\bf v}^1 = {\bf v}^2$ and consequently ${\bf u}^1 = {\bf u}^2$ on $[t_{n-1},T]$. To conclude, we proceed by backward induction. \end{proof} \subsection{Uniqueness: the case of Ornstein-Uhlenbeck semigroup} \label{S34} \smallbreak \noindent In this section, we consider the case $b := \left(s,x\right) \mapsto C(s)x$ with $C$ continuous from $[0,T]$ to $M_d\left(\mathbb R\right)$, and $\sigma$ continuous from $[0,T]$ to $M_{d,m}\left(\mathbb R\right)$. We set $\Sigma := \sigma \sigma^{\top}$. We also denote by ${\cal D}\left(t\right), \ t \in[0,T],$ the unique solution of $$ {\cal D}(t) = I - \int^t_0 C(s)^{\top}{\cal D}(s)ds, \ t \in [0,T].$$ \smallbreak \noindent We recall that for every $t \in [0,T]$, ${\cal D}(t)$ is invertible and $$ {\cal D}^{-1}(t) = I + \int^t_0 C(s)^{\top}{\cal D}^{-1}(s)ds, \ t \in [0,T].$$ For these and similar properties, see Chapter 8 of \cite{bronson}. \smallbreak \noindent In this setting, the classical Fokker-Planck PDE for finite measures reads \begin{equation} \label{FP_OU} \begin{cases} \displaystyle \partial_t {\bf u}\left(t\right) = \frac{1}{2}\sum^{d}_{i,j=1}\Sigma(t)_{ij}\partial_{ij}{\bf u}\left(t\right) - \sum^{d}_{i=1}\partial_i\left(\left(C(t)x\right)_i{\bf u}\left(t\right)\right) \\ {\bf u}(0) = \nu \in {\cal M}_f\left(\mathbb R^d\right). \end{cases} \end{equation} \begin{prop} \label{FwdOU_Uniq} For all $\nu \in {\cal M}_f\left(\mathbb R^d\right)$, equation \eqref{FP_OU} with initial value $\nu$ admits at most one ${\cal M}_f\left(\mathbb R^d\right)$-valued solution. \end{prop} \begin{proof} \ \begin{enumerate} \item Let $\nu \in {\cal M}_f\left(\mathbb R^d\right)$ and let ${\bf u}$ be a solution of \eqref{Fokker} with initial value $\nu$. Identity \eqref{weakbis} can be extended to test functions in ${\cal S}\left(\mathbb R^d\right)$ since, for all $t \in [0,T]$, ${\bf u}\left(t\right)$ belongs to ${\cal M}_f\left(\mathbb R^d\right)$. Then, $t \mapsto {\cal F}{\bf u}\left(t\right)$ verifies \begin{equation} \label{WeakOUPDE} {\cal F}{\bf u}\left(t\right)(\xi) = {\cal F} \nu(\xi) + \int^{t}_{0}\left<C\left(s\right)^{\top}\xi,\nabla {\cal F} {\bf u}\left(s\right)\right > ds - \frac{1}{2}\int^{t}_{0}\left<\Sigma\left(s\right)\xi,\xi\right>{\cal F}{\bf u}\left(s\right)ds, \ (t,\xi) \in [0,T]\times\mathbb R^d.
\end{equation} In fact, the integrand inside the first integral has to be understood as a Schwartz distribution: in particular the symbol $\nabla$ is understood in the sense of distributions and, for each given $s \in [0,T]$, $\left<C\left(s\right)^{\top}\xi,\nabla {\cal F} {\bf u}\left(s\right)\right>$ denotes the tempered distribution \begin{equation*} \varphi \mapsto \sum^{d}_{i=1}\partial_i{\cal F} {\bf u}\left(s\right)\left(\xi \mapsto \left(C\left(s\right)^{\top}\xi\right)_i\varphi\left(\xi\right)\right). \end{equation*} Indeed, even though for any $t$, ${\cal F}{\bf u}\left(t\right)$ is a function, the equation \eqref{WeakOUPDE} has to be understood in ${\cal S}'\left(\mathbb R^d\right)$. Hence, for all $\phi \in {\cal S}\left(\mathbb R^d\right)$, this gives \begin{align} \int_{\mathbb R^d}\phi\left(\xi\right){\cal F}{\bf u}\left(t\right)\left(\xi\right)d\xi &{}- \int_{\mathbb R^d}\phi\left(\xi\right){\cal F}\nu\left(\xi\right) d\xi\\ &{}= - i \sum^{d}_{k,l=1}\int^{t}_{0} C\left(s\right)_{kl} \int_{\mathbb R^d}\xi_l{\cal F}\phi_k\left(\xi\right){\bf u}\left(s\right)\left(d\xi\right) ds - \frac{1}{2}\int^{t}_{0}\int_{\mathbb R^d}\left<\Sigma\left(s\right)\xi,\xi\right>{\cal F}{\bf u}\left(s\right)\left(\xi\right) \phi(\xi) d\xi ds \nonumber \\ &{}= - \sum^{d}_{k,l=1}\int^{t}_{0} C\left(s\right)_{kl} \int_{\mathbb R^d} {\cal F}\left(\partial_l\phi_k\right)\left(\xi\right){\bf u}\left(s\right) \left(d\xi\right)ds - \frac{1}{2}\int^{t}_{0}\int_{\mathbb R^d}\left<\Sigma\left(s\right)\xi,\xi\right>{\cal F}{\bf u}\left(s\right)\left(\xi\right) \phi(\xi) d\xi ds \nonumber \\ &{}= - \int^{t}_{0}\int_{\mathbb R^d}\left(\mathop{div_\xi}\left(C\left(s\right)^{\top}\xi\phi\left(\xi\right)\right) + \frac{1}{2}\left<\Sigma\left(s\right)\xi,\xi\right> \phi\left(\xi\right)\right){\cal F}{\bf u}(s)(\xi) d\xi ds, \nonumber \end{align} where $\phi_k : \xi \mapsto \xi_k\phi\left(\xi\right)$ for a given $k \in [\![1,d]\!]$. \smallbreak \noindent \item Let now ${\bf v}: [0,T] \rightarrow {\cal M}_f\left(\mathbb R^d\right)$ be defined by \begin{equation} \label{MeasChange} \int_{\mathbb R^d}\phi\left(x\right){\bf v}\left(t\right)\left(dx\right) = \int_{\mathbb R^d}\phi\left({\cal D}\left(t\right)^{\top}x\right){\bf u}\left(t\right)\left(dx\right), \end{equation} $t \in [0,T], \phi \in {\cal C}_b(\mathbb R^d).$ For every $\xi \in \mathbb R^d$, we set $\phi(x) = \exp(-i \langle \xi, x\rangle)$ in \eqref{MeasChange} to obtain \begin{equation} \label{EAcont} {\cal F}{\bf v}\left(t\right)\left(\xi\right) = {\cal F}{\bf u}\left(t\right)\left({\cal D}\left(t\right)\xi\right), \end{equation} for all $\xi \in \mathbb R^d$ and all $t \in [0,T]$. \noindent \item We now want to show that, for each $\xi$, $t \mapsto {\cal F}{\bf v}\left(t\right)\left(\xi\right)$ fulfills an ODE. To achieve this, suppose for a moment that $ \left(t,\xi\right) \mapsto {\cal F}{\bf u}\left(t\right)\left(\xi\right)$ is differentiable with respect to the variable $\xi$. Then, on the one hand, we have for all $\left(t,\xi\right)\in[0,T]\times\mathbb R^d$, \begin{equation}\label{strongPDE} {\cal F}{\bf u}\left(t\right)\left(\xi\right) = {\cal F}\nu\left(\xi\right) + \int^{t}_{0}\left<C\left(s\right)^{\top}\xi,\nabla_\xi{\cal F} {\bf u}\left(s\right)\left(\xi\right)\right>ds - \frac{1}{2}\int^{t}_{0}\left<\Sigma\left(s\right)\xi,\xi\right>{\cal F}{\bf u}\left(s\right)\left(\xi\right)ds, \end{equation} thanks to identity \eqref{WeakOUPDE}.
This means in particular that, for each given $\xi \in \mathbb R^d$, $t \mapsto {\cal F}{\bf u}\left(t\right)\left(\xi\right)$ is differentiable almost everywhere on $[0,T]$. \smallbreak \noindent On the other hand, for almost every $t \in [0,T]$ and all $\xi \in \mathbb R^d$, we have \begin{align} \label{ETechnical} \partial_t{\cal F} {\bf v}\left(t\right)\left(\xi\right)&{}=\partial_t{\cal F} {\bf u}\left(t\right)\left({\cal D}\left(t\right)\xi\right) + \sum^{d}_{i=1}\left(\frac{d}{dt}\left({\cal D}\left(t\right)\xi\right)\right)_i \partial_i{\cal F} {\bf u}\left(t\right)\left({\cal D}\left(t\right)\xi\right), \nonumber \\ &{}= \partial_t{\cal F} {\bf u}\left(t\right)\left({\cal D}\left(t\right)\xi\right) - \sum^{d}_{i=1}\left(C\left(t\right)^{\top}{\cal D}\left(t\right)\xi\right)_i\partial_i{\cal F} {\bf u}\left(t\right)\left({\cal D}\left(t\right)\xi\right), \nonumber\\ &{}= - \frac{1}{2}\left<\Sigma\left(t\right){\cal D}\left(t\right)\xi,{\cal D}\left(t\right)\xi\right>{\cal F}{\bf v}\left(t\right)\left(\xi\right), \end{align} where from line 1 to line 2 we have used the fact that $\frac{d}{dt}\left({\cal D}\left(t\right)\xi\right) = - C\left(t\right)^{\top}{\cal D}\left(t\right)\xi$ for all $\left(t,\xi\right) \in [0,T]\times\mathbb R^d$, and from line 2 to line 3 the identity \eqref{strongPDE}. Since $t \mapsto {\cal F} {\bf v}\left(t\right)\left(\xi\right)$ is absolutely continuous by \eqref{EAcont}, \eqref{ETechnical} implies \begin{equation}\label{EDOFourierFwd} {\cal F} {\bf v}\left(t\right)\left(\xi\right) = {\cal F} \nu \left(\xi\right) - \frac{1}{2}\int^{t}_{0}\left<\Sigma\left(s\right){\cal D}\left(s\right)\xi,{\cal D}\left(s\right)\xi\right>{\cal F} {\bf v}\left(s\right)\left(\xi\right)ds, \xi \in \mathbb R^d, \end{equation} for all $t \in [0,T]$. \noindent \item Now, if $\left(t,\xi\right) \mapsto {\cal F} {\bf u}\left(t\right)\left(\xi\right)$ is not necessarily differentiable in the variable $\xi$, we can still prove that \eqref{EDOFourierFwd} holds by making use of calculus in the sense of distributions. \noindent \item Suppose that \eqref{EDOFourierFwd} holds. Solving this linear equation and using \eqref{EAcont}, this gives \begin{equation}\label{FourierExplicitOU} {\cal F}{\bf u}\left(t\right)\left(\xi\right) = e^{-\int^{t}_{0}\frac{\left|\sigma\left(s\right)^{\top}{\cal D}\left(s\right){\cal D}^{-1}\left(t\right)\xi\right|^2}{2}ds}{\cal F} \nu\left({\cal D}^{-1}\left(t\right)\xi\right). \end{equation} \noindent \item The proof is thus concluded once we have established \eqref{EDOFourierFwd}. Since both sides of it are continuous in $(t,\xi)$, it is enough to show the equality as an identity between ${\cal S}'(\mathbb R^d)$-valued functions. This can be done by differentiating \eqref{WeakOUPDE}, considered as an equality in ${\cal S}'(\mathbb R^d)$. For this we will apply Lemma \ref{weakDer} below, setting $\Phi := {\cal F} {\bf u}\left(t\right)$ for every fixed $t \in [0,T]$, and differentiating in time. We set $\Phi_t(\xi) = {\cal F} {\bf v}(t)(\xi), \ \xi \in \mathbb R^d$, and $\Phi_t(\varphi) = \int_{\mathbb R^d} \varphi(\xi) \Phi_t(\xi) {\mathrm d}\xi, \varphi \in {\cal S}(\mathbb R^d)$. We remark that $\Phi_t$ is compatible with the one defined in \eqref{EPhi}. \eqref{EDOFourierFwd} will then follow directly from Lemma \ref{weakDer}. \end{enumerate} \end{proof} \begin{lem}\label{weakDer} \noindent Let $\Phi \in {\cal S}^{'}\left(\mathbb R^d\right), t \in [0,T]$.
We denote by $\Phi_t$ the element of ${\cal S}^{'}\left(\mathbb R^d\right)$ such that for all $\varphi \in {\cal S}\left(\mathbb R^d\right)$ \begin{equation}\label{EPhi} \Phi_t\left(\varphi\right) := \det\left({\cal D}^{-1}\left(t\right)\right) \Phi\left(\varphi\left({\cal D}^{-1}\left(t\right)\cdot\right)\right). \end{equation} Then, for all $t \in [0,T]$, \begin{equation}\label{EDerivS} \Phi_t(\varphi) = \Phi(\varphi) - \sum^{d}_{i=1}\int^{t}_{0}\left(\partial_i\Phi\right)_s\left(x \mapsto \left(C\left(s\right)^{\top}{\cal D}\left(s\right)x\right)_i\varphi(x)\right)ds. \end{equation} \end{lem} \begin{proof} \noindent We begin with the case $\Phi \in {\cal S}\left(\mathbb R^d\right)$ (or only ${\cal C}^\infty\left(\mathbb R^d\right)$). In this case, \begin{equation*} \Phi_t\left(x\right) = \Phi\left({\cal D}\left(t\right)x\right), \ x \in \mathbb R^d, t \in [0,T]. \end{equation*} Hence, for every $t \in [0,T]$, \begin{align*} \frac{d}{dt}\Phi_t\left(x\right) &{}= \left<\frac{d}{dt}\left({\cal D}\left(t\right)x\right),\nabla\Phi\left({\cal D}\left(t\right)x\right)\right> \\ &{}= - \left<C\left(t\right)^{\top}{\cal D}\left(t\right)x,\nabla\Phi\left({\cal D}\left(t\right)x\right)\right>\\ &{}= - \sum^{d}_{i=1}\left(C\left(t\right)^{\top}{\cal D}\left(t\right)x\right)_i\left(\partial_i\Phi\right)_t\left(x\right), \end{align*} so that \eqref{EDerivS} follows in this case by integration in time. Coming back to the general case, let $\Phi \in {\cal S}'\left(\mathbb R^d\right) $ and let $\left(\phi_\epsilon\right)_{\epsilon > 0}$ be a sequence of mollifiers in ${\cal S}\left(\mathbb R^d\right)$ converging to the Dirac measure. Then for all $\epsilon > 0$, the function $\Phi*\phi_\epsilon : x \mapsto \Phi\left(\phi_\epsilon\left(x-\cdot\right)\right)$ belongs to ${\cal S}'\left(\mathbb R^d\right)\cap{\cal C}^\infty\left(\mathbb R^d\right)$. By the first part of the proof, \eqref{EDerivS} holds with $\Phi$ replaced by $\Phi * \phi_\epsilon$. Now, $\Phi * \phi_\epsilon$ converges to $\Phi$ in ${\cal S}'\left(\mathbb R^d\right)$ when $\epsilon$ tends to $0^+$, and \eqref{EDerivS} follows by sending $\epsilon$ to $0^+$.
Indeed, for all $\varphi \in {\cal S}\left(\mathbb R^d\right)$, $t \in [0,T]$, setting $\check{\phi_\epsilon} : y \mapsto \phi_\epsilon(-y)$, we have \begin{align*} \Phi_t\left(\varphi\right) &{}= \lim\limits_{\epsilon \to 0^+}\int_{\mathbb R^d}\varphi(x)\left(\Phi*\phi_\epsilon\right)_t\left(x\right)dx\\ &{}= \lim\limits_{\epsilon \to 0^+}\int_{\mathbb R^d}\varphi(x)\Phi*\phi_\epsilon\left(x\right)dx - \lim\limits_{\epsilon \to 0^+}\sum^{d}_{i=1}\int^{t}_{0}\det\left({\cal D}^{-1}\left(s\right)\right)\int_{\mathbb R^d}\left(C\left(s\right)^{\top}x\right)_i\varphi\left({\cal D}^{-1}\left(s\right)x\right)\partial_i\Phi*\phi_\epsilon(x)dxds\\ &{}= \Phi(\varphi) - \lim\limits_{\epsilon \to 0^+}\sum^{d}_{i=1}\int^{t}_{0}\det\left({\cal D}^{-1}\left(s\right)\right)\partial_i\Phi\left(\left(\left(C\left(s\right)^{\top}\cdot\right)_i\varphi\left({\cal D}^{-1}\left(s\right)\cdot\right)\right)*\check{\phi_\epsilon}\right)ds\\ &{}= \Phi(\varphi) - \sum^{d}_{i=1}\int^{t}_{0}\det\left({\cal D}^{-1}\left(s\right)\right)\partial_i\Phi\left(\left(C\left(s\right)^{\top}\cdot\right)_i\varphi\left({\cal D}^{-1}\left(s\right)\cdot\right)\right)ds\\ &{}= \Phi(\varphi) - \sum^{d}_{i=1}\int^{t}_{0}\left(\partial_i\Phi\right)_s\left(x \mapsto \left(C\left(s\right)^{\top}{\cal D}\left(s\right)x\right)_i\varphi\left(x\right)\right)ds.\\ \end{align*} To conclude, it remains to justify the commutation between the limit in $\epsilon$ and the integral in time from line 3 to line 4, using the Lebesgue dominated convergence theorem. On the one hand, for a given $i \in [\![1,d]\!]$, the fact that $\partial_i \Phi$ belongs to ${\cal S}'\left(\mathbb R^d\right)$ implies that there exist $c > 0$, $N \in \mathbb N$ such that for all $\varphi \in {\cal S}\left(\mathbb R^d\right)$ \begin{equation*} \left|\partial_i \Phi\left(\varphi\right)\right| \leq c \sup_{\left|\alpha\right| \leq N}\sup_{x\in\mathbb R^d}\left(1 + \left|x\right|^2\right)^N\left|\partial^{\alpha}_x\varphi(x)\right|, \end{equation*} see Chapter 1, Exercise 8 in \cite{rudin}. On the other hand, the quantities \begin{equation*} \sup_{x\in\mathbb R^d}\left(1 + \left|x\right|^2\right)^N\left|\partial^{\alpha}_x\left(x_j\varphi({\cal D}^{-1}\left(s\right)\cdot)\right)*\check{\phi_\epsilon}\right| \end{equation*} are bounded uniformly in the couple $\left(s,\epsilon\right)$, for all $j \in [\![1,d]\!]$, $\alpha \in \mathbb{N}^d$, taking also into account that the function $s \mapsto {\cal D}^{-1}(s)$ is continuous and therefore bounded. Since $s \mapsto C(s)$ is also continuous on $[0,T]$, we are justified to use the Lebesgue dominated convergence theorem. \end{proof} \begin{thm}\label{BwdOU_Uniq} For all $\mu \in {\cal M}_f\left(\mathbb R^d\right)$, equation \eqref{EDPTerm} with terminal value $\mu$ admits at most one ${\cal M}_f\left(\mathbb R^d\right)$-valued solution in the sense of Definition \ref{Def}. \end{thm} \begin{proof} \noindent Let $\mu \in {\cal M}_f\left(\mathbb R^d\right)$ and let ${\bf u}$ be a solution of \eqref{BackwardFokker} with terminal value $\mu$. Then, ${\bf u}$ solves equation \eqref{Fokker} with initial value ${\bf u}\left(0\right)$.
As a consequence, by identity \eqref{FourierExplicitOU} appearing at the end of the proof of Proposition \ref{FwdOU_Uniq}, for all $\xi \in \mathbb R^d$, \begin{equation*} {\cal F} \mu \left(\xi\right) = e^{-\int^{T}_{0}\frac{\left|\sigma\left(s\right)^{\top}{\cal D}\left(s\right){\cal D}^{-1}\left(T\right)\xi\right|^2}{2} ds}{\cal F} {\bf u}\left(0\right)\left({\cal D}^{-1}\left(T\right)\xi\right), \end{equation*} so that \begin{equation*} {\cal F} {\bf u}\left(0\right)(\xi) = e^{\int^{T}_{0}\frac{\left|\sigma\left(s\right)^{\top}{\cal D}\left(s\right)\xi\right|^2}{2}ds}{\cal F} \mu \left({\cal D}\left(T\right)\xi\right). \end{equation*} Hence, ${\bf u}\left(0\right)$ is entirely determined by $\mu$, and Proposition \ref{FwdOU_Uniq} gives the result. \end{proof} \section{McKean SDEs related to time-reversal of diffusions} \label{S4} \setcounter{equation}{0} \subsection{Preliminary considerations} \label{Prelim} \noindent In this last section we concentrate on the analysis of the well-posedness of the McKean SDE \eqref{MKIntro}. \smallbreak \noindent Regarding $b: [0,T]\times\mathbb{R}^d \mapsto \mathbb{R}^d$, $\sigma: [0,T]\times\mathbb{R}^d \mapsto M_{d,m}\left(\mathbb{R}\right)$, we set $\widehat{b} := b\left(T-.,\cdot\right)$, $\widehat{\sigma} := \sigma\left(T-.,\cdot\right)$, $\widehat {\Sigma}:= \widehat{\sigma} \widehat{\sigma}^\top.$ \smallbreak \noindent Given a probability-valued function ${\bf p}: [0,T] \rightarrow {\cal P}(\mathbb R^d)$, we denote by $p_t$ the density of ${\bf p}\left(t\right)$, for $t \in [0,T]$, whenever it exists. For the McKean type SDE \eqref{MKIntro}, we consider the following notion of solution. \begin{defi} \label{MKSol} On a given filtered probability space $\left(\Omega, {\cal F}, \left(\mathcal{F}_t\right)_{t\in[0,T]}, \mathbb P\right)$ equipped with an $m$-dimensional $\left(\mathcal{F}_t\right)_{t\in[0,T]}$-Brownian motion $\beta$, a {\bf solution} of equation \eqref{MKIntro} is a couple $\left(Y,{\bf p}\right)$ fulfilling \eqref{MKIntro} with Brownian motion $\beta$, such that $Y$ is $\left({\cal F}_t\right)_{t\in[0,T]}$-adapted and such that for all $i \in [\![1,d]\!]$, all compact $K \subset \mathbb R^d$ and all $\tau < T$ \begin{equation} \label{IdInt} \int^{\tau}_{0}\int_{K}\left| \mathop{div_y}\left(\widehat{\Sigma}_{i.}\left(r,y\right)p_{r}\left(y\right)\right) \right| dydr < \infty. \end{equation} \end{defi} \begin{rem} \label{RDefMK} For a given solution $\left(Y,{\bf p}\right)$ of equation \eqref{MKIntro}, condition \eqref{IdInt} appearing in Definition \ref{MKSol} implies in particular that, for all $i \in [\![1,d]\!]$ and all $\tau < T$, \begin{equation*} \int^{\tau}_{0} \left|\frac{\mathop{div_y}\left(\widehat{\Sigma}_{i.}\left(r,Y_r\right)p_{r}\left(Y_r\right)\right)}{p_{r}\left(Y_r\right)}\right|dr < \infty, \ \mathbb{P}\rm{-a.s.} \end{equation*} \end{rem} \noindent The terminology stating that \eqref{MKIntro} constitutes a probabilistic representation of \eqref{EDPTerm} is justified by the result below. \begin{prop} \label{PProbRep} Suppose $b,\sigma$ locally bounded. If $\left(Y,{\bf p}\right)$ is a solution of \eqref{MKIntro} in the sense of Definition \ref{MKSol}, then ${\bf p}\left(T-\cdot\right)$ is a solution of \eqref{EDPTerm0} with $\mu = {\bf p}(0)$, in the sense of Definition \ref{Def}. \end{prop} \begin{proof} \noindent Let $\left(Y,{\bf p}\right)$ be a solution of \eqref{MKIntro} in the sense of Definition \ref{MKSol}, with Brownian motion $\beta$. Let $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb{R}^d\right)$ and $t\in]0,T]$.
It\^o's formula gives \begin{equation} \label{Ito} \phi\left(Y_{T-t}\right) = \phi\left(Y_0\right) + \int^{T-t}_{0}\left(\left<\tilde{b}(s,Y_s; p_s),\nabla\phi\left(Y_s\right)\right> + \frac{1}{2}Tr\left(\widehat{\Sigma}\left(s,Y_s\right)\nabla^2\phi\left(Y_s\right)\right)\right)ds + \int^{T-t}_{0}\nabla\phi\left(Y_s\right)^\top\widehat{\sigma}\left(s,Y_s\right)d\beta_s, \end{equation} with \begin{equation*} \tilde{b}\left(s,y; p_s\right) := \left\{\frac{\mathop{div_y}\left(\widehat{\Sigma}_{j.}\left(s,y\right)p_{s}\left(y\right)\right)}{p_{s}\left(y\right)}\right\}_{j\in[\![1,d]\!]} - \widehat{b}\left(s,y\right), \quad \left(s,y\right) \in ]0,T[\times\mathbb R^d. \end{equation*} We now want to take the expectation in identity \eqref{Ito}. On the one hand, condition \eqref{IdInt} (see Remark \ref{RDefMK}) implies that for all $i \in [\![1,d]\!]$ \begin{equation*} \int_0^{T-t} ds\, \mathbb{E}\left \vert \frac{\mathop{div_y}\left(\widehat{\Sigma}_{i.}\left(s,Y_s\right)p_{s}\left(Y_s\right)\right)}{p_{s}\left(Y_s\right)}\partial_{i}\phi\left(Y_s\right)\right\vert < \infty. \end{equation*} On the other hand, \begin{equation} \label{E42bis} \int^{T}_{0}\mathbb{E}\left\{Tr\left(\widehat{\Sigma}\left(s,Y_s\right)\nabla^2\phi\left(Y_s\right)\right)\right\}ds = \sum^{d}_{i,j=1}\int^{T}_{0}\int_{\mathbb R^d}\widehat{\Sigma}_{ij}\left(s,y\right)\partial_{ij}\phi\left(y\right)p_s\left(y\right)dyds. \end{equation} The previous expression is finite since $\sigma$ is bounded on compact sets and the partial derivatives of $\phi$ have compact support. With similar arguments we prove that $ \int_0^T ds\, \mathbb{E}\left\vert\left<\widehat{b}\left(s,Y_s\right),\nabla\phi\left(Y_s\right)\right>\right \vert < \infty.$ Moreover, fixing $s\in ]0,T[$ and integrating by parts, we have \begin{align} \label{E42quater} \mathbb{E}\left\{\left<\tilde{b}\left(s,Y_s; p_s \right),\nabla\phi\left(Y_s\right)\right>\right\} &{}= \sum^{d}_{k,j=1}\int_{\mathbb R^d}\partial_k\left(\widehat{\Sigma}_{jk}\left(s,y\right)p_{s}\left(y\right)\right)\partial_j\phi\left(y\right)dy- \int_{\mathbb R^d}\left<\widehat{b}\left(s,y\right),\nabla\phi\left(y\right)\right>p_{s}\left(y\right)dy\\ &{}= -\int_{\mathbb R^d}Tr\left(\widehat{\Sigma}\left(s,y\right)\nabla ^2\phi\left(y\right)\right)p_{s}\left(y\right)dy- \int_{\mathbb R^d}\left<\widehat{b}\left(s,y\right),\nabla\phi\left(y\right)\right>p_{s}\left(y\right)dy. \nonumber \end{align} Now, the quadratic variation of the local martingale $M^{Y} := \int^{\cdot}_{0}\nabla\phi\left(Y_s\right)^{\top}\widehat{\sigma}\left(s,Y_s\right)d\beta_s$ is \begin{equation*} \left[M^{Y}\right] = \int^{\cdot}_{0}\nabla\phi\left(Y_s\right)^{\top}\widehat{\Sigma}\left(s,Y_s\right)\nabla\phi\left(Y_s\right)ds. \end{equation*} We remark in particular that $\mathbb{E}\left(\left[M^Y\right]_T\right) < \infty$, since $\sigma$ is bounded on compact sets and $\phi$ has compact support. This shows that $M^Y$ is a true (even square integrable) martingale and that all terms involved in \eqref{Ito} are integrable. \noindent At this point we evaluate the expectation in \eqref{Ito}, taking into account the considerations above together with \eqref{E42bis} and \eqref{E42quater}. We obtain \begin{equation*} \mathbb{E}\left(\phi\left(Y_{T-t}\right)\right) = \int_{\mathbb R^d}\phi\left(y\right)\mu\left(dy\right) - \int^{T-t}_{0}\int_{\mathbb R^d}L_{T-s}\phi\left(y\right)p_{s}\left(y\right)dyds.
\end{equation*} Applying the change of variable $t \mapsto T-t$, we finally obtain the identity \begin{equation*} \int_{\mathbb R^d}\phi\left(y\right)p_{T-t}\left(y\right)dy = \int_{\mathbb R^d}\phi\left(y\right)\mu \left(dy\right) - \int^{T}_{t}\int_{\mathbb R^d}L_{s}\phi\left(y\right)p_{T-s}\left(y\right)dyds, \end{equation*} which means that ${\bf p}\left(T-\cdot\right)$ solves \eqref{EDPTerm} in the sense of Definition \ref{Def} with terminal value $\mu$. \end{proof} \noindent We now specify the different notions of existence and uniqueness for \eqref{MKIntro} that we will use in the sequel. \begin{defi} \label{MKDSol} Let ${\cal A}$ be a class of measure-valued functions from $[0,T]$ to ${\cal P}\left(\mathbb R^d\right)$. \begin{enumerate} \item We say that \eqref{MKIntro} admits {\bf existence in law} in ${\cal A}$ if there exist a complete filtered probability space $\left(\Omega, {\cal F}, \left(\mathcal{F}_t\right)_{t\in[0,T]}, \mathbb P\right)$ equipped with an $m$-dimensional $\left(\mathcal{F}_t\right)_{t\in[0,T]}$-Brownian motion $\beta$ and a couple $\left(Y,{\bf p}\right)$ solution of \eqref{MKIntro} in the sense of Definition \ref{MKSol} such that ${\bf p}$ belongs to ${\cal A}$. \item Let $\left(Y^1,{\bf p^1}\right)$, $\left(Y^2,{\bf p^2}\right)$ be two solutions of \eqref{MKIntro} in the sense of Definition \ref{MKSol}, associated with complete filtered probability spaces $\left(\Omega^1, {\cal F}^1, \left(\mathcal{F}^1_t\right)_{t\in[0,T]}, \mathbb P^1\right)$, $\left(\Omega^2, {\cal F}^2, \left(\mathcal{F}^2_t\right)_{t\in[0,T]}, \mathbb P^2\right)$ respectively, equipped with Brownian motions $\beta^1,\beta^2$ respectively, and such that ${\bf p^1},{\bf p^2}$ belong to ${\cal A}$. We say that \eqref{MKIntro} admits {\bf uniqueness in law} in ${\cal A}$ if, whenever $Y^1_0$ and $Y^2_0$ have the same law, $Y^1$ and $Y^2$ have the same law. \item We say that \eqref{MKIntro} admits {\bf strong existence} in ${\cal A}$ if, for any complete filtered probability space $(\Omega, {\cal F}, \left(\mathcal{F}_t\right)_{t\in[0,T]}, \mathbb P)$ equipped with an $m$-dimensional $\left(\mathcal{F}_t\right)_{t\in[0,T]}$-Brownian motion $\beta$, there exists a solution $\left(Y,{\bf p}\right)$ of equation \eqref{MKIntro} in the sense of Definition \ref{MKSol} such that ${\bf p}$ belongs to ${\cal A}$. \item We say that \eqref{MKIntro} admits {\bf pathwise uniqueness} in ${\cal A}$ if, for any complete filtered probability space $(\Omega, {\cal F}, \left(\mathcal{F}_t\right)_{t\in[0,T]}, \mathbb P)$ equipped with an $m$-dimensional $\left(\mathcal{F}_t\right)_{t\in[0,T]}$-Brownian motion $\beta$ and any solutions $\left(Y^1,{\bf p^1}\right)$, $\left(Y^2,{\bf p^2}\right)$ of \eqref{MKIntro} in the sense of Definition \ref{MKSol} such that $Y^1_0 = Y^2_0, \ \mathbb{P}\rm{-a.s.},$ and ${\bf p^1}, {\bf p^2}$ belong to ${\cal A}$, we have $Y^1 = Y^2, \ \mathbb{P}\rm{-a.s.}$ \end{enumerate} \end{defi} We finally define the sets in which we will formulate existence and uniqueness results in the sequel. \smallbreak \begin{notation} \label{NAC1_2} \begin{enumerate} \item For a given ${\cal C} \subseteq {\cal P}\left(\mathbb R^d\right)$, ${\cal A}_{{\cal C}}$ denotes the set of measure-valued functions ${\bf p}$ from $[0,T]$ to ${\cal P}\left(\mathbb R^d\right)$ such that ${\bf p}\left(T\right)$ belongs to $\mathcal{C}$.
Furthermore, for a given measure-valued function ${\bf p}: [0,T] \rightarrow {\cal P}\left(\mathbb R^d\right)$, we will denote \begin{equation} \label{EBP} b(t,\cdot; {\bf p}_t ) := \left\{\frac{\mathop{div_y}\left(\widehat{\Sigma}_{i.}p_t\right)}{p_t}\right\}_{i\in[\![1,d]\!]}, \end{equation} for almost all $t \in [0,T]$, whenever $p_t$ exists and the right-hand side quantity is well-defined. The function $(t,x) \mapsto b(t,x; {\bf p}_t)$ is defined on $[0,T] \times \mathbb R^d$ with values in $\mathbb R^d$. \item Let ${\cal A}_1$ (resp. ${\cal A}_2$) denote the set of measure-valued functions ${\bf p}$ from $[0,T]$ to ${\cal P}\left(\mathbb R^d\right)$ such that, for all $t \in [0,T[$, ${\bf p}\left(t\right)$ admits a density $p_t$ with respect to the Lebesgue measure on $\mathbb R^d$ and such that $(t,x) \mapsto b(t,x; {\bf p}_t )$ is locally bounded (resp. is locally Lipschitz in space with linear growth) on $[0,T[\times\mathbb R^d$. \end{enumerate} \end{notation} We now state existence and uniqueness results for equation \eqref{MKIntro} in different settings. \subsection{PDE with terminal condition and existence for the McKean SDE} \label{MKEX} \noindent The existence result for equation \eqref{MKIntro} will be based on two pillars: the reachability condition constituted by the existence of a solution of the Fokker-Planck PDE with terminal condition, and the time-reversal techniques of \cite{haussmann_pardoux}. More precisely, we suppose that Assumption \ref{GH1} is in force for a fixed $\mathcal{C} \subseteq \mathcal{P}\left(\mathbb R^d\right)$ and consider the following extra assumptions, i.e. Assumptions \ref{MKEx_1}, \ref{MKEx_2} and \ref{MKEx_3}, still with respect to $({\cal C}, \mu)$. \begin{ass} \label{MKEx_1} \noindent The backward PDE \eqref{EDPTerm0} with terminal condition $\mu$ admits at least one ${\cal M}_+\left(\mathbb R^d\right)$-valued solution ${\bf u}$ in the sense of Definition \ref{Def} verifying the following. \begin{enumerate} \item ${\bf u}\left(0\right)$ belongs to $\mathcal{C}$. \item $\forall t \in ]0,T[$, ${\bf u}\left(t\right)$ admits a density with respect to the Lebesgue measure on $\mathbb R^d$ (denoted by $u\left(t,\cdot\right)$) and for all $t_0 > 0$ and all compact $K \subset \mathbb{R}^d$ \begin{equation} \label{HP} \int^{T}_{t_0}\int_{K} \left|u\left(t,x\right)\right|^2 + \sum^{d}_{i=1}\sum^{m}_{j=1}\left|\sigma_{ij}\left(t,x\right)\partial_{i}u\left(t,x\right)\right|^2dxdt < \infty. \end{equation} \end{enumerate} \end{ass} \begin{rem} \label{R45} \noindent Suppose Assumption \ref{Lip1d} holds and let ${\bf u}$ be the measure-valued function appearing in Assumption \ref{MKEx_1}. Then \eqref{HP} implies that the family of densities $u\left(T-t,\cdot\right), t \in ]0,T[$, verifies condition \eqref{IdInt} appearing in Definition \ref{MKSol}. To show this, it suffices to check that for all $t_0 > 0$, all compact $K \subset \mathbb R^d$ and all $\left(i,j,k\right) \in [\![1,d]\!]^2\times[\![1,m]\!]$ \begin{equation}\label{Integr} \int^{T}_{t_0}\int_{K}\left|\partial_j\left(\sigma_{ik}\left(s,y\right)\sigma_{jk}\left(s,y\right)u\left(s,y\right)\right)\right|dyds < \infty. \end{equation} The integrand appearing in \eqref{Integr} is well-defined.
Indeed, in the sense of distributions we have \begin{equation}\label{Deriv} \partial_j\left(\sigma_{ik}\sigma_{jk}u\right) = \sigma_{ik}\sigma_{jk}\partial_ju + u\left(\sigma_{ik}\partial_j\sigma_{jk} + \sigma_{jk}\partial_j\sigma_{ik}\right); \end{equation} moreover the components of $\sigma$ are Lipschitz, so they are (together with their space derivatives) locally bounded. Also $u$ and $\sigma_{jk}\partial_ju$ are square integrable on $[t_0,T]\times K$ by \eqref{HP}. This implies \eqref{Integr}. \end{rem} \begin{ass} \label{MKEx_2} Let ${\bf u}$ be the measure-valued mapping appearing in Assumption \ref{MKEx_1}. We suppose that $\mu$ admits a density and that $\restriction{{\bf u}\left(T-\cdot\right)}{[0,T[\times\mathbb R^d}$ belongs to ${\cal A}_1$. \end{ass} We introduce a further assumption. \begin{ass} \label{MKEx_3} Let ${\bf u}$ be the measure-valued mapping appearing in Assumption \ref{MKEx_1}. We suppose that $\mu$ admits a density and that $\restriction{{\bf u}\left(T-\cdot\right)}{[0,T[\times\mathbb R^d}$ belongs to ${\cal A}_2$. \end{ass} We remark that Assumption \ref{MKEx_3} implies Assumption \ref{MKEx_2}. \begin{prop} \label{MKEx_Prop} Suppose the validity of Assumption \ref{Lip1d}, of Assumption \ref{GH1} with respect to ${\cal C}$, and of Assumption \ref{MKEx_1} with respect to $({\cal C},\mu)$. Then \eqref{MKIntro} admits existence in law in ${\cal A}_{{\cal C}}$. \\ In particular if, moreover, Assumption \ref{MKEx_2} (resp. \ref{MKEx_3}) holds, then \eqref{MKIntro} admits existence in law in ${\cal A}_{{\cal C}}\cap{\cal A}_1$ (resp. strong existence in ${\cal A}_{{\cal C}}\cap{\cal A}_2$). \end{prop} \begin{proof} \noindent By Assumption \ref{MKEx_1}, there is an ${\cal M}_+\left(\mathbb R^d\right)$-valued solution ${\bf u}$ of equation \eqref{EDPTerm0} in the sense of Definition \ref{Def} such that ${\bf u}\left(T\right) = \mu$ and ${\bf u}\left(0\right)$ belongs to $\mathcal{C}$. We consider now a filtered probability space $\left(\Omega, {\cal F}, \left(\mathcal{F}_t\right)_{t\in[0,T]}, \mathbb P\right)$ equipped with an $\left(\mathcal{F}_t\right)_{t\in[0,T]}$-Brownian motion $W$. Let $X_0$ be a r.v. distributed according to ${\bf u}(0)$. Under Assumption \ref{Lip1d}, it is well-known that there is a solution $X$ to \begin{equation}\label{SDE} X_t = X_0 + \int^{t}_{0}b\left(s,X_s\right)ds + \int^{t}_{0}\sigma\left(s,X_s\right)dW_s, \ t \in [0,T]. \end{equation} \smallbreak \noindent Now, by Proposition \ref{PFundam}, $t \mapsto \mathcal{L}\left(X_t\right)$ is a ${\cal P}\left(\mathbb R^d\right)$-valued solution of equation \eqref{Fokker} in the sense of \eqref{weakbis} with initial value ${\bf u}\left(0\right)\in\mathcal{C}$. Then Assumption \ref{GH1} gives \begin{equation} \label{MKIdLaw} \mathcal{L}\left(X_t\right) = {\bf u}\left(t\right), \ t \in [0,T], \end{equation} since ${\bf u}$ also solves \eqref{Fokker} with initial value ${\bf u}\left(0\right)\in\mathcal{C}$. This implies in particular that ${\bf u}$ is probability-valued and that for all $t\in ]0,T[$, $X_t$ has density $u\left(t,\cdot\right)$, fulfilling condition \eqref{HP} in Assumption \ref{MKEx_1}.
\smallbreak \noindent Combining this observation with Assumption \ref{Lip1d}, Theorem 2.1 in \cite{haussmann_pardoux} states that there exists a filtered probability space $\left(\Omega,{\cal G}, ({\cal G}_t)_{t\in[0,T]}, \mathbb Q\right)$ equipped with a Brownian motion $\beta$ and a copy of $\widehat X$ (still denoted by the same letter) such that $\widehat{X}$ fulfills the first line of \eqref{MKIntro} with $\beta$ and \begin{equation} \label{Eup} {\bf p}\left(t\right) = {\bf u}\left(T-t\right), \ t \in ]0,T[. \end{equation} \smallbreak \noindent Finally, existence in law for \eqref{MKIntro} in the sense of Definition \ref{MKSol} holds since $(\widehat{X}, {\bf u}\left(T-\cdot\right))$ is a solution of \eqref{MKIntro} on the filtered probability space above, with the same Brownian motion. This occurs in ${\cal A}_{{\cal C}}$ since $\mathcal{L}\left(\widehat{X}_T\right) \in {\cal C}$ thanks to equality \eqref{MKIdLaw} for $t = T$. \noindent We now discuss briefly the {\it in particular} point. \begin{itemize} \item Suppose the validity of Assumption \ref{MKEx_2}; then ${\bf u}\left(T-\cdot\right)$ belongs to ${\cal A}_{\cal C} \cap {\cal A}_1$ and we also have existence in law in ${\cal A}_{{\cal C}}\cap{\cal A}_1$. \item Suppose the validity of Assumption \ref{MKEx_3}. Then, by \eqref{Eup}, strong existence and pathwise uniqueness for the first line of \eqref{MKIntro} hold by classical arguments, since the coefficients are locally Lipschitz with linear growth; see \cite{RevuzYorBook}, Exercise (2.10) of Chapter IX.2, and \cite{rogers_v2}, Th.~12, Section V.12. By the Yamada-Watanabe theorem this implies uniqueness in law, which shows that ${\bf u}\left(T-\cdot\right)$ constitutes the marginal laws of the considered strong solutions. This concludes the proof of strong existence in ${\cal A}_{{\cal C}}\cap{\cal A}_2$ since ${\bf u}\left(T-\cdot\right)$ belongs to ${\cal A}_{\cal C} \cap {\cal A}_2$, by Assumption \ref{MKEx_3}. \end{itemize} \end{proof} \begin{rem} \label{RExistence2} By \eqref{Eup}, the second component ${\bf p}$ of the solution of \eqref{MKIntro} is given by ${\bf u}\left(T-\cdot\right).$ \end{rem} \subsection{PDE with terminal condition and uniqueness for the McKean SDE} \label{MKUNIQ} In this subsection we discuss some questions related to uniqueness for equation \eqref{MKIntro}. We state the following hypothesis related to $(\mu, \mathcal{C})$, where $\mathcal{C}$ is a given subset of $\mathcal{P}\left(\mathbb R^d\right)$. \begin{ass} \label{APDETerm} The equation \eqref{EDPTerm0} with terminal condition $\mu$ admits at most one ${\cal P}\left(\mathbb R^d\right)$-valued solution ${\bf u}$ in the sense of Definition \ref{Def} such that ${\bf u}\left(0\right)$ belongs to $\mathcal{C}$. \end{ass} We recall that Section \ref{S32} provides various classes of examples where Assumption \ref{APDETerm} holds. \begin{prop} \label{MKProp} Suppose the validity of Assumption \ref{APDETerm} with respect to $(\mu,{\cal C})$ and suppose $b,\sigma$ to be locally bounded. \\ Let $\left(Y^i, {\bf p}^i\right), \ i \in \{1,2\}$ be two solutions of equation \eqref{MKIntro} in the sense of Definition \ref{MKSol} such that ${\bf p}^1\left(T\right), {\bf p}^2\left(T\right)$ belong to ${\cal C}$. Then, \begin{equation*} {\bf p}^1 = {\bf p}^2.
\end{equation*} \end{prop} \begin{proof} \ Proposition \ref{PProbRep} shows that ${\bf p}^1\left(T-\cdot\right), {\bf p}^2\left(T-\cdot\right)$ are ${\cal P}\left(\mathbb R^d\right)$-valued solutions of equation \eqref{EDPTerm} in the sense of Definition \ref{Def} with terminal value $\mu$. Assumption \ref{APDETerm} gives the result since ${\bf p}^1\left(T\right), {\bf p}^2\left(T\right)$ belong to ${\cal C}$. \end{proof} As a corollary, we establish some uniqueness in law and pathwise uniqueness results for equation \eqref{MKIntro} in the classes ${\cal A}_1$ and ${\cal A}_2$. \begin{corro} \label{Coro} Suppose the validity of Assumption \ref{APDETerm} with respect to $(\mu,{\cal C})$. Then, the following results hold. \begin{enumerate} \item If $b$ is locally bounded, $\sigma$ is continuous and the non-degeneracy Assumption \ref{Zvon3} holds, then \eqref{MKIntro} admits uniqueness in law in ${\cal A}_{{\cal C}}\cap{\cal A}_1$. \item If $\left(b,\sigma\right)$ are locally Lipschitz with linear growth in space, then \eqref{MKIntro} admits pathwise uniqueness in ${\cal A}_{{\cal C}}\cap{\cal A}_2$. \end{enumerate} \end{corro} \begin{proof} \noindent If $\left(Y,{\bf p}\right)$ is a solution of \eqref{MKIntro} such that ${\bf p}\left(T\right)$ belongs to ${\cal C}$, then by Proposition \ref{MKProp} ${\bf p}$ is determined by $\mu = \mathcal{L}\left(Y_0\right)$. \\ To show that item 1. (resp. 2.) holds, it suffices to show that the classical SDE \begin{equation} \label{FrozenSDE} dX_t = \left(b\left(t,X_t; {\bf p}_t\right)- \widehat{b}\left(t,X_t\right)\right)dt + \widehat{\sigma}\left(t,X_t\right)dW_t, \ t \in [0,T[, \end{equation} where $b$ was defined in \eqref{EBP} and $W$ is an $m$-dimensional Brownian motion, admits uniqueness in law (resp. pathwise uniqueness). The mentioned uniqueness in law is a consequence of Theorem 10.1.3 in \cite{stroock}, and pathwise uniqueness holds by \cite{RevuzYorBook}, Exercise (2.10) of Chapter IX.2, and \cite{rogers_v2}, Th.~12, Section V.12. \end{proof} \subsection{Well-posedness for the McKean SDE: the bounded coefficients case} \label{SExamples44} \smallbreak In this section, we state a significant result related to existence and uniqueness in law together with pathwise uniqueness for equation \eqref{MKIntro}. In particular we obtain existence and uniqueness in law for \eqref{MKIntro} in the class ${\cal A}_1$. \smallbreak \noindent We formulate the following hypotheses. \begin{ass}\label{smoothness} \begin{enumerate} \smallbreak \item Assumption \ref{Zvon3} holds. \item The function $\sigma$ is Lipschitz (in space). \item The functions $\sigma$, $b$, $\left(\nabla_xb_i\right)_{i \in [\![1,d]\!]}$, $\left(\nabla_x\Sigma_{ij}\right)_{i,j \in [\![1,d]\!]}$ are continuous and bounded, and $\nabla^2_x\Sigma$ is H\"{o}lder continuous with exponent $\alpha \in ]0,1[$ in space, uniformly in time. \end{enumerate} \end{ass} \begin{ass}\label{smoothness1} $\Sigma$ is supposed to be H\"{o}lder continuous in time. \end{ass} \begin{rem} \label{Runu} Under Assumption \ref{smoothness}, for every $\nu \in {\cal P}(\mathbb R^d)$ there exists a unique ${\cal P}\left(\mathbb R^d\right)$-valued solution ${\bf u}^\nu$ of \eqref{Fokker}.\\ Indeed, the assumptions of Lemma \ref{LC313} are fulfilled. \end{rem} We continue with a fundamental lemma whose proof will appear in the Appendix. \begin{lem} \label{FriedAr} \noindent Suppose the validity of Assumptions \ref{smoothness} and \ref{smoothness1}.
Then, for all $\nu \in {\cal P}\left(\mathbb R^d\right)$, ${\bf u}^\nu\left(t\right)$ admits a density $u^{\nu}\left(t,\cdot\right) \in {\cal C}^1\left(\mathbb R^d\right)$ for all $t \in ]0,T]$. Furthermore, for each compact $K$ of $]0,T] \times \mathbb R^d$, there are strictly positive constants $C^K_1, C^K_2, C^K_3$, also depending on $\nu$, such that \begin{eqnarray} C^K_1 \le u^\nu\left(t,x\right) &\leq& C^K_2 \label{dens} \\ \left|\partial_iu^\nu\left(t,x\right)\right| &\leq& C^K_3, \ i \in [\![1,d]\!], \label{DerDens} \end{eqnarray} for all $(t,x) \in K$. \end{lem} \begin{lem} \label{P49} Suppose that the initial condition $\mu$ equals ${\bf u}^\nu\left(T\right)$ for some $\nu \in {\cal P}\left(\mathbb R^d\right)$. We suppose the following. \begin{enumerate} \item Assumption \ref{smoothness} holds. \item ${\bf u}^\nu\left(t\right)$ admits a density $u^{\nu}\left(t,\cdot\right) \in W^{1,1}_{\rm loc}(\mathbb R^d)$ for all $t \in ]0,T]$. \item For each compact $K$ of $]0,T] \times \mathbb R^d$, there are strictly positive constants $C^K_1, C^K_2, C^K_3$, also depending on $\nu$, such that \eqref{dens} and \eqref{DerDens} hold for all $(t,x) \in K$. \end{enumerate} Then equation \eqref{MKIntro} admits existence in law in ${\cal A}_1$. \end{lem} \begin{corro}\label{CP49} We suppose the validity of Assumptions \ref{smoothness} and \ref{smoothness1}. \begin{enumerate} \item Suppose the existence of $\nu \in {\cal P}(\mathbb R^d)$ such that ${\bf u}^{\nu}(T) = \mu$. Then, equation \eqref{MKIntro} admits existence in law in ${\cal A}_1$. Moreover, if $\nu$ is a Dirac mass, existence in law occurs in ${\cal A}_{\left(\delta_x\right)_{x\in \mathbb R^d}}\cap{\cal A}_1$. \item Otherwise \eqref{MKIntro} does not admit existence in law. \end{enumerate} \end{corro} \begin{proof}\ \begin{enumerate} \item The first part is a direct consequence of Lemma \ref{FriedAr}, Lemma \ref{P49} and expression \eqref{EBP}. If in addition $\nu$ is a Dirac mass, then ${\bf u}^\nu\left(0\right)$ belongs to ${\cal C} := \left(\delta_x\right)_{x\in\mathbb R^d}$, hence existence in law occurs in ${\cal A}_{\cal C}\cap{\cal A}_1$, again by Proposition \ref{MKEx_Prop}. \item Otherwise, suppose by contradiction that $\left(Y,{\bf p}\right)$ is a solution of \eqref{MKIntro}. By Proposition \ref{PProbRep}, ${\bf p}\left(T-\cdot\right)$ is a solution of \eqref{BackwardFokker}. We set $ \nu_0 = {\bf p}(T)$, so that ${\bf p}(T-\cdot)$ also verifies \eqref{Fokker} with initial value $ \nu_0$. Since, by Lemma \ref{LC313}, uniqueness holds for \eqref{Fokker}, it follows that ${\bf p}(T-\cdot) = {\bf u}^{\nu_0}$, so that $\mu = {\bf p}\left(0\right) = {\bf u}^{\nu_0}\left(T\right)$, a contradiction, which concludes the proof of item 2. \end{enumerate} \end{proof} \begin{prooff} \ (of Lemma \ref{P49}). \noindent Suppose $\mu = {\bf u}^\nu\left(T\right)$ for some $\nu \in {\cal P}\left(\mathbb R^d\right)$. \smallbreak \noindent We recall that Assumption \ref{GH1} holds with respect to ${\cal C} := {\cal P}\left(\mathbb R^d\right)$ by Remark \ref{R1} 1. \smallbreak \noindent In view of applying Proposition \ref{MKEx_Prop}, we need to check that Assumptions \ref{MKEx_1} and \ref{MKEx_2} hold with respect to $(\mu,{\cal C})$. \smallbreak \noindent Assumption \ref{MKEx_1} is verified by ${\bf u} = {\bf u}^\nu$. Indeed the function ${\bf u}^\nu$ is a ${\cal P}\left(\mathbb R^d\right)$-valued solution of \eqref{EDPTerm} with terminal value $\mu$ and such that ${\bf u}^\nu\left(0\right)$ belongs to ${\cal C}$.
Condition \eqref{HP} appearing in Assumption \ref{MKEx_1} is satisfied with ${\bf u} = {\bf u}^\nu$ thanks to the upper bounds in inequalities \eqref{dens} and \eqref{DerDens} and the fact that $\sigma$ is bounded. Hence Assumption \ref{MKEx_1} holds with respect to $\left(\mu,{\cal C}\right)$. \smallbreak \noindent It remains to show that Assumption \ref{MKEx_2} holds, i.e. that $$(t,x) \mapsto \frac{\mathop{div_x} \left(\widehat{\Sigma}_{i.}(t,x) u^\nu(T-t,x)\right)}{u^\nu(T-t,x)}$$ is locally bounded on $[0,T[ \times \mathbb R^d$. To achieve this, we fix $i \in [\![1,d]\!]$ and a bounded open subset $\mathcal{O}$ of $[0,T[ \times \mathbb R^d$. For $(t,x) \in \mathcal{O}$ we have $$ \left|\frac{\mathop{div_x}\left(\widehat{\Sigma}_{i.}\left(t,x\right) u^\nu\left(T-t,x\right)\right)}{u^\nu\left(T-t,x\right)}\right| {} \leq \left|\mathop{div_x}\left(\widehat{\Sigma}_{i.}\left(t,x\right)\right)\right| + \left|{\widehat \Sigma}_{i.}\left(t,x\right)\right|\frac{\left|\nabla_x u^\nu\left(T-t,x\right)\right|}{u^\nu\left(T-t,x\right)}. $$ The latter quantity is locally bounded in $(t,x)$ thanks to the boundedness of $\Sigma,\mathop{div_x}\left(\widehat{\Sigma}_{i.}\right)$ and inequalities \eqref{dens} and \eqref{DerDens}. Hence, Assumption \ref{MKEx_2} holds. This ends the proof. \end{prooff} \begin{prop} \label{TExUniq} Suppose the validity of Assumptions \ref{smoothness} and \ref{smoothness1}. The following results hold. \begin{enumerate} \item Let us suppose $d = 1$. Suppose $\mu$ equals ${\bf u}^{\delta_{x_0}} \left(T\right)$ for some $x_0 \in \mathbb R^d$. Then \eqref{MKIntro} admits existence and uniqueness in law in ${\cal A}_{\left(\delta_x\right)_{x\in \mathbb R^d}}\cap{\cal A}_1$, and pathwise uniqueness in ${\cal A}_{\left(\delta_x\right)_{x\in \mathbb R^d}}\cap{\cal A}_2$. \item Let $d \geq 2$. There is a maturity $T$ sufficiently small (depending only on the Lipschitz constants of $b$ and $\sigma$) such that the following result holds. Suppose $\mu$ equals ${\bf u}^{\delta_{x_0}} \left(T\right)$ for some $x_0 \in \mathbb R^d$. Then \eqref{MKIntro} admits existence and uniqueness in law in ${\cal A}_{\left(\delta_x\right)_{x\in \mathbb R^d}}\cap{\cal A}_1$, and pathwise uniqueness in ${\cal A}_{\left(\delta_x\right)_{x\in \mathbb R^d}}\cap{\cal A}_2$. \end{enumerate} \end{prop} \begin{proof} \noindent By Assumptions \ref{smoothness} and \ref{smoothness1}, Corollary \ref{CP49} implies that \eqref{MKIntro} admits existence in law in both cases, in the specified classes. To check the uniqueness in law and pathwise uniqueness results, we wish to apply Corollary \ref{Coro}. It suffices to check Assumption \ref{APDETerm}, because the other hypotheses are included in Assumption \ref{smoothness}. Below we verify Assumption \ref{APDETerm} with respect to $(\mu, (\delta_x)_{x\in \mathbb R^d})$, for the two cases separately. \begin{enumerate} \item Fix $x_0 \in \mathbb R^d$. The claim follows from Proposition \ref{propLip1}, which holds under Assumption \ref{Lip1d}, itself a consequence of Assumption \ref{smoothness}. \item We proceed as for the previous case, applying Theorem \ref{propLipd} instead of Proposition \ref{propLip1}. \end{enumerate} \end{proof} \noindent We state now the most important results of the section. \begin{thm} \label{TExUniqBis} Suppose that $b,\sigma$ are time-homogeneous, that Assumption \ref{smoothness} holds, and that there is $\nu \in {\cal P}\left(\mathbb R^d\right)$ (a priori not known) such that $\mu = {\bf u}^\nu\left(T\right)$.
\begin{enumerate} \item \eqref{MKIntro} admits existence and uniqueness in law. Moreover, existence in law holds in ${\cal A}_1$. \item \eqref{MKIntro} admits pathwise uniqueness in ${\cal A}_2$. \end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item \begin{enumerate} \item First, Assumption \ref{smoothness1} trivially holds since $b,\sigma$ are time-homogeneous. Then, point 1 of Corollary \ref{CP49} implies that \eqref{MKIntro} admits existence in law (in ${\cal A}_1$) since Assumption \ref{smoothness} holds. \smallbreak \item Let $\left(Y,{\bf p}\right)$ be a solution of \eqref{MKIntro}. Proceeding as in the proof of item 2. of Corollary \ref{CP49}, we obtain that ${\bf p}(T-\cdot) = {\bf u}^{ \nu_0}$ with $\nu_0 = {\bf p}\left(T\right)$. Then, Lemma \ref{FriedAr} and the fact that $\sigma$ is bounded allow us to show that ${\bf p}$ belongs to ${\cal A}_1$, see \eqref{EBP} in Notation \ref{NAC1_2}. \item To conclude, it remains to show uniqueness in law in ${\cal A}_1$. For this we wish to apply point 1. of Corollary \ref{Coro}. To achieve this, we check Assumption \ref{APDETerm} with respect to $\left(\mu, {\cal P}\left(\mathbb R^d\right)\right)$. This is a consequence of Assumptions \ref{Zvon3} and \ref{Lun1} and Theorem \ref{P315}. This concludes the proof of item 1. \end{enumerate} \item Concerning pathwise uniqueness in ${\cal A}_2$, we proceed as for uniqueness in law, but applying point 2 of Corollary \ref{Coro}. This is valid since Assumption \ref{smoothness} implies that $b,\sigma$ are bounded and Lipschitz. \end{enumerate} \end{proof} In the result below we extend Theorem \ref{TExUniqBis} to the case when the coefficients $b,\sigma$ are piecewise time-homogeneous. \begin{thm} \label{TC313} Let $n \in \mathbb N^*$. Let $ 0 = t_0 < \ldots < t_n = T$ be a partition. For $k \in [\![2,n]\!]$ (resp. $k=1$) we denote $I_k = ]t_{k-1},t_k]$ (resp. $[t_{0},t_1]$). Suppose that the following conditions hold. \begin{enumerate} \item For all $k \in [\![1,n]\!]$ the restriction of $\sigma$ (resp. $b$) to $I_k \times \mathbb R^d$ is a time-homogeneous function $\sigma^k: \mathbb R^d \rightarrow M_{d}(\mathbb R)$ (resp. $b^k: \mathbb R^d \rightarrow \mathbb R^d$). \item Assumption \ref{Zvon3} holds. \item $\sigma$ is Lipschitz in space, uniformly in time. \item The functions $\sigma^k$, $b^k$, $\left(\nabla_xb^k_i\right)_{i\in[\![1,d]\!]}$, $\left(\nabla_x\Sigma^k_{ij}\right)_{i,j \in [\![1,d]\!]}$ are continuous and bounded, and $\nabla^2_x\Sigma^k$ is H\"{o}lder continuous with exponent $\alpha \in ]0,1[$. \end{enumerate} Suppose $\mu$ equals ${\bf u}^\nu(T)$ for some $\nu \in {\cal P}\left(\mathbb R^d\right)$. Then equation \eqref{MKIntro} admits existence and uniqueness in law. Existence in law holds in ${\cal A}_1$. \end{thm} \begin{rem} \label{RC313} A remark similar to the one in Corollary \ref{CP49} holds for Theorems \ref{TExUniqBis} and \ref{TC313}. If there is no $\nu \in {\cal P}(\mathbb R^d)$ such that ${\bf u}^{\nu}(T) = \mu$, then \eqref{MKIntro} does not admit existence in law. \end{rem} \begin{prooff} (of Theorem \ref{TC313}). \noindent We recall that by Lemma \ref{LC313}, ${\bf u}^{\nu_0}$ is well-defined for all $\nu_0 \in {\cal P}\left(\mathbb R^d\right)$. \begin{enumerate} \item We first show that ${\bf u}^{\nu_0}$ verifies \eqref{dens} and \eqref{DerDens}. Indeed, fix $k \in [\![1,n]\!]$.
The restriction ${\bf u}_k$ of ${\bf u}^{\nu_0}$ to $\bar I_k$ solves the first line of \eqref{Fokker}, with $[0,T]$ replaced by $\bar I_k$ and $L$ replaced by $L^k$ defined in \eqref{OpLk}, with initial condition ${\bf u}_k(t_{k-1}) = {\bf u}^{\nu_0}(t_{k-1})$. It is in fact the unique such solution, by Lemma \ref{LC313} applied with $[0,T]$ replaced by $\bar I_k$. We apply Lemma \ref{FriedAr} with $[0,T]$ replaced by $\bar I_k$, taking into account Assumptions \ref{smoothness} and \ref{smoothness1}, which hold trivially with $\sigma, b, \Sigma$ replaced by $\sigma^k, b^k, \Sigma^k$. This implies that ${\bf u}^{\nu_0}$ verifies \eqref{dens} and \eqref{DerDens} with $[0,T]$ replaced by $\bar I_k$, and therefore on the whole of $[0,T]$. \item Existence in law in ${\cal A}_1$ follows now by Lemma \ref{P49}. \item We now turn to uniqueness in law. Let $\left(Y,{\bf p}\right)$ be a solution of \eqref{MKIntro}. We set $\nu_0 := {\bf p}\left(T\right)$. Since ${\bf u}^{ \nu_0}$ and ${\bf p}(T-\cdot)$ solve \eqref{Fokker}, Lemma \ref{LC313} implies that ${\bf p}$ is uniquely determined. Similarly as in item 1.(b) of the proof of Theorem \ref{TExUniqBis}, item 1. of the present proof and Lemma \ref{FriedAr} allow us to show that ${\bf p}$ belongs to ${\cal A}_1$. \item It remains to show uniqueness in law in ${\cal A}_1$. For this, Corollary \ref{C313} implies Assumption \ref{APDETerm} with ${\cal C} = {\cal P}(\mathbb R^d)$. Uniqueness of \eqref{MKIntro} in the class ${\cal A}_1$ follows now by Corollary \ref{Coro}, which ends the proof. \end{enumerate} \end{prooff} \subsection{Well-posedness for the McKean SDE: the Ornstein-Uhlenbeck semigroup} \label{Sex} In this section we consider the case $b : \left(s,x\right) \mapsto C\left(s\right)x$ with $C$ continuous from $[0,T]$ to $M_d\left(\mathbb R\right)$, and $\sigma$ continuous from $[0,T]$ to $M_{d,m}\left(\mathbb R\right)$. We also suppose that for all $t \in [0,T]$, $\sigma\left(t\right)$ is invertible. We denote by ${\cal C}\left(t\right), t \in [0,T]$, the unique solution of the matrix-valued ODE \begin{equation*} {\cal C}(t) = I + \int^{t}_{0}C(s){\cal C}(s)ds. \end{equation*} For a given $x_0 \in \mathbb R^d$ and a given $t \in ]0,T]$, we denote by $p^{x_0}_t$ the density of a Gaussian random vector with mean $m^{x_0}_t = {\cal C}(t)x_0$ and covariance matrix $Q_t = {\cal C}(t)\int^{t}_{0}{\cal C}^{-1}(s)\Sigma(s){\cal C}^{-1}\left(s\right)^{\top}ds\,{\cal C}(t)^{\top}$. Note that for all $t \in ]0,T]$, $Q_t$ is strictly positive definite; in particular it is invertible. Indeed, for every $t \in [0,T]$, $\Sigma(t)$ is strictly positive definite. By continuity in $t$, $\int^{t}_{0}{\cal C}^{-1}(s)\Sigma(s){\cal C}^{-1}\left(s\right)^{\top}ds$ is also strictly positive definite, and finally the same holds for $Q_t$. For a given $\nu \in {\cal P}\left(\mathbb R^d\right)$, $t \in ]0,T]$, we set the notation \begin{equation}\label{Epnu} p^\nu_t : x \mapsto \int_{\mathbb R^d}p^{x_0}_t\left(x\right)\nu\left(dx_0\right). \end{equation} At this level, we need a lemma. \begin{lem}\label{OU_lemma} Let $\nu \in {\cal P}\left(\mathbb R^d\right)$. The measure-valued function $t \mapsto p^\nu_t(x)dx$ is the unique solution of \eqref{Fokker} with initial value $\nu$, and we denote it by ${\bf u}^\nu$. Furthermore, ${\bf u}^\nu\left(T-\cdot\right)$ belongs to ${\cal A}_2$. \end{lem} \begin{proof} \begin{enumerate} \item We set ${\bf u}^\nu\left(t\right)\left(dx\right):= p^\nu_t(x)\, dx, \ t \in ]0,T]$.
By Chapter 5, Section 5.6 in \cite{karatshreve}, for every $t \in]0,T]$, $p^{x_0}_t$ is the density of the random variable $X^{x_0}_t$, where $X^{x_0}$ is the unique strong solution of \eqref{EqLin} with initial value $x_0$. The mapping $t \mapsto p^{x_0}_t(x) dx$ is a solution of \eqref{Fokker} by Proposition \ref{PFundam}, with initial condition $\delta_{x_0}$. Consequently, by superposition, ${\bf u}^\nu$ is a solution of \eqref{Fokker} with initial value $\nu$. \item ${\bf u}^\nu$ is the unique solution of \eqref{Fokker} because of Proposition \ref{FwdOU_Uniq}. \item It remains to show that ${\bf u}^\nu\left(T-\cdot\right)$ belongs to ${\cal A}_2$, namely that for all $i \in [\![1,d]\!]$ \begin{equation*} \left(t,x\right) \mapsto \frac{{\mathop {div_x}}\left(\Sigma\left(T-t\right)_{i\cdot}p^{\nu}_{T-t}\left(x\right)\right)}{p^{\nu}_{T-t}\left(x\right)} \end{equation*} is locally Lipschitz with linear growth in space on $[0,T[\times\mathbb R^d$. \smallbreak \noindent Fix $i \in [\![1,d]\!]$, $t \in [0,T[$ and $x \in \mathbb R^d$. Recalling that $p^{x_0}_{T-t}$ is the density of a Gaussian law with mean $m^{x_0}_{T-t}$ and covariance matrix $Q_{T-t}$ for a given $x_0 \in \mathbb R^d$, we have \begin{equation}\label{div_OU} \frac{{\mathop {div_x}}\left(\Sigma\left(T-t\right)_{i\cdot}p^{\nu}_{T-t}\left(x\right)\right)}{p^{\nu}_{T-t}\left(x\right)} = -\frac{1}{p^\nu_{T-t}\left(x\right)}\int_{\mathbb R^d}\left<\Sigma\left(T-t\right)_{i\cdot},Q^{-1}_{T-t}\left(x - m^{x_0}_{T-t}\right)\right>p^{x_0}_{T-t}\left(x\right)\nu\left(dx_0\right). \end{equation} Let $K$ be a compact subset of $[0,T[ \times \mathbb R^d$; then there is $M_K > 0$ such that for all $\left(t,x\right) \in K$, $x_0 \in \mathbb R^d$, \begin{equation*} \left|\left<\Sigma\left(T-t\right)_{i\cdot},Q^{-1}_{T-t}\left(x - m^{x_0}_{T-t}\right)\right>p^{x_0}_{T-t}\left(x\right)\right| \leq \left|\Sigma\left(T-t\right)_{i\cdot}\right|\left|\left|Q^{-1}_{T-t}\right|\right|\left|x - m^{x_0}_{T-t}\right|p^{x_0}_{T-t}\left(x\right) \le M_K. \end{equation*} This follows because $t \mapsto \Sigma(T-t)$ and $t \mapsto Q^{-1}_{T-t}$ are continuous on $[0,T[$ and, denoting by $c_K > 0$ a lower bound of the smallest eigenvalue of $Q^{-1}_{T-t}$ over $K$ and setting $m_K := \sup_{a \geq 0} a \exp\left(- c_K\frac{a^{2}}{2}\right)$, we have $$ |x - m^{x_0}_{T-t}|\, p^{x_0}_{T-t}(x) \le \left(2\pi\right)^{-\frac{d}{2}}\det\left(Q_{T-t}\right)^{-\frac{1}{2}} m_K, \ \forall (t,x) \in K, $$ whose right-hand side is bounded on $K$. To show that the left-hand side of \eqref{div_OU} is locally bounded on $[0,T[ \times \mathbb R^d$, it remains to show that $(t,x) \mapsto \int_{\mathbb R^d} p^{x_0}_{T-t}(x) \nu(dx_0)$ is lower bounded on $K$. Indeed, let $I$ be a compact subset of $\mathbb R^d$. Since $(t,x,x_0) \mapsto p^{x_0}_{T-t}(x)$ is strictly positive and continuous, it is lower bounded on $K \times I$ by a constant $c(K,I) > 0$. The result follows choosing $I$ such that $\nu(I) > 0$. \smallbreak \noindent To conclude, it remains to show that the functions $(t,x) \mapsto \frac{{\mathop {div_x}}\left(\Sigma\left(T-t\right)_{i\cdot}p^{\nu}_{T-t}\left(x\right)\right)}{p^{\nu}_{T-t}\left(x\right)}, \ i \in [\![1,d]\!]$, defined on $[0,T[\times\mathbb R^d$, have locally bounded spatial derivatives, which implies that they are Lipschitz with linear growth on each compact of $[0,T[\times\mathbb R^d$. By technical but easy computations, the result follows using the fact that the real functions $a \mapsto \vert a \vert^m \exp\left(-\frac{a^2}{2}\right)$, $m= 1,2,$ are bounded. \end{enumerate} \end{proof}
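\noindent To illustrate \eqref{Epnu} and the divergence formula \eqref{div_OU} concretely, here is a minimal numerical sketch (in Python with NumPy; the constants and the two-atom choice of $\nu$ are purely illustrative) in dimension $d=1$ with constant coefficients $b(s,x)=c\,x$ and $\sigma(s)=s_0$, for which ${\cal C}(t)=e^{ct}$, $m^{x_0}_t=e^{ct}x_0$ and $Q_t = s_0^2\left(e^{2ct}-1\right)/(2c)$:
\begin{verbatim}
import numpy as np

c, s0, T = 0.5, 1.0, 1.0          # drift slope, diffusion constant, horizon

def m(t, x0):                     # mean of X_t^{x0}: C(t) x0 with C(t) = exp(c t)
    return np.exp(c * t) * x0

def Q(t):                         # variance Q_t = s0^2 (exp(2 c t) - 1) / (2 c)
    return s0**2 * (np.exp(2 * c * t) - 1.0) / (2.0 * c)

def gauss(x, mean, var):          # one-dimensional Gaussian density
    return np.exp(-(x - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# nu = (delta_{-1} + delta_{+1})/2, so p_t^nu is the Gaussian mixture (Epnu)
atoms, weights = np.array([-1.0, 1.0]), np.array([0.5, 0.5])

def divergence_term(t, x):
    # div_x(Sigma p^nu_{T-t}) / p^nu_{T-t}, evaluated as in (div_OU):
    # -Sigma * sum_i w_i Q^{-1} (x - m_i) g_i(x) / sum_i w_i g_i(x)
    s = T - t
    g = weights * gauss(x, m(s, atoms), Q(s))
    return -s0**2 * np.sum(g * (x - m(s, atoms)) / Q(s)) / np.sum(g)

print(divergence_term(0.2, 0.3))  # value at time t = 0.2, state x = 0.3
\end{verbatim}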
\noindent We give now a global well-posedness result for equation \eqref{MKIntro}. \begin{thm}\label{MKOU_WellP} \begin{enumerate} \item Suppose the initial condition $\mu$ equals ${\bf u}^\nu\left(T\right)$ for some $\nu \in {\cal P}\left(\mathbb R^d\right)$. Then, equation \eqref{MKIntro} admits existence in law, strong existence, uniqueness in law and pathwise uniqueness. \item Otherwise \eqref{MKIntro} does not admit any solution. \end{enumerate} \end{thm} \begin{proof} Item 2. can be proved using arguments similar to those in the proof of Corollary \ref{CP49}. Let $(Y, {\bf p})$ be a solution of \eqref{MKIntro} and set $ \nu_0 = {\bf p}(T)$. By Proposition \ref{PProbRep}, ${\bf p}\left(T-\cdot\right)$ is a solution of \eqref{BackwardFokker}, so that ${\bf p}(T-\cdot)$ also verifies \eqref{Fokker} with initial value $ \nu_0$. Since, by Proposition \ref{FwdOU_Uniq}, uniqueness holds for \eqref{Fokker}, it follows that ${\bf p}(T-\cdot) = {\bf u}^{\nu_0}$, which concludes the proof of item 2. \noindent We prove now item 1. For this, taking into account Proposition \ref{MKProp}, the Yamada-Watanabe theorem and related results for classical SDEs, it suffices to show strong existence and pathwise uniqueness. We set ${\cal C} := {\cal P}\left(\mathbb R^d\right)$. \begin{enumerate} \item Concerning the strong existence statement, we want to apply Proposition \ref{MKEx_Prop}. For this we have to check that Assumption \ref{Lip1d} holds, that Assumption \ref{GH1} holds with respect to ${\cal C}$, and that Assumptions \ref{MKEx_1} and \ref{MKEx_3} hold with respect to $\left(\mu,{\cal C}\right)$. \smallbreak \noindent Since $b,\sigma$ are affine, Assumption \ref{Lip1d} trivially holds. Furthermore, Assumption \ref{GH1} holds with respect to ${\cal C}$ thanks to Proposition \ref{FwdOU_Uniq}. \smallbreak \noindent Now, ${\bf u}^\nu$ is a probability-valued solution of \eqref{EDPTerm0} with terminal value $\mu$. Furthermore, Lemma \ref{OU_lemma} shows that ${\bf u}^\nu$, being the unique solution of \eqref{Fokker}, is such that, for all $t \in ]0,T]$, ${\bf u}^\nu(t)$ admits $p^\nu_t$ (see \eqref{Epnu}) as density. Then, relation \eqref{HP} holds since, by the considerations preceding \eqref{Epnu}, $(t,x) \mapsto p^\nu_t(x)$ is locally bounded with locally bounded spatial derivatives. Hence, Assumption \ref{MKEx_1} holds with respect to $\left(\mu,{\cal C}\right)$. Finally, Lemma \ref{OU_lemma} implies that ${\bf u}^\nu\left(T-\cdot\right)$ belongs to ${\cal A}_2$. Hence, Assumption \ref{MKEx_3} holds with respect to $\left(\mu,{\cal C}\right)$. At this point Proposition \ref{MKEx_Prop} implies strong existence. \item Let $\left(Y,{\bf p}\right)$ be a solution of equation \eqref{MKIntro}. Proposition \ref{PProbRep} implies that ${\bf p}\left(T-\cdot\right)$ solves \eqref{EDPTerm}. Then, Proposition \ref{FwdOU_Uniq} gives ${\bf p}\left(T-\cdot\right) = {\bf u}^{\nu_0}$ with $\nu_0 = {\bf p}\left(T\right)$. Lemma \ref{OU_lemma} implies that ${\bf p}$ belongs to ${\cal A}_2$. \item It remains to show pathwise uniqueness in ${\cal A}_2$. Assumption \ref{APDETerm} holds with respect to $\left(\mu, {\cal C}\right)$ thanks to Theorem \ref{BwdOU_Uniq}. Now, point 2 of Corollary \ref{Coro} implies pathwise uniqueness in ${\cal A}_2$ since $b,\sigma$ are locally Lipschitz with linear growth in space. \end{enumerate} \end{proof} \section*{Appendix} \subsection{Proof of Lemma \ref{FriedAr}} Let $\nu \in {\cal P}\left(\mathbb R^d\right)$.
For each given $t \in [0,T]$, we denote by $G_t$ the differential operator such that for all $f\in\mathcal{C}^2\left(\mathbb R^d\right)$ \begin{equation*} G_tf = \frac{1}{2}\sum^{d}_{i,j=1}\partial_{ij}\left(\Sigma_{ij}\left(t,\cdot\right)f\right) - \sum^{d}_{i=1}\partial_i\left(b_i\left(t,\cdot\right)f\right). \end{equation*} Assumption \ref{smoothness} implies that for a given $f \in \mathcal{C}^{2}\left(\mathbb R^d\right)$, $G_tf$ can be rewritten in the two following ways: \begin{equation} \label{Friedman} G_tf = \frac{1}{2}\sum^{d}_{i,j=1}\Sigma_{ij}(t,\cdot)\partial_{ij}f + \sum^{d}_{i=1}\left(\sum^{d}_{j=1}\partial_j\Sigma_{ij}(t,\cdot)- b_i(t, \cdot)\right)\partial_if + c^1(t,\cdot)f, \end{equation} with $$ c^1 : (t,x) \mapsto \frac{1}{2}\sum^{d}_{i,j=1}\partial_{ij}\Sigma_{ij}(t,x) - \sum^{d}_{i=1}\partial_ib_i(t,x), $$ and \begin{equation} \label{Aronson} G_tf = \frac{1}{2}\sum^{d}_{i,j=1}\partial_j \left(\partial_i\Sigma_{ij} (t,\cdot)f + \Sigma_{ij}(t,\cdot)\partial_if\right) - \sum^{d}_{i=1}b_i(t,\cdot)\partial_if - \sum^{d}_{i=1}\partial_ib_i(t,\cdot)f. \end{equation} \smallbreak \noindent On the one hand, combining identity \eqref{Friedman} with Assumption \ref{smoothness}, there exists a fundamental solution $\Gamma$ (in the sense of the definition stated in Section 1, p.~3 of \cite{friedman_1964}) of $\partial_tu = G_tu$, thanks to Theorem 10, Section 6, Chapter 1 in the same reference. Furthermore, there exist $C_1,C_2 > 0$ such that for all $i \in [\![1,d]\!]$, $x,\xi \in \mathbb R^d$, $\tau \in [0,T]$, $t > \tau$, \begin{equation} \label{PropFriedman_1} \left|\Gamma\left(x,t,\xi,\tau\right)\right| \leq C_1\left(t-\tau\right)^{-\frac{d}{2}}\exp\left(-\frac{C_2\left|x-\xi\right|^2}{4\left(t-\tau\right)}\right), \end{equation} \begin{equation}\label{PropFriedman_2} \left|\partial_{x_i}\Gamma\left(x,t,\xi,\tau\right)\right| \leq C_1\left(t-\tau\right)^{-\frac{d+1}{2}}\exp\left(-\frac{C_2\left|x-\xi\right|^2}{4\left(t-\tau\right)}\right), \end{equation} \noindent thanks to identities (6.12), (6.13) in Section 6, Chapter 1 of \cite{friedman_1964}. \smallbreak \noindent On the other hand, combining identity \eqref{Aronson} with Assumption \ref{smoothness}, there exists a weak fundamental solution $\Theta$ of $\partial_tu = G_tu$ thanks to Theorem 5 in \cite{AronsonGeneral}. In addition, there exist $K_1,K_2,K_3 > 0$ such that for almost every $x,\xi \in \mathbb R^d$, $\tau \in [0,T]$, $t > \tau$, \begin{equation}\label{PropAronson} \frac{1}{K_1}\left(t-\tau\right)^{-\frac{d}{2}}\exp\left(-\frac{K_2\left|x-\xi\right|^2}{4\left(t-\tau\right)}\right) \leq \Theta\left(x,t,\xi,\tau\right) \leq K_1\left(t-\tau\right)^{-\frac{d}{2}}\exp\left(-\frac{K_3\left|x-\xi\right|^2}{4\left(t-\tau\right)}\right), \end{equation} thanks to point (ii) of Theorem 10 in \cite{AronsonGeneral}. \smallbreak \noindent Our goal is now to show that $\Gamma$ and $\Theta$ coincide. To this end, we adapt the argument developed at the beginning of Section 7 in \cite{AronsonGeneral}. Fix a function $H$ belonging to $\mathcal{C}^\infty_c\left([0,T]\times\mathbb R^d\right)$. Identity (7.6) in Theorem 12, Section 1, Chapter 1
of \cite{friedman_1964} implies in particular that the function $$ u: \left(t,x\right) \mapsto \int^{t}_{0}\int_{\mathbb R^d}\Gamma\left(x,t,\xi,\tau\right) H\left(\tau,\xi\right)d\xi d\tau$$ is continuously differentiable in time, twice continuously differentiable in space, and is a solution of the Cauchy problem \begin{equation}\label{CauchyPb} \begin{cases} \partial_tu\left(t,x\right) = G_tu\left(t,x\right) + H\left(t,x\right), \ \left(t,x\right) \in ]0,T]\times\mathbb R^d, \\ u\left(0,\cdot\right) = 0. \end{cases} \end{equation} It is consequently also a weak (i.e. distributional) solution of \eqref{CauchyPb}, which belongs to ${\cal E}^2(]0,T]\times\mathbb R^d)$ (see the definition of that space in \cite{AronsonGeneral}), since $u$ is bounded thanks to inequality \eqref{PropFriedman_1} and the fact that $H$ is bounded. Then, point (ii) of Theorem 5 in \cite{AronsonGeneral} says that $$ (t,x) \mapsto \int^{t}_{0}\int_{\mathbb R^d}\Theta\left(x,t,\xi,\tau\right)H\left(\tau,\xi\right)d\xi d\tau$$ is the unique weak solution in ${\cal E}^2(]0,T]\times\mathbb R^d)$ of \eqref{CauchyPb}. This implies that for every $(t,x) \in ]0,T] \times \mathbb R^d$ we have \begin{equation*} \int^{t}_{0}\int_{\mathbb R^d}\left(\Gamma - \Theta\right)\left(x,t,\xi,\tau\right)H\left(\tau,\xi\right) d\xi d\tau = 0. \end{equation*} Point (i) of Theorem 5 in \cite{AronsonGeneral} (resp. inequality \eqref{PropFriedman_1}) implies that $\Theta$ (resp. $\Gamma$) belongs to $L^{p}\left(]0,T]\times\mathbb R^d\right)$ as a function of $(\xi,\tau)$, for an arbitrary $p \geq d + 2$. Then, we conclude that for all $\left(t,x\right) \in ]0,T] \times \mathbb R^d$, \begin{equation} \label{coincide} \Theta\left(x,t,\xi,\tau\right) = \Gamma\left(x,t,\xi,\tau\right) \end{equation} for $d\xi d\tau$-almost all $(\tau,\xi) \in [0,t[ \times \mathbb R^d$. This follows by density of $\mathcal{C}^\infty_c\left([0,T]\times\mathbb R^d\right)$ in $L^{q}\left(]0,T]\times\mathbb R^d\right)$, $q$ being the conjugate exponent of $p$. \smallbreak \noindent This, together with \eqref{PropAronson} and the fact that $\Gamma$ is continuous in $(\tau,\xi)$, implies that \eqref{PropAronson} holds for all $(\tau,\xi) \in [0,t[ \times \mathbb R^d$, and therefore \begin{equation}\label{PropAronsonBis} \frac{1}{K_1}\left(t-\tau\right)^{-\frac{d}{2}}\exp\left(-\frac{K_2\left|x-\xi\right|^2}{4\left(t-\tau\right)}\right) \leq \Gamma \left(x,t,\xi,\tau\right) \leq K_1\left(t-\tau\right)^{-\frac{d}{2}}\exp\left(-\frac{K_3\left|x-\xi\right|^2}{4\left(t-\tau\right)}\right). \end{equation} We introduce $$ q_{t} := x \mapsto \int_{\mathbb R^d} \Gamma\left(x,t,\xi,0\right)\nu\left(d\xi\right). $$ By \eqref{PropAronsonBis} with $\tau = 0$, we get \begin{equation} \label{PropFriedman_3} q_{t}\left(x\right) \geq \frac{1}{K_1}t^{-\frac{d}{2}}\int_{\mathbb R^d}\exp\left(-\frac{K_2\left|x-\xi\right|^2}{4t}\right)\nu\left(d\xi\right). \end{equation} \smallbreak \noindent We denote now by ${\bf v}^\nu$ the measure-valued mapping such that ${\bf v}^\nu\left(0\right) = \nu$ and, for all $t \in ]0,T]$, ${\bf v}^\nu\left(t\right)$ has density $q_{t}$ with respect to the Lebesgue measure on $\mathbb R^d$. We want to show that ${\bf v}^\nu$ is a solution of \eqref{Fokker} with initial value $\nu$, in order to conclude that ${\bf u}^\nu = {\bf v}^\nu$, thanks to the validity of Assumption \ref{GH1}, which holds because of Remark \ref{R1} 1. and 3.
To this end, we remark that the definition of a fundamental solution of $\partial_tu = G_tu$ says that $u$ is a $C^{1,2}$ solution, and consequently also a solution in the sense of distributions. In particular, for all $\phi \in \mathcal{C}^{\infty}_c\left(\mathbb R^d\right)$ and all $t \geq \epsilon > 0$, \begin{equation} \label{NearFP} \int_{\mathbb R^d}\phi\left(x\right){\bf v}^\nu\left(t\right)\left(dx\right) = \int_{\mathbb R^d}\phi\left(x\right){\bf v}^\nu\left(\epsilon\right)\left(dx\right) + \int^{t}_{\epsilon}\int_{\mathbb R^d}L_s\phi\left(x\right){\bf v}^\nu\left(s\right)\left(dx\right)ds. \end{equation} To conclude, it remains to send $\epsilon$ to $0^+$. Theorem 15, Section 8, Chapter 1, and point (ii) of the definition stated on p.~27 of \cite{friedman_1964} imply in particular that for all $\phi \in \mathcal{C}^\infty_c\left(\mathbb R^d\right)$, $\xi \in \mathbb R^d$, \begin{equation*} \int_{\mathbb R^d}\Gamma\left(x,\epsilon,\xi,0\right)\phi\left(x\right)dx \underset{\epsilon\to 0^+}{\longrightarrow} \phi\left(\xi\right). \end{equation*} Fix now $\phi \in \mathcal{C}^\infty_c\left(\mathbb R^d\right)$. In particular, thanks to Fubini's theorem, \eqref{PropAronson} and Lebesgue's dominated convergence theorem, we have \begin{align*} \int_{\mathbb R^d}\phi\left(x\right){\bf v}^\nu\left(\epsilon\right)\left(dx\right) &{}= \int_{\mathbb R^d}\phi\left(x\right)\int_{\mathbb R^d}\Gamma\left(x,\epsilon,\xi,0\right)\nu\left(d\xi\right)dx \\ &{} = \int_{\mathbb R^d}\int_{\mathbb R^d}\Gamma\left(x,\epsilon,\xi,0\right)\phi\left(x\right)dx\nu\left(d\xi\right) \\ &{} \underset{\epsilon \to 0^+}{\longrightarrow} \int_{\mathbb R^d}\phi\left(\xi\right)\nu\left(d\xi\right). \end{align*} By \eqref{NearFP}, ${\bf v}^\nu$ is a solution of \eqref{Fokker} and consequently ${\bf u}^\nu = {\bf v}^\nu$, so that, for every $t \in ]0,T]$, ${\bf u}^\nu\left(t\right)$ admits $ u^\nu(t,\cdot) = q_{t}$ as density with respect to the Lebesgue measure on $\mathbb R^d$. Now, integrating the inequalities \eqref{PropFriedman_1}, \eqref{PropFriedman_2} with respect to $\nu$ and combining this with inequality \eqref{PropFriedman_3}, we obtain the existence of $K_1,K_2,C_1,C_2 > 0$ such that for all $t \in ]0,T]$, all $x \in \mathbb R^d$ and all $i \in [\![1,d]\!]$, \begin{equation*} \frac{1}{K_1}t^{-\frac{d}{2}}\int_{\mathbb R^d} \exp\left(-\frac{K_2\left|x-\xi\right|^2}{4t}\right)\nu\left(d\xi\right)\leq u^\nu\left(t,x\right)\leq K_1t^{-\frac{d}{2}}, \end{equation*} \begin{equation*} \left|\partial_iu^\nu\left(t,x\right)\right| \leq C_1t^{-\frac{d+1}{2}}. \end{equation*} Consequently, the upper bounds in \eqref{dens} and \eqref{DerDens} hold. Concerning the lower bound in \eqref{dens}, let $I$ be a compact subset of $\mathbb R^d$ such that $\nu(I) > 0$; the result follows since $(t,x,\xi) \mapsto \exp\left(-\frac{K_2\left|x-\xi\right|^2}{4t}\right)$ is strictly positive and continuous, and therefore lower bounded by a strictly positive constant on $K\times I$ for each compact $K$ of $]0,T]\times\mathbb R^d$. \section*{Acknowledgments} The work was supported by a public grant as part of the {\it Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH,} in a joint call with the Gaspard Monge Program for optimization, operations research and their interactions with data sciences. \bibliographystyle{plain}
\section{Introduction}\label{S1} The last two decades have witnessed lots of interest in Anti-de-Sitter (AdS) spacetimes and their thermodynamics as a result of gauge/gravity dualities \cite{Maldacena1,GKP,Witten1}. These dualities relate the physics of AdS spacetimes in the bulk at weak coupling to that of conformal field theories on the boundary at strong coupling. Such dualities have revealed how gravitational solutions in AdS encode important information about the conformal field theories on the boundary in their semi-classical gravity. Black holes in asymptotically AdS spacetimes have three types of horizon topology, namely, spherical, hyperbolic and flat, in contrast to asymptotically Minkowski black holes, which have only spherical horizons. One of the interesting classes of AdS solutions which were discussed in this context is the AdS-Taub-Bolt/NUT black holes in four \cite{clifford98, surf99} and higher dimensions \cite{adel+andrew}. These solutions were studied by several authors \cite{mann,clifford98,surf99,mann-thermo,mann06} and have revealed unusual properties of their thermodynamics. For example, the entropy is not the area of the horizon, as a result of the Misner string, and it is not always positive. Another unusual feature is that although we have added one more parameter to the Schwarzschild-AdS solution, we do not have an additional term in the first law, as in the case of the Kerr solution when we add a rotation parameter $a$. This is because of the known extra identification in the time direction $\beta=8\pi n$, which leaves the Misner string invisible \cite{misner}, where $n$ is the nut charge. This identification renders the nut parameter dependent on the horizon radius $r_0$; as a result, the nut charge cannot be varied independently of $r_0$. In this work we study the thermodynamics and the first law of neutral and dyonic Taub-Bolt/NUT-AdS solutions in four dimensions with flat horizons, which are characterized by a NUT charge $n$ and have the following form, \be ds^2=f(r)\,(dt+{\cal A})^2+{dr^2 \over f(r)}+(r^2-n^2)\,d\Sigma_{B}^2, \ee where ${\cal A}$ is the one-form in the AdS-Taub-NUT/Bolt metric, which is related to the K\"ahler form ${\cal F}$ through ${\cal F}=d{\cal A}$. Here we define a new thermodynamical charge $N= -\int_{B} {\cal F} = \sigma n$, where $\sigma$ is some constant\footnote{ e.g., in the case of a spherical horizon the integration is over $S^2$ and $\sigma=4\pi$.}. The constant-$r$ hypersurface of this space is a $U(1)$ fiber over the base manifold {\bf B}. This solution has been generalized to higher-dimensional Taub-Bolt and Taub-NUT solutions in \cite{bais} for asymptotically locally flat spaces and in \cite{page+pope,adel+andrew} for asymptotically locally de Sitter and Anti-de-Sitter spaces. Here we present a new treatment to study the thermodynamics of these solutions in which the NUT charge, $n$, is directly related to the new charge $N= -\int_{B} {\cal F} = \sigma n$. The treatment presented here is motivated by the previous work of Hunter \cite{hunter}, in which he introduces a charge $N=-\frac{1}{4 \pi} \int_{S^2} {\cal F}$ and a nut potential $\psi_N$, which is similar to the chemical potential $\Phi_N$ we define in this work. In our treatment $N$ can vary independently of the horizon radius $r_0$, since we do not have $\beta=8\pi n$, as a result of the absence of the Misner string in the Euclidean Taub-Bolt/NUT solution.
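\noindent As a quick illustration of why $N$ is proportional to $n$, consider the flat-horizon one-form that appears in section $3$ below, ${\cal A}={2\,n\,x \over l}\,d\phi$, for which \be {\cal F}=d{\cal A}={2\,n \over l}\,dx\wedge d\phi. \ee Integrating ${\cal F}$ over the base $B$ at constant $r$ then produces a result linear in $n$; the precise numerical coefficient depends on the normalization convention adopted for ${\cal F}$ and is absorbed in the constant $\sigma$, so that $N=\sigma\,n$.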
As mentioned above, the new charge $N$ has its own conjugate thermodynamic potential, $\Phi_N$, and the $N \Phi_N$ term is going to play an important role in the thermodynamics of these solutions. Upon calculating various thermodynamic quantities one finds that the entropy is the area of the horizon and these quantities satisfy Smarr's relation and the first law. Upon identifying one of the coordinates, the spatial boundary of these cases consists of three surfaces: a constant-$r$ surface and two annulus-like surfaces extended from the horizon to infinity on the top and the bottom of the cylinder. The two annulus-like surfaces in the $n=0$ case (e.g., Schwarzschild-AdS and Reissner-Nordstr\"om-AdS) do not receive any fluxes of conserved quantities such as mass, electric and magnetic charges; therefore, one can consider the identification of the other spatial direction, obtaining a torus. But as we will see here, for $n\neq 0$ these annulus-like surfaces receive their own fluxes, which are going to contribute to the surface integrals of various conserved charges. In fact, we are going to show that these surfaces have nontrivial contributions to all conserved quantities, such as the electric charge, the magnetic charge and the mass of the solution. The thermodynamics of these solutions has been studied by Mann et al. in \cite{mann06}. In their thermodynamic treatment the mass of the solution is the usual mass $M=\sigma\, m$, which is proportional to the mass parameter $m$, but the entropy is not the area of the horizon, since it needs to satisfy the Gibbs-Duhem relation, $I=\beta\,M-S$. In section 3, we show that a careful calculation of the mass using the Komar integral reveals an additional contribution, coming from the annulus integrals of the cylinder, which adds a $-2N\Phi_N$ term to the mass. Although the quantities in \cite{mann06} satisfy the Gibbs-Duhem relation, they do not satisfy the first law unless the outer radius of the horizon $r_{0}$ is related to the nut charge. This relation is very similar to the one found in the spherical horizon case, where one imposes the extra periodicity condition on Euclidean time $\beta=8\, \pi n$ to remove the Misner string singularity. In the spherical case this relation is important to remove such a singularity, but in the flat horizon case we have no such singularity and no Misner string; therefore, it is not clear how one can justify such a relation. Another important point is that in the absence of Misner strings we expect the entropy of the solution to receive contributions solely from the area of bolts \cite{hunter+hawking}, since the known expression of the entropy is $S= {1 \over 4} ({\cal A}_{bolt}+{\cal A}_{MS})-\beta \, H_{MS}$; without Misner strings the entropy is the area of the horizon, so this analysis is not consistent with these known results. Recently, the authors in \cite{mann19} have considered a treatment similar to the one we present here, but for Lorentzian spherically symmetric Taub-Bolt solutions, rather than a Euclidean Taub-Bolt solution with flat horizon. In their treatment the new charge $N$ is not directly proportional to $n$ but a function of $n$ and the horizon radius $r_0$ that changes from one solution to another \cite{mann06, BGK}. A second difference is that our mass calculation gives additional contributions from the annulus-like surfaces, which lead to a total mass ${\mathfrak{M}}=M-2\,N \Phi_N$, where $M={\sigma} \left(m-m_n\right)={\sigma} (m+{4n^3/l^2})$ and we have used the NUT space as a reference space.
This leads to the first law, $d\mathfrak{M}=\,T\, dS-N \, d\Phi_N$, which is different from the one obtained in \cite{mann19}. A third difference is that the chemical potential $\psi_N$, associated with their new charge $N$, was shown to be proportional to the Misner string temperature \cite{BGHK}. But since we deal with a multi-temperature system, at equilibrium the temperatures should match, which is equivalent to identifying the time direction with periodicity $\beta=8\pi n$. Although our treatment and the one in \cite{mann19} share the same general idea of introducing a pair of thermodynamical variables $N-\Phi_N$, which is a natural consequence of having independent $r_0$ and $n$, the two approaches differ in their masses, and their first laws look different, as we will see below. While preparing this work we became aware of another related paper \cite{ww}, which again introduces the same general idea of adding extra thermodynamic pairs, but this time adding two new charges $n$ and $m\,n$ to study the thermodynamics of Lorentzian Taub-Bolt and NUT solutions with spherical horizons. Although that work studies a different class of Taub-Bolt/NUT solutions with a different signature, including electrically charged Taub-Bolt solutions, it did not include the dyonic solution we study here. This work is organized as follows: In section $2$ we discuss the thermodynamical consequences of having the new thermodynamic quantity $N$, in analogy with electric and magnetic charges. In section $3$ we calculate the total mass of the spacetime, given the additional contributions of the annulus-like surfaces, and the entropy as the area of the horizon, and show that the first law and Smarr's formula are satisfied when we include $N$ and its chemical potential $\Phi_N$. In sections $4$ and $5$ we calculate all relevant thermodynamic quantities, then show that they do satisfy the first law and Smarr's formula when we include $N$ and its chemical potential $\Phi_N$, as well as the additional contributions of the magnetic charge and total mass. In section $6$ we calculate the Hamiltonian and its variation to reproduce the first law obtained in the previous sections, which confirms its form. In the last section, $7$, we conclude our work. \section{Action and Thermodynamic Ensembles} In this section we discuss the thermodynamic consequences of introducing the new charge $N$ and its chemical potential for the neutral and dyonic AdS-Taub-Bolt solutions, and argue for an alternative treatment in which the entropy is the area of the horizon and the first law is satisfied without the need for the extra condition relating the radius of the horizon to the NUT charge. The action of Einstein-Maxwell theory for an asymptotically AdS spacetime ${\cal M}$ with boundary $\del{\cal M}$ is given by \begin{equation} \label{Ibulk} I_G\,=-\frac{1}{16 \pi G}\,\int d^4 x \, \sqrt{-g}\, (R-2\Lambda-F^2) -\frac{1}{8\pi G} \int_{\del {\cal M}} d^3x \, \sqrt{h}\,K, \end{equation} where $\Lambda=-\frac{3}{l^2}$ is the cosmological constant, $A_{\mu}$ is the gauge potential and $F_{\mu\nu}=\del_{\mu}A_{\nu}-\del_{\nu}A_{\mu}$ is its field strength. The first two terms represent the Einstein-Hilbert action with negative cosmological constant and the electromagnetic contribution to the action. The final term is the Gibbons-Hawking boundary term. Here, $h_{ab}$ is the boundary metric and $K$ is the trace of the extrinsic curvature $K^{ab}$ on the boundary.
Varying the action with respect to the metric $g_{\mu \nu}$ and the gauge potential $A_{\mu}$, we get the following field equations \begin{equation} G_{\mu \nu}\,+\,\Lambda\,g_{\mu \nu}\,=\,2\,T_{\mu \nu}, \end{equation} \begin{equation} \partial _{\mu}(\sqrt{-g}\,F^{\mu \nu})\,=\,0, \end{equation} where $G_{\mu \nu}$ is the Einstein tensor and $T_{\mu \nu}$ is the stress tensor, which is given by \begin{equation} T_{\mu \nu}\,=\,F_{\alpha \mu}\,F^{\alpha}{}_{\nu}-\frac{1}{4}g_{\mu \nu}\,F^2. \end{equation} Trying to calculate the above gravitational action for the Taub-Bolt solution on shell, one encounters a finite number of divergent terms arising from integrating over the infinite AdS volume. These divergent terms can be canceled through the addition of certain local surface counterterms \cite{bala99,surf99} or through the background subtraction technique, where the action of a background space (such as that of an AdS space) is subtracted from the action of the spacetime under investigation. Here we are going to use the AdS-Taub-NUT space as our background spacetime. In our case the boundary is not a single hypersurface but, upon identifying one of the dimensions, a two-dimensional cylinder. Therefore, one expects additional contributions from the two annulus-like surfaces of the cylinder in the action as well as in the conserved charges of the solution. From various studies of charged and neutral AdS black holes with flat horizons in the literature, one can see that these surfaces have vanishing contributions when $n=0$. But for AdS solutions with nonvanishing NUT charge $n$, these surfaces have nontrivial contributions to conserved quantities, such as the electric charge, the magnetic charge and the mass of the solutions, as we will see in the coming sections. Electrically charged black holes differ from the magnetically charged ones in their boundary conditions \cite{CPW,hawking+ross}. The boundary condition on the Euclidean action of magnetic black holes fixes the magnetic charge, $Q_m$, so the partition function is $Z=Z(T,Q_m)$; for electric black holes it fixes the electric potential $\Phi_e$ (a chemical potential), so the partition function is $Z=Z(T,\Phi_e)$. As a result, it is natural to consider the canonical ensemble for the magnetic case and the grand canonical ensemble for the electric case. This leads to a mixed ensemble in the case of a dyonic black hole, with the partition function $Z(T,Q_m,\Phi_e)$. In the case of AdS-dyonic black holes with spherical horizons, one can consider the canonical ensemble with fixed charges upon adding a surface term to the action, which reads \be \tilde{I}=I-{1 \over 4\pi G}\int_{\del{\cal M}} d^3x\sqrt{h}\, n_{a}\,F^{ab}\,A_b. \ee It provides the action with the Legendre transformation needed to replace its dependence on $\Phi_e$ with a dependence on $Q_e$. It is worth mentioning here that, from a thermodynamic perspective, the NUT charge is quite similar to the magnetic charge, since both charges are expressed as integrals of some field strength over a closed surface at the boundary; therefore, these charges are fixed upon fixing the boundary metric. As a result, we evaluate the partition functions in a definite charge sector, or $Z(T,N)$, where the action is related to the Gibbs energy.
Then, \be \left({\del G \over \del N}\right)_{T} =\Phi_N, \ee defines the chemical potential for $N$\footnote{In \cite{hunter} Hunter argued for the existence of a term similar to $n\, \Phi_n$ in the action of the Taub-NUT space with spherical horizon.}. This leads to \be d G = -S\,dT +\Phi_N d N,\ee as we are going to see in the coming sections. \section{Taub-Bolt thermodynamics with flat horizon revisited} As was mentioned in the previous sections, our thermodynamic treatment of Taub-Bolt/NUT-AdS solutions assumes that $N$ is a new charge which can be varied independently of the horizon radius. It is instructive to discuss the neutral Taub-Bolt/NUT case before we move to the Taub-Bolt/NUT case with electric and magnetic charges. The Taub-Bolt/NUT spacetime is given by the following metric \begin{equation} dS^2=f(r)\left(dt+{2\,n\,x \over l}d\phi\right)^2+\frac{dr^2}{f(r)}+\left(\frac{r^2-n^2}{l^2}\right)(dx^2+l^2\,d\phi^2) \end{equation} where \begin{equation} f(r)=\frac{r^4-6n^2r^2-3n^4-2ml^2r}{l^2(r^2-n^2)} \end{equation} Here, $r$ is a radial coordinate, $x\in [-L/2,L/2]$ and $\phi \in [0,2\pi]$. The total surface area at constant $r$ is $\int l\, d\phi \, dx=2\pi\,l\,L=\sigma\,l^2$. Notice that the spatial boundary is a two-dimensional cylinder. The horizon radius $r_0$ is defined by $f(r_0)=0$. One of the distinguishing features of these solutions is the absence of the Misner string (see, for example, appendix C of \cite{mann06}), which means that we do not have the additional condition $f'(r_0)={1 \over 2n}$; as a result, the temperature is $T=f'(r_{0})/4\pi$ \cite{clifford98,surf99}. Since there are no contributions from the Misner string, the entropy is just the area of the horizon (i.e., the Bolt contribution), in contrast with the spherical horizon case. The condition for a NUT is that the mass parameter takes the value $m=m_n=-4n^3/l^2$ \cite{clifford98,surf99}, which leads to a zero-dimensional fixed-point set rather than the two-dimensional one which we call a bolt. The temperature at the horizon is proportional to the surface gravity of the black hole and is given by \begin{equation} \label{T} T=\frac{f^{\prime}(r_0)}{4\,\pi}=\frac{3\,(r_0^2-n^2)}{4 \pi l^2 r_0}. \end{equation} Calculating the on-shell action via the background subtraction method, with the Taub-NUT spacetime as the background space, one gets the action \be I=\frac{\sigma\,\beta}{2}\,\left( m+\frac{r_0(3n^2-r_0^2)+2n^3}{l^2} \right). \ee Our analysis is based on the on-shell gravitational action $I(T,N)$, which we are going to use to calculate the entropy and the chemical potential $\Phi_N$ of the conserved charge $N=\sigma\,n$. The change in the Gibbs energy $G(T,N)=I/\beta$ is \be dG=-S\,dT+\,\Phi_N \,dN, \ee where \be \left({\del G \over \del T} \right)_N=-S, \, \hspace{0.6 in} \left({\del G \over \del N} \right)_T=\, \Phi_N. \label{gibbsv}\ee Using equation (\ref{gibbsv}), or varying the action with respect to $\beta=1/T$, one can obtain the entropy from the action \be S=\beta \del_{\beta}I-I, \ee which is the area of the horizon, in contrast with the entropy calculated in \cite{mann06}: \begin{equation}\label{S} S=\frac{A_H}{4\,G}=\sigma\,{\pi (r_0^2-n^2)}. \end{equation}
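For the reader's convenience, let us verify the temperature formula above (a short computation we include here; it follows directly from the metric function). Writing $f(r)={\cal N}(r)/[l^2(r^2-n^2)]$ with ${\cal N}(r)=r^4-6n^2r^2-3n^4-2ml^2r$, the horizon condition $f(r_0)=0$ gives $2ml^2=(r_0^4-6n^2r_0^2-3n^4)/r_0$, and hence \be f'(r_0)=\frac{{\cal N}'(r_0)}{l^2(r_0^2-n^2)}=\frac{4r_0^3-12n^2r_0-2ml^2}{l^2(r_0^2-n^2)}=\frac{3(r_0^2-n^2)^2/r_0}{l^2(r_0^2-n^2)}=\frac{3(r_0^2-n^2)}{l^2\,r_0}, \ee which reproduces $T=f'(r_0)/4\pi$ as quoted above.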
\begin{center} \centerline{ \includegraphics[angle=0,width=120mm]{f2.eps}} {\footnotesize Figure 1: The boundary manifold at constant time forms a closed two-dimensional surface with three pieces: $B_{r}$, $B_{x+}$ and $B_{x-}$.} \label{fig-1} \end{center} The total mass ${\mathfrak{M}}$ can be obtained through a careful calculation of the Komar mass integral, which has three contributions, since the two-dimensional boundary is a cylinder-like surface with top and bottom annuli, $B_{x+}$ and $B_{x-}$. These are constant-$x$ surfaces, in addition to $B_{r}$, the constant-$r$ surface at large $r$, as shown in Figure 1. \be \mathfrak{M}=\int_{B}d\sigma^{\mu \nu}\,\zeta_{\mu;\nu}=\int_{B_{r}}d\sigma_r^{\mu \nu}\,\zeta_{\mu;\nu} +\int_{B_{x+}}d\sigma_{x_{+}}^{\mu \nu}\,\zeta_{\mu;\nu} +\int_{B_{x-}}d\sigma_{x_{-}}^{\mu \nu}\, \zeta_{\mu;\nu},\ee where $\zeta=\partial_t$ is a time-like Killing vector. The annulus integrals give vanishing contributions in the $n=0$ case, and the only nonvanishing contribution comes from the constant-$r$ surface at infinity. But in the $n\neq 0$ case, these annulus integrals lead to an additional term, $2N\,\Phi_N$, in the total mass, leading to ${\mathfrak{M}}=M-2N \Phi_N$, where $M={\sigma} \left(m-m_n\right)={\sigma}(m+{4n^3/l^2})$ and we have used the Taub-NUT space as a background space. Also, this mass might be obtainable from the counterterm method, keeping in mind that we have three surfaces that form the boundary at infinity; in this calculation one obtains the additional $2N\Phi_N$ term from the annulus integrals, but there is a linearly divergent term $\sim n^2\, r$ which we could not remove using the known counterterms. Another way of calculating the total mass of the spacetime is through the relation \be {\mathfrak{M}}=\del_{\beta}I=(I+S)\,T-N\,\Phi_N,\ee which is given by the following expression \be \mathfrak{M}=\sigma {\left[ 2n^2+(r_0+n)^2\right]\,(r_0-n)^2 \over 2\,l^2\,r_0}. \ee We have also found that the chemical potential is \be \Phi_N=\left({\del G \over \del N}\right)_{T}=-\frac{3\,n\,(r_0-n)^2}{2\,r_0\,l^2}.\ee Varying this mass with respect to the entropy at fixed $\Phi_N$, one finds \be \left({\del \mathfrak{M} \over \del S}\right)_{\Phi_N}=\, T.\ee One can check that the first law is satisfied, i.e., \be d\mathfrak{M}=\,T\, dS-N \, d\Phi_N. \ee Notice that the mass $\mathfrak{M}$ is not the internal energy but its Legendre transform, since the internal energy depends on extensive quantities, such as conserved charges, rather than their chemical potentials. This is very similar to identifying the mass with the enthalpy when we introduce the cosmological constant as a pressure in extended black hole thermodynamics. Here we define another type of energy, $M_H$, \be \mathfrak{M}=M_H-N\,\Phi_N, \ee where \be dM_H=d\mathfrak{M}+d(N\,\Phi_N)=\,T\, dS+\Phi_N\, dN. \ee $M_H$ depends on the extensive quantities $S$ and $N$, so it behaves more like an internal energy in these variables, and it is the Legendre transform of the total mass. We will see that upon adding pressure (allowing the cosmological constant to vary), $M_H$ is not the internal energy but the enthalpy of the spacetime. Let us stop here for a moment to discuss the meaning of the negative chemical potential $\Phi_N<0$. One notices that in order to increase $N\rightarrow N+\delta N$, the spacetime must do some work, or spend some energy, for this change to take place. This energy is nonvanishing as long as $r_0>n$, but as $r_0\rightarrow n$ it becomes vanishingly small.
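As a quick consistency check (ours, using only the expressions displayed above), the Komar result ${\mathfrak{M}}=M-2N\Phi_N$ reproduces the closed form of $\mathfrak{M}$: using $f(r_0)=0$ to write $m=(r_0^4-6n^2r_0^2-3n^4)/(2l^2r_0)$, one finds \be M-2N\,\Phi_N={\sigma}\left(m+\frac{4n^3}{l^2}\right)+\frac{3\,\sigma\,n^2\,(r_0-n)^2}{l^2\,r_0} =\frac{\sigma\,\left(r_0^4-4n^3r_0+3n^4\right)}{2\,l^2\,r_0}=\mathfrak{M}, \ee since $\left[2n^2+(r_0+n)^2\right](r_0-n)^2=r_0^4-4n^3r_0+3n^4$.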
An important issue here which needs some discussion is the effect of these annulus integrals on the action calculation. These integrals might modify the action through adding new surface terms to the Gibbons-Hawking surface term. In the previous Komar mass calculation, although the extrinsic curvature $K_{ij}$ of the constant-$x$ hypersurface is not vanishing in the $n\neq 0$ case (which contributes to the mass), its trace vanishes; as a result, the action expression is not changed. This explains why we keep using the action with the Gibbons-Hawking term at the constant-$r$ surface. If we allow the cosmological constant $\Lambda=-{3 \over l^2}$ to vary\footnote{To describe a dynamically varying cosmological constant in gravitational theories one might introduce a sourced totally anti-symmetric tensor field coupled to gravity \cite{brown1987dynamical,brown1988neutralization,duff1980quantum,nicolai1980}. In the classical equations of such a system the anti-symmetric tensor gives rise to a term that acts as a cosmological constant. In this work we consider the cosmological constant as a thermodynamic variable, following the phenomenological approach suggested and developed by the authors in \cite{enth-1,enth-2,enth-3,enth-4}, which interprets it as a pressure since its conjugate variable is the volume. This approach is sometimes called extended thermodynamics.}, according to extended thermodynamics \cite{enth-1,enth-2,enth-3,enth-4}, we get a pressure $P=-{\Lambda \over 2}$, and the thermodynamic volume is given by \be V= \left({\del G \over \del P}\right)_{T,N}={\sigma \over 3}\,(2n+r_0)(r_0-n)^2,\ee which is always positive for $r_0>n$. Notice that the volume here is different from the one discussed in \cite{clifford14-1,clifford14-2} or \cite{mann19}, since our action is $I=I_b-I_n$, i.e., our reference spacetime is the Taub-NUT instead of AdS. It is relevant here to see the variation of the different thermodynamic quantities, \be d M_H=TdS+\Phi_N\,dN+VdP,\ee \be d {\mathfrak{M}}=TdS-N\,d\Phi_N+VdP,\ee where \be \left({\del M_H \over \del N}\right)_{S,P}=\Phi_N, \hspace{1.0 in} \left({\del M_H \over \del P}\right)_{S,N}=V, \ee \be \left({\del \mathfrak{M} \over \del \Phi_N}\right)_{S,P}=-N, \hspace{1.0 in} \left({\del \mathfrak{M} \over \del P}\right)_{S,\Phi_N}=V. \ee Testing the above expressions through the Smarr formula, one finds that they satisfy \be {\mathfrak{M}}=2\, S\,T-2\,P\, V,\ee which is a nontrivial test of the consistency of this treatment; in the case of vanishing $\Lambda$ it reduces to the same expression given by Hunter \cite{hunter}. Notice here that the total mass is neither the internal energy, $U$, nor the enthalpy, but \be {\mathfrak{M}}=U+PV-N\Phi_N=H-N\Phi_N,\ee i.e., the Legendre transform of the enthalpy $H=M_H$, since it depends on $\Phi_N$ rather than $N$.
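The neutral-sector relations above can also be checked symbolically. The following short script is a sketch we add for the reader's convenience (it is not part of the original derivation); it uses only the closed-form expressions displayed above, with $N=\sigma\,n$ and $P=3/(2l^2)$, and verifies the first law and the Smarr formula:
\begin{verbatim}
# Symbolic check of the neutral Taub-Bolt sector (a convenience sketch,
# not part of the original derivation). Requires sympy.
import sympy as sp

r0, n, l, sigma = sp.symbols('r0 n l sigma', positive=True)

T    = 3*(r0**2 - n**2)/(4*sp.pi*l**2*r0)        # temperature
S    = sigma*sp.pi*(r0**2 - n**2)                # entropy (horizon area)
M    = sigma*(2*n**2 + (r0 + n)**2)*(r0 - n)**2/(2*l**2*r0)  # total mass
PhiN = -3*n*(r0 - n)**2/(2*r0*l**2)              # chemical potential
N    = sigma*n                                   # the new charge
P    = sp.Rational(3, 2)/l**2                    # pressure, P = -Lambda/2
V    = sigma*(2*n + r0)*(r0 - n)**2/3            # thermodynamic volume

# First law dM = T dS - N dPhiN + V dP, checked coefficient-wise in (r0,n,l):
for x in (r0, n, l):
    dM = sp.diff(M, x) - T*sp.diff(S, x) + N*sp.diff(PhiN, x) - V*sp.diff(P, x)
    assert sp.simplify(dM) == 0

# Smarr formula: M = 2 S T - 2 P V
assert sp.simplify(M - 2*S*T + 2*P*V) == 0
print("neutral first law and Smarr formula verified")
\end{verbatim}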
\section{Dyonic-Taub-(Bolt/NUT) Black Holes with Flat Horizon} The dyonic-Taub-Bolt/NUT solution has the following form for the metric \begin{equation} dS^2=f(r)\left(dt+{2\,n\,x \over l}d\phi\right)^2+\frac{dr^2}{f(r)}+\left(\frac{r^2-n^2}{l^2}\right)(dx^2+l^2\,d\phi^2) \end{equation} where \begin{equation} f(r)\,=\,\frac{r^4-6\,n^2\,r^2+l^2\,p^2-3\,n^4-l^2\,q^2-2\,m\,l^2\,r}{l^2\,(r^2-n^2)}, \end{equation} and the nonvanishing components of the gauge potential are \begin{equation} A_t\,=\,\frac{n\,p+V_e(\,n^2-r^2\,)-q\,r}{n^2-r^2} \end{equation} \begin{equation} A_{\phi}\,=\,\frac{p\,(n^2+r^2)-2\,n\,r\,q}{l^2\,(r^2-n^2)}\,x \end{equation} Thermodynamics imposes some regularity conditions on the gauge potential $A_\mu$ at the horizon, which are different for the Bolt and NUT cases. For the Bolt case, it requires the following relation, \begin{equation} q_b\,=\,\frac{n\,p_b+V_e\,(n^2-r_{0}^2)}{r_{0}}. \end{equation} For the NUT case, the $A_\mu$ regularity condition requires \be p_n=-2\,n\,V_e, \hspace{0.6 in} q_n=-2\,n\,V_e, \ee where these conditions lead to $f(r=n)=0$ for $m=m_n=-4\,n^3/ l^2$. The temperature at the horizon is given by \begin{equation} \label{Tdyonic} T\,=\,\frac{f^{\prime}(r_0)}{4\,\pi}\,=\,\frac{1}{4\,\pi\,l^2\,r_{0}^3}\,\left[l^2\,V_e^2\,(r_{0}^2-n^2)\,+\,3\,r_{0}^4-l^2\,p\,(p+2\,n\,V_e)-3\,r_{0}^2\,n^2\right]. \end{equation} Here we have electric and magnetic potentials, $\Phi_e$ and $\Phi_m$, which can be calculated as follows \begin{equation} \Phi_e\,=\,A_t\bigg\rvert_{\infty}\,-\,A_t\bigg\rvert_{r_{0}}\,=\,V_e \end{equation} \begin{equation} \Phi_m\,=\,\Tilde{A_t}\bigg\rvert_{\infty}\,-\,\Tilde{A_t}\bigg\rvert_{r_{0}}\,=\,\frac{p\,+\,n\,V_e}{r_{0}}, \end{equation} where $\Tilde{A_t}$ is defined through $\Tilde{F}=d\Tilde{A}$ and $\Tilde{F}$ is the dual field strength. Magnetic and electric charges are defined as the surface integrals of the electromagnetic tensor $F$ and its dual $\Tilde{F}$ \be { Q}_m\,=-\int_{B_\infty}\,F, \hspace{0.7 in} { Q}_e\,=-\,\int_{B_\infty}\,\Tilde{F}. \ee Here it is important to notice the nontrivial contributions coming from the two annuli $B_{x\pm}$, which are going to add new contributions to the electric and magnetic charges at the boundary compared to their values when $n$ is vanishing. \begin{equation} { Q}_m\,=-\int_{B_\infty}\,F\,=-\,\int_{B_r}\,F_{xy}\,dx\,dy\,-\,\int_{B_{x\pm}}\,F_{yr}\,dy\,dr. \end{equation} Then, the total magnetic charge is given by \begin{equation}\label{finalP} { Q}_{m}\,=\,{\sigma}\,(\,p\,+\,2\,n\,V_e\,). \end{equation} The electric charge $Q_e$ is defined as \begin{equation} { Q}_e\,=-\,\,\int_{B_\infty}\,\Tilde{F}\,=-\,\int_{B_r}\,\Tilde{F}_{xy}\,dx\,dy\,-\,\int_{B_x}\,\Tilde{F}_{yr}\,dy\,dr. \end{equation} Following the same analysis, one gets \begin{equation} \label{Q} { Q}_e\,=-\,\frac{\sigma}{r_{0}}\,({n\,p\,+\,V_e\,n^2\,+\,V_e\,r_{0}^2}). \end{equation} The total conserved electric charge $Q_e$ can be written as \be { Q}_e= Q^{\infty}_e-2\,N\,\Phi_m,\ee where $Q^{\infty}_e=\sigma\,q$ is the total charge when we set $n=0$. It is $Q^{\infty}_e$, rather than ${ Q}_e$, that is going to play a role in the thermodynamics. Also, the total conserved magnetic charge $Q_m$ can be written as \be { Q}_m=Q^{\infty}_m+2\,N\,\Phi_e, \ee where $Q^{\infty}_m=\sigma\,p$ is the total magnetic charge when we set $n=0$.
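One can check the electric decomposition directly (a short verification we include for convenience): substituting the Bolt regularity condition $q=q_b=\left[n\,p+V_e\,(n^2-r_0^2)\right]/r_0$ into $Q^{\infty}_e-2\,N\,\Phi_m$ gives \be \sigma\,q_b-\frac{2\,\sigma\,n\,(p+n\,V_e)}{r_0}=\frac{\sigma}{r_0}\left(n\,p+V_e\,n^2-V_e\,r_0^2-2\,n\,p-2\,n^2\,V_e\right) =-\,\frac{\sigma}{r_0}\,(n\,p+V_e\,n^2+V_e\,r_0^2), \ee which is precisely the expression (\ref{Q}) for $Q_e$; the magnetic decomposition follows immediately from $\Phi_e=V_e$.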
The electric and magnetic potentials can be written in the following form \be \Phi_e= -{Q_e+N\,\Phi_m \over \sigma \, r_0}, \hspace{0.4 in} \Phi_m= {Q_m-N\,\Phi_e \over \sigma \, r_0}.\ee \section{Thermodynamics and the first law} Calculating the on-shell action using the background method and choosing Taub-NUT as the reference space, one gets \begin{equation} I\,=\,\frac{\sigma\,\beta}{2}\, \left[\,m-q\,V_e+(p+n\,V_e)(p\,+2\,n\,V_e\,)/{r_{0}}+{r_{0}}\,(\,3\,n^2-r_{0}^2)/{l^2}\,+2n^3/l^2\right].\label{I} \end{equation} Again, varying the action with respect to $\beta$, one can calculate the entropy from the action \be S=\beta \del_{\beta}I-I, \ee \begin{equation} S\,=\,\frac{A_H}{4\,G}\,=\,\pi\,A_H\,=\sigma\,{\pi\,({r_0}^2-n^2)}, \end{equation} which is the area of the horizon. Let us check the thermodynamic quantities of these solutions, allowing at the same time for a varying cosmological constant, or pressure, $P={3 \over 2\,l^2}$. The Gibbs energy $G(T,N,\Phi_e,Q_m,P)$ is nothing but $I/\beta$. Notice that the volume is the same as for the noncharged solution, $V={\sigma \over 3}\,(2n+r_0)(r_0-n)^2$. One can check the consistency of the thermodynamics through the variation of $G$ \be \left({\del G \over \del \Phi_e} \right)_{T,N,Q_m,P} =-Q^{\infty}_e, \hspace{0.5 in} \left({\del G \over \del T} \right)_{N,\Phi_e,Q_m,P} =-S, \ee \be \left({\del G \over \del Q_m} \right)_{T,N,\Phi_e,P} =\,\Phi_m, \hspace{0.3 in} \left({\del G \over \del P}\right)_{T,N,\Phi_e,Q_m} =V. \ee In addition, one can calculate $\Phi_N$ as \be \left({\del G \over \del N}\right)_{T,\Phi_e,Q_m,P} =\Phi_N, \ee \be \Phi_N=-\, {1\, \over 2\,r_0^3}\,\left(2\,p_m\,(\,n\,p_m+\,V_e\,(r_0^2-n^2))\,+n\,{V_e}^2\,(\,n^2-3\,r_0^2)\,+{3\,n\,(n-r_0)^2\,r_0^2 \over l^2}\right), \ee where $p_m=p-2\,n\,V_e$. These results are consistent with the change in $G$, \begin{equation}\label{FL} d {G}\,=-S\,dT\,-\,Q^{\infty}_e\,d\Phi_e\,+\Phi_m\,d\,Q_m\,+\,\Phi_N\,dN+V\,dP. \end{equation} Defining the total energy/mass of the spacetime as \be {\mathfrak{M}}=\del_{\beta}I+Q^{\infty}_e\,\Phi_e, \label{tmass}\ee one gets the following expression \be {\mathfrak{M}}\, = \,{\sigma \over 2\,r_0\,}\left[ {[2n^2+(r_0+n)^2 ](r_0-n)^2 \over l^2}+ {\left[ (r_0^4+4\,n^2\,r_0^2-n^4)\,(q^2-p_m^2)-8\,n^3\,r_0\,q\,p_m\right] \over (r_0^2+n^2)^2} \right].\ee Similarly to the neutral case, the mass of the solution can be written as \be {\mathfrak{M}}\,=M-2\,N\,\Phi_N. \ee The first law of thermodynamics for the charged solution takes the form \begin{equation}\label{FLdyonic} d {\mathfrak{M}}\,=\,T\,dS\,+\,\Phi_e\,d\,Q^{\infty}_e\,+\Phi_m\,d\,Q_m\,-N\, d\Phi_N+V\,dP. \end{equation} The first law can be checked through the following relations \bea && \left({\del {\mathfrak{M}} \over \del Q_m}\right)_{S,Q^{\infty}_e,\Phi_N,P} =\Phi_m ,\hspace{0.5 in}\left({\del {\mathfrak{M}} \over \del S} \right)_{\Phi_N,Q^{\infty}_e,Q_m,P}=T, \hspace{0.3 in} \left({\del {\mathfrak{M}} \over \del \Phi_N}\right)_{S,Q^{\infty}_e,Q_m,P} =-N , \hspace{0.3 in} \nonumber\\ && \left({\del {\mathfrak{M}} \over \del Q^{\infty}_e} \right)_{S,\Phi_N,Q_m,P} =\Phi_e, \hspace{0.3 in} \left({\del {\mathfrak{M}} \over \del P}\right)_{S,\Phi_N,Q^{\infty}_e,Q_m} =V.\eea The change in the enthalpy, \be H=M_H=M-N\,\Phi_N={\mathfrak{M}}+N\,\Phi_N, \ee takes the following form \be dH=TdS+\Phi_N\,dN+\Phi_e\,d\,Q^{\infty}_e\,+\Phi_m\,d\,Q_m+V\,dP. \ee
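The purely algebraic relations among the dyonic charges and potentials can be checked in the same way as in the neutral case. The following sketch (again our addition, relying only on the expressions displayed above together with the Bolt regularity condition) verifies the decompositions of $Q_e$ and $Q_m$ and the inversion formulas for the potentials:
\begin{verbatim}
# Symbolic check of the dyonic charge/potential relations (a convenience
# sketch, not part of the original derivation). Requires sympy.
import sympy as sp

r0, n, l, sigma = sp.symbols('r0 n l sigma', positive=True)
p, Ve = sp.symbols('p V_e', real=True)

qb   = (n*p + Ve*(n**2 - r0**2))/r0   # Bolt regularity condition q = q_b
Phie = Ve                             # electric potential
Phim = (p + n*Ve)/r0                  # magnetic potential
N    = sigma*n                        # the new charge
Qe   = -sigma*(n*p + Ve*n**2 + Ve*r0**2)/r0
Qm   = sigma*(p + 2*n*Ve)

# decompositions of the total charges
assert sp.simplify(Qe - (sigma*qb - 2*N*Phim)) == 0  # Q_e = Q_e^inf - 2N Phi_m
assert sp.simplify(Qm - (sigma*p + 2*N*Phie)) == 0   # Q_m = Q_m^inf + 2N Phi_e

# inversion formulas for the potentials
assert sp.simplify(Phie + (Qe + N*Phim)/(sigma*r0)) == 0
assert sp.simplify(Phim - (Qm - N*Phie)/(sigma*r0)) == 0
print("dyonic charge/potential relations verified")
\end{verbatim}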
Again, testing these expressions through Smarr's formula, one finds that the above quantities satisfy \begin{equation} {\mathfrak{M}}=2\, S\,T-2\,P\, V\,+\,Q^{\infty}_e\,\Phi_e\,+\,Q_m\,\Phi_m,\end{equation} which is a sign of the consistency of this approach. If we substitute $\mathfrak{M}=M-2N\Phi_N$, our Smarr relation is identical to the one obtained recently in \cite{smarr-TN} for the spherical Lorentzian Taub-Bolt case. It is interesting to see here that the electric charge that enters the first law is $Q^{\infty}_e$, but the magnetic charge is the full charge $Q_m$. In order to confirm these results formally, we are going to express the mass variation using Hamiltonian calculations. This is the task of the coming section. \section{Hamiltonian and First Law} Following the Hamiltonian calculation in \cite{horowitz} and its variation, one can confirm the previous results. To calculate the Hamiltonian we analytically continue the dyonic solution to get its Lorentzian version. Let us take \be t=i\, \tau, \hspace{0.5 in} n=i\, \tld{n},\hspace{0.5 in} q=i\,\tld{q},\hspace{0.5 in} V_e=-i\, \tld{V}_e,\hspace{0.5 in} p=\tld{p}.\ee Our solution becomes \begin{equation} dS^2=-f(r)\left(d\tau+{2\,\tld{n}\,x \over l}d\phi\right)^2+\frac{dr^2}{f(r)}+\left(\frac{r^2+\tld{n}^2}{l^2}\right)(dx^2+l^2\,d\phi^2), \end{equation} where \begin{equation} f(r)\,=\,\frac{r^4+6\,\tld{n}^2\,r^2+l^2\,p^2-3\,\tld{n}^4+l^2\,\tld{q}^2-2\,m\,l^2\,r}{l^2\,(r^2+\tld{n}^2)}, \end{equation} and the nonvanishing components of the gauge potential are \begin{equation} \tld{A}_t\,=\,\frac{\tld{n}\,\tld{p}+\tld{V}_e(r^2+\tld{n}^2)-\tld{q}\,r}{r^2+\tld{n}^2} \end{equation} \begin{equation} \tld{A}_{\phi}\,=\,\frac{\tld{p}\,(\tld{n}^2-r^2)-2\,\tld{n}\,r\,\tld{q}}{l^2\,(r^2+\tld{n}^2)}\,x \end{equation} The thermodynamical quantities of this solution are \bea && \tld{Q}_e\,={\sigma}\,(\tld{q}-2 \tld{n}\,\tld{\Phi}_m)=\tld{Q}^{\infty}_e-2 \tld{N}\,\tld{\Phi}_m, \hspace{0.5 in} \tld{\Phi}_e= \tld{V}_e \nonumber\\ &&\tld{{ Q}}_{m}\,=\,{\sigma}\,(\,\tld{p}\,+\,2\,\tld{n}\,\tld{\Phi}_e\,)=\tld{Q}^{\infty}_m+2 \tld{N}\,\tld{\Phi}_e,\hspace{0.5 in}\tld{\Phi}_m= {\tld{p}+\tld{n}\,\tld{V}_e \over r_0}. \eea Putting the metric in the ADM form \begin{equation} ds^2\,=-\,N^2\,dt^2\,+\,h_{ij}\,(\,dx^i+\beta^i\,dt)\,(dx^j+\beta^j\,dt) \end{equation} where \begin{equation*} N^2\,=\,\frac{f(r)\,(r^2+\tld{n}^2)}{r^2+\tld{n}^2-4\,x^2\,\tld{n}^2\,f(r)\,l^{-2}}, \hspace{0.6 in} \beta^{\phi}\,=\,-\,\frac{2}{l} \left( \frac{\,\tld{n}\,x\,f(r)}{{r^2+\tld{n}^2-4\,x^2\,\tld{n}^2\,f(r)l^{-2}}}\right), \end{equation*} and $N$ is the lapse function and $\beta$ is the shift vector. To perform the $3+1$ splitting, we start with a time-flow vector $t^\mu$, defined by $t^\mu \nabla_\mu t\,=\,1$, or equivalently $t^\mu\,=\,\delta^{\mu}_0$. Also, the spatial metric can be constructed as follows, \begin{equation} h^{\mu \nu}=g^{\mu \nu}+ n^{\mu}n^{\nu} \end{equation} where \begin{equation} n^{\mu}=\frac{1}{N}\,(\,t^{\mu}-\beta^{\mu}). \end{equation} In the ADM $3+1$ split, we have a spatial hypersurface $\Sigma$ with a unit normal vector $n_{\mu}$; the metric on $\Sigma$ is $h^{\mu \nu}$. In the Hamiltonian formalism our dynamical fields are $h_{ab}$ and $A_a$ (after dropping the tildes), where $a,b=1,2,3$. The momenta conjugate to the field variables are $\Pi_G^{a b}$ for the gravitational field and $\pi^{a}$ for the electromagnetic field,
where \begin{equation} \Pi_G^{a b}=\frac{\partial L}{\partial \Dot{h}_{ab}}=-\frac{\sqrt{h}}{2 \kappa}\,(\,K^{a b}-h^{ab}\,K\,), \end{equation} \begin{equation} \pi^{a}=\frac{\partial L}{\partial \Dot{A}_{a}}=\frac{\sqrt{h}}{2 \kappa}\,(\,F^{\mu a}n_{\mu}\,)= \frac{\sqrt{h}}{2 \kappa}\,E^a, \end{equation} and $K^{ab}$ is the extrinsic curvature of $h_{ab}$ and $K$ is its trace. We have two sets of constraints, $C_\mu$ and $C$, where $C$ is the Gauss constraint, $D_a E^a=0$, while $C_\mu$ are the general relativity constraints determined by the Einstein field equations. One then gets \begin{equation} H=\int_{\Sigma } \xi^{\mu} C_{\mu}+\xi^{\mu} A_{\mu} C \end{equation} where $\xi^{\mu}$ is the time-evolution vector, a time-like vector that vanishes at the horizon, and \begin{equation} \label{Gaussconst} C=-\frac{\sqrt{h}}{4}\,D_a\,E^a \end{equation} \bea \label{GRconst1} C_{0}&&=-2\sqrt{h}\,(G_{\mu\nu}+\Lambda \, g_{\mu\nu}-2\,T_{\mu \nu})\,n^\mu n^\nu \nonumber\\ &&=-\frac{\sqrt{h}}{4} R^{(3)}-{4 \sqrt{h}}(\Pi_G^{ab}\Pi^G_{ab}-{\Pi_G^2/2})+2\sqrt{h}\Lambda-8\frac{\pi^a \pi_a}{\sqrt{h}}-\frac{1}{ 4} \sqrt{h} F_{ab} F^{ab}\nonumber\\ \eea \be \label{GRconst2} C_{a}=-2\,\sqrt{h}\,(G_{a\nu}+\Lambda \, g_{a\nu}-2\,T_{a \nu}) n^\nu = -2\,{\sqrt{h}}\,h_{ab}\, D_c({\Pi^{bc}\over \sqrt{h}})+4\,F_{ab}\,\pi^b. \ee The vanishing of the Hamiltonian leads to a relation between the mass and the other thermodynamic parameters \cite{horowitz}. Upon varying the mass with respect to the electromagnetic quantities, one gets \begin{equation} \delta M_{EM}= -\int dS_b \left[\xi^\mu A_\mu \delta E^b+(N F^{a b}-2\,E^{[a} \beta^{\,b]})\,\delta A_a \right], \end{equation} where $\xi$ is the time-like vector $\partial_t$. This is very similar to the variation in \cite{horowitz}, but in four dimensions. Calculating this expression, and taking into account that for every term we have three contributions coming from the boundary surfaces, namely the constant-$r$ surface $B_r$ and the two annuli $B_{x_{\pm}}$, one gets the following result \begin{equation}\label{deltaHem} \delta M_{EM}= \,\tld{V}_e\,d\,\tld{Q}^{\infty}_e+\, \tld{\Phi}_m\,d\,\tld{Q}_m, \end{equation} which reproduces the electrodynamic part of the first law in a more formal manner. This confirms the previous thermodynamic results. \section{Conclusion} In this work we present a new treatment for studying the thermodynamics and the first law of topological neutral and dyonic Taub-Bolt/NUT-AdS solutions in four dimensions. This treatment is based on introducing a new charge $N$ which is directly related to the NUT charge, $n$. In our treatment, $N$ can vary independently of the horizon radius $r_0$, as a result of the absence of the Misner string in the Euclidean Taub-Bolt/NUT solution. Upon identifying one of the coordinates, the spatial boundary of these solutions is a cylinder-like surface at large radial distance $r$, with three distinguished pieces: two at constant $x$ and one at constant $r$. While the two constant-$x$ surfaces, or annulus-like surfaces, do not receive fluxes of conserved quantities in the $n=0$ case, the $n \neq 0$ case is different, and one can show that they bring additional contributions to the mass and the electric and magnetic charges, which are given as $Q_e\sim q-2n\Phi_m$, $Q_m \sim p+2n\Phi_e$ and $\mathfrak{M} \sim m-2n\Phi_n$. The calculated thermodynamic quantities obey the first law of thermodynamics, and the entropy is the area of the horizon.
Using $N=\sigma\,n$ as a new charge, we were able to show that the first law is satisfied in the neutral and dyonic cases using the quantities $Q_e$, $Q_m$ and $\mathfrak{M}$, the entropy $S$ as the area of the horizon, together with $N$ and $\Phi_N$. Furthermore, these quantities do satisfy Smarr's formula in the neutral and dyonic cases. One of the intriguing issues of the dyonic case is that, although the full magnetic charge $Q_m$ contributes to the thermodynamics, only part of the electric charge, namely $Q^{\infty}_e$, contributes to it. To make sure that we got the correct first law, we followed the work in \cite{horowitz} to calculate the Hamiltonian and its variation. Using the Hamiltonian variation we obtained a formula similar to that of \cite{horowitz}, but in four dimensions, which reproduced the first law obtained in the previous sections. This reflects the consistency of our thermodynamic results. It would be interesting to extend this treatment to Taub-Bolt solutions with Lorentzian signature and spherical or hyperbolic horizons; this is under investigation now and we are going to report on it soon. Another issue worth investigating is the reason why the first law is only satisfied with $Q^{\infty}_e$ and $Q_m$. We hope to address this issue in future work.
\section{Introduction}\label{SecIntro} Given the class of metric spaces, consider the following three kinds of morphisms: (1) homeomorphisms --- maps preserving the topological structure ---, (2) coarse equivalences --- maps preserving the large-scale geometry --- and (3) bijective coarse equivalences. In short, coarse equivalences uniformly send close points to close points, far points to far points, and have large image in their codomain. Although the set of homeomorphisms of a metric space forms a group under composition, this is not the case for coarse equivalences. Indeed, coarse equivalences need to be neither injective nor surjective. However, the set of coarse equivalences on a metric space $(X,d)$ becomes a group after identifying coarse equivalences which are \emph{close} to each other (see Definition~\ref{def:close}). We denote by $\mathrm{Coa}(X)$ the group of all coarse equivalences of $X$ modulo the closeness relation and by $\mathrm{BijCoa}(X)$ the group of all bijective coarse equivalences of $X$ modulo closeness (we refer the reader to \S\ref{SecPrelim} for details). In case $X$ is locally compact, homeomorphisms correspond, thanks to Gelfand's transform, to automorphisms of the $\mathrm{C}^*$-algebra of continuous functions on $X$ vanishing at infinity, $C_0(X)$. The goal of this paper is to give an operator algebraic characterisation of the groups $\mathrm{Coa}(X)$ and $\mathrm{BijCoa}(X)$, at least when dealing with metric spaces with certain regularity properties. The objects suited to this coarse Gelfand-type correspondence are Roe-type $\mathrm{C}^*$-algebras. These $\mathrm{C}^*$-algebras were introduced by Roe in \cite{Roe1993} for their connections to (higher) index theory and the associated applications to manifold topology and geometry (\cite{Roe1996}). The Roe algebra and its uniform version were studied precisely to detect $\mathrm{C}^*$-algebraically the large-scale geometry of metric spaces. Their study was boosted due to their intrinsic relation with the coarse Baum-Connes conjecture and consequently with the coarse Novikov conjecture (\cite{Yu2000}). Recently, Roe-type algebras and their $K$-theory have been used as a framework in mathematical physics to study the classification of topological phases and the topology of quantum systems (\cite{EwertMeyer2019,Kubota2017}). We now describe our main results. Given a metric space $(X,d)$ and a Hilbert space $H$, $\ell_2(X,H)$ denotes the Hilbert space of square summable $H$-valued functions on $X$. Operators in $\mathcal B(\ell_2(X,H))$ can be seen as $X\times X$-matrices whose entries are in $\mathcal B(H)$. Given an operator $a=(a_{xy})_{x,y\in X}$ in $\mathcal B(\ell_2(X,H))$, we define its propagation as the quantity \[ \propg(a)=\sup \{d(x,y)\mid a_{xy}\neq 0\}. \] If $H$ is separable and infinite-dimensional, the \emph{$\mathrm{C}^*$-algebra of band-dominated operators of $(X,d)$}, denoted by $\mathrm{BD}(X)$, is the norm closure of the $^*$-algebra of finite propagation operators. If in addition we demand that each entry $a_{xy}$ be compact, we obtain the \emph{Roe algebra of $X$}, $\mathrm{C}^*(X)$. Finally, if $H=\mathbb C$, the \emph{uniform Roe algebra of $X$}, $\mathrm{C}^*_u(X)$, is defined once again as the norm closure of the $^*$-algebra of finite propagation operators on $\ell_2(X)=\ell_2(X,\mathbb{C})$ (see \S\ref{SecPrelim} for more details on these definitions). Let us first focus on the case of $\mathrm{BijCoa}(X)$.
A bijective coarse equivalence of $X$ induces an automorphism of $\mathrm{C}^*_u(X)$ in a canonical way, and two bijective coarse equivalences are close if and only if the associated isomorphisms are unitarily equivalent in $\mathrm{C}^*_u(X)$ (see \S\ref{SubsectionCanonicalMap}). This gives a well-defined canonical group monomorphism from $\mathrm{BijCoa}(X)$ into $\mathrm{Out}(\mathrm{C}^*_u(X))$, the latter being the group of outer automorphisms of $\mathrm{C}^*_u(X)$, i.e., \[ \mathrm{Out}(\mathrm{C}^*_u(X))=\mathrm{Aut}(\mathrm{C}^*_u(X))/\mathrm{Inn}(\mathrm{C}^*_u(X)). \] \begin{problemi}[Gelfand-type duality for bijective coarse equivalences] Let $X$ be a uniformly locally finite metric space. Is the canonical homomorphism \[\mathrm{BijCoa}(X)\to \mathrm{Out}(\mathrm{C}^*_u(X))\] a group isomorphism?\label{ProblemA} \end{problemi} The work of White and Willett on uniqueness of Cartan masas in uniform Roe algebras in the presence of Yu's property A can be used to give a positive answer to Problem \ref{ProblemA} (again in the presence of property A). The following, proven as Theorem \ref{ThmIsoCoarseAutURA}, is a consequence of \cite[Theorem E]{WhiteWillett2017}: \begin{theoremi}\label{thm:uniform} Let $(X,d)$ be a uniformly locally finite metric space with property A. The canonical homomorphism \[ \mathrm{BijCoa}(X)\to\mathrm{Out}(\mathrm{C}^*_u(X)) \] is a group isomorphism. \end{theoremi} Theorem \ref{thm:uniform} gives an alternative way to compute the outer automorphism group of a uniform Roe algebra. As a simple application, it can be used to show that all automorphisms of $\mathrm{C}^*_u(\mathbb{N})$ are inner and that $\mathrm{Out}(\mathrm{C}^*_u(\mathbb{Z}))\cong\mathbb{Z}_2$ (see Corollary \ref{CorOutNandZ}). Let us now focus on the case of coarse equivalences. Although coarse equivalences do not induce uniform Roe algebra isomorphisms (for instance, all finite metric spaces are coarsely equivalent, but, if $X$ and $Y$ are finite, $\mathrm{C}^*_u(X)$ and $\mathrm{C}^*_u(Y)$ are isomorphic if and only if $|X|=|Y|$), they do induce isomorphisms between Roe algebras. If $X$ is a uniformly locally finite metric space, assigning an element of $\mathrm{Aut}(\mathrm{C}^*(X))$ to a coarse equivalence of $X$ is highly non-canonical (see \S\ref{SubsectionCanonicalMap}). Such an assignment becomes canonical when considered as a map from $\mathrm{Coa}(X)$ to $\mathrm{Aut}(\mathrm{C}^*(X))$ modulo $\mathrm{Inn}(\mathrm{BD}(X))$ --- notice that there are a couple of hidden claims here: (1) we are allowed to mod out our maps and (2) $\mathrm{Inn}(\mathrm{BD}(X))$ is a normal subgroup of $\mathrm{Aut}(\mathrm{C}^*(X))$ (the latter follows since we prove that $\mathrm{BD}(X)$ is the multiplier algebra of $\mathrm{C}^*(X)$, see Theorem~\ref{prop:mult}). We form the outer automorphism group of $\mathrm{C}^*(X)$ by letting \[ \mathrm{Out}(\mathrm{C}^*(X))=\mathrm{Aut}(\mathrm{C}^*(X))/\mathrm{Inn}(\mathrm{BD}(X)). \] \begin{problemi}[Gelfand-type duality for coarse equivalences] Let $X$ be a uniformly locally finite metric space. Is the canonical homomorphism \[\mathrm{Coa}(X)\to \mathrm{Out}(\mathrm{C}^*(X))\] a group isomorphism?\label{ProblemB} \end{problemi} We give a positive answer to Problem \ref{ProblemB} above in the case of property A. This is proven as Theorem~\ref{ThmIsoCoa}. \begin{theoremi}\label{thm:main} Let $(X,d)$ be a uniformly locally finite metric space with property A. The canonical homomorphism \[ \mathrm{Coa}(X)\to\mathrm{Out}(\mathrm{C}^*(X)) \] is a group isomorphism.
\end{theoremi} Computing $\mathrm{Coa}(X)$ is in general a very difficult task, even for a simple space such as $\mathbb{Z}$. However, using results present in the literature, Theorem \ref{thm:main} gives us some interesting applications. For instance, $\mathrm{Out}(\mathrm{C}^*(\mathbb{Z}))$ contains isomorphic copies of Thompson's group $F$ and of the free group of rank continuum (see Corollary \ref{CorThomp}). Using results of Eskin, Fisher and Whyte (\cite{EskinFisherWhyte2012Annals}), and of Farb and Mosher (\cite{FarbMosher1998Inventiones}), we obtain a complete computation of $\mathrm{Out}(\mathrm{C}^*(X))$ for solvable Baumslag-Solitar groups, and for lamplighter graphs $F\wr \mathbb{Z}$, where $F$ is a finite group (see Corollaries~\ref{CorBaumslagSolitar} and~\ref{CorLamplighter}). For our main results (Theorems~\ref{thm:uniform} and~\ref{thm:main}), we assume Yu's property A. This is one of the best known regularity properties in the setting of coarse spaces. It is equivalent to many algebraic and geometric properties such as the non-existence of noncompact ghost operators in $\mathrm{C}^*_u(X)$ (\cite[Theorem 1.3]{RoeWillett2014}), nuclearity of $\mathrm{C}^*_u(X)$ (\cite[Theorem 5.5.7]{BrownOzawa}), and the operator norm localisation property, ONL (\cite[Theorem 4.1]{Sako2014}) --- the latter is in fact the formulation of property A we use in our proofs. Property A is fairly broad: for example, all finitely generated exact groups have property A (more precisely, their Cayley graphs, when endowed with the shortest path metric, have property A). In particular, this includes the classes of linear groups, groups with finite asymptotic dimension and amenable groups. A key step in the proof of Theorem~\ref{thm:main} is to show `uniqueness of Cartan masas' in Roe algebras, generalising \cite[Theorem E]{WhiteWillett2017} to Roe algebras. In the case of uniform Roe algebras, the canonical Cartan masa of $\mathrm{C}^*_u(X)$ is $\ell_\infty(X)$. In $\mathrm{C}^*(X)$, there are many canonical Cartan masas which depend on the choice of an orthonormal basis of $H$. Precisely, if $\bar\xi=(\xi_n)_n$ is an orthonormal basis of $H$, we obtain a Cartan masa of $\mathrm{C}^*(X)$ by considering all operators $a\in \mathcal{B}(\ell_2(X,H))$ such that for all $x\in X$ there is $(\lambda_n)_n\in c_0$ for which $a(\delta_x\otimes \xi_n)=\lambda_n(\delta_x\otimes \xi_n)$ for all $n\in\mathbb{N}$. This masa is isomorphic to $\ell_\infty(c_0)$ and we denote it by $\ell_\infty(X,\bar \xi)$. \begin{theoremi}\label{ThmCartan} Let $X$ be a uniformly locally finite metric space with property A, and let $\bar\xi$ and $\bar\zeta$ be orthonormal bases of the separable Hilbert space $H$. If $\Phi\in\mathrm{Aut}(\mathrm{C}^*(X))$, then there is a unitary $v\in\mathrm{BD}(X)$ such that \[\Ad(v)\circ\Phi(\ell_\infty(X,\bar \xi))=\ell_\infty(X,\bar \zeta).\] \end{theoremi} Theorem \ref{ThmCartan} is proven below as Theorem \ref{ThmIsoMakeAsympCoarseLike}. We point out that this result is actually stronger: the unitary $v\in \mathrm{BD}(X)$ is chosen so that $\Ad(v)\circ\Phi:\mathrm{C}^*(X)\to \mathrm{C}^*(X)$ respects the coarse geometry of $X$ in a very strong sense (see Definition \ref{DefiCoarseLikeProp} and Theorem \ref{ThmIsoMakeAsympCoarseLike} for details). To prove Theorem~\ref{ThmCartan}, and consequently Theorem~\ref{thm:main}, we state and prove uniform approximability results for Roe algebras.
These results ensure that certain maps between Roe algebras respect in some sense the coarse geometry of the metric spaces. The concept of uniform approximability was introduced in \cite{BragaFarah2018}, and then further developed in \cite{BragaFarahVignati2019} and \cite{BragaFarahVignati2018}, for maps between uniform Roe algebras. These tools were already applied for a better understanding of uniform Roe algebras (e.g., \cite{LorentzWillett2020,WhiteWillett2017}), and we believe our generalisations can be the key to a further development of the theory of Roe algebras. The paper is structured as follows: \S\ref{SecPrelim} contains preliminaries and sets the notation. Our uniform approximability results are proven in \S\ref{SecUnifApprox}, and applied in \S\ref{SecMult} and \S\ref{SectionMakingIsoCoarseLike}, where our main theorems are proved. \S\ref{SecApp} is dedicated to some applications. \section{Notation and preliminaries}\label{SecPrelim} If $H$ is a Hilbert space, $\mathcal{B}(H)$ denotes the space of bounded operators on $H$ and $\mathcal{K}(H)$ its ideal of compact operators. We denote the identity of $\mathcal B(H)$ by $1_H$. Given a set $X$, we denote by $\ell_2(X,H)$ the Hilbert space of square-summable functions $X\to H$. If $x\in X$ and $\eta\in H$, $\delta_x\otimes \eta$ is the function that sends $x$ to $\eta$ and all other elements of $X$ to $0$. Elements in $\mathcal B(\ell_2(X,H))$ can be viewed as (bounded) $X\times X$ matrices where each entry is an operator in $\mathcal B(H)$. With this identification, given $a\in \mathcal{B}(\ell_2(X,H))$, and $x,y \in X$, we denote by $a_{xy}$ the operator in $\mathcal{B}(H)$ determined by \[ \langle a_{xy}\xi,\eta\rangle=\langle a(\delta_x\otimes\xi),\delta_y\otimes\eta\rangle, \text{ for all } \xi,\eta\in H.\] Multiplication in $\mathcal B(\ell_2(X,H))$ corresponds to matrix multiplication, that is, \[ (ab)_{xy}=\sum_{z\in X}a_{xz}b_{zy}. \] We make the following abuse of notation throughout this article: given a metric space $X$ and a Hilbert space $H$, we write $\chi_{C}$ to denote both \begin{enumerate} \item the projection in $\ell_\infty(X)\subset \mathcal{B}(\ell_2(X))$ defined by $\chi_C\delta_x=\delta_x$ if $x\in C$ and $\chi_C\delta_x=0$ if $x\not\in C$, and \item the projection in $\ell_\infty(X,\mathcal{B}(H))$ defined by $\chi_C\delta_x\otimes \xi=\delta_x\otimes \xi$ if $x\in C$ and $\chi_C\delta_x\otimes \xi=0$ if $x\not\in C$. \end{enumerate} If $\chi_C$ denotes a projection in $ \ell_\infty(X)$, when considering elements of $\mathcal{B}(\ell_2(X,H))$, $\chi_C$ will always be accompanied by an operator on $H$ --- for instance: $\chi_C\otimes p$, where $p\in \mathcal{B}(H)$. Given $a\in \mathcal{B}(\ell_2(X,H))$ and $x,y\in X$, the operator $a_{xy}$ can be identified with the standard restriction of $\chi_{\{x\}}a\chi_{\{y\}}$ to an operator in $\mathcal{B}(H)$. Let $(X,d)$ be a metric space. For $a\in\mathcal B(\ell_2(X,H))$, the \emph{support of $a$} is defined by $\supp(a)=\{(x,y)\in X^2\mid a_{xy}\neq 0\}$ and its \emph{propagation} by \[\propg(a)=\sup\{d(x,y)\mid (x,y)\in \supp(a)\}.\] If $H$ is infinite-dimensional and separable, we construct the \emph{band-dominated algebra of $X$}, denoted $\mathrm{BD}(X)$, by letting \[ \mathrm{BD}(X)=\overline{\{a\in\mathcal B(\ell_2(X,H))\mid \propg(a) \text{ is finite} \}}. \] The \emph{Roe algebra} is the ideal of $\mathrm{BD}(X)$ given by the \emph{locally compact} operators, i.e., those $a$ such that $a_{xy}$ is compact for all $x,y\in X$.
So \[ \mathrm{C}^*(X)=\{a\in \mathrm{BD}(X) \mid \forall x,y\in X, \ a_{xy}\in\mathcal K(H)\}. \] If $H=\mathbb C$, all band-dominated operators are locally compact, as $\mathcal K(H)=\mathcal{B}(H)$. In this case, the algebra of band-dominated operators is called the \emph{uniform Roe algebra of $X$} and denoted by $\mathrm{C}^*_u(X)$. The algebra $\mathrm{C}^*(X)$ is not unital, but it has a well-behaved approximate identity if $X$ is uniformly locally finite. A metric space $(X,d)$ is \emph{uniformly locally finite} (\emph{u.l.f.}) if \[\sup_{x\in X}|B_r(x)|<\infty\ \text{ for all }\ r>0,\] where $|B_r(x)|$ denotes the cardinality of the closed ball of radius $r$ centered at $x$. Let $(p_j)_j$ be an increasing sequence of finite rank projections on $H$ which converges to the identity $1_H$ in the strong operator topology. Given a function $f\colon X\to\mathbb{N}$, let \[q_f=\mathrm{SOT}\text{-}\sum_{x\in X}\chi_{\{x\}}\otimes p_{f(x)}.\] Each $q_f$ is in $\mathrm{C}^*(X)$, and $q_f\leq q_g$ whenever $f(x)\leq g(x)$ for all $x\in X$. \begin{proposition}\label{prop:approxid} Let $X$ be u.l.f.\@ metric space. The net $\{q_f\mid f\colon X\to\mathbb{N}\}$ is an approximate identity for $\mathrm{C}^*(X)$ consisting of projections. \end{proposition} \begin{proof} Pick $a\in\mathrm{C}^*(X)$ with $\propg(a)\leq r$, and let $\varepsilon>0$; since the net $(q_f)_f$ is bounded and finite propagation operators are dense in $\mathrm{C}^*(X)$, it suffices to consider such $a$. Since $X$ is u.l.f.\@ and $r$ is fixed, we can find $a_0$ and $a_1$ in $\mathrm{C}^*(X)$ with propagation at most $ r$ and finite sets $A_n\subseteq X$, for $n\in\mathbb{N}$, such that \begin{itemize} \item $a=a_0+a_1$, \item $\supp(a_0)\cap \supp(a_1)=\emptyset$, \item $a_0=\sum_{n }\chi_{A_{2n}}a_0\chi_{A_{2n}}$ and $a_1=\sum_{n }\chi_{A_{2n+1}}a_1\chi_{A_{2n+1}}$, and \item if $|i-j|\geq 2$ then $A_i\cap A_j=\emptyset$. \end{itemize} Since each $A_i$ is finite and each entry of $\chi_{A_{2i}}a_0$ is compact, there is a sequence of natural numbers $(n(i))_i$ such that \[\norm{(\chi_{A_{2i}}\otimes p_{n(i)}) a_0-\chi_{A_{2i}}a_0}<\varepsilon\] for all $i\in\mathbb{N}$. Define $f_0\colon X\to \mathbb{N}$ by letting $f_0(x)=n(i)$ if $x\in A_{2i}$, and $f_0(x)=0$ elsewhere. Then $\norm{q_{f_0}a_0-a_0}<\varepsilon$. Notice that if $f_0\leq f$, then $q_fq_{f_0}=q_{f_0}$, hence \[ \norm{q_fa_0-a_0}\leq\norm{q_fa_0-q_fq_{f_0}a_0}+\norm{q_fq_{f_0}a_0-a_0}\leq2\varepsilon. \] One can analogously define $f_1\colon X\to \mathbb{N}$ such that $\norm{q_{f_1}a_1-a_1}<\varepsilon$. Then $\norm{q_ga-a}\leq 4\varepsilon$ for all $g\colon X\to \mathbb{N}$ with $g\geq {\max\{f_0,f_1\}}$. Since $\varepsilon$ was arbitrary, we are done. \end{proof} By its definition, the Roe algebra $\mathrm{C}^*(X)$ is an essential ideal in $\mathrm{BD}(X)$, hence $\mathrm{BD}(X)\subseteq\mathcal M(\mathrm{C}^*(X))$ (see e.g., \cite[II.7.3.5]{Blackadar.OA}). We will prove that this is an equality in Theorem~\ref{prop:mult}. \subsection{Coarse geometry} In the setting of coarse geometry, homeomorphisms are replaced by maps preserving the large-scale geometry. The following is a crucial definition that we will use throughout the paper. \begin{definition}\label{def:close} Let $(X,d)$ and $(Y,\partial)$ be metric spaces, and let $f$ and $g$ be functions $X\to Y$. The map $f$ is called \emph{coarse} if for all $r>0$ there is $s>0$ so that \[d(x,y)<r\text{ implies }\partial (f(x),f(y))<s.\] We say that $f$ and $g$ are \emph{close} if \[ \sup_x\partial(f(x),g(x))<\infty.
\] The map $f$ is called a \emph{coarse equivalence} if it is coarse and there exists a coarse map $h\colon Y\to X$ so that $f\circ h$ and $h\circ f$ are close to $\mathrm{Id}_Y$ and $\mathrm{Id}_X$, respectively. \end{definition} Notice that a coarse equivalence $f\colon X\to Y$ is automatically \emph{cobounded}, i.e., $\sup_{y\in Y}\partial(y,f(X))<\infty$. Throughout the paper, every time $X$ and $Y$ are metric spaces, we will assume without further notice that $d$ and $\partial$ are the metrics of $X$ and $Y$, respectively. \subsection{The canonical maps}\label{SubsectionCanonicalMap} We now present the construction of the canonical map associating an automorphism of $\mathrm{C}^*_u(X)$ to a bijective coarse equivalence of $X$. We then prove Theorem~\ref{thm:uniform}. Finally, we associate an automorphism of $\mathrm{C}^*(X)$ to a coarse equivalence of $X$. Although such an association is highly noncanonical, we prove that it becomes canonical when $\mathrm{Aut}(\mathrm{C}^*(X))$ is replaced by $\mathrm{Out}(\mathrm{C}^*(X))$. Let $f\colon X\to X$ be a bijective coarse equivalence. Consider the unitary $v_f\in \mathcal{B}(\ell_2(X))$ given by $v_f\delta_x=\delta_{f(x)}$ for all $x\in X$. Since $f$ and its inverse are coarse, $\Ad(v_f)$ is an automorphism of $\mathrm{C}^*_u(X)$. If $f$ and $g$ are bijections, then $v_{f\circ g}=v_fv_g$ and $v_{f^{-1}}=v_f^*$. Moreover, $v_f\in\mathrm{C}^*_u(X)$ if and only if $f$ is close to the identity; this gives an injective homomorphism $\mathrm{BijCoa}(X)\to\mathrm{Out}(\mathrm{C}^*_u(X))$. The proof of Theorem~\ref{thm:uniform} amounts then to showing surjectivity when property A is assumed. To do so, we recall the work of White and Willett, who proved uniqueness of Cartan masas in uniform Roe algebras for property A spaces. We slightly restate their result. \begin{theorem}[Theorem E of \cite{WhiteWillett2017}]\label{thm:WW} Let $X$ be a u.l.f.\@ metric space with property $A$. Let $\Phi\in\mathrm{Aut}(\mathrm{C}^*_u(X))$. Then there is a unitary $u\in\mathrm{C}^*_u(X)$ such that $\Ad(u)\circ\Phi(\ell_\infty(X))=\ell_\infty(X)$. \end{theorem} We restate Theorem~\ref{thm:uniform} for convenience. \begin{theorem}\label{ThmIsoCoarseAutURA} Let $X$ be a u.l.f.\@ metric space with property A. The canonical map \[ \mathrm{BijCoa}(X)\to \mathrm{Out}(\mathrm{C}^*_u(X)) \] is a group isomorphism. \end{theorem} \begin{proof} If $\Phi \in \mathrm{Aut}(\mathrm{C}^*_u(X))$, then Theorem \ref{thm:WW} gives a unitary $u \in \mathrm{C}^*_u(X)$ so that $\Psi=\Ad(u)\circ \Phi$ is an automorphism of $\mathrm{C}^*_u(X)$ which takes $\ell_\infty(X)$ to itself. As every automorphism of $\mathrm{C}^*_u(X)$ is implemented by a unitary in $\mathcal{B}(\ell_2(X))$ (see \cite[Lemma 3.1]{SpakulaWillett2013AdvMath}), let $v$ be this unitary, i.e., $\Psi=\Ad(v)$. As $v\ell_\infty(X)v^*=\ell_\infty(X)$, there is a bijection $f\colon X\to X$ and a family $(\lambda_x)_{x\in X}$ in the unit circle of $\mathbb C$ so that $v\delta_x=\lambda_x\delta_{f(x)}$ for all $x\in X$ (see \cite[Lemma 8.10]{BragaFarah2018} for a proof). Hence, $\Ad(v_f)$ equals $\Psi$ modulo $\mathrm{Inn}(\mathrm{C}^*_u(X))$, which in turn, as $u\in \mathrm{C}^*_u(X)$, equals $\Phi$ modulo $\mathrm{Inn}(\mathrm{C}^*_u(X))$. \end{proof} We now present a map which associates to a coarse equivalence of $X$ an automorphism of $\mathrm{C}^*(X)$. This construction is well known to specialists but, as we use its specifics in the proof of Theorem~\ref{thm:main}, we give its details in full.
Fix metric spaces $X$ and $Y$, two orthonormal bases of $H$, $\bar\xi=(\xi_n)_n$ and $\bar\zeta=(\zeta_n)_n$, and let $f\colon X\to Y$ be a coarse equivalence. Let $Y_0=f[X]$. For each $y\in Y_0$, pick $x_y\in X$ such that $f(x_y)=y$, and let $X_0=\{x_y\mid y\in Y_0\}$. Since $f$ is a coarse equivalence, $X_0$ and $Y_0$ are cobounded in $X$ and $Y$, respectively. By the last statement, we can pick sequences of disjoint finite sets $(X^x)_{x\in X_0}$ and $(Y^y)_{y\in Y_0}$, and $r_0>0$, so that \begin{enumerate} \item $X=\bigsqcup_{x\in X_0}X^x$ and $Y=\bigsqcup_{y\in Y_0}Y^y$, \item $x\in X^x$ and $\diam(X^x)\leq r_0$ for all $x\in X_0$, and \item $y\in Y^y$ and $\diam(Y^y)\leq r_0$ for all $y\in Y_0$. \end{enumerate} For each $x\in X_0$, pick a bijection \[g_x\colon X^x\times \mathbb{N}\to Y^{f(x)}\times \mathbb{N}.\] Define \[ g=\bigcup_{x\in X_0}g_x. \] So $g$ is a bijection between $X\times\mathbb{N}$ and $Y\times\mathbb{N}$. Let $g_1\colon X\times \mathbb{N}\to Y$ and $g_2\colon X\times \mathbb{N}\to \mathbb{N}$ be the compositions of $g$ with the projections onto the first and second coordinates, respectively. If $x\in X$, we write $g_1(x,\mathbb{N})$ for the set $\{g_1(x,n)\mid n\in\mathbb{N}\}$. Define a unitary $u=u_g\colon\ell_2(X,H)\to \ell_2(Y,H)$ by \[ u\delta_x\otimes \xi_n=\delta_{g_1(x,n)}\otimes \zeta_{g_2(x,n)} \] for all $(x,n)\in X\times \mathbb{N}$. For $a\in\mathcal B(\ell_2(X,H))$, define $\Psi\colon\mathcal{B}(\ell_2(X,H))\to \mathcal{B}(\ell_2(Y,H))$ by \[ \Psi(a)=uau^*. \] Notice that $\Psi$ maps locally compact elements to locally compact elements. We intend to show that the image of $\mathrm{C}^*(X)$ is in $\mathrm{C}^*(Y)$. \begin{claim}\label{Claim1} If $a$ has finite propagation so does $\Psi(a)$. \end{claim} \begin{proof} Fix $r>0$ and pick $s'$ such that if $d(z,z')\leq r+2r_0$ then $\partial(f(z),f(z'))\leq s'$ whenever $z$ and $z'$ are in $X$. This exists as $f$ is coarse. Suppose now that $x$ and $x'$ are elements of $X$ with $d(x,x')\leq r$. Let $y$ and $y'$ be such that $y\in g_1(x,\mathbb{N})$ and $y'\in g_1(x',\mathbb{N})$. Pick $z$ and $z'$ such that $x\in X^z$ and $x'\in X^{z'}$. Notice that $y\in Y^{f(z)}$ and $y'\in Y^{f(z')}$. Since $f$ is coarse and the diameter of each $X^z$ is bounded by $r_0$, $d(z,z')\leq r+2r_0$, hence $\partial(f(z),f(z'))\leq s'$. Since the diameter of each $Y^y$ is bounded by $r_0$, we have that $\partial(y,y')\leq s'+2r_0$. Pick now $a\in\mathrm{BD}(X)$ with $\propg(a)\leq r$, and suppose that $y$ and $y'$ are elements of $Y$ with $\partial(y,y')> s'+2r_0$. Fix also $n,n'\in \mathbb{N}$. Let $w$ and $w'$ be in $X$, and let $m,m'\in \mathbb{N}$ be such that $g(w,m)=(y,n)$ and $g(w',m')=(y',n')$. Since $\partial(y,y')>s'+2r_0$, then $d(w,w')>r$, hence $a_{ww'}=0$. In particular \begin{align*} 0&=\langle a(\delta_w\otimes\xi_m), \delta_{w'}\otimes\xi_{m'}\rangle=\langle u^*\Psi(a)u(\delta_w\otimes\xi_m), \delta_{w'}\otimes\xi_{m'}\rangle\\&=\langle \Psi(a)u(\delta_w\otimes\xi_m), u(\delta_{w'}\otimes\xi_{m'})\rangle=\langle \Psi(a)(\delta_y\otimes\zeta_n),\delta_{y'}\otimes\zeta_{n'}\rangle \end{align*} Since $n$ and $n'$ are arbitrary, then $\Psi(a)_{yy'}=0$. Since $y$ and $y'$ were arbitrary elements such that $\partial(y,y')>s'+2r_0$, then $\propg(\Psi(a))\leq s'+2r_0$. This concludes the proof. \end{proof} Claim~\ref{Claim1} implies that $\Psi(\mathrm{C}^*(X))\subset \mathrm{C}^*(Y)$. By symmetry of the arguments, it follows that $\Psi^{-1}(\mathrm{C}^*(Y))\subset \mathrm{C}^*(X)$.
Hence $\Psi$ restricts to an isomorphism between $\mathrm{C}^*(X)$ and $\mathrm{C}^*(Y)$. We set \[ \Phi_{f,(\xi_n),(\zeta_n),g}=\Psi\mathord{\upharpoonright}\mathrm{C}^*(X). \] This map is highly noncanonical, as it depends on the choices of $\bar\xi$, $\bar\zeta$, and $g$. (The latter in turn depends on $f$, but not canonically.) We want to make this association canonical. If $\Phi$ and $\Psi$ are isomorphisms between $\mathrm{C}^*(X)$ and $\mathrm{C}^*(Y)$, we write \[ \Phi\sim_{u,\mathrm{BD}}\Psi \iff\exists u\in \mathrm{BD}(Y) \text{ such that } \Phi=\Ad(u)\circ\Psi. \] We show that our association becomes canonical when $\mathrm{Aut}(\mathrm{C}^*(X))$ is divided by $\mathrm{Inn}(\mathrm{BD}(X))$. \begin{proposition} Let $X$ and $Y$ be u.l.f.\@ metric spaces, and let $f$ and $h$ be coarse equivalences between $X$ and $Y$. Suppose that $\Phi_{f,(\xi_n),(\zeta_n),g}$ and $\Phi_{h,(\xi'_n),(\zeta'_n),g'}$ are constructed as above from different parameters. Then \[ \Phi_{f,(\xi_n),(\zeta_n),g}\sim_{u,\mathrm{BD}}\Phi_{h,(\xi'_n),(\zeta'_n),g'} \text{ if and only if } f \text{ is close to } h. \] \end{proposition} \begin{proof} Let $u\in\mathrm{BD}(X)$ be the unitary such that for all $x\in X$ and $n\in\mathbb{N}$, $u\delta_x\otimes\xi_n=\delta_x\otimes\xi'_n$, and let $w\in\mathrm{BD}(Y)$ be the unitary such that for all $y\in Y$ and $n\in\mathbb{N}$, $w(\delta_y\otimes \zeta_n)=\delta_y\otimes\zeta'_n$. Then \[ \Phi_{h,(\xi_n),(\zeta_n),g'}=\Ad(w^*)\circ \Phi_{h,(\xi'_n),(\zeta'_n),g'}\circ\Ad(u). \] Since $w$ and $\Phi_{h,(\xi'_n),(\zeta'_n),g'}(u)$ are in $\mathrm{BD}(Y)$, we have \[ \Phi_{f,(\xi_n),(\zeta_n),g}\sim_{u,\mathrm{BD}}\Phi_{h,(\xi'_n),(\zeta'_n),g'} \ \Leftrightarrow\ \Phi_{f,(\xi_n),(\zeta_n),g}\sim_{u,\mathrm{BD}}\Phi_{h,(\xi_n),(\zeta_n),g'}. \] Let $g''\colon Y\times\mathbb{N}\to Y\times\mathbb{N}$ be given by $g''=g\circ g'^{-1}$. Since $g$ and $g'$ are bijections, so is $g''$. Define a unitary $v\in\mathcal B(\ell_2(Y,H))$ by \[ v(\delta_y\otimes\zeta_n)=\delta_{g''_1(y,n)}\otimes\zeta_{g''_2(y,n)}. \] It is routine to check that $\Phi_{f,(\xi_n),(\zeta_n),g}=\Ad(v)\circ\Phi_{h,(\xi_n),(\zeta_n),g'}$. Hence we just need to show that $v\in\mathrm{BD}(Y)$. This follows from how $g$ and $g'$ are constructed, because $f$ and $h$ are close. \end{proof} We have constructed a canonical injective homomorphism $\mathrm{Coa}(X)\to\mathrm{Out}(\mathrm{C}^*(X))$. As in the proof of Theorem~\ref{thm:uniform}, our efforts will amount to proving surjectivity. \section{Uniform approximability in Roe algebras}\label{SecUnifApprox} This section deals with uniform approximability of maps $\Phi$ between $\mathrm{C}^*$-subalgebras of $\mathcal{B}(\ell_2(X,H))$ and $\mathcal{B}(\ell_2(Y,H))$. Precisely, in this section we study when maps satisfy coarse-like properties, that is, when morphisms respect the large-scale geometry of the underlying spaces. This concept was introduced for maps between uniform Roe algebras in \cite{BragaFarah2018}, and formalised in \cite{BragaFarahVignati2018}; here we define it in the setting of Roe algebras. \begin{definition}\label{DefiCoarseLikeProp} Let $X$ and $Y$ be metric spaces, $\cA\subset \mathcal{B}(\ell_2(X,H))$ and $\mathcal{B}\subset \mathcal{B}(\ell_2(Y,H))$ be $\mathrm{C}^*$-subalgebras. \begin{enumerate} \item Given $\varepsilon,r>0$, an element $a\in \cA$ is \emph{$\varepsilon$-$r$-approximable in $\cA$} if there is $c\in \cA$ with $\propg(c)\leq r$ so that $\|a-c\|\leq \varepsilon$.
\item A map $\Phi\colon\cA\to \mathcal{B}$ is \emph{coarse-like} if for all $\varepsilon,r>0$ there is $s>0$ so that $\Phi(a)$ is $\varepsilon$-$s$-approximable in $\mathcal{B}$ for all contractions $a\in \cA$ with $\propg(a)\leq r$. \end{enumerate} \end{definition} The following theorem is the starting point for our research on uniform approximability for Roe algebras. It is a simple consequence of \cite[Lemma 4.9]{BragaFarah2018} (see \cite[Proposition 3.3]{BragaFarahVignati2019} for a precise proof; cf.\@ \cite[Theorem 4.4]{BragaFarah2018}). \begin{theorem}\label{ThmCoarseLikeURA} Let $X$ and $Y$ be u.l.f.\@ metric spaces and let $\Phi\colon\mathrm{C}^*_u(X)\to \mathrm{C}^*_u(Y)$ be a strongly continuous compact preserving linear map. Then $\Phi$ is coarse-like. \qed \end{theorem} In layman's terms, Theorem \ref{ThmCoarseLikeURA} says that, for uniform Roe algebras, uniform approximability holds in a very strong sense. This result is false for Roe algebras. \begin{proposition}\label{PropNotCoarseLike} Given a finite metric space $X$ and a metric space of infinite diameter $Y$, there is a compact preserving strongly continuous embedding $\Phi\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$ onto a hereditary subalgebra of $\mathrm{C}^*(Y)$ which is not coarse-like. \end{proposition} \begin{proof} For simplicity, we assume $X$ to be a singleton, say $X=\{x\}$, and $Y$ to be countable, say $Y=\mathbb{N}$. Fix a metric $\partial$ on $Y$ of infinite diameter. Let $(\xi_n)_n$ be an orthonormal basis for $H$, and define $u\colon\ell_2(X,H)\to \ell_2(Y,H)$ by \[ u\delta_x\otimes \xi_n=\delta_n\otimes \xi_1, \,\,\text{ for } n\in\mathbb{N}. \] The operator $u$ is an isometry, and $\Ad(u)$ is a compact preserving strongly continuous embedding of $\mathcal B(\ell_2(X,H))$ into $\mathcal B(\ell_2(Y,H))$. As $\mathrm{C}^*(X)=\chi_{\{x\}}\otimes \mathcal{K}(H)$, we have $\Ad(u)(\mathrm{C}^*(X))\subset \mathcal{K}(\ell_2(Y,H))\subset \mathrm{C}^*(Y)$. The image of $\Ad(u)$ is a hereditary subalgebra of $\mathrm{C}^*(Y)$ since it equals $\mathcal{K}(\ell_2(Y))\otimes q_1$, where $q_1\colon H\to\Span \{\xi_1\}$ is the standard projection. We are left to show that $\Ad(u)$ is not coarse-like. Fix $n\in\mathbb{N}$, and let $m\in\mathbb{N}$ be such that $\partial(1,m)\geq n$. Let $v\in\mathcal{B}(\ell_2(X,H))$ be the rank $1$ partial isometry sending $\delta_x\otimes \xi_1$ to $\delta_x\otimes \xi_m$. Then $v$ has propagation $0$ but, as \[\langle \Ad(u)v(\delta_1\otimes \xi_1),\delta_m\otimes \xi_1\rangle=1,\] it follows that $\Ad(u)v$ cannot be $1/2$-$n$-approximated. As $n$ is arbitrary, we are done. \end{proof} The map of Proposition~\ref{PropNotCoarseLike} sends a sequence which is converging in the strong operator topology to an element outside of $\mathrm{C}^*(X)$ to a sequence which is converging in the strong operator topology to an element in $\mathrm{C}^*(Y)$. This is the only obstacle in generalising Theorem~\ref{ThmCoarseLikeURA} to Roe algebras. The following two theorems are our main uniform approximability results, and most of this section is dedicated to proving them. \begin{theorem}\label{ThmCoarseLike} Let $X$ and $Y$ be u.l.f.\@ metric spaces.
Then every isomorphism between $\mathrm{C}^*(X)$ and $\mathrm{C}^*(Y)$ is coarse-like.\end{theorem} Although Proposition \ref{PropNotCoarseLike} shows that Theorem \ref{ThmCoarseLikeURA} cannot be extended to Roe algebras, the latter can be extended to band-dominated algebras as follows: \begin{theorem}\label{ThmCoarseLikeBD} Let $X$ and $Y$ be u.l.f.\@ metric spaces. Then every strongly continuous compact preserving $^*$-homomorphism $\Phi\colon \mathrm{BD}(X)\to \mathrm{BD}(Y)$ is coarse-like. \end{theorem} We now proceed to prove Theorems~\ref{ThmCoarseLike} and \ref{ThmCoarseLikeBD}. First, we show that $\varepsilon$-$r$-approximability does not depend on the ambient algebra. \begin{proposition}\label{PropApprox} Let $X$ be a u.l.f.\@ metric space, $r>0$, $\varepsilon>0$, and $a\in \mathrm{C}^*(X)$. The following are equivalent: \begin{enumerate} \item\label{ItemPropApprox1} $a$ is $\varepsilon$-$r$-approximable in $\mathrm{BD}(X)$, \item\label{ItemPropApprox2} $a$ is $(\varepsilon+\delta)$-$r$-approximable in $\mathrm{C}^*(X)$, for all $\delta>0$, and \item\label{ItemPropApprox3} $a$ is $(\varepsilon+\delta)$-$r$-approximable in $\mathrm{BD}(X)$, for all $\delta>0$. \end{enumerate} \end{proposition} \begin{proof} \eqref{ItemPropApprox1}$\Rightarrow$\eqref{ItemPropApprox2} If $a$ is $\varepsilon$-$r$-approximable in $\mathrm{BD}(X)$, pick $b$ of propagation at most $r$ with $\norm{a-b}\leq\varepsilon$. Let $p\in\mathrm{C}^*(X)$ be a projection with $\propg(p)=0$ and such that $\norm{pa-a}<\delta$. This exists by Proposition~\ref{prop:approxid}. Then $pb\in\mathrm{C}^*(X)$, $\propg(pb)\leq r$ and \[\norm{a-pb}\leq \norm{a-pa}+\norm{pa-pb}<\delta+\norm{a-b}\leq\varepsilon+\delta.\] \eqref{ItemPropApprox2}$\Rightarrow$\eqref{ItemPropApprox3} This is immediate. \eqref{ItemPropApprox3}$\Rightarrow$\eqref{ItemPropApprox1} For each $n\in\mathbb{N}$, pick $b_n\in \mathrm{BD}(X)$ with $\propg(b_n)\leq r$ and $\|a-b_n\|\leq \varepsilon+1/n$. Then $(b_n)_n$ is bounded and, by compactness of the unit ball of $\mathcal{B}(\ell_2(X,H))$ with respect to the weak operator topology, by going to a subsequence if necessary, we can define $b=\mathrm{WOT}\text{-}\lim_n b_n$. Clearly, $\propg(b)\leq r$, so we only need to notice that $\|a-b\|\leq \varepsilon$. Suppose $\|a-b\|>\varepsilon$. Pick unit vectors $\xi$ and $\zeta$ in $\ell_2(X,H)$, and $n\in\mathbb{N}$ so that $|\langle (a-b)\xi,\zeta\rangle|>\varepsilon+1/n$. By the definition of $b$, there is $m>n$ so that $|\langle (a-b_m)\xi,\zeta\rangle|>\varepsilon+1/n$. As $m>n$, this implies that $\|a-b_m\|> \varepsilon+1/n$; contradiction. \end{proof} The set of $\varepsilon$-$r$-approximable elements is strongly closed: \begin{proposition}\label{PropSOTConvApproxBD} Let $X$ be a u.l.f.\@ metric space, $r>0$, $\varepsilon>0$, $a\in \mathcal{B}(\ell_2(X,H))$ and let $(a_n)_n$ be a sequence in $\mathrm{BD}(X)$ so that $a=\mathrm{SOT}\text{-}\lim a_n$. If each $a_n$ is $\varepsilon$-$r$-approximable in $\mathrm{BD}(X)$, then $a$ is $\varepsilon$-$r$-approximable in $\mathrm{BD}(X)$. \end{proposition} \begin{proof} As each $a_n$ is $\varepsilon$-$r$-approximable in $\mathrm{BD}(X)$, pick a sequence $(b_n)_n$ in $\mathrm{BD}(X)$ so that $\propg(b_n)\leq r$ and $\|a_n-b_n\|\leq \varepsilon$ for all $n\in\mathbb{N}$. As $a=\mathrm{SOT}\text{-}\lim_na_n$, the principle of uniform boundedness implies that $(a_n)_n$ is a bounded sequence, hence so is $(b_n)_n$.
By compactness of the unit ball of $\mathcal{B}(\ell_2(X,H))$ in the weak operator topology, going to a subsequence if necessary, we can assume that $b=\mathrm{WOT}\text{-}\lim_nb_n$ exists. Clearly, $\propg(b)\leq r$ and $\|a-b\|\leq \varepsilon$, so we are done. \end{proof} Propositions~\ref{PropApprox} and~\ref{PropSOTConvApproxBD} together give the following: \begin{proposition}\label{PropSOTConvApprox} Let $X$ be a u.l.f.\@ metric space, $r>0$, $\varepsilon>0$, $a\in\mathrm{C}^*(X)$ and let $(a_n)_n$ be a sequence in $\mathrm{BD}(X)$ so that $a=\mathrm{SOT}\text{-}\lim a_n$. If each $a_n$ is $\varepsilon$-$r$-approximable in $\mathrm{C}^*(X)$, then $a$ is $(\varepsilon+\delta)$-$r$-approximable in $\mathrm{C}^*(X)$ for all $\delta>0$. \qed \end{proposition} Next, we study how strong convergence and $\varepsilon$-$r$-approximability interact. We prove the Roe algebra and the band-dominated algebra versions of \cite[Lemma 4.9]{BragaFarah2018}. Let $\mathbb{D}=\{z\in \mathbb{C}\mid |z|\leq 1\}$. If $(a_n)_n$ is a sequence of operators and $\bar\lambda\in\mathbb{D}^\mathbb{N}$ is such that $\mathrm{SOT}\text{-}\sum_n\lambda_na_n$ exists, we write \[ a_{\bar\lambda}=\mathrm{SOT}\text{-}\sum_n\lambda_na_n. \] When writing $a_{\bar\lambda}$, it is implicit that the limit above exists. \begin{lemma}\label{Lemma4.9BD} Let $X$ be a u.l.f.\@ metric space and let $(a_n)_n$ be a sequence of compact operators in $\mathrm{BD}(X)$ so that $a_{\bar\lambda}\in \mathrm{BD}(X)$ for all $\bar\lambda\in \mathbb{D}^\mathbb{N}$. Then for all $\varepsilon>0$ there is $r>0$ so that $a_{\bar\lambda}$ is $\varepsilon$-$r$-approximable in $\mathrm{BD}(X)$ for all $\bar\lambda\in \mathbb{D}^\mathbb{N}$. \end{lemma} \begin{proof} Let $(p_j)_j$ be a sequence of finite rank projections on $H$ which strongly converges to $1_H$. Let $1_X$ denote the identity on $\ell_2(X)$ and notice that $a=\mathrm{SOT}\text{-} \lim_j(1_X\otimes p_j) a (1_X\otimes p_j)$. Given $\bar\lambda=(\lambda_n)_n\in \mathbb{D}^\mathbb{N}$ and $j\in\mathbb{N}$, let \[a_{\bar\lambda,j}= (1_X\otimes p_j)a_{\bar\lambda} (1_X\otimes p_j), \] and $a_{\bar\lambda,\infty}=a_{\bar\lambda}$, so that $a_{\bar\lambda,\infty}=\mathrm{SOT}\text{-}\lim_j a_{\bar\lambda,j}$ for all $\bar\lambda\in \mathbb{D}^\mathbb{N}$. By Proposition \ref{PropSOTConvApprox}, it is enough to show that \begin{itemize} \item[$(\ast)$]\label{ItemStatement} for all $\varepsilon>0$ there is $r>0$ so that $a_{\bar\lambda,j} $ is $\varepsilon$-$r$-approximable in $\mathrm{BD}(X)$ for all $\bar\lambda\in \mathbb{D}^\mathbb{N}$ and all $j\in\mathbb{N}$. \end{itemize} Suppose ($\ast$) fails for $\varepsilon>0$. For each finite $I\subset \mathbb{N}$, let \[\mathcal{Z}_I=\Big\{\bar\lambda\in \mathbb{D}^\mathbb{N}\mid \forall j\in I, \lambda_j=0\Big\}\text{ and }\mathcal{Y}_I=\Big\{\bar\lambda\in \mathbb{D}^\mathbb{N}\mid \forall j\not\in I, \lambda_j=0\Big\}.\] So $\mathcal{Y}_I$ is compact in the product space $\mathbb{D}^\mathbb{N}$. \begin{claim}\label{Claim1Lemma4.9} For all $r>0$ and all finite $I\subset \mathbb{N}$, there exist $\bar\lambda\in \mathcal{Z}_I$ and $j\in\mathbb{N}$ so that $a_{\bar\lambda,j}$ is not $\varepsilon/2$-$r$-approximable in $\mathrm{BD}(X)$. \end{claim} \begin{proof} Suppose the claim fails and let $r>0$ and $I\subset \mathbb{N}$ witness that. Let $\mathbb{N}_\infty$ be the one-point compactification of $\mathbb{N}$. Notice that the map \[(\bar\lambda,j)\in \mathcal{Y}_I\times \mathbb{N}_\infty\mapsto a_{\bar\lambda,j}\in \mathrm{BD}(X)\] is continuous. 
Indeed, the map is clearly continuous on $\mathcal{Y}_I\times \mathbb{N}$ and continuity on $\mathcal{Y}_I\times \{\infty\}$ follows since each $a_n$ is compact; therefore, $a_n=\lim_j(1_X\otimes p_j)a_n(1_X\otimes p_j)$ for all $n\in\mathbb{N}$ and it follows that $a_{\bar\lambda,\infty}=\lim_ja_{\bar\lambda, j}$ for all $\bar\lambda\in \mathcal{Y}_I$. The continuity of this map and the compactness of $\mathcal{Y}_I\times \mathbb{N}_\infty$ imply that $\{a_{\bar\lambda,j}\mid \bar\lambda\in \mathcal{Y}_I, j\in\mathbb{N}_\infty\}$ is a norm compact subset of $\mathrm{BD}(X)$. In particular, the total boundedness of this set gives $s>0$ so that $a$ is $\varepsilon/2$-$s$-approximable in $\mathrm{BD}(X)$ for all $a\in \{a_{\bar\lambda,j}\mid \bar\lambda\in \mathcal{Y}_I, j\in\mathbb{N}\}$. Let $\bar\lambda\in \mathbb{D}^\mathbb{N}$ and $j\in\mathbb{N}$. Write $\bar\lambda=\bar\lambda_1+\bar\lambda_2$ for $\bar\lambda_1\in \mathcal{Y}_I$ and $\bar\lambda_2\in \mathcal{Z}_I$, so $a_{\bar\lambda,j}=a_{\bar\lambda_1,j}+a_{\bar\lambda_2,j}$. As $r$ and $I$ witness that the claim fails, $a_{\bar\lambda_2,j}$ is $\varepsilon/2$-$r$-approximable in $\mathrm{BD}(X)$. Moreover, our choice of $s$ implies that $a_{\bar\lambda_1,j}$ is $\varepsilon/2$-$s$-approximable in $\mathrm{BD}(X)$; hence $a_{\bar\lambda,j}$ is $\varepsilon$-$\max\{r,s\}$-approximable in $\mathrm{BD}(X)$. As $\bar\lambda\in \mathbb{D}^\mathbb{N}$ and $j\in\mathbb{N}$ were arbitrary, this contradicts that $(\ast)$ fails for $\varepsilon$. \end{proof} For $r>0$ and $\delta>0$, let \[U_{r,\delta}=\Big\{\bar\lambda \in \mathbb{D}^\mathbb{N} \mid a_{\bar\lambda,j} \text{ is }\delta\text{-}r\text{-approximable in }\mathrm{BD}(X)\text{ for all }j\in\mathbb{N}\Big\}.\] \begin{claim}\label{claim:NWD1} The set $U_{r,\delta}$ is closed for all $r>0$ and $\delta>0$. \end{claim} \begin{proof} Suppose the claim fails for $r>0$ and $\delta>0$. Then there is $\bar\lambda \in U^\complement_{r,\delta}\cap \overline{U_{r,\delta}}$. As $\bar\lambda \not\in U_{r,\delta}$, there is $j\in\mathbb{N}$ so that $a_{\bar\lambda,j}$ is not $\delta$-$r$-approximable in $\mathrm{BD}(X)$. Proposition \ref{PropSOTConvApprox} implies that there is a finite $F\subset X$ so that $\chi_{F}a_{\bar\lambda,j}\chi_{F} $ is not $\delta$-$r$-approximable in $\mathrm{BD}(X)$. Fix $\gamma>0$. By the definition of $a_{\bar \theta,j}$, $\chi_{F}a_{\bar\theta,j}\chi_{F}=(\chi_{F}\otimes p_j) a_{\bar\theta}(\chi_{F}\otimes p_j)$; since $\chi_{F}\otimes p_j$ is compact, then there exists a finite $I\subset \mathbb{N}$ so that $\|\chi_{F}a_{\bar\theta,j}\chi_{F}\|<\gamma$ for all $\bar\theta\in \mathcal{Z}_I$. Indeed, this follows since \[\chi_{F}a_{\bar\theta,j}\chi_{F}=\mathrm{SOT}\text{-}\sum_n\theta_n(\chi_{F}\otimes p_j)a_n (\chi_{F}\otimes p_j) \] for all $\bar\theta\in \mathbb{D}^\mathbb{N}$. Let $\bar\lambda_1\in \mathcal{Y}_I$ and $\bar\lambda_2\in \mathcal{Z}_I$ be so that $\bar\lambda=\bar\lambda_1+\bar\lambda_2$. As $\bar\lambda\in \overline{U_{r,\delta}}$, there exists $\bar\theta_1\in \mathcal{Y}_I$ and $\bar\theta_2\in \mathcal{Z}_I$ so that $\bar\theta=\bar\theta_1+\bar\theta_2\in U_{r,\delta}$ and $\|a_{\bar\lambda_1,j}-a_{\bar\theta_1,j}\|\leq \gamma$. As $\bar\theta\in U_{r,\delta}$, $a_{\bar\theta,j}$ is $\delta$-$r$-approximable in $\mathrm{BD}(X)$. In particular, as $\chi_{F}\otimes p_m$ is a projection with propagation $0$, $\chi_{F}a_{\bar\theta,j}\chi_{F}$ is $\delta$-$r$-approximable in $\mathrm{BD}(X)$. 
Therefore, since \[a_{\bar\lambda,j}=a_{\bar\theta,j}+a_{\bar\lambda_1,j}-a_{\bar\theta_1,j}+a_{\bar\lambda_2,j}-a_{\bar\theta_2,j},\] then $\chi_{F}a_{\bar\lambda,j}\chi_{F}$ is $(\delta+3\gamma)$-$r$-approximable in $\mathrm{BD}(X)$. As $\gamma$ was arbitrary, Proposition \ref{PropApprox} implies that $\chi_{F}a_{\bar\lambda,j}\chi_{F}$ is $\delta $-$r$-approximable in $\mathrm{BD}(X)$; contradiction. \end{proof} Fix $\delta=\varepsilon/4$. \begin{claim}\label{claim:NWD2} For all $r>0$, the set $U_{r,\delta}$ has empty interior. \end{claim} \begin{proof} Fix $r>0$ and let $\bar\lambda\in U_{r,\delta}$. Fix a finite $I\subset \mathbb{N}$ and write $\bar\lambda=\bar\lambda_1+\bar\lambda_2$, for $\bar\lambda_1\in \mathcal{Y}_I$ and $\bar\lambda_2\in \mathcal{Z}_I$. Pick $s>r$ so that $a_{\bar\lambda_1, j}$ is $\delta$-$s$-approximable in $\mathrm{BD}(X)$. By Claim \ref{Claim1Lemma4.9}, there is $\bar\theta_2\in \mathcal{Z}_I$ and $j\in\mathbb{N}$ so that $a_{\bar\theta_2,j}$ is not $2\delta$-$s$-approximable in $\mathrm{BD}(X)$. Hence, letting $\bar\theta=\bar\lambda_1+\bar\theta_2$, we have that $a_{\bar\theta,j}$ is not $\delta$-$s$-approximable in $\mathrm{BD}(X)$. As $s>r$, $a_{\bar\theta,j}$ is not $\delta$-$r$-approximable in $\mathrm{BD}(X)$, so $\bar\theta\not\in U_{r,\delta}$. Since $I$ was an arbitrary finite subset of $\mathbb{N}$, this shows that $\bar\lambda$ is not an interior point of $U_{r,\delta}$. \end{proof} By Claim~\ref{claim:NWD1} and \ref{claim:NWD2}, $U_{r,\delta}$ is nowhere dense for all $r>0$. However \[\mathbb{D}^\mathbb{N}=\bigcup_{n\in\mathbb{N}}U_{n,\delta}.\] Indeed, if $\bar\lambda\in \mathbb{D}^\mathbb{N}$, then there is $n\in\mathbb{N}$ so that $a_{\bar\lambda,\infty}$ is $\delta$-$n$-approximable in $\mathrm{BD}(X)$. In particular, $a_{\bar\lambda,j}$ is $\delta$-$n$-approximable in $\mathrm{BD}(X)$ for all $j\in\mathbb{N}$, so $\bar\lambda\in U_{n,\delta}$. Since $\mathbb{D}^\mathbb{N}$ is a Baire space, we have a contradiction. \end{proof} \iffalse \begin{proof} Fix $(a_n)_n$ as in the hypothesis. \begin{claim} Let $p\in\mathcal{B}(\ell_2(X,H))$ be a finite rank projection. Then for all $\delta>0$ there is $j$ such that whenever $\bar\lambda\in\mathbb{D}^\mathbb{N}$ is such that $\lambda_i=0$ for all $i\leq j$ then $\norm{pa_{\bar\lambda}p}<\delta$ \end{claim} \begin{proof} Fix $\delta>0$. If $\norm{pa_{\bar\lambda}p}>\delta$ then there is a finite set $F\subseteq\mathbb{N}$ such that $\norm{p(\sum_{n\in F}\lambda_na_n)p}>\delta$. Hence, if the thesis does not hold, we can find a sequence of finite sets $F_n\subseteq\mathbb{N}$ with $\max F_n<\min F_{n+1}$ with the property that there are $\bar\lambda^{F_n}\in\mathbb{D}^{|F_n|}$ such that \[ \norm{p\Big(\sum_{i\in F_n}\lambda_i^{F_n}a_i\Big)p}>\delta. \] Since $p$ is finite dimensional, we can refine the sequence so that \[ \norm{p\Big(\sum_{i\in F_n}\lambda_i^{F_n}a_i\Big)p-p\Big(\sum_{i\in F_m}\lambda_i^{F_m}a_i\Big)p}<\delta/4. \] Let $\mu_i=\lambda_i^{F_n}$ if $i\in F_n$, and $\mu_i=0$ if $i\notin\bigcup_n F_n$. Then $a_{\bar\mu}$ is not a bounded operator. This is a contradiction. \end{proof} We work by contradiction, and suppose the thesis of the lemma does not work, and that $\varepsilon>0$ witnesses this. 
\begin{claim} There are sequences of disjoint consecutive finite sets $F_n\subseteq \mathbb{N}$, strictly increasing natural numbers $s(n)$, for $n\geq 0$, and $t(n)$, for $n\geq -1$ with $\max F_{n+1}\leq t(n)\leq\min F_{n+2}$, and tuples $\bar\lambda^{F_n}\in \mathbb{D}^\mathbb{N}$ such that for every $n\in\mathbb{N}$ and for every $\bar\mu\in\mathbb{D}^{\max F_n}$ the element $\sum_{i\leq\max F_n}\mu_ia_i$ can be $\varepsilon/4$-$s(n)$-approximated, but if $\bar\lambda$ is such that $\lambda_i=\lambda_i^{F_{n+1}}$ for all $i\in F_{n+1}$ and $\lambda_i=0$ if $i\notin\bigcup_j F_j$ then $a_{\bar\lambda}$ cannot be $\varepsilon/4$-$s(n)$-approximated. \end{claim} \begin{proof} We construct such sequences by induction. For the base step, let $F_0=\{1\}$. Since $a_1\in\mathrm{BD}(X)$ then there is $s(0)$ such that $a_1$ can be $\varepsilon/4$-$s(0)$-approximated. Let $\lambda^{F_0}=1$ and $t_{-1}=2$. Suppose now that we have constructed the sets $F_0,\ldots,F_n$, the naturals $s(0), \ldots,s(n),t(0),\ldots,t(n-1)$ and the tuples $\bar\lambda^{F_0},\ldots,\bar\lambda^{F_n}$, where $\bar\lambda^{F_i}\in \mathbb{D}^{F_i}$. Let $m=t(n-1)$. Since $a_i\in\mathrm{BD}(X)$ for all $i\leq m$, and $\mathbb{D}^m$ is compact, there is $s(n+1)>s(n)$ such that for all $\bar\mu\in\mathbb{D}^m$ the element $\sum_{i\leq m}\mu_ia_i$ can be $\varepsilon/4$-$s(n+1)$-approximated. Let $\bar\lambda$ such that $a_{\bar\lambda}$ cannot be $\varepsilon$-$s(n+1)$-approximated, and define $\bar\lambda'$ by \[ \lambda'_i=\begin{cases} 0 &\text{ if } i\leq m\\ \lambda_i&\text{ else.} \end{cases} \] Since $\sum_{i\leq m}\lambda_ia_i$ can be $\varepsilon/4$-$s(n+1)$-approximated but $a_{\bar\lambda}$ cannot be $\varepsilon$-$s(n+1)$-approximated, then $a_{\bar\lambda'}$ cannot be $3\varepsilon/4$-$s(n+1)$-approximated. By Proposition~\ref{PropSOTConvApproxBD} there is then a finite set $F_{n+1}$ with $\min F_{n+1}>m$ such that $\sum_{i\in F_{n+1}}\lambda_ia_i$ cannot be $3\varepsilon/4$-$s(n+1)$-approximated. We are left to define $t(n)$. This will ensure that $\min F_{n+2}$ is so large that whenever $\bar\mu$ is such that $\mu_i=\lambda_i$ for all $i\in F_{n+1}$ and $\mu_i=0$ for all $i\notin\bigcup_n F_n$, then $a_{\bar\mu}$ cannot be $\varepsilon/4$-$s(n+1)$-approximated. Let $b=\sum_{i\in F_{n+1}}\lambda_ia_i$. Since $b\in\mathrm{BD}(X)$, $b$ is the strong operator topology limit of $pbp$ where $p$ is a finite rank projection of propagation $0$. As $b$ cannot be $3\varepsilon/4$-$s(n+1)$-approximated, Proposition~\ref{PropSOTConvApproxBD} implies that there is a finite rank projection $p$ with $\propg(p)=0$ such that $pbp$ cannot be $3\varepsilon/4$-$s(n+1)$-approximated. Let now $t(n)>\max F_{n+1}$ such that $\norm{p(\sum_{j\geq t(n)}\mu_ia_i)}\leq\varepsilon/4$ for any choice of $\bar\mu\in\mathbb{D}^\mathbb{N}$. We are done. Suppose now that we have $\bar\nu$ such that $\nu_i=\lambda_i$ if $i\in F_{n+1}$ and $\nu_i=0$ if $i\notin\bigcup F_i$. If $a_{\bar\nu}$ can be $\varepsilon/4$-$s(n+1)$-approximated by $c$, then $pa_{\bar\nu}p$ can be $\varepsilon/4$-$s(n+1)$-approximated by $pcp$. Let $d=\mathrm{SOT}\text{-}\sum_{j\geq t(n)}\nu_ja_j$, so that \[ a_{\bar\nu}=\sum_{i\leq t(n-1)}\nu_ia_i+b+d. \] Since $\norm{pd}<\varepsilon/4$, then \[ \norm{pa_{\bar\nu}p-p\Big(\sum_{i\leq t(n-1)}\nu_ia_i+b\Big)p}<\varepsilon/4,\] hence \[ \norm{p\Big(\sum_{i\leq t(n-1)}\nu_ia_i+b\Big)p-pcp}<\varepsilon/2, \] so $p(\sum_{i\leq t(n-1)}\nu_ia_i+b)p$ can be $\varepsilon/2$-$s(n+1)$-approximated. 
Since $\sum_{i\leq t(n-1)}\nu_ia_i$ can be $\varepsilon/4$-$s(n+1)$-approximated, so does $p(\sum_{i\leq t(n-1)}\nu_ia_i)p$. Hence $pbp$ can be $3\varepsilon/4$-$s(n+1)$-approximated. This is a contradiction. \end{proof} We are done: let $\mu_i=\lambda_i^{F_n}$ if $i\in F_n$, and $\mu_i=0$ if $i\notin\bigcup F_n$. Then $a_{\bar\mu}$ cannot be $\varepsilon/4$-$s(n)$-approximated for all $n$. Since $a_{\bar\mu}\in\mathrm{BD}(X)$ and $s(n)\to\infty$, we have a contradiction. \end{proof} \fi \begin{remark} The Baire categorical nature of the proof of Lemma \ref{Lemma4.9BD} implies that its statement holds outside the scope of metrizable coarse spaces (for brevity, we refer the reader to \cite{BragaFarah2018} for definitions). Indeed, if $(X,\mathcal{E})$ is a coarse space which is \emph{small} (see \cite[Definition 4.2]{BragaFarah2018}), Lemma \ref{Lemma4.9BD} still holds. In particular, Theorems \ref{ThmCoarseLike} and \ref{ThmCoarseLikeBD} also hold for small coarse spaces. \end{remark} The following is the Roe algebra version of Lemma~\ref{Lemma4.9BD}. \begin{lemma}\label{Lemma4.9} Let $X$ be a u.l.f.\@ metric space and let $(a_n)_n$ be a sequence of compact operators in $\mathrm{C}^*(X)$ so that $a_{\bar\lambda}\in \mathrm{C}^*(X)$ for all $\bar\lambda\in \mathbb{D}^\mathbb{N}$. Then for all $\varepsilon>0$ there is $r>0$ so that $a_{\bar\lambda}$ is $\varepsilon$-$r$-approximable in $\mathrm{C}^*(X)$ for all $\bar\lambda\in \mathbb{D}^\mathbb{N}$. \end{lemma} \begin{proof} By Lemma~\ref{Lemma4.9BD}, we know that for all $\varepsilon>0$ there is $r$ such that $a_{\bar\lambda}$ is $\varepsilon/2$-$r$-approximable in $\mathrm{BD}(X)$ for all $\bar\lambda\in \mathbb{D}^\mathbb{N}$. Given $\bar\lambda\in\mathbb{D}^\mathbb{N}$, Proposition~\ref{PropApprox} implies that $a_{\bar\lambda}$ is $\varepsilon$-$r$-approximable in $\mathrm{C}^*(X)$. Since $\varepsilon$ and $\bar\lambda$ are arbitrary, we are done. \end{proof} \begin{lemma}\label{LemmaCoarseLikeRA} Let $X$ and $Y$ be u.l.f.\@ metric spaces and let $\Phi\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$ be a strongly continuous compact preserving linear map. If $\Phi\restriction \chi_F\mathrm{C}^*(X)\chi_F$ is coarse-like for all finite $F\subset X$, then $\Phi$ is coarse-like. \end{lemma} \begin{proof} Suppose that $\Phi\restriction\chi_E\mathrm{C}^*(X)\chi_E$ is coarse-like for all finite $E\subseteq X$ but $\Phi$ is not coarse-like. Let $\varepsilon>0$ and $r>0$ be such that for every $s>0$ there is a contraction $a_s\in\mathrm{C}^*(X)$ of propagation at most $ r$ such that $\Phi(a_s)$ is not $\varepsilon$-$s$-approximable. \begin{claim} For all cofinite $F\subset X$ and all $s>0$ there is a contraction $a\in \chi_F\mathrm{C}^*(X)\chi_F$ with finite support and propagation at most $r$ so that $\Phi(a)$ is not $\varepsilon/2$-$s$-approximable. \end{claim} \begin{proof} Fix $F$ and $s$. Let \[ E=\Big\{x\in X\mid d(x, X\setminus F)\leq r\Big\}. \] As $X\setminus F $ is finite and $X$ is u.l.f., $E$ is finite, and therefore $\Phi\restriction \chi_E\mathrm{C}^*(X)\chi_E$ is coarse-like. Pick $s'>s$ so that $\Phi(\chi_Ea\chi_E)$ is $\varepsilon/2$-$s'$-approximable for all contractions $a\in\mathrm{C}^*(X)$. By the definition of $E$, we have that \[ \{(x,y)\in X^2\mid d(x,y)\leq r\}\subset (E\times E)\cup (F\times F). \] Hence, if $a\in\mathrm{C}^*(X)$ has propagation at most $ r$ then there is $b\in\chi_E\mathrm{C}^*(X)\chi_E$ such that $a=b+\chi_Fa\chi_F$. If $a$ is a contraction, then $\norm{b}\leq 2$ and $\propg(b)\leq r$. 
Let $b\in\chi_E\mathrm{C}^*(X)\chi_E$ be as above for $a=a_{s'}$. By our choice of $s'$, $\Phi(b)$ is $\varepsilon/2$-$s'$-approximable. As $s'>s$, if $\Phi(\chi_Fa\chi_F)$ is $\varepsilon/2$-$s$-approximable, then $\Phi(a)$ is $\varepsilon$-$s'$-approximable. This contradicts our choice of $a_s$, so $\Phi(\chi_Fa_s\chi_F)$ is not $\varepsilon/2$-$s$-approximable. By Proposition \ref{PropSOTConvApprox}, we can obtain a finite $G\subset X\setminus F$ so that $\Phi(\chi_Ga_s\chi_G)$ is not $\varepsilon/2$-$s$-approximable. This finishes that claim. \end{proof} By the previous claim, there exists a disjoint sequence of finite subsets $(E_n)_n$ of $X$ and a sequence $(a_n)_n$ in $\mathrm{C}^*(X)$ so that $a_n\in \mathcal{B}(\ell_2(E_n,H))$, $\propg(a_n)\leq r$ and $\Phi(a_n)$ is not $\varepsilon/2$-$n$-approximable for all $n\in\mathbb{N}$. Since the $E_n$'s are disjoint, for all $\bar\lambda\in\mathbb{D}^\mathbb{N}$, $a_{\bar\lambda}$ is a well defined element of $\mathrm{C}^*(X)$. By Lemma \ref{Lemma4.9}, there is $s>0$ so that $\Phi(a_n)$ is $\varepsilon/2$-$s$-approximable for all $n\in\mathbb{N}$, a contradiction. \end{proof} If we substitute instances of $\mathrm{C}^*(X)$ and of $\mathrm{C}^*(Y)$ with $\mathrm{BD}(X)$ and $\mathrm{BD}(Y)$ in the proof of Lemma~\ref{LemmaCoarseLikeRA}, we obtain the following. \begin{lemma}\label{LemmaCoarseLikeRA2} Let $X$ and $Y$ be u.l.f.\@ metric spaces and let $\Phi\colon\mathrm{BD}(X)\to \mathrm{BD}(Y)$ be a strongly continuous compact preserving linear map. If $\Phi\restriction \chi_F\mathrm{BD}(X)\chi_F$ is coarse-like for all finite $F\subset X$, then $\Phi$ is coarse-like.\qed \end{lemma} We are ready to prove our uniform approximability results. \begin{proof}[Proof of Theorem~\ref{ThmCoarseLikeBD}] Fix u.l.f metric spaces $X$ and $Y$. Recall that we need to show that any strongly continuous compact preserving $^*$-homomorphism $\Phi\colon\mathrm{BD}(X)\to \mathrm{BD}(Y)$ is coarse-like. By Lemma~\ref{LemmaCoarseLikeRA2}, it is enough to show that $\Phi\restriction\chi_F\mathrm{BD}(X)\chi_F$ is coarse-like for all such $\Phi$ and all finite $F\subseteq X$. As $\chi_{\{x\}}\mathrm{BD}(X)\chi_{\{y\}}\cong\mathcal{B}(H)$ for all $x$ and $y$ in $X$, it is enough to show that any strongly continuous compact preserving $^*$-homomorphism $\Phi\colon \mathcal{B}(H)\to \mathrm{BD}(Y)$ is coarse-like. Fix such $\Phi$. Let $(\xi_n)_n$ be an orthonormal basis for $H$. If $F\subseteq\mathbb{N}$, let $p_F$ be the projection onto $\overline{\spann}\{\xi_i\mid i\in F\}$. We write $p_n$ for $p_{\{1,\ldots,n\}}$. By Proposition \ref{PropSOTConvApproxBD}, it is enough to show that for all $\varepsilon>0$ there is $s>0$ so that for all $n\in\mathbb{N}$ and all contractions $a\in \mathcal{B}(H)$, $\Phi(p_nap_n)$ is $\varepsilon$-$s$-approximable in $\mathrm{BD}(Y)$. Suppose that this fails for $\varepsilon\in (0,1)$. By compactness of the unit sphere of $p_n\mathcal{B}(H)p_n$, we then have that \begin{itemize} \item[$(\ast)$] For all finite $E\subset \mathbb{N}$ and all $s>0$ there is a contraction $a\in \mathcal{B}(H)$ with finite support so that $p_Eap_E=0$ and $\Phi(a)$ is not $\varepsilon/2$-$s$-approximable in $\mathrm{BD}(Y)$. \end{itemize} \begin{claim} For all cofinite $F\subset \mathbb{N}$ and all $s>0$ there is a contraction $a\in \mathcal{B}(p_FH)$ with finite support so that $\Phi(a)$ is not $\varepsilon/4$-$s$-approximable in $\mathrm{BD}(Y)$. \end{claim} \begin{proof} Suppose the claim fails for a cofinite $F\subset\mathbb{N}$ and $s>0$. 
Let $A_0=\mathbb{N}\setminus F$. By $(\ast)$ there are an increasing sequence of finite subsets $(A_n)_n$ of $\mathbb{N}$ and a sequence of contractions $(a_n)_n$ in $\mathcal{B}(H)$ so that $\supp(a_n)\subset A_n^2\setminus A_{n-1}^2$ and $\Phi(a_n)$ is not $\varepsilon/2$-$n$-approximable in $\mathrm{BD}(Y)$ for all $n\in\mathbb{N}$. Since the claim fails, going to a subsequence and redefining $(a_n)_n$, we can assume that $\supp(a_n)\subset (A_n\setminus A_{n-1})\times A_0$ for all $n\in\mathbb{N}$ and that $\Phi(a_n)$ is not $\varepsilon/8$-$n$-approximable in $\mathrm{BD}(Y)$ for all $n\in\mathbb{N}$ (otherwise, we could assume that $\supp(a_n)\subset A_0\times (A_n\setminus A_{n-1})$ and the proof would proceed similarly). Let $F_n=A_n\setminus A_{n-1}$. Let $(E_n)_n$ be a disjoint sequence of finite subsets of $\mathbb{N}$ so that $|E_n|=|A_0|$ for all $n\in\mathbb{N}$. For each $n\in\mathbb{N}$, let $b_n\in\mathcal{B}(p_{A_0}H,p_{E_n}H)$ be a unitary. For each $n\in\mathbb{N}$, let $k(n)\in\mathbb{N}$ be so that $\Phi(b_n)$ can be $\varepsilon/17$-$k(n)$-approximable in $\mathrm{BD}(Y)$ and let $m(n)\in\mathbb{N}$ be so that $\Phi(a_{m(n)})$ is not $\varepsilon/8$-$(k(n)+n)$-approximable in $\mathrm{BD}(Y)$. Without loss of generality $(m(n))_n$ is strictly increasing. Notice that $\supp(b_n a_{m(n)})\subset F_{m(n)}\times E_n$ for all $n\in\mathbb{N}$. As both $(E_n)_n$ and $(F_{m(n)})_n$ are disjoint sequences, we have that \[ \mathrm{SOT}\text{-}\sum_{n\in\mathbb{N}}\lambda_nb_na_{m(n)}\in \mathcal{B}(H) \] for all $\lambda_n\in\mathbb{D}^\mathbb{N}$. As $\Phi$ is strongly continuous and compact preserving, each $\Phi(b_na_{m(n)})$ is compact and \[\mathrm{SOT}\text{-}\sum_{n\in\mathbb{N}}\lambda_n\Phi(b_na_{m(n)})\in \mathrm{BD}(Y)\] for all $(\lambda_n)_n\in\mathbb{D}^\mathbb{N}$. By Lemma \ref{Lemma4.9BD}, there exists $s'>0$ so that $\Phi(b_na_{m(n)})$ is $\varepsilon/16$-$s'$-approximable in $\mathrm{BD}(Y)$ for all $n\in\mathbb{N}$. Fix a sequence $(c_n)_n$ in $\mathrm{BD}(Y)$ so that $\propg(c_n)\leq s'$ and $\|\Phi(b_na_{m(n)})-c_n\|\leq \varepsilon/16$ for all $n\in\mathbb{N}$. As $b_n$ is unitary, each $\Phi(b_n^{-1})$ is $\varepsilon/17$-$k(n)$-approximable in $\mathrm{BD}(Y)$. Fix $(d_n)_n$ in $\mathrm{BD}(Y)$ so that $\propg(d_n)\leq k(n)$ and $\|\Phi(b_n^{-1})-d_n\|\leq \varepsilon/17$ for all $n\in\mathbb{N}$. Then, for all $n\in\mathbb{N}$, \begin{align*} \|\Phi(a_{m(n)})- & d_nc_n\|=&\\=&\norm{\Phi(b_n^{-1})\Phi(b_na_{m(n)})-d_nc_n}\\ \leq& \|\Phi(b_n^{-1})\Phi(b_na_{m(n)})-\Phi(b_n)^{-1} c_n\|+\|\Phi(b_n)^{-1} c_n-d_nc_n\|\\ \leq& \|\Phi(b_na_{m(n)})-c_n\|+\|\Phi(b_n)^{-1}-d_n\|\cdot\|c_n\|\\ \leq& \frac{\varepsilon}{16}+\frac{\varepsilon}{17}\Big(1+\frac{\varepsilon}{16}\Big)\leq \frac{\varepsilon}{8}. \end{align*} As $\propg(d_nc_n)\leq k(n)+s'$, then $\Phi(a_{m(n)})$ is $\varepsilon/2$-$(k(n)+s')$-approximable in $\mathrm{BD}(Y)$ for all $n\in\mathbb{N}$. For $n>s'$, this gives a contradiction. \end{proof} By the previous claim we can pick mutually disjoint finite sets $E_n\subseteq \mathbb{N}$ and a sequence of contractions $(a_n)_n$ so that $a_n\in \mathcal{B}(P_{E_N}H)$ and $\Phi(a_n)$ is not $\varepsilon/4$-$n$-approximable for all $n\in\mathbb{N}$. Since the $E_n$'s are mutually disjoint, $a_{\bar\lambda}\in\mathcal{B}(H)$ for all $\bar\lambda\in\mathbb{D}^\mathbb{N}$. By Lemma \ref{Lemma4.9BD}, there exists $s>0$ so that $\Phi(a_n)$ is $\varepsilon/4$-$s$-approximable in $\mathrm{BD}(Y)$ for all $n\in\mathbb{N}$, a contradiction. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{ThmCoarseLike}] Fix u.l.f metric spaces $X$ and $Y$, and an isomorphism $\Phi\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$. By Lemma~\ref{LemmaCoarseLikeRA}, it is enough to show that $\Phi\restriction\chi_F\mathrm{C}^*(X)\chi_F$ is coarse-like for all finite $F\subseteq X$. Therefore, finiteness of $F$ implies that we only need to show that $\Phi\restriction\chi_{\{x\}}\mathrm{C}^*(X)\chi_{\{y\}}$ is coarse-like for all $x$ and $y$ in $X$. To simplify the notation, assume $x=y$. We prove the following stronger statement: \begin{enumerate} \item[($*$)] For every $\varepsilon>0$ there is a finite $F\subseteq Y$ such that $\norm{\chi_F\Phi(a)\chi_F-\Phi(a)}<4\varepsilon$ for all positive contractions $a\in \chi_{\{x\}}\mathrm{C}^*(X)\chi_{\{x\}}$. \end{enumerate} Notice that, since $\chi_F\Phi(a)\chi_F$ has propagation at most $\diam(F)$, for all $a\in\mathrm{C}^*(X)$, ($*$) implies the desired result. We proceed by contradiction, so assume the statement in ($*$) fails for $\varepsilon>0$. Let $(\xi_n)_n$ be an orthonormal base for $H$, and let $p_n$ be the projection onto $\spann\{\xi_i\mid i\leq n\}$. For each $n\in\mathbb{N}$, let $q_n=\chi_{\{x\}}\otimes p_n$. Given a finite $I\subset \mathbb{N}$, write $q_I= q_{\max I}-q_{\min I}$. \begin{claim} For every finite $F\subseteq Y$ and $n\in\mathbb{N}$ there is a positive contraction $a\in \chi_{\{x\}}\mathrm{C}^*(X)\chi_{\{x\}}$ with $aq_n=q_na=0$ and \[ \norm{\chi_{Y\setminus F}\Phi(a)\chi_{Y\setminus F}}>\varepsilon^2. \] \end{claim} \begin{proof} Fix a finite $F\subset Y$ and $n\in\mathbb{N}$. Since $q_n \mathrm{C}^*(X) q_n$ is finite dimensional and each element of $q_n\mathrm{C}^*(X)q_n$ has finite rank, there is a finite $G\subseteq Y$ such that whenever $a\in q_n\mathrm{C}^*(X)q_n$ is a contraction and $G'\supset G$, then \[\norm{\Phi(a)-\chi_{G'}\Phi(a)}<\varepsilon^2.\] Fix $G'=G\cup F$. By our choice of $\varepsilon$, there is a positive contraction $b\in \chi_{\{x\}}\mathrm{C}^*(X)\chi_{\{x\}}$ such that $\norm{\chi_{G'}\Phi(b)\chi_{G'}-\Phi(b)}>4\varepsilon$. In particular, the triangular inequality implies that \[\norm{\chi_{Y\setminus G'}\Phi(b)}=\norm{\chi_{G'}\Phi(b)-\Phi(b)}>2\varepsilon.\] Let $a=(\chi_{\{x\}}-q_n)b^2(\chi_{\{x\}}-q_n)$. So $a$ is a positive contraction. Assume for a contradiction that $\norm{\chi_{Y\setminus F}\Phi(a)\chi_{Y\setminus F}}\leq \varepsilon^2$. Then $\norm{\chi_{Y\setminus G'}\Phi(a)\chi_{Y\setminus G'}}\leq \varepsilon^2$. Since \[ b^2=a+q_nb^2q_n+q_nb^2(\chi_{\{x\}}-q_n)+(\chi_{\{x\}}-q_n)b^2q_n, \] we have that $\norm{\chi_{Y\setminus G'}\Phi(b^2)\chi_{Y\setminus G'}}\leq 4\varepsilon^2$, so $\norm{\chi_{Y\setminus G'}\Phi(b)}\leq 2\varepsilon$. This is a contradiction. \end{proof} Notice that $a=\mathrm{SOT}\text{-}\lim_m(q_m-q_n)a(q_m-q_n)$, for all $a\in \chi_{\{x\}}\mathrm{C}^*(X)\chi_{\{x\}}$ with $q_na=aq_n=0$. Therefore, the previous claim can be used to produce a sequence $(a_n)_n$ of contractions in $\chi_{\{x\}}\mathrm{C}^*(X)\chi_{\{x\}}$, a sequence of natural numbers $(k(n))_n$, and sequences $(F_n)_n$ and $(I_n)_n$ of finite subsets of $Y$ and $\mathbb{N}$, respectively, so that $(F_n)_n$ is a disjoint sequence, $\max I_n<\max I_ {n+1}$ for all $n\in\mathbb{N}$, and \[ \norm{(\chi_{F_n}\otimes p_{k(n)})\Phi(q_{I_n}a_na^*_nq_{I_n})(\chi_{F_n}\otimes p_{k(n)})}>\varepsilon^2/4 \] for all $n\in\mathbb{N}$. 
Hence, the $\mathrm{C}^*$-equality gives that \[ \norm{\chi_{F_n}\otimes p_{k(n)}\Phi(q_{I_n}a_n)}>\varepsilon/2 \] for all $n\in\mathbb{N}$. As both $(\chi_{F_n}\otimes p_{k(n)})_n$ and $(\Phi(q_{I_n}a_n))_n$ are sequences of compact operators converging to zero in the strong operator topology, passing to a subsequence if necessary, we can assume that \[ \norm{\chi_{F_n}\otimes p_{k(n)}\Phi(q_{I_m}a_m)}\leq 2^{-n-3}\varepsilon \] for all $n\neq m$ in $\mathbb{N}$. As $(F_n)_n$ is a disjoint sequence, $b=\mathrm{SOT}\text{-}\sum_n\chi_{F_n}\otimes p_{k(n)}$ exists and it clearly belongs to $\mathrm{C}^*(Y)$. Let $c=\Phi^{-1}(b)$; in particular $c\in \mathrm{C}^*(X)$. Then \begin{align*} \|cq_{I_m}\|&\geq \|cq_{I_m}a_m\|=\|b\Phi(q_{I_m}a_m)\|\\ &\geq \norm{\chi_{F_n}\otimes p_{k(n)}\Phi(q_{I_m}a_m)}-\sum_{n\neq m}\norm{\chi_{F_n}\otimes p_{k(n)}\Phi(q_{I_m}a_m)} \\ &\geq \varepsilon/4 \end{align*} for all $m\in\mathbb{N}$. As $(I_n)_n$ are disjoint, this contradicts the fact that $c\in \mathrm{C}^*(X)$, i.e., that $c$ is locally compact. \end{proof} \begin{remark} We used positivity and the fact that the $q_n$'s are projection in the above proof. We would not need it, by playing with functional analysis. So if we ever want, we could follow the strategy highlighted above to show that if $\Phi\colon\mathrm{C}^*(X)\to\mathrm{C}^*(Y)$ is a strongly continuous linear compact preserving map with the property that for every sequence of operators $(a_n)$ we have that if $\Phi(a_n)$ converges strongly to $b$ in $\mathrm{C}^*(Y)$ then there is $c\in\mathrm{C}^*(X)$ such that $a_n$ converges strongly to $c$. This shows that having a strongly continuous sequence which is converging outside $\mathrm{C}^*(X)$ and which is sent to a strongly converging sequence converging inside $\mathrm{C}^*(Y)$ is indeed the only obstruction to generalising Theorem~\ref{ThmCoarseLikeURA} to the Roe algebra setting (see, e.g., the discussion after Proposition~\ref{PropNotCoarseLike}). \end{remark} We finish the section introducing a weaker version of coarse-likeness for which Theorem \ref{ThmCoarseLikeURA} has an equivalent for Roe algebras. \begin{definition}\label{DefiAsympCoarseLikeDEF} Let $X$ and $Y$ be metric spaces. A map $\Phi\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$ is \emph{asymptotically coarse-like} if for all $\varepsilon>0$ and $r>0$ there are $s>0$ and a cofinite $X'\subset X$ so that $\Phi(a)$ is $\varepsilon$-$s$-approximable in $\mathrm{C}^*(Y)$ for all contractions in $a\in \mathrm{C}^*(X')$ with $\propg(a)\leq r$. \end{definition} \begin{theorem}\label{ThmAsympCoarseLike} Let $X$ and $Y$ be u.l.f.\@ metric spaces. Every strongly continuous compact preserving linear map $\Phi\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$ is asymptotically coarse-like. \end{theorem} \begin{proof} Suppose this fails. So there is $\varepsilon>0$ and $r>0$ so that for all $n\in\mathbb{N}$ and all cofinite $X'\subset X$, there is a contraction $a\in \mathrm{C}^*(X')$ with $\propg(a)\leq r$ so that $\Phi(a)$ is not $\varepsilon$-$n$-approximable. Then, by Proposition \ref{PropSOTConvApprox}, we can pick a disjoint sequence $(X_n)_n$ of finite subsets of $X$ and a sequence $(a_n)_n$ of contractions so that $a_n\in \mathrm{C}^*(X_n)$, $\propg(a_n)\leq r$, and $\Phi(a_n)$ is not $\varepsilon$-$n$-approximable for all $n\in\mathbb{N}$. Since each $X_n$ is finite and $\Phi$ preserves the compacts, each $\Phi(a_n)$ is compact. 
Hence, as $(X_n)_n$ is a disjoint sequence and as each $a_n$ has propagation at most $r$, strong continuity of $\Phi$ gives that $\mathrm{SOT}\text{-}\sum_n\lambda_n\Phi(a_n)\in \mathrm{C}^*_u(Y)$ for all $(\lambda_n)_n\in \mathbb{D}^\mathbb{N}$. Therefore, Lemma \ref{Lemma4.9} implies that there is $s>0$ to that each $\Phi(a_n)$ is $\delta$-$s$-approximable; contradiction. \end{proof} \begin{remark} Although we will prove Theorems \ref{thm:main} and \ref{thm:WW} below using Theorem \ref{ThmCoarseLike}, we point out that both those results could be obtained (in a very similar way) using Theorem \ref{ThmAsympCoarseLike} above instead of Theorem \ref{ThmCoarseLike}. \end{remark} \section{The multiplier algebra of $\mathrm{C}^*(X)$}\label{SecMult} In this short section, we characterize $\mathrm{BD}(X)$ as the multiplier algebra of $\mathrm{C}^*(X)$. As a consequence, this shows that $\mathrm{Inn}(\mathrm{BD}(X))$ is a normal subgroup of $\mathrm{Aut}(\mathrm{C}^*(X))$. Since all automorphisms of $\mathrm{C}^*(X)$ are strongly continuous, being induced by a unitary in $\mathcal{B}(\ell_2(X,H))$ (see e.g., \cite[Lemma 3.1]{SpakulaWillett2013AdvMath}), we have that all automorphisms of $\mathrm{C}^*(X)$ extend to automorphisms of $\mathrm{BD}(X)$. An operator algebraist used to work with multipliers would not be surprised by this result, and may even find it obvious. However, we do not know of a proof that does not use uniform approximability in some way. \begin{theorem}\label{prop:mult} Let $X$ be a u.l.f.\@ metric space. Then $\mathrm{BD}(X)=\mathcal M(\mathrm{C}^*(X))$. \end{theorem} \begin{proof} We use the characterisation of the multiplier algebra given in \cite[II.7.3.5]{Blackadar.OA}. As $\mathrm{C}^*(X)$ is already represented faithfully on $\ell_2(X,H)$, the multiplier algebra of $\mathrm{C}^*(X)$ coincides with its idealizer, that is, \[ \mathcal M(\mathrm{C}^*(X))=\Big\{b\in\mathcal{B}(\ell_2(X,H))\mid \forall a\in\mathrm{C}^*(X),\ ba, ab\in\mathrm{C}^*(X)\Big\}. \] As $\mathrm{C}^*(X)$ is an ideal in $\mathrm{BD}(X)$, we clearly have that $\mathrm{BD}(X)\subseteq\mathcal M(\mathrm{C}^*(X))$. \begin{claim}\label{claim:multipliers} Let $b\in\mathcal M(\mathrm{C}^*(X))$, $\varepsilon>0$, and let $F\subseteq X$ be finite. Then there is a finite $G\subset X$ such that $\norm{\chi_Gb\chi_F-b\chi_F}<\varepsilon$ and $\norm{\chi_Fb\chi_G-\chi_Fb}<\varepsilon$. In particular, $b\chi_F $ and $\chi_Fb$ belong to $ \mathrm{BD}(X)$. \end{claim} \begin{proof} Suppose not. Then, without loss of generality, we assume that there is a sequence $(G_n)_n$ of disjoint finite subsets $X$ so that $\norm{\chi_{G_n}b\chi_F}>\varepsilon/2$ for all $n\in\mathbb{N}$. For each $n\in\mathbb{N}$, fix a unit vector $\xi_n$ such that $\norm{\chi_{G_n}b\chi_F\xi_n }>\varepsilon/2$ and let $p_n$ be a finite rank projection in $\mathcal{B}(H)$ so that $\|(\chi_{G_n}\otimes p_n)b\chi_F\xi_n\|>\varepsilon/2 $. Then $\mathrm{SOT}\text{-}\sum_n(\chi_{G_n}\otimes p_n)\in\mathrm{C}^*(X)$, so, as $b\chi_F\in\mathcal M(\mathrm{C}^*(X))$, then $\mathrm{SOT}\text{-}\sum_n(\chi_{G_n}\otimes p_n)b\chi_F\in\mathrm{C}^*(X)$. Fix $k\in \mathbb{N}$ such that $\mathrm{SOT}\text{-}\sum_n(\chi_{G_n}\otimes p_n)b\chi_F$ can be $\varepsilon/2$-$k$-approximated in $\mathrm{C}^*(X)$ and $m$ large enough so that $d(G_m,F)>k$. Since $\chi_{G_m}\otimes p_m$ and $\chi_F$ have propagation $0$, $(\chi_{G_{m}}\otimes p_m)b\chi_F$ can be $\varepsilon/2$-$k$-approximated in $\mathrm{C}^*(X)$. 
Let $c\in \mathrm{C}^*(X)$ be an element with propagation at most $k$ so that $\norm{c-(\chi_{G_{m}}\otimes p_m)b\chi_F}<\varepsilon/2$. Since $c$ has propagation at most $k$ and $d(G_{m},F)>k$, then $(\chi_{G_{m}}\otimes p_m)c\chi_F=0$. Hence $\norm{(\chi_{G_{m}}\otimes p_m)b\chi_F}< \varepsilon/2$, a contradiction. \end{proof} In order to get a contradiction, suppose $b\in\mathcal M(\mathrm{C}^*(X))$ is such that there is $\varepsilon>0$ for which $b$ cannot be $\varepsilon$-$n$-approximated for every $n\in\mathbb{N}$. We will construct two sequences $(F_n)_n$ and $(p_n)_n$ of finite subsets of $X$ and finite-rank projections in $\mathcal B(H)$, respectively, with the following properties: \begin{itemize} \item the $F_n$ are disjoint, \item for all $n$, \[ \norm{(\chi_{\bigcup_{m\neq n}F_m}\otimes 1_H)b(\chi_{F_n}\otimes p_n)}<2^{-n}, \] \item each $(\chi_{F_n}\otimes p_n)b(\chi_{F_n}\otimes p_n)$ cannot be $\varepsilon/2$-$n$-approximated. \end{itemize} We do this by induction. Let $(q_n)_n$ be a sequence of finite-rank projections in $\mathcal{B}(H)$ which is converging strongly to $1_H$. Since $b$ cannot be $\varepsilon$-$0$-approximated and \[ b=\mathrm{SOT}\text{-}\lim_{\substack{F\subseteq X\text{ finite }\\n\in\mathbb{N}}}(\chi_F\otimes q_n)b(\chi_F\otimes q_n), \] Proposition~\ref{PropSOTConvApproxBD} gives a finite $F_0\subseteq X$ and $n\in\mathbb{N}$ such that $(\chi_F\otimes q_n)b(\chi_F\otimes q_n)$ cannot be $\varepsilon$-$0$-approximated. Let $p_0=q_n$. We now make the inductive step: suppose that $p_0,\ldots,p_n$ and $F_0,\ldots,F_n$ have been defined. Let $b_n=b\chi_{\bigcup_{m\leq n}F_n}$. Using the previous claim, pick a finite $G\subseteq X$ such that $\norm{\chi_Gb_n-b_n}<2^{-n-1}$. By Claim \ref{claim:multipliers}, $\chi_{X\setminus G}b\chi_G+\chi_Gb\chi_{X\setminus G}+\chi_Gb\chi_G\in\mathrm{BD}(X)$, so there is $n'>n$ such that $\chi_{X\setminus G}b\chi_G+\chi_Gb\chi_{X\setminus G}+\chi_Gb\chi_G$ can be $\varepsilon/2$-$n'$-approximated. As \[ b=\chi_{X\setminus G}b\chi_{X\setminus G}+\chi_{X\setminus G}b\chi_G+\chi_Gb\chi_{X\setminus G}+\chi_Gb\chi_G \] cannot be $\varepsilon$-$n'$-approximated, $\chi_{X\setminus G}b\chi_{X\setminus G}$ cannot be $\varepsilon/2$-$n'$-approximated. As \[ \chi_{X\setminus G}=\mathrm{SOT}\text{-}\lim_{\substack{F\subseteq X\setminus G \text{ finite}\\ i\in\mathbb{N}}}\chi_{F}\otimes q_i, \] Proposition~\ref{PropSOTConvApproxBD} gives a finite $F_{n+1}$ and $i\in\mathbb{N}$ such that $(\chi_{F_{n+1}}\otimes q_i)b(\chi_{F_{n+1}}\otimes q_i)$ cannot be $\varepsilon/2$-$n'$-approximated. Setting $p_{n+1}=q_i$ concludes the construction. Let now $c=\mathrm{SOT}\text{-}\sum_n(\chi_{F_n}\otimes p_n)$ and notice that \[ cbc=\mathrm{SOT}\text{-}\sum_n(\chi_{F_n}\otimes p_n)b(\chi_{F_n}\otimes p_n)+d \] where \[ d=\mathrm{SOT}\text{-}\sum_{ n\in \mathbb{N} } \Big((\chi_{F_n}\otimes p_n)b\sum_{m\neq n} \chi_{F_m}\otimes p_m\Big). \] By our choice of $(F_n)_n$ and $(p_n)_n$, we have that \begin{eqnarray*} \norm{\Big(1_X\otimes 1_H-\sum_{m\leq n}\chi_{F_m}\otimes p_m\Big)d}&\leq& \sum_{m>n}\norm{(\chi_{F_m}\otimes p_m)b\sum_{n'\neq m}\chi_{F_{n'}}\otimes p_{n'}}\\&\leq&\sum_{m>n}2^{-m}\leq 2^{-n+1} \end{eqnarray*} for all $n\in\mathbb{N}$. Hence $d$ is compact, so $d\in \mathrm{C}^*(X)$. Let \[ b'=\mathrm{SOT}\text{-}\sum_n(\chi_{F_n}\otimes p_n)b(\chi_{F_n}\otimes p_n). \] As $b\in\mathcal M(\mathrm{C}^*(X))$, it follows that $cbc\in\mathrm{C}^*(X)$. Hence, as $d\in\mathrm{C}^*(X)$, we have that $b'\in\mathrm{C}^*(X)$. 
Pick $n$ such that $b'$ can be $\varepsilon/2$-$n$-approximated. Since $\chi_{F_n}\otimes p_n$ has propagation $0$, $(\chi_{F_n}\otimes p_n)b'(\chi_{F_n}\otimes p_n)$ can be $\varepsilon/2$-$n$-approximated. This is a contradiction since $(\chi_{F_n}\otimes p_n)b'(\chi_{F_n}\otimes p_n)=(\chi_{F_n}\otimes p_n)b(\chi_{F_n}\otimes p_n)$. \end{proof} The following is a simple consequence of Theorem \ref{prop:mult}. \begin{corollary}\label{CorNormalSubgroup} Let $X$ be a u.l.f.\@ metric space. Any $\Phi\in\mathrm{Aut}( \mathrm{C}^*(X))$ extends to an automorphism of $\mathrm{BD}(X)$. Moreover, $\mathrm{Inn}(\mathrm{BD}(X))$ is a normal subgroup of $\mathrm{Aut}(\mathrm{C}^*(X))$.\qed \end{corollary} \begin{remark} If one is only interested in Corollary \ref{CorNormalSubgroup}, Theorem \ref{prop:mult} is not necessary. In fact, Theorem \ref{ThmCoarseLike} gives us an easy proof of Corollary \ref{CorNormalSubgroup}. We outline it here as an example of the power of Theorem \ref{ThmCoarseLike}. Fix $\Phi\in \mathrm{Aut}(\mathrm{C}^*(X))$ and let $u\in \mathcal{B}(\ell_2(X,H))$ be a unitary so that $\Phi=\Ad(u)$ (e.g., \cite[Lemma 3.1]{SpakulaWillett2013AdvMath}). Fix $r>0$ and $\varepsilon>0$ and let $a\in \mathrm{BD}(X)$ be a contraction with $\propg(a)\leq r$. Then, $a$ can be easily written as $a=\mathrm{SOT}\text{-}\lim_n a_n$ where $(a_n)_n$ is a sequence of contractions in $\mathrm{C}^*(X)$ with $\propg(a_n)\leq r$ for all $n\in\mathbb{N}$. By Theorem \ref{ThmCoarseLike}, there is $s=s(r,\varepsilon)>0$ and a sequence $(b_n)_n$ in $\mathrm{C}^*(X)$ so that $\propg(b_n)\leq s$ and $\|\Phi(a_n)-b_n\|\leq \varepsilon$ for all $n\in\mathbb{N}$. Using weak operator compactness and going to a subsequence, we can assume that $b=\mathrm{WOT}\text{-}\lim_nb_n$ exists. Clearly, $\propg(b)\leq r$ and $\|\Ad(u)(a)-b\|\leq \varepsilon$. As $\varepsilon$ was arbitrary, this shows that $\Ad(u)(a)\in \mathrm{BD}(X)$. We leave the remaining details to the reader. \end{remark} \section{Proof of the main result}\label{SectionMakingIsoCoarseLike} We use the uniform approximability results of \S\ref{SecUnifApprox} to prove Theorems~\ref{thm:main} and \ref{ThmCartan}. \subsection{Technical lemmas}\label{SubsectionTechnical} We prove several technical lemmas in this subsection. Their proofs are inspired by techniques in \cite[Section 6]{WhiteWillett2017}. \begin{definition} A u.l.f.\@ metric space $X$ has the \emph{operator norm localisation property} (\emph{ONL}) if for all $s>0$ and all $\rho\in (0,1)$ there is $r>0$ so that if $a\in \mathcal{B}(\ell_2(X,H))$ has propagation at most $s$ then there exists a unit vector $\xi\in \ell_2(X,H)$ with $\diam(\supp(\xi))\leq r$ so that $\|a\xi\|\geq \rho\|a\|$.\footnote{Recall, $\supp(\xi)=\{x\in X\mid \xi(x)\neq 0\}$ for a given $\xi:X\to H$.} \end{definition} By \cite[Theorem 4.1]{Sako2014}, a u.l.f.\@ metric space has property A if and only if it has ONL. The following assumption will be recurrent: \begin{assumption}\label{AssumptionIsoONL} Let $X$ and $Y$ be u.l.f.\@ metric spaces with ONL, $H$ be a separable infinite-dimensional Hilbert spaces and let $\Phi\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$ be an isomorphism. 
For $\delta>0$, and projections $p\in \mathrm{C}^*(X)$ and $q\in\mathrm{C}^*(Y)$ of rank 1, we let \[Y_{p,\delta}=\{y\in Y \mid \|\Phi(p)\chi_{\{y\}}\|\geq \delta\}\] and \[X_{q,\delta}=\{x\in X\mid \|\Phi^{-1}(q)\chi_{\{x\}}\|\geq \delta\}.\] \end{assumption} We point out that isomorphisms between Roe algebras are automatically strongly continuous and rank preserving (see \cite[Lemma 3.1]{SpakulaWillett2013AdvMath}). This will be used with no further mention in the proofs of the lemmas below. \begin{lemma}\label{Lemma1} In the setting of Assumption \ref{AssumptionIsoONL}, for all $r>0$ and $\varepsilon>0$ there is $t>0$ so that for all projections $p\in \mathrm{C}^*_u(X)$ and $q\in\mathrm{C}^*(Y)$ with propagation at most $r$ there is $E\subset X$ with $\diam(E)\leq t$ so that \[\|\Phi(p)q\chi_E\|\geq (1-\varepsilon)\|\Phi(p)q\|-\varepsilon.\] \end{lemma} \begin{proof} Fix $\varepsilon>0$. Theorem \ref{ThmCoarseLike} gives $s>0$ so that $\Phi(p)$ is $\varepsilon/2$-$s$-approximable for all projections $p\in \mathrm{C}^*(X )$ with $\propg(p)\leq r$. Fix projections $p\in \mathrm{C}^*(X)$ and $q\in \mathrm{C}^*(Y)$ with propagation at most $r$. So there is $b\in \mathrm{C}^*(Y)$ with $\propg(b)\leq s+r$ so that $\|\Phi(p)q-b\|\leq \varepsilon/2$. As $Y$ has ONL, there exists $t>0$ (depending only on $\varepsilon$ and $r$) and a unit vector $\xi\in \ell_2(Y,H)$ so that $\supp(\xi)\leq t$ and $\|b\xi\|\geq (1-\varepsilon)\|b\|$. Let $E=\supp(\xi)$. Hence, \begin{align*} \|\Phi(p)q\chi_E\|&\geq\|b\chi_E\|- \|\Phi(p)q\chi_E-b\chi_E\|\\ &\geq (1-\varepsilon)\|b\| -\frac{\varepsilon}{2}\\ &\geq (1-\varepsilon)\|\Phi(p)q\|-(1-\varepsilon)\|\Phi(p)q-b\|-\frac{\varepsilon}{2}\\ &\geq (1-\varepsilon)\|\Phi(p)q\|-\varepsilon, \end{align*} and we are done. \end{proof} Given $n\in\mathbb{N}$, $r\geq 0$, we write \[\Proj_{n,r}(X)=\Big\{p\in \Proj(\mathrm{C}^*(X))\mid \rank(p)\leq n\text{ and }\propg(p)\leq r\Big\}.\] We define $\Proj_{n,r}(Y)$ analogously. \begin{lemma}\label{LemmaYxndeltaBoundedDiam} In the setting of Assumption \ref{AssumptionIsoONL}, for all $r>0$ and $\delta>0$, we have that \[\sup_{p\in \Proj_{1,r}(X) }\diam(Y_{p,\delta})<\infty.\] \end{lemma} \begin{proof} By Lemma \ref{Lemma1}, there is $t>0$ so that for all $p\in \Proj_{1,r}(X)$ there is $E\subset Y$ with $\diam(E)\leq t$ so that $\|\Phi(p)\chi_E\|^2> 1-\delta^2$. Fix $p\in \Proj_{1,r}(X)$ and let $E$ be as above. As $\Phi$ is rank preserving, $\Phi(p)$ has rank 1. Pick a unit vector $\xi\in \ell_2(Y,H)$ in the range of $\Phi(p)$. Then \[\|\Phi(p)\chi_F\|=\|\chi_F\Phi(p)\|=\|\chi_F\xi\|\] for all $F\subset Y$. In particular, $\|\chi_E\xi\|^2> 1-\delta^2$. If $y\not\in E$, then \[\|\Phi(p)\chi_{\{y\}}\|^2=\|\chi_{\{y\}}\xi\|^2\leq \|\xi\|^2-\|\chi_{Y\setminus \{y\}}\|^2\leq 1-\|\chi_E\xi\|^2 < \delta^2.\] So, $y\not\in Y_{p,\delta}$. This shows that $Y_{p,\delta}\subset E$ and we must have $\diam(Y_{p,\delta})\leq t$. \end{proof} \begin{lemma}\label{Lemma2} In the setting of Assumption \ref{AssumptionIsoONL}, for all $r>0$ and $\varepsilon>0$ there is $\delta>0$ so that \[\inf_{p\in \Proj_{1,r}(X)}\|\Phi(p)\chi_{Y_{p,\delta}}\|\geq 1-\varepsilon.\] \end{lemma} \begin{proof} Fix $r>0$ and $\varepsilon\in (0,1)$. By Lemma \ref{Lemma1}, there is $t>0$ so that for all $p\in \Proj_{1,r}(X )$ there is $E\subset Y$ with $\diam(E)\leq t$ such that $\|\Phi(p)\chi_E\|^2> 1-\varepsilon$. As $Y$ is u.l.f., $N=\sup_{y\in Y}|B_t(y)|$ is finite. Let $\delta\in (0, \sqrt{(\varepsilon-\varepsilon^2)/N})$ and fix $p\in \Proj_{1,r}(X)$. 
Let $\xi\in \ell_2(Y,H)$ be a unit vector in the range of the rank 1 projection $\Phi(p)$. Then, picking $E$ as above, we have that \begin{align*} \|\Phi(p)\chi_{ Y_{p,\delta}}\|^2& =\|\chi_{Y_{p,\delta}}\xi\|^2\\ &\geq \|\chi_{E\cap Y_{p,\delta}}\xi\|^2\\ &\geq \|\chi_E\xi\|^2-\|\chi_{ E\setminus Y_{p,\delta}}\xi\|^2\\ &= \|\Phi(p)\chi_E\|^2-\|\Phi(p)\chi_{ E\setminus Y_{p,\delta}}\|^2\\ &\geq 1-\varepsilon-\delta^2N\\ &\geq (1-\varepsilon)^2, \end{align*} and we are done. \end{proof} Before stating the next lemma, we need to introduce some technical notation. Given positive reals $t$, $r$ and $k$, we denote by $\cD_{t,r,k}(X)$ the set of all families $(p_i)_{i\in \mathbb{N}}$ of orthogonal projections in $\mathrm{C}^*(X)$ satisfying: \begin{enumerate} \item each $p_i$ has rank at most 1, \item each $p_i$ has propagation at most $r$, and \item $|\{i\in\mathbb{N}\mid p_i\chi_E\neq 0\}|\leq k$ for any $E\subset X$ with $\diam(E)\leq t$. \end{enumerate} If $(p_i)_{i\in \mathbb{N}}\in \cD_{t,r,k}(X)$, then $\mathrm{SOT}\text{-}\sum_{i\in \mathbb{N}}p_i \in \mathrm{C}^*(X)$. Let $\bar p=\mathrm{SOT}\text{-}\sum_{i\in\mathbb{N}}p_i$. By abuse of notation, we identify $\bar p$ with $(p_i)_{i\in \mathbb{N}}$ and write $\bar p\in \cD_{t,r,k}(X)$. We define $\cD_{t,r,k}(Y)$ analogously, and write $\bar q\in\cD_{t,r,k}(Y)$ if $\bar q=\mathrm{SOT}\text{-}\sum_{i\in\mathbb{N}}q_i$ where $(q_i)_{i\in\mathbb{N}}$ is in $\cD_{t,r,k}(Y)$. \begin{remark}\label{RemarkDktX} A word on the prototypical elements of $\cD_{t,r,k}(X)$ is useful: If $(P_x)_{x\in X}$ is a family of projections on $H$ and $p_x=\chi_{\{x\}}\otimes P_x$, then $(p_x)_{x\in X}$ belongs to $\cD_{t,r,k}(X)$ for any $t$ and $r\geq 0$, where $k=\sup_{x\in X}|B_t(x)|$. More generally, let $(X_n)_n$ be a sequence of disjoint subsets of $X$ with $r=\sup_n\diam(X_n)<\infty$, $\ell\in\mathbb{N}$, and for each $n\in\mathbb{N}$ let $(p_{n,i})_{i=1}^\ell$ be a family of orthogonal projections in $\mathcal{B}(\ell_2(X_n,H))$ of rank at most 1. Then, for any $t>0$, there is $k>0$ so that $\bar p=((p_{n,i})_{i=1}^\ell)_{n\in\mathbb{N}}\in \cD_{t,r,k}(X)$. Indeed, first notice that the propagation of each $p_{n,i}$ is at most $r$. Moreover, since $X$ is u.l.f., there is $k_0\in\mathbb{N}$ so that any $E\subset X$ with $\diam(E)\leq t$ intersects at most $k_0$-many $X_n$'s. Therefore, $p_{n,i}\chi_E\neq 0$ for at most $k_0\ell$-many $(n,i)$'s. So $\bar p\in \cD_{t,r,k}(X)$ for $k=k_0\ell$. Notice that $k$ depends only on $\ell$ and $t$ (i.e., it depends on neither $\bar p$ not $r$). \end{remark} Let $t,r,k$ and $\delta$ be positive reals, $\bar p\in \cD_{t,r,k}(X)$ and $\bar q=(q_i)_{i\in\mathbb{N}}\in \cD_{t,r,k}(Y)$, we write \[Y_{\bar p,\delta}=\bigcup_{i\in\mathbb{N}} Y_{p_i,\delta}\text{ and }X_{\bar q,\delta}=\bigcup_{i\in\mathbb{N}} X_{q_i,\delta}.\] \begin{lemma}\label{Lemma3} In the setting of Assumption \ref{AssumptionIsoONL}, for all $r>0$ and $\varepsilon>0$, there is $t>0$ so that for all $k\in\mathbb{N}$, there is $\delta>0$ so that \[ \sup_{\bar p\in \cD_{t,r,k}(X )} \|\Phi (\bar p)\chi_{Y\setminus Y_{\bar p,\delta}}\|\leq \varepsilon.\] \end{lemma} \begin{proof} Let $\theta=\varepsilon/(2+\varepsilon)$. 
By Lemma \ref{Lemma1} applied to $\Phi^{-1}$, $r$ and $\theta$, there exists $t>0$ so that for all projections $p\in \mathrm{C}^*(X)$ and $q\in \mathrm{C}^*(Y)$ with propagation at most $r$ there is $E\subset X$ with $\diam (E)\leq t$ so that \[\|\Phi(p\chi_E)q\|=\|\Phi^{-1}(q)p\chi_E\|\geq (1-\theta)\|\Phi^{-1}(q)p\|-\theta=(1-\theta)\|\Phi(p)q\|-\theta.\] Fix $k\in\mathbb{N}$. By Lemma \ref{Lemma2}, pick $\delta>0$ so that $\|\Phi(p)\chi_{Y_{p,\delta}}\|^2\geq 1-(\theta/k)^2$ for all $p\in \Proj_{1,r}(X)$. Fix $\bar p\in\cD_{t,r,k}(X)$. For each $i\in\mathbb{N}$ with $p_i\neq 0$, $p_i$ has rank 1; hence so has $\Phi(p_i)$. For each $i\in\mathbb{N}$, pick a unit vector $\xi_i\in \ell_2(Y,H)$ in the range of $\Phi( p_i)$. Then \begin{align*} \|\Phi( p_i)(1-\chi_{Y_{ p_i,\delta}})\|^2&=\|(1-\chi_{Y_{ p_i,\delta}})\xi_i\|^2=\|\xi_i\|^2-\|\chi_{Y_{p_i,\delta}}\xi_{i}\|^2\\ &= 1-\|\Phi(p_i)\chi_{Y_{p_i,\delta}}\|^2\leq(\theta/k)^2. \end{align*} In particular, for all $i\in\mathbb{N}$ \[ \|\Phi( p_i)(1-\chi_{Y_{ \bar p,\delta}})\|\leq \|\Phi( p_i)(1-\chi_{Y_{ p_i,\delta}})\|\leq \theta/k. \] Let $C= Y\setminus Y_{\bar p,\delta}$ and $Q$ be a finite rank projection on $H$. So $q=\chi_C\otimes Q$ is a projection in $\mathrm{C}^*(Y)$ and $\propg(q)=0$. As $\propg(\bar p)\leq r$, our choice of $t$ gives $E\subset Y$ with $\diam(E)\leq t$ so that \[\|\Phi(\bar p)q\|\leq \frac{\|\Phi(\bar p\chi_E)q\|+\theta}{1-\theta}.\] Therefore, as $\bar p\in\cD_{t,r,k}(X)$, we must have \begin{align*} \|\Phi( \bar p)q\|&\leq \frac{k \cdot \sup_{i\in\mathbb{N}}\|\Phi( p_i)q\|+\theta}{1-\theta}\\ &\leq \frac{k\cdot \sup_{i\in\mathbb{N}}\|\Phi( p_i)(1-\chi_{Y_{ \bar p,\delta}})\|+\theta}{1-\theta}\\ &\leq \frac{2\theta}{1-\theta} \end{align*} As $Q$ was an arbitrary finite rank projection on $H$, this shows that \[ \|\Phi( \bar p)(1-\chi_{Y_{ \bar p,\delta}})\| \leq \frac{2\theta}{1-\theta}\leq \varepsilon,\] and we are done. \end{proof} We need one more lemma before presenting the proof of Theorem \ref{ThmCartan}. \begin{lemma}\label{LemmaDistYpdelta} In the setting of Assumption \ref{AssumptionIsoONL}, for every positive reals $\delta$ and $t$ there exists $s>0$ so that for all $E\subset X $ with $\diam(E)\leq t$ and all rank 1 projections $p$ and $q$ in $\mathcal{B}(\ell_2(E,H))$ we have that $\partial(Y_{p,\delta},Y_{q,\delta})\leq s$. \end{lemma} \begin{proof} Suppose the lemma fails for $\delta$ and $t$. Without loss of generality, assume $\delta\in (0,1)$. Then there are a sequence of disjoint subsets $(E_n)_n$ of $X$, and sequences of rank 1 projections $(p_n)_n$ and $(q_n)_n$ so that \begin{enumerate} \item $\diam(E_n)\leq t$ for all $n\in\mathbb{N}$, \item $p_n,q_n\in \mathcal{B}(\ell_2(E_n,H))$ for all $n\in\mathbb{N}$, and \item $\partial(Y_{p_n,\delta},Y_{q_n,\delta})>n$ for all $n\in\mathbb{N}$. \end{enumerate} For each $n\in\mathbb{N}$, let $a_n\in \mathcal{B}(\ell_2(E_n,H))$ be a partial isometry so that $a_na_n^*=p_n$ and $a_n^*a_n=q_n$. Let $\gamma>0$ be so that $\delta=\gamma(2-\gamma)$. As $\Phi$ is coarse-like (Theorem \ref{ThmCoarseLike}), there is $s>0$ so that $\Phi(a)$ is $\gamma/4$-$s$-approximable for all contractions $a\in \mathrm{C}^*(X)$ with $\propg(a)\leq t$. Notice that all operators in each $\mathcal{B}(\ell_2(E_n,H))$ must have propagation at most $t$. Hence, for each $n\in\mathbb{N}$ pick $b_n\in \mathrm{C}^*(Y)$ so that $\propg(b_n)\leq s$ and $\|\Phi(a_n)-b_n\|\leq \gamma/4$. 
Since $Y$ has ONL, there are $s'>0$ and a sequence of subsets $(A_n)_n$ of $Y$ so that $\diam(A_n)\leq s'$ and $\|b_n\chi_{A_n}\|\geq 1-\gamma/2$ for all $n\in\mathbb{N}$. As $\propg(b_n\chi_{A_n})\leq s$ for all $n\in\mathbb{N}$, we can use that $Y$ has ONL once again in order to obtain a sequence of subsets $(B_n)_n$ of $Y$ so that $\propg(B_n)\leq s'$ and $\|\chi_{B_n}b_n\chi_{A_n}\|\geq 1-3\gamma/4$ for all $n\in\mathbb{N}$. Hence, for all $n\in\mathbb{N}$, \begin{align}\label{EqAnBn} \|\chi_{B_n}\Phi(a_n)\chi_{A_n}\|\geq \|\chi_{B_n}b_n\chi_{A_n}\|-\|\Phi(a_n)-b_n\|> 1-\gamma. \end{align} Hence, as $\Phi(a_n)$ is $\gamma/4$-$s$-approximable for all $n\in\mathbb{N}$, this implies that $d(A_n,B_n)\leq s$ for all $n\in\mathbb{N}$. Therefore, \[\diam(A_n\cup B_n)\leq s+2s'\] for all $n\in\mathbb{N}$. As each $a_n$ is a partial isometry, it follows that \begin{align*} \|\Phi(q_n)\chi_{A_n}\|\geq \|\Phi(a_nq_n)\chi_{A_n}\|=\|\Phi(a_na_n^*a_n)\chi_{A_n}\| =\|\Phi(a_n)\chi_{A_n}\|> 1-\gamma \end{align*} for all $n\in\mathbb{N}$. Similarly, we have that $\|\chi_{B_n}\Phi(p_n)\|\geq 1-\gamma $ for all $n\in\mathbb{N}$. Let $(\xi_n)_n$ and $(\zeta_n)_n$ be sequences of unit vectors in $\ell_2(Y,H)$ so that, for all $n\in\mathbb{N}$, $\xi_n$ and $\zeta_n$ belong to the range of $\Phi(p_n)$ and $\Phi(q_n)$, respectively. Given $ n\in\mathbb{N}$ and $y\not\in A_n$, we have that \[\|\Phi(q_n)\chi_{\{y\}}\|^2=\|\chi_{\{y\}}\zeta_n\|^2\leq \|\zeta_n\|^2-\|\chi_{A_n}\zeta_n\|^2\leq 1-\|\Phi(q_n)\chi_{A_n}\|^2< \gamma(2-\gamma).\] As $\delta=\gamma(2-\gamma)$, this implies that $y\not\in Y_{q_n,\delta}$. As $y$ was an arbitrary element in $X\setminus A_n$, this shows that $Y_{q_n,\delta}\subset A_n$. Analogous arguments applied to $(\xi_n)_n$ and $(B_n)_n$ give us that $Y_{p_n,\delta}\subset B_n$ for all $n\in\mathbb{N}$. Therefore, we must have that \[\partial (Y_{p_n,\delta},Y_{q_n,\delta})\leq s+2s'\] for all $n\in\mathbb{N}$. This contradicts our choice of $(p_n)_n$ and $(q_n)_n$. \end{proof} \subsection{Cartan masas}\label{SubsectionProof} We now prove Theorem~\ref{ThmCartan}; even better, we prove a stronger version of it in Theorem~\ref{ThmIsoMakeAsympCoarseLike}. Recall, given an orthonormal basis $\bar\xi=(\xi_n)_n$ of $H$, we denote by $\ell_\infty(X,\bar \xi)$ the masa of $\mathrm{C}^*(X)$ consisting of all operators $a\in\mathcal{B}(\ell_2(X,H))$ such that for all $x\in X$ there is $(\lambda_n)_n\in c_0$ with \[ a(\delta_x\otimes \xi_n)=\lambda_n\delta_x\otimes \xi_n \ \text{ for all }\ n\in\mathbb{N}. \] This algebra is a masa in the algebra of those operators in $\mathcal{B}(\ell_2(X,H))$ such that each entry is locally compact, and therefore it is such in $\mathrm{C}^*(X)$. \begin{definition} Given metric spaces $X$ and $Y$, a map $\Phi\colon\mathrm{C}^*(X) \to \mathrm{C}^*(Y)$ is \emph{strongly coarse-like} if for all $r>0$ there is $s>0$ so that $\propg(\Phi(a))\leq s$ for all $a\in \mathrm{C}^*(X)$ with $\propg(a)\leq r$. \end{definition} \begin{theorem}\label{ThmIsoMakeAsympCoarseLike} Let $X$ and $Y$ be u.l.f.\@ metric spaces and assume that $Y$ has property $A$. Let $ \Phi\colon\mathrm{C}^*(X)\to\mathrm{C}^*(Y)$ be an isomorphism. Let $\bar\xi=(\xi_n)$ and $\bar\zeta=(\zeta_n)$ be orthonormal bases of $H$. Then there exists a unitary $v\in \mathrm{BD}(Y)$ such that $\Ad(v)\circ \Phi:\mathrm{C}^*(X)\to\mathrm{C}^*(Y)$ is strongly coarse-like and \[ \Ad(v)\circ \Phi(\ell_\infty(X,\bar \xi))=\ell_\infty(Y,\bar \zeta). 
\] \end{theorem} \begin{proof} By \cite[Theorem 4.1]{SpakulaWillett2013AdvMath}, $X$ and $Y$ are coarsely equivalent. Hence $X$ has property A and, as property A and ONL are equivalent for u.l.f.\@ metric spaces \cite[Theorem 4.1]{Sako2014}, we conclude that $X$, $Y$ and $\Phi$ satisfy Assumption \ref{AssumptionIsoONL}. For $n\in\mathbb{N}$, let $P_n$ and $Q_n$ be the projections onto $\Span\{\xi_n\}$ and $\Span\{\zeta_n\}$, respectively. For $x\in X$, $y\in Y$ and $n\in\mathbb{N}$, let $p_{x,n}=\chi_{\{x\}}\otimes P_n$ and $q_{y,n}=\chi_{\{y\}}\otimes Q_n$. By \cite[Lemma 4.6]{SpakulaWillett2013AdvMath} (or \cite[Lemma 3.1]{BragaChungLi2019}), there is $\delta>0$ so that $X_{q_{y,1},\delta}$ and $Y_{p_{x,1},\delta}$ are nonempty for all $x\in X$ and all $y\in Y$; such $\delta$ is fixed for the remainder of the proof. Hence, we can pick a map $f\colon X\to Y$ so that $f(x)\in Y_{p_{x,1},\delta}$ for all $x\in X$. By the proof of \cite[Theorem 4.1]{SpakulaWillett2013AdvMath}, $f$ is a coarse equivalence. Let $X_0\subset X$, $Y_0\subset Y$, $r_0>0$, $(X^x)_{x\in X_0}$ and $(Y^y)_{y\in Y_0}$ be given as in \S\ref{SubsectionCanonicalMap} for $f$, i.e., \begin{enumerate} \item $f\colon X_0\to Y_0$ is a bijection, \item $X=\bigsqcup_{x\in X_0}X^x$ and $Y=\bigsqcup_{x\in Y_0}Y^y$, \item $x\in X^x$ and $\diam(X^x)\leq r_0$ for all $x\in X_0$, and \item $y\in Y^y$ and $\diam(Y^y)\leq r_0$ for all all $y\in Y_0$. \end{enumerate} Let $g\colon X\times \mathbb{N}\to Y\times \mathbb{N}$ and $u=u_g\colon\ell_2(X,H)\to \ell_2(Y,H)$ be obtained as in \S\ref{SubsectionCanonicalMap}, i.e., for each $x\in X_0$, $g$ restricts to a bijection $X^x\times \mathbb{N}\to Y^{f(x)}\times \mathbb{N}$ and \[u\delta_x\otimes \xi_n=\delta_{g_1(x,n)}\otimes \zeta_{g_2(x,n)}\] for all $(x,n)\in X\times \mathbb{N}$. Therefore, the discussion in \S\ref{SubsectionCanonicalMap} implies that $\Psi=\Ad(u)\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$ is a strongly coarse-like isomorphism. By \cite[Lemma 3.1]{SpakulaWillett2013AdvMath}, we can pick a unitary $w\colon\ell_2(X,H)\to \ell_2(Y,H)$ so that $\Phi=\Ad(w)$. Let $v=uw^*$, so $\Ad(v)\circ\Phi=\Psi$. Therefore, as noticed above, $\Ad(v)\circ\Phi\colon\mathrm{C}^*(X)\to \mathrm{C}^*(Y)$ is strongly coarse-like. Moreover, it is clear from the definition of $u$ that \[\Ad(v)\circ \Phi\Big(\ell_\infty\big(X,c_0(\bar \xi)\big)\Big)\subset \ell_\infty\big(Y,c_0(\bar \zeta)\big).\] Therefore, in order to conclude the proof, we only need to notice that $v\in \mathrm{BD}(Y)$. For that, we now use the technical results proven in \S\ref{SubsectionTechnical}. \begin{claim}\label{ClaimContainment} For all $\delta'\in (0,\delta]$, there is $ r>0$ so that for all $y\in Y$ and all rank 1 projections $q\in \mathcal{B}(\ell_2(\{y\},H))$ we have $Y_{\Psi^{-1}(q),\delta'}\subset B_r(y)$. \end{claim} \begin{proof} Fix $\delta'\in (0,\delta]$. Let $s>0$ be given by Lemma \ref{LemmaDistYpdelta} applied to $\Phi$, $\delta'$ and $r_0$. Lemma \ref{LemmaYxndeltaBoundedDiam} gives $k>0$ so that $\diam(Y_{p,\delta'})\leq k$ for all $p\in \mathrm{Proj}_{1, r_0}(X)$. Fix $y\in Y$ and a rank 1 projection $q\in \mathcal{B}(\ell_2(\{y\},H))$. Let $y'\in Y_0$ and $x'\in X_0$ be so that $y\in Y^{y'}$ and $y'=f(x')$. Since $g\colon X\times \mathbb{N}\to Y\times \mathbb{N}$ restricts to a bijection $X^{x'}\times \mathbb{N}\to Y^{y'}\times \mathbb{N}$, the definition of $\Psi$ clearly implies that $\Psi^{-1}(q)\in \mathcal{B}(\ell_2(X^{x'},H))$. 
Hence, as $p_{x',1}\in \mathcal{B}(\ell_2(X^{x'},H))$ and $\diam (X^{x'})\leq r_0$, our choice of $s$ implies that $\partial(Y_{\Psi^{-1}(q),\delta'},Y_{p_{x',1},\delta'})\leq s$. By the defining property of $f\colon X\to Y$, we have that $y'= f(x')\in Y_{p_{x',1},\delta}$. As $\delta'\leq \delta$, $y'\in Y_{p_{x',1},\delta'}$. Therefore, as $\diam(Y^{y'})\leq r_0$ we have $\partial (y,y')\leq r_0$, and our choices of $s$ and $k$ imply that \[Y_{\Psi^{-1}(q),\delta'}\subset B_{r_0+s+2k}(y).\] The claim then follows by letting $r=r_0+s+2k$. \end{proof} We now show that $v=uw^*$ belongs to $\mathrm{BD}(Y)$. By \cite[Theorem 3.3]{SpakulaZhang2018}, as $Y$ has property A, it is enough to show that $v$ is \emph{quasi-local}.\footnote{An operator $a\in \mathcal{B}(\ell_2(Y,H))$ is \emph{quasi-local} if for all $\varepsilon>0$ there is $r>0$ so that $\partial(A,B)>r$ implies $\|\chi_Aa\chi_B\|<\varepsilon$ for all $A,B\subset Y$.} Fix $\varepsilon>0$. Let $t>0$ be given by Lemma \ref{Lemma3} for $\varepsilon$, $r_0$ and $\Phi$. Then, for all $k\in\mathbb{N}$ there is $\delta'\in (0,\delta]$ so that \begin{align}\label{Eq1} \|\Phi(\bar p)\chi_{Y\setminus Y_{\bar p,\delta'}}\|\leq \varepsilon \end{align} for all $\bar p=(p_i)_{i\in \mathbb{N}}\in \cD_{t,r_0,k}(X)$. Before finishing the proof, we introduce some notation: given $C\subset Y$ and a sequence of projections $(Q_y)_{y\in C}$ on $H$ of rank at most 1, we write $q_y=\chi_{\{y\}}\otimes Q_y$ for each $y\in C$.\footnote{Notice that $q_y$ and $Q_y$ (for $y\in C$) are distinct from $q_{y,n}$ and $Q_n$ (for $n\in\mathbb{N}$). We believe that, since the indices are different, this abuse of notation will cause no confusion.} Let \[X_C=\Big\{x\in X\mid \exists x'\in X_0\text{ with }x\in X^{x'}\text{ and }C\cap Y^{f(x')}\neq \emptyset\Big\}.\] By the definition of $\Psi$, it follows that $\Psi^{-1}(q_y)\in \mathcal{B}(\ell_2(X^{x'},H))$ for all $y\in C$, where $y'\in Y_0$ is such that $y\in Y^{y'}$ and $x'=f^{-1}(y')$. Remark \ref{RemarkDktX} then implies that there is $k\in\mathbb{N}$ so that $\Psi^{-1}(\bar q)=(\Psi^{-1}(q_y))_{y\in C}\in \cD_{t,r_0,k}(X_C)$. Moreover, as noticed in Remark \ref{RemarkDktX}, $k$ depends only on $t$ and $\sup_{y\in Y_0}|Y^{y}|$ (i.e., it depends on neither $C$ nor $(Q_y)_{y\in C}$). Fix such $k$. By the defining property of $t$, pick $\delta'\in (0,\delta]$ so that \eqref{Eq1} holds for $k$. Claim \ref{ClaimContainment} gives $r>0$ so that $Y_{\Psi^{-1}(q),\delta'}\subset B_r(y)$ for all $y\in Y$ and all rank 1 projections $q\in \mathcal{B}(\ell_2(\{y\},H))$. Therefore, if $C\subset Y$ and $(Q_y)_{y\in C}$ is a sequence of projections on $H$ of rank at most 1, it follows that \[Y_{\Psi^{-1}(\bar q),\delta'}=\bigcup_{y\in C} Y_{\Psi^{-1}(q_y),\delta'}\subset B_r(C).\] Fix $C\subset Y$, $(Q_y)_{y\in C}$ and $(q_y)_{y\in C}$ as above. Then \eqref{Eq1} and the previous inclusion give that \[\|\Phi(\Psi^{-1}(\bar q))\chi_{Y\setminus B_r(C)}\|\leq \varepsilon.\] As $\Psi^{-1}=\Ad(u^*)$ and $\Phi=\Ad(w)$, this implies that \[\|\bar qv\chi_{Y\setminus B_r(C)}\|=\| \bar quw^*\chi_{Y\setminus B_r(C)}\|=\|wu^*\bar quw^*\chi_{Y\setminus B_r(C)}\|\leq \varepsilon.\] The arbitrariness of $\bar q=(q_y)_{y\in C}$ (i.e., the arbitrariness of $C\subset Y$ and $(Q_y)_{y\in C}$) gives that $\|\chi_Cv\chi_{Y\setminus B_r(C)}\| \leq \varepsilon$ for all $C\subset Y$. Let $A,B\subset Y$ be so that $\partial(A,B)> r$.
Then, as $B\subset Y\setminus B_r(A)$, \[ \|\chi_Av\chi_B\|\leq \|\chi_{A}v\chi_{ Y\setminus B_r(A)}\| \leq \varepsilon. \] As $\varepsilon$ was arbitrary, this shows that $v$ is quasi-local, so we are done. \end{proof} \begin{remark} Although we chose to work with fixed bases $\bar \xi$ and $\bar\zeta$ inside each $H$-coordinate of $\ell_2(X,H)$ in Theorem \ref{ThmCartan} (Theorem \ref{ThmIsoMakeAsympCoarseLike}), we did so merely for simplicity. The same proof holds if for each $x\in X$ we choose bases $\bar \xi^x=(\xi_n^x)_n$ and $\bar\zeta^x=(\zeta_n^x)_n$ of $H$. Precisely, given those choices, let $\prod_{x\in X} c_0(\bar \xi^x)$ be the $\ell_\infty$-sum of $(c_0(\bar \xi^x))_{x\in X}$, where $c_0(\bar \xi^x)$ consists of all $a\in \mathcal{B}(H)$ so that there is $(\lambda_n)_n\in c_0$ for which $a \xi^x_n=\lambda_n\xi^x_n$ for all $n\in\mathbb{N}$; $\prod_{x\in X} c_0(\bar \zeta^x)$ is defined analogously. Then, if $\Phi\in \mathrm{Aut}(\mathrm{C}^*(X))$, there exists a unitary $v\in \mathrm{BD}(X)$ so that \[\Ad(v)\circ \Phi\Big(\prod_{x\in X} c_0\big(\bar \xi^x\big)\Big)= \prod_{x\in X} c_0\big(\bar \zeta^x\big).\] \end{remark} We are ready to prove Theorem~\ref{thm:main}, which we restate for convenience. \begin{theorem}\label{ThmIsoCoa} Let $(X,d)$ be a u.l.f.\@ metric space with property A. The canonical homomorphism \[ \mathrm{Coa}(X)\to\mathrm{Out}(\mathrm{C}^*(X)) \] described in \S\ref{SubsectionCanonicalMap} is an isomorphism. \end{theorem} \begin{proof} Let $T\colon\mathrm{Coa}(X)\to\mathrm{Out}(\mathrm{C}^*(X))$ be the injective homomorphism constructed in \S\ref{SubsectionCanonicalMap}. Fix $\Phi\in \mathrm{Aut}(\mathrm{C}^*(X))$. Let $v\in \mathrm{BD}(X)$ be given by Theorem \ref{ThmCartan} for $\Phi$. Moreover, let $f\colon X\to X$, $u\colon \ell_2(X,H)\to\ell_2(X,H)$ and $w\colon \ell_2(X,H)\to\ell_2(X,H)$ be as in the proof of Theorem \ref{ThmCartan} for $X=Y$ and $\bar\xi=\bar\zeta$. Hence $\Phi=\Ad(w)$ and $v=uw^*$. Notice that $T(f)=[\Ad(u)]$ when the latter is computed in $\mathrm{Out}(\mathrm{C}^*(X))$. We are left to show that $T(f)=[\Phi]$, that is, that $\Ad(u)\circ\Phi^{-1}\in \mathrm{Inn}(\mathrm{BD}(X))$. But this follows since \[\Ad(u)\circ\Phi^{-1}=\Ad(u)\circ\Ad(w^*)=\Ad(v)\] and $v\in \mathrm{BD}(X)$. \end{proof}
\section{Applications} \label{SecApp} In this section, we use Theorem \ref{thm:uniform} and Theorem \ref{thm:main} in order to compute --- or at least better understand --- $\mathrm{Out}(\mathrm{C}^*_u(X))$ and $\mathrm{Out}(\mathrm{C}^*(X))$ for some specific spaces $X$. In \S\ref{SubsectionOutX}, \S\ref{SubsectionOutNZ} and \S\ref{SubsectionOutNZn}, we apply our results to $\{n^2\mid n\in\mathbb{N}\}$, to $\mathbb{N}$ and $\mathbb{Z}$, and to $\mathbb{Z}^n$, while in \S\ref{SubsectionSolBauSol} and \S\ref{SubsectionLamp}, we work with the solvable Baumslag-Solitar groups $B(1,n)$ and the lamplighter group $F\wr \mathbb{Z}$ for a finite group $F$. For brevity, we skip some definitions in these subsections and refer the reader to an appropriate source. \subsection{Outer automorphisms of the (uniform) Roe algebra of $\{n^2\mid n\in\mathbb{N}\}$}\label{SubsectionOutX} Denote the group of permutations on $\mathbb{N}$ by $S_\infty$. Let $\sim_0$ be the equivalence relation on $S_\infty$ given by $\pi\sim_0\pi'$ if there is $n_0\in\mathbb{N}$ so that $\pi(n)=\pi'(n)$ for all $n\geq n_0$. Clearly, $N=\{\pi\in S_\infty\mid \pi\sim_0 \mathrm{Id}_\mathbb{N}\}$ is a normal subgroup of $S_\infty$, so $S_\infty/{\sim_0}=S_\infty/N$ is a group. \begin{corollary} Let $X=\{n^2\mid n\in\mathbb{N}\}$. Then $\mathrm{BijCoa}(X)$ is isomorphic to $S_\infty/{\sim_0}$. In particular, $\mathrm{Out}(\mathrm{C}^*_u(X))$ is isomorphic to $S_\infty/{\sim_0}$. \end{corollary} \begin{proof} First, we show that $X$ has property A. Notice that $\mathrm{C}^*_u(X)$ is generated by $\ell_\infty(X)$ and $\mathcal K(\ell_2(X))$. Moreover, the uniform Roe corona of $X$, i.e., the algebra $\mathrm{C}^*_u(X)/\mathcal K(\ell_2(X))$, is isomorphic to $\ell_\infty/c_0$ and is therefore nuclear. Since $\mathcal K(\ell_2(X))$ is nuclear, so is $\mathrm{C}^*_u(X)$, and therefore $X$ has property $A$. A map $f\colon X\to X$ is a bijective coarse equivalence if and only if $f$ is a bijection. So the group of bijective coarse equivalences of $X$ is isomorphic to $S_\infty$. Moreover, two maps $f,g\colon X\to X$ are close if and only if they eventually coincide, i.e., there is $n_0\in \mathbb{N}$ so that $f(n^2)=g(n^2)$ for all $n>n_0$. The result now follows. \end{proof} Denote the group of \emph{cofinite partial bijections} on $\mathbb{N}$ by $S_\infty^*$, i.e., \begin{align*} S_\infty^*=\Big\{(\pi,A,B)\in \mathbb{N}^\mathbb{N}\times \mathcal{P}(\mathbb{N})\times \mathcal{P}(\mathbb{N})\colon &|A^\complement|, |B^\complement|<\infty \text{ and } \\ & \pi\restriction A\colon A\to B\text{ is a bijection}\Big\}.\end{align*} By a slight abuse of notation, we denote by $\sim_0$ the equivalence relation on $S^*_\infty$ given by $(\pi,A,B)\sim_0(\pi',A',B')$ if there is $n_0\in\mathbb{N}$ so that $\pi(n)=\pi'(n)$ for all $n\geq n_0$. \begin{corollary} Let $X=\{n^2\mid n\in\mathbb{N}\}$. Then $\mathrm{Coa}(X)$ is isomorphic to $S^*_\infty/\sim_0$. In particular, $\mathrm{Out}(\mathrm{C}^*(X))$ is isomorphic to $S^*_\infty/\sim_0$. \end{corollary} \begin{proof} Clearly, a map $f\colon X\to X$ is a coarse equivalence if and only if there are cofinite $A,B\subset X$ so that $f\restriction A\colon A\to B$ is a bijection. Moreover, maps $f,g\colon X\to X$ are close if and only if they eventually coincide. The result follows.
\end{proof} \subsection{Outer automorphisms of the uniform Roe algebras of $\mathbb{N}$ and $\mathbb{Z}$}\label{SubsectionOutNZ} \begin{corollary}\label{CorOutNandZ} The group $\mathrm{BijCoa}(\mathbb{N})$ is trivial and $\mathrm{BijCoa}(\mathbb{Z})$ is isomorphic to $\mathbb{Z}_2$. In particular, $\mathrm{Out}(\mathrm{C}^*_u(\mathbb{N}))$ is trivial and $\mathrm{Out}(\mathrm{C}^*_u(\mathbb{Z}))$ is isomorphic to $\mathbb{Z}_2$. \end{corollary} \begin{proof} Fix a bijective coarse equivalence $f\colon\mathbb{N}\to \mathbb{N}$. We are going to prove that $f$ is close to the identity $\mathrm{Id}_\mathbb{N}$. Suppose this is not the case. Then there is a sequence $(x_n)_n$ in $\mathbb{N}$ so that $|f(x_n)-x_n|>n$ for all $n\in\mathbb{N}$. Without loss of generality, we can assume that $(x_n)_n$ is strictly increasing. Moreover, replacing $f$ by $f^{-1}$ if necessary, we can also assume that $f(x_n)+n<x_n$ for all $n\in\mathbb{N}$. For each $n\in\mathbb{N}$, let $z_n=\max\{z\in \mathbb{N}\mid f^{-1} (z)\leq x_n\}$. As $f$ is a bijection and $f(x_n)+n<x_n$, it follows that $z_n-f(x_n)>n$ for all $n\in\mathbb{N}$. As $f^{-1}$ is expanding, it follows that $\lim_{n}(f^{-1}(z_{n}+1)-x_n)=\infty$. So, as $f^{-1}(z_n)\leq x_n$, we have that $\lim_{n}(f^{-1}(z_{n}+1)-f^{-1}(z_n))=\infty$. This contradicts the coarseness of $f^{-1}$. This shows that $\mathrm{BijCoa}(\mathbb{N})$ is the trivial group. Now fix a bijective coarse equivalence $f\colon\mathbb{Z}\to \mathbb{Z}$. So either $\lim_{n\to \infty}f(n)=\infty$ or $\lim_{n\to \infty}f(n)=-\infty$. Assume that $\lim_{n\to \infty}f(n)=\infty$ and let $x_0=\min(f(\mathbb{N}))$. \begin{claim} There are bijective coarse equivalences $h_1\colon\mathbb{N}\to \mathbb{N}$ and $h_2\colon\mathbb{Z}\setminus \mathbb{N}\to \mathbb{Z}\setminus \mathbb{N}$ so that $f$ is close to the bijection $h_1\cup h_2\colon\mathbb{Z}\to \mathbb{Z}$. \end{claim} \begin{proof} Notice that $f$ is close to $g=f-x_0$ and that $g(\mathbb{N})$ is a cofinite subset of $\mathbb{N}$, say $n_0=|\mathbb{N}\setminus g(\mathbb{N})|$. Pick bijections \[i\colon\{-n_0,\ldots, -1\}\to \mathbb{N}\setminus g(\mathbb{N})\] and \[j\colon g^{-1}(\mathbb{N}\setminus g(\mathbb{N}))\to g(\{-n_0,\ldots,-1\}),\] and notice that $g$ is close to \[h(x)=\left\{\begin{array}{ll} g(x),& x\in \mathbb{N},\\ i(x), & x\in \{-n_0,\ldots, -1\},\\ j(x),& x\in g^{-1}(\mathbb{N}\setminus g(\mathbb{N})),\\ g(x), & \text{otherwise}.\\ \end{array}\right.\] For each $x\in \mathbb{Z}$, let $h_0(x)=h(x-n_0)$ and let $h_1=h_0\restriction \mathbb{N}$ and $h_2=h_0\restriction (\mathbb{Z}\setminus \mathbb{N})$. Since $h$ is close to $h_0$, the result follows. \end{proof} Let $h_1$ and $h_2$ be given by the claim above. As $\mathrm{BijCoa}(\mathbb{N})$ is the trivial group, it follows that $h_1$ is close to $\mathrm{Id}_\mathbb{N}$ and $h_2$ is close to $\mathrm{Id}_{\mathbb{Z}\setminus \mathbb{N}}$. So $f$ is close to $\mathrm{Id}_\mathbb{Z}$. If $\lim_{n\to \infty}f(n)=-\infty$, then proceeding analogously as above, we obtain bijective coarse equivalences $h_1\colon\mathbb{N}\to \mathbb{Z}\setminus \mathbb{N}$ and $h_2\colon\mathbb{Z}\setminus \mathbb{N}\to \mathbb{N}$ so that \begin{enumerate} \item $h_1$ is close to the map $x\in \mathbb{N}\to -x-1\in \mathbb{Z}\setminus \mathbb{N}$, \item $h_2$ is close to the map $x\in \mathbb{Z}\setminus \mathbb{N}\to -x-1\in \mathbb{N}$, and \item $f$ is close to $h_1\cup h_2$. \end{enumerate} As $h_1\cup h_2$ is close to $-\mathrm{Id}_\mathbb{Z}$, so is $f$.
We have then shown that $\mathrm{BijCoa}(\mathbb{Z})$ is isomorphic to $\{-\mathrm{Id}_\mathbb{Z},\mathrm{Id}_\mathbb{Z}\}$. The last statement follows from the above and Theorem \ref{thm:uniform}. This completes the proof. \end{proof} \subsection{Outer automorphisms of the Roe algebra of $\mathbb{Z}^n$}\label{SubsectionOutNZn} Recall, given metric spaces $(X,d)$ and $(Y,\partial)$, a map $f\colon X\to Y$ is a \emph{coarse Lipschitz equivalence}\footnote{Coarse Lipschitz equivalences are also referred to as \emph{quasi-isometries} in the literature.} if it is cobounded and there is $L>0$ so that \[L^{-1}d(x,y)-L\leq \partial(f(x),f(y))\leq Ld(x,y)+L\] for all $x,y\in X$. Define \[\mathrm{CoaLip}(X)=\Big\{f\colon X\to X\mid f\text{ is a coarse Lipschitz equivalence}\Big\}/{ \sim },\] where $\sim$ is the closeness relation on functions $X\to X$. Clearly, $\mathrm{CoaLip}(X)$ is a group under composition, i.e., $[f]\circ[g]=[f\circ g]$. Given $n\in\mathbb{N}$, the inclusion $\mathbb{Z}^n\hookrightarrow \mathbb{R}^n$ is a coarse equivalence (even a coarse Lipschitz equivalence). Therefore $\mathrm{Coa}(\mathbb{Z}^n)\cong \mathrm{Coa}(\mathbb{R}^n)$. Moreover, notice that a map $\mathbb{R}^n\to \mathbb{R}^n$ is a coarse equivalence if and only if it is a coarse Lipschitz equivalence \cite[Theorem 1.4.13]{NowakYuBook}. Therefore, we have that \[ \mathrm{Coa}(\mathbb{Z}^n)\cong \mathrm{CoaLip}(\mathbb{R}^n).\] As a consequence of that, results in the literature give us the next corollaries of Theorem \ref{thm:main}. We denote by $\mathrm{PL}_\delta(\mathbb{R})$ the group of piecewise linear homeomorphisms $f\colon\mathbb{R}\to \mathbb{R}$ so that $\{|f'(x)|\mid x\in \mathbb{R}\}\subset [M^{-1},M]$ for some $M>0$. Modding out by the closeness relation, we obtain the group $\mathrm{PL}_\delta(\mathbb{R})/\sim$. \begin{corollary} The group $\mathrm{Out}(\mathrm{C}^*(\mathbb{Z}))$ has trivial center. Moreover, $\mathrm{Out}(\mathrm{C}^*(\mathbb{Z}))$ is isomorphic to $\mathrm{PL}_\delta(\mathbb{R})/{\sim}$. \end{corollary} \begin{proof} This follows immediately from Theorem \ref{thm:main}, the discussion preceding the corollary, \cite[Theorem 1.1]{Chakraborty2019IndPureApplMath}, and \cite[Theorem 1.2]{Sankaran2006PAMS}. \end{proof} \begin{corollary}\label{CorThomp} The group $\mathrm{Out}(\mathrm{C}^*(\mathbb{Z}))$ contains isomorphic copies of the following groups: \begin{enumerate} \item $\mathrm{PL}_\kappa(\mathbb{R})$, the group of piecewise linear homeomorphisms $f\colon\mathbb{R}\to \mathbb{R}$ so that $\overline{\{x\in \mathbb{R}\mid f(x)\neq x\}}$ is compact, \item Thompson's group $F$,\footnote{We refer the reader to \cite[Section 1]{CannonFloydParry1996EnseignMath} for the definition of Thompson's group $F$.} and \item the free group of rank the continuum. \end{enumerate} \end{corollary} \begin{proof} This follows immediately from Theorem \ref{thm:main}, the discussion above and \cite[Theorem 1.3]{Sankaran2006PAMS}. \end{proof} Given a metric space $(X,d)$, a map $f\colon X\to X$ is a bi-Lipschitz equivalence if it is a bijection and there is $L>0$ so that \[L^{-1}d(x,y)\leq d(f(x),f(y))\leq Ld(x,y)\] for all $x,y\in X$. We let \[\mathrm{BiLip}(X)=\Big\{f\colon X\to X\mid f\text{ is a bi-Lipschitz equivalence}\Big\}.\] So $\mathrm{BiLip}(X)$ is a group under composition (notice that we do not mod out the bi-Lipschitz equivalences by closeness).
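To make the two conditions above concrete, the following toy numerical check (entirely illustrative; the map and the constant $L$ are our own choices and play no role in the cited results) verifies both the coarse Lipschitz and the bi-Lipschitz estimates for a piecewise linear homeomorphism of $\mathbb{R}$ with slopes $1/2$ and $2$ on a sample grid:
\begin{verbatim}
import numpy as np

# Hypothetical piecewise linear homeomorphism of R: slope 1/2 on
# (-inf, 0) and slope 2 on [0, inf).
def f(x):
    return np.where(x < 0, 0.5 * x, 2.0 * x)

L = 2.0
xs = np.linspace(-50, 50, 501)
X, Y = np.meshgrid(xs, xs)
d = np.abs(X - Y)             # d(x, y)
df = np.abs(f(X) - f(Y))      # d(f(x), f(y))

# Coarse Lipschitz: L^{-1} d - L <= df <= L d + L;
# bi-Lipschitz: L^{-1} d <= df <= L d.
coarse_lip = np.all(d / L - L <= df) and np.all(df <= L * d + L)
bi_lip = np.all(d / L <= df) and np.all(df <= L * d)
print(coarse_lip, bi_lip)     # True True
\end{verbatim}
For piecewise linear maps both estimates follow directly from the slope bounds; the grid check is only a sanity illustration.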
\begin{corollary} Given $n\in\mathbb{N}$, the group $\mathrm{Out}(\mathrm{C}^*(\mathbb{Z}^n))$ contains isomorphic copies of the following groups: \begin{enumerate} \item $\mathrm{BiLip}(\mathbb S^{n-1})$, where $\mathbb S=\{z\in \mathbb{C}\mid |z|=1\}$, and \item $\mathrm{BiLip}(\mathbb D^n,\mathbb S^{n-1})$, where $\mathbb D^n$ is the closed unit ball of $\mathbb{R}^n$. \end{enumerate} \end{corollary} \begin{proof} This follows immediately from Theorem \ref{thm:main}, the discussion above and \cite[Theorem 1.1]{Mitraankaran2019TopAppl}. \end{proof} \subsection{Solvable Baumslag-Solitar groups}\label{SubsectionSolBauSol} Given $n\in\mathbb{N}$, $\mathbb{Q}_n$ denotes the $n$-adic rationals and $B(1,n)$ denotes the \emph{solvable Baumslag-Solitar group}, i.e., the group generated by elements $a$ and $b$ subject to the relation $aba^{-1}=b^n$. We endow $B(1,n)$ with the metric given by its Cayley graph structure (see \cite[Definition 1.2.7]{NowakYuBook} for definitions). \begin{corollary}\label{CorBaumslagSolitar} Given $n\in\mathbb{N}$, the group $\mathrm{Out}(\mathrm{C}^*(B(1,n)))$ is isomorphic to $\mathrm{BiLip}(\mathbb{R})\times \mathrm{BiLip}(\mathbb{Q}_n)$. \end{corollary} \begin{proof} Since $B(1,n)$ is solvable, it is amenable. Therefore, as $B(1,n)$ is finitely generated, it must also have property A \cite[Theorem 4.14]{NowakYuBook}. By Theorem \ref{thm:main}, we only need to compute $\mathrm{Coa}(B(1,n))$. As $B(1,n)$ is a finitely generated group, we have that $\mathrm{Coa}(B(1,n)) =\mathrm{CoaLip}(B(1,n))$ \cite[Corollary 1.4.15]{NowakYuBook}. The result then follows since it is known that $\mathrm{CoaLip}(B(1,n))$ is isomorphic to $\mathrm{BiLip}(\mathbb{R})\times \mathrm{BiLip}(\mathbb{Q}_n)$ (see \cite[Theorem 8.1]{FarbMosher1998Inventiones}). \end{proof} \subsection{The lamplighter group $F\wr \mathbb{Z}$}\label{SubsectionLamp} Given a group $F$, we denote the wreath product of $F$ and $\mathbb{Z}$ by $F\wr \mathbb{Z}$ (we refer the reader to \cite[Definition 2.6.2]{NowakYuBook} for a precise definition). This group is commonly called the \emph{lamplighter group $F\wr \mathbb{Z}$}, and we endow $F\wr \mathbb{Z}$ with the metric given by its Cayley graph structure \cite[Definition 1.2.7]{NowakYuBook}. Consider the semidirect product $(\mathrm{BiLip}(\mathbb{Q}_n)\times \mathrm{BiLip}(\mathbb{Q}_n))\rtimes \mathbb{Z}_2$ given by the action of $\mathbb{Z}_2$ on $\mathrm{BiLip}(\mathbb{Q}_n)\times \mathrm{BiLip}(\mathbb{Q}_n)$ that switches the factors.\footnote{Recall that if $N$ and $H$ are groups and $\alpha\colon H\curvearrowright N$ is an action, then $N\rtimes H$ denotes the \emph{semidirect product}, i.e., the set $N\times H$ endowed with the product $(n,h)\cdot (n',h')=(n\,\alpha(h)(n'),hh')$.} \begin{corollary}\label{CorLamplighter} Let $F$ be a group with $|F|=n$. Then the group $\mathrm{Out}(\mathrm{C}^*(F\wr \mathbb{Z}))$ is isomorphic to \[\Big(\mathrm{BiLip}(\mathbb{Q}_n)\times \mathrm{BiLip}(\mathbb{Q}_n)\Big)\rtimes \mathbb{Z}_2.\] \end{corollary} \begin{proof} Since $F$ and $\mathbb{Z}$ are finitely generated, so is $F\wr \mathbb{Z}$ \cite[Chapter 2, Exercise 2.5]{NowakYuBook}. The lamplighter group $F\wr \mathbb{Z}$ is amenable (see \cite[Corollary 2.5]{Woess2013IntMathNach}), and since it is finitely generated, it has property A \cite[Theorem 4.14]{NowakYuBook}. Therefore, we only need to compute $\mathrm{Coa}(F\wr \mathbb{Z})$ (Theorem \ref{thm:main}). As $F\wr \mathbb{Z}$ is finitely generated, $\mathrm{Coa}(F\wr \mathbb{Z}) =\mathrm{CoaLip}(F\wr \mathbb{Z})$, by \cite[Theorem 1.4.13]{NowakYuBook}.
Moreover, $F\wr \mathbb{Z}$ is quasi-isometric to the Diestel-Leader graph $\mathrm{DL}(n,n)$ (we refer to \cite[Section 1]{EskinFisherWhyte2012Annals} for both the definition of $\mathrm{DL}(n,n)$ and this fact). So it is enough to compute $\mathrm{CoaLip}(\mathrm{DL}(n,n))$. The result then follows since it is known that $\mathrm{CoaLip}(\mathrm{DL}(n,n))$ is isomorphic to $(\mathrm{BiLip}(\mathbb{Q}_n)\times \mathrm{BiLip}(\mathbb{Q}_n))\rtimes \mathbb{Z}_2$ \cite[Theorem 2.1]{EskinFisherWhyte2012Annals} (see the discussion at the end of \cite[Section 2]{EskinFisherWhyte2012Annals}). \end{proof} \begin{acknowledgements*} The current paper started from a question asked by Ralf Meyer to the authors about whether Theorem \ref{thm:main} was true for the metric space $\mathbb{Z}^n$. The authors would like to thank Ralf Meyer for the interesting question and for several comments on a previous version of this paper. AV is partially supported by the ANR Project AGRUME (ANR-17-CE40-0026). \end{acknowledgements*}
\section{Introduction}\label{sec:introduction} \IEEEPARstart{T}he recent years have seen increasing research on learning the representation of complex categorical data ('categorical representation' for short) \cite{bengio2013representation,boriah2008similarity,bai2013impact,wang2015coupled,seth2016archetypal}, which is critical for downstream tasks, e.g., regression \cite{tutz2016regularized}, clustering \cite{qian2016space}, classification \cite{zhu2018heterogeneous}, and outlier detection \cite{pang2016outlier}. Different from numerical data, the attribute values, attributes and objects of categorical data are often coupled with each other w.r.t. various aspects, e.g., value frequency, co-occurrence and distribution; attribute relations including correlations and dependency, interactions and hierarchy; and other data characteristics \cite{cao2015coupling,Bremaud2017,zhu2018heterogeneous,dst_Cao15}. We broadly refer to these as \textit{couplings} \cite{cao2015coupling}; they are heterogeneous (involving diverse interactions and distributions) and hierarchical (spanning values to objects), and they drive the complexities and dynamics of categorical data. Learning such heterogeneous and hierarchical couplings is fundamental to appropriate categorical data representation, yet it has rarely been explored in unsupervised settings. Besides the critical progress made in learning the similarity and metrics of categorical data w.r.t. value co-occurrences and attribute relations \cite{ahmad2007method,ienco2012context,jia2015distance}, coupling learning \cite{cao2015coupling} explores even more comprehensive and stronger categorical representations by revealing and embedding heterogeneous value-to-object couplings on explicit attributes and latent factors \cite{wang2015coupled,zhang2015categorical,jianembedding,nips_DoC18}. Typically, categorical data is converted to either a vector \cite{jianembedding} or a similarity \cite{ng2007impact} space to compensate for the missing numerical intervals between categorical values, so that numerical analytical tools can then be used. Such methods demonstrate significant potential for deeply understanding the intrinsic couplings in categorical data. However, little work is available, and it is very challenging to handle diverse data characteristics and complexities, including heterogeneities, interactions, structures, relations, distributions and nonstationarity, in categorical data representation \cite{cao2014non,cao2015coupling,dst_Cao15}. A critical question raised in heterogeneous coupling learning is \textit{whether learning more couplings enhances categorical data representation}. This issue was initially studied by Ienco et al. \cite{ienco2012context}. They found that the redundant information in various couplings may hamper the quality of categorical data representation and proposed the \textit{symmetric uncertainty} (SU) as a criterion to filter redundant couplings w.r.t. the correlations between two attributes, which largely reduces the redundant information and enables better representation performance. Other recent work \cite{jianembedding,ijcai_PangCCL17} further shows that the redundant couplings can be reduced by methods like principal component analysis, yielding better categorical representation performance than vector, similarity and embedding-based methods. However, little work identifies redundant couplings and decouples them from the important ones.
Another open issue is to capture diverse interactions and relations that are complementary yet mutually inconsistent while learning as many types of couplings as possible. As illustrated by Table \ref{tab:toy}, if an intra-attribute coupling (i.e., a value coupling) is measured in terms of value frequency, the difference between slightly curled and curled watermelons per the attribute \textit{root shape} is $0$ because the two values have the same frequency. However, we can easily differentiate them (i.e., their difference is not $0$) because the curled root is more related to the yellow and green watermelons while the slightly curled root is more associated with the green and black ones when the inter-attribute couplings between \textit{color} and \textit{root shape} are considered. \begin{table}[!htpb]% \centering \caption{\textit{Toy Example.} The watermelon information table. Each watermelon, with different sweetness, is described by three attributes: \emph{Texture}, \emph{Color}, and \emph{Root Shape}.\label{tab:toy}}% \begin{tabular}{l|lll|l} \toprule \textbf{ID} & \textbf{Texture} & \textbf{Color} & \textbf{Root Shape} & \textbf{Sweetness} \\ \midrule A1 & clear & white & straight & low\\ A2 & blurry & yellow & straight & low\\ A3 & blurry & yellow & curled & low\\ A4 & clear & green & slightly curled & low\\ A5 & blurry & green & curled & high\\ A6 & clear & black & slightly curled & high\\ \bottomrule \end{tabular} \end{table}% The heterogeneity and inconsistency between couplings \cite{ralaivola2010chromatic,cao2014non} may be caused by (1) different types of couplings corresponding to distinct interactions in data and following different data distributions; and (2) multiple distributions existing in a data set. While our earlier work in \cite{zhu2018heterogeneous} analyzes and captures the heterogeneous couplings for supervised learning, no existing methods for similarity, metric and representation learning effectively handle the above challenges in unsupervised categorical representation, which is critical yet challenging for understanding the intrinsic data complexities in unlabeled categorical data. Embedding lookup tables and deep representation learning methods \cite{bengio2013representation} such as one-hot embedding, word embedding, autoencoders \cite{VincentLLBM10}, adversarial learning \cite{donahue2016adversarial}, and deep models such as the wide and deep model \cite{cheng2016wide} and the auto-instructor MAI \cite{JianHCL18} significantly outperform shallow methods in capturing latent features and relations. However, their common approaches and advantages are built on simplifying and equally treating the input (e.g., by a one-hot encoder), involving a special modeling mechanism or structure, ignoring or disentangling complicated couplings, and deeply abstracting large data with high computational power. They struggle in representing small yet complex unlabeled categorical data and also ignore the semantics and other diverse explicit characteristics of categorical values, attributes and objects, which are critical for categorical data representation and learning \cite{cao2015coupling,wang2015coupled,zhu2018heterogeneous}. In this paper, we build a shallow but powerful UNsupervised heTerogeneous couplIng lEarning (UNTIE, for short) approach to learning heterogeneous and hierarchical couplings that may be complementary yet inconsistent in small unlabeled categorical data with complex data characteristics.
As the first attempt, UNTIE simultaneously represents (1) diverse value-to-object couplings, (2) complex relations between heterogeneous and hierarchical couplings, and (3) heterogeneous distributions of the respective couplings by unsupervised multikernel learning. Specifically, complex relations are entangled by the nonlinear mapping of various kernelized coupling functions, and the heterogeneous distributions are sensitively modeled by the respective kernels. Instead of directly combining heterogeneous couplings, UNTIE first remodels the diverse couplings by multiple kernels to transform the various coupling-based spaces to respective kernelized representation spaces of higher dimensionality. Then, UNTIE learns both the weight of each attribute value in an individual kernel space and the weights of the learned kernel spaces to reflect both the heterogeneous data distributions of the respective couplings and the interactions between couplings. Further, to efficiently learn heterogeneous couplings, UNTIE seamlessly wraps the kernel space weights by a positive semi-definite kernel and optimizes this kernel through an unsupervised kernel k-means objective. Lastly, the optimized kernel is used as the similarity representation of categorical data, and a vector representation is generated by further decomposing this kernel. We provide theoretical analysis (see Theorem \ref{thm:cut}) to show that UNTIE can represent categorical data while maximizing the separability for further learning tasks. Accordingly, this work delivers the following significant contributions to categorical representation, unsupervised representation, and coupling learning. \begin{itemize} \item UNTIE is the first unsupervised categorical representation method to learn various value-to-object couplings and their complementarity and inconsistency. UNTIE collectively captures the heterogeneous data distributions of diverse couplings and adaptively integrates the couplings by involving their interactions. \item UNTIE maps the heterogeneous intra- and inter-attribute couplings into multiple kernels to capture the coupling heterogeneity (\S \ref{subsec:heterogeneity}). In the kernel spaces, learning heterogeneous couplings is formalized as an efficient unsupervised optimization problem that optimizes an UNTIE-enabled kernel \textit{k}-means objective (\S \ref{subsec:unsupervised}). \item UNTIE works in a completely unsupervised fashion to capture the intrinsic data characteristics in categorical data with both theoretical and experimental verification. Theoretical analysis shows that the UNTIE-represented data has the minimum normalized cut and increases data separability (\S \ref{sec:theory}). \end{itemize} We substantially verify the UNTIE effectiveness, representation quality, efficiency, flexibility and stability on 25 real-life categorical data sets with diversified data characteristics (including multi-dimensional, multi-class and multi-valued objects) and four synthetic data sets generated w.r.t. a variety of data factors. UNTIE is compared with vector, similarity, embedding and deep representation methods. (1) UNTIE can effectively address both complementarity and inconsistency in learning heterogeneous couplings. (2) UNTIE enjoys an accuracy gain (up to 51.72\% in terms of F-score on these data sets) from the learned heterogeneity and produces substantially better representation performance than the state-of-the-art shallow and deep categorical representation methods.
(3) The efficiency of UNTIE is insensitive to the volume of data, which indicates that UNTIE is scalable for large data. (4) The learned UNTIE representations can enhance different downstream learning tasks. This work also shows that shallow learning does not lose ground to deep models in handling complex (small or large) data, particularly in unsupervised settings. \section{Related Work} \label{Related Work} The quality of categorical data representations affects the performance of representation-based learning tasks. Categorical representations are determined by how well a representer captures the various value-to-object coupling relationships and their heterogeneities within and between categorical values, attributes and objects \cite{cao2015coupling,ng2007impact,boriah2008similarity}. For example, embedding methods like one-hot embedding and word embedding only encode the existence of a value or an IDF-based textual vector in a vector space. Matching-based methods treat categorical values equally and overlook their rich differences. A recent effort in categorical representation is the coupling learning of complex interactions and relations \cite{cao2015coupling}, which demonstrates great potential in (1) intra-attribute couplings-based representations \cite{ng2007impact,cao2012dissimilarity} and (2) inter-attribute couplings-based representations \cite{le2005association,ahmad2007method,boriah2008similarity,jia2015distance}. The former reveals the way and degree that values are coupled within an attribute. For example, the method in \cite{ng2007impact} adopts the conditional probability of the attribute values of an object w.r.t. the attribute cluster centers to represent categorical data, and the method in \cite{cao2012dissimilarity} introduces set theory for measuring intra-attribute value similarity to represent categorical data. The latter captures the way and degree that attributes are coupled. They typically measure the inter-attribute couplings w.r.t. the conditional probabilities \cite{le2005association,ahmad2007method} or co-occurrence frequencies \cite{jia2015distance} between values of different attributes. These two groups of representations outperform other classic methods such as matching-based ones, as they capture richer interactions in categorical data. However, most such work only considers a single type of coupling and overlooks many other characteristics in categorical data. The work in \cite{zhang2015categorical} shows that representing more couplings in categorical data may significantly improve learning performance. However, other recent research also shows that capturing more but duplicated couplings does not guarantee better categorical representation, as shown in \cite{ienco2012context}. A symmetric uncertainty (SU) criterion enables better representation performance in several categorical data representation methods with multiple couplings \cite{ienco2012context,wang2013coupled,wang2015coupled}. Alternatively, the method in \cite{jianembedding} uses principal component analysis to reduce the redundancy between couplings. These methods achieve performance gains by reducing redundant couplings but ignore the inconsistency between heterogeneous couplings with diverse interactions and distributions, i.e., the heterogeneity \cite{cao2014non} of different couplings. This issue was studied in \cite{zhu2018heterogeneous} by capturing hierarchical couplings to enhance categorical data representation with label information.
None of the existing unsupervised categorical representation methods explicitly and effectively models heterogeneous couplings, which poses a significant challenge to representation learning, as explored in this work. Deep representation learning presents increasingly promising power in representing images, text, networks, etc. \cite{bengio2013representation}. However, unsupervised categorical representation learning has not been well explored and presents a challenge to deep learning. Existing methods typically convert categorical input into a vector space through encoding such as one-hot encoding and word embedding and then rely on a deep neural architecture such as an autoencoder \cite{VincentLLBM10}, adversarial learning (e.g., BiGAN and GAN variants) \cite{donahue2016adversarial}, the wide and deep network \cite{cheng2016wide}, the variational autoencoder \cite{kingma2013auto}, or an auto-instructor \cite{jianembedding,JianHCL18} to learn the hidden relations and features. Such methods overlook or simplify most of the data characteristics of categorical data (e.g., value semantics, frequencies, attribute interactions, distributions, etc.), decouple and disentangle couplings \cite{bengio2013representation}, and rely on high computational power and large (partially labeled) data. They are troubled by small, unlabeled, and complicatedly coupled categorical data. \section{Preliminaries}\label{sec:preliminary} Assume a categorical data set drawn from distributions $\Phi$ can be represented as a three-element tuple $C = \langle O, A, V\rangle$, where $O = \{\mathsf{o}_i | i \in N_o\}$ is an object set with $n_o$ objects; $A = \{\mathsf{a}_i | i \in N_a \}$ is an attribute set with $n_a$ attributes; and $V = \bigcup_{j=1}^{n_a}V^{(j)}$ is the collection of attribute values with $n_v$ values, in which $V^{(j)} = \{\mathsf{v}_i^{(j)} | i \in N_v^{(j)}\}$ is the set of the $n_v^{(j)}$ attribute values $\mathsf{v}_i^{(j)}$ of attribute $\mathsf{a}_j$. $N_o$, $N_a$ and $N_v^{(j)}$ are the sets of indices for objects, attributes, and values of the $j$-th attribute, respectively. For the $i$-th object $\mathsf{o}_i$, the categorical value in the $j$-th attribute $\mathsf{a}_j$ can be represented as $v_i^{(j)}$. The main notations in this paper are defined in Table \ref{tab:notationList} \footnote{The specifications for symbol styles in this paper are as follows. Element: lowercase with Sans Serif font; value: lowercase; vector: lowercase with bold font; matrix: uppercase with bold font; set: uppercase; function: lowercase with parentheses; space: uppercase with Calligraphic font; value index: subscript; attribute index: superscript with parenthesis.}. For example, in Table \ref{tab:toy}, $O$ = \{A1, A2, A3, A4, A5, A6\}, $A$ = \{\text{Texture, Color, Root Shape}\}, $V$ = \{clear, blurry, white, yellow, green, black, straight, curled, slightly curled\}, $V^{(1)}$ = \{clear, blurry\}, and $v_2^{(3)}$ = straight.
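For concreteness, the tuple $C=\langle O, A, V\rangle$ for Table \ref{tab:toy} can be written out as follows (a plain Python sketch of the notation; the identifiers are ours, and the ordering of the flattened $V$ is arbitrary):
\begin{verbatim}
# C = <O, A, V> for the watermelon data of Table 1.
O = ["A1", "A2", "A3", "A4", "A5", "A6"]
A = ["Texture", "Color", "Root Shape"]
rows = [
    ["clear",  "white",  "straight"],
    ["blurry", "yellow", "straight"],
    ["blurry", "yellow", "curled"],
    ["clear",  "green",  "slightly curled"],
    ["blurry", "green",  "curled"],
    ["clear",  "black",  "slightly curled"],
]

# V^{(j)}: value set of the j-th attribute; V is their union.
V_j = [sorted({r[j] for r in rows}) for j in range(len(A))]
V = [w for Vs in V_j for w in Vs]

def v(i, j):
    """v_i^{(j)}: value of the i-th object in the j-th attribute
    (1-indexed, as in the paper's notation)."""
    return rows[i - 1][j - 1]

print(V_j[0])   # ['blurry', 'clear'], i.e., V^{(1)}
print(v(2, 3))  # 'straight', i.e., v_2^{(3)}
\end{verbatim}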
\begin{table}[!htpb] \caption{List of Notations \label{tab:notationList}} \begin{tabular}{l|l} \toprule Symbol & Meaning \\ \midrule $\Phi$ & Categorical data distributions \\ $C$ & Categorical data tuple\\ $O$ & Object set \\ $A$ & Attribute set \\ $V$ & Categorical value set of all attributes\\ $V^{(j)}$ & Categorical value set of the $j$-th attribute\\ $\mathsf{o}_i$ & The $i$-th object in $O$\\ $\mathsf{a}_i$ & The $i$-th attribute in $A$\\ $\mathsf{v}_i$ & The $i$-th categorical value in $V$\\ $v_i^{(j)}$ & The categorical value of $\mathsf{o}_i$ in $\mathsf{a}_j$\\ $\mathsf{v}_i^{(j)}$ & The $i$-th categorical value in $V^{(j)}$\\ $\mathsf{m}_i$ & The vector corresponding to the $i$-th value in a coupling space\\ $\mathsf{c}_k$ & The $k$-th cluster\\ $\bm{\omega}$ & The heterogeneity parameter \\ $n_o$ & The number of objects in $O$\\ $n_a$ & The number of attributes in $A$\\ $n_v$ & The number of categorical values in $V$ \\ $n_v^{(j)}$ & The number of categorical values in $V^{(j)}$\\ $n_k$ & The number of kernel matrices transformed from coupling spaces\\ $n_{c}$ & The number of clusters\\ $n_{oc}$ & The size of the $c$-th cluster\\ $n_{mv}$ & The maximal number of values in attributes\\ $n_{av}$ & The average number of attribute values\\ $n_{\bm{\omega}}$ & The number of elements in $\bm{\omega}$\\ $n_i$ & The number of iterations\\ $n_b$ & The training batch size\\ $n_m^{(z)}$ & The number of couplings for the $z$-th attribute \\ $N_o$ & The set of indices for objects in $O$ \\ $N_a$ & The set of indices for attributes in $A$ \\ $N_v^{(j)}$ & The set of indices for categorical values in $V^{(j)}$ \\ $\mathcal{M}_{Ia}$ & Intra-attribute coupling spaces\\ $\mathcal{M}_{Ie}$ & Inter-attribute coupling spaces\\ $\mathcal{K}_p$ & The $p$-th kernel space transformed from a coupling space\\ $\mathcal{K}_p^{\prime}$ & The heterogeneous kernel space transformed from $\mathcal{K}_p$\\ $\mathbf{K}_p$ & The kernel matrix that spans $\mathcal{K}_p$\\ $\mathbf{K}_p^{\prime}$ & The kernel matrix that spans $\mathcal{K}_p^{\prime}$\\ $\mathbf{T}_p$ & The transformation matrix from $\mathcal{K}_p$ to $\mathcal{K}_p^{\prime}$\\ $\mathbf{C}^{(z,k)}$ & The $k$-th coupling matrix of the $z$-th attribute \\ $\alpha_{pi}$ & The weight of the $i$-th value in the $p$-th kernel space\\ $\beta_p$ & The weight of the $p$-th kernel space \\ $\mathbf{S}$ & UNTIE similarity representation\\ $\mathbf{X}$ & UNTIE vector representation\\ \bottomrule \end{tabular} \end{table} \section{The UNTIE Design}\label{sec:method} UNTIE learns unsupervised categorical representations based on the rationale below: (1) a categorical value may belong to multiple distributions; (2) a coupling may make different contributions to different value distributions; and (3) the overall distribution of a categorical value can be described by multiple distributions. We call the above \textit{heterogeneity hypotheses}, which are theoretically supported by Theorem \ref{thm:factorize} in Section \ref{sec:hypothesis-proof}. \subsection{The UNTIE Framework} While this paper focuses on a specific instance of UNTIE, as shown in Fig. \ref{fig:untie}, UNTIE actually presents a framework of unsupervised categorical representation. It represents categorical data in both vector (as a vector representation) and similarity (as a kernel matrix) spaces. To reveal heterogeneous couplings, UNTIE first converts categorical data to several coupling spaces by multiple coupling learning functions.
Then, it transforms each coupling space into multiple kernel spaces. Further, it reduces the redundancy and inconsistency between heterogeneous couplings by learning the heterogeneity between couplings in the kernel spaces. Specifically, UNTIE differentiates the contributions of individual kernel spaces and reveals the kernel-sensitive distribution within each kernel space. To efficiently learn the heterogeneities in an unsupervised way, UNTIE wraps the weight of each kernel space and the weights of the values embedded in a kernel space by a wrapper kernel. It then optimizes this kernel by solving a kernel k-means objective, i.e., regularizing the objects within one cluster to be more similar to each other than to those in other clusters. UNTIE uses the optimized wrapper kernel as the similarity representation of categorical data, and further generates the vector representation by decomposing the optimized wrapper kernel. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{UNTIE.eps} \caption{The UNTIE Framework. It first transforms the coupling spaces to multiple kernel spaces and then learns the heterogeneities within and between couplings in these kernel spaces by solving a kernel k-means objective.} \label{fig:untie} \end{figure*} To effectively address the coupling inconsistency problem, UNTIE learns heterogeneous couplings by multiple kernels, which transform a coupling from its original space to several kernel spaces. Since a kernel is sensitive to a distribution \cite{bucak2014multiple}, these kernel spaces reflect different value distributions. In a kernel space, a coupling is preserved if it matches the kernel-sensitive distribution, while other couplings are filtered out. For this, UNTIE learns the weights of values in each kernel space to effectively reveal the multiple distributions corresponding to a coupling. The multiple distributions of a categorical value jointly contribute to the overall distribution of the value. Therefore, the learned weights of these kernel spaces filter the redundant information but integrate the complementary information. To learn the heterogeneous couplings in an unsupervised manner, without loss of generality, UNTIE adopts the assumption that the objects within one cluster are more similar to each other than to those in other clusters. This assumption is commonly made in clustering and classification methods and is consistent with real data distributions. Accordingly, UNTIE learns heterogeneous couplings for categorical data representation in an iterative way. In each iteration, UNTIE first analyzes the clusters based on its generated representation, and then tunes the representation based on the obtained clusters. To efficiently cluster data, UNTIE wraps the weight of each kernel space and the weight of the values embedded in kernel spaces by a wrapper kernel and optimizes this kernel by solving a kernel k-means objective. In addition to efficiency, the kernel k-means objective also brings other benefits for categorical data representation, such as good separability, which will be discussed in Section \ref{sec:normalized-cut}. \subsection{Coupling Learning} As discussed in the Introduction, we broadly refer to \textit{couplings} as any interactions and relations within and between values, attributes, and objects \cite{cao2015coupling,wang2015coupled,zhu2018heterogeneous}.
In categorical data, possible types of couplings include \textit{intra-attribute couplings} (i.e., value couplings, such as value frequency, co-occurrence and matching), inter-attribute couplings (e.g., attribute correlation, dependency and unknown linkage), and object couplings built on the value and attribute couplings. UNTIE learns the value-to-attribute-to-object hierarchical couplings after learning and fusing various intra- and inter-attribute couplings in unlabeled categorical data. \textbf{Learning Intra-attribute Couplings.} \textit{Intra-attribute couplings} represent the interactions between the values of an attribute and the value distributions in an attribute \cite{boriah2008similarity,wang2015coupledMixed}. We measure intra-attribute couplings in terms of intra-attribute distributions by a value frequency function and calculate the Euclidean distance in a numerical space. Although the value frequency function has only one input value, it measures the value distribution against all values. For a categorical value $\mathsf{v}_i^{(j)}$ in the $j$-th attribute, the value frequency function $m_{Ia}^{(j)}(\cdot)$ maps an intra-attribute coupling between this value and the other categorical values in this attribute to a one-dimensional intra-attribute coupling vector $m_{Ia}^{(j)}(\mathsf{v}_i^{(j)})$: \begin{equation} \label{eq:intra} m_{Ia}^{(j)}(\mathsf{v}_i^{(j)}) = [\frac{|g^{(j)}(\mathsf{v}_i^{(j)})|}{n_o}], \end{equation} where $g^{(j)}(\cdot): V^{(j)}\rightarrow O$ maps the value $\mathsf{v}_i^{(j)}$ to the set of objects that have value $\mathsf{v}_i^{(j)}$ in the $j$-th attribute, $n_o$ is the number of objects, and $|\cdot|$ refers to the cardinality of a set. For example, in Table \ref{tab:toy}, the relationship between the value \textit{yellow} and the attribute \textit{Color} is $g^{(2)}(yellow) = \{A2, A3\}$. The intra-attribute coupling vector of the value \textit{yellow} is $m_{Ia}^{(2)}(yellow) = [\frac{|\{A2, A3\}|}{6}] = [\frac{1}{3}]$. An intra-attribute coupling space $\mathcal{M}_{Ia}^{(j)}$ is spanned by the intra-attribute coupling vectors obtained in an attribute by Eq. (\ref{eq:intra}) and is defined below: \begin{equation}\label{eq:intra-couplings} \mathcal{M}_{Ia}^{(j)} = \{m_{Ia}^{(j)}(\mathsf{v}_i^{(j)})| \mathsf{v}_i^{(j)} \in V^{(j)}\}. \end{equation} For categorical data with $n_a$ attributes, the intra-attribute coupling spaces are $\mathcal{M}_{Ia} = \{\mathcal{M}_{Ia}^{(1)}, \cdots, \mathcal{M}_{Ia}^{(n_a)}\}$. The intra-attribute coupling spaces only present a one-dimensional embedding of the categorical data space w.r.t. each attribute. The following inter-attribute couplings consider the interactions between attributes. \textbf{Learning Inter-attribute Couplings.} \textit{Inter-attribute couplings} refer to the interactions between attributes and the contextual (and/or semantic) information of attribute values w.r.t. other attributes \cite{ienco2012context,wang2011coupled,cao2015coupling}. This attribute-based interactive and contextual information complements the value distributions and interactions captured by intra-attribute couplings. For example, in Table \ref{tab:toy}, white and black watermelons have the same frequency but can be distinguished by involving their \textit{root shapes}, which are significantly different. Here, the inter-attribute couplings are represented by the information conditional probability, which reveals the distributions of an attribute value in the spaces spanned by the values of the other attributes.
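Before formalizing these inter-attribute couplings, the intra-attribute computation of Eq. \eqref{eq:intra} can be sketched in a few lines (a minimal Python illustration on the data of Table \ref{tab:toy}; all identifiers are ours and are not part of UNTIE):
\begin{verbatim}
# Toy data of Table 1: one row per object, columns are the attributes
# Texture, Color, Root Shape (0-indexed here).
data = [
    ["clear",  "white",  "straight"],
    ["blurry", "yellow", "straight"],
    ["blurry", "yellow", "curled"],
    ["clear",  "green",  "slightly curled"],
    ["blurry", "green",  "curled"],
    ["clear",  "black",  "slightly curled"],
]
n_o = len(data)

def g(j, v):
    """Objects taking value v in attribute j, i.e., g^{(j)}(v)."""
    return {i for i in range(n_o) if data[i][j] == v}

def m_intra(j, v):
    """One-dimensional intra-attribute coupling vector: value frequency."""
    return [len(g(j, v)) / n_o]

print(m_intra(1, "yellow"))  # [0.333...], matching the worked example
\end{verbatim}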
Given a value $\mathsf{v}^{(j)}$ of attribute $\mathsf{a}_j$ and a value $\mathsf{v}^{(k)}$ of attribute $\mathsf{a}_k$, the information conditional probability function is defined as follows: \begin{equation}\label{eq:ICPF-Attri} p(\mathsf{v}^{(j)}|\mathsf{v}^{(k)}) = \frac{|g^{(j)}(\mathsf{v}^{(j)})\cap g^{(k)}(\mathsf{v}^{(k)})|}{|g^{(k)}(\mathsf{v}^{(k)})|}, \end{equation} where $\cap$ returns the intersection of two sets. Based on the information conditional probability function, the inter-attribute coupling learning function $m_{Ie}^{(j)}(\cdot)$ embeds the interactions between value $\mathsf{v}_i^{(j)}$ and the other attributes as a $|V_*|$-dimensional inter-attribute coupling vector $m_{Ie}^{(j)}(\mathsf{v}_i^{(j)})$, \begin{equation} m_{Ie}^{(j)}(\mathsf{v}_i^{(j)}) = \left[ \begin{array}{cccc} p(\mathsf{v}_i^{(j)}|\mathsf{v}_{*1}), & \cdots , & p(\mathsf{v}_i^{(j)}|\mathsf{v}_{*|V_*|}) \end{array} \right]^{\top}, \label{eq:inter} \end{equation} where $V_* = \bigcup_{k \in N_a, k \neq j}V^{(k)}$ is the set of values in all attributes except $\mathsf{a}_j$, and $\mathsf{v}_{*i} \in V_*$ is a categorical value in the set $V_*$. For example, the information conditional probability between the yellow watermelons and those with curled root shape is $p(yellow|curled) = \frac{|\{A2, A3\} \cap \{A3, A5\}|}{|\{A3, A5\}|} = \frac{|\{A3\}|}{|\{A3, A5\}|} = \frac{1}{2}$. The inter-attribute coupling vector of the value \textit{yellow} is calculated as \begin{equation*} \begin{aligned} &m_{Ie}^{(2)}(yellow) \\ & = [p(yellow|clear), \cdots, p(yellow|slightly~curled)] \\ &= [0, \frac{2}{3}, \frac{1}{2}, \frac{1}{2}, 0]. \end{aligned} \end{equation*} An inter-attribute coupling space $\mathcal{M}_{Ie}^{(j)}$ is spanned by the inter-attribute coupling vectors obtained in an attribute by Eq. \eqref{eq:inter}: \begin{equation}\label{eq:inter-coupling} \mathcal{M}_{Ie}^{(j)} = \{m_{Ie}^{(j)}(\mathsf{v}_i^{(j)})|\mathsf{v}_i^{(j)}\in V^{(j)}\}. \end{equation} For categorical data with $n_a$ attributes, the inter-attribute coupling spaces are $\mathcal{M}_{Ie} = \{ \mathcal{M}_{Ie}^{(1)}, \cdots, \mathcal{M}_{Ie}^{(n_a)} \}$. An inter-attribute coupling learning function projects categorical values into a higher dimensional space if $|V| > 2 |V^{(j)}| - 1$, because the dimensionality of the inter-attribute coupling space equals $|V| - |V^{(j)}|$, while the degree of freedom (equivalent to the dimensionality of transforming a categorical value to a dummy variable) of the $j$-th attribute is $|V^{(j)}|-1$. In this way, the value couplings incurred by other attributes are captured, which complements the intra-attribute couplings to form a complete representation of the categorical attribute space. \subsection{Heterogeneity Learning in Kernel Spaces}\label{subsec:heterogeneity} With the coupling spaces built above from the intra- and inter-attribute perspectives, UNTIE further constructs an entire coupling space $\mathcal{M}$, which is the collection of the heterogeneous intra-attribute coupling spaces $\mathcal{M}_{Ia}$ and inter-attribute coupling spaces $\mathcal{M}_{Ie}$, \begin{equation} \mathcal{M} = \mathcal{M}_{Ia} \cup \mathcal{M}_{Ie}. \end{equation} To effectively integrate the heterogeneous couplings in the learned coupling space set, UNTIE transforms the learned heterogeneous coupling spaces to uniform spaces, in which heterogeneous couplings are comparable.
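Before detailing this transformation, we note that the inter-attribute coupling vector of Eq. \eqref{eq:inter} admits a sketch analogous to the previous one (again purely illustrative; the ordering of values within $V_*$ is arbitrary):
\begin{verbatim}
# data and g(j, v) repeat the intra-attribute sketch above.
data = [
    ["clear",  "white",  "straight"],
    ["blurry", "yellow", "straight"],
    ["blurry", "yellow", "curled"],
    ["clear",  "green",  "slightly curled"],
    ["blurry", "green",  "curled"],
    ["clear",  "black",  "slightly curled"],
]
n_o, n_a = len(data), len(data[0])

def g(j, v):
    return {i for i in range(n_o) if data[i][j] == v}

def m_inter(j, v):
    """Inter-attribute coupling vector: conditional probabilities
    p(v | v_*) over all values v_* of the other attributes."""
    vec = []
    for k in range(n_a):
        if k == j:
            continue
        for w in sorted({row[k] for row in data}):
            vec.append(len(g(j, v) & g(k, w)) / len(g(k, w)))
    return vec

# Same entries as the worked example for "yellow",
# up to the ordering of values: [2/3, 0, 1/2, 0, 1/2].
print(m_inter(1, "yellow"))
\end{verbatim}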
To implement this transformation, UNTIE uses multiple kernels to transform each coupling space into its corresponding kernel spaces, where each kernel space corresponds to the transformed coupling space w.r.t. a particular kernel mapping function. It generates a set of $n_k$ ($n_k= |\mathcal{M}| \times |F|$, where $F$ is the set of kernel functions for the transformation) kernel spaces $\{\mathcal{K}_1, \mathcal{K}_2, \cdots, \mathcal{K}_{n_k} \}$, and the $p$-th ($p \leq n_k$) space is spanned by a kernel matrix $\mathbf{K}_p$, which is constructed from a coupling space $\mathcal{M}_j$ by a kernel function $k_p(\cdot,\cdot)$ for an attribute. Denoting by $\mathsf{m}_i$ the vector in $\mathcal{M}_j$ corresponding to the $i$-th categorical value, $\mathbf{K}_p$ is represented as follows, \begin{equation}\label{eq:kernelspace} \mathbf{K}_p = \left[ \begin{matrix} k_p(\mathsf{m}_1, \mathsf{m}_1)& k_p(\mathsf{m}_1, \mathsf{m}_2)&\cdots& k_p(\mathsf{m}_1, \mathsf{m}_{n_v^*}) \\ k_p(\mathsf{m}_2, \mathsf{m}_1)& k_p(\mathsf{m}_2, \mathsf{m}_2)&\cdots& k_p(\mathsf{m}_2, \mathsf{m}_{n_v^*}) \\ \vdots & \vdots & \ddots & \vdots \\ k_p(\mathsf{m}_{n_v^*}, \mathsf{m}_1)& k_p(\mathsf{m}_{n_v^*}, \mathsf{m}_2)&\cdots& k_p(\mathsf{m}_{n_v^*}, \mathsf{m}_{n_v^*}) \\ \end{matrix} \right], \end{equation} where $n_v^*$ is the number of categorical values represented by $\mathcal{M}_j$. For example, in Table \ref{tab:toy}, if $k_p(\cdot,\cdot)$ is a linear kernel, $\mathcal{M}_j$ is $\mathcal{M}_{Ie}^{(2)}$, and $\mathsf{m}_2$ corresponds to the value \textit{yellow}, then $k_p(\mathsf{m}_2,\mathsf{m}_2) = [0, \frac{2}{3}, \frac{1}{2}, \frac{1}{2}, 0] \cdot [0, \frac{2}{3}, \frac{1}{2}, \frac{1}{2}, 0]^{\top} = \frac{17}{18}$. To reveal the heterogeneity within a coupling, UNTIE learns the weights of values in each kernel space. Specifically, it learns a set of transformation matrices $\{\mathbf{T}_1,\mathbf{T}_2,\cdots, \mathbf{T}_{n_k}\}$ to reconstruct the kernel spaces $\{\mathcal{K}_1^{\prime}, \cdots, \mathcal{K}_{n_k}^{\prime} \}$, in which the $p$-th kernel matrix $\mathbf{K}_p^{\prime}$ only contains the $p$-th kernel-sensitive distribution that suits the corresponding coupling. We call the reconstructed kernel spaces \textit{heterogeneous kernel spaces}. $\mathbf{K}_p^{\prime}$ is defined as: \begin{equation}\label{eq:trans} \mathbf{K}_p^{\prime} = \mathbf{T}_p\cdot \mathbf{K}_p. \end{equation} UNTIE restricts $\mathbf{T}_p$ to a diagonal matrix: \begin{equation} \mathbf{T}_p = \left[ \begin{matrix} \alpha_{p1} & 0 & \cdots & 0\\ 0 & \alpha_{p2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{pn_v}\\ \end{matrix} \right]. \end{equation} As a result, $\alpha_{pi}$ is the weight of the $i$-th value in the $p$-th kernel space, i.e., the weight of the $i$-th row $[k_p(\mathsf{m}_i,\mathsf{m}_1), k_p(\mathsf{m}_i, \mathsf{m}_2), \cdots, k_p(\mathsf{m}_i, \mathsf{m}_{n_v^*})]$ of $\mathbf{K}_p$. A larger $\alpha_{pi}$ implies a stronger coupling of the $i$-th value revealed by the coupling space corresponding to the $p$-th kernel space. To further capture the heterogeneity between couplings, UNTIE learns the contribution of each heterogeneous kernel space to the final representation. It first defines a similarity measure between objects in the heterogeneous kernel spaces and then learns the weight of each kernel space based on this similarity measure to reflect their contributions.
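The construction of $\mathbf{K}_p$ in Eq. \eqref{eq:kernelspace} and its reweighting by $\mathbf{T}_p$ in Eq. \eqref{eq:trans} can be sketched as follows (illustrative only; the kernel set, the RBF bandwidth and the $\alpha$ weights are hypothetical choices rather than values prescribed by UNTIE):
\begin{verbatim}
import numpy as np

def linear_kernel(M):
    """K[i, j] = <m_i, m_j> for rows m_i of the coupling space M."""
    return M @ M.T

def rbf_kernel(M, gamma=0.5):
    """RBF kernel on the same coupling space (hypothetical bandwidth)."""
    sq = np.sum(M ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * M @ M.T))

# Inter-attribute coupling space of the attribute Color; rows: white,
# yellow, green, black; columns ordered blurry, clear, curled,
# slightly curled, straight.
M_color = np.array([
    [0.,  1/3, 0.,  0.,  1/2],   # white
    [2/3, 0.,  1/2, 0.,  1/2],   # yellow
    [1/3, 1/3, 1/2, 1/2, 0.],    # green
    [0.,  1/3, 0.,  1/2, 0.],    # black
])

K = [linear_kernel(M_color), rbf_kernel(M_color)]
print(K[0][1, 1])  # 17/18 = 0.9444..., the value computed in the text

# T_p: diagonal reweighting of the values in the p-th kernel space;
# the alpha weights below are arbitrary placeholders (UNTIE learns them).
alpha = np.array([1.0, 0.8, 0.9, 1.0])
K_prime = [np.diag(alpha) @ Kp for Kp in K]
\end{verbatim}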
Given a categorical data set, consider the $p$-th kernel matrix. Let $\mathfrak{i}$ and $\mathfrak{j}$ be the indices of the values in the $p$-th kernel space that correspond to the $i$-th and $j$-th objects, respectively, and let $\mathbf{K}_{p,\mathfrak{i}\cdot}^{\prime}$ (the $\mathfrak{i}$-th row of $\mathbf{K}_p^{\prime}$, written as a column vector) denote $\mathsf{o}_i$ in the $p$-th heterogeneous kernel space. The similarity $S_{p, ij}$ between the $i$-th and $j$-th objects in this space, measured by the linear kernel, is \begin{equation}\label{eq:Sij} S_{p, ij} = \mathbf{K}_{p,\mathfrak{i}\cdot}^{\prime\top}\mathbf{K}_{p,\mathfrak{j}\cdot}^{\prime}. \end{equation} Substituting Eq. \eqref{eq:trans}, Eq. \eqref{eq:Sij} equals \begin{equation} S_{p, ij} = \mathbf{K}_{p,\mathfrak{i}\cdot}^{\top}\mathbf{T}_p^{\top}\mathbf{T}_{p} \mathbf{K}_{p,\mathfrak{j}\cdot}. \end{equation} UNTIE defines the final similarity representation $S_{ij}$ between the $i$-th and $j$-th objects as a linear combination of the base similarity measures from the heterogeneous spaces, so as to filter redundant information and integrate complementary information between couplings: \begin{equation}\label{eq:sijc} S_{ij} = \sum\limits_{p=1}^{n_k}\beta_p S_{p,ij}, \end{equation} where $\beta_p \geq 0$ is the weight of the $p$-th base similarity. Denoting the diagonal matrix $\bm{\omega}_p = \beta_p \mathbf{T}_p^{\top}\mathbf{T}_p$, Eq. \eqref{eq:sijc} is rewritten as: \begin{equation} S_{ij} = \sum\limits_{p=1}^{n_k}\mathbf{K}_{p,\mathfrak{i}\cdot}^{\top}\bm{\omega}_p\mathbf{K}_{p,\mathfrak{j}\cdot}. \end{equation} Accordingly, UNTIE learns $\alpha$ and $\beta$ simultaneously by learning the heterogeneity parameter $\bm{\omega}$. The optimized $\bm{\omega}$ guides the integration of the heterogeneous couplings into the similarity representation $S_{ij}$.

\subsection{Kernel K-means-based Representation Learning}\label{subsec:unsupervised}
In an unsupervised way, UNTIE learns the heterogeneous couplings by wrapping $\alpha$, which reveals the heterogeneity within couplings, and $\beta$, which reveals the heterogeneity between couplings, into a wrapper kernel. It then further optimizes this kernel by solving a kernel k-means objective \cite{dhillon2004kernel}. K-means is a popular clustering algorithm that minimizes the distance between each object and its assigned cluster center, and it has also been used for information integration. Given a set of objects $O = \{\mathsf{o}_i \in \mathcal{R}^{n_a}\,|\,i=1,\cdots,n_o\}$, the k-means objective is formalized as: \begin{equation}\label{eq:kmeans} \begin{aligned} & \underset{\mathbf{Z}\in\{0,1\}^{n_o \times n_c}}{\text{minimize}} & & \sum\limits_{i=1, c=1}^{n_o,n_c}z_{ic}\lVert \mathsf{o}_i - \bm{\mu}_c \rVert_{2}^{2}\\ & \text{subject to} & & \sum\limits_{c=1}^{n_c} z_{ic} = 1, \end{aligned} \end{equation} where $z_{ic}$ indicates whether $\mathsf{o}_i$ belongs to the $c$-th cluster, $\bm{\mu}_c = \frac{1}{n_{oc}}\sum_{i=1}^{n_o} z_{ic}\mathsf{o}_{i}$ is the centroid of the $c$-th cluster, and $n_{oc} = \sum_{i=1}^{n_o}z_{ic}$ is the size of the $c$-th cluster. To address the issue that k-means cannot cluster data with nonlinear boundaries, kernel k-means first uses a mapping function to map the data to a higher-dimensional space and then applies k-means to the mapped data.
With a mapping function $k(\cdot)$ induced by a kernel, kernel k-means is formalized as: \begin{equation}\label{eq:kkmeans} \begin{aligned} & \underset{\mathbf{Z}\in\{0,1\}^{n_o \times n_c}}{\text{minimize}} & & \sum\limits_{i=1, c=1}^{n_o,n_{c}}z_{ic}\lVert k(\mathsf{o}_i) - \bm{\mu}_c \rVert_{2}^{2}\\ & \text{subject to} & & \sum\limits_{c=1}^{n_c} z_{ic} = 1, \end{aligned} \end{equation} where $\bm{\mu}_c = \frac{1}{n_{oc}}\sum_{i=1}^{n_o} z_{ic}k(\mathsf{o}_{i})$. Eq. \eqref{eq:kkmeans} can be rewritten in the following matrix form: \begin{equation}\label{eq:kkmeans-matrix} \begin{aligned} & \underset{\mathbf{Z}\in\{0,1\}^{n_o \times n_c}}{\text{minimize}} & & \text{Tr}(\mathbf{K}) - \text{Tr}(\mathbf{L}^{\frac{1}{2}}\mathbf{Z}^{\top}\mathbf{K}\mathbf{Z}\mathbf{L}^{\frac{1}{2}})\\ & \text{subject to} & & \mathbf{Z}\mathbf{1}_{n_c} = \mathbf{1}_{n_o}, \end{aligned} \end{equation} where $\text{Tr}(\cdot)$ is the trace of a matrix, $\mathbf{K}$ is the kernel matrix with $k_{ij} = k(\mathsf{o}_i)^{\top}k(\mathsf{o}_j)$, $\mathbf{L} = \text{diag}([n_{o1}^{-1}, n_{o2}^{-1}, \cdots, n_{on_c}^{-1}])$, and $\mathbf{1}_{\ell}\in \{1\}^{\ell}$ is a column vector with all elements equal to 1. Directly solving Eq. \eqref{eq:kkmeans-matrix} is difficult because the entries of $\mathbf{Z}$ are restricted to either 0 or 1. Typically, Eq. \eqref{eq:kkmeans-matrix} is relaxed by allowing $\mathbf{Z}$ to take real values. Denoting $\mathbf{H} = \mathbf{ZL}^{\frac{1}{2}}$, the above problem is restated as \begin{equation}\label{eq:rkkmeans-matrix} \begin{aligned} & \underset{\mathbf{H}}{\text{minimize}} & & \text{Tr}(\mathbf{K}(\mathbf{I}_{n_o} - \mathbf{H}\mathbf{H}^{\top}))\\ & \text{subject to} & & \mathbf{H}\in \mathcal{R}^{n_o \times n_c},\\ & & & \mathbf{H}^{\top}\mathbf{H} = \mathbf{I}_{n_c}, \end{aligned} \end{equation} where $\mathbf{I}_{n_c}$ is the identity matrix of size $n_c \times n_c$. The optimal $\mathbf{H}$ for Eq. \eqref{eq:rkkmeans-matrix} is obtained by taking the $n_c$ eigenvectors of $\mathbf{K}$ with the largest eigenvalues \cite{jegelka2009generalized}. UNTIE integrates heterogeneous coupling learning into kernel k-means seamlessly by wrapping $\alpha$ and $\beta$ into a wrapper kernel $s(\cdot,\cdot): O\times O \rightarrow \mathcal{R}$, which is defined below, \begin{equation}\label{eq:wrapper_kernel} s(\mathsf{o}_i, \mathsf{o}_j) = S_{ij}. \end{equation} Accordingly, UNTIE constructs a kernel matrix $\mathbf{S}$ w.r.t. the kernel $s(\cdot, \cdot)$ and the categorical object set $O$: \begin{equation}\label{eq:kernelmatrix} \mathbf{S} = \left[ \begin{matrix} s(\mathsf{o}_1, \mathsf{o}_1)& s(\mathsf{o}_1, \mathsf{o}_2)&\cdots& s(\mathsf{o}_1, \mathsf{o}_{n_o}) \\ s(\mathsf{o}_2, \mathsf{o}_1)& s(\mathsf{o}_2, \mathsf{o}_2)&\cdots& s(\mathsf{o}_2, \mathsf{o}_{n_o}) \\ \vdots & \vdots & \ddots & \vdots \\ s(\mathsf{o}_{n_o}, \mathsf{o}_1)& s(\mathsf{o}_{n_o}, \mathsf{o}_2)&\cdots& s(\mathsf{o}_{n_o}, \mathsf{o}_{n_o}) \\ \end{matrix} \right]. \end{equation} Since $s(\cdot,\cdot)$ is proved to be a valid positive semi-definite kernel (see details in Section \ref{sec:psd-kernel}), $\mathbf{S}$ can replace $\mathbf{K}$ in Eq. \eqref{eq:rkkmeans-matrix}.
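As a minimal sketch of the relaxed solution, the following shows how the optimal $\mathbf{H}$ of Eq. \eqref{eq:rkkmeans-matrix} stacks the $n_c$ leading eigenvectors; the random positive semi-definite matrix stands in for the wrapper kernel matrix $\mathbf{S}$ and is an assumption for illustration only.
\begin{verbatim}
# Minimal sketch of solving the relaxed problem in Eq. (rkkmeans-matrix).
import numpy as np

def solve_relaxed_kernel_kmeans(S, n_c):
    # Minimizing Tr(S(I - H H^T)) s.t. H^T H = I is equivalent to
    # maximizing Tr(H^T S H); the optimum stacks the n_c eigenvectors
    # of S with the largest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    return eigvecs[:, -n_c:]

# Illustrative use with a random PSD stand-in for the wrapper kernel S.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
S = A @ A.T                                # symmetric positive semi-definite
H = solve_relaxed_kernel_kmeans(S, n_c=2)
print(np.trace(S) - np.trace(H.T @ S @ H)) # the relaxed objective value
\end{verbatim}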
In this way, the objective function of kernel k-means-based representation learning is formalized as: \begin{equation}\label{eq:obj} \begin{aligned} & \underset{\mathbf{H}, \bm{\omega}}{\text{minimize}} & & \text{Tr}(\mathbf{S}(\mathbf{I}_{n_o} - \mathbf{H}\mathbf{H}^{\top}))\\ & \text{subject to} & & \mathbf{H}\in \mathcal{R}^{n_o \times n_c},\\ & & & \mathbf{H}^{\top}\mathbf{H} = \mathbf{I}_{n_c}, \end{aligned} \end{equation} where $\bm{\omega}$ is the heterogeneity parameter to learn, and the similarity representation of the categorical data is obtained as $\mathbf{S}$. The corresponding vector representation is obtained by \begin{equation}\label{eq:vetorrepresentation} \mathbf{x}_i = [\sqrt{\bm{\omega}_{1,11}}\mathbf{K}_{1,\mathfrak{i}1},\sqrt{\bm{\omega}_{1,22}}\mathbf{K}_{1,\mathfrak{i}2}, \cdots,\sqrt{\bm{\omega}_{n_k,n_v^*n_v^*}}\mathbf{K}_{n_k, \mathfrak{i}n_v^*}], \end{equation} where $\bm{\omega}_{i,jj}$ is the $(j,j)$-th entry of $\bm{\omega}_i$ and $n_v^*$ is the number of values in the attribute corresponding to the $n_k$-th kernel. The learned representation $\mathbf{x}_i$ is a numerical approximation of the categorical data, which can be fed into vector-based learning methods.

\subsection{The UNTIE Algorithm}
The UNTIE objective function in Eq. \eqref{eq:obj} is solved by alternately updating $\mathbf{H}$ and $\bm{\omega}$: (1) Optimizing $\mathbf{H}$ given $\bm{\omega}$: with $\bm{\omega}$ fixed, $\mathbf{H}$ is obtained by solving the kernel k-means clustering optimization problem in Eq. \eqref{eq:rkkmeans-matrix} via eigenvalue decomposition; (2) Optimizing $\bm{\omega}$ given $\mathbf{H}$: with $\mathbf{H}$ fixed, the objective function for learning $\bm{\omega}$ is \begin{equation}\label{eq:omega} \begin{aligned} & \underset{\bm{\omega}}{\text{minimize}} & & \text{Tr}(\mathbf{S}(\mathbf{I}_{n_o} - \mathbf{H}\mathbf{H}^{\top})), \end{aligned} \end{equation} which can be optimized by linear programming. For large-scale data, Eq. \eqref{eq:omega} can be solved by stochastic gradient descent (SGD) methods, e.g., AdaGrad \cite{duchi2011adaptive} and Adam \cite{kingma2014adam}. We analyze the computational cost of UNTIE w.r.t. different optimization methods in Section \ref{sec:complexity}. Algorithm \ref{algorithm} describes the UNTIE working process. \begin{algorithm}[!htpb] \caption{The UNTIE Algorithm for Unsupervised Categorical Representation}\label{algorithm} \small \begin{algorithmic}[1] \Require Categorical data set $C$, a set of kernel functions $K = \{k_1(\cdot,\cdot),\cdots,k_{n^*_k}(\cdot,\cdot)\}$, the number of clusters $n_c$, and convergence threshold $\delta$. \Ensure Similarity representation $\mathbf{S}$ and vector representation $\mathbf{X}$. \State Mapping the categorical data to coupling spaces according to Eqs. \eqref{eq:intra-couplings} and \eqref{eq:inter-coupling}. \State Mapping the coupling spaces to multiple kernel spaces $\{\mathbf{K}_1, \cdots, \mathbf{K}_{n_k}\}$ by using $K$ according to Eq. \eqref{eq:kernelspace}. \State Initializing the wrapper kernel matrix $\mathbf{S}$ by setting $\alpha$ and $\beta$ to $1$, and setting $l^{\prime} = +\infty$, $\Delta = +\infty$, and $n_i = 0$. \While{$\Delta > \delta$} \State Calculating the $n_c$ eigenvectors of $\mathbf{S}$ with the largest eigenvalues and constructing $\mathbf{H}$ from these eigenvectors. \State Optimizing $\bm{\omega}$ by solving Eq. \eqref{eq:omega}. \State Calculating the loss $l = \text{Tr}(\mathbf{S}(\mathbf{I}_{n_o} - \mathbf{H}\mathbf{H}^{\top}))$.
\State Calculating the loss change $\Delta = |l - l^{\prime}|$. \State Setting $l^{\prime} = l$. \State $n_i = n_i + 1$. \EndWhile \State Calculating the vector representation $\mathbf{X}$ per Eq. \eqref{eq:vetorrepresentation}. \State \Return{$\mathbf{S}$, $\mathbf{X}$} \end{algorithmic} \vspace{-3pt} \end{algorithm}

\section{Theoretical Analysis}\label{sec:theory}
\subsection{The Fitness of the Heterogeneity Hypotheses}\label{sec:hypothesis-proof}
To discuss the fitness of the heterogeneity hypotheses, we first introduce the following theorem. \begin{theorem}\label{thm:factorize} The distribution $\Phi$ of a categorical data set can be described as a probability tensor $\bm{\Phi}$, in which each entry corresponds to the joint probability of a set of categorical values from the $n_a$ attributes. $\bm{\Phi}$ can be decomposed as $\bm{\Phi} = \sum\limits_{h=1}^{k} \pi_h \Theta_h$, where $\Theta_h = \bm{\theta}_{h}^{(1)} \otimes \bm{\theta}_{h}^{(2)} \otimes \cdots \otimes \bm{\theta}_{h}^{(n_a)}$, $\pi_h$ is the weight of $\Theta_h$ in composing $\bm{\Phi}$, $\otimes$ denotes the outer product, and $\bm{\theta}_h^{(j)}$ is a probability vector over the categorical values of the $j$-th attribute, of size $n_v^{(j)} \times 1$, for $h = 1, \cdots, k$ and $j = 1, \cdots, n_a$. \end{theorem} Theorem \ref{thm:factorize} can be proved by Corollary 1 in \cite{dunson2009nonparametric}. The categorical data distribution $\Phi$ is the joint distribution of the categorical values of all attributes. It is defined as $\Phi = \{\phi_{v^{(1)}v^{(2)}\cdots v^{(n_a)}} \,|\, v^{(j)} \in V^{(j)}\}$, where $\phi_{v^{(1)}v^{(2)}\cdots v^{(n_a)}}$ is the probability that the values $v^{(1)},v^{(2)},\cdots,v^{(n_a)}$ co-occur. $\bm{\Phi}$ is the probability tensor whose entries are the elements of $\Phi$. Theorem \ref{thm:factorize} indicates the following categorical data characteristics: \begin{itemize} \item \textit{A value may belong to multiple distributions}. For different values of $h$ in Theorem \ref{thm:factorize}, a categorical value in $\mathsf{a}_j$ has a different distribution $\bm{\theta}_h^{(j)}$. Therefore, if $k > 1$, the categorical value may follow multiple distributions. \item \textit{A coupling may contribute differently to different distributions}. The interactions under the various distributions may differ. Under a component distribution $\Theta_h$, the attributes are independent (since $\Theta_h$ equals the outer product of per-attribute distributions); in this case, inter-attribute couplings may contribute nothing. On the contrary, under the distribution $\Phi$, the attributes interact with each other, which is mainly reflected by inter-attribute couplings. \item \textit{The overall distribution of a categorical value is a mixture of multiple distributions}. In Theorem \ref{thm:factorize}, the overall distribution of a categorical value equals the weighted sum of its multiple distributions, and the parameter $\pi_h$ reflects the interactions between the distributions. \end{itemize} The heterogeneity hypotheses fit the above categorical data characteristics and provide a solid foundation for UNTIE to effectively capture the heterogeneity in couplings.

\subsection{The Positive Semi-definite Wrapper Kernel}\label{sec:psd-kernel}
As stated in Section \ref{subsec:unsupervised}, $s(\cdot, \cdot)$ must be a positive semi-definite kernel so that $\mathbf{S}$ can be integrated into the kernel k-means objective Eq. \eqref{eq:obj}. Before proving this, we introduce a lemma on kernel properties.
\begin{lemma}\label{lemma:psd-op} If $k_1(\mathsf{o}_i,\mathsf{o}_j)$ and $k_2(\mathsf{o}_i,\mathsf{o}_j)$ are positive semi-definite kernels on $O \times O$ and $a > 0$ is a constant, then the following functions $k(\mathsf{o}_i,\mathsf{o}_j)$ are positive semi-definite kernels: (1) $k(\mathsf{o}_i,\mathsf{o}_j) = ak_1(\mathsf{o}_i,\mathsf{o}_j)$; (2) $k(\mathsf{o}_i,\mathsf{o}_j) = k_1(\mathsf{o}_i,\mathsf{o}_j) + k_2(\mathsf{o}_i,\mathsf{o}_j)$. \end{lemma} This lemma can be found in Section 4.1 of \cite{steinwart2008support}. \begin{theorem}\label{thm:psd} The wrapper kernel $s(\mathsf{o}_i,\mathsf{o}_j) = S_{ij}$ defined in Eq. \eqref{eq:wrapper_kernel} is a positive semi-definite kernel. \end{theorem} \begin{proof} Given the coupling spaces and multiple kernel functions, the $i$-th object $\mathsf{o}_i$ corresponds to a real-valued vector $\mathbf{K}_{p,\mathfrak{i}\cdot}^{\prime} \in \mathcal{R}^{n_v^*}$ in the $p$-th kernel space. Therefore, $s_p(\mathsf{o}_i, \mathsf{o}_j) = S_{p,ij} =\mathbf{K}_{p,\mathfrak{i}\cdot}^{\prime\top}\mathbf{K}_{p,\mathfrak{j}\cdot}^{\prime}$ is the well-known linear kernel, which is positive semi-definite. Treating $s_p(\mathsf{o}_i, \mathsf{o}_j)$ as $k_1(\mathsf{o}_i, \mathsf{o}_j)$ in Lemma \ref{lemma:psd-op}: since $\beta_p \geq 0$, $\beta_ps_p(\mathsf{o}_i,\mathsf{o}_j)$ is a positive semi-definite kernel by property (1) of Lemma \ref{lemma:psd-op} (the case $\beta_p = 0$ yields the zero kernel, which is trivially positive semi-definite). Consequently, since the wrapper kernel $s(\mathsf{o}_i,\mathsf{o}_j) = S_{ij} = \sum\limits_{p=1}^{n_k}\beta_p S_{p,ij}$ is an accumulated sum of the $\beta_ps_p(\mathsf{o}_i,\mathsf{o}_j)$, $s(\mathsf{o}_i,\mathsf{o}_j)$ is positive semi-definite by repeatedly applying property (2) of Lemma \ref{lemma:psd-op} (treating $\sum\limits_{p=1}^{q}\beta_pS_{p,ij}$ as $k_1(\mathsf{o}_i, \mathsf{o}_j)$ and $\beta_{q+1}S_{q+1, ij}$ as $k_2(\mathsf{o}_i, \mathsf{o}_j)$ for $1 \leq q < n_k$). \end{proof} Theorem \ref{thm:psd} guarantees that the kernel matrix $\mathbf{S}$, constructed from the kernel $s(\mathsf{o}_i, \mathsf{o}_j)$ as in Eq. \eqref{eq:kernelmatrix}, can be incorporated into the kernel k-means objective. This is a fundamental property supporting effective unsupervised learning in UNTIE.

\subsection{The Separability of UNTIE-represented Data}\label{sec:normalized-cut}
The separability of a representation can be measured by the overlap between object sets (e.g., clusters or classes) under that representation. UNTIE-represented data has good separability since the resulting representation has the minimum normalized cut, which reflects the minimum overlap. Here, we first define the normalized cut and then prove that the objective of UNTIE is equivalent to learning a representation with the minimum normalized cut. Consider a graph over the categorical data, $G = \langle O, \mathbf{A}\rangle$, where $O$ is the set of categorical objects and $\mathbf{A}$ is a non-negative, symmetric affinity matrix that contains the connection strength or similarity between objects. \newtheorem{definition}{Definition} \begin{definition} \label{def:cut} The \textit{normalized cut} specifies the connection strength between two sets relative to the total connection strength in the graph. Formally, the normalized cut between sets $O_1, O_2 \subseteq O$ is \begin{equation*} normCut(O_1, O_2) = \frac{\sum_{i\in O_1, j\in O_2}\mathbf{A}(i,j)}{\sum_{i\in O_1, j\in O}\mathbf{A}(i,j)}.
\end{equation*} \end{definition} Definition \ref{def:cut} shows that the normalized cut indicates the overlap between object sets. In other words, the \textit{minimum normalized cut} reflects the maximum separability between clusters, which is essential for most learning tasks, e.g., clustering. \begin{theorem}\label{thm:cut} The objective of UNTIE in Eq. \eqref{eq:obj} is equivalent to learning a representation with the minimum normalized cut. \end{theorem} \begin{proof} The objective of minimizing the normalized cuts between clusters is formalized as \begin{equation*} \begin{aligned} & \text{minimize} & & \frac{1}{n_c}\sum\limits_{j=1}^{n_c} normCut(O_j, O \setminus O_j), \end{aligned} \end{equation*} where $O_j$ is the set of objects in the $j$-th cluster and $O \setminus O_j$ is the set of objects in all clusters other than the $j$-th. This objective function can be converted to a trace maximization problem according to \cite{stella2003multiclass}: \begin{equation*} \begin{aligned} & \text{maximize} & & \frac{1}{n_c}\text{Tr}(\mathbf{V}^{\top}\mathbf{A}\mathbf{V}), \end{aligned} \end{equation*} where $\mathbf{V} = \mathbf{Z}(\mathbf{Z}^{\top}\mathbf{D}\mathbf{Z})^{-\frac{1}{2}}$, $\mathbf{Z}\in\{0,1\}^{n_o \times n_c}$ is the cluster indicator matrix, and $\mathbf{D}$ is the diagonal matrix whose $(i,i)$-th entry is the sum of the $i$-th row of $\mathbf{A}$. Further relaxing $\mathbf{Z}$ to a real-valued matrix and denoting $\mathbf{H} = \mathbf{D}^{\frac{1}{2}}\mathbf{V}$, the objective function becomes \begin{equation*} \begin{aligned} & \text{maximize} & & \frac{1}{n_c}\text{Tr}(\mathbf{H}^{\top}\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\mathbf{H})\\ & \text{subject to} & & \mathbf{H} \in \mathcal{R}^{n_o \times n_c}\\ & & &\mathbf{H}^{\top}\mathbf{H} = \mathbf{I}_{n_c}. \end{aligned} \end{equation*} Letting $\mathbf{S} = \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$ and adding the low-rank regularization term $\text{Tr}(\mathbf{S})$, this objective function is equivalent to the UNTIE objective function Eq. \eqref{eq:obj}. Therefore, the objective of UNTIE in Eq. \eqref{eq:obj} is equivalent to learning a representation with the minimum normalized cut. \end{proof}

\subsection{Convergence of the UNTIE Algorithm}\label{sec:convergency}
\begin{theorem}\label{thm:convergency} The UNTIE algorithm converges to a local minimal solution in a finite number of iterations. \end{theorem} \begin{proof} Let $y$ be the number of all possible partitions of a categorical data set $C$; each partition is represented by an indicator matrix $\mathbf{H}$. Two different partitions have different indicator matrices; otherwise, they are identical. We note that $y$ is finite given $C$ and the number of clusters $n_c$; therefore, there are finitely many possible $\mathbf{H}$ on $C$. While applying UNTIE to cluster $C$, we obtain a sequence of $\mathbf{H}$, i.e., $\mathbf{H}_1, \mathbf{H}_2, \cdots, \mathbf{H}_{n_i}$, and a sequence of $\bm{\omega}$, i.e., $\bm{\omega}_1, \bm{\omega}_2, \cdots, \bm{\omega}_{n_i}$, along the iterations. Given a matrix $\mathbf{H}$ and a heterogeneity parameter $\bm{\omega}$, denote the loss of the UNTIE objective function Eq. \eqref{eq:obj} by $l_{\mathbf{H},\bm{\omega}}$. Since kernel k-means and the linear programming for Eq.
\eqref{eq:omega} converge to minimal solutions, $l_{\mathbf{H},\bm{\omega}}$ is monotonically non-increasing, i.e., $l_{\mathbf{H}_1,\bm{\omega}_1} \geq l_{\mathbf{H}_2,\bm{\omega}_2} \geq \cdots \geq l_{\mathbf{H}_{n_i},\bm{\omega}_{n_i}}$. Assume that the number of iterations $n_i$ exceeds $y+1$. Then there are at least two identical indicator matrices in the sequence, i.e., $\mathbf{H}_i = \mathbf{H}_j$ for some $1 \leq i \neq j \leq n_i$. For $\mathbf{H}_i$ and $\mathbf{H}_j$, we have the optimized heterogeneity parameters $\bm{\omega}_i$ and $\bm{\omega}_j$, respectively, and clearly $\bm{\omega}_i = \bm{\omega}_j$ since $\mathbf{H}_i = \mathbf{H}_j$. Therefore, $l_{\mathbf{H}_i,\bm{\omega}_i} = l_{\mathbf{H}_j, \bm{\omega}_i} = l_{\mathbf{H}_j, \bm{\omega}_j}$, i.e., the value of the objective function does not change. When the value of the objective function does not change, UNTIE stops, so $n_i$ is at most $y+1$. Hence, UNTIE converges to a local minimal solution in a finite number of iterations. \end{proof}

\subsection{Computational Efficiency}\label{sec:complexity}
The time complexity of UNTIE is determined by two parts: building the coupling spaces and learning the heterogeneities. In building the coupling spaces, the time cost depends on which couplings UNTIE captures. In this paper, UNTIE captures the intra- and inter-attribute couplings. The intra-attribute couplings measure the frequency of each value, which costs $O(n_v)$. The inter-attribute couplings compute the relationship between each value pair in each attribute pair, which costs $O(n_{mv}^2n_{a}^2)$, where $n_{mv}$ is the maximal number of values per attribute. Consequently, the entire time complexity of building the coupling spaces is $O(n_v + n_{mv}^2n_{a}^2)$. The time complexity of heterogeneity learning is determined by the cost of computing the eigenvectors and $\bm{\omega}$ and by the number of iterations. If all data are involved in each iteration, it requires $O(n_o^3)$ to compute the eigenvectors and $O((n_o^2)^{3.5}n_{\bm{\omega}}^2)$ to solve the linear optimization for $\bm{\omega}$ \cite{zhu2018heterogeneous}. Denoting the number of iterations by $n_i$, the time complexity of heterogeneity learning is $O(n_in_o^3 + n_i(n_o^2)^{3.5}n_{\bm{\omega}}^2)$, where $n_{\bm{\omega}}$ is the number of elements in $\bm{\omega}$. With stochastic optimization, denoting the number of object pairs per batch by $n_b$, the time complexity is only $O(n_b^3n_i + n_bn_{\bm{\omega}}n_i)$. Since the batch size $n_b \ll n_o$ for large data sets, stochastic optimization is much more efficient if it converges within a small number of iterations. We thus recommend stochastic optimization for solving the UNTIE objective. Accordingly, the time complexity of UNTIE is $O(n_v + n_{mv}^2n_{a}^2+n_b^3n_i + n_bn_{\bm{\omega}}n_i)$. The space complexity of UNTIE with full-data optimization is $O(n_o^2n_{\bm{\omega}})$, which is very high for large categorical data since it grows quadratically with $n_o$. Conducting stochastic optimization to obtain an approximate solution largely reduces the space complexity to $O(n_bn_{\bm{\omega}})$, since $n_b \ll n_o^2$. Therefore, we adopt stochastic optimization in UNTIE to tackle large data. Overall, UNTIE has the time complexity $O(n_v + n_{mv}^2n_{a}^2+n_b^3n_i + n_bn_{\bm{\omega}}n_i)$ and space complexity $O(n_bn_{\bm{\omega}})$. This means UNTIE is scalable to large data.
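For a concrete sense of these orders of magnitude (an illustration only, plugging in the default settings reported later in Section \ref{sec:parameters}, i.e., $n_b = 20$ and $n_i \leq 1{,}000$): on the largest data set Connect4 with $n_o = 67{,}557$, a single full-data eigendecomposition already costs on the order of $n_o^3 \approx 3.1 \times 10^{14}$ operations, whereas the dominant stochastic term is only $n_b^3 n_i = 8 \times 10^6$, smaller by more than seven orders of magnitude.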
In addition, UNTIE can be further sped up by distributed and parallel computing in both building the coupling spaces and learning the heterogeneities, which will be explored in our future work.

\section{Experiments}\label{sec:experiment}
\subsection{Experimental Settings}\label{sec:parameters}
\subsubsection{Data Sets} 25 data sets\footnote{More details of these data sets can be found at http://archive.ics.uci.edu/ml and https://www.sgi.com/tech/mlc/db. Although we could not find much larger public categorical data sets, these small sets have rich data characteristics which challenge categorical representation.} are used to evaluate UNTIE. The data includes medical data: Hepatitis, Audiology, Spect, Mammographic, Wisconsin, Breastcancer, Primarytumor, and Dermatology; gene data: DNAPromoter, DNAnominal, and Splice; social and census data: Housevotes and Hayesroth; hierarchical decision-making data: Krvskp, Tictactoe, and Connect4; nature data: Zoo, Soybeanlarge, Flare, and Mushroom; business data: Crx; disaster prediction data: Titanic; and synthetic data with heterogeneous couplings: Mofn3710, Led24, and ThreeOf9. As shown in Table \ref{tab:data}, they show strong diversity in terms of data factors: the number of objects ($n_o$), the number of attributes ($n_a$), the number of classes ($n_c$), the average number of nominal values per attribute ($n_{av}$), and the maximum number of nominal values over all attributes ($n_{mv}$). Specifically, the number of objects ranges from 101 to 67,557, and the number of attributes ranges from 4 to 69. The data sets contain both binary and multiple classes, with a maximum of 24 classes. The average and maximal numbers of distinct attribute values range from 2 to 10 and from 2 to 15, respectively. \begin{table}[!htbp] \centering \footnotesize \caption{Characteristics of 25 Categorical Data Sets} \begin{tabular}{l|lllll} \toprule Dataset & $n_o$ & $n_a$ & $n_c$ & $n_{av}$ & $n_{mv}$ \\ \midrule Zoo & 101 & 16 & 7 & 2.25 & 6 \\ DNAPromoter & 106 & 57 & 2 & 4 & 4 \\ Hayesroth & 132 & 4 & 3 & 3.75 & 4 \\ Hepatitis & 155 & 13 & 2 & 2.77 & 3 \\ Audiology & 200 & 69 & 24 & 2.23 & 7 \\ Housevotes & 232 & 16 & 2 & 2 & 2\\ Spect & 267 & 22 & 2 & 2 & 2 \\ Mofn3710 & 300 & 10 & 2 & 2 & 2 \\ Soybeanlarge & 307 & 35 & 19 & 3.77 & 8 \\ Primarytumor & 339 & 17 & 21 & 2.47 & 4 \\ Dermatology & 366 & 33 & 6 & 3.91 & 4 \\ ThreeOf9 & 512 & 9 & 2 & 2 & 2 \\ Wisconsin & 683 & 9 & 2 & 9.89 & 10 \\ Crx & 690 & 9 & 2 & 5 & 15 \\ Breastcancer & 699 & 9 & 2 & 10.00 & 11 \\ Mammographic & 830 & 4 & 2 & 5 & 7 \\ Tictactoe & 958 & 9 & 2 & 3 & 3\\ Flare & 1,066 & 11 & 6 & 3.73 & 8\\ Titanic & 2,201 & 3 & 4 & 2 & 2\\ DNAnominal & 3,186 & 60 & 3 & 4 & 4\\ Splice & 3,190 & 60 & 3 & 4.78 & 6\\ Krvskp & 3,196 & 36 & 2 & 2.03 & 3\\ Led24 & 3,200 & 24 & 10 & 2 & 2\\ Mushroom & 5,644 & 22 & 2 & 4.45 & 9\\ Connect4 & 67,557 & 42 & 3 & 3 & 3\\ \bottomrule \end{tabular}% \label{tab:data} \end{table}% \subsubsection{Representation and Downstream Learning Baselines} We evaluate the UNTIE representation (UNTIE for short) against (1) the categorical distance measure Hamming distance (Hamming); (2) five state-of-the-art categorical data representation methods: CDE \cite{jianembedding}, COS \cite{wang2015coupled}, DILCA \cite{ienco2012context}, Ahmad \cite{ahmad2007method}, and Rough \cite{cao2012dissimilarity}; and (3) two unsupervised deep representation methods, BiGAN \cite{donahue2016adversarial} and VAE \cite{kingma2013auto}, based on the wide-and-deep network \cite{cheng2016wide} (denoted as BiGAN\_WD and VAE\_WD,
respectively). The learning tasks of \textit{clustering} (with two simple but popular clustering methods, k-means and k-modes), \textit{classification} (with K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), and Logistic Regression (LR)), and \textit{object retrieval} are undertaken on the UNTIE representations to test their downstream task performance. \subsubsection{Evaluation Perspectives} The following aspects of UNTIE performance are evaluated. \begin{itemize} \item \textbf{Representation quality}: to reveal why UNTIE produces better representation results; \item \textbf{Downstream effectiveness}: to evaluate whether the UNTIE representation effectively improves various downstream learning tasks; \item \textbf{Efficiency}: to reflect the cost sensitivity of UNTIE in representing data w.r.t. different data characteristics; \item \textbf{Flexibility}: to verify whether the UNTIE representation fits and upgrades different learning methods; \item \textbf{Stability}: to test UNTIE's parameter sensitivity. \end{itemize} To avoid the impact of class imbalance, we evaluate the UNTIE-enabled clustering and classification performance by F-score ($\%$); a higher F-score indicates better learning performance. \subsubsection{Implementation Settings} The default settings for implementing UNTIE are as follows. The kernels used in UNTIE are 11 Gaussian kernels with widths from $2^{-5}$ to $2^5$ and three Polynomial kernels with orders from $1$ to $3$. We use the stochastic optimization method Adam \cite{kingma2014adam} to solve the UNTIE objective function, with an initial learning rate of $10^{-3}$, a batch size of $20$, and a maximum of $1{,}000$ iterations. For the parameters of the baseline methods, we take their recommended settings. UNTIE is implemented in Python 3.5 and TensorFlow r1.2; all experiments are conducted on a Windows 10 workstation with an Intel i5-5300U CPU @ 2.30GHz and 8GB memory. \subsection{Evaluating the UNTIE Representation Quality} \subsubsection{Reducing the Inconsistency of Heterogeneous Couplings} Here, we evaluate how effectively UNTIE reduces the inconsistency when learning heterogeneous couplings. First, we quantitatively measure the inconsistency by the intra- and inter-coupling heterogeneity indicators. Then, we analyze the relations between the UNTIE-enabled clustering results and the inconsistency indicated by these indicators. The intra-coupling heterogeneity indicator ($I_{intra}$) measures the degree of heterogeneity in value distributions. It assigns a higher heterogeneity degree to couplings whose values have more significantly diverse distributions. Intuitively, if a value has multiple distributions, its representations under different distributions may be inconsistent with each other. The larger the difference between its distributions (i.e., the higher the heterogeneity degree), the stronger the inconsistency may be. In our experiment, $I_{intra}$ compares the difference between the distributions of a value across the ground-truth clusters, which is formalized as follows: \begin{equation} I_{intra} =\frac{ \sum\limits_{j=1}^{n_a}\sum\limits_{i=1}^{n_v^{(j)}}\text{NM}_{n_c}\left(\sqrt{\sum\limits_{k=1}^{n_{c}}\left(\frac{|g^{(j)}(\mathsf{v}_i^{(j)})\cap g^{(c)}(\mathsf{c}_k)|}{|g^{(j)}(\mathsf{v}_i^{(j)})|}\right)^2}\right)}{n_a}, \end{equation} where $\mathsf{c}_k$ refers to the $k$-th cluster, $n_{c}$ is the number of ground-truth clusters, the function $g(\cdot)$ has the same definition as in Eq.
\eqref{eq:intra}, and the function $\text{NM}_{n_c}$ normalizes a value to $[0,1]$ w.r.t. $n_c$ clusters, which is defined as \begin{equation} \text{NM}_{n_c}(x) = 1 - \frac{1 - x}{1 - \sqrt{\sum\limits_{k=1}^{n_{c}}\left(\frac{1}{n_{c}}\right)^2}}. \end{equation} $I_{intra}$ reflects the inconsistency within couplings caused by heterogeneous data distributions. $I_{intra}$ is large if each value has diverse distributions across the clusters, and small otherwise; a larger $I_{intra}$ indicates a stronger inconsistency within a coupling. In the extreme cases, $I_{intra}$ is $1$ when each value appears in only one ground-truth cluster, and $I_{intra}$ is $0$ when every value has the same distribution in every ground-truth cluster. To show the distribution of the intra-coupling inconsistency, we illustrate the probability density of $I_{intra}$ on the 25 testing data sets in Fig. \ref{fig:intra_heterogeneity_distribution} per kernel density estimation (KDE)\footnote{Here we treat the intra-coupling inconsistency of the 25 testing data sets as a random variable and calculate its KDE.}. The KDE smoothly estimates the probability density of $I_{intra}$ within its range; a larger density indicates a higher probability of that inconsistency degree. As shown in Fig. \ref{fig:intra_heterogeneity_distribution}, most of the data sets have strong inconsistency within couplings, which indicates the necessity of eliminating inconsistency when learning heterogeneous couplings. \begin{figure}[!hbtp] \centering \includegraphics[width=0.75\columnwidth]{intra_heterogeneity_distribution.eps} \caption{The Probability Density of the Intra-coupling Heterogeneity Indicator per Kernel Density Estimation.} \label{fig:intra_heterogeneity_distribution} \end{figure} The inter-coupling heterogeneity indicator ($I_{inter}$) represents the degree of difference between couplings. Intuitively, an increase of the difference between couplings (i.e., a higher heterogeneity degree) may increase the inconsistency between categorical data representations. In our experiment, $I_{inter}$ calculates the distance between coupling matrices to reflect the inter-coupling inconsistency. Here, a coupling matrix is a similarity matrix of the values in an attribute, as defined by a coupling learning method. For UNTIE, a coupling matrix is calculated from the Euclidean distances w.r.t. a coupling vector representation of the values (e.g., the intra-attribute coupling value representation of Eq. \eqref{eq:intra} or the inter-attribute coupling value representation of Eq. \eqref{eq:inter}). For CDE, a coupling matrix is calculated from the Euclidean distances w.r.t. a value clustering representation. For COS and DILCA, the coupling matrices are their value similarity matrices before weighted integration. Denoting the $k$-th coupling matrix of the $z$-th attribute by $\mathbf{C}^{(z,k)}$, the inter-coupling heterogeneity indicator is defined as \begin{equation} I_{inter} = \sqrt{\frac{1}{n_m^{(z)2}}\sum\limits_{k=1}^{n_m^{(z)}}\sum\limits_{l=1}^{n_m^{(z)}}\frac{1}{n_v^{(z)2}}\sum\limits_{i=1}^{n_v^{(z)}}\sum\limits_{j=1}^{n_v^{(z)}}\left(\mathbf{C}^{(z,k)}_{ij} - \mathbf{C}^{(z,l)}_{ij}\right)^2}, \end{equation} where $n_m^{(z)}$ is the number of couplings for the $z$-th attribute, and $\mathbf{C}_{ij}^{(z,k)}$ is the $(i,j)$-th entry of $\mathbf{C}^{(z,k)}$. $I_{inter}$ differs across coupling learning methods. A larger $I_{inter}$ indicates a stronger inconsistency between couplings: $I_{inter}$ is large if each coupling matrix differs greatly from the others, and small otherwise; in the extreme case, $I_{inter}$ is $0$ when all coupling matrices are identical.
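To make the indicator concrete, the following is a minimal Python sketch of $I_{inter}$ for a single attribute, assuming its coupling matrices are stacked in a three-dimensional array; the input numbers are illustrative and not taken from the testing data sets.
\begin{verbatim}
# Minimal sketch of the inter-coupling heterogeneity indicator I_inter
# for one attribute; illustrative only.
import numpy as np

def i_inter(C):
    # C: array of shape (n_m, n_v, n_v), stacking the n_m coupling
    # matrices over the n_v values of one attribute.
    n_m, n_v, _ = C.shape
    diff = C[:, None, :, :] - C[None, :, :, :]   # all pairwise differences
    return np.sqrt((diff ** 2).sum() / (n_m ** 2 * n_v ** 2))

# Two illustrative 2 x 2 coupling matrices for a binary attribute.
C = np.array([[[1.0, 0.2], [0.2, 1.0]],
              [[1.0, 0.8], [0.8, 1.0]]])
print(i_inter(C))   # 0 would mean all coupling matrices coincide
\end{verbatim}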
The probability density of $I_{inter}$ on the 25 testing data sets is shown in Fig. \ref{fig:inter_heterogeneity_distribution} per KDE, which shows that UNTIE and CDE involve a higher degree of inter-coupling heterogeneity than DILCA and COS. On the one hand, a higher degree of inter-coupling heterogeneity provides richer information for representation; on the other hand, it may contain inconsistent representations with higher probability. \begin{figure}[!hbtp] \centering \includegraphics[width=0.75\columnwidth]{inter_heterogeneity_distribution.eps} \caption{The Probability Density of the Inter-coupling Heterogeneity Indicator per Kernel Density Estimation.} \label{fig:inter_heterogeneity_distribution} \end{figure} To evaluate whether UNTIE can effectively resolve the coupling inconsistency problem, we analyze the UNTIE-enabled clustering performance at different inconsistency levels indicated by the intra-/inter-coupling heterogeneity indicators. Considering the imbalanced distributions of these indicators shown in Fig. \ref{fig:intra_heterogeneity_distribution} and Fig. \ref{fig:inter_heterogeneity_distribution}, we set 5 inconsistency levels, each containing the same number of data sets, according to the intra-/inter-coupling heterogeneity indicators. For example, the first level contains the $20\%$ of data sets with the smallest values of $I_{intra}$/$I_{inter}$, while the fifth level contains the $20\%$ of data sets with the largest values of $I_{intra}$/$I_{inter}$. We calculate the performance of a heterogeneous coupling learning method at an inconsistency level by averaging its enabled clustering rank in Table \ref{tab:clustering} over the data sets at that inconsistency level. We illustrate the relations between the UNTIE-enabled clustering performance and the inconsistency level in Fig. \ref{fig:intra_heterogeneity} and Fig. \ref{fig:inter_heterogeneity} per $I_{intra}$ and $I_{inter}$, respectively. As shown in Fig. \ref{fig:intra_heterogeneity}, UNTIE significantly outperforms its competitors on the data sets at inconsistency levels 2 to 5 in terms of $I_{intra}$, which have strong intra-coupling heterogeneity as shown in Fig. \ref{fig:intra_heterogeneity_distribution}. While other methods ignore the intra-coupling heterogeneity, UNTIE captures it by learning value weights in kernel spaces w.r.t. Eq. \eqref{eq:trans}. As a result, UNTIE reduces the inconsistency and achieves better performance. In contrast, on the data sets at inconsistency level 1, UNTIE does not show superiority. This is because these data sets do not have much coupling inconsistency, which further demonstrates that UNTIE gains better performance by reducing the inconsistency between heterogeneous couplings. \begin{figure}[!hbtp] \centering \includegraphics[width=0.75\columnwidth]{intra_heterogeneity_conflict_indicator.eps} \caption{The UNTIE-enabled Clustering Performance on Data Sets with Different Inconsistency Levels per the Intra-coupling Heterogeneity Indicator.} \label{fig:intra_heterogeneity} \end{figure} In Fig. \ref{fig:inter_heterogeneity}, the UNTIE-enabled clustering performs much better than that enabled by the existing heterogeneous coupling learning methods, especially on the data at higher inconsistency levels. Since the couplings represented by UNTIE may also incur inconsistency as shown in Fig.
\ref{fig:inter_heterogeneity_distribution}, UNTIE reduces this inconsistency well to guarantee a good representation. This reduction is mainly contributed by the multiple kernels used in UNTIE, which enable UNTIE to capture the fitness of couplings to different distributions. In Fig. \ref{fig:inter_heterogeneity}, UNTIE shows performance similar to simply combining multiple couplings on the data sets at inconsistency levels 2 and 3. This phenomenon indicates that the heterogeneous couplings learned by our method capture much richer data information than other methods and enable better performance when the inconsistency between these couplings is not substantial. However, as the inconsistency increases, simply combining these couplings may worsen the results, as shown on the data sets at inconsistency levels 4 and 5 in Fig. \ref{fig:inter_heterogeneity}. \begin{figure}[!hbtp] \centering \includegraphics[width=0.75\columnwidth]{inter_each_heterogeneity_indicator.eps} \caption{The UNTIE-enabled Clustering Performance on Data Sets with Different Inconsistency Levels per the Inter-coupling Heterogeneity Indicator.} \label{fig:inter_heterogeneity} \end{figure} \subsubsection{Goodness of the UNTIE-enabled Metric} \begin{figure*}[htbp] \centering \subfigure[$(\epsilon,\gamma)$-curve on Mofn3710.] { \label{fig:e} \includegraphics[width=0.45\columnwidth]{mofn3710.eps} } \subfigure[$(\epsilon,\gamma)$-curve on Dermatology.] { \label{fig:f} \includegraphics[width=0.45\columnwidth]{dermatology_goodness.eps} } \subfigure[$(\epsilon,\gamma)$-curve on Crx.] { \label{fig:b} \includegraphics[width=0.45\columnwidth]{crx.eps} } \subfigure[$(\epsilon,\gamma)$-curve on Breastcancer.] { \label{fig:a} \includegraphics[width=0.45\columnwidth]{breastcancer.eps} } \caption{The $(\epsilon,\gamma)$-curves of Different Transformed Similarity Measures: A better metric yields a better result.} \label{fig:goodnessCurve} \end{figure*} Per Definition 4 in \cite{balcan2008theory}, a similarity function $Q$ (e.g., the similarity function $s(\cdot,\cdot)$ in Eq. \eqref{eq:wrapper_kernel}) is strongly $(\epsilon,\gamma)$-good for a learning problem $P$ if at least a $1 - \epsilon$ probability mass of objects $\mathsf{o}$ satisfies: \begin{equation*} \begin{aligned} &E_{\mathsf{o},\mathsf{o}'\sim \Phi}[Q(\mathsf{o},\mathsf{o}')|c(\mathsf{o}) = c(\mathsf{o}')] \\ &\geqslant E_{\mathsf{o},\mathsf{o}'\sim \Phi}[Q(\mathsf{o},\mathsf{o}')|c(\mathsf{o}) \neq c(\mathsf{o}')] + \gamma, \end{aligned} \end{equation*} where $E$ refers to the expected value over the distribution $\Phi$ of the learning problem $P$, and $c(\mathsf{o})$ refers to the class of $\mathsf{o}$. In this criterion, $\epsilon$ indicates the proportion of objects whose average intra-class similarity is not $\gamma$ larger than their average inter-class similarity. For the same $\gamma$, a smaller $\epsilon$ reflects a better similarity function for the learning problem $P$. Here, $\gamma$ indicates to what extent the intra-class similarity exceeds the inter-class similarity on the best-separated $1-\epsilon$ proportion of the data. For a fixed $\epsilon$, a larger $\gamma$ reflects a better similarity function. The intuition of this criterion is that a good similarity measure should effectively distinguish data in the same class from data in other classes. More importantly, an $(\epsilon,\gamma)$-good similarity can induce a classifier with a bounded error (see more details in Theorem 1 of \cite{balcan2008theory}).
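To make the criterion operational, the following is a minimal Python sketch of estimating an $(\epsilon,\gamma)$-curve from a precomputed similarity matrix and ground-truth labels. The per-object margin estimator is one straightforward reading of Definition 4 in \cite{balcan2008theory}; all names are illustrative, and this is not the exact code behind Fig. \ref{fig:goodnessCurve}.
\begin{verbatim}
# Minimal sketch of estimating an (epsilon, gamma)-curve; illustrative only.
import numpy as np

def eps_gamma_curve(S, labels):
    # S: n_o x n_o similarity matrix; labels: class label per object.
    # margin_i = mean same-class similarity - mean other-class similarity;
    # assumes each class has at least two objects and >= 2 classes exist.
    labels = np.asarray(labels)
    n = len(labels)
    margins = np.empty(n)
    for i in range(n):
        same = labels == labels[i]
        same[i] = False                  # exclude self-similarity
        margins[i] = S[i, same].mean() - S[i, ~same].mean()
    margins.sort()                       # ascending
    eps = np.arange(n) / n               # fraction of objects excluded
    # For each eps[k], a 1 - eps[k] mass of objects attains a margin
    # of at least margins[k].
    return eps, margins

# e.g. eps, gam = eps_gamma_curve(S, y); plot only the part with gam >= 0.
\end{verbatim}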
In this experiment, we compare UNTIE's similarity function $s(\cdot,\cdot)$ (defined in Eq. \eqref{eq:wrapper_kernel}) with the similarity functions of the other similarity-based representation methods per the $(\epsilon,\gamma)$-good criterion. For the CDE-learned vector representation, we invert the Euclidean distance to measure the similarity between objects. Since different $\epsilon$ values may correspond to different $\gamma$ values, we draw the $(\epsilon,\gamma)$-curves to demonstrate the quality of the learned metric and of the compared methods. For the same $\epsilon$, a better metric incurs a greater $\gamma$; in other words, a better metric yields a higher $(\epsilon,\gamma)$-curve. In this experiment, we draw the $(\epsilon,\gamma)$-curves on four data sets, i.e., Mofn3710, Dermatology, Crx and Breastcancer. The results are shown in Fig. \ref{fig:goodnessCurve}. It should be noted that we only focus on $\epsilon$ values that guarantee a non-negative margin, i.e., $\gamma \geqslant 0$; therefore, Fig. \ref{fig:goodnessCurve} only displays the part of each $(\epsilon,\gamma)$-curve in which $\gamma \geqslant 0$. The results illustrate that UNTIE is better than its competitors in terms of the $(\epsilon,\gamma)$-good criterion. The results also reveal the insight behind the clustering performance in Table \ref{tab:clustering}. For the data sets Dermatology and Crx, the UNTIE-enabled clustering has a much higher F-score than the others since UNTIE yields larger margins between different classes, which is reflected by the $(\epsilon,\gamma)$-curves in Figs. \ref{fig:f} and \ref{fig:b}. For Mofn3710, all methods obtain a low F-score, and the UNTIE-enabled clustering achieves the same result as CDE, which is only slightly better than the other competitors. The reason is shown in Fig. \ref{fig:e}, where nearly $20\%$ of the Mofn3710 data cannot be well separated by UNTIE ($\gamma$ is 0 when $\epsilon$ is smaller than 0.2), while nearly $30\%$ of the data cannot be well separated by its competitors. For other data sets, e.g., Breastcancer, all methods achieve good results since the $(\epsilon, \gamma)$-good criterion indicates that these methods can separate the data sets well. \begin{figure*}[!hbtp] \centering \subfigure[UNTIE-represented Distributions.] { \includegraphics[width=0.55\columnwidth]{dermatology.eps} } \subfigure[CDE-represented Distributions.] { \includegraphics[width=0.55\columnwidth]{dermatology_CDE.eps} } \subfigure[BiGAN\_WD-represented Distributions.] { \includegraphics[width=0.55\columnwidth]{dermatology_bigan.eps} } \caption{The Visualization of Different Representation Methods on Dermatology. The UNTIE-represented data shows clearer boundaries between different clusters. The plotted two-dimensional embedding is converted from the high-dimensional representation by t-SNE. Different symbols refer to different data clusters per the ground truth.} \label{fig:visualize} \end{figure*} \subsubsection{Visualization of UNTIE Representations} We visualize the separability of different representations. The UNTIE-, CDE- and BiGAN\_WD-represented data are converted from the high-dimensional representations to two-dimensional embeddings by t-Distributed Stochastic Neighbor Embedding (t-SNE) \cite{maaten2008visualizing}. The basic idea of t-SNE is to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional representation. Fig.
\ref{fig:visualize} shows the visualization of these representation methods on Dermatology. The UNTIE-represented data has a more compact distribution and leads to clearer boundaries between clusters than the CDE- and BiGAN\_WD-represented data. This qualitatively demonstrates that the UNTIE representation is more separable and better suited to downstream tasks such as clustering and classification. This is because UNTIE learns the representation by optimizing the objective function Eq. \eqref{eq:obj}, i.e., by minimizing the distance between objects within a cluster and maximizing the distance between objects in different clusters. \begin{figure}[!hbtp] \centering \subfigure[Training Loss on Hepatitis.] { \includegraphics[width=0.28\columnwidth]{hepatitis_convergence.eps} } \subfigure[Training Loss on Housevotes.] { \includegraphics[width=0.28\columnwidth]{housevotes_convergence.eps} } \subfigure[Training Loss on Mofn3710.] { \includegraphics[width=0.28\columnwidth]{mofn3710_convergence.eps} } \subfigure[Training Loss on Dermatology.] { \includegraphics[width=0.28\columnwidth]{dermatology_convergence.eps} } \subfigure[Training Loss on Crx.] { \includegraphics[width=0.28\columnwidth]{crx_convergence.eps} } \subfigure[Training Loss on Breastcancer.] { \includegraphics[width=0.28\columnwidth]{breastcancer_convergence.eps} } \caption{UNTIE's Training Loss on Different Data Sets. The stochastic optimization method for UNTIE is Adam \cite{kingma2014adam} with an initial learning rate of $10^{-3}$ and a batch size of $20$. The X-axis refers to the number of iterations, and the Y-axis refers to the loss value of UNTIE's objective function Eq. \eqref{eq:omega}.} \label{fig:convergence} \end{figure} \begin{figure*}[hbtp] \centering \subfigure[Time Cost w.r.t. Number of Objects.] { \label{fig:object_time} \includegraphics[width=0.45\columnwidth]{object_time.eps} } \subfigure[Time Cost w.r.t. Number of Attributes.] { \label{fig:attribute_time} \includegraphics[width=0.45\columnwidth]{attribute_time.eps} } \subfigure[Time Cost w.r.t. Number of Attribute Values.] { \label{fig:value_time} \includegraphics[width=0.45\columnwidth]{value_time.eps} } \subfigure[Time Cost w.r.t. Number of Kernels.] {\label{fig:kernelTimeCost} \includegraphics[width=0.45\columnwidth]{kernel_time.eps} } \caption{The UNTIE Computational Cost w.r.t. the Data Factors (Object Number $n_o$, Attribute Number $n_a$, and Maximum Number of Attribute Values $n_{mv}$) and the Number of Kernels. The solid line refers to the total time cost of UNTIE. The dotted line refers to the time cost of building the coupling spaces. The star line refers to the time cost of the heterogeneity learning.} \label{fig:timeCost} \end{figure*} \subsection{Evaluating the UNTIE Effectiveness} \subsubsection{UNTIE-enabled Clustering} The vector-based representations learned by UNTIE, CDE, BiGAN\_WD and VAE\_WD are incorporated into k-means, which is probably the most popular clustering method and is sensitive to the distance measure. The representations learned by the similarity-based representation methods, including COS, DILCA, Ahmad, Rough and Hamming, are fed into k-modes, which is the most commonly used clustering method for categorical data. To evaluate the performance of heterogeneity learning, we compare UNTIE with a variant, denoted as Couplings, which concatenates the representations in the coupling spaces without heterogeneity learning and is incorporated into k-means. The comparison results are shown in Table \ref{tab:clustering}.
The best results are highlighted in bold, and $\Delta$ is the ratio of UNTIE's improvement over the best result of the other measures. On half of the data sets, UNTIE performs significantly better than the compared methods. For example, the F-score improves by 51.72\% on DNAnominal and by 30.75\% on Dermatology compared to the best-performing methods DILCA and COS, respectively. On the other half of the data sets, UNTIE achieves the same or comparable results to the other methods. For example, the F-scores of UNTIE and CDE are both 56.65\% on Mofn3710 and 54.80\% on Tictactoe. UNTIE effectively captures the intrinsic categorical data characteristics by revealing the value and attribute couplings and the heterogeneity within and between these couplings to induce the representation. These embedded characteristics guarantee the effectiveness of the UNTIE representations, ensuring that the UNTIE-enabled clustering can generally achieve better results than the others. It is noted that BiGAN\_WD and VAE\_WD perform badly on most data sets except Mofn3710, where BiGAN\_WD achieves the highest F-score. \begin{table*}[htbp] \centering \footnotesize \caption{Clustering F-score ($\%$) with Different Embedding Methods: The vector-based representations are fed into k-means and the similarity-based representations are fed into k-modes to obtain the clustering results. The best results are highlighted in bold. $\Delta$ indicates UNTIE's improvement over the best result of the other measures. The averaged rank of a method over all data sets with a significant difference from the others w.r.t. the Bonferroni-Dunn test ($p$-value $<$ 0.1) is labelled by $^{*}$. } \begin{tabular}{l|ll|llllllll|l} \toprule Dataset & UNTIE & Couplings & CDE & COS & Ahmad & DILCA & Rough & Hamming & BiGAN\_WD & VAE\_WD & $\Delta$ \\ \midrule Zoo & \textbf{76.12} & 74.85 & 75.04 & 72.10 & 71.34 & 71.34 & 62.79 & 73.27 & 56.93 & 24.41 & 1.44$\%$ \\ DNAPromoter & \textbf{95.28} & 92.45 & 61.61 & 49.24 & 49.92 & 85.85 & 63.20 & 52.68 & 51.99 & 50.87 & 10.98$\%$ \\ Hayesroth & \textbf{54.17} & \textbf{54.17} & 52.85 & 38.98 & 33.76 & 32.87 & 38.92 & 33.06 & 44.91 & 37.14 & 2.50$\%$ \\ Hepatitis & 70.40 & \textbf{73.64} & 69.82 & 46.29 & 66.72 & 65.13 & 59.21 & 59.21 & 61.08 & 51.24 & 0.00$\%$ \\ Audiology & 34.99 & 34.48 & 32.18 & 27.71 & \textbf{35.38} & 31.77 & 22.36 & 29.05 & 20.00 & 19.97 & 0.00$\%$ \\ Housevotes & \textbf{90.51} & 88.36 & 89.65 & 88.36 & 88.36 & 88.79 & 87.04 & 86.64 & 83.64 & 53.84 & 0.96$\%$ \\ Spect & 55.04 & 55.04 & 52.55 & 36.26 & 34.93 & 34.76 & \textbf{57.63} & 35.94 & 34.71 & 48.38 & 0.00$\%$ \\ Mofn3710 & 56.65 & 44.69 & 56.65 & 50.18 & 50.22 & 48.68 & 50.62 & 50.98 & \textbf{60.34} & 49.00 & 0.00$\%$ \\ Soybeanlarge & \textbf{69.29} & 64.88 & 62.19 & 60.10 & 56.84 & 59.42 & 46.41 & 55.31 & 48.38 & 14.83 & 11.42$\%$ \\ Primarytumor & 24.62 & 24.87 & 23.43 & 19.81 & 23.65 & 21.76 & 22.38 & \textbf{26.19} & 22.17 & 14.68 & 0.00$\%$ \\ Dermatology & \textbf{97.51} & 72.78 & 73.10 & 74.58 & 72.87 & 72.61 & 57.99 & 66.60 & 38.54 & 23.82 & 30.75$\%$ \\ ThreeOf9 & 34.86 & 34.86 & 54.63 & 35.32 & 35.32 & 35.32 & \textbf{65.19} & 54.22 & 50.03 & 54.64 & 0.00$\%$ \\ Wisconsin & 93.91 & 95.58 & \textbf{96.20} & 94.28 & 95.12 & 95.49 & 94.44 & 89.98 & 74.26 & 81.45 & 0.00$\%$ \\ Crx & \textbf{85.49} & 52.65 & 52.65 & 36.99 & 52.65 & 79.29 & 63.47 & 79.29 & 51.81 & 51.69 & 7.82$\%$ \\ Breastcancer & 93.27 & 94.75 & 95.20 & 93.56 & 94.89 & \textbf{95.25} & 94.37 & 93.27 & 65.94 & 79.15 & 0.00$\%$ \\ Mammographic & 82.77 & \textbf{82.89} & 81.66 & 80.06 & 81.66 &
82.65 & 80.67 & 81.50 & 60.48 & 70.59 & 0.00$\%$ \\ Tictactoe & 54.80 & \textbf{62.61} & 54.80 & 51.88 & 50.87 & 52.97 & 50.19 & 53.59 & 54.38 & 50.24 & 0.00$\%$ \\ Flare & 37.08 & 31.20 & 32.44 & 35.79 & 34.20 & 35.59 & 38.85 & \textbf{39.22} & 31.98 & 22.30 & 0.00$\%$ \\ Titanic & 33.72 & 29.77 & 33.72 & 29.77 & 33.72 & 33.72 & \textbf{36.27} & 33.72 & 31.58 & 28.61 & 0.00$\%$ \\ DNAnominal & \textbf{89.79} & 67.70 & 51.14 & 41.91 & 46.68 & 59.18 & 43.28 & 41.44 & 35.18 & 32.21 & 51.72$\%$ \\ Splice & 79.73 & 42.29 & \textbf{87.12} & 31.31 & 47.34 & 45.87 & 42.79 & 42.48 & 26.60 & 32.55 & 0.00$\%$ \\ Krvskp & 51.09 & 51.09 & 51.03 & 46.72 & \textbf{55.17} & \textbf{55.17} & 53.73 & 53.86 & 42.94 & 50.36 & 0.00$\%$ \\ Led24 & \textbf{69.50} & 45.82 & 48.03 & 53.91 & 51.83 & 61.08 & 32.65 & 28.82 & 18.38 & 13.12 & 13.79$\%$ \\ Mushroom & 82.69 & 82.76 & 82.83 & \textbf{82.91} & 82.86 & 82.39 & 78.18 & 82.29 & 71.48 & 60.78 & 0.00$\%$ \\ Connect4 & \textbf{33.20} & 31.14 & 31.91 & 27.23 & 32.88 & 33.14 & 30.34 & 31.43 & 30.53 & 29.18 & 0.18$\%$ \\ \midrule Averaged Rank$^{*}$ & \textbf{2.82} & 4.34 & 3.62 & 6.62 & 4.9 & 4.78 & 5.7 & 5.66 & 7.8 & 8.76 & 0.8 \\ \bottomrule \end{tabular}% \label{tab:clustering}% \end{table*}% To statistically compare UNTIE's performance with the above categorical representation methods, we calculate their averaged ranks by the Friedman test and the Bonferroni-Dunn test \cite{demvsar2006statistical}. The $\chi_F^2$ of the Friedman test is 83.21 with an associated $p$-value of $3.71\times 10^{-14}$. This result indicates that the compared methods do not perform equally. Further, the Bonferroni-Dunn test evaluates the critical difference (CD) between UNTIE and the other methods, and shows that the CD at $p$-value $< 0.1$ is 2.17. As shown in Table \ref{tab:clustering}, UNTIE achieves an overall averaged rank of 2.82, which is better than the other measures: for example, it is 0.8 better than that of the best state-of-the-art method CDE (3.62) and 5.94 better than VAE\_WD (8.76). Regarding the CD, UNTIE's performance is significantly better than most of the state-of-the-art methods except CDE, DILCA, and Ahmad. Although UNTIE and CDE do not show a significant difference under the Bonferroni-Dunn test at $p$-value $< 0.1$, UNTIE captures the heterogeneity in couplings, which cannot be learned by CDE. Therefore, UNTIE performs better than CDE in most cases, especially on data sets with complex structures and heterogeneous distributions. For example, on DNAnominal, UNTIE achieves $89.79\%$ while CDE only achieves $51.14\%$ in terms of F-score. All the comparison results are shown in Fig. \ref{fig:CDDiagram}, which reveals that UNTIE is significantly ($p < 0.1$) better than almost all the compared categorical representation methods. \begin{figure}[!hbtp] \centering \includegraphics[width=0.75\columnwidth]{CDDiagram.eps} \caption{Comparison of UNTIE vs. the Other Representation Methods per the Bonferroni-Dunn Test. All methods with ranks outside the marked interval are significantly different ($p < 0.1$) from UNTIE.} \label{fig:CDDiagram} \end{figure} The results also show that UNTIE and Couplings achieve overall averaged ranks of 2.82 and 4.34, respectively, in comparison with 7.8 and 8.76 for BiGAN\_WD and VAE\_WD, which rank the worst. This shows that heterogeneity learning contributes an additional 1.52 in averaged rank over the Couplings representation.
UNTIE does not consistently beat Couplings over all data sets, showing that not all data sets involve strong heterogeneity. For example, on Hepatitis, Mammographic and Tictactoe, the Couplings-enabled representations show better results, while neither UNTIE nor Couplings makes a significant improvement over the other methods. This also demonstrates that the clustering labels in these data sets are not sensitive to the captured couplings and heterogeneity, which may indicate other unknown complexities in these data sets that could be further explored. \subsubsection{UNTIE-enabled Retrieval} \begin{figure*}[!hbtp] \centering \subfigure[Precision on Dermatology.] { \label{fig:apk} \includegraphics[width=0.6\columnwidth]{dermatology_precision.eps} } \subfigure[Precision on DNAnominal.] { \label{fig:bpk} \includegraphics[width=0.6\columnwidth]{DNAnominal_precision.eps} } \subfigure[Precision on Splice.] { \label{fig:cpk} \includegraphics[width=0.6\columnwidth]{splice_precision.eps} } \caption{The Precision@k of Different Categorical Data Representation Methods: A better metric yields a higher value.} \label{fig:retrieval} \end{figure*} We further test the UNTIE representation on object retrieval, which also depends heavily on the data representation. Every object is used as a query, and its \textit{k} closest objects are retrieved per a distance measure. We report the precision@\textit{k}, i.e., the fraction of the \textit{k} retrieved objects that are same-class neighbors. We use the Euclidean distance for the UNTIE-, CDE-, BiGAN\_WD- and VAE\_WD-represented data and compare with the distances measured by COS, DILCA, Ahmad, Rough and Hamming for retrieval. Three data sets, Dermatology, DNAnominal and Splice, are used to evaluate the UNTIE-enabled retrieval performance. Unlike the clustering results, the precision@\textit{k} of retrieval demonstrates the quality of the learned representation from local (when \textit{k} is small) to global (when \textit{k} is large). The results are shown in Fig. \ref{fig:retrieval}, in which the precision of UNTIE-enabled retrieval consistently outperforms the others. This reflects that UNTIE captures more details of the data distributions than the other representation methods, which is powered by learning hierarchical value-to-attribute couplings and heterogeneities. \subsection{Evaluating the UNTIE Efficiency} The efficiency of UNTIE is affected by the number of iterations needed for Algorithm \ref{algorithm} to converge and by different data factors. In this section, we first empirically evaluate the convergence speed of UNTIE and then evaluate its computational cost under different data factors. \subsubsection{The UNTIE Convergence} Due to space limitations, we randomly select six real data sets to demonstrate the convergence of UNTIE. The optimization method for UNTIE is Adam with the same settings as in Section \ref{sec:parameters}. The training loss of the objective function Eq. \eqref{eq:omega} on these data sets is shown in Fig. \ref{fig:convergence}; the loss value converges rapidly, within around 1,000 iterations. Since the batch size in each iteration is only $20$, the time cost of 1,000 iterations is very low. \subsubsection{The UNTIE Computational Cost w.r.t. Data Factors} We further generate synthetic data to evaluate the computational cost of UNTIE in terms of the following data factors \cite{dst_Cao15}: the number of objects $n_o$, the number of attributes $n_a$, and the maximum number of values in each attribute $n_{mv}$.
The default settings of these factors are as follows: $n_o$ is $1,000$, $n_a$ is $10$, and $n_{mv}$ is $3$. We generate three groups of data and tune one of these factors for each group. For the first group of data, the number of objects is adjusted from $1,000$ to $100,000$. For the second group of data, the number of attributes is tuned from $10$ to $100$. For the third group of data, the maximum number of values in attributes is changed from $10$ to $100$. The time cost of UNTIE under each data factor is shown in Fig. \ref{fig:timeCost}. Fig. \ref{fig:object_time} shows that the time cost of UNTIE is almost stable, increasing only from $1.8$\,s to $3.6$\,s as $n_o$ grows from $1,000$ to $100,000$, which demonstrates good scalability w.r.t. the amount of data $n_o$. Our analysis shows that this minor increase in time cost is caused by the Python built-in functions used to locate categorical values in the data. Since this cost grows in an extremely small proportion to the amount of data, it can be ignored when applying UNTIE. Fig. \ref{fig:attribute_time} and Fig. \ref{fig:value_time} demonstrate that the time cost has an approximately linear relation with both $n_a$ and $n_{mv}$, which is consistent with the time complexity of UNTIE analyzed in Section \ref{sec:complexity}. These results also show that the main cost of UNTIE lies in the heterogeneity learning (HL), which has a linear relation with both $n_a$ and $n_{mv}$. Meanwhile, the cost of building hierarchical coupling learning (HCL) has a quadratic relation with $n_a$ and $n_{mv}$. The reason is that UNTIE calculates the pairwise value relations when learning inter-attribute couplings. However, this only slightly affects the cost of UNTIE when $n_a$ and $n_{mv}$ are small. For categorical data with high dimensionality, a trade-off between sufficiently capturing couplings and preserving efficiency is required. The computational costs of UNTIE and the state-of-the-art methods are at the same level in terms of the data factor $n_o$, which indicates that all of these methods can handle large amounts of data. UNTIE has a higher computational cost than the other methods in terms of $n_a$. As shown in Fig. \ref{fig:attribute_time}, the higher cost is brought by heterogeneity learning, which has a linear relation with $n_a$. For the hierarchical coupling learning, the cost of UNTIE is at the same level as that of the state-of-the-art methods. As for $n_{mv}$, UNTIE is much more efficient than CDE, which is the state-of-the-art method with the best representation performance in the previous experiments. To evaluate the relation between the time cost and the number of kernel functions, we vary the number of kernels used in UNTIE from 1 to 30 and test the computational cost of UNTIE on the synthetic data set with the default data factors. The UNTIE time cost with different numbers of kernels is shown in Fig. \ref{fig:kernelTimeCost}. It shows that the UNTIE time cost is linear in the number of kernels, with a very small slope; increasing the number of kernels only slightly affects the computational time of UNTIE. This is consistent with our theoretical analysis, which indicates that the time complexity of UNTIE is linear in $n_{\bm{\omega}}$; here, $n_{\bm{\omega}}$ grows linearly with the number of kernels.
\begin{table*}[!htbp] \centering \footnotesize \caption{KNN, SVM, RF and LR Classification F-score (\%) with UNTIE and CDE.
The best results are highlighted in bold.}
\begin{tabular}{l|ll|ll|ll|ll} \toprule Data set & UNTIE-SVM & CDE-SVM & UNTIE-KNN & CDE-KNN & UNTIE-RF & CDE-RF & UNTIE-LR & CDE-LR \\ \midrule
zoo & \textbf{100} & 88.00$\pm$18.33 & \textbf{100} & \textbf{100} & \textbf{100} & \textbf{100} & \textbf{100} & \textbf{100} \\
DNAPromoter & \textbf{94.42$\pm$6.81} & 91.37$\pm$7.41 & \textbf{87.19$\pm$10.79} & 76.06$\pm$10.62 & \textbf{89.32$\pm$8.81} & 87.58$\pm$11.37 & \textbf{94.41$\pm$6.03} & 90.35$\pm$8.20 \\
hayesroth & \textbf{82.23$\pm$6.22} & 80.92$\pm$6.96 & 60.15$\pm$10.52 & \textbf{62.38$\pm$10.33} & \textbf{82.48$\pm$9.00} & 82.11$\pm$8.26 & \textbf{82.08$\pm$7.29} & \textbf{82.08$\pm$7.29} \\
lymphography & \textbf{87.06$\pm$12.02} & 85.68$\pm$11.26 & \textbf{82.22$\pm$11.08} & 79.03$\pm$15.07 & 82.17$\pm$14.59 & \textbf{84.16$\pm$11.31} & \textbf{83.12$\pm$12.90} & 81.67$\pm$13.84 \\
hepatitis & \textbf{48.13$\pm$8.65} & 46.85$\pm$6.04 & 70.16$\pm$13.37 & \textbf{70.47$\pm$14.52} & \textbf{66.61$\pm$11.12} & 64.30$\pm$12.25 & \textbf{70.39$\pm$16.85} & \textbf{70.39$\pm$16.85} \\
audiology & \textbf{73.41$\pm$7.29} & 73.25$\pm$5.90 & \textbf{48.19$\pm$9.47} & 47.79$\pm$8.20 & \textbf{63.80$\pm$10.24} & 59.78$\pm$11.83 & 47.70$\pm$7.11 & \textbf{66.29$\pm$11.26} \\
housevotes & 96.71$\pm$3.88 & \textbf{96.92$\pm$3.69} & \textbf{95.58$\pm$3.69} & 92.54$\pm$4.18 & \textbf{96.02$\pm$4.82} & 95.13$\pm$4.61 & 93.74$\pm$5.00 & \textbf{93.98$\pm$5.15} \\
spect & 67.60$\pm$11.92 & \textbf{68.46$\pm$10.88} & \textbf{55.41$\pm$9.95} & 50.54$\pm$8.56 & 66.61$\pm$10.93 & \textbf{67.13$\pm$12.08} & \textbf{69.48$\pm$12.52} & \textbf{69.48$\pm$12.52} \\
mofn3710 & \textbf{100} & \textbf{100} & \textbf{87.18$\pm$6.29} & 86.04$\pm$7.77 & 76.64$\pm$10.51 & \textbf{78.31$\pm$12.16} & \textbf{100} & \textbf{100} \\
soybeanlarge & 90.94$\pm$3.83 & \textbf{93.61$\pm$4.34} & 93.70$\pm$4.26 & \textbf{96.03$\pm$3.85} & \textbf{92.88$\pm$5.81} & 92.69$\pm$6.12 & \textbf{89.57$\pm$5.98} & 88.57$\pm$6.97 \\
\midrule Averaged Rank & \textbf{1.35} & 1.65 & \textbf{1.35} & 1.65 & \textbf{1.35} & 1.65 & 1.45 & \textbf{1.55} \\ \bottomrule \end{tabular}%
\label{tab:flexity}%
\end{table*}%
\subsection{Evaluating the UNTIE Flexibility}
We further demonstrate the flexibility of the UNTIE representations by feeding them into the classifiers KNN, SVM, RF and LR. We randomly select 90\% of the objects in each data set for training and the remainder for testing. To reduce the impact of noise and randomness, we repeat this sampling 20 times, generating 20 pairs of training and test sets for the experiments. The averaged classification performance and its standard deviation are reported w.r.t. F-score ($\%$). The vector representations learned by UNTIE and CDE are used as the input of these classifiers. The results in Table \ref{tab:flexity} illustrate that UNTIE representations can fit different classifiers and enhance their performance on categorical data, as compared with the CDE-enabled classifiers.
\subsection{Evaluating the UNTIE Stability}
To evaluate the stability of UNTIE w.r.t. the kernel functions used in heterogeneity learning, we adopt three groups of kernel function sets, each containing a varying number of functions. The first group contains only Gaussian kernels, the second group contains only Polynomial kernels, and the third group mixes Gaussian and Polynomial kernels. The kernel functions in each set are shown in Table \ref{tab:stability}. The clustering F-score enabled by UNTIE w.r.t.
different kernel function sets on the two data sets DNAPromoter and Mofn3710 is illustrated in Fig.~\ref{fig:stability}.
\renewcommand\arraystretch{1.5}
\begin{table}[!htbp] \centering \footnotesize \caption{Three Groups of Kernel Functions for Testing the UNTIE Stability}\label{tab:stability} \begin{tabular}{l|l|l} \toprule Group & Set & Kernel Functions\\ \midrule \multirow{2}{*}{Group 1} & $F_1$ & Gaussian kernels with width $\{2^{-3}, 2^{-2}, \cdots, 2^{3}\}$\\ \cline{2-3} & $F_2$ & Gaussian kernels with width $\{2^{-5}, 2^{-4}, \cdots, 2^{5}\}$\\ \midrule \multirow{2}{*}{Group 2} & $F_3$ & Polynomial kernels with order $\{1,2\}$\\ \cline{2-3} & $F_4$ & Polynomial kernels with order $\{1,2,3\}$\\ \midrule \multirow{4}{*}{Group 3} & \multirow{2}{*}{$F_5$} & Gaussian kernels with width $\{2^{-3}, 2^{-2}, \cdots, 2^{3}\}$\\ & & Polynomial kernels with order $\{1,2\}$\\ \cline{2-3} & \multirow{2}{*}{$F_6$} & Gaussian kernels with width $\{2^{-5}, 2^{-4}, \cdots, 2^{5}\}$\\ & & Polynomial kernels with order $\{1,2,3\}$\\ \bottomrule \end{tabular} \end{table}
\begin{figure}[!hbtp] \centering \subfigure[F-Score ($\%$) on DNAPromoter.] { \includegraphics[width=0.45\columnwidth]{DNAPromoterDifferentKernels.eps} } \subfigure[F-Score ($\%$) on Mofn3710.] { \includegraphics[width=0.45\columnwidth]{Mofn3710DifferentKernels.eps} } \caption{The Clustering F-Score ($\%$) with UNTIE w.r.t. Different Kernel Function Sets: The same color indicates the same kernel function group.} \label{fig:stability} \end{figure}
UNTIE is stable w.r.t. the number of kernel functions. As shown in Fig. \ref{fig:stability}, UNTIE achieves the same clustering F-score for kernel function sets in the same group, i.e., sets whose kernel functions are of the same type but differ in number. Although different kernel functions may affect UNTIE differently on different data sets (e.g., the Gaussian kernel family in group 1 enables better performance on DNAPromoter, and the Polynomial kernel family in group 2 enables better performance on Mofn3710), UNTIE can comprehensively learn information from multiple kernels while eliminating their redundancy and inconsistency. Accordingly, it always achieves the best clustering performance with the kernel function sets $F_5$ and $F_6$ in group 3, which involve kernel functions from both groups 1 and 2.
\section{Conclusions}\label{sec:conclusion}
Categorical data representation is critical yet challenging since complicated coupling relationships and heterogeneities are often embedded in categorical values, attributes and objects. Existing work, including deep learning methods, struggles with unsupervised categorical representation learning. This paper introduces a heterogeneous coupling learning method, UNTIE, for unsupervised categorical data representation. By modeling value-to-object hierarchical couplings and their complementary and inconsistent influence on representations, UNTIE reveals the nonlinear relations between couplings and discloses the heterogeneous distributions within couplings. Both theoretical and empirical analyses show the effectiveness and efficiency of UNTIE. An important lesson learned in UNTIE is to select appropriate kernels w.r.t. specific data characteristics and domain knowledge of the underlying problems. This work shows the need for, and the potential of, shallow learners in handling complex data characteristics, in particular couplings, heterogeneities and inconsistency.
The poor results on some data sets in Table \ref{tab:clustering} also highlight the challenges and open issues in the categorical representation of complex data characteristics, even on small data. In addition, modeling more complex couplings, such as very high-dimensional couplings, may capture more complicated relations and interactions embedded in high-dimensional data. \section{Acknowledgment} This work is partially sponsored by the Australian Research Council grants DP190101079 and FT190100734. \bibliographystyle{IEEEtran}
\makeatletter
\def\section{\@startsection{section}{1}%
  \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
  {\normalfont\scshape\Large\centering}}
\def\subsection{\@startsection{subsection}{2}%
  \z@{.5\linespacing\@plus.7\linespacing}{.5\linespacing}%
  {\normalfont\scshape\centering}}
\makeatother
\newcounter{licznik}[section]
\def\thelicznik{\thesection.\arabic{licznik}}
\newtheorem{theorem}[licznik]{Theorem}
\newtheorem{Lemma}[licznik]{Lemma}
\newtheorem{Proposition}[licznik]{Proposition}
\newtheorem{cor}[licznik]{Corollary}
\newtheorem{remark}[licznik]{Remark}
\newtheorem{definition}[licznik]{Definition}
\newcommand{\ind}[1]{I_{#1}}
\newcommand{\indd}[1]{\ind{\{#1\}}}
\makeatletter
% Helper inserting a space unless followed by punctuation
% (the helper's name was lost in extraction and is reconstructed here).
\newcommand{\nospacepunct}{\@ifnextchar.{}{\@ifnextchar,{}{\@ifnextchar;{}{ }}}}
\makeatother
\newcommand{\cadlag}{c\`adl\`ag\nospacepunct}
\newcommand{\caglad}{c\`agl\`ad\nospacepunct}
\newcommand\as{\mbox{-a.s.}\nospacepunct}
\renewcommand\ae{\mbox{-a.e.}\nospacepunct}
\newcommand{\I}{I}
\newcommand{\J}{J}
\newcommand{\mcalI}{\mathcal{I}}
\newcommand{\mcalJ}{\mathcal{J}}
\newcommand{\mcalL}{\mathcal{L}_b}
\newcommand\mcalO{\mathcal{O}}
\definecolor{brightmaroon}{rgb}{0.7, 0.23, 0.2}
\definecolor{DarkOrange}{rgb}{0.8, 0.45, 0}
\makeatletter
\newcommand\assumptionlabel[1]{\hspace\labelsep
  \normalfont\bfseries #1\ \ \gdef\@currentlabel{#1}}
\newenvironment{assumption}
  {\medskip\list{}{\labelwidth\z@ \itemindent-\leftmargin
    \let\makelabel\assumptionlabel}}
  {\endlist}
\makeatother
\begin{document}
\title[Non-Markovian Dynkin games with partial/asymmetric information]{On the value of non-Markovian Dynkin games \\ with partial and asymmetric information}
\author[T. De Angelis, N. Merkulov, J. Palczewski]{Tiziano De Angelis, Nikita Merkulov, Jan Palczewski}
\address{T.\ De Angelis: School of Management \& Economics, Dept.\ ESOMAS, University of Turin, C.so Unione Sovietica 218bis, 10134, Torino, Italy}
\email{tiziano.deangelis@unito.it (T. De Angelis)}
\address{N.\ Merkulov and J.\ Palczewski: School of Mathematics, University of Leeds, LS2 9JT, Leeds, UK}
\email{mmnme@leeds.ac.uk (N. Merkulov)}
\email{j.palczewski@leeds.ac.uk (J. Palczewski)}
\thanks{2020 {\em Mathematics Subject Classification}: 91A27, 91A55, 91A15, 60G07, 60G40}
\keywords{non-Markovian Dynkin games, partial information, asymmetric information, optimal stopping, randomised stopping times, regular processes, predictable-jump processes.}
\thanks{{\em Acknowledgments}: T.~De Angelis gratefully acknowledges support by the EPSRC grant EP/R021201/1. We are grateful to an anonymous referee who pointed us to the existence of optimal strategies and suggested the equivalence of our topology with the one used by Baxter and Chacon \cite{BaxterChacon} and Meyer \cite{Meyer} (see our Lemma \ref{lem:top})}
\begin{abstract}
We prove that zero-sum Dynkin games in continuous time with partial and asymmetric information admit a value in randomised stopping times when the stopping payoffs of the players are general \cadlag measurable processes. As a by-product of our method of proof we also obtain existence of optimal strategies for both players. The main novelties are that we do not assume a Markovian nature of the game nor a particular structure of the information available to the players. This allows us to go beyond the variational methods (based on PDEs) developed in the literature on Dynkin games in continuous time with partial/asymmetric information. Instead, we focus on a probabilistic and functional analytic approach based on the general theory of stochastic processes and Sion's min-max theorem (M.\ Sion, Pacific J. Math., {\bf 8}, 1958, pp.\ 171--176). Our framework encompasses examples found in the literature on continuous time Dynkin games with asymmetric information and we provide counterexamples to show that our assumptions cannot be further relaxed.
\medskip
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
In this paper we develop a framework for the study of the existence of a value (also known as {\em Stackelberg equilibrium}) in zero-sum Dynkin games with partial/asymmetric information in a non-Marko\-vian setting, when the payoffs are general \cadlag measurable processes and players are allowed to use randomised stopping times. As a by-product of our method of proof we also obtain existence of optimal strategies for both players. The games are considered on both finite and infinite time horizons; the horizon is denoted by $T$. The payoff processes can be decomposed into the sum of a {\em regular} process (in the sense of Meyer \cite{Meyer}) and a pure jump process with mild restrictions on the direction of predictable jumps for one of the two players. Regular processes form a very broad class encompassing, for example, all \cadlag processes that are also quasi left-continuous (i.e., left-continuous over stopping times).
We allow for a very general structure of the information available to the players. All processes are adapted to an overarching filtration $(\mathcal{F}_t)$ whereas each player makes decisions based on her own filtration, representing her access to information. Letting $(\mathcal{F}^{\,i}_t)$ be the filtration of the $i$-th player, with $i=1,2$, we only need to assume that $\mathcal{F}^{\,i}_t\subseteq\mathcal{F}_t$ for all $t\in[0,T]$. In particular, we cover the case in which players are equally (partially) informed, i.e., $\mathcal{F}^1_t=\mathcal{F}^2_t$, and, more importantly, the case in which they have asymmetric (partial) information, i.e., $\mathcal{F}^1_t\neq\mathcal{F}^2_t$. Under this generality we prove that Dynkin games with second-mover advantage admit a value in mixed strategies (which in this context are represented by randomised stopping times) and optimal strategies exist for both players. Our framework encompasses most (virtually all) examples of zero-sum Dynkin games (in continuous time) with partial/asymmetric information that we could find in the literature (see, e.g., De Angelis et al. \cite{DGV2017}, \cite{DEG2020}, Gensbittel and Gr\"un \cite{GenGrun2019}, Gr\"un \cite{Grun2013} and Lempa and Matom\"aki \cite{lempa2013}) and we give a detailed account of this fact in Section \ref{sec:examples} (notice that \cite{DGV2017} and \cite{lempa2013} obtain a saddle point for the game in pure strategies, i.e., using stopping times, but in very special examples). Broadly speaking, all those papers' solution methods hinge on variational inequalities and PDEs and share two key features: (i) a specific structure of the information flow in the game and (ii) the Markovianity assumption. In our work instead we are able to analyse games at a more abstract level that allows us to drop the Markovianity assumption and to avoid specifying an information structure. Of course the cost to pay for such greater generality is a lack of explicit results concerning the value and the optimal strategies (beyond their existence), which instead may be obtained in some problems satisfying (i) and (ii) above. We also show by several counterexamples that our main assumptions cannot be further relaxed as otherwise a value for the game may no longer exist. Our methodology draws on the idea presented in Touzi and Vieille \cite{TouziVieille2002} of using Sion's min-max theorem (Sion \cite{Sion1958}). In \cite{TouziVieille2002} the authors are interested in non-Markovian zero-sum Dynkin games with full information in which first- and second-mover advantage may occur at different points in time, depending on the stochastic dynamics of the underlying payoff processes. In that context randomisation is essentially used by the players to attain a value in the game and avoid stopping simultaneously (another general study on such class of problems, but without using Sion's theorem, is contained in Laraki and Solan \cite{LarakiSolan2005}). Since our set-up is different, due to the partial/asymmetric information features and relaxed assumptions on the payoff processes, we encounter some non-trivial technical difficulties in repeating arguments from \cite{TouziVieille2002}; for example, our class of randomised stopping times is not closed with respect to the topology used in \cite{TouziVieille2002} (see Remark \ref{rem-TV-norm-doesnt-work}). For this reason we develop an alternative approach based on the general theory of stochastic processes combined with ideas from functional analysis. 
\subsection{Literature review} Existence of a value (Stackelberg equilibrium) in zero-sum Dynkin games is a research question that goes back to the 70's in the classical set-up where players have full and symmetric information. A comprehensive and informative review of the main results since Dynkin's inception of stopping games \cite{dynkin1969} is contained in the survey paper by Kifer \cite{kifer2013}. Here we recall that early results on the existence of a value in a diffusive set-up were obtained by Bensoussan and Friedman \cite{bensoussan1974} via PDE methods, and by Bismut \cite{bismut1977} via probabilistic methods (and allowing for processes with jumps). Those results were later extended to right-continuous Markov processes by Stettner \cite{stettner1982} (see also Stettner \cite{stettner1982b} and \cite{stettner1984}). In the non-Markovian setting the early results are due to Lepeltier and Maingueneau \cite{lepeltier1984}. Around the year 2000 zero-sum Dynkin games gained popularity thanks to their applications in mathematical finance suggested by Kifer \cite{kifer2000} (see also Kyprianou \cite{kyprianou2004} for an early contribution). Numerous other papers have addressed related questions, including the existence of value and optimal strategies, with various methods. The interested reader may consult Ekstr\"om and Peskir \cite{ekstrom2008} for modern results in a general Markovian setting or Ekstr\"om and Villeneuve \cite{ekstrom2006} for the special case of one-dimensional linear diffusions. It is also worth mentioning that nonzero-sum Dynkin games have been studied by many authors including, e.g., Hamadene and Zhang \cite{hamadene2010} for non-Markovian games, Attard \cite{attard2018} for general Markovian games and De Angelis et al. \cite{de2018nash} for games on one-dimensional linear diffusions. All the papers mentioned in this (largely incomplete) literature review deal with players holding full and symmetric information and the value is found in {\em pure strategies}, i.e., in stopping times. Yasuda \cite{yasuda1985} was probably the first to study the existence of a value for Dynkin games with {\em randomised stopping times}, in the case of discrete-time Markov processes. In that context, randomisation is specified by assigning a probability of stopping at each time $n=1,2,\ldots$. A similar type of games for Markov chains with an absorbing state was also studied by Domansky \cite{domansky2002}. Rosenberg, Solan and Vieille \cite{rosenberg2001} developed more general results by removing the Markovianity assumption and the assumption of an ordering of payoffs (i.e., they did not require that there be a first- or second-mover advantage). Randomised stopping times are also used in mathematical finance, in the context of pricing and hedging game options in discrete time with transaction costs (see \cite[Sec.\ 5]{kifer2013}), and in mathematical economics, to construct subgame-perfect equilibria in (nonzero-sum) Dynkin games (see, e.g., Riedel and Steg \cite{riedel2017}). In continuous time, the most recent results on the existence of a value for non-Markovian zero-sum Dynkin games with randomised stopping times are contained in Touzi and Vieille \cite{TouziVieille2002} and in Laraki and Solan \cite{LarakiSolan2005}. 
We emphasise that in all the papers mentioned above players are equally informed and the need for randomisation stems mainly from a specific structure of the game's payoff (often due to the lack of an ordering of the payoff processes that induces alternating first- and second-mover advantage). Our work instead is inspired by a more recent strand of the literature on continuous-time Dynkin games that addresses the role of information across players and its impact on their strategies (see also Section \ref{sec:examples} for a fuller account). We believe this strand was initiated with work by Gr\"un \cite{Grun2013}, where one of the players knows the payoff process in the game while the other one has access only to an initial distribution of possible payoff processes. Gr\"un proves existence of a value in randomised strategies and the existence of an optimal strategy for the informed player. In Gensbittel and Gr\"un \cite{GenGrun2019} players observe two different stochastic processes and the payoffs in the game depend on both processes. The authors prove existence of a value and, under some additional conditions, of optimal strategies for both players. Both these papers attack the problem via a characterisation of the value as the viscosity solution of a certain variational inequality (of a type which is rather new in the literature and is inspired by similar results in the context of differential games; see, e.g., Cardaliaguet and Rainer \cite{cardaliaguet2009}). Free-boundary methods in connection with randomised stopping times are instead used in De Angelis et al. \cite{DEG2020}, where players have asymmetric information regarding the drift of a linear diffusion underlying the game, and in Ekstr\"om et al. \cite{ekstrom2017}, where the two players estimate the drift parameter according to two different models. The methods used in those papers cannot be extended to the non-Markovian framework of our paper. Although not directly related to Dynkin games, we notice that methods from functional analysis and the general theory of stochastic processes have been used recently to study optimal stopping problems by Pennanen and Perkk\"io \cite{pennanen2018}. By relaxing the problem to include randomised stopping times the authors reduce the optimal stopping problem to an optimisation of a linear functional over a convex set of randomised stopping times and find that the solution exists as an extreme point, i.e., a pure stopping time. Closely related contributions date back to Baxter and Chacon \cite{BaxterChacon} and Meyer \cite{Meyer}, who establish compactness of the set of randomised stopping times in weak topologies defined by functionals which can be interpreted as stopping of quasi left-continuous processes, in \cite{BaxterChacon}, and regular processes, in \cite{Meyer}. In our game framework we need to rely on min-max arguments instead of convex optimisation as in the optimal stopping case, so compactness arguments are not immediately applicable. However, \cite{BaxterChacon, Meyer} inspired our approach and some of our convergence results in Section \ref{sec:tech}. Furthermore, the topology of \cite{Meyer} on the set of randomised stopping times turns out to be equivalent to the topology obtained via different routes in our paper (Lemma \ref{lem:top}).
\subsection{Structure of the paper}
The paper is organised as follows. The problem is set in Section \ref{sec:setting} where we also state our main result on the existence of a value in full generality (Theorem \ref{thm:main2}).
For the ease of readability we also state a version of the result under slightly stronger conditions on the underlying processes (Theorem \ref{thm:main}), which allows a more linear approach to the proof. An extension to the case of a game with conditioning on some initial information is also stated as Theorem \ref{thm:ef_0_value}. Before turning to proofs of our results we use Section \ref{sec:examples} to illustrate how our framework encompasses Dynkin games in continuous time with partial and asymmetric information that have appeared in the literature to date. Section \ref{sec:reform} is used to reformulate the Dynkin game in terms of a game of increasing (singular) controls. Section \ref{sec-Sion-existence-of-value} begins with a statement of Sion's min-max theorem which is followed by a (short) proof of our Theorem \ref{thm:main}. The latter is based on several technical results which are addressed in detail in Sections \ref{sec:tech}, \ref{sec:verif} and \ref{sec:approx}. The proof of Theorem \ref{thm:main2} is then given in Section \ref{sec:relax} and the one of Theorem \ref{thm:ef_0_value} is finally given in Section \ref{sec:ef_functional}. We close the paper with a few counterexamples in Section \ref{sec:Nikita-examples} showing that our conditions cannot be relaxed.
\section{Problem setting and main results}\label{sec:setting}
Fix a complete probability space $(\Omega,\mathcal{F}, \mathbb{P})$ equipped with a filtration $(\mathcal{F}_t)_{t\in[0,T]}$, where $T\in(0,\infty]$ is the time horizon of our problem. All random variables, processes and stopping times are considered on this filtered probability space unless specified otherwise. We write $\mathbb{E}$ for the expectation with respect to the measure $\mathbb{P}$. By a \emph{measurable process} we mean a stochastic process which is $\mathcal{B}([0,T])\times\mathcal F$-measurable. We denote by $\mcalL$ the Banach space of \cadlag measurable processes $X$ with finite norm
\[ \| X \|_{\mcalL} := \mathbb{E} \Big[\sup_{t\in[0,T]} |X_t|\Big] < \infty. \]
A process $(X_t)_{t\in [0,T]} \in \mcalL$ is called \emph{regular} if
\begin{align}\label{eq:cond-reg} \mathbb{E} [X_\eta - X_{\eta-}|\mathcal{F}_{\eta-}] = 0\quad \mathbb{P}\as \text{ for all predictable $(\mathcal{F}_t)$-stopping times $\eta$.} \end{align}
Notice that if $T=\infty$, then $[0,\infty]$ is understood as the one-point compactification of $[0, \infty)$, so that \cadlag and regular processes are defined as follows (cf. \cite[Remark VI.53e]{DellacherieMeyer}): a process $(X_t)_{t \in [0, \infty]}$ is \cadlag if it is \cadlag on $[0, \infty)$ and the limit $X_{\infty-}:=\lim_{t \to \infty} X_t$ exists; the random variable $X_\infty$ is $\mathcal{F}_\infty$-measurable and $\mathcal{F}_\infty$ is potentially different from $\mathcal{F}_{\infty-} = \sigma\big(\cup_{t \in [0, \infty)} \mathcal{F}_t\big)$. Furthermore, $(X_t)_{t\in[0,\infty]}$ is regular if it is regular on $[0, \infty)$ and
\[ \mathbb{E}[X_\infty - \lim_{t \to \infty} X_t | \mathcal{F}_{\infty-} ] = 0. \]
Throughout the paper we consider several filtrations and stochastic processes, so in order to keep the notation simple we will often use $(\mathcal{F}_t)$ instead of $(\mathcal{F}_t)_{t\in[0,T]}$ and similarly $(X_t)$ (or simply $X$) for a process $(X_t)_{t\in[0,T]}$.
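To illustrate condition \eqref{eq:cond-reg} we record two standard examples; they are included here purely as an illustration and are not used in the sequel.
\begin{remark}
Let $N=(N_t)_{t\in[0,T]}$, with $T<\infty$, be a Poisson process and let $(\mathcal{F}_t)$ be its natural filtration augmented with the $\mathbb{P}$-null sets. The jump times of $N$ are totally inaccessible, so $N_\eta=N_{\eta-}$ $\mathbb{P}$\as for every predictable stopping time $\eta$; hence $N\in\mcalL$ satisfies \eqref{eq:cond-reg} (indeed, $N$ is quasi left-continuous). In contrast, for $T>1$ the deterministic process $X_t=\ind{\{t\ge 1\}}$ jumps at the predictable time $\eta\equiv 1$ with $\mathbb{E}[X_\eta - X_{\eta-}|\mathcal{F}_{\eta-}]=1\neq 0$, so $X$ is not regular; predictable jumps of this kind are accommodated by Assumption \ref{ass:regular_gen} below.
\end{remark}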
We consider two-player zero-sum Dynkin games on the (possibly infinite) horizon $T$. Actions of the first player are based on the information contained in a filtration $(\mathcal{F}^1_t) \subseteq (\mathcal{F}_t)$. Actions of the second player are based on the information contained in a filtration $(\mathcal{F}^2_t) \subseteq (\mathcal{F}_t)$. Each player selects a random time (taking values in $[0,T]$) based on the information she acquires via her filtration: the first player's random time is denoted by $\tau$ while the second player's random time is $\sigma$. The game terminates at time $\tau \wedge \sigma \in[0, T]$ with the first player delivering to the second player the payoff
\begin{equation}\label{eqn:payoff} \mathcal{P} (\tau, \sigma) = f_{\tau} \ind{\{\tau<\sigma\}} + g_{\sigma} \ind{\{{\sigma}<{\tau}\}} + h_{\tau} \ind{\{\tau=\sigma\}}. \end{equation}
The first player (or $\tau$-player) is the {\em minimiser} in the game whereas the second player (or $\sigma$-player) is the {\em maximiser}. That means that the former will try to minimise the expected payoff (see \eqref{eq-uninf-payoff} below) while the latter will try to maximise it. The {\em payoff processes} $f$, $g$ and $h$ satisfy the following conditions:
\begin{assumption} \item[(A1)]\label{eq-integrability-cond} $f, g \in\mcalL$, \item[(A2)]\label{ass:regular} $f, g$ are $(\mathcal{F}_t)$-adapted regular processes, \item[(A3)]\label{eq-order-cond} $f_t\ge h_t \ge g_t$ for all $t\in[0,T]$, $\mathbb{P}$\as, \item[(A4)]\label{eq-terminal-time-order-cond} $h$ is an $(\mathcal{F}_t)$-adapted, measurable process. \end{assumption}
In particular, we do not assume that $h$ is \cadlag. In the context of zero-sum games that we will address below, our assumption \ref{eq-order-cond} corresponds to the so-called games with the {\em second-mover advantage}, i.e., games in which both players have an incentive to wait for the opponent to end the game. Assumption \ref{eq-integrability-cond} is natural in the framework of optimal stopping problems (see, e.g., \cite[Section 2.2]{Peskir2006}, \cite[Eq. (D.29)]{Karatzas1998}) and Dynkin games (\cite{TouziVieille2002}). With \ref{ass:regular} we replace semimartingale assumptions on $f$ and $g$ from \cite{TouziVieille2002} while imposing the regularity condition dating back to \cite{Meyer} in the optimal stopping framework. Regular processes encompass a large family of stochastic processes encountered in applications. It is straightforward to see that quasi left-continuous processes \cite[Section III.11]{RogersWilliams} are regular. In the Markovian framework all standard processes \cite[Def. I.9.2]{Blumenthal1968} and, in particular, weak Feller processes (\cite[Def. III.6.5 and Thm. III.11.1]{RogersWilliams} or \cite[Thm. I.9.4]{Blumenthal1968}) are regular. Hence, strong and weak solutions to stochastic differential equations (SDEs) driven by multi-dimensional Brownian motion (or by a L\'evy process, with bounded coefficients) \cite[Section 6.7]{Appelbaum2009} and solutions to jump-diffusion SDEs are also regular processes. More generally, a regular process is allowed to jump at predictable times $\eta$, provided that such jumps have zero mean conditional on $\mathcal{F}_{\eta-}$. We subsequently relax Assumption \ref{ass:regular} by allowing payoff processes with predictable jumps of nonzero mean.
That is, we replace \ref{ass:regular} with the following:
\begin{assumption} \item[(A2')] \label{ass:regular_gen} Processes $f$ and $g$ have the decomposition $f = \tilde f + \hat f$, $g = \tilde g + \hat g$ with \begin{enumerate} \item $\tilde f, \tilde g \in\mcalL$, \item $\tilde f, \tilde g $ are $(\mathcal{F}_t)$-adapted regular processes, \item $\hat f, \hat g$ are $(\mathcal{F}_t)$-adapted (right-continuous) piecewise-constant processes of integrable variation with $\hat f_{0} = \hat g_0 = 0$, $\Delta \hat f_T = \hat f_{T} - \hat f_{T-} = 0$ and $\Delta \hat g_T =\hat g_T- \hat g_{T-}= 0$, \item either $\hat f$ is non-increasing or $\hat g$ is non-decreasing. \end{enumerate} \end{assumption}
Notice that there are non-decreasing processes $(\hat f^+_t), (\hat f^-_t), (\hat g^+_t), (\hat g^-_t) \in \mcalL$ starting from $0$ such that $\hat f = \hat f^+-\hat f^-$ and $\hat g = \hat g^+ - \hat g^-$ \cite[p. 115]{DellacherieMeyer}. Under Assumption \ref{ass:regular_gen}, we allow jumps of $\hat f$ in any direction and only upward jumps of $\hat g$, or, vice versa, jumps of $\hat g$ in any direction and downward jumps of $\hat f$. This ensures a certain closedness property (see Section \ref{sec:relax}). It is worth emphasising that further relaxation of condition \ref{ass:regular_gen} is not possible in the generality of our setting as demonstrated in Remark \ref{rem:contrad} and in Section \ref{subsec:example_jumps}. While regular processes have no restrictions on non-predictable jumps, \ref{ass:regular_gen} relaxes the condition in \eqref{eq:cond-reg} by allowing predictable jumps with non-zero (conditional) mean. The necessity to restrict the direction of predictable jumps of one of the payoff processes is a new feature introduced by the asymmetry of information. In classical Dynkin games such a restriction is not necessary; see \cite{ekstrom2008, lepeltier1984, stettner1982b}. We further require a technical assumption:
\begin{assumption} \item[(A5)]\label{ass:filtration} The filtrations $(\mathcal{F}_t)$ and $(\mathcal{F}^i_t)$, $i=1,2$, satisfy the usual conditions, i.e., they are right-continuous and $\mathcal{F}^i_0$, $i=1,2$, contain all sets of $\mathbb{P}$-measure zero. \end{assumption}
Players assess the game by looking at the \emph{expected payoff}
\begin{equation} N(\tau,\sigma)= \mathbb{E} \big[ \mathcal{P} (\tau, \sigma) \big]. \label{eq-uninf-payoff} \end{equation}
The game is said to have a \emph{value} if
\[ \sup_\sigma \operatornamewithlimits{inf\vphantom{p}}_\tau N(\tau, \sigma) = \operatornamewithlimits{inf\vphantom{p}}_\tau \sup_\sigma N(\tau, \sigma), \]
where, for now, we do not specify the nature of the admissible random times $(\tau,\sigma)$. The mathematical difficulty with establishing existence of a value lies in the possibility of swapping the order of `inf' and `sup', and this is closely linked to the choice of admissible random times. Furthermore, an admissible pair $(\tau_*,\sigma_*)$ is said to be a {\em saddle point} (or a pair of \emph{optimal strategies}) if
\[ N(\tau_*,\sigma)\le N(\tau_*,\sigma_*)\le N(\tau,\sigma_*), \]
for all other admissible pairs $(\tau,\sigma)$.
\begin{remark}\label{rem:ineq} Our problem formulation enjoys a symmetry which will be used later in the proofs. Since we do not make assumptions on the sign of $f$, $g$, $h$, if the value exists for the game with payoff $\mathcal{P}(\tau,\sigma)$, then it also exists for the game with payoff $\mathcal{P}'(\tau,\sigma):=-\mathcal{P}(\tau,\sigma)$.
However, in the latter game the $\tau$-player is a maximiser and the $\sigma$-player is a minimiser, by the simple fact
\begin{align}\label{eq:swap} \sup_\sigma\operatornamewithlimits{inf\vphantom{p}}_\tau \mathbb{E}[\mathcal{P}(\tau,\sigma)]=-\operatornamewithlimits{inf\vphantom{p}}_\sigma\sup_\tau \mathbb{E}[\mathcal{P}'(\tau,\sigma)], \end{align}
where, obviously, in $\mathcal{P}'$ we have the payoff processes $f'_t:=-f_t$, $g'_t:=-g_t$ and $h'_t:=-h_t$. \end{remark}
It has been indicated in the literature that games with asymmetric information may not have a value if players' strategies are stopping times for their respective filtrations; see \cite[Section 2.1]{Grun2013}. Indeed, in Section \ref{sec:Nikita-examples} we demonstrate that the game studied in this paper may not, in general, have a value if the first player (resp. the second player) uses $(\mathcal{F}^1_t)$-stopping times (resp. $(\mathcal{F}^2_t)$-stopping times). It has been proven in certain Markovian set-ups that the relaxation of player controls to randomised stopping times may be sufficient for the existence of the value (see, e.g., \cite{Grun2013}, \cite{GenGrun2019}). The goal of this paper is to show that this is indeed true in the generality of our non-Markovian set-up for the game with payoff \eqref{eq-uninf-payoff}. The framework of this paper encompasses all two-player zero-sum Dynkin games in continuous time that we found in the literature. Indeed, when $(\mathcal{F}_t) = (\mathcal{F}^1_t) = (\mathcal{F}^2_t)$, the game \eqref{eq-uninf-payoff} is the classical Dynkin game with full information for both players. The case of $(\mathcal{F}^1_t) = (\mathcal{F}^2_t)$ but $(\mathcal{F}^1_t) \ne (\mathcal{F}_t)$ corresponds to a game with partial but symmetric information about the payoff processes (e.g., \cite{DGV2017}), whereas $(\mathcal{F}^1_t) \ne (\mathcal{F}^2_t)$ is the game with asymmetric information. One can have $(\mathcal{F}^1_t)=(\mathcal{F}_t)$, i.e., only the second player is uninformed (e.g., \cite{Grun2013}), or $(\mathcal{F}^1_t)\ne(\mathcal{F}_t)$ and $(\mathcal{F}^2_t) \ne (\mathcal{F}_t)$, i.e., both players access different information flows and neither of them has full knowledge of the underlying world (e.g., \cite{GenGrun2019}). In Section \ref{sec:examples}, we present in full detail how games with asymmetric information studied in the literature fit into our framework. As mentioned above, the concept of randomised stopping time is central in our work, so we introduce it here. For that we need to consider increasing processes: given a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$ let
\begin{align*} {\mathcal{A}^\circ} (\mathcal{G}_t):=&\,\{\rho\,:\,\text{$\rho$ is $(\mathcal{G}_t)$-adapted with $t\mapsto\rho_t(\omega)$ \cadlag,}\\ &\qquad\,\text{non-decreasing, $\rho_{0-}(\omega)=0$ and $\rho_T(\omega)=1$ for all $\omega\in\Omega$}\}. \end{align*}
In the definition of ${\mathcal{A}^\circ}(\mathcal{G}_t)$ we take the opportunity to require that the stated properties hold for all $\omega \in \Omega$. This leads to no loss of generality if $\mathcal{G}_0$ contains all $\mathbb{P}$-null sets of $\Omega$. Hence, for any $\omega\in \mathcal{N} \subset \Omega$ with $\mathbb{P}(\mathcal{N})=0$ we can simply set $\rho_t(\omega)=0$ for $t\in[0,T)$ and $\rho_T(\omega)=1$.
Recall that in the infinite-time horizon case, $T=\infty$, we understand $\rho_T$ as an $\mathcal{F}_{\infty}$-measurable random variable while $\rho_{T-}:=\lim_{t\to\infty}\rho_t$ (the limit exists because $(\rho_t)$ is non-decreasing and bounded). Randomised stopping times can be defined as follows.
\begin{definition}\label{def-rand-st} Given a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, a random variable $\eta$ is called a \emph{$(\mathcal{G}_t)$-randomised stopping time} if there exists a random variable $Z$ with uniform distribution $U([0,1])$, independent of $\mathcal{F}_T$, and a process $\rho\in{\mathcal{A}^\circ}(\mathcal{G}_t)$ such that
\begin{equation} \eta=\eta(\rho,Z)=\operatornamewithlimits{inf\vphantom{p}}\{t\in [0,T]: \rho_t > Z\}, \quad \mathbb{P}\as \label{eq-def-rand-st} \end{equation}
The variable $Z$ is called a \emph{randomisation device} for the randomised stopping time $\eta$, and the process $\rho$ is called the \emph{generating process}. The set of $(\mathcal{G}_t)$-randomised stopping times is denoted by $\mathcal{T}^R(\mathcal{G}_t)$. It is assumed that randomisation devices of different stopping times are independent. \end{definition}
We refer to \cite{Solanetal2012}, \cite{TouziVieille2002} for an extensive discussion on various definitions of randomised stopping times and conditions that are necessary for their equivalence. A simple illustration is given in the remark below. To avoid unnecessary complication of notation, we assume that the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ supports two independent random variables $Z_\tau$ and $Z_\sigma$ which are also independent of $\mathcal{F}_T$ and are the randomisation devices for the randomised stopping times $\tau$ and $\sigma$ of the two players.
\begin{definition}\label{def-value-rand-strat} Define
\begin{equation*} V_*:=\sup_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} \operatornamewithlimits{inf\vphantom{p}}_{\tau\in \mathcal{T}^R(\mathcal{F}^1_t)} N(\tau,\sigma)\quad\text{and}\quad V^*:= \operatornamewithlimits{inf\vphantom{p}}_{\tau\in \mathcal{T}^R(\mathcal{F}^1_t)}\sup_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} N(\tau,\sigma). \end{equation*}
The \emph{lower value} and {\em upper value of the game in randomised strategies} are given by $V_*$ and $V^*$, respectively. If they coincide, the game is said to have a \emph{value in randomised strategies} $V=V_*=V^*$. \end{definition}
The following theorem states the main result of this paper.
\begin{theorem} \label{thm:main2} Under assumptions \ref{eq-integrability-cond}, \ref{ass:regular_gen}, \ref{eq-order-cond}-\ref{ass:filtration}, the game has a value in randomised strategies. Moreover, if $\hat f$ and $\hat g$ in \ref{ass:regular_gen} are non-increasing and non-decreasing, respectively, there exists a pair $(\tau_*,\sigma_*)$ of optimal strategies. \end{theorem}
For clarity of presentation of our methodology, we first prove a theorem with more restrictive regularity properties of payoff processes and then show how to extend the proof to the general case of Theorem \ref{thm:main2}.
\begin{theorem} \label{thm:main} Under assumptions \ref{eq-integrability-cond}-\ref{ass:filtration}, the game has a value in randomised strategies and there exists a pair $(\tau_*,\sigma_*)$ of optimal strategies. \end{theorem}
Proofs of the above theorems are given in Section \ref{sec:sions}.
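The following elementary observation is included only as an illustration of Definition \ref{def-rand-st}: randomised stopping times subsume ordinary stopping times.
\begin{remark}
Let $\eta_0$ be a $(\mathcal{G}_t)$-stopping time with values in $[0,T]$ and set $\rho_t:=\ind{\{t\ge\eta_0\}}$, so that $\rho\in{\mathcal{A}^\circ}(\mathcal{G}_t)$. For any randomisation device $Z$ we then have
\[
\eta(\rho,Z)=\operatornamewithlimits{inf\vphantom{p}}\{t\in[0,T]:\ind{\{t\ge\eta_0\}}>Z\}=\eta_0,\qquad \mathbb{P}\as,
\]
because $Z<1$ $\mathbb{P}$\as In particular, $\mathcal{T}^R(\mathcal{G}_t)$ contains all $(\mathcal{G}_t)$-stopping times.
\end{remark}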
These proofs rely on two key results: an approximation procedure (Propositions \ref{thm:conv_lipsch} and \ref{thm:conv_lipsch_gen}) and an auxiliary game with `nice' regularity properties (Theorems \ref{th-value-cont-strat} and \ref{th-value-cont-strat_gen}), which enables the use of a known min-max theorem (Theorem \ref{th-the-Sion}). The $\sigma$-algebra $\mathcal{F}_0$ is not assumed to be trivial. It is therefore natural to consider a game in which players assess their strategies ex-post, i.e., after observing the information available to them at time $0$ when their first action may take place. Allowing for more generality, let $\mathcal{G}$ be a $\sigma$-algebra contained in $\mathcal{F}^1_0$ and in $\mathcal{F}^2_0$, i.e., containing only information available to both players at time $0$. The expected payoff of the game in this case takes the form (recall that $\tau,\sigma\in[0,T]$):
\begin{equation}\label{eqn:cond_func} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big] = \mathbb{E}\big[ f_{\tau} \ind{\{\tau<\sigma\}} + g_{\sigma} \ind{\{{\sigma}<{\tau}\}} + h_{\tau} \ind{\{\tau=\sigma\}}\big| \mathcal{G} \big]. \end{equation}
The proof of the following theorem is in Section \ref{sec:ef_functional}.
\begin{theorem}\label{thm:ef_0_value} Under the assumptions of Theorem \ref{thm:main2} and for any $\sigma$-algebra $\mathcal{G} \subseteq \mathcal{F}^1_0 \cap \mathcal{F}^2_0$, the $\mathcal{G}$-conditioned game has a value, i.e.,
\begin{equation}\label{eqn:value_ef0} \operatornamewithlimits{\mathrm{ess\,sup}}_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} \operatornamewithlimits{\mathrm{ess\,inf\vphantom{p}}}_{\tau\in \mathcal{T}^R(\mathcal{F}^1_t)} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big] = \operatornamewithlimits{\mathrm{ess\,inf\vphantom{p}}}_{\tau\in \mathcal{T}^R(\mathcal{F}^1_t)}\operatornamewithlimits{\mathrm{ess\,sup}}_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} \mathbb{E}\big[\mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big], \qquad \mathbb{P}\as \end{equation}
Moreover, if $\hat f$ and $\hat g$ in \ref{ass:regular_gen} are non-increasing and non-decreasing, respectively, there exists a pair $(\tau_*,\sigma_*)$ of optimal strategies in the sense that
\begin{equation}\label{eqn:saddleG} \mathbb{E}\big[ \mathcal{P}(\tau_*, \sigma) \big| \mathcal{G} \big] \le \mathbb{E}\big[ \mathcal{P}(\tau_*, \sigma_*) \big| \mathcal{G} \big] \le \mathbb{E}\big[ \mathcal{P}(\tau, \sigma_*) \big| \mathcal{G} \big], \qquad \mathbb{P}\as \end{equation}
for all other admissible pairs $(\tau,\sigma)$. \end{theorem}
\section{Examples}\label{sec:examples}
Before moving on to prove the theorems stated above, in this section we illustrate some of the specific games for which our general results apply. We draw from the existing literature on two-player zero-sum Dynkin games in continuous time and show that a broad class of these (all those we are aware of) fits within our framework. Since our contribution is mainly to the theory of games with partial/asymmetric information, we exclude the well-known case of games with full information, which has been extensively studied (see our literature review in the introduction).
\subsection{Game with partially observed scenarios}\label{subsec:game_1}
Our first example extends the setting of \cite{Grun2013} and it reduces to that case if $J=1$ and the {\em payoff processes} $f$, $g$ and $h$ are deterministic functions of an It\^o diffusion $(X_t)$ on $\mathbb{R}^d$, i.e., $f_t=f(t,X_t)$, $g_t=g(t,X_t)$ and $h_t=h(t, X_t)$.
On a discrete probability space $(\Omega^s, \mathcal{F}^s, \mathbb{P}^s)$, consider two random variables $\mcalI$ and $\mcalJ$ taking values in $\{1,\ldots,\I\}$ and in $\{1,\ldots,\J\}$, respectively. Denote their joint distribution by $(\pi_{i,j})_{i=1, \ldots, \I,j=1,\ldots,\J}$ so that $\pi_{i,j} = \mathbb{P}^s(\mcalI = i,\mcalJ=j)$. The indices $(i,j)$ are used to identify the {\em scenario} in which the game is played and are the key ingredient to model the asymmetric information feature. Consider another probability space $(\Omega^p, \mathcal{F}^p, \mathbb{P}^p)$ with a filtration $(\mathcal{F}^p_t)$ satisfying the usual conditions, and $(\mathcal{F}^p_t)$-adapted payoff processes $f^{i,j}$, $g^{i,j}$, $h^{i,j}$, with $(i,j)$ taking values in $\{1,\ldots,\I\} \times \{1,\ldots,\J\}$. For all $i,j$, we assume that $f^{i,j}$, $g^{i,j}$, $h^{i,j}$ satisfy conditions \ref{eq-integrability-cond}-\ref{eq-terminal-time-order-cond}. The game is set on the probability space $(\Omega, \mathcal{F}, \mathbb{P}) := (\Omega^p \times \Omega^s, \mathcal{F}^p \vee \mathcal{F}^s, \mathbb{P}^p \otimes \mathbb{P}^s)$. The first player is informed about the outcome of $\mcalI$ before the game starts but never directly observes $\mcalJ$. Hence, her actions are adapted to the filtration $\mathcal{F}^1_t = \mathcal{F}^p_t \vee \sigma(\mcalI)$. Conversely, the second player knows $\mcalJ$ but not $\mcalI$, so her actions are adapted to the filtration $\mathcal{F}^2_t = \mathcal{F}^p_t \vee \sigma(\mcalJ)$. Given a choice of random times $\tau\in \mathcal{T}^R(\mathcal{F}^1_t)$ and $\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)$ for the first and the second player, the payoff is
\begin{equation*} \mathcal{P} (\tau, \sigma) = f^{\mcalI,\mcalJ}_{\tau} \ind{\{\tau<\sigma\}} + g^{\mcalI,\mcalJ}_{\sigma} \ind{\{{\sigma}<{\tau}\}} + h^{\mcalI,\mcalJ}_\tau \ind{\{\tau = \sigma\}}. \end{equation*}
Players assess the game by looking at the expected payoff as in \eqref{eq-uninf-payoff}. It is worth noticing that this corresponds to the so-called `{\em ex-ante}' expected payoff, i.e., the expected payoff before the players acquire the additional information about the values of $\mcalI$ and $\mcalJ$. The structure of the game is common knowledge, i.e., both players know all processes $f^{i,j}$, $g^{i,j}$ and $h^{i,j}$ involved; however, they have partial and asymmetric knowledge of the pair $(i,j)$, which is drawn at the start of the game from the distribution of $(\mcalI,\mcalJ)$. Drawing a precise parallel with the framework of Section \ref{sec:setting}, the above setting corresponds to $f_t = f^{\mcalI,\mcalJ}_t$, $g_t = g^{\mcalI,\mcalJ}_t$, and $h_t = h_t^{\mcalI,\mcalJ}$ with the filtration $\mathcal{F}_t = \mathcal{F}^p_t \vee \sigma(\mcalI, \mcalJ)$. The observation flows for the players are given by $(\mathcal{F}^1_t)$ and $(\mathcal{F}^2_t)$, respectively. The particular structure of players' filtrations $(\mathcal{F}^1_t)$ and $(\mathcal{F}^2_t)$ allows for the following decomposition of randomised stopping times, see \cite[Proposition 3.3]{esmaeeli2018} (recall the randomisation devices $Z_\tau\sim U([0,1])$ and $Z_\sigma\sim U([0,1])$, which are mutually independent and independent of $\mathcal{F}_T$).
\begin{Lemma}\label{lem:tau_decomposition} Any $\tau \in \mathcal{T}^R(\mathcal{F}^1_t)$ has a representation
\begin{equation}\label{eqn:tau_decomposition} \tau = \sum_{i=1}^\I \ind{\{\mcalI = i\}} \tau_i, \end{equation}
where $\tau_1,\ldots,\tau_\I \in \mathcal{T}^R(\mathcal{F}^p_t)$, with generating processes $\xi^1,\ldots,\xi^\I \in {\mathcal{A}^\circ} (\mathcal{F}^p_t)$ and a common randomisation device $Z_\tau$. An analogous representation holds for $\sigma$ with $\sigma_1, \ldots, \sigma_\J \in \mathcal{T}^R(\mathcal{F}^p_t)$, generating processes $\zeta^1, \ldots, \zeta^\J \in {\mathcal{A}^\circ} (\mathcal{F}^p_t)$, and a common randomisation device $Z_\sigma$. \end{Lemma}
\begin{cor} Any $(\mathcal{F}^1_t)$-stopping time $\tau$ has a decomposition \eqref{eqn:tau_decomposition} with $\tau_1,\ldots,\tau_\I$ being $(\mathcal{F}^p_t)$-stopping times (and analogously for $(\mathcal{F}^2_t)$-stopping times). \end{cor}
Hence, given a realisation of the idiosyncratic scenario variable $\mcalI$ (resp.\ $\mcalJ$), the first (second) player chooses a randomised stopping time whose generating process is adapted to the common filtration $(\mathcal{F}^p_t)$. The resulting expected payoff can be written as
\begin{equation*} N(\tau, \sigma) = \sum_{i=1}^\I \sum_{j=1}^\J \pi_{i,j} \mathbb{E} \Big[ f^{i,j}_{\tau_i} \ind{\{\tau_i<\sigma_j\}}+ g^{i,j}_{\sigma_j} \ind{\{{\sigma_j}<{\tau_i}\}}+ h^{i,j}_{\tau_i} \ind{\{\tau_i = \sigma_j\}} \Big]. \end{equation*}
\subsection{Game with a single partially observed dynamics} \label{subsec:game_2}
Our second example generalises the set-ups of \cite{DGV2017} and \cite{DEG2020} and reduces to those cases when $J=2$, the time horizon is infinite and the payoff processes are (particular) time-homogeneous functions of a (particular) one-dimensional diffusion. Here the underlying dynamics of the game is a diffusion, whose drift depends on the realisation of an independent random variable $\mcalJ\in\{1,\ldots, J\}$. Formally, on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ we have a Brownian motion $(W_t)$ on $\mathbb{R}^d$, an independent random variable $\mcalJ\in\{1,\ldots, J\}$ with distribution $\pi_j=\mathbb{P}(\mcalJ=j)$, and a process $(X_t)$ on $\mathbb{R}^d$ with the dynamics
\[ dX_t=\sum_{j=1}^J \ind{\{\mcalJ=j\}} \mu_j(X_t)dt+\sigma(X_t)dW_t,\quad X_0=x, \]
where $\sigma$, $(\mu_j)_{j=1,\ldots,J}$ are given functions (known to both players) that guarantee existence of a unique strong solution of the SDE for each $j=1,\ldots,J$. The payoff processes are deterministic functions of the underlying process, i.e., $f_t=f(t,X_t)$, $g_t=g(t,X_t)$ and $h_t=h(t,X_t)$, and they are known to both players. We assume that the payoff processes satisfy conditions \ref{eq-integrability-cond}-\ref{eq-terminal-time-order-cond}. It is worth remarking that in the specific setting of \cite{DGV2017} the norms $\| f \|_{\mcalL}$ and $\| g \|_{\mcalL}$ are not finite, so that our results cannot be directly applied. However, the overall structure of the game in \cite{DGV2017} is easier than ours, so that some other special features of the payoff processes can be used to determine existence of the value therein. To draw a precise parallel with the notation from Section \ref{sec:setting}, here we take $\mathcal{F}_t=\mathcal{F}^W_t\vee\sigma(\mcalJ)$, where $(\mathcal{F}^W_t)$ is the filtration generated by the Brownian sample paths and augmented with $\mathbb{P}$-null sets.
Both players observe the dynamics of $X$; however, they have partial/asymmetric information on the value of $\mcalJ$. In \cite{DGV2017} neither of the two players knows the true value of $\mcalJ$, so we have $(\mathcal{F}^1_t)=(\mathcal{F}^2_t)=(\mathcal{F}^X_t)$, where $(\mathcal{F}^X_t)$ is generated by the sample paths of the process $X$ and it is augmented by the $\mathbb{P}$-null sets (notice that $\mathcal{F}^X_t\subsetneq \mathcal{F}_t$). In \cite{DEG2020} instead, the first player (minimiser) observes the true value of $\mcalJ$. In that case $(\mathcal{F}^1_t)=(\mathcal{F}_t)$ and $(\mathcal{F}^2_t)=(\mathcal{F}^X_t)$, so that $\mathcal{F}^2_t\subsetneq \mathcal{F}^1_t$. Using the notation $X^\mcalJ$ to emphasise the dependence of the underlying dynamics on $\mcalJ$, and given a choice of random times $\tau\in \mathcal{T}^R(\mathcal{F}^1_t)$ and $\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)$ for the first and the second player, the game's payoff reads
\begin{equation*} \mathcal{P} (\tau, \sigma) = f(\tau,X^\mcalJ_\tau) \ind{\{\tau<\sigma\}} + g (\sigma,X^\mcalJ_\sigma) \ind{\{{\sigma}<{\tau}\}} + h (\tau, X^\mcalJ_\tau) \ind{\{\tau = \sigma\}}. \end{equation*}
Players assess the game by looking at the expected payoff as in \eqref{eq-uninf-payoff}. Finally, we remark that under a number of (restrictive) technical assumptions and with infinite horizon \cite{DGV2017} and \cite{DEG2020} show the existence of a value and of a saddle point in a smaller class of strategies. In \cite{DGV2017} both players use $(\mathcal{F}^X_t)$-stopping times, with no need for additional randomisation. In \cite{DEG2020} the uninformed player uses $(\mathcal{F}^X_t)$-stopping times but the informed player uses $(\mathcal{F}_t)$-randomised stopping times.
\subsection{Game with two partially observed dynamics}
Here we show how the setting of \cite{GenGrun2019} also fits in our framework. This example is conceptually different from the previous two because the players observe two different stochastic processes. On a probability space $(\Omega,\mathcal{F},\mathbb{P})$ two processes $(X_t)$ and $(Y_t)$ are defined (in \cite{GenGrun2019} these are finite-state continuous-time Markov chains). The first player only observes the process $(X_t)$ while the second player only observes the process $(Y_t)$. In the notation of Section \ref{sec:setting}, we have $(\mathcal{F}^1_t)=(\mathcal{F}^X_t)$, $(\mathcal{F}^2_t)=(\mathcal{F}^Y_t)$ and $(\mathcal{F}_t)=(\mathcal{F}^X_t\vee\mathcal{F}^Y_t)$, where the filtration $(\mathcal{F}^X_t)$ is generated by the sample paths of $(X_t)$ and $(\mathcal{F}^Y_t)$ by those of $(Y_t)$ (both filtrations are augmented with $\mathbb{P}$-null sets). The payoff processes are deterministic functions of the underlying dynamics, i.e., $f_t=f(t,X_t,Y_t)$, $g_t=g(t,X_t,Y_t)$ and $h_t=h(t, X_t,Y_t)$, and they satisfy conditions \ref{eq-integrability-cond}-\ref{eq-terminal-time-order-cond}. Given a choice of random times $\tau \in \mathcal{T}^R(\mathcal{F}^1_t)$ and $\sigma\in \mathcal{T}^R(\mathcal{F}^2_t)$ for the first and the second player, the game's payoff reads
\begin{equation*} \mathcal{P} (\tau, \sigma) = f(\tau,X_\tau,Y_\tau) \ind{\{\tau<\sigma\}} + g (\sigma,X_\sigma,Y_\sigma) \ind{\{{\sigma}<{\tau}\}} + h (\tau, X_\tau,Y_\tau) \ind{\{\tau = \sigma\}}. \end{equation*}
Players assess the game by looking at the expected payoff as in \eqref{eq-uninf-payoff}.
We remark that the proof of existence of the value in \cite{GenGrun2019} is based on variational inequalities and relies on the finiteness of the state spaces of both underlying processes, and therefore cannot be extended to our general non-Markovian framework. \subsection{Game with a random horizon} Here we consider a non-Markovian extension of the framework of \cite{lempa2013}, where the time horizon of the game is exponentially distributed and independent of the payoff processes. On a probability space $(\Omega,\mathcal{F},\mathbb{P})$ we have a filtration $(\mathcal{G}_t)_{t\in[0,T]}$, augmented with $\mathbb{P}$-null sets, and a positive random variable $\theta$ which is independent of $\mathcal{G}_T$ and has a continuous distribution. Let $\Lambda_t:=\ind{\{t\ge \theta\}}$ and take $\mathcal{F}_t=\mathcal{G}_t\vee\sigma(\Lambda_s,\,0\le s\le t)$. The players have asymmetric knowledge of the random variable $\theta$. The first player observes the occurrence of $\theta$, whereas the second player does not. We have $(\mathcal{F}^1_t)=(\mathcal{F}_t)$ and $(\mathcal{F}^2_t)=(\mathcal{G}_t)\subsetneq (\mathcal{F}^1_t)$. Given a choice of random times $\tau \in \mathcal{T}^R(\mathcal{F}^1_t)$ and $\sigma\in \mathcal{T}^R(\mathcal{F}^2_t)$ for the first and the second player, the game's payoff reads \begin{align}\label{eq:PLM} \mathcal{P} (\tau, \sigma) &= \indd{\tau \wedge \sigma \le \theta} \big(f^0_\tau \ind{\{\tau<\sigma\}} + g^0_\sigma \ind{\{{\sigma}<{\tau}\}} + h^0_\tau \ind{\{\tau = \sigma\}} \big), \end{align} where $f^0$, $g^0$ and $h^0$ are $(\mathcal{G}_t)$-adapted processes that satisfy conditions \ref{eq-integrability-cond}-\ref{eq-terminal-time-order-cond} and $f^0 \ge 0$. Notice that the problem above does not fit directly into the framework of Section \ref{sec:setting}: Assumption \ref{eq-integrability-cond} is indeed violated, because the processes $(\indd{t \le \theta} f^0_t),(\indd{t \le \theta}g^0_t)$ are not c\`adl\`ag. However, we now show that the game can be equivalently formulated as a game satisfying the conditions of our framework. The expected payoff can be rewritten as follows: \begin{align*} N^0(\tau, \sigma) := \mathbb{E}\big[\mathcal{P} (\tau, \sigma) \big] &= \mathbb{E}\big[\indd{\tau \le \theta} \ind{\{\tau<\sigma\}} f^0_\tau + \indd{\sigma \le \theta} \ind{\{{\sigma}<{\tau}\}} g^0_\sigma + \indd{\sigma \le \theta} \ind{\{\tau = \sigma\}} h^0_\tau\big]\\ &= \mathbb{E}\big[\indd{\tau \le \theta} \ind{\{\tau<\sigma\}} f^0_\tau + \indd{\sigma < \theta} \ind{\{{\sigma}<{\tau}\}} g^0_\sigma + \indd{\sigma < \theta} \ind{\{\tau = \sigma\}} h^0_\tau\big], \end{align*} where the second equality holds because $\theta$ is continuously distributed and independent of $\mathcal{F}^2_T$, so $\mathbb{P}(\sigma=\theta) = 0$ for any $\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)$. Fix $\varepsilon > 0$ and set \begin{align*} f^\varepsilon_t:=f^0_{t}\ind{\{t<\theta+\varepsilon\}},\quad g_t:=g^0_t\ind{\{t < \theta\}}, \quad h_t:=h^0_t\ind{\{t < \theta\}}, \qquad t \in [0, T]. \end{align*} We see that conditions \ref{eq-integrability-cond}, \ref{eq-order-cond}, \ref{eq-terminal-time-order-cond} hold for the processes $(f^\varepsilon_t)$, $(g_t)$, $(h_t)$ (for condition \ref{eq-order-cond} we use that $f^0 \ge 0$).
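The effect of the regularisation can be visualised with the following sketch, in which the exponential law of $\theta$ and the concrete processes $f^0$, $g^0$, $h^0$ are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np

# Minimal sketch of the regularised payoffs on a time grid. The exponential
# law of theta and the concrete processes f0, g0, h0 are illustrative
# assumptions; f0 >= 0 as required above.
rng = np.random.default_rng(seed=3)

T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)
theta = rng.exponential(scale=0.5)      # continuous distribution, indep. of G_T

f0 = 2.0 + np.sin(2 * np.pi * t)        # f0 >= 0
g0 = np.cos(2 * np.pi * t)
h0 = 0.5 * (f0 + g0)

eps = 0.05
f_eps = f0 * (t < theta + eps)          # f^eps_t = f0_t 1{t < theta + eps}
g = g0 * (t < theta)                    # g_t    = g0_t 1{t < theta}
h = h0 * (t < theta)                    # h_t    = h0_t 1{t < theta}
# t -> 1{t < c} is right-continuous with left limits, so f_eps, g, h are
# cadlag, unlike the original processes t -> 1{t <= theta} f0_t.
\end{verbatim}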
Condition \ref{ass:regular} (regularity of payoffs $f^\varepsilon$ and $g$) is satisfied, because $\theta$ has a continuous distribution, so it is a totally inaccessible stopping time for the filtration $(\mathcal{F}_t)$ by \cite[Example VI.14.4]{RogersWilliams}. Therefore, by Theorem \ref{thm:main}, the game with expected payoff \[ N^\varepsilon(\tau, \sigma) =\mathbb{E}\big[\mathcal{P}^\varepsilon(\tau,\sigma)\big]:= \mathbb{E} \big[\ind{\{\tau<\sigma\}} f^\varepsilon_\tau + \ind{\{{\sigma}<{\tau}\}} g_\sigma + \ind{\{\tau = \sigma\}} h_\tau\big] \] has a value and a pair of optimal strategies exists. We now show that the game with expected payoff $N^0$ has the same value as the one with expected payoff $N^\varepsilon$, for any $\varepsilon > 0$. First observe that \begin{align*} N^\varepsilon(\tau, \sigma) - N^0(\tau, \sigma) = \mathbb{E}\big[\indd{\tau < \sigma} \indd{\theta < \tau < \theta + \varepsilon} f^0_\tau\big] \ge 0 \end{align*} by the assumption that $f^0 \ge 0$. Hence, \begin{equation}\label{eqn:N_eps_upper} \operatornamewithlimits{inf\vphantom{p}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} \sup_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} N^\varepsilon (\tau, \sigma) \ge \operatornamewithlimits{inf\vphantom{p}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} \sup_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} N^0 (\tau, \sigma). \end{equation} To derive the opposite inequality for the lower values, fix $\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)$. For $\tau \in \mathcal{T}^R(\mathcal{F}^1_t)$, define \[ \hat \tau = \begin{cases} \tau, & \tau \le \theta,\\ T, & \tau > \theta. \end{cases} \] Then, using that $\mathcal{P}^\varepsilon(\tau,\sigma)=\mathcal{P}(\tau,\sigma)$ on $\{\tau\le \theta\}$ and $\mathcal{P}^\varepsilon(T,\sigma)=g^0_\sigma\ind{\{\sigma<\theta\}}=\mathcal{P}(\tau,\sigma)$ on $\{\tau>\theta\}$, we have $N^\varepsilon(\hat \tau, \sigma) = N^0(\tau, \sigma)$. It then follows that \[ \operatornamewithlimits{inf\vphantom{p}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} N^\varepsilon (\tau, \sigma) \le \operatornamewithlimits{inf\vphantom{p}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} N^0 (\tau, \sigma), \] which implies \begin{equation}\label{eqn:N_eps_lower} \sup_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} \operatornamewithlimits{inf\vphantom{p}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} N^\varepsilon (\tau, \sigma) \le \sup_{\sigma \in \mathcal{T}^R(\mathcal{F}^2_t)} \operatornamewithlimits{inf\vphantom{p}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} N^0 (\tau, \sigma). \end{equation} Since the value of the game with expected payoff $N^\varepsilon$ exists, combining \eqref{eqn:N_eps_upper} and \eqref{eqn:N_eps_lower} we see that the value of the game with expected payoff $N^0$ also exists. It should be noted, though, that this does not imply that an optimal pair of strategies for $N^\varepsilon$ is optimal for $N^0$. It is worth noticing that in \cite{lempa2013} the setting is Markovian with $T=\infty$, $f^0_t=h^0_t=e^{-rt} \bar f(X_t)$, $g^0_t=e^{-rt} \bar g(X_t)$, $\bar f$, $\bar g$ deterministic functions, $r\ge 0$, $\theta$ exponentially distributed and $(X_t)$ a one-dimensional linear diffusion. Under specific technical requirements on the functions $\bar f$ and $\bar g$ the authors find that a pair of optimal strategies for the game \eqref{eq:PLM} exists when the first player uses $(\mathcal{F}^1_t)$-stopping times and the second player uses $(\mathcal{F}^2_t)$-stopping times (in the form of hitting times to thresholds), with no need for randomisation.
Their methods rely on the theory of one-dimensional linear diffusions (using scale function and speed measure) and free-boundary problems, hence do not admit an extension to a non-Markovian case. \section{Reformulation as a game of (singular) controls} \label{sec:reform} In order to integrate out the randomisation devices for $\tau$ and $\sigma$ and obtain a reformulation of the payoff functional $N(\tau, \sigma)$ in terms of generating processes for randomised stopping times $\tau$ and $\sigma$, we need the following two auxiliary lemmata. We remark that if $\eta$ is a $(\mathcal{G}_t)$-randomised stopping time for $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, then $\eta$ is also an $(\mathcal{F}_t)$-randomised stopping time. Therefore, the results below are formulated for $(\mathcal{F}_t)$-randomised stopping times. \begin{Lemma}\label{lem-eta-xi} Let $\eta\in\mathcal{T}^R(\mathcal{F}_t)$ with the generating process $(\rho_t)$. Then, for any $\mathcal{F}_T$-measurable random variable $\kappa$ with values in $[0,T]$, \begin{alignat}{3} &\mathbb{E}[\ind{\{\eta\le \kappa\}}|\mathcal{F}_T]=\rho_\kappa, \qquad &&\mathbb{E}[\ind{\{\eta>\kappa\}}|\mathcal{F}_T]=1-\rho_\kappa, \label{eq-xi-eta-1}\\ &\mathbb{E}[\ind{\{\eta<\kappa\}}|\mathcal{F}_T]=\rho_{\kappa-},\qquad &&\mathbb{E}[\ind{\{\eta\ge \kappa\}}|\mathcal{F}_T]=1-\rho_{\kappa-}. \label{eq-xi-eta-3} \end{alignat} \end{Lemma} \begin{proof} The proof of \eqref{eq-xi-eta-1} follows the lines of \cite[Proposition 3.1]{DEG2020}. Let $Z$ be the randomisation device for $\eta$. Since $\rho$ is right-continuous, non-decreasing and (\ref{eq-def-rand-st}) holds, we have \begin{equation*} \{\rho_\kappa > Z\}\subseteq \{\eta\le \kappa\}\subseteq\{\rho_\kappa\ge Z\}. \end{equation*} Using that $\rho_\kappa$ is $\mathcal{F}_T$-measurable, and $Z$ is uniformly distributed and independent of $\mathcal{F}_T$, we compute \begin{equation*} \mathbb{E}[\ind{\{\eta\le \kappa\}}|\mathcal{F}_T]\ge \mathbb{E}[\ind{\{\rho_\kappa> Z\}}|\mathcal{F}_T] = \int_0^1 \ind{\{\rho_\kappa> y\}} dy = \rho_\kappa, \end{equation*} and \begin{equation*} \mathbb{E}[\ind{\{\eta\le \kappa\}}|\mathcal{F}_T]\le \mathbb{E}[\ind{\{\rho_\kappa\ge Z\}}|\mathcal{F}_T] = \int_0^1 \ind{\{\rho_\kappa\ge y\}} dy = \rho_\kappa. \end{equation*} This completes the proof of the first equality in \eqref{eq-xi-eta-1}. The other one is a direct consequence. To prove $(\ref{eq-xi-eta-3})$, we observe that, by (\ref{eq-xi-eta-1}), for any $\varepsilon>0$ we have \[ \ind{\{\kappa>0\}}\mathbb{E}[\ind{\{\eta\le (\kappa-\varepsilon) \vee (\kappa/2)\}}|\mathcal{F}_T]=\ind{\{\kappa>0\}} \rho_{(\kappa-\varepsilon) \vee (\kappa/2)}. \] The dominated convergence theorem implies \begin{align*} \mathbb{E}[\ind{\{\eta< \kappa\}}|\mathcal{F}_T] &= \ind{\{\kappa > 0\}}\, \mathbb{E}[\ind{\{\eta< \kappa\}}|\mathcal{F}_T] = \lim_{\varepsilon\downarrow 0} \ind{\{\kappa>0\}}\,\mathbb{E}[\ind{\{\eta\le (\kappa-\varepsilon) \vee (\kappa/2)\}}|\mathcal{F}_T]\\ &= \lim_{\varepsilon\downarrow 0} \ind{\{\kappa>0\}}\, \rho_{(\kappa-\varepsilon) \vee (\kappa/2)} = \ind{\{\kappa>0\}}\, \rho_{\kappa-} = \rho_{\kappa-}, \end{align*} where in the last equality we used that $\rho_{0-}=0$. This proves the first equality in \eqref{eq-xi-eta-3}. The other one is a direct consequence. \end{proof} \begin{Lemma}\label{lem:integ_out} Let $\eta,\theta\in\mathcal{T}^R(\mathcal{F}_t)$ with generating processes $(\rho_t)$, $(\chi_t)$ and independent randomisation devices $Z_\eta$, $Z_\theta$.
For $(X_t)$ measurable, adapted and such that $\|X\|_{\mcalL}<\infty$ (but not necessarily c\`adl\`ag), we have \begin{align*} &\mathbb{E}\left[X_\eta \ind{\{\eta\le\theta\}\cap\{\eta<T\}}\right]=\mathbb{E}\left[\int_{[0, T)} X_t(1-\chi_{t-})d\rho_t\right],\\ &\mathbb{E}\left[X_\eta \ind{\{\eta<\theta\}}\right]=\mathbb{E}\left[\int_{[0, T)} X_t(1-\chi_{t})d\rho_t\right], \end{align*} where we use the notation $\int_{[0, T)}$ for the (pathwise) Lebesgue-Stieltjes integral. \label{lem-alt-repr-both-rand} \end{Lemma} \begin{proof} For $y\in[0,1)$, define a family of random variables \begin{equation*} q(y)=\operatornamewithlimits{inf\vphantom{p}}\{t\in [0, T]: \rho_t > y\}. \end{equation*} Then, $\eta=q(Z_\eta)$. Using that $Z_\eta \sim U(0,1)$ and Fubini's theorem, we see that \begin{align*} \mathbb{E}\left[X_\eta \ind{\{\eta\le\theta\}\cap\{\eta<T\}}\right]&=\mathbb{E}\left[\int_0^1 X_{q(y)} \ind{\{q(y)\le\theta\}\cap\{q(y)<T\}} dy\right]\\ &=\int_0^1 \mathbb{E}\left[\mathbb{E}\left[X_{q(y)} \ind{\{q(y)\le\theta\}\cap\{q(y)<T\}}|\mathcal{F}_T\right]\right]dy. \end{align*} Since $X_{q(y)} \ind{\{q(y)<T\}}$ is $\mathcal{F}_T$-measurable and the randomisation device $Z_\theta$ is independent of $\mathcal{F}_T$, we continue as follows: \begin{align*} \int_0^1 \mathbb{E}\left[\mathbb{E}\left[X_{q(y)} \ind{\{q(y)\le\theta\}\cap\{q(y)<T\}}|\mathcal{F}_T\right]\right]dy&=\int_0^1 \mathbb{E}\left[X_{q(y)} \ind{\{q(y)<T\}} \mathbb{E}[\ind{\{q(y)\le\theta\}} | \mathcal{F}_T]\right] dy\\ &=\mathbb{E}\left[\int_0^1 X_{q(y)} \ind{\{q(y)<T\}} (1-\chi_{q(y)-}) dy\right]\\ &=\mathbb{E}\left[\int_{[0, T)} X_t(1-\chi_{t-})d\rho_t\right], \end{align*} where in the second equality we apply Lemma \ref{lem-eta-xi} with $\kappa = q(y)$ and in the third equality we change the variable of integration applying \cite[Proposition 0.4.9]{revuzyor} $\omega$-wise and using the fact that the function $y \mapsto q(y)(\omega)$ is the generalised inverse of $t\mapsto \rho_t(\omega)$. The first statement of the lemma is now proved. For the second statement, we adapt the arguments above to write \begin{align*} \mathbb{E}\left[X_\eta \ind{\{\eta<\theta\}}\right] &= \int_0^1 \mathbb{E}\left[X_{q(y)} \mathbb{E}[\ind{\{q(y)<\theta\}} | \mathcal{F}_T]\right] dy =\mathbb{E}\left[\int_0^1 X_{q(y)} (1-\chi_{q(y)}) dy\right]\\ &=\mathbb{E}\left[\int_{[0, T]} X_t(1-\chi_{t})d\rho_t\right] =\mathbb{E}\left[\int_{[0, T)} X_t(1-\chi_{t})d\rho_t\right], \end{align*} where in the last equality we used that $\chi_T = 1$. \end{proof} \begin{cor}\label{cor:j} Under the assumptions of Lemma \ref{lem:integ_out}, we have \[ \mathbb{E}[X_\eta\ind{\{\eta=\theta\}}]=\mathbb{E}\bigg[\sum_{t\in[0,T]}X_t\Delta\rho_t\Delta\chi_t\bigg], \] where $\Delta \rho_t = \rho_t - \rho_{t-}$ and $\Delta \chi_t = \chi_t - \chi_{t-}$. \end{cor} \begin{proof} From Lemma \ref{lem-alt-repr-both-rand} we have \begin{align*} \mathbb{E}[X_\eta\ind{\{\eta=\theta\}\cap\{\eta<T\}}]=&\mathbb{E}\big[X_{\eta}\big(\ind{\{\eta\le\theta\}\cap\{\eta<T\}}-\ind{\{\eta<\theta\}}\big)\big]\\ =&\mathbb{E}\bigg[\int_{[0,T)}X_t\Delta\chi_t d\rho_t\bigg]=\mathbb{E}\bigg[\sum_{t\in[0,T)} X_t\Delta\chi_t \Delta\rho_t\bigg], \end{align*} where the final equality is due to the fact that $t\mapsto\chi_t(\omega)$ has countably many jumps for each $\omega \in \Omega$ and the continuous part of the measure $d\rho_t(\omega)$ puts no mass there.
Further, we notice that \begin{align*} \mathbb{E}\big[\ind{\{\eta=\theta=T\}}|\mathcal{F}_T\big] &= \lim_{n\to\infty}\mathbb{E}\big[\ind{\{\eta>T-1/n\}}\ind{\{\theta>T-1/n\}}|\mathcal{F}_T\big] = \lim_{n\to\infty}\mathbb{E}\big[\ind{\{\rho_{T-1/n}\le Z_\eta\}}\ind{\{\chi_{T-1/n}\le Z_\theta\}}|\mathcal{F}_T\big]\\ &= \lim_{n\to\infty}(1-\rho_{T-1/n})(1-\chi_{T-1/n}) = \Delta\rho_T\Delta\chi_T, \end{align*} where the second equality follows from \[ \{\rho_{T-1/n}<Z_\eta\}\subseteq\{\eta>T-\tfrac{1}{n}\}\subseteq\{\rho_{T-1/n}\le Z_\eta\}, \] and analogous inclusions for $\{\theta\!>\!T\!-\!\frac{1}{n}\}$. The third equality uses that $\rho_{T-1/n}$ and $\chi_{T-1/n}$ are $\mathcal{F}_T$-measurable, and $Z_\eta$, $Z_\theta$ are independent of $\mathcal{F}_T$. The final equality follows since $\rho_T=\chi_T=1$. Combining the above gives the desired result. \end{proof} Applying Lemma \ref{lem-alt-repr-both-rand} and Corollary \ref{cor:j} to \eqref{eqn:payoff} and \eqref{eq-uninf-payoff}, we obtain the following reformulation of the game. \begin{Proposition}\label{prop-functionals-equal} For $\tau\in \mathcal{T}^R (\mathcal{F}^1_t)$, $\sigma\in \mathcal{T}^R(\mathcal{F}^2_t)$, \begin{equation} N(\tau,\sigma)= \mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi_t + \int_{[0, T)} g_t(1-\xi_t)d\zeta_t + \sum_{t \in [0, T]} h_t \Delta\xi_t \Delta\zeta_t\bigg], \label{eq-functional-in-terms-of-controls} \end{equation} where $(\xi_t)$ and $(\zeta_t)$ are the generating processes for $\tau$ and $\sigma$, respectively. \end{Proposition} With a slight abuse of notation, we will denote the right-hand side of \eqref{eq-functional-in-terms-of-controls} by $N(\xi,\zeta)$. \begin{remark}\label{rem-Laraki-Solan} In Definition \ref{def-value-rand-strat} of the lower value, the infimum can always be replaced by the infimum over \emph{pure} stopping times (cf.\ \cite{LarakiSolan2005}). The same holds for the supremum in the definition of the upper value. Let us look at the upper value: take arbitrary $\tau\in \mathcal{T}^R(\mathcal{F}^1_t)$, $\sigma\in \mathcal{T}^R(\mathcal{F}^2_t)$, and define the family of stopping times \begin{equation*} q(y)=\operatornamewithlimits{inf\vphantom{p}}\{t\in [0,T]: \zeta_t > y\}, \qquad y \in [0,1), \end{equation*} similarly to the proof of Lemma \ref{lem-alt-repr-both-rand} and with $(\zeta_t)$ the generating process of $\sigma$. Then, \begin{equation*} N(\tau,\sigma)=\int_0^1 N(\tau,q(y)) dy \le \sup_{y\in[0,1)} N(\tau,q(y))\le \sup_{\sigma\in\mathcal{T}(\mathcal{F}^2_t)} N(\tau,\sigma), \end{equation*} where $\mathcal{T}(\mathcal{F}^2_t)$ denotes the set of pure $(\mathcal{F}^2_t)$-stopping times. Since $\mathcal{T}(\mathcal{F}^2_t) \subset \mathcal{T}^R(\mathcal{F}^2_t)$, we have \begin{equation*} \sup_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)}N(\tau,\sigma)= \sup_{\sigma\in\mathcal{T}(\mathcal{F}^2_t)} N(\tau,\sigma), \end{equation*} and, consequently, the `{\em inner}' optimisation can be done over pure stopping times: \begin{equation*} \operatornamewithlimits{inf\vphantom{p}}_{\tau\in\mathcal{T}^R(\mathcal{F}^1_t)}\sup_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)} N(\tau,\sigma)= \operatornamewithlimits{inf\vphantom{p}}_{\tau\in \mathcal{T}^R(\mathcal{F}^1_t)}\sup_{\sigma\in\mathcal{T}(\mathcal{F}^2_t)} N(\tau,\sigma).
\end{equation*} By the same argument one can show that \begin{equation*} \sup_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)} \operatornamewithlimits{inf\vphantom{p}}_{\tau\in\mathcal{T}^R(\mathcal{F}^1_t)} N(\tau,\sigma)= \sup_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)} \operatornamewithlimits{inf\vphantom{p}}_{\tau\in \mathcal{T}(\mathcal{F}^1_t)} N(\tau,\sigma). \end{equation*} However, in general an analogous result for the `{\em outer}' optimisation does not hold, i.e., \begin{equation*} \sup_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)} \operatornamewithlimits{inf\vphantom{p}}_{\tau\in \mathcal{T}^R(\mathcal{F}^1_t)} N(\tau,\sigma)\neq \sup_{\sigma\in\mathcal{T}(\mathcal{F}^2_t)} \operatornamewithlimits{inf\vphantom{p}}_{\tau\in \mathcal{T}^R(\mathcal{F}^1_t)} N(\tau,\sigma) \end{equation*} as shown by an example in Section \ref{sec:Nikita-examples}. \end{remark} \section{Sion's theorem and existence of value}\label{sec-Sion-existence-of-value}\label{sec:sions} The proofs of Theorems \ref{thm:main2} and \ref{thm:main}, i.e., that the game with payoff \eqref{eq-uninf-payoff} has a value in randomised strategies, utilise Sion's min-max theorem \cite{Sion1958} (see also \cite{Komiya1988} for a simple proof). The idea of relying on Sion's theorem comes from \cite{TouziVieille2002} where the authors study zero-sum Dynkin games with full and symmetric information. Here, however, we need different key technical arguments as explained in, e.g., Remark \ref{rem-TV-norm-doesnt-work} below. Let us start by recalling Sion's theorem. \begin{theorem}[Sion's theorem]\label{th-the-Sion} \cite[Corollary 3.3]{Sion1958} Let $A$ and $B$ be convex subsets of a linear topological space, one of which is compact. Let $\varphi: A\times B \to \mathbb{R}$ be a function that is quasi-concave and upper semi-continuous in $\mu$ for each $\nu\in B$, and quasi-convex and lower semi-continuous in $\nu$ for each $\mu\in A$. Then, \begin{equation*} \sup_{\mu\in A}\operatornamewithlimits{inf\vphantom{p}}_{\nu\in B} \varphi(\mu,\nu)=\operatornamewithlimits{inf\vphantom{p}}_{\nu\in B}\sup_{\mu\in A} \varphi(\mu,\nu). \end{equation*} \end{theorem} The key step in applying Sion's theorem is to find a topology on the set of randomised stopping times, or, equivalently, on the set of corresponding generating processes, so that the functional $N(\cdot, \cdot)$ satisfies the assumptions. We will use the weak topology of \[ \mathcal{S} := L^2 \big([0, T] \times \Omega, \mathcal{B}([0, T]) \times \mathcal{F}, \lambda \times \mathbb{P}\big), \] where $\lambda$ denotes the Lebesgue measure on $[0, T]$. Given a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, in addition to the class of increasing processes ${\mathcal{A}^\circ}(\mathcal{G}_t)$ introduced in Section \ref{sec:setting}, here we also need \begin{align*} {\mathcal{A}^\circ_{ac}} (\mathcal{G}_t) :=&\,\{\rho\in {\mathcal{A}^\circ}(\mathcal{G}_t):\,\text{$t\mapsto\rho_t(\omega)$ is absolutely continuous on $[0,T)$ for all $\omega\in\Omega$}\}. \end{align*} It is important to notice that $\rho\in{\mathcal{A}^\circ_{ac}}(\mathcal{G}_t)$ may have a jump at time $T$ if \[ \rho_{T-}(\omega):=\lim_{t\uparrow T}\int_0^t\big(\tfrac{d}{d t}\rho_s\big)(\omega)d s<1=\rho_T(\omega). \] As with ${\mathcal{A}^\circ}(\mathcal{G}_t)$, in the definition of ${\mathcal{A}^\circ_{ac}}(\mathcal{G}_t)$ we require that the stated properties hold for all $\omega \in \Omega$, which causes no loss of generality if $\mathcal{G}_0$ contains all $\mathbb{P}$-null sets of $\Omega$.
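To fix ideas, the sketch below samples a randomised stopping time $\tau=\operatornamewithlimits{inf\vphantom{p}}\{t:\rho_t>Z\}$ from a generating process in ${\mathcal{A}^\circ_{ac}}(\mathcal{G}_t)$ that is absolutely continuous on $[0,T)$ and jumps to one at $T$. The exponential shape of $\rho$ and all numerical values are assumptions made for illustration; the empirical mass of $\tau$ at $T$ matches the size of the terminal jump.
\begin{verbatim}
import numpy as np

# Minimal sketch: sampling a randomised stopping time tau = inf{t : rho_t > Z}
# from a generating process that is absolutely continuous on [0, T) and jumps
# to one at T. The shape rho_t = 1 - exp(-lam t) is an illustrative assumption.
rng = np.random.default_rng(seed=4)

T, lam = 1.0, 2.0
grid = np.linspace(0.0, T, 100_001)
rho = np.where(grid < T, 1.0 - np.exp(-lam * grid), 1.0)   # rho_T = 1

def q(y):
    """Generalised inverse q(y) = inf{t in [0, T] : rho_t > y}."""
    return grid[np.searchsorted(rho > y, True)]

Z = rng.uniform(size=100_000)            # randomisation device, U(0, 1)
tau = np.array([q(z) for z in Z])

print("P(tau = T) ~", np.mean(tau == T)) # should be close to exp(-lam * T)
print("exp(-lam*T) =", np.exp(-lam * T))
\end{verbatim}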
It is clear that ${\mathcal{A}^\circ_{ac}} (\mathcal{G}_t) \subset{\mathcal{A}^\circ}(\mathcal{G}_t) \subset\mathcal{S}$. For reasons that will become clear later (e.g., see Lemma \ref{lem-strat-set-compact}), we prefer to work with slightly more general processes than those in ${\mathcal{A}^\circ}(\mathcal{G}_t)$ and ${\mathcal{A}^\circ_{ac}}(\mathcal{G}_t)$. Let us denote \begin{align*} \mathcal{A}(\mathcal{G}_t) :=&\, \{ \rho \in \mathcal{S} : \,\exists\; \hat\rho\in{\mathcal{A}^\circ} (\mathcal{G}_t) \,\text{such that $\rho = \hat \rho$ for $(\lambda \times \mathbb{P})$\ae $(t,\omega)\in[0,T]\times\Omega$}\},\\ \mathcal{A}_{ac}(\mathcal{G}_t) := &\,\{ \rho \in \mathcal{S} : \,\exists\; \hat\rho\in{\mathcal{A}^\circ_{ac}} (\mathcal{G}_t) \,\text{such that $\rho = \hat \rho$ for $(\lambda \times \mathbb{P})$\ae $(t,\omega)\in[0,T]\times\Omega$}\}. \end{align*} We will call $\hat \rho$ in the definition of the set $\mathcal{A}$ (and $\mathcal{A}_{ac}$) the \emph{c\`adl\`ag} (and {\em absolutely continuous}) {\em representative} of $\rho$. Although it is not unique, all c\`adl\`ag representatives are indistinguishable (Lemma \ref{lem:cadlag_indis}). Hence, all c\`adl\`ag representatives $\hat\rho$ of $\rho\in\mathcal{A}$ define the same positive measure on $[0,T]$ for $\mathbb{P}$\ae $\omega\in\Omega$ via a non-decreasing mapping $t\mapsto\hat\rho_t(\omega)$. Then, given any bounded measurable process $(X_t)$ the stochastic process (Lebesgue-Stieltjes integral) \begin{equation*} t \mapsto \int_{[0, t]} X_s\, d\hat \rho_s, \qquad t \in [0, T], \end{equation*} does not depend on the choice of the c\`adl\`ag representative $\hat \rho$ in the sense that it is defined up to indistinguishability. The next definition connects the randomised stopping times that we use in the construction of the game's payoff (Proposition \ref{prop-functionals-equal}) with processes from the classes $\mathcal{A}(\mathcal{F}^1_t)$ and $\mathcal{A}(\mathcal{F}^2_t)$. Note that $\mathcal{A}(\mathcal{G}_t) \subseteq \mathcal{A}(\mathcal{F}_t)$ whenever $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, so the definition can be stated for $\mathcal{A}(\mathcal{F}_t)$ without any loss of generality. \begin{definition}\label{def:integral} Let $(X_t)$ be measurable and such that $\|X\|_{\mcalL}\!<\!\infty$ (not necessarily c\`adl\`ag). For $\chi,\rho \in \mathcal{A}(\mathcal{F}_t)$, we define the Lebesgue-Stieltjes integral processes \[ t \mapsto \int_{[0, t]} X_s\, d\rho_s,\quad t\mapsto\int_{[0, t]} X_s\,(1-\chi_{s}) d\rho_s\quad\text{and}\quad t\mapsto\int_{[0, t]} X_s\,(1-\chi_{s-}) d\rho_s, \qquad t \in [0, T], \] by \[ t \mapsto \int_{[0, t]} X_s\, d\hat{\rho}_s,\quad t\mapsto\int_{[0, t]} X_s\,(1-\hat{\chi}_{s}) d\hat{\rho}_s\quad\text{and}\quad t\mapsto\int_{[0, t]} X_s\,(1-\hat{\chi}_{s-}) d\hat{\rho}_s, \qquad t \in [0, T], \] for any choice of the c\`adl\`ag representatives $\hat \rho$ and $\hat \chi$, uniquely up to indistinguishability. \end{definition} With a slight abuse of notation we define a functional $N: \mathcal{A}(\mathcal{F}^1_t) \times \mathcal{A}(\mathcal{F}^2_t) \to \mathbb{R}$ by the right-hand side of \eqref{eq-functional-in-terms-of-controls}.
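The functional $N(\xi,\zeta)$ can be approximated numerically by discretising the Lebesgue-Stieltjes integrals in \eqref{eq-functional-in-terms-of-controls}. The sketch below does this for a single deterministic ``$\omega$''; the payoff and generating processes are illustrative assumptions, and in practice the outer expectation would be a Monte Carlo average over sampled paths.
\begin{verbatim}
import numpy as np

# Minimal sketch: discretisation of the functional N(xi, zeta) from the
# reformulated payoff, for one deterministic "omega". The payoffs and
# generating processes below are illustrative assumptions only.
n, T = 1000, 1.0
t = np.linspace(0.0, T, n + 1)

f = 2.0 + np.sin(2 * np.pi * t)             # illustrative payoffs, g <= h <= f
g = np.cos(2 * np.pi * t)
h = 0.5 * (f + g)

xi = t / T                                  # generating process of tau (continuous)
zeta = np.where(t < 0.5 * T, 0.0, 1.0)      # generating process of sigma (jump at T/2)

dxi = np.diff(xi, prepend=0.0)              # increments approximating d(xi_t)
dzeta = np.diff(zeta, prepend=0.0)          # single unit jump at t = T/2

N = (np.sum(f * (1.0 - zeta) * dxi)         # int_[0,T) f_t (1 - zeta_t) d xi_t
     + np.sum(g * (1.0 - xi) * dzeta)       # int_[0,T) g_t (1 - xi_t) d zeta_t
     + np.sum(h * dxi * dzeta))             # simultaneous-jump term (here ~ 0)
print("N(xi, zeta) ~", N)
\end{verbatim}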
It is immediate to verify using Definition \ref{def-value-rand-strat} and Proposition \ref{prop-functionals-equal} that the lower and the upper value of our game satisfy \begin{align}\label{eq:VV} V_{*}=\sup_{\zeta\in\mathcal{A}(\mathcal{F}^2_t)}\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}(\mathcal{F}^1_t)} N(\xi,\zeta), \qquad V^*=\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}(\mathcal{F}^1_t)} \sup_{\zeta\in\mathcal{A}(\mathcal{F}^2_t)} N(\xi,\zeta). \end{align} Notice that even though according to Definition \ref{def-rand-st} the couple $(\xi,\zeta)$ should be taken in ${\mathcal{A}^\circ}(\mathcal{F}^1_t)\times{\mathcal{A}^\circ}(\mathcal{F}^2_t)$, in \eqref{eq:VV} we consider $(\xi,\zeta)\in \mathcal{A}(\mathcal{F}^1_t)\times\mathcal{A}(\mathcal{F}^2_t)$. This causes no inconsistency thanks to the discussion above and Definition \ref{def:integral} for the integrals. \begin{remark} The mapping $\mathcal{A}(\mathcal{F}^1_t) \times \mathcal{A}(\mathcal{F}^2_t) \ni (\xi, \zeta) \mapsto N(\xi, \zeta)$ does not satisfy the conditions of Sion's theorem under the strong or the weak topology of $\mathcal{S}$. Indeed, taking $\xi^n_t = \ind{\{t \ge T/2 + 1/n\}}$, we have $\xi^n_t \to \ind{\{t \ge T/2\}}=:\xi_t$ for $\lambda$\ae $t \in [0, T]$, so that by the dominated convergence theorem $(\xi^n)$ also converges to $\xi$ in $\mathcal{S}$. Then, fixing $\zeta_t = \ind{\{t \ge T/2\}}$ in $\mathcal{A}(\mathcal{F}^2_t)$ we have $N(\xi^n, \zeta) = \mathbb{E}[g_{T/2}]$ for all $n\ge 1$ whereas $N(\xi, \zeta) =\mathbb{E}[ h_{T/2} ]$. So the lower semicontinuity of $\xi \mapsto N(\xi, \zeta)$ cannot be ensured if, for example, $\mathbb{P}(h_{T/2}>g_{T/2})>0$. \end{remark} Due to the issues indicated in the above remark, as in \cite{TouziVieille2002}, we `smoothen' the control strategy of one player in order to introduce additional regularity in the payoff. We will show that this procedure does not change the value of the game (Proposition \ref{thm:conv_lipsch}). We choose (arbitrarily and with no loss of generality, thanks to Remark \ref{rem:ineq}) to consider an auxiliary game in which the first player can only use controls from $\mathcal{A}_{ac} (\mathcal{F}^1_t)$. Let us define the associated upper/lower values: \begin{equation}\label{eq-value-cont-restriction} W_{*}=\sup_{\zeta\in\mathcal{A}(\mathcal{F}^2_t)}\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}(\mathcal{F}^1_t)} N(\xi,\zeta)\quad\text{and}\quad W^*=\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}(\mathcal{F}^1_t)} \sup_{\zeta\in\mathcal{A}(\mathcal{F}^2_t)} N(\xi,\zeta). \end{equation} Here, we work under the regularity assumption \ref{ass:regular} on the payoff processes. Relaxation of this assumption is carried out in Section \ref{sec:relax}. The main results can be distilled into the following theorem and proposition: \begin{theorem}\label{th-value-cont-strat} Under assumptions \ref{eq-integrability-cond}-\ref{ass:filtration}, the game (\ref{eq-value-cont-restriction}) has a value, i.e., \begin{equation*} W_{*}=W^{*}:=W. \end{equation*} Moreover, the $\zeta$-player (maximiser) has an optimal strategy, i.e., there exists $\zeta^*\in\mathcal{A}(\mathcal{F}^2_t)$ such that \begin{equation*} \operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}(\mathcal{F}^1_t)} N(\xi,\zeta^*)=W.
\end{equation*} \end{theorem} \begin{Proposition}\label{thm:conv_lipsch} Under assumptions \ref{eq-integrability-cond}-\ref{ass:filtration}, for any $\zeta \in \mathcal{A}(\mathcal{F}^2_t)$ and $\xi \in \mathcal{A}(\mathcal{F}^1_t)$, there is a sequence $\xi^n \in \mathcal{A}_{ac}(\mathcal{F}^1_t)$ such that \[ \mathop{\lim\sup}_{n \to \infty} N(\xi^n, \zeta) \le N(\xi, \zeta). \] \end{Proposition} The proofs of the above results are given in the following subsections: Section \ref{sec:tech} contains a series of technical results which we then use to prove Theorem \ref{th-value-cont-strat} (in Section \ref{sec:verif}) and Proposition \ref{thm:conv_lipsch} (in Section \ref{sec:approx}). With the results from Theorem \ref{th-value-cont-strat} and Proposition \ref{thm:conv_lipsch} in place we can provide a (simple) proof of Theorem \ref{thm:main}. \begin{proof}[{\bf Proof of Theorem \ref{thm:main}}] Obviously, $V_* \le W_*$ and $V^* \le W^*$. However, Proposition \ref{thm:conv_lipsch} implies that \begin{align}\label{eq:W*} \operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}(\mathcal{F}^1_t)} N(\xi,\zeta) = \operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}(\mathcal{F}^1_t)} N(\xi,\zeta)\quad\text{for any $\zeta\in\mathcal{A}(\mathcal{F}^2_t)$}, \end{align} so $V_* \ge W_*$ and therefore $V_* = W_*$. Then, thanks to Theorem \ref{th-value-cont-strat}, we have a sequence of inequalities which completes the proof of existence of the value \[ W = W_* = V_* \le V^* \le W^* = W. \] In \eqref{eq:W*} we can choose $\zeta^*$ which is optimal for $W$ (its existence is guaranteed by Theorem \ref{th-value-cont-strat}). Then, \[ V=V_*=\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}(\mathcal{F}^1_t)} N(\xi,\zeta^*). \] Thanks to Remark \ref{rem:ineq}, we can repeat the same arguments as above with the roles of the two players swapped as in \eqref{eq:swap}, i.e., the $\tau$-player ($\xi$-player) is the maximiser and the $\sigma$-player ($\zeta$-player) is the minimiser. Thus, applying again Theorem \ref{th-value-cont-strat} and Proposition \ref{thm:conv_lipsch} (with $\mathcal{P}'$ as in Remark \ref{rem:ineq} in place of $\mathcal{P}$) we arrive at \[ -V=:V'=\operatornamewithlimits{inf\vphantom{p}}_{\zeta\in\mathcal{A}(\mathcal{F}^2_t)} \mathbb{E}\big[\mathcal{P}'(\xi^*,\zeta)\big], \] where $\xi^*\in\mathcal{A}(\mathcal{F}^1_t)$ is optimal for the maximiser in the game with value $W'=-W$. Hence $\xi^*$ is optimal for the minimiser in the original game with value $V$ and the couple $(\xi^*,\zeta^*)\in\mathcal{A}(\mathcal{F}^1_t)\times\mathcal{A}(\mathcal{F}^2_t)$ is a saddle point. The corresponding randomised stopping times, denoted $(\tau_*,\sigma_*)$, are an optimal pair for the players. \end{proof} \subsection{Technical results}\label{sec:tech} In this section we give a series of results concerning the convergence of integrals when either the integrand or the integrator converges in a suitable sense. We start by stating a technical lemma whose easy proof is omitted. \begin{Lemma}\label{lem:cadlag_indis} Let $(X_t)$ and $(Y_t)$ be c\`adl\`ag measurable processes such that $X_t = Y_t$, $\mathbb{P}$\as for $t$ in a countable dense set $D \subset [0, T)$, $X_{0-} = Y_{0-}$ and $X_T = Y_T$, $\mathbb{P}$\as Then $(X_t)$ is indistinguishable from $(Y_t)$.
\end{Lemma} \begin{definition}\label{def:def_C} Given a c\`adl\`ag measurable process $(X_t)$, for each $\omega\in\Omega$ we denote \[ C_X(\omega):= \{ t\in[0,T]: X_{t-}(\omega)=X_t(\omega) \}. \] \end{definition} Our next result tells us that the convergence $(\lambda\times\mathbb{P})$\ae of processes in $\mathcal{A} (\mathcal{G}_t)$ can be lifted to $\mathbb{P}$\as convergence at all points of continuity of the corresponding c\`adl\`ag representatives. \begin{Lemma}\label{lem:cadlag_convergence} For a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, let $(\rho^n)_{n\ge 1}\subset\mathcal{A}(\mathcal{G}_t)$ and $\rho \in \mathcal{A}(\mathcal{G}_t)$ with $\rho^n \to \rho$ $(\lambda \times \mathbb{P})$\ae as $n\to\infty$. Then for any c\`adl\`ag representatives $\hat \rho^n$ and $\hat \rho$ we have \begin{equation}\label{eqn:cadlag_convergence} \mathbb{P}\Big(\big\{\omega \in \Omega:\ \lim_{n\to\infty}\hat \rho^n_t(\omega)= \hat \rho_t(\omega) \:\:\text{for all $t\in C_{\hat \rho}(\omega)$}\big\}\Big) = 1. \end{equation} \end{Lemma} \begin{proof} The $(\lambda \times \mathbb{P})$\ae convergence of $\rho^n$ to $\rho$ means that the c\`adl\`ag representatives $\hat \rho^n$ converge to $\hat \rho$ also $(\lambda \times \mathbb{P})$\ae. Hence, there is a set $D \subset [0, T]$ with $\lambda([0,T]\setminus D) = 0$ such that $\hat \rho^n_t \to \hat \rho_t$ $\mathbb{P}$\as for $t \in D$. Since $\lambda([0,T]\setminus D) = 0$, there is a countable subset $D_0 \subset D$ that is dense in $[0, T]$. Define \[ \Omega_0 := \{ \omega \in \Omega:\ \hat \rho^n_t (\omega) \to \hat \rho_t (\omega)\:\: \text{for all $t \in D_0$}\}. \] Then $\mathbb{P}(\Omega_0) = 1$. Now, fix $\omega\in\Omega_0$ and let $t\in C_{\hat \rho}(\omega) \cap (0, T)$. Take an increasing sequence $(t^1_k)_{k\ge 1}\subset D_0$ and a decreasing one $(t^2_k)_{k\ge 1}\subset D_0$, both converging to $t$ as $k\to\infty$. Then we have \begin{equation}\label{eqn:upward_conv} \hat \rho_t(\omega)=\lim_{k\to\infty}\hat \rho_{t^2_k}(\omega)=\lim_{k\to\infty}\lim_{n\to\infty}\hat \rho^n_{t^2_k}(\omega)\ge \mathop{\lim\sup}_{n\to\infty}\hat \rho^n_t(\omega), \end{equation} where in the final inequality we use that $\hat \rho^n_{t^2_k}(\omega)\ge \hat\rho^n_t(\omega)$ by monotonicity. By analogous arguments we also obtain \[ \hat \rho_t(\omega)=\lim_{k\to\infty}\hat \rho_{t^1_k}(\omega)=\lim_{k\to\infty}\lim_{n\to\infty}\hat \rho^n_{t^1_k}(\omega)\le \mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{n\to\infty}\hat \rho^n_t(\omega), \] where the first equality holds because $t\in C_{\hat \rho}(\omega)$. Combining the above we get \eqref{eqn:cadlag_convergence} (apart from $t\in\{0,T\}$) by recalling that $\omega\in\Omega_0$ and $\mathbb{P}(\Omega_0)=1$. The convergence at $t = T$, irrespective of whether it belongs to $C_{\hat\rho}(\omega)$, is trivial as $\hat \rho^n_T(\omega) = \hat \rho_T(\omega) = 1$. If $0 \in C_{\hat\rho}(\omega)$, then $\hat \rho_0 (\omega) = \hat \rho_{0-}(\omega) = 0$. Inequality \eqref{eqn:upward_conv} reads $0 = \hat \rho_0 (\omega) \ge \mathop{\lim\sup}_{n \to \infty} \hat \rho^n_0(\omega)$. Since $\hat \rho^n_0(\omega) \ge 0$, this proves that $\hat\rho^n_0(\omega) \to \hat \rho_0(\omega)=0$.
\end{proof} \begin{Lemma}\label{prop-terminal-time-jump-limit} For a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, let $(\rho^n)_{n\ge 1}\subset{\mathcal{A}^\circ}(\mathcal{G}_t)$ and $\rho\in{\mathcal{A}^\circ}(\mathcal{G}_t)$ with $\rho^n\to \rho$ $(\lambda\times\mathbb{P})$\ae as $n\to\infty$. For any $t \in [0, T]$ and any random variable $X \ge 0$ with $\mathbb{E}[X]<\infty$, we have \begin{equation*} \mathop{\lim\sup}_{n\to\infty} \mathbb{E}[X\Delta \rho^n_t]\le \mathbb{E}[X\Delta \rho_t]. \end{equation*} \end{Lemma} \begin{proof} Fix $t \in (0, T)$. Since $\rho^{n}\to\rho$ $(\lambda\times\mathbb{P})$\ae, i.e., $\int_0^T \mathbb{P}\big(\lim_{n\to\infty}\rho^{n}_t=\rho_t\big) d t=T$, there is a decreasing sequence $\delta_m \to 0$ such that \begin{equation*} \lim_{n\to\infty} \rho^{n}_{t-\delta_m} = \rho_{t-\delta_m}, \qquad \lim_{n\to\infty} \rho^{n}_{t+\delta_m} = \rho_{t+\delta_m},\qquad \mathbb{P}\as \end{equation*} Then, by the dominated convergence theorem, \begin{align*} \mathbb{E}[X\Delta \rho_t]&=\lim_{m \to \infty} \mathbb{E}[ X (\rho_{t + \delta_m} - \rho_{t - \delta_m})]\\ &= \lim_{m\to\infty}\lim_{n\to\infty} \mathbb{E}[X(\rho^{n}_{t+\delta_m}-\rho^{n}_{t-\delta_m})]\\ &= \lim_{m\to\infty}\mathop{\lim\sup}_{n\to\infty} \mathbb{E}[X(\rho^{n}_{t+\delta_m}-\rho^{n}_{t-\delta_m})]\\ &= \lim_{m\to\infty}\mathop{\lim\sup}_{n\to\infty} \mathbb{E}[X(\rho^{n}_{t+\delta_m}-\rho^{n}_{t} + \rho^{n}_{t-} - \rho^{n}_{t-\delta_m} + \Delta \rho^{n}_{t})] \ge \mathop{\lim\sup}_{n\to\infty} \mathbb{E}[X \Delta \rho^{n}_t], \end{align*} where the last inequality is due to $t\mapsto\rho^{n}_t$ being non-decreasing. This finishes the proof for $t\in(0,T)$. The proof for $t \in\{ 0, T\}$ is a simplified version of the argument above, since $\rho^n_T=\rho_T=1$ and $\rho^n_{0-}=\rho_{0-}=0$, $\mathbb{P}$\as \end{proof} We need to consider a slightly larger class of processes ${\tilde{\mathcal{A}}^\circ}(\mathcal{G}_t) \supset {\mathcal{A}^\circ}(\mathcal{G}_t)$ defined by \begin{align*} {\tilde{\mathcal{A}}^\circ}(\mathcal{G}_t):=&\,\{\rho\,:\,\text{$\rho$ is $(\mathcal{G}_t)$-adapted with $t\mapsto\rho_t(\omega)$ c\`adl\`ag,}\\ &\qquad\,\text{non-decreasing, $\rho_{0-}(\omega)=0$ and $\rho_T(\omega)\le 1$ for all $\omega\in\Omega$}\}. \end{align*} \begin{Proposition}\label{prop:r-convergence} For a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, let $(\rho^n)_{n\ge 1}\subset{\tilde{\mathcal{A}}^\circ}(\mathcal{G}_t)$ and $\rho\in{\tilde{\mathcal{A}}^\circ}(\mathcal{G}_t)$. Assume \[ \mathbb{P}\Big(\big\{\omega \in \Omega:\ \lim_{n\to\infty}\rho^n_t(\omega)=\rho_t(\omega)\:\:\text{for all $t\in C_\rho(\omega)\cup\{T\}$} \big\} \Big) = 1. \] Then for any $X\in\mcalL$ that is also $(\mathcal{F}_t)$-adapted and regular, we have \begin{equation}\label{eqn:b_t} \lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T]} X_t d\rho^n_t\bigg] = \mathbb{E}\bigg[\int_{[0, T]} X_t d\rho_t\bigg]. \end{equation} \end{Proposition} \begin{proof} Let us first assume that $(X_t) \in \mcalL$ has continuous trajectories (but is not necessarily adapted). If we prove that \begin{equation}\label{eqn:b_t_omega} \lim_{n\to\infty}\int_{[0, T]} X_t(\omega) d \rho^n_t(\omega)= \int_{[0, T]} X_t(\omega) d \rho_t (\omega),\quad\text{for $\mathbb{P}$\ae $\omega\in\Omega$,} \end{equation} then the result in \eqref{eqn:b_t} will follow by the dominated convergence theorem.
By assumption there is $\Omega_0\subset \Omega$ with $\mathbb{P}(\Omega_0)=1$ and such that $\rho^n_t(\omega)\to\rho_t(\omega)$ at all points of continuity of $t\mapsto \rho_t(\omega)$ and at the terminal time $T$ for all $\omega \in \Omega_0$. Since $d\rho^n_t(\omega)$ and $d\rho_t(\omega)$ define positive measures on $[0,T]$ for each $\omega\in\Omega_0$, the convergence of integrals in \eqref{eqn:b_t_omega} can be deduced from the weak convergence of finite measures, see \cite[Remark III.1.2]{Shiryaev}. Indeed, if $\omega\in\Omega_0$ is such that $\rho_T(\omega)=0$, the right-hand side of \eqref{eqn:b_t_omega} is zero and we have \begin{equation*} \mathop{\lim\sup}_{n\to\infty}\left|\int_{[0,T]} X_t(\omega) d \rho^n_t(\omega)\right| \le \mathop{\lim\sup}_{n\to\infty}\sup_{t \in [0, T]} |X_t(\omega)| \rho^n_T(\omega)= 0, \end{equation*} where we use $X\in\mcalL$ to ensure that $\sup_{t \in [0, T]} |X_t(\omega)|<\infty$. If, instead, $\omega\in\Omega_0$ is such that $\rho_T(\omega)>0$, then for all sufficiently large $n$ we have $\rho^n_T(\omega) > 0$ and the functions $t \mapsto \rho^n_t(\omega) / \rho^n_T(\omega)$ define cumulative distribution functions (cdfs) converging pointwise to $\rho_t(\omega) / \rho_T(\omega)$ at the points of continuity of $\rho_t(\omega)$. Since $t \mapsto X_t(\omega)$ is continuous, \cite[Theorem III.1.1]{Shiryaev} justifies \begin{align*} \lim_{n\to\infty}\int_{[0, T]} X_t (\omega) d \rho^{n}_t(\omega) =&\,\lim_{n\to\infty} \rho^{n}_T (\omega) \int_{[0, T]} X_t(\omega) d\left(\frac{\rho^{n}_t(\omega)}{\rho^{n}_T(\omega)}\right) \\ =&\,\rho_T(\omega) \int_{[0, T]} X_t(\omega) d\left(\frac{\rho_t(\omega)}{\rho_T(\omega)}\right) = \int_{[0, T]} X_t(\omega) d \rho_t(\omega). \end{align*} Now we drop the continuity assumption on $X$. We turn our attention to c\`adl\`ag, $(\mathcal{F}_t)$-adapted and regular $(X_t) \in \mcalL$. By \cite[Theorem 3]{Bismut1978} there is $(\tilde X_t) \in \mcalL$ with continuous trajectories (not necessarily adapted) such that $(X_t)$ is an $(\mathcal{F}_t)$-optional projection of $(\tilde X_t)$. From the first part of the proof we know that \eqref{eqn:b_t} holds for $(\tilde X_t)$. To show that it holds for $(X_t)$ it is sufficient to notice that $(\rho^n_t)$ and $(\rho_t)$ are $(\mathcal{F}_t)$-optional processes, and apply \cite[Theorem VI.57]{DellacherieMeyer} to obtain \[ \mathbb{E}\bigg[\int_{[0, T]} X_t d \rho^n_t\bigg] = \mathbb{E}\bigg[\int_{[0, T]} \tilde X_t d \rho^n_t\bigg]\qquad \text{and} \qquad \mathbb{E}\bigg[\int_{[0, T]} X_t d \rho_t\bigg] = \mathbb{E}\bigg[\int_{[0, T]} \tilde X_t d \rho_t\bigg]. \] \end{proof} \begin{remark} The statement of Proposition \ref{prop:r-convergence} can be strengthened to include all processes in $\mcalL$ which are regular but not necessarily $(\mathcal{F}_t)$-adapted. One can prove it by adapting the arguments of the proof of \cite[Theorem 3]{Meyer}. \end{remark} \begin{Proposition}\label{prop-specific-convergence-2} For a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, let $\chi\in {\mathcal{A}^\circ}(\mathcal{G}_t)$ and $\rho\in\mathcal{A}_{ac}(\mathcal{G}_t)$ and consider $X\in\mcalL$ which is $(\mathcal{F}_t)$-adapted and regular. If $(\rho^n)_{n\ge 1}\subset\mathcal{A}_{ac}(\mathcal{G}_t)$ converges $(\lambda \times \mathbb{P})$\ae to $\rho$ as $n\to\infty$, then \begin{equation}\label{eq:lim00} \lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T]} X_t(1-\chi_{t-})d\rho^n_t\bigg]=\mathbb{E}\bigg[\int_{[0, T]} X_t(1-\chi_{t-})d\rho_t\bigg].
\end{equation} \end{Proposition} \begin{proof} Define the absolutely continuous adapted processes \begin{equation*} R^n_t = \int_{[0, t]} (1-\chi_{s-})d\rho^n_s\quad\text{and}\quad R_t = \int_{[0, t]} (1-\chi_{s-})d\rho_s, \end{equation*} so that \begin{equation}\label{eq:intdR} \int_{[0, T]} X_t(1-\chi_{t-})d\rho^n_t=\int_{[0, T]} X_tdR^n_t\quad\text{and}\quad \int_{[0, T]} X_t(1-\chi_{t-})d\rho_t=\int_{[0, T]} X_tdR_t. \end{equation} With no loss of generality we can consider the absolutely continuous representatives of $\rho$ and $\rho^n$ from the class ${\mathcal{A}^\circ_{ac}}(\mathcal{G}_t)$ in the definition of all the integrals above (which we still denote by $\rho$ and $\rho^n$ for simplicity). In light of this observation it is clear that $(R^n)_{n\ge 1}\subset{\tilde{\mathcal{A}}^\circ}(\mathcal{G}_t)$ and $R\in {\tilde{\mathcal{A}}^\circ}(\mathcal{G}_t)$. The idea is then to apply Proposition \ref{prop:r-convergence} to the integrals with $R^n$ and $R$ in \eqref{eq:intdR}. Thanks to Lemma \ref{lem:cadlag_convergence} and recalling that $\rho^n_T = \rho_T = 1$, the set \[ \Omega_0 = \big\{\omega\in\Omega:\lim_{n\to\infty}\rho^n_t(\omega)= \rho_t(\omega) \text{ for all $t \in [0, T]$}\big\} \] has full measure, i.e., $\mathbb{P}(\Omega_0)=1$. For any $\omega \in \Omega_0$ and $t\in[0,T]$, integrating by parts (see, e.g., \cite[Prop. 4.5, Chapter 0]{revuzyor}), using the dominated convergence theorem and then again integrating by parts give \begin{equation}\label{eqn:conv_R} \lim_{n \to \infty} R^n_t= \lim_{n \to \infty} \bigg[(1-\chi_{t})\rho^n_t - \int_{[0,t]} \rho^n_sd(1-\chi_{s}) \bigg] = (1-\chi_{t})\rho_t - \int_{[0,t]} \rho_sd(1-\chi_{s})=R_t. \end{equation} Hence $R^n$ and $R$ satisfy the assumptions of Proposition \ref{prop:r-convergence} and we can conclude that \eqref{eq:lim00} holds. \end{proof} \begin{cor}\label{cor-specific-convergence-2} Under the assumptions of Proposition \ref{prop-specific-convergence-2}, we have \begin{equation*} \lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)} X_t(1-\chi_{t})d\rho^n_t + X_T \Delta \chi_T \Delta \rho^n_T\bigg]=\mathbb{E}\bigg[\int_{[0, T)} X_t(1-\chi_{t})d\rho_t + X_T \Delta \chi_T \Delta \rho_T\bigg]. \end{equation*} \end{cor} \begin{proof} Recall that $\rho^n$ and $\rho$ are continuous everywhere apart from $T$. Hence, we can rewrite the left- and right-hand side of \eqref{eq:lim00} as \[ \int_{[0, T]} X_t(1-\chi_{t-})d\rho^n_t = \int_{[0, T]} X_t(1-\chi_{t})d\rho^n_t + X_T \Delta \chi_T \Delta \rho^n_T \] and \[ \int_{[0, T]} X_t(1-\chi_{t-})d\rho_t = \int_{[0, T]} X_t(1-\chi_{t})d\rho_t + X_T \Delta \chi_T \Delta \rho_T\,, \] respectively. It remains to note that $\int_{[0, T]} X_t(1-\chi_{t})d\rho^n_t = \int_{[0, T)} X_t(1-\chi_{t})d\rho^n_t$ because $\chi_T = 1$. \end{proof} We close this technical section with a result similar to the above, but for the approximations needed in the proof of Proposition \ref{thm:conv_lipsch}. The next proposition is tailored to our specific type of regularisation of processes in $\mathcal{A}(\mathcal{F}^1_t)$. Notice that the left-hand side of \eqref{eq:lim-in-A} features $\chi_{t-}$ while the right-hand side has $\chi_t$. \begin{Proposition}\label{prop-specific-convergence-3} For a filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$, let $\chi,\rho \in{\mathcal{A}^\circ} (\mathcal{G}_t)$, $(\rho^n)_{n\ge 1}\subset{\mathcal{A}^\circ} (\mathcal{G}_t)$ and consider $X\in\mcalL$ which is $(\mathcal{F}_t)$-adapted and regular.
Assume the sequence $(\rho^n)_{n\ge 1}$ is non-decreasing and for $\mathbb{P}$\ae $\omega\in\Omega$ \begin{align}\label{eq:conv-rho} \lim_{n\to\infty}\rho^{n}_t(\omega)=\rho_{t-}(\omega)\:\: \text{for all $t\in[0,T)$}. \end{align} Then \begin{equation}\label{eq:lim-in-A} \lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)} X_t(1-\chi_{t-})d\rho^n_t\bigg]= \mathbb{E}\bigg[\int_{[0, T)} X_t(1-\chi_{t})d\rho_t\bigg], \end{equation} and for $\mathbb{P}$\ae $\omega \in \Omega$ \begin{equation}\label{eqn:lim-in-t-} \lim_{n\to\infty}\rho^n_{t-}(\omega)=\rho_{t-}(\omega)\quad \text{for all $t \in [0, T]$}. \end{equation} \end{Proposition} \begin{proof} Denote by $\Omega_0$ the set on which the convergence \eqref{eq:conv-rho} holds. The first observation is that for all $\omega\in\Omega_0$ and $t \in (0, T]$ \begin{align}\label{eq:lim-nt} \lim_{n\to\infty}\rho^n_{t-}(\omega)=\lim_{n\to\infty}\lim_{u\uparrow t}\rho^n_{u}(\omega)=\lim_{u\uparrow t}\lim_{n\to\infty}\rho^n_{u}(\omega)=\lim_{u\uparrow t}\rho_{u-}(\omega)=\rho_{t-}(\omega), \end{align} where the order of limits can be swapped by monotonicity of the process and of the sequence. The convergence at $t=0$ is obvious as $\rho^n_{0-} = \rho_{0-} = 0$. This proves \eqref{eqn:lim-in-t-}. Define for $t \in [0, T)$, \begin{equation}\label{eqn:def_Rn} R^n_t=\int_{[0, t]} (1-\chi_{s-}) d\rho^n_s, \qquad R_t=\int_{[0, t]} (1-\chi_{s}) d\rho_s, \end{equation} and extend both processes to $t=T$ in a continuous way by taking $R^n_{T}:=R^n_{T-}$ and $R_{T}:=R_{T-}$. By construction we have $(R^n)_{n\ge 1}\subset{\tilde{\mathcal{A}}^\circ} (\mathcal{G}_t)$ and $R\in {\tilde{\mathcal{A}}^\circ}(\mathcal{G}_t)$ and the idea is to apply Proposition \ref{prop:r-convergence}. First we notice that for all $\omega\in\Omega$ and any $t\in[0,T)$ we have \[ \Delta R_t(\omega)=(1-\chi_t(\omega))\Delta\rho_t(\omega), \] so that we can write the set of points of continuity of $R$ as (recall Definition \ref{def:def_C}) \[ C_R(\omega)=C_{\rho}(\omega)\cup \{t\in[0,T]:\chi_t(\omega)=1\}. \] For any $t\in[0, T)$ and all $\omega\in\Omega_0$, integrating $R^n_t(\omega)$ by parts (see \cite[Prop. 4.5, Chapter 0]{revuzyor}) and then taking limits as $n\to\infty$ we get \begin{align}\label{eq:Rcon} \lim_{n\to\infty}R^n_t(\omega)=&\,\lim_{n\to\infty} \Big[(1-\chi_{t}(\omega))\rho^n_t(\omega) - \int_{[0, t]} \rho^n_s(\omega)d (1-\chi_{s}(\omega))\Big]\\ =&\, (1-\chi_{t}(\omega))\rho_{t-}(\omega) - \int_{[0, t]} \rho_{s-}(\omega)d (1-\chi_{s}(\omega)) \notag\\ =&\, R_t(\omega) - (1-\chi_t(\omega)) \Delta \rho_t(\omega)=R_{t-}(\omega),\notag \end{align} where the second equality uses dominated convergence and \eqref{eq:conv-rho}, and the third equality is integration by parts. We can therefore conclude that \begin{align*} \lim_{n\to\infty}R^n_t(\omega)=R_t(\omega),\quad\text{for all $t\in C_R(\omega) \cap[0,T)$ and all $\omega\in\Omega_0$}. \end{align*} It remains to show the convergence at $T$, which is in $C_R(\omega)$ by our construction of $R$. Since the function $t \mapsto \rho_t(\omega)$ is non-decreasing and the sequence $(\rho^n(\omega))_n$ is non-decreasing, the sequence $(R^n(\omega))_n$ is non-decreasing too (an easy proof of this fact involves integration by parts and observing that $t\mapsto d (1-\chi_t(\omega))$ defines a negative measure; notice also the link to first-order stochastic dominance). As in \eqref{eq:lim-nt}, we show that $\lim_{n \to \infty} R^n_{T-}(\omega) = R_{T-} (\omega)$ for $\omega \in \Omega_0$.
By construction of $R^n$ and $R$, this proves convergence of $R^n_T$ to $R_T$. Then, the processes $R^n$ and $R$ fulfil all the assumptions of Proposition \ref{prop:r-convergence}, whose application allows us to obtain \eqref{eq:lim-in-A}. \end{proof} From the convergence \eqref{eq:Rcon}, an argument identical to that in \eqref{eq:lim-nt} proves convergence of the left-limits of the processes $(R^n)$ at any $t \in [0, T]$. The following corollary formalises this observation. It will be used in Section \ref{sec:relax}. \begin{cor}\label{cor:lim_R_in-t-} Consider the processes $(R^n)$ and $R$ defined in \eqref{eqn:def_Rn}. For $\mathbb{P}$\ae $\omega \in \Omega$ we have \begin{equation*} \lim_{n\to\infty} R^n_{t-}(\omega)=R_{t-}(\omega)\quad \text{for all $t \in [0, T]$}. \end{equation*} \end{cor} \subsection{Verification of the conditions of Sion's theorem}\label{sec:verif} For the application of Sion's theorem, we will consider a weak topology on $\mathcal{A}_{ac}(\mathcal{F}^1_t)$ and $\mathcal{A}(\mathcal{F}^2_t)$ inherited from the space $\mathcal{S}$. In our arguments, we will often use that for convex sets weak and strong closedness are equivalent \cite[Theorem 3.7]{Brezis2010} (although weak and strong convergence are not equivalent, cf.\ \cite[Corollary 3.8]{Brezis2010}). \begin{Lemma} \label{lem-strat-set-compact} For any filtration $(\mathcal{G}_t) \subseteq (\mathcal{F}_t)$ satisfying the usual conditions, the set $\mathcal{A}(\mathcal{G}_t)$ is weakly compact in $\mathcal{S}$. \end{Lemma} \begin{proof} We write $\mathcal{A}$ for $\mathcal{A}(\mathcal{G}_t)$ and ${\mathcal{A}^\circ}$ for ${\mathcal{A}^\circ}(\mathcal{G}_t)$. The set $\mathcal{A}$ is a subset of a ball in $\mathcal{S}$. Since $\mathcal{S}$ is a reflexive Banach space, this ball is weakly compact (Kakutani's theorem, \cite[Theorem 3.17]{Brezis2010}). Therefore, we only need to show that $\mathcal{A}$ is weakly closed. Since $\mathcal{A}$ is convex, it is enough to show that $\mathcal{A}$ is strongly closed \cite[Theorem 3.7]{Brezis2010}. Take a sequence $(\rho^n)_{n\ge 1}\subset\mathcal{A}$ that converges strongly in $\mathcal{S}$ to $\rho$. We will prove that $\rho\in\mathcal{A}$ by constructing a c\`adl\`ag non-decreasing adapted process $(\hat \rho_t)$ such that $\hat\rho_{0-} = 0$, $\hat \rho_T = 1$, and $\hat \rho = \rho$ $(\lambda\times\mathbb{P})$\ae With no loss of generality we can pass to the c\`adl\`ag representatives $(\hat \rho^n)_{n\ge 1}\subset{\mathcal{A}^\circ}$ which also converge to $\rho$ in $\mathcal{S}$. Then, there is a subsequence $(n_k)_{k\ge 1}$ such that $\hat\rho^{n_k}\to \rho$ $(\lambda \times \mathbb{P})$\ae \cite[Theorem 4.9]{Brezis2010}. Since \[ \int_0^t \mathbb{P}\big(\lim_{k\to\infty}\hat\rho^{n_k}_s=\rho_s\big) d s=t,\quad\text{for all $t\in[0,T]$,} \] we can find $\hat D\subset [0,T]$ with $\lambda([0,T]\setminus\hat D)=0$ such that $\mathbb{P}(\Omega_t)=1$ for all $t\in \hat D$, where \[ \Omega_t:=\{\omega\in\Omega: \lim_{k\to\infty}\hat \rho^{n_k}_t(\omega)=\rho_t(\omega)\}. \] Then we can take a dense countable subset $D\subset \hat D$ and define $\Omega_0:=\cap_{t\in D}\Omega_t$ so that $\mathbb{P}(\Omega_0)=1$ and \[ \lim_{k\to\infty}\hat \rho^{n_k}_t(\omega)=\rho_t(\omega),\qquad\text{for all $(t,\omega)\in D\times\Omega_0$.} \] Since $\hat \rho^{n_k}$ are non-decreasing, so is the mapping $D\ni t\mapsto \rho_t(\omega)$ for all $\omega\in\Omega_0$.
Let us extend this mapping to $[0,T]$ by defining $\hat \rho_t(\omega):=\rho_t(\omega)$ for $t\in D$ and \[ \hat{\rho}_t(\omega):=\lim_{s\in D:s\downarrow t} \rho_s(\omega),\quad\hat{\rho}_{0-}(\omega):=0,\quad \hat{\rho}_{T}(\omega):=1,\quad\text{for all $\omega\in \Omega_0$,} \] where the limit exists due to monotonicity. For $\omega\in \mathcal{N}:=\Omega\setminus\Omega_0$, we set $\hat \rho_t(\omega) = 0$ for $t < T$ and $\hat \rho_T(\omega)=1$. Notice that $\mathcal{N}\in\mathcal{G}_0$ since $\mathbb{P}(\mathcal{N})=0$, so that $\hat{\rho}_t$ is $\mathcal{G}_t$-measurable for $t\in D$. Moreover, $\hat{\rho}$ is c\`adl\`ag by construction and $\hat{\rho}_t$ is measurable with respect to $\cap_{s\in D, s > t\,}\mathcal{G}_s=\mathcal{G}_{t+}=\mathcal{G}_t$ for each $t\in[0,T]$ by the right-continuity of the filtration. Hence, $\hat \rho$ is $(\mathcal{G}_t)$-adapted and $\hat \rho \in {\mathcal{A}^\circ}$. It remains to show that $\hat \rho^{n_k}\to \hat{\rho}$ in $\mathcal{S}$, so that $\hat \rho=\rho$ $(\lambda\times\mathbb{P})$\ae and therefore $\rho\in\mathcal{A}$. It suffices to show that $\hat \rho^{n_k}\to \hat{\rho}$ $(\lambda\times\mathbb{P})$\ae and then conclude by dominated convergence that $\hat \rho^{n_k}\to \hat{\rho}$ in $\mathcal{S}$. For each $\omega\in\Omega_0$ the process $t\mapsto\hat \rho_t(\omega)$ has at most countably many jumps (on any bounded interval) by monotonicity, i.e., $\lambda([0,T]\setminus C_{\hat \rho}(\omega))=0$ (recall Definition \ref{def:def_C}). Moreover, arguing as in the proof of Lemma \ref{lem:cadlag_convergence}, we conclude \[ \lim_{k\to\infty}\hat \rho^{n_k}_t(\omega)=\hat \rho_t(\omega),\quad\text{for all $t\in C_{\hat \rho}(\omega)$ and all $\omega\in\Omega_0$}. \] Since $(\lambda\!\times\!\mathbb{P})(\{(t,\omega)\!:\!t\in C_{\hat\rho}(\omega)\cap B,\omega\in\Omega_0\})\!=\!\lambda(B)$ for any bounded interval $B\subseteq[0,T]$, it follows that $\hat \rho^{n_k}\!\to\! \hat{\rho}$ in $\mathcal{S}$ and $\mathcal{A}$ is strongly closed in $\mathcal{S}$. \end{proof} \begin{remark}\label{rem-TV-norm-doesnt-work} Our space $\mathcal{A}(\mathcal{G}_t)$ is the space of processes that generate randomised stopping times and for any $\rho\in\mathcal{A}(\mathcal{G}_t)$ we require that $\rho_T(\omega)=1$, for all $\omega\in\Omega$. In the finite horizon problem, i.e., $T<\infty$, such a specification imposes a constraint that prevents a direct use of the topology induced by the norm considered in \cite{TouziVieille2002}. Indeed, in \cite{TouziVieille2002} the space $\mathcal{S}$ is that of $(\mathcal{G}_t)$-adapted processes $\rho$ with \begin{equation*} \|\rho\|^2:=\mathbb{E}\bigg[\int_{0}^T (\rho_t)^2 d t + (\Delta \rho_T)^2\bigg] < \infty,\quad \Delta\rho_T:=\rho_T-\mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{t\uparrow T}\rho_t. \end{equation*} The space of generating processes $\mathcal{A}(\mathcal{G}_t)$ is not closed in the topology induced by $\|\cdot\|$ above: define a sequence $(\rho^n)_{n\ge 1}\subset\mathcal{A}(\mathcal{G}_t)$ by \begin{equation*} \rho^n_t = n \bigg(t - T + \frac{1}{n}\bigg)^+, \qquad t \in [0, T]. \end{equation*} Then $\|\rho^n\|\to 0$ as $n\to\infty$ but $\rho\equiv 0\notin\mathcal{A}(\mathcal{G}_t)$ since it fails to be equal to one at $T$ (and it is not possible to select a representative from $\mathcal{A}(\mathcal{G}_t)$ in the equivalence class induced by $\|\,\cdot\,\|$).
\end{remark} It is of interest to explore the relationship between the topology on $\mathcal{A}(\mathcal{G}_t)$ implied by the weak topology on $\mathcal{S}$ (denote it by $\mcalO_2$) and the topology introduced in \cite{BaxterChacon, Meyer} (denote it by $\mcalO_1$). The topology $\mcalO_1$ is the coarsest topology in which all functionals of the form \begin{equation}\label{eqn:top_O2} \mathcal{A}(\mathcal{G}_t) \ni \rho \mapsto \mathbb{E} \Big[\int_{[0, T]} X_t\, d \rho_t\Big] \end{equation} are continuous for any $X \in \mcalL$ with continuous trajectories. Our topology $\mcalO_2$, instead, is the restriction to $\mathcal{A}(\mathcal{G}_t)$ of the weak topology on $\mathcal{S}$. That is, $\mcalO_2$ is the coarsest topology for which all functionals of the form \begin{equation*} \mathcal{A}(\mathcal{G}_t) \ni \rho \mapsto \mathbb{E} \Big[\int_{[0, T]} \rho_t\, Y_t\, d t\Big] \end{equation*} are continuous for all $Y \in \mathcal{S}$. \begin{Lemma}\label{lem:top} Topologies $\mcalO_1$ and $\mcalO_2$ are identical. \end{Lemma} \begin{proof} Denoting \begin{align}\label{eq:Xint} X_t = \int_{[0, t]} Y_s\, d s \end{align} and integrating by parts, we obtain for $\rho \in \mathcal{A} (\mathcal{G}_t)$ \[ \mathbb{E} \Big[\int_{[0, T]} \rho_t\, Y_t\, d t\Big] = \mathbb{E} \Big[X_T \rho_T - X_0 \rho_{0-} - \int_{[0, T]} X_t\, d \rho_t \Big] = \mathbb{E} \Big[X_T - \int_{[0, T]} X_t\, d \rho_t \Big], \] where we used that $\rho_T = 1$ and $\rho_{0-} = 0$, $\mathbb{P}$\as Hence, $\mcalO_2$ is the coarsest topology on $\mathcal{A} (\mathcal{G}_t)$ for which functionals \eqref{eqn:top_O2} are continuous for all processes $X$ defined as in \eqref{eq:Xint}. Since these processes $X$ are continuous, we conclude that $\mcalO_2 \subset \mcalO_1$. The set $\mathcal{A} (\mathcal{G}_t)$ is compact in the topologies $\mcalO_1$ \cite[Theorem 3]{Meyer} and $\mcalO_2$ (see Lemma \ref{lem-strat-set-compact} above). A compact Hausdorff topology is minimal among Hausdorff topologies \cite[Cor.~3.1.14, p. 126]{Engelking}. Since $\mcalO_2$ is Hausdorff by \cite[Prop~3.3]{Brezis2010} and $\mcalO_2 \subset \mcalO_1$, the topology $\mcalO_1$ is Hausdorff too, and minimality yields $\mcalO_1 = \mcalO_2$. \end{proof} \begin{remark} Meyer \cite[Thm.~4]{Meyer} shows that if $\mathcal{F}$ is separable (i.e., countably generated) then the topology $\mcalO_1$ (hence $\mcalO_2$) is metrizable. This could also be seen directly for the topology $\mcalO_2$ by \cite[Thm.~3.29]{Brezis2010}, because $\mathcal{A}(\mathcal{G}_t)$ is bounded in $\mathcal{S}$ and $\mcalO_2$ is the restriction to $\mathcal{A}(\mathcal{G}_t)$ of the weak topology on $\mathcal{S}$. Indeed, this argument for the metrizability of $\mcalO_2$ shows that it is sufficient to require only that $\mathcal{G}_T$ be separable. \end{remark} \begin{Lemma}\label{lem:semi-cont} Given any $(\xi,\zeta)\in\mathcal{A}_{ac}(\mathcal{F}^1_t) \times\mathcal{A} (\mathcal{F}^2_t)$, the functionals $N(\xi,\cdot):\mathcal{A}(\mathcal{F}^2_t)\to\mathbb{R}$ and $N(\cdot,\zeta):\mathcal{A}_{ac}(\mathcal{F}^1_t)\to\mathbb{R}$ are, respectively, upper semicontinuous and lower semicontinuous in the strong topology of $\mathcal{S}$. \end{Lemma} \begin{proof} Since $\xi \in \mathcal{A}_{ac} (\mathcal{F}^1_t)$, we have from \eqref{eq-functional-in-terms-of-controls} that the contribution of simultaneous jumps reduces to a single term: \begin{equation}\label{eqn:N_cont} N(\xi,\zeta)= \mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi_t + \int_{[0, T)} g_t(1-\xi_t)d\zeta_t + h_T\Delta\xi_T\Delta\zeta_T\bigg].
\end{equation}

\emph{Upper semicontinuity of $N(\xi,\cdot)$}. Fix $\xi\in\mathcal{A}_{ac}(\mathcal{F}^1_t)$ and consider a sequence $(\zeta^{n})_{n \ge 1}\subset\mathcal{A}(\mathcal{F}^2_t)$ converging to $\zeta \in \mathcal{A}(\mathcal{F}^2_t)$ strongly in $\mathcal{S}$. We have to show that
\begin{equation*}
\mathop{\lim\sup}_{n\to\infty} N(\xi,\zeta^{n})\le N(\xi,\zeta).
\end{equation*}
Assume, by contradiction, that $\mathop{\lim\sup}_{n\to\infty} N(\xi,\zeta^{n}) > N(\xi,\zeta)$. There is a subsequence $(n_k)$ along which the limit on the left-hand side is attained. Along a further subsequence we have $(\mathbb{P}\times\lambda)\ae$ convergence of $\zeta^n$ to $\zeta$ \citep[Theorem 4.9]{Brezis2010}. With an abuse of notation we will assume that the original sequence possesses those two properties, i.e., the limit $\lim_{n\to\infty} N(\xi,\zeta^{n})$ exists, it strictly dominates $N(\xi,\zeta)$, and there is $(\mathbb{P} \times \lambda)\ae$ convergence of $\zeta^{n}$ to $\zeta$.

Since $\xi$ is absolutely continuous on $[0, T)$,
\begin{equation*}
\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta^{n}_{t})d\xi_t\bigg]=\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi_t \bigg]
\end{equation*}
by the dominated convergence theorem. For the last two terms of $N(\xi, \zeta^n)$ in \eqref{eqn:N_cont} we have
\begin{align*}
\mathbb{E}\bigg[\int_{[0, T)} g_t(1-\xi_t)d\zeta^n_t + h_T\Delta\xi_T\Delta\zeta^n_T\bigg] &= \mathbb{E}\bigg[\int_{[0, T)} g_t(1-\xi_{t-})d\zeta^n_t + h_T\Delta\xi_T\Delta\zeta^n_T\bigg]\\
&= \mathbb{E}\bigg[\int_{[0, T]} g_t(1-\xi_{t-})d\zeta^n_t + (h_T - g_T) \Delta\xi_T\Delta\zeta^n_T\bigg],
\end{align*}
where the first equality is by the continuity of $\xi$ and for the second one we use that $1-\xi_{T-} = \Delta \xi_T$. From Lemma \ref{lem:cadlag_convergence} and the boundedness and continuity of $(\xi_t)$ we verify the assumptions of Proposition \ref{prop:r-convergence} (with $X_t=g_t(1-\xi_{t-})$ therein, since $\xi_{t-}$ is continuous on $[0,T]$), so
\begin{equation*}
\lim_{n \to \infty} \mathbb{E}\bigg[\int_{[0, T]} g_t(1-\xi_{t-})d\zeta^n_t \bigg] = \mathbb{E}\bigg[\int_{[0, T]} g_t(1-\xi_{t-})d\zeta_t \bigg].
\end{equation*}
Recalling that $g_T \le h_T$, we obtain from Lemma \ref{prop-terminal-time-jump-limit}
\begin{equation*}
\mathop{\lim\sup}_{n\to\infty} \mathbb{E}\big[(h_T-g_T)\Delta\xi_T\Delta\zeta^n_T\big] \le \mathbb{E}\big[(h_T-g_T)\Delta\xi_T\Delta\zeta_T\big].
\end{equation*}
Combining the above convergence results contradicts $\lim_{n\to\infty} N(\xi,\zeta^n) >N(\xi,\zeta)$ and hence proves the upper semicontinuity.

\emph{Lower semicontinuity of $N(\cdot,\zeta)$}. Fix $\zeta\in\mathcal{A}(\mathcal{F}^2_t)$ and consider a sequence $(\xi^{n})_{n\ge 1}\subset\mathcal{A}_{ac}(\mathcal{F}^1_t)$ converging to $\xi \in \mathcal{A}_{ac} (\mathcal{F}^1_t)$ strongly in $\mathcal{S}$. Arguing by contradiction as above, we assume that there is a subsequence of $\xi^{n}$, which we denote the same, such that $\xi^{n} \to \xi$ $(\mathbb{P}\times\lambda)$\ae and
\begin{equation}\label{eqn:two_terms2}
\lim_{n\to\infty} N(\xi^{n},\zeta) < N(\xi,\zeta).
\end{equation}
By Lemma \ref{lem:cadlag_convergence} and the continuity of $(\xi_t)$ we have for $\mathbb{P}$\ae $\omega\in\Omega$
\[
\lim_{n\to\infty} \xi^{n}_t(\omega) = \xi_t(\omega)\quad\text{for all $t \in [0,T)$}.
\]
Then by dominated convergence for the second term of $N(\xi^n,\zeta)$ in \eqref{eqn:N_cont} we get
\[
\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)} g_t(1-\xi^{n}_t)d\zeta_t\bigg]= \mathbb{E}\bigg[\int_{[0, T)} g_t(1-\xi_t)d\zeta_t\bigg].
\]
For the remaining terms of $N(\xi^{n}, \zeta)$, we have
\[
\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi^n_t + h_T\Delta\xi^n_T\Delta\zeta_T\bigg]=\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi^n_t + f_T \Delta\xi^n_T\Delta\zeta_T + (h_T-f_T)\Delta\xi^n_T\Delta\zeta_T\bigg].
\]
Observe that, by Lemma \ref{prop-terminal-time-jump-limit},
\begin{equation*}
\mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{n\to\infty}\mathbb{E}\big[(h_T-f_T)\Delta\xi^n_T\Delta\zeta_T\big]\ge \mathbb{E}\big[(h_T-f_T)\Delta\xi_T\Delta\zeta_T\big],
\end{equation*}
because $h_T-f_T\le 0$. Further,
\begin{equation*}
\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi^n_t + f_T \Delta\xi^n_T\Delta\zeta_T \bigg]=\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi_t + f_T \Delta\xi_T\Delta\zeta_T\bigg]
\end{equation*}
by Corollary \ref{cor-specific-convergence-2}. The above results contradict \eqref{eqn:two_terms2}, thereby proving the lower semicontinuity.
\end{proof}

We are now ready to prove that the game with continuous randomisation for the first player ($\tau$-player) has a value.

\begin{proof}[{\bf Proof of Theorem \ref{th-value-cont-strat}}]
We will show that the conditions of Sion's theorem hold (recall the notation in Theorem \ref{th-the-Sion}) with $(A,B)=(\mathcal{A}(\mathcal{F}^2_t),\mathcal{A}_{ac}(\mathcal{F}^1_t))$ on the space $\mathcal{S} \times \mathcal{S}$ equipped with its weak topology. For brevity of notation, we will write $\mathcal{A}$ for $\mathcal{A}(\mathcal{F}^2_t)$ and $\mathcal{A}_{ac}$ for $\mathcal{A}_{ac}(\mathcal{F}^1_t)$.

It is straightforward to verify that the sets $\mathcal{A}$ and $\mathcal{A}_{ac}$ are convex. Compactness of $\mathcal{A}$ in the weak topology of $\mathcal{S}$ follows from Lemma \ref{lem-strat-set-compact}. It remains to prove the convexity and semi-continuity properties of $N$ with respect to the weak topology of $\mathcal{S}$. This is equivalent to showing that for any $a\in\mathbb{R}$, $\hat{\xi}\in\mathcal{A}_{ac}$ and $\hat{\zeta}\in\mathcal{A}$ the level sets
\[
\mathcal{K}(\hat{\zeta},a)=\{\xi\in\mathcal{A}_{ac}:N(\xi,\hat{\zeta})\le a\} \qquad \text{and}\qquad \mathcal{Z}(\hat{\xi},a)=\{\zeta\in\mathcal{A}:N(\hat{\xi},\zeta)\ge a\}
\]
are convex and closed in $\mathcal{A}_{ac}$ and $\mathcal{A}$, respectively, with respect to the weak topology of $\mathcal{S}$.

For any $\lambda \in [0,1]$ and $\xi^{1}, \xi^{2} \in \mathcal{A}_{ac}$, $\zeta^{1}, \zeta^{2} \in \mathcal{A}$, using the expression in \eqref{eq-functional-in-terms-of-controls} it is immediate (by linearity) that
\begin{align*}
N(\lambda \xi^{1} + (1-\lambda) \xi^{2}, \hat \zeta) &= \lambda N(\xi^{1}, \hat \zeta) + (1-\lambda) N(\xi^{2}, \hat\zeta),\\
N(\hat\xi, \lambda \zeta^{1} + (1-\lambda)\zeta^{2}) &= \lambda N(\hat\xi, \zeta^{1}) + (1-\lambda) N(\hat\xi, \zeta^{2}).
\end{align*}
This proves the convexity of the level sets. Their closedness in the strong topology of $\mathcal{S}$ is established in Lemma \ref{lem:semi-cont}. The latter two properties imply, by \cite[Theorem 3.7]{Brezis2010}, that the level sets are closed in the weak topology of $\mathcal{S}$. Sion's theorem (Theorem \ref{th-the-Sion}) yields the existence of the value of the game: $W_* = W^*$.
The second part of the statement results from using a version of Sion's theorem proved in \cite{Komiya1988} which allows us to write $\max$ instead of $\sup$ in \eqref{eq-value-cont-restriction}, i.e.,
\begin{equation*}
\sup_{\zeta\in\mathcal{A}}\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}} N(\xi,\zeta)=\max_{\zeta\in\mathcal{A}}\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}} N(\xi,\zeta)=\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}} N(\xi,\zeta^*),
\end{equation*}
where $\zeta^*\in\mathcal{A}$ delivers the maximum.
\end{proof}

\subsection{Approximation with continuous controls}\label{sec:approx}

We now prove Proposition \ref{thm:conv_lipsch} by constructing a sequence $(\xi^{n})$ of Lipschitz continuous processes with the Lipschitz constant for each process bounded by $n$ for all $\omega$. This uniform bound on the Lipschitz constant is not used in this paper, as we only need that each of the processes $(\xi^{n}_t)$ has absolutely continuous trajectories with respect to the Lebesgue measure on $[0,T)$ so that it belongs to $\mathcal{A}_{ac}(\mathcal{F}^1_t)$.

\begin{proof}[Proof of Proposition \ref{thm:conv_lipsch}]
Fix $\zeta\in\mathcal{A}(\mathcal{F}^2_t)$. We need to show that for any $\xi\in\mathcal{A}(\mathcal{F}^1_t)$, there exists a sequence $(\xi^{n})_{n \ge 1} \subset \mathcal{A}_{ac}(\mathcal{F}^1_t)$ such that
\begin{equation}
\mathop{\lim\sup}_{n\to\infty} N(\xi^{n},\zeta)\le N(\xi,\zeta). \label{eq-liminf-M}
\end{equation}
We will explicitly construct absolutely continuous $\xi^{n}$ that approximate $\xi$ in a suitable sense. As $N(\xi, \zeta)$ does not depend on the choice of c\`adl\`ag representatives, by Definition \ref{def:integral}, without loss of generality we assume that $\xi \in {\mathcal{A}^\circ}(\mathcal{F}^1_t)$ and $\zeta \in {\mathcal{A}^\circ}(\mathcal{F}^2_t)$.

Define a function $\phi^n_t = \big((nt)\wedge 1\big)\vee 0$. Let $\xi^{n}_t = \int_{[0, t]} \phi^n_{t-s} d\xi_s$ for $t\in[0,T)$, and $\xi^{n}_T = 1$. We shall show that $(\xi^{n}_t)$ is $n$-Lipschitz, hence absolutely continuous on $[0, T)$. Note that $\phi^n_t\equiv 0$ for $t\le 0$, and therefore $\xi^{n}_t = \int_{[0, T]} \phi^n_{t-s} d\xi_s$ for $t \in [0, T)$. For arbitrary $t_1,t_2\in[0,T)$ we have
\begin{align*}
|\xi^{n}_{t_1}-\xi^{n}_{t_2}| &= \left|\int_{[0, T]} (\phi^n_{t_1-s}-\phi^n_{t_2-s}) d\xi_s\right| \le \int_{[0, T]} |\phi^n_{t_1-s}-\phi^n_{t_2-s}| d\xi_s\\
&\le \int_{[0, T]} n|(t_1-s)-(t_2-s)| d\xi_s = \int_{[0, T]} n|t_1-t_2| d\xi_s=n|t_1-t_2|,
\end{align*}
where the first inequality is Jensen's inequality (which is applicable since $\xi(\omega)$ is a cumulative distribution function on $[0, T]$ for each $\omega$), and the second inequality follows from the fact that $\phi^n$ is $n$-Lipschitz by definition.

We will verify the assumptions of Proposition \ref{prop-specific-convergence-3}. Clearly the sequence $(\xi^n)$ is non-decreasing in $n$, as the measure $d \xi(\omega)$ is positive for each $\omega \in \Omega$ and the sequence $\phi^n$ is non-decreasing. By the construction of $\xi^{n}$ we have $\xi^{n}_0 = 0 \to \xi_{0-}$ as $n\to\infty$. Moreover, for any $t \in (0, T)$ and $n > 1/t$
\begin{align*}
\xi^{n}_t = \int_{[0, t)}\phi^n_{t-s}d\xi_s=\xi_{t-\tfrac{1}{n}}+\int_{(t-\tfrac{1}{n}, t)}n(t-s)d\xi_s,
\end{align*}
where the first equality uses that $\phi^n_{0}=0$, so that jumps of $\xi$ at time $t$ give zero contribution, and the second one uses the definition of $\phi^n$.
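As a concrete illustration of this smoothing (a special case, not needed for the proof): if $\xi$ has a single jump, say $\xi_t=\ind{\{t\ge t_0\}}$ for some deterministic $t_0\in(0,T)$, then
\[
\xi^{n}_t=\phi^n_{t-t_0}=\big(n(t-t_0)\wedge 1\big)\vee 0,\qquad t\in[0,T),
\]
i.e., $\xi^{n}$ replaces the jump by a linear ramp from $0$ at $t_0$ to $1$ at $t_0+\frac{1}{n}$, consistently with the convergence $\xi^{n}_t\to\xi_{t-}$ obtained below.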
Letting $n\to\infty$ in the decomposition of $\xi^{n}_t$ above, we obtain $\xi^{n}_t\to \xi_{t-}$: the first term converges to $\xi_{t-}$, while the second term vanishes since
\begin{align*}
0\le \int_{(t-\tfrac{1}{n}, t)}n(t-s)d\xi_s\le \xi_{t-} - \xi_{t-\tfrac{1}{n}}\to 0.
\end{align*}
The continuity of $\xi^n$ on $[0, T)$ and Proposition \ref{prop-specific-convergence-3} imply that
\begin{equation*}
\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)}f_t(1-\zeta_{t})d\xi^{n}_t\bigg] = \lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)}f_t(1-\zeta_{t-})d\xi^{n}_t\bigg] = \mathbb{E}\bigg[\int_{[0, T)}f_t(1-\zeta_{t})d\xi_t\bigg],
\end{equation*}
and $\lim_{n \to \infty} \xi^{n}_{T-} = \xi_{T-}$ so that
\[
\lim_{n\to\infty}\Delta\xi^{n}_T =\Delta\xi_T,
\]
since $\xi^{n}_T=1$ for all $n\ge 1$. The dominated convergence theorem (applied to the second integral below) also yields
\begin{equation}\label{eqn:M_ij_conv}
\begin{aligned}
\lim_{n\to\infty} N(\xi^{n},\zeta) &=\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)}f_{t}(1-\zeta_{t})d\xi^{n}_{t}+ \int_{[0, T)} g_t(1-\xi^{n}_{t})d\zeta_t + h_T\Delta\xi^{n}_T\Delta\zeta_T\bigg]\\
&= \mathbb{E}\bigg[\int_{[0, T)}f_{t}(1-\zeta_{t})d\xi_{t} + \int_{[0, T)} g_t(1-\xi_{t-})d\zeta_t + h_T\Delta\xi_T\Delta\zeta_T\bigg].
\end{aligned}
\end{equation}
Note that
\begin{align}\label{eq-remove-common-jumps}
N(\xi,\zeta)&=\mathbb{E}\bigg[\!\int_{[0, T)}\! f_t(1-\zeta_{t})d\xi_t +\! \int_{[0, T)}\! g_t(1\!-\xi_t)d\zeta_t + \sum_{t \in [0, T]} h_t \Delta\xi_t\Delta\zeta_t\bigg]\notag\\
&= \mathbb{E}\bigg[\!\int_{[0, T)}\! f_t(1-\zeta_{t})d\xi_t +\! \int_{[0, T)}\! g_t(1-\xi_{t-})d\zeta_t +\! \sum_{t \in [0, T)}\!\! (h_t-g_t)\Delta\xi_t\Delta\zeta_t + h_T\Delta\xi_T\Delta\zeta_T\bigg]\\
&\ge \mathbb{E}\bigg[\!\int_{[0, T)}\! f_t(1-\zeta_{t})d\xi_t +\! \int_{[0, T)}\! g_t(1-\xi_{t-})d\zeta_t + h_T\Delta\xi_T\Delta\zeta_T\bigg],\notag
\end{align}
where the last inequality is due to Assumption \ref{eq-order-cond}. Combining this with \eqref{eqn:M_ij_conv} completes the proof of \eqref{eq-liminf-M}.
\end{proof}

\subsection{Relaxation of Assumption \ref{ass:regular}}\label{sec:relax}

Assumption \ref{ass:regular}, which requires that the payoff processes be regular, can be relaxed to allow for a class of jumps including predictable ones with nonzero conditional mean (i.e., violating regularity; see Eq. \eqref{eq:cond-reg}). In this section we extend Theorem \ref{th-value-cont-strat} and Proposition \ref{thm:conv_lipsch} to the case of Assumption \ref{ass:regular_gen} with $(\hat g_t)$ from the decomposition of the payoff process $g$ being non-decreasing. In this case we must `smoothen' the generating process $\xi$ of the minimiser in order to guarantee the desired semi-continuity properties of the game's expected payoff (see Remark \ref{rem:contrad}). Arguments when $(\hat f_t)$ from the decomposition of $f$ in Assumption \ref{ass:regular_gen} is non-increasing are analogous thanks to the symmetry of the set-up pointed out in Remark \ref{rem:ineq}. However, in that case we restrict strategies of the maximiser to absolutely continuous generating processes $\zeta\in\mathcal{A}_{ac}(\mathcal{F}^2_t)$ and the first player (minimiser) picks $\xi\in\mathcal{A}(\mathcal{F}^1_t)$.

\begin{theorem}\label{th-value-cont-strat_gen}
Under assumptions \ref{eq-integrability-cond}, \ref{ass:regular_gen}, \ref{eq-order-cond}-\ref{ass:filtration} (with $\hat g$ non-decreasing), the game \eqref{eq-value-cont-restriction} has a value, i.e.
\begin{equation*}
W_{*}=W^{*}:=W.
\end{equation*}
Moreover, the $\zeta$-player (maximiser) has an optimal strategy, i.e.
there exists $\zeta^*\in\mathcal{A}(\mathcal{F}^2_t)$ such that
\begin{equation*}
\operatornamewithlimits{inf\vphantom{p}}_{\xi\in\mathcal{A}_{ac}(\mathcal{F}^1_t)} N(\xi,\zeta^*)=W.
\end{equation*}
\end{theorem}

\begin{Proposition}\label{thm:conv_lipsch_gen}
Under assumptions \ref{eq-integrability-cond}, \ref{ass:regular_gen}, \ref{eq-order-cond}-\ref{ass:filtration} (with $\hat g$ non-decreasing), for any $\zeta \in \mathcal{A}(\mathcal{F}^2_t)$ and $\xi \in \mathcal{A}(\mathcal{F}^1_t)$, there is a sequence $(\xi^n)_{n \ge 1} \subset \mathcal{A}_{ac}(\mathcal{F}^1_t)$ such that
\[
\mathop{\lim\sup}_{n \to \infty} N(\xi^n, \zeta) \le N(\xi, \zeta).
\]
\end{Proposition}

\begin{proof}[{\bf Proof of Theorem \ref{thm:main2}}]
The proof of the existence of the value is identical to the proof of Theorem \ref{thm:main} but with references to Theorem \ref{th-value-cont-strat} and Proposition \ref{thm:conv_lipsch} replaced by the above results. For the existence of the saddle point, the additional requirement that $\hat g$ be non-decreasing {\em and} $\hat f$ be non-increasing guarantees the complete symmetry of the problem when swapping the roles of the two players as in Remark \ref{rem:ineq}. Thus, the same proof as in Theorem \ref{thm:main} can be repeated verbatim.
\end{proof}

In the rest of the section we prove Theorem \ref{th-value-cont-strat_gen} and Proposition \ref{thm:conv_lipsch_gen}. Processes $\hat f, \hat g$ have the following decomposition according to Theorem VI.52 in \cite{DellacherieMeyer} and remarks thereafter: there are $(\mathcal{F}_t)$-stopping times $(\eta^f_k)_{k \ge 1}$ and $(\eta^g_k)_{k \ge 1}$, non-negative $\mathcal{F}_{\eta^f_k}$-measurable random variables $X^f_k$, $k \ge 1$, and non-negative $\mathcal{F}_{\eta^g_k}$-measurable random variables $X^g_k$, $k \ge 1$, such that
\begin{equation}\label{eqn:decomposition_piecewise}
\hat f_t = \sum_{k=1}^\infty (-1)^k X^f_k \ind{\{t \ge \eta^f_k\}}, \qquad \hat g_t = \sum_{k=1}^\infty X^g_k \ind{\{t \ge \eta^g_k\}}.
\end{equation}
The alternating terms in the sum for $(\hat f_t)$ come from interweaving sequences for the two non-decreasing processes $(\hat f^+_t)$ and $(\hat f^-_t)$ from $\mcalL$ arising from the decomposition of the integrable variation process $(\hat f_t)$ (recall $\hat f_t=\hat f^+_t-\hat f^-_t$). This is for notational convenience and results in no mathematical complications, as the infinite sum is absolutely convergent. Recall that $\hat g$ is assumed non-decreasing. The condition that $\hat f_0 = \hat g_0 = 0$ means that $\eta^f_k, \eta^g_k > 0$ for all $k \ge 1$. Since $\hat f, \hat g$ have integrable variation (in the sense of \cite[p. 115]{DellacherieMeyer}), the infinite series in \eqref{eqn:decomposition_piecewise} are dominated by integrable random variables $X^f$ and $X^g$: for any $t \in [0, T]$
\begin{align}\label{eq:Xfg}
|\hat f_t| \le X^f := \sum_{k=1}^\infty X^f_k, \qquad \text{and}\qquad \hat g_t \le X^g := \sum_{k=1}^\infty X^g_k.
\end{align}
To handle convergence of integrals with piecewise-constant processes, we need to extend the results of Proposition \ref{prop:r-convergence}.

\begin{Proposition}\label{prop:A}
For a filtration $(\mathcal{G}_t)\subseteq (\mathcal{F}_t)$, consider $(\rho^n)_{n \ge 1} \subset \tilde{\mathcal{A}^\circ}(\mathcal{G}_t)$ and $\rho \in \tilde{\mathcal{A}^\circ}(\mathcal{G}_t)$ with
\[
\mathbb{P}\Big(\Big\{\omega \in \Omega:\ \lim_{n\to\infty}\rho^n_t(\omega)=\rho_t(\omega),\quad\text{for all $t\in C_\rho(\omega)\cup\{T\}$} \Big\} \Big) = 1.
\]
Then for any $\mathcal{F}$-measurable random variables $\theta\in(0,T]$ and $X\in[0,\infty)$ with $\mathbb{E}[X] < \infty$ we have
\begin{equation}\label{eqn:theta_t}
\mathop{\lim\sup}_{n\to\infty}\mathbb{E}\Big[\int_{[0, T]} \ind{\{t \ge \theta\}} X d\rho^n_t\Big] \le \mathbb{E}\Big[\int_{[0, T]} \ind{\{t \ge \theta\}} X d\rho_t\Big].
\end{equation}
Furthermore, if $\mathbb{P} (\{\omega:\ \theta(\omega) \in C_\rho(\omega) \text{ or } X(\omega) = 0\}) = 1$, then
\begin{equation}\label{eqn:theta_t_eq}
\lim_{n\to\infty}\mathbb{E}\Big[\int_{[0, T]} \ind{\{t \ge \theta\}} X d\rho^n_t\Big] = \mathbb{E}\Big[\int_{[0, T]} \ind{\{t \ge \theta\}} X d\rho_t\Big].
\end{equation}
\end{Proposition}

\begin{proof}
Let $\Omega_0$ be the set of $\omega \in \Omega$ for which $\rho^n_t(\omega) \to \rho_t(\omega)$ for all $t \in C_\rho(\omega) \cup \{T\}$. Fix $\omega \in \Omega_0$. For any $t$ such that $t \in C_\rho(\omega)$ and $t < \theta(\omega)$ (such $t$ always exists as $\theta(\omega) > 0$ and $\rho$ has at most countably many jumps on any bounded interval) we have $\rho^n_t(\omega) \le \rho^n_{\theta(\omega)-}(\omega)$ so that by assumption
\[
\mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{n \to \infty} \rho^n_{\theta(\omega)-} (\omega) \ge \rho_{t} (\omega).
\]
Since $C_{\rho}(\omega)$ is dense in $(0, T)$, by arbitrariness of $t<\theta(\omega)$ we have
\begin{equation}\label{eqn:hash}
\mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{n \to \infty} \rho^n_{\theta(\omega)-} (\omega) \ge \rho_{\theta(\omega)-} (\omega).
\end{equation}
We rewrite the integral as follows: $\int_{[0, T]} \ind{\{t \ge \theta\}} X d\rho^n_t = X (\rho^n_T - \rho^n_{\theta-})$. Therefore,
\begin{align*}
\mathop{\lim\sup}_{n\to\infty}\mathbb{E}\Big[\int_{[0, T]} \ind{\{t \ge \theta\}} X d\rho^n_t\Big] = \mathop{\lim\sup}_{n\to\infty}\mathbb{E}\big[X (\rho^n_T - \rho^n_{\theta-}) \big] = \mathop{\lim\sup}_{n \to \infty} \mathbb{E} [X \rho^n_T] - \mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{n \to \infty} \mathbb{E}[X \rho^n_{\theta-}].
\end{align*}
The dominated convergence theorem yields that $\lim_{n \to \infty} \mathbb{E} [X \rho^n_T] = \mathbb{E} [X \rho_T]$, while applying Fatou's lemma gives
\[
\mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{n \to \infty} \mathbb{E}[X \rho^n_{\theta-}] \ge \mathbb{E}[\mathop{\lim\operatornamewithlimits{inf\vphantom{p}}}_{n \to \infty} X \rho^n_{\theta-}] \ge \mathbb{E}[ X \rho_{\theta-}],
\]
where the last inequality is by \eqref{eqn:hash}. Combining the above estimates completes the proof of \eqref{eqn:theta_t}.

Assume now that $\theta(\omega) \in C_\rho(\omega)$ or $X(\omega) = 0$ for $\mathbb{P}$\ae $\omega \in \Omega_0$. This and the dominated convergence theorem yield
\[
\mathbb{E}[X(\rho_T - \rho_{\theta-})] = \mathbb{E}[X(\rho_T - \rho_{\theta})] = \lim_{n \to \infty} \mathbb{E}[X(\rho^n_T - \rho^n_{\theta})] \le \mathop{\lim\sup}_{n \to \infty} \mathbb{E}[X(\rho^n_T - \rho^n_{\theta-})],
\]
where the last inequality follows from the monotonicity of $\rho^n$. This estimate and \eqref{eqn:theta_t} prove \eqref{eqn:theta_t_eq}.
\end{proof}

\begin{remark}\label{rem:contrad0}
The inequality \eqref{eqn:theta_t} in Proposition \ref{prop:A} can be strict even if $\rho^n_t \to \rho_t$ for all $t \in [0, T]$ because this condition does not imply that $\rho^n_{t-} \to \rho_{t-}$. One needs further continuity assumptions on $(\rho_t)$ to establish the equality \eqref{eqn:theta_t_eq}.
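The following simple deterministic example illustrates the failure: fix $\theta\in(0,T)$, let $X\equiv 1$, and take $\rho^n_t=\ind{\{t\ge \theta-\frac{1}{n}\}}$ (for $n>1/\theta$) and $\rho_t=\ind{\{t\ge \theta\}}$. Then $\rho^n_t\to\rho_t$ for every $t\in[0,T]$, while $\rho^n_{\theta-}=1$ for all $n$ and $\rho_{\theta-}=0$, so that
\[
\lim_{n\to\infty}\int_{[0, T]} \ind{\{t \ge \theta\}} d\rho^n_t=\lim_{n\to\infty}\big(\rho^n_T-\rho^n_{\theta-}\big)=0<1=\rho_T-\rho_{\theta-}=\int_{[0, T]} \ind{\{t \ge \theta\}} d\rho_t.
\]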
\end{remark}

\begin{proof}[{\bf Proof of Theorem \ref{th-value-cont-strat_gen}}]
Compared to the proof of the analogous result under the more stringent condition \ref{ass:regular} (i.e., Theorem \ref{th-value-cont-strat}), we only need to establish lower and upper semicontinuity of the functional $N$, while all other remaining arguments stay valid. For the semicontinuity, we extend arguments of Lemma \ref{lem:semi-cont}.

\emph{Upper semicontinuity of $N(\xi,\cdot)$}. Fix $\xi\in\mathcal{A}_{ac}(\mathcal{F}^1_t)$ and consider a sequence $(\zeta^{n})_{n \ge 1}\subset\mathcal{A}(\mathcal{F}^2_t)$ converging to $\zeta \in \mathcal{A}(\mathcal{F}^2_t)$ strongly in $\mathcal{S}$. Arguing by contradiction, we assume that there is a subsequence of $(\zeta^n)_{n \ge 1}$, denoted the same with an abuse of notation, that converges $(\mathbb{P} \times \lambda)\ae$ to $\zeta$ and such that
\begin{equation*}
\lim_{n\to\infty} N(\xi,\zeta^{n}) > N(\xi,\zeta).
\end{equation*}
Without loss of generality, we can further require that $(\zeta^n)_{n \ge 1} \subset {\mathcal{A}^\circ}(\mathcal{F}^2_t)$ and $\zeta \in {\mathcal{A}^\circ}(\mathcal{F}^2_t)$.

Since $\xi$ is absolutely continuous on $[0, T)$,
\begin{equation}
\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta^{n}_{t})d\xi_t\bigg]=\mathbb{E}\bigg[\int_{[0, T)} f_t(1-\zeta_{t})d\xi_t \bigg]
\label{eq-int-conv-1a}
\end{equation}
by the dominated convergence theorem. For the last two terms of $N(\xi, \zeta^n)$ (recall \eqref{eqn:N_cont}) we have
\begin{align*}
\mathbb{E}\bigg[\int_{[0, T)} g_t(1-\xi_t)d\zeta^n_t + h_T\Delta\xi_T\Delta\zeta^n_T\bigg] &= \mathbb{E}\bigg[\int_{[0, T]} g_t(1-\xi_{t-})d\zeta^n_t + (h_T - g_T) \Delta\xi_T\Delta\zeta^n_T\bigg].
\end{align*}
As in the proof of Lemma \ref{lem:semi-cont}, for the regular part $\tilde g$ of the process $g$ we have
\begin{equation}\label{eqn:tl_g_conv}
\lim_{n \to \infty} \mathbb{E}\bigg[\int_{[0, T]} \tilde g_t(1-\xi_{t-})d\zeta^n_t \bigg] = \mathbb{E}\bigg[\int_{[0, T]} \tilde g_t(1-\xi_{t-})d\zeta_t \bigg].
\end{equation}
For the pure jump part $\hat g$ of the process $g$, we will prove that
\begin{equation}\label{eqn:hat_g_conv}
\mathop{\lim\sup}_{n \to \infty} \mathbb{E}\bigg[\int_{[0, T]} \hat g_t(1-\xi_{t-})d\zeta^n_t \bigg] \le \mathbb{E}\bigg[\int_{[0, T]} \hat g_t(1-\xi_{t-})d\zeta_t \bigg].
\end{equation}
To this end, let us define
\[
R^n_t = \int_{[0, t]} (1-\xi_{s-})d\zeta^n_s, \qquad R_t = \int_{[0, t]} (1-\xi_{s-})d\zeta_s, \qquad \text{for $t\in[0,T]$,}
\]
with $R^n_{0-} = R_{0-} = 0$, and then we are going to apply Proposition \ref{prop:A} with $R^n$ and $R$ instead of $\rho^n$ and $\rho$. We need $R^n_t (\omega) \to R_t(\omega)$ as $n\to\infty$ for $t \in C_R(\omega)=C_{\zeta}(\omega)\cup \{t\in[0,T]:\xi_t(\omega)=1\}$, for $\mathbb{P}$\ae $\omega \in \Omega$. The latter is indeed true. Setting $\Omega_0 = \{\omega \in \Omega:\ \lim_{n \to \infty} \zeta^n_t(\omega) = \zeta_t(\omega)\ \forall\, t \in C_\zeta(\omega) \}$, we have $\mathbb{P}(\Omega_0) = 1$ by Lemma \ref{lem:cadlag_convergence}.
For any $\omega \in \Omega_0$ and $t \in C_\zeta(\omega)$, invoking the absolute continuity of $(\xi_t)$, we obtain (omitting the dependence on $\omega$) \begin{equation*} \lim_{n \to \infty} R^n_t = \lim_{n \to \infty} \Big[ (1-\xi_t) \zeta^n_t + \int_{[0, t]} \zeta^n_s d \xi_s \Big] = (1-\xi_t) \zeta_t + \int_{[0, t]} \zeta_s d \xi_s = R_t, \end{equation*} where the convergence of the second term is the consequence of the dominated convergence theorem and the fact that $\lambda ([0,T]\setminus C_\zeta(\omega)) = 0$ and $\zeta^n_T = \zeta_T = 1$. For any $k \ge 1$, since $X^g_k \ge 0$, Proposition \ref{prop:A} gives (recall \eqref{eqn:decomposition_piecewise}) \begin{equation}\label{eqn:limsup01} \mathop{\lim\sup}_{n \to \infty} \mathbb{E} \bigg[ \int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R^n_t \bigg] \le \mathbb{E} \bigg[\int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R_t \bigg]. \end{equation} We apply the decomposition of $\hat g$ and then the monotone convergence theorem \[ \mathbb{E}\bigg[\int_{[0, T]} \hat g_t(1-\xi_{t-})d\zeta^n_t \bigg] = \mathbb{E} \bigg[ \sum_{k=1}^\infty \int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R^n_t \bigg] = \sum_{k=1}^\infty \mathbb{E} \bigg[ \int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R^n_t \bigg]. \] Since $\hat g \in \mcalL$ we have the bound (recall \eqref{eq:Xfg}) \[ \sum_{k=1}^\infty \sup_n \mathbb{E} \bigg[ \int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R^n_t \bigg] \le \sum_{k=1}^\infty \mathbb{E} [ X^g_k ] < \infty . \] Then we can apply (reverse) Fatou's lemma (with respect to the counting measure on $\mathbb{N}$) \begin{align*} \mathop{\lim\sup}_{n \to \infty} \sum_{k=1}^\infty \mathbb{E} \bigg[ \int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R^n_t \bigg] &\le \sum_{k=1}^\infty \mathop{\lim\sup}_{n \to \infty} \mathbb{E} \bigg[ \int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R^n_t \bigg]\\ &\le \sum_{k=1}^\infty \mathbb{E} \bigg[\int_{[0, T]} X^g_k \ind{\{t \ge \eta^g_k\}} d R_t \bigg] = \mathbb{E}\bigg[\int_{[0, T]} \hat g_t(1-\xi_{t-})d\zeta_t \bigg], \end{align*} where the last inequality is due to \eqref{eqn:limsup01} and the final equality follows by monotone convergence and the decomposition of $\hat g$. This completes the proof of \eqref{eqn:hat_g_conv}. Recalling that $g_T \le h_T$, we obtain from Lemma \ref{prop-terminal-time-jump-limit} \begin{equation*} \mathop{\lim\sup}_{n\to\infty} \mathbb{E}\big[(h_T-g_T)\Delta\xi_T\Delta\zeta^n_T\big] \le \mathbb{E}\big[(h_T-g_T)\Delta\xi_T\Delta\zeta_T\big], \end{equation*} and combining the latter with \eqref{eqn:tl_g_conv}, \eqref{eqn:hat_g_conv} and \eqref{eq-int-conv-1a} shows that \begin{align}\label{eq:usc_gen} \mathop{\lim\sup}_{n \to \infty} N(\xi, \zeta^n) \le N(\xi, \zeta). \end{align} Hence we have a contradiction with $\lim_{n\to\infty} N(\xi,\zeta^n) >N(\xi,\zeta)$, which proves the upper semicontinuity. \emph{Lower semicontinuity of $N(\cdot,\zeta)$}. The proof follows closely the argument of the proof of Lemma \ref{lem:semi-cont}: we fix $\zeta\in\mathcal{A}(\mathcal{F}^2_t)$, consider a sequence $(\xi^{n})_{n\ge 1}\subset\mathcal{A}_{ac}(\mathcal{F}^1_t)$ converging to $\xi \in \mathcal{A}_{ac} (\mathcal{F}^1_t)$ strongly in $\mathcal{S}$, assume that \eqref{eqn:two_terms2} holds and reach a contradiction. We only show how to handle the convergence for $(\hat f_t)$ as all other terms are handled by the proof of Lemma \ref{lem:semi-cont}. 
By Lemma \ref{lem:cadlag_convergence} and the continuity of $(\xi_t)$ we have $\mathbb{P} \big( \lim_{n\to\infty} \xi^{n}_t(\omega) = \xi_t(\omega)\ \forall\,t \in [0,T)\big) = 1$. Let
\[
R^n_t = \int_{[0,t]} (1-\zeta_{s-})d\xi^n_s, \qquad R_t = \int_{[0,t]} (1-\zeta_{s-})d\xi_s,
\]
with $R^n_{0-} = R_{0-} = 0$. Due to the continuity of $(\xi^n_t)$ and $(\xi_t)$ for $t \in [0, T)$, the processes $(R^n_t)$ and $(R_t)$ are continuous on $[0, T)$ with a possible jump at $T$. From \eqref{eqn:conv_R} in the proof of Proposition \ref{prop-specific-convergence-2} we conclude that for $\mathbb{P}$\ae $\omega \in \Omega$
\[
\lim_{n \to \infty} R^n_t(\omega) = R_t(\omega) \quad \text{for all $t \in [0, T]$}.
\]
Since $\Delta \hat f_T = 0$ (see Assumption \ref{ass:regular_gen}), there is a decomposition such that $X^f_k \ind{\{\eta^f_k = T\}} = 0$ $\mathbb{P}$\as for all $k$. Recalling that $(R_t)$ is continuous on $[0, T)$, we can apply \eqref{eqn:theta_t_eq} in Proposition \ref{prop:A}: for any $k \ge 1$
\[
\lim_{n \to \infty} \mathbb{E} \bigg[ \int_{[0, T]} X^f_k \ind{\{t \ge \eta^f_k\}} d R^n_t \bigg] = \mathbb{E} \bigg[\int_{[0, T]} X^f_k \ind{\{t \ge \eta^f_k\}} d R_t \bigg].
\]
Combining the latter with decomposition \eqref{eqn:decomposition_piecewise} and the dominated convergence theorem (with the bound $X^f$) we obtain
\[
\lim_{n \to \infty} \mathbb{E} \bigg[ \int_{[0, T]} \hat f_t d R^n_t \bigg] = \mathbb{E} \bigg[\int_{[0, T]} \hat f_t d R_t \bigg].
\]
Arguing as in the proof of Corollary \ref{cor-specific-convergence-2}, we have
\begin{equation*}
\lim_{n \to \infty} \mathbb{E} \bigg[ \int_{[0, T)} \hat f_t (1-\zeta_t) d \xi^n_t + \hat f_T \Delta \zeta_T \Delta \xi^n_T\bigg] = \mathbb{E} \bigg[\int_{[0, T)} \hat f_t (1-\zeta_t) d \xi_t + \hat f_T \Delta \zeta_T \Delta \xi_T \bigg].
\end{equation*}
Corollary \ref{cor-specific-convergence-2} implies an analogous convergence for $(\tilde f_t)$ and the rest of the proof of lower semicontinuity from Lemma \ref{lem:semi-cont} applies.
\end{proof}

\begin{remark}\label{rem:contrad}
In the arguments above, item (4) in Assumption \ref{ass:regular_gen} implies in particular that the payoff process $(g_t)$ does not have predictable jumps that are $\mathbb{P}$\as negative. This assumption cannot be further relaxed, as relaxing it may cause the proof of the upper semicontinuity in Theorem \ref{th-value-cont-strat_gen} to fail. Recall that the process $(g_t)$ corresponds to the payoff of the second player and her strategy $(\zeta_t)$ is not required to be absolutely continuous. For example, fix $t_0\in(0,T)$ and take $g_t=1-\ind{\{t\ge t_0\}}$, $\zeta_t = \ind{\{t\ge t_0\}}$ and $\xi_t = \ind{\{t = T\}}$. Let us consider the sequence $\zeta^n_t =\ind{\{t\ge t_0-\frac{1}{n}\}}$, which converges to $\zeta$ pointwise and also strongly in $\mathcal{S}$. We have
\begin{equation*}
\int_{[0, T]} g_t(1-\xi_{t-})d\zeta^n_t\equiv 1, \:\:\: \text{for all $n$, but}\:\:\: \int_{[0, T]} g_t(1-\xi_{t-})d\zeta_t\equiv 0,
\end{equation*}
hence \eqref{eqn:hat_g_conv} fails and so does \eqref{eq:usc_gen}.
\end{remark}

\begin{proof}[{\bf Proof of Proposition \ref{thm:conv_lipsch_gen}}]
Here, too, we only show how to extend the proof of Proposition \ref{thm:conv_lipsch} to the more general setting. Fix $\zeta \in {\mathcal{A}^\circ}(\mathcal{F}^2_t)$ and $\xi \in {\mathcal{A}^\circ}(\mathcal{F}^1_t)$. Construct a sequence $(\xi^n) \subset {\mathcal{A}^\circ_{ac}}(\mathcal{F}^1_t)$ as in the proof of Proposition \ref{thm:conv_lipsch}, i.e., $\xi^n_t=\int_{[0,t]}\phi^n_{t-s}\,d\xi_s$ for $t\in[0,T)$ and $\xi^n_T=1$.
It is sufficient to show that
\begin{equation}\label{eqn:lipch_conv_liminf}
\mathop{\lim\sup}_{n \to \infty} N(\xi^n, \zeta) \le N(\xi, \zeta).
\end{equation}
From the proof of Proposition \ref{thm:conv_lipsch} we have that
\begin{equation}\label{eqn:M_ij_conv_gen}
\begin{aligned}
&\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)}\tilde f_{t}(1-\zeta_{t})d\xi^{n}_{t}+ \int_{[0, T)} \tilde g_t(1-\xi^{n}_{t})d\zeta_t + h_T\Delta\xi^{n}_T\Delta\zeta_T\bigg]\\
&= \mathbb{E}\bigg[\int_{[0, T)}\tilde f_{t}(1-\zeta_{t})d\xi_{t} + \int_{[0, T)} \tilde g_t(1-\xi_{t-})d\zeta_t + h_T\Delta\xi_T\Delta\zeta_T\bigg].
\end{aligned}
\end{equation}
For $t \in [0, T]$, define
\[
R^n_t = \int_{[0, t]} (1-\zeta_{s-})d\xi^n_s, \qquad R_t = \int_{[0, t]} (1-\zeta_{s})d\xi_s
\]
with $R^n_{0-} = R_{0-} = 0$. Corollary \ref{cor:lim_R_in-t-} implies that for $\mathbb{P}$\ae $\omega \in \Omega$
\begin{align}\label{eq:convR}
\lim_{n \to \infty} R^n_{t-}(\omega) = R_{t-}(\omega) \quad \text{for all $t \in [0, T]$}.
\end{align}
By the decomposition of $(\hat f_t)$ in \eqref{eqn:decomposition_piecewise} and the dominated convergence theorem for the infinite sum (recalling \eqref{eq:Xfg}) we obtain
\begin{align*}
\mathbb{E}\bigg[\int_{[0, T)}\hat f_{t}(1-\zeta_{t})d\xi^{n}_{t}\bigg] &= \mathbb{E}\bigg[\int_{[0, T)}\hat f_{t}(1-\zeta_{t-})d\xi^{n}_{t}\bigg] = \sum_{k=1}^\infty \mathbb{E}\bigg[(-1)^k \int_{[0, T)} X^f_k \ind{\{t \ge \eta^f_k\}} d R^n_t \bigg]\\
&= \sum_{k=1}^\infty \mathbb{E} \big[ (-1)^k X^f_k (R^n_{T-} - R^n_{\eta^f_k-})\big],
\end{align*}
where the first equality follows from the continuity of $(\xi^n_t)$ on $[0, T)$. We further apply dominated convergence (with respect to the product of the counting measure on $\mathbb N$ and the measure $\mathbb{P}$) to obtain
\begin{equation}\label{eqn:part2}
\begin{aligned}
\lim_{n\to\infty}\mathbb{E}\bigg[\!\int_{[0, T)}\!\hat f_{t}(1-\zeta_{t})d\xi^{n}_{t}\bigg] &=\sum_{k=1}^\infty \mathbb{E} \big[(-1)^k \lim_{n\to\infty} X^f_k (R^n_{T-} - R^n_{\eta^f_k-})\big]\\
&= \sum_{k=1}^\infty \mathbb{E} \big[ (-1)^k X^f_k (R_{T-} - R_{\eta^f_k-})\big]= \mathbb{E}\bigg[\int_{[0, T)}\hat f_{t}(1-\zeta_{t})d\xi_{t}\bigg],
\end{aligned}
\end{equation}
where the second equality uses \eqref{eq:convR} and the final one the decomposition of $\hat f$. Recalling that $\xi^n_t\to\xi_{t-}$ as $n\to\infty$ by construction, dominated convergence gives
\begin{equation}\label{eqn:part3}
\lim_{n\to\infty}\mathbb{E}\bigg[\int_{[0, T)}\hat g_{t}(1-\xi^n_{t})d\zeta_{t}\bigg] = \mathbb{E}\bigg[\int_{[0, T)}\hat g_{t}(1-\xi_{t-})d\zeta_{t}\bigg].
\end{equation}
Putting together \eqref{eqn:M_ij_conv_gen}, \eqref{eqn:part2} and \eqref{eqn:part3} shows
\[
\lim_{n \to \infty} N(\xi^n, \zeta) = \mathbb{E}\bigg[\int_{[0, T)} f_{t}(1-\zeta_{t})d\xi_{t} + \int_{[0, T)} g_t(1-\xi_{t-})d\zeta_t + h_T\Delta\xi_T\Delta\zeta_T\bigg].
\]
It remains to notice that by \eqref{eq-remove-common-jumps} the right-hand side is dominated by $N(\xi, \zeta)$, which completes the proof of \eqref{eqn:lipch_conv_liminf}.
\end{proof}

\subsection{Proof of Theorem \ref{thm:ef_0_value}}
\label{sec:ef_functional}

Randomisation devices $Z_\tau$ and $Z_\sigma$ associated with a pair $(\tau,\sigma)\in\mathcal{T}^R(\mathcal{F}^1_t)\times\mathcal{T}^R(\mathcal{F}^2_t)$ are independent of $\mathcal{G}$.
Denoting by $(\xi_t) \in {\mathcal{A}^\circ}(\mathcal{F}^1_t)$ and $(\zeta_t) \in {\mathcal{A}^\circ}(\mathcal{F}^2_t)$ the generating processes for $\tau$ and $\sigma$, respectively, the statement of Proposition \ref{prop-functionals-equal} can be extended to encompass the conditional functional \eqref{eqn:cond_func}:
\begin{equation}\label{eqn:cond_reform}
\mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big] = \mathbb{E}\bigg[\int_{[0, T)} f_{t}(1-\zeta_{t})d \xi_t + \int_{[0, T)} g_{t}(1-\xi_t) d \zeta_t + \sum_{t \in [0, T]} h_t \Delta \xi_t \Delta \zeta_t \bigg|\mathcal{G}\bigg].
\end{equation}
We can also repeat the same argument as in Remark \ref{rem-Laraki-Solan} to obtain that
\[
\underline V:=\operatornamewithlimits{\mathrm{ess\,sup}}_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)}\operatornamewithlimits{\mathrm{ess\,inf\vphantom{p}}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big]=\operatornamewithlimits{\mathrm{ess\,sup}}_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)}\operatornamewithlimits{\mathrm{ess\,inf\vphantom{p}}}_{\tau \in \mathcal{T}(\mathcal{F}^1_t)} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big]
\]
and
\[
\overline V:=\operatornamewithlimits{\mathrm{ess\,inf\vphantom{p}}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} \operatornamewithlimits{\mathrm{ess\,sup}}_{\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big]=\operatornamewithlimits{\mathrm{ess\,inf\vphantom{p}}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)}\operatornamewithlimits{\mathrm{ess\,sup}}_{\sigma\in\mathcal{T}(\mathcal{F}^2_t)} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big].
\]
Notice that $\overline V\ge \underline V$, $\mathbb{P}$\as We will show that
\begin{align}\label{eq:EV}
\mathbb{E}[\,\underline V\,]=\mathbb{E}[\,\overline V\,],
\end{align}
so that $\overline V= \underline V$\,, $\mathbb{P}$\as as needed.

In order to prove \eqref{eq:EV}, let us define
\begin{equation*}
\overline{M}(\tau):=\operatornamewithlimits{\mathrm{ess\,sup}}_{\sigma \in \mathcal{T}(\mathcal{F}^2_t)} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big],\quad\text{for $\tau\in\mathcal{T}^R(\mathcal{F}^1_t)$},
\end{equation*}
and
\begin{align*}
\underline{M}(\sigma):=\operatornamewithlimits{\mathrm{ess\,inf\vphantom{p}}}_{\tau \in \mathcal{T}(\mathcal{F}^1_t)} \mathbb{E}\big[ \mathcal{P}(\tau, \sigma) \big| \mathcal{G} \big],\quad\text{for $\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)$}.
\end{align*}
These are two standard optimal stopping problems and the theory of the Snell envelope applies (see, e.g., \cite[Appendix D]{Karatzas1998} and \cite{elkaroui1981}). We adapt some results from that theory to suit our needs in the game setting.

\begin{Lemma}\label{lem:directed}
The family $\{\overline{M}(\tau),\,\tau\in \mathcal{T}^R(\mathcal{F}^1_t)\}$ is downward directed and the family $\{\underline{M}(\sigma),\,\sigma\in \mathcal{T}^R(\mathcal{F}^2_t)\}$ is upward directed.
\end{Lemma}

\begin{proof}
Let $\tau^{(1)},\tau^{(2)}\in\mathcal{T}^R(\mathcal{F}^1_t)$ and let $\xi^{(1)},\xi^{(2)}\in{\mathcal{A}^\circ}(\mathcal{F}^1_t)$ be the corresponding generating processes. Fix the $\mathcal{G}$-measurable event $B=\{\overline{M}(\tau^{(1)})\le\overline{M}(\tau^{(2)})\}$ and define another $(\mathcal{F}^1_t)$-randomised stopping time as $\hat{\tau}=\tau^{(1)} \ind{B} +\tau^{(2)} \ind{B^c}$.
We use $\mathcal{G}\subset\mathcal{F}^1_0$ to ensure that $\hat \tau\in\mathcal{T}^R(\mathcal{F}^1_t)$. The generating process of $\hat \tau$ reads $\hat \xi_t=\xi^{(1)}_t \ind{B} +\xi^{(2)}_t \ind{B^c}$ for $t\in[0,T]$. Using the linear structure of $\hat \xi$ and recalling \eqref{eqn:cond_reform}, for any $\sigma\in\mathcal{T}(\mathcal{F}^2_t)$, we have
\begin{align*}
\mathbb{E}\big[\mathcal{P}(\hat{\tau}, \sigma)|\mathcal{G}\big] &= \ind{B}\mathbb{E}\bigg[\int_{[0, \sigma)} f_{u}d \xi^{(1)}_u + g_{\sigma}(1-\xi^{(1)}_\sigma) + h_\sigma \Delta \xi^{(1)}_\sigma \bigg|\mathcal{G}\bigg]\\
&\hspace{12pt}+\ind{B^c}\mathbb{E}\bigg[\int_{[0, \sigma)} f_{u}d \xi^{(2)}_u + g_{\sigma}(1-\xi^{(2)}_\sigma) + h_\sigma \Delta \xi^{(2)}_\sigma\bigg|\mathcal{G}\bigg]\\
&=\ind{B}\mathbb{E}\big[\mathcal{P}(\tau^{(1)} ,\sigma)|\mathcal{G}\big]+\ind{B^c}\mathbb{E}\big[\mathcal{P}(\tau^{(2)} ,\sigma)|\mathcal{G}\big]\\
&\le\ind{B}\overline{M}(\tau^{(1)})+\ind{B^c}\overline{M}(\tau^{(2)})=\overline{M}(\tau^{(1)})\wedge\overline{M}(\tau^{(2)}),
\end{align*}
where the inequality is by the definition of the essential supremum and the final equality by the definition of the event $B$. Thus, taking the supremum over $\sigma\in\mathcal{T}(\mathcal{F}^2_t)$ we get
\[
\overline{M}(\hat \tau)\le \overline{M}(\tau^{(1)})\wedge\overline{M}(\tau^{(2)}),
\]
hence the family $\{\overline{M}(\tau),\,\tau\in \mathcal{T}^R(\mathcal{F}^1_t)\}$ is downward directed. A symmetric argument proves that the family $\{\underline{M}(\sigma),\,\sigma\in \mathcal{T}^R(\mathcal{F}^2_t)\}$ is upward directed.
\end{proof}

An immediate consequence of the lemma and of the definition of the essential supremum/infimum is that (see, e.g., \cite[Lemma I.1.3]{Peskir2006}) we can find sequences $(\sigma_n)_{n\ge 1}\subset \mathcal{T}^R(\mathcal{F}^2_t)$ and $(\tau_n)_{n\ge 1}\subset\mathcal{T}^R(\mathcal{F}^1_t)$ such that $\mathbb{P}$\as
\begin{align}\label{eq:limV}
\overline V=\lim_{n\to\infty}\overline{M}(\tau_n)\quad\text{and}\quad\underline V=\lim_{n\to\infty}\underline{M}(\sigma_n),
\end{align}
where the convergence is monotone in both cases. Analogous results hold for the optimisation problems defining $\overline M(\tau)$ and $\underline M(\sigma)$. The proof of the following lemma is similar to that of Lemma \ref{lem:directed} and is omitted.

\begin{Lemma}\label{lem:directed2}
The family $\{\mathbb{E}\big[\mathcal{P}(\tau,\sigma)|\mathcal{G}\big],\,\sigma\in \mathcal{T}(\mathcal{F}^2_t)\}$ is upward directed for each $\tau\in \mathcal{T}^R(\mathcal{F}^1_t)$. The family $\{\mathbb{E}\big[\mathcal{P}(\tau,\sigma)|\mathcal{G}\big],\,\tau\in \mathcal{T}(\mathcal{F}^1_t)\}$ is downward directed for each $\sigma\in \mathcal{T}^R(\mathcal{F}^2_t)$.
\end{Lemma}

It follows that for each $\tau\in\mathcal{T}^R(\mathcal{F}^1_t)$ and $\sigma\in\mathcal{T}^R(\mathcal{F}^2_t)$, there are sequences $(\sigma^\tau_n)_{n\ge 1}\subset\mathcal{T}(\mathcal{F}^2_t)$ and $(\tau^\sigma_n)_{n\ge 1}\subset\mathcal{T}(\mathcal{F}^1_t)$ such that
\begin{align}\label{eq:limM}
\overline M(\tau)=\lim_{n\to\infty}\mathbb{E}\big[\mathcal{P}(\tau,\sigma^\tau_n)|\mathcal{G}\big]\quad\text{and}\quad\underline M(\sigma)=\lim_{n\to\infty}\mathbb{E}\big[\mathcal{P}(\tau^\sigma_n,\sigma)|\mathcal{G}\big],
\end{align}
where the convergence is monotone in both cases. Equipped with these results we can prove the following lemma, which will quickly lead to \eqref{eq:EV}.

\begin{Lemma}\label{cor-exp-for-values}
Recall $V_*$ and $V^*$ as in Definition \ref{def-value-rand-strat}.
We have
\begin{equation}\label{eq-cor-exp}
\mathbb{E}[\overline{V}] = V^*, \qquad \text{and}\qquad \mathbb{E}[\underline{V}] = V_*.
\end{equation}
\end{Lemma}

\begin{proof}
Fix $\tau\in\mathcal{T}^R(\mathcal{F}^1_t)$. By \eqref{eq:limM} and the monotone convergence theorem
\[
\mathbb{E}[ \overline{M}(\tau) ] = \lim_{n\to\infty}\mathbb{E}[\mathcal{P}(\tau,\sigma^\tau_n)]\le \sup_{\sigma\in\mathcal{T}(\mathcal{F}^2_t)}\mathbb{E}[\mathcal{P}(\tau,\sigma)].
\]
The opposite inequality follows from the fact that $\overline{M}(\tau) \ge \mathbb{E}[\mathcal{P}(\tau,\sigma)|\mathcal{G}]$ for any $\sigma \in \mathcal{T}(\mathcal{F}^2_t)$ by the definition of the essential supremum. Therefore, we have
\begin{equation}\label{eqn:M1}
\mathbb{E}[ \overline{M}(\tau) ] = \sup_{\sigma\in\mathcal{T}(\mathcal{F}^2_t)}\mathbb{E}[\mathcal{P}(\tau,\sigma)].
\end{equation}
From \eqref{eq:limV}, similar arguments as above prove that
\begin{equation}\label{eqn:M2}
\mathbb{E}[\overline{V}] = \operatornamewithlimits{inf\vphantom{p}}_{\tau \in \mathcal{T}^R(\mathcal{F}^1_t)} \mathbb{E}[ \overline{M}(\tau) ].
\end{equation}
Combining \eqref{eqn:M1} and \eqref{eqn:M2} completes the proof that $\mathbb{E}[\overline{V}] = V^*$. The second part of the statement requires analogous arguments.
\end{proof}

Finally, \eqref{eq-cor-exp} and Theorem \ref{thm:main2} imply \eqref{eq:EV}, which concludes the proof of Theorem \ref{thm:ef_0_value}.

\section{Counterexamples} \label{sec:Nikita-examples}

In the three subsections below we show that: (a) relaxing condition \ref{eq-order-cond} may lead to a game without a value, (b) in situations where one player has all the informational advantage, the use of randomised stopping times may be beneficial for the uninformed player as well, and (c) Assumption \ref{ass:regular_gen} is tight in requiring that either $(\hat f_t)$ is non-increasing or $(\hat g_t)$ is non-decreasing.

In order to keep the exposition simple we consider the framework of Section \ref{subsec:game_1} with $\I = 2$, $\J = 1$, and impose that $(\mathcal{F}^p_t)$ be the trivial filtration (hence all payoff processes are deterministic, since they are $(\mathcal{F}^p_t)$-adapted). Furthermore, we restrict our attention to the case in which $f^{1,1} = f^{2,1} = f$, $g^{1,1} = g^{2,1} = g$ and $h^{1,1}_t\ind{\{t<T\}}=h^{2,1}_t\ind{\{t<T\}}=f_t\ind{\{t<T\}}$. Only the terminal payoff depends on the scenario, i.e., $h^{1,1}_T\neq h^{2,1}_T$ (both deterministic). For notational simplicity we set $h^{1}:=h^{1,1}_T$ and $h^{2}:=h^{2,1}_T$. Notice that only the first player (minimiser) observes the true value of $\mcalI$, so she has a strict informational advantage over the second player (maximiser). The second player will be referred to as the \emph{uninformed player} and the first player as the \emph{informed player}. We denote by $\mathcal{T}^R$ the set of $(\mathcal{F}^p_t)$-randomised stopping times.

The informed player chooses two randomised stopping times $\tau_1, \tau_2$ (one for each scenario, recall Lemma \ref{lem:tau_decomposition}) with the generating processes $\xi^1,\xi^2$ which, due to the triviality of the filtration $(\mathcal{F}_t^p)$, are deterministic functions. Pure stopping times are constants in $[0,T]$. Similarly, the uninformed player's randomised stopping time $\sigma$ has the generating process $\zeta$ that is a deterministic function.
\subsection{A game without a value when \ref{eq-order-cond} fails}

Let us consider specific payoff functions
\begin{equation*}
f \equiv 1,\quad g_t=\frac{1}{2}t, \quad h^1= 2,\quad h^2=0,
\end{equation*}
and let us also set $T=1$, $\pi_1 = \pi_2 =\frac{1}{2}$.

\begin{Proposition}
In the example of this subsection we have
\begin{equation*}
V_{*}\le\frac{1}{2} \qquad \text{and}\qquad V^* > \frac12,
\end{equation*}
so the game does not have a value.
\end{Proposition}

\begin{proof}
First we show that $V_{*}\le\frac{1}{2}$. Recall that (cf. Remark \ref{rem-Laraki-Solan})
\begin{equation*}
V_*=\sup_{\sigma\in\mathcal{T}^R}\operatornamewithlimits{inf\vphantom{p}}_{\tau_1,\tau_2\in\mathcal{T}^R}N((\tau_1,\tau_2),\sigma)=\sup_{\sigma\in\mathcal{T}^R}\operatornamewithlimits{inf\vphantom{p}}_{\tau_1,\tau_2\in[0,1]}N((\tau_1,\tau_2),\sigma),
\end{equation*}
so we can take $\tau_1, \tau_2 \in [0, 1]$ deterministic in the arguments below. Take any $\sigma \in \mathcal{T}^R$ and the corresponding generating process $(\zeta_t)$ which is, due to the triviality of the filtration $(\mathcal{F}^p_t)$, a deterministic function. For $\tau_1\in[0,1)$, $\tau_2=1$ we obtain
\begin{align*}
N((\tau_1,\tau_2),\sigma)&= \mathbb{E}\big[(\frac{1}{2}\sigma \indd{\sigma<\tau_1}+1\cdot \ind{\{\sigma\ge\tau_1\}})\indd{\mcalI = 1} + (\frac{1}{2}\sigma \indd{\sigma<1}+0 \cdot \indd{\sigma=1}) \indd{\mcalI = 2}\big]\\
&\le \frac{1}{2}(\frac{1}{2}\zeta_{\tau_1-}+(1-\zeta_{\tau_1-}))+ \frac{1}{4}\zeta_{1-} = \frac{1}{2}-\frac{1}{4}\zeta_{\tau_1-}+\frac{1}{4}\zeta_{1-},
\end{align*}
where we used that $\sigma$ is bounded above by $1$ and that $\mcalI$ is independent of $\sigma$ with $\mathbb{P}(\mcalI=1)=\mathbb{P}(\mcalI=2)=\tfrac{1}{2}$. In particular,
\begin{equation*}
\operatornamewithlimits{inf\vphantom{p}}_{\tau_1,\tau_2 \in [0, 1]} N((\tau_1,\tau_2),\sigma) \le \lim_{\tau_1\to 1-} N((\tau_1,1),\sigma) = \frac{1}{2}.
\end{equation*}
This proves that $V_* \le \frac12$.

Now we turn our attention to demonstrating that $V^{*}>\frac{1}{2}$. Noting again that
\begin{equation*}
V^*=\operatornamewithlimits{inf\vphantom{p}}_{\tau_1,\tau_2\in\mathcal{T}^R}\sup_{\sigma\in\mathcal{T}^R}N((\tau_1,\tau_2),\sigma)=\operatornamewithlimits{inf\vphantom{p}}_{\tau_1,\tau_2\in\mathcal{T}^R}\sup_{\sigma\in[0,1]}N((\tau_1,\tau_2),\sigma),
\end{equation*}
we can restrict our attention to constant $\sigma \in [0, 1]$. Take any $\tau_1, \tau_2 \in \mathcal{T}^R$ and the corresponding generating processes $(\xi^1_t), (\xi^2_t)$ which are also deterministic functions. Take any $\delta \in (0, 1/2)$. If $\xi^1_{1-} > \delta$, then for any $\sigma<1$ we have
\begin{align*}
N((\tau_1,\tau_2),\sigma) &\ge \mathbb{E}\big[\big(1 \cdot \indd{\tau_1\le\sigma}+ \frac{1}{2}\sigma \indd{\sigma<\tau_1}\big) \indd{\mcalI=1}+\frac{1}{2}\sigma \indd{\mcalI=2}\big]\\
&= \mathbb{E}\big[\big(\xi^1_\sigma + \frac{1}{2}\sigma(1-\xi^1_\sigma)\big) \ind{\{\mcalI=1\}}+\frac{1}{2}\sigma \ind{\{\mcalI=2\}}\big]\\
&= \frac{1}{2}\xi^1_\sigma - \frac{1}{4}\sigma\xi^1_\sigma+\frac{1}{2}\sigma = \frac{1}{2}\xi^1_\sigma(1 - \frac{1}{2}\sigma)+\frac{1}{2}\sigma,
\end{align*}
and, in particular,
\begin{equation*}
\sup_{\sigma \in [0, 1]} N((\tau_1,\tau_2),\sigma) \ge \lim_{\sigma\to 1-} N((\tau_1,\tau_2),\sigma) \ge \frac{1}{4}\xi^1_{1-} + \frac{1}{2}\ge \frac12 + \frac14\delta>\frac12.
\end{equation*}
On the other hand, if $\xi^1_{1-} \le \delta$, taking $\sigma=1$ yields
\begin{equation*}
\sup_{\sigma \in [0, 1]} N((\tau_1,\tau_2),\sigma) \ge N((\tau_1,\tau_2),1) \ge \mathbb{E}[2\cdot \ind{\{\tau_1= 1\}} \ind{\{\mcalI=1\}}] = 1-\xi^1_{1-} \ge 1 - \delta > \frac{1}{2}.
\end{equation*}
This completes the proof that $V^* > \frac12$.
\end{proof}

\subsection{Necessity of randomisation}\label{sec-example-necessity-of-rand}

Here we argue that randomisation is not only sufficient in order to find the value in Dynkin games with asymmetric information but in many cases it is also necessary. In \cite{DEG2020} there is a rare example of an explicit construction of optimal strategies for a zero-sum Dynkin game with asymmetric information in a diffusive set-up (see Section \ref{subsec:game_2} above for details). The peculiarity of the solution in \cite{DEG2020} lies in the fact that the informed player uses a randomised stopping time whereas the uninformed player sticks to a pure stopping time. An interpretation of that result suggests that the informed player uses randomisation to `gradually reveal' information about the scenario in which the game is being played, in order to induce the uninformed player to act in a certain desirable way. Since the uninformed player has `nothing to reveal', one may be tempted to draw a general conclusion that she should never use randomised stopping rules. However, Proposition \ref{prop-uninf-benefits-from-rand} below shows that such a conclusion would be wrong in general and even the \emph{uninformed} player may benefit from randomisation of stopping times.

\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.6\textwidth]{example_graph.png}
\end{center}
\caption{Payoff functions $f$ in blue, $g$ in orange.}
\label{fig:1}
\end{figure}

We consider specific payoff functions $f$ and $g$ plotted in Figure \ref{fig:1}. Their analytic formulae read
\[
f_t = (10t+4)\ind{\{t\in[0,\frac{1}{10})\}} + 5\ind{\{t\in[\frac{1}{10},1]\}}, \qquad g_t = (15t - 6) \ind{\{t\in[\frac{2}{5},\frac{1}{2})\}}+(9-15 t)\ind{\{t\in[\frac{1}{2},\frac{3}{5})\}}
\]
with
\begin{equation*}
h^1 = 0 = g_{1-},\quad h^2=5=f_{1-}.
\end{equation*}
We also set $T=1$, $\pi_1 = \pi_2 =\frac{1}{2}$. As above, we identify randomised strategies with their generating processes. In particular, we denote by $\zeta$ the generating process for $\sigma \in \mathcal{T}^R$. By Theorem \ref{thm:main}, the game has a value in randomised strategies, i.e., $V^* = V_*$. Restriction of the uninformed player's (player 2) strategies to pure stopping times affects only the lower value, see Remark \ref{rem-Laraki-Solan}. The lower value of the game in which player 2 is restricted to using pure stopping times reads
\begin{equation*}
\widehat{V}_*:=\sup_{\sigma\in[0,1]}\operatornamewithlimits{inf\vphantom{p}}_{\tau_1, \tau_2 \in \mathcal{T}^R} N((\tau_1, \tau_2),\sigma)=\sup_{\sigma\in[0,1]}\operatornamewithlimits{inf\vphantom{p}}_{\tau_1, \tau_2 \in [0,1]} N((\tau_1, \tau_2),\sigma),
\end{equation*}
where the equality is again due to Remark \ref{rem-Laraki-Solan} (notice that here all pure stopping times are $(\mathcal{F}^p_t)$-stopping times hence deterministic, because $(\mathcal{F}^p_t)$ is trivial). As the following proposition shows, $\widehat{V}_*<V_*$, so the game in which the uninformed player does not randomise does not have a value. This confirms that the randomisation can play a strategic role beyond manipulating information.
\begin{Proposition}\label{prop-uninf-benefits-from-rand}
In the example of this subsection, we have
\begin{equation*}
V_*>\widehat{V}_*.
\end{equation*}
\end{Proposition}

\begin{proof}
First, notice that
\begin{equation*}
\widehat{V}_*\le\sup_{\sigma\in [0,1]} N(\hat{\tau}(\sigma),\sigma),
\end{equation*}
where we take
\[
\hat \tau (\sigma)= (\tau_1(\sigma), \tau_2(\sigma)) =
\begin{cases}
(1,1),& \text{for } \sigma\in[0,1),\\
(1,0),& \text{for } \sigma=1.
\end{cases}
\]
It is easy to verify that $\sup_{\sigma\in[0,1]} N(\hat{\tau}(\sigma),\sigma)=2$. We will show that the $\sigma$-player can ensure a strictly larger payoff by using a randomised strategy. Define $\zeta_t=a \ind{\{t\ge\frac{1}{2}\}}+(1-a)\ind{\{t=1\}}$, i.e., the corresponding $\sigma\in\mathcal{T}^R$ prescribes to `stop at time $\frac{1}{2}$ with probability $a$ and at time $1$ with probability $1-a$'. The value of the parameter $a \in [0,1]$ will be determined below. We claim that
\begin{equation}\label{eqn:zb}
\operatornamewithlimits{inf\vphantom{p}}_{\tau_1, \tau_2 \in[0,1]} N((\tau_1, \tau_2),\zeta) = N((1,0),{\zeta})\wedge N((1,1),{\zeta}).
\end{equation}
Assuming that the above is true, we calculate
\[
N((1,0),{\zeta})=2+\frac{3}{4}a, \qquad N((1,1),{\zeta}) = \frac{5}{2}-a.
\]
Picking $a = \frac27$, the above quantities are both equal to $\frac{31}{14}$. Hence $V_* \ge \frac{31}{14}>2$.

It remains to prove \eqref{eqn:zb}. Recall that $\zeta_t=a \ind{\{t\ge\frac{1}{2}\}}+(1-a)\ind{\{t=1\}}$ is the generating process of $\sigma$ and the expected payoff reads
\begin{equation*}
N((\tau_1, \tau_2), \zeta) = \sum_{i=1}^2 \mathbb{E} \big[ \ind{\{\mcalI=i\}}\left(f_{\tau_i} \ind{\{\tau_i \le \sigma\} \cap \{\tau_i < 1\}} + g_{\sigma} \ind{\{\sigma<\tau_i\} \cap \{\sigma < 1\}} + h^i \ind{\{\tau_i = \sigma = 1\}}\right) \big].
\end{equation*}
It is clear that on the event $\{\mcalI=1\}$ the infimum is attained for $\tau_1=1$, irrespective of the choice of $\zeta$. On the event $\{\mcalI=2\}$ the informed player would only stop either at time zero, where the function $f$ attains the minimum cost $f_0=4$, or at time $t>\frac12$, since $\zeta$ only puts mass at $t=\frac12$ and at $t=1$ (the informed player knows her opponent may stop at $t=\frac12$ with probability $a$). The latter strategy corresponds to a payoff $5-\frac72 a$ and can also be achieved by picking $\tau_2=1$. Then the informed player only needs to consider the expected payoffs associated to the strategies $(\tau_1,\tau_2)=(1,0)$ and $(\tau_1,\tau_2)=(1,1)$, so that \eqref{eqn:zb} holds.
\end{proof}

\subsection{Necessity of Assumption \ref{ass:regular_gen}} \label{subsec:example_jumps}

Our final counterexample shows that violating Assumption \ref{ass:regular_gen} by allowing both predictable upward jumps of $f$ \emph{and} predictable downward jumps of $g$ may also lead to a game without a value. Consider the payoffs
\[
f_t=1+2\ind{\{t\ge \frac{1}{2}\}},\quad g_t=-\ind{\{t\ge \frac{1}{2}\}},\quad h^1=3,\quad h^2=-1,
\]
so that $h^1=f_{1-}$ and $h^2=g_{1-}$, and let us also set $T=1$, $\pi_1=\pi_2=\tfrac{1}{2}$. Assumption \ref{ass:regular_gen} is violated as $g$ has a predictable downward jump and $f$ has a predictable upward jump at time $t=\frac{1}{2}$.

\begin{Proposition}
In the example of this subsection we have
\begin{equation*}
V_{*}\le 0,\quad\text{and}\quad V^{*}>0,
\end{equation*}
so the game does not have a value.
\end{Proposition}

\begin{proof}
First we show that $V_{*}\le 0$.
For this step, it is sufficient to restrict our attention to pure stopping times $\tau_1,\tau_2\in[0,1]$ for the informed player (cf. Remark \ref{rem-Laraki-Solan}). Let $\sigma\in\mathcal{T}^R$ with a (deterministic) generating process $(\zeta_t)$ and fix $\varepsilon\in (0, \frac12)$. For $\tau_1=\frac{1}{2}-\varepsilon$ and $\tau_2=1$ we obtain
\begin{align*}
N((\tau_1,\tau_2),\sigma)& =\mathbb{E}\big[\indd{\mcalI=1}(0\cdot\indd{\sigma<\tau_1}+1\cdot\indd{\sigma\ge\tau_1}) + \indd{\mcalI=2}(0\cdot\indd{\sigma<\frac{1}{2}}-1\cdot\indd{\sigma\ge \frac{1}{2}})\big]\\
&=\frac12\big(1-\zeta_{(\frac{1}{2}-\varepsilon)-}\big)-\frac12\big(1-\zeta_{\frac{1}{2}-}\big).
\end{align*}
Therefore, using that $(\zeta_t)$ has c\`adl\`ag trajectories we have
\begin{equation*}
\operatornamewithlimits{inf\vphantom{p}}_{\tau_1,\tau_2\in[0,1]} N((\tau_1,\tau_2),\sigma) \le \lim_{\varepsilon\to 0} \frac12\cdot(\zeta_{\frac{1}{2}-}-\zeta_{(\frac{1}{2}-\varepsilon)-}) =0.
\end{equation*}
Since the result holds for all $\sigma\in\mathcal{T}^R$ we have $V_*\le 0$.

Next, we demonstrate that $V^{*}>0$. For this step it is sufficient to consider pure stopping times $\sigma\in[0,1]$ for the uninformed player (Remark \ref{rem-Laraki-Solan}). Let $\tau_1,\tau_2\in\mathcal{T}^R$ and let $\xi^1,\xi^2$ be the associated (deterministic) generating processes. Consider first the case in which $\xi^1_{\frac{1}{2}-}+\xi^2_{\frac{1}{2}-}>\delta$ for some $\delta\in(0,1)$ and fix $\varepsilon \in (0, \frac12)$. For $\sigma=\frac{1}{2}-\varepsilon$ we have
\begin{align*}
N((\tau_1,\tau_2),\sigma) &= \mathbb{E}\big[\indd{\mcalI=1}(1\cdot\indd{\tau_1\le\sigma}+0\cdot\indd{\sigma<\tau_1}) + \indd{\mcalI=2}(1\cdot\indd{\tau_2\le\sigma}+0\cdot\indd{\sigma<\tau_2})\big]\\
&= \frac12\big(\xi^1_{\frac{1}{2}-\varepsilon} + \xi^2_{\frac{1}{2}-\varepsilon}\big),
\end{align*}
thus implying
\begin{equation}\label{eq:last0}
\sup_{\sigma\in[0,1]} N((\tau_1,\tau_2),\sigma) \ge \lim_{\sigma\to \frac{1}{2}-} N((\tau_1,\tau_2),\sigma)= \frac12(\xi^1_{\frac{1}{2}-}+\xi^2_{\frac{1}{2}-})>\frac{\delta}{2}>0.
\end{equation}
If, instead, $\xi^1_{\frac{1}{2}-}+\xi^2_{\frac{1}{2}-}\le\delta$ so that, in particular, $\xi^1_{\frac{1}{2}-}\vee\xi^2_{\frac{1}{2}-}\le\delta$, then
\begin{equation}\label{eq:last1}
\begin{aligned}
\sup_{\sigma\in[0,1]} N((\tau_1,\tau_2),\sigma) &\ge N((\tau_1,\tau_2),1)\\
&\ge \mathbb{E}\big[\indd{\mcalI=1}(1\cdot\indd{\tau_1<\frac{1}{2}}+3\cdot\indd{\tau_1\ge\frac{1}{2}}) + \indd{\mcalI=2}(-1)\big]\\
&\ge \frac{1}{2}\left(\xi^1_{\frac{1}{2}-}+3\big(1-\xi^1_{\frac{1}{2}-}\big)\right) - \frac{1}{2} =1-\xi^1_{\frac{1}{2}-}\ge 1-\delta>0.
\end{aligned}
\end{equation}
Combining \eqref{eq:last0} and \eqref{eq:last1} we have $V^*>0$.
\end{proof}

\bibliographystyle{abbrvnat}
\section{Accelerating AI and Its Implications for the AI Tax} \label{sec:accelerate} There are numerous efforts underway to accelerate AI~\cite{aichip, habana, chen2014diannao, intelncs2, mahajan2016tabla, xilinx2018alveo, nvdla, googlecoral}. But given the significance of the AI tax in end-to-end AI performance and in pre- and post-processing, it is important to understand how the tax evolves as AI is accelerated and how it affects overall end-to-end application performance; performance limitations arise beyond a certain point of AI acceleration. To build a balanced system, it is important to understand these limits. We therefore study how varying degrees of AI speedup affect the end-to-end performance of our video analytics workload. In general, we could foresee accelerated AI coming about in various ways: CPU manufacturers may decide to integrate ML-centric hardware directly into the CPU execution pipeline; or dedicated off-chip accelerators may be utilized, including highly parallel pipelines (such as GPUs) and dedicated inference engines (such as Intel's Neural Compute Stick~\cite{intelncs2}, Habana's Goya inference processor~\cite{habanagoya}, or Google's Coral Edge TPU~\cite{googlecoral}). Comparisons of the efficacy of each of these solutions are the subject of other work. In this work, we look at the impact of theoretical speedups, regardless of how the speedups are achieved. In this section, we explore the impact of accelerating AI applications up to 32$\times$, based on the performance of existing accelerators. Habana reports that its Goya processor achieves 13.5$\times$ speedup over a two-socket Intel Xeon Platinum 8180~\cite{wheeler2018data}.\footnote{Other than its faster clock, the 8180 CPU is identical to the 8176 CPUs we use.} We explore speedups approximately twice as great as this (up to 32$\times$) in order to account for future advances. \Sec{sec:accelerate:process} analytically estimates accelerated AI performance to show its asymptotic limits. \Sec{sec:accelerate:now} introduces our technique for emulating accelerated workloads on current hardware. In \Sec{sec:accelerate:impact} we evaluate the performance of accelerated processing and discover a quickly approaching bottleneck. \Sec{sec:accelerate:bw} shows that the bottleneck results from overwhelming the system's capacity to write to storage. We finish by showing in \Sec{sec:accelerate:waiting} how frames' waiting time in brokers grows as a fraction of end-to-end latency. \subsection{Analytical Speedups for AI Acceleration} \label{sec:accelerate:process} \input{tex/fig/fr_process_profiles_model} The AI tax means that a significant portion of an AI application's compute cycles are spent on tasks other than AI and ML, and Amdahl's law dictates that the overall speedup of a system is limited by the portion of execution that is not accelerated. Application of Amdahl's law (\Fig{fig:fr_process_profiles_model}) shows that each of the three primary processes---ingestion, detection, and identification---is limited in how much real-world speedup it can enjoy if AI is accelerated in isolation. Ingestion, which performs no AI compute, naturally derives no benefit from acceleration. Detection, which is 42\% AI, rapidly approaches its asymptotic speedup of just 1.74$\times$, achieving 1.59$\times$ overall speedup at 8$\times$ acceleration and 1.66$\times$ overall speedup at 16$\times$ acceleration. Identification, at 88\% AI, has an asymptotic speedup limit of just 8$\times$.
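Concretely, if a fraction $p$ of a stage's cycles is AI compute and that compute is accelerated by a factor $s$, Amdahl's law gives an overall stage speedup of
\begin{equation*}
S(s)=\frac{1}{(1-p)+p/s}, \qquad \lim_{s\to\infty}S(s)=\frac{1}{1-p},
\end{equation*}
so detection ($p\approx 0.42$) can never exceed $1/0.58\approx 1.7\times$ and identification ($p\approx 0.88$) can never exceed roughly $8\times$, in line with the figures quoted here.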
At 16$\times$ AI acceleration it achieves 5.6$\times$ overall speedup, and even at 32$\times$ AI acceleration it shows just 6.6$\times$ overall speedup. The exciting speedups promised by up-and-coming inference accelerators will be severely moderated by the reality of the supporting, non-AI code---the AI tax. With the fervor surrounding acceleration of AI and ML, these results from Amdahl's law serve as an important reminder that AI applications are more than ML computation. Supporting and enabling code is a critical component of an end-to-end application, and this should serve as a call to action to address the limitations imposed by that code. \subsection{Emulating AI Acceleration on Hardware} \label{sec:accelerate:now} It is instructive to see how the AI tax evolves on a real system as compute is universally accelerated (i.e.\ overcoming the asymptotic limits of \Sec{sec:accelerate:process}). To do so, we emulate the behavior of accelerated processing. Only the most basic loop controls and Kafka code are left in their original state. Our emulated acceleration technique relies on the observation that, from the perspectives of application progress, network traffic, and the brokers, it is impossible to distinguish between (1)~running the real application as has been described and characterized and (2)~implementing artificial delays reflective of the actual compute times (\Sec{sec:aitax}) and sending meaningless data of the same size as in the real application over the network. In accelerated \application{Face Recognition}, rather than accelerating and executing the real algorithms, we replace the compute with calls to \texttt{sleep}, where the sleep duration is reflective of measured execution times (\Sec{sec:aitax:events}). Accordingly, rather than sending face thumbnails to brokers, we send meaningless data whose size matches the measured sizes. We can accelerate processing (both ingest/detect and identification) by an arbitrary factor by dividing the sleep times by the speedup factor. In this way, we maintain the behavior of the brokers, network, storage, and supporting code while exploring how acceleration changes the AI tax. \input{tex/fig/fr_speedup_frame} \input{tex/fig/fr_speed_bw_util} We emphasize that this emulation provides realistic performance estimation under acceleration because (1)~the most basic general-purpose processing (the support code to iterate through available frames, code to coordinate communication with Kafka brokers, the brokers themselves, etc.) remains in place and is executed as usual without the benefits of acceleration; (2)~from the perspective of the data center, compute time spent executing real algorithms and time spent waiting in \texttt{sleep} are identical; and (3)~the brokers are completely ignorant of and unconcerned with the execution details of both producers and consumers. Thus, our setup to accelerate AI through emulation provides a realistic and believable look at the impact of faster AI on the data center and on the workload as a whole. \subsection{Accelerated AI Impact on Total Speedup} \label{sec:accelerate:impact} We explore how the end-to-end frame latency will evolve as AI benefits from increasingly powerful acceleration.
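As a concrete reference for these experiments, the sleep-based emulation of \Sec{sec:accelerate:now} reduces to a loop of roughly the following shape (a minimal Python sketch; the constants and names are illustrative stand-ins, not our exact code):
\begin{verbatim}
import time

SPEEDUP = 8                   # emulated AI acceleration factor
MEASURED_IDENTIFY_S = 0.1315  # illustrative; the 131.5 ms mean from Sec. 4
FACE_SIZE_BYTES = 37_300      # illustrative; the ~37 kB mean thumbnail size

def emulated_identify(face: bytes) -> bytes:
    # Emulate accelerated AI compute: sleep for the measured
    # execution time divided by the speedup factor.
    time.sleep(MEASURED_IDENTIFY_S / SPEEDUP)
    # Return meaningless data of the size a real result would have.
    return bytes(16)

fake_face = bytes(FACE_SIZE_BYTES)  # meaningless stand-in thumbnail
result = emulated_identify(fake_face)
\end{verbatim}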
For this analysis, we assume that the AI algorithms will experience no latency overhead from acceleration---that is, we assume that the latency to communicate with dedicated off-chip accelerators is factored into the emulated speeds, or, equivalently, that future CPU architectures will directly integrate accelerator hardware into their execution pipelines. For these experiments, we maintain the same application organization as depicted in \Fig{fig:fr_flow_dc}. As we accelerate producers and consumers equally, the imbalance between the two will persist, so it is important to utilize Kafka's load balancing features at the interface between the two. For the sake of simplicity and repeatability and without loss of generality, we configure these emulation experiments so that each frame produces exactly one face. This has two impacts on performance: (1)~this is more faces per frame than the 0.64 faces per frame averaged by the video file used previously; and (2)~because the rate of face production is constant, we do not have to provision our cluster to handle sudden spikes in traffic, allowing us to deploy fewer identification instances than for the video file. None of the conclusions we draw from these experiments is invalidated by these two observations. \Fig{fig:fr_speedup_frame} shows the effects of accelerating the AI components of \application{Face Recognition}. Note that because we assume one face per frame, which is significantly higher than the 0.64 faces per frame average produced by our default video file, the average end-to-end latency is somewhat higher at 1$\times$ speed than in \Sec{sec:aitax:latency_breakdown}. At higher speedups, we see a twofold benefit: first, the latency is very clearly reduced; second, the throughput is commensurately increased. At 8$\times$ speedup, we see a new manifestation of the AI tax, with latency tending toward infinity---the longer the experiment runs, the larger the latency grows. This is an example of an unstable system in queueing theory: faces enter the system more quickly than they leave, i.e., the face arrival rate $\lambda$ exceeds the service rate $\mu$, so the utilization $\rho=\lambda/\mu$ exceeds one and queue lengths---and hence waiting times---grow without bound. This is a major limitation that can severely hamper the prospects of AI acceleration and demands further investigation. \subsection{Network and Storage Bandwidth Limits} \label{sec:accelerate:bw} With state-of-the-art industry accelerators claiming improvements of up to 15$\times$ in inference speedup over CPUs~\cite{habana}, it is critical to understand why the system becomes unbalanced at 8$\times$ acceleration. Without this insight, it will be difficult to build systems to accommodate higher acceleration factors. Intuitively, we suspect the imbalance results from the increased throughput of the system overwhelming one of two resources with limited bandwidth: either network or storage bandwidth. We measure the utilization of both bandwidths to understand the problem at increased acceleration factors (i.e.\ greater than 8$\times$). In \Fig{fig:fr_speed_bw_util:network}, the network bandwidth utilization of all container types rises with increasing acceleration factor. Unsurprisingly, producer (ingest/detect containers) network read bandwidth is next to zero, as is consumer (identification containers) write bandwidth. Conversely, producer write and consumer read bandwidths are comparable. But the real network bandwidth hot spot is the brokers---as the point of communication between producers and consumers, they must process all network traffic generated by the producers or read by the consumers.
However, even the combined network traffic flowing through the brokers constitutes a small portion of the available bandwidth: at 8$\times$ accelerated AI, the read bandwidth is only 6~Gbps, a mere 6\% of the available 100~Gbps. \Fig{fig:fr_speed_bw_util:storage} shows the storage bandwidth requirements of the brokers. We omit the data for the producer and consumer containers, as their storage behaviors are not preserved by our emulation technique and are in any case expected to be near zero, as they work largely out of memory. The brokers, however, have rather high bandwidth requirements. Even at native (1$\times$) speed, the write bandwidth is 10\% of capacity (1.1~GB/s). At 8$\times$ acceleration, that rises to over 67\%. Once the overhead of the operating system, file system management, and the coordination of many small write requests is accounted for, 67\% utilization at 8$\times$ acceleration has effectively saturated the available bandwidth. Returning to queueing theory, the inability of the brokers to write data to storage (and make it available to the consumers) as fast as it is supplied leads to the imbalance and growing latency. We note that data reads, however, use essentially none of the available bandwidth. This is easily understood: brokers are tasked with ensuring data reliability, so they must write producer data to storage, but the operating system can also cache the data in memory, allowing reads directly from memory and bypassing the storage read path. Doubtless, fine-tuning the brokers' parameters could allow them to better utilize the storage bandwidth. An in-depth exploration of the Apache Kafka parameter space is not, however, the purpose of this paper. Regardless of the ability of the brokers to utilize available bandwidth, they will hit a hard limit at the specifications of the hardware devices. In a setup with a more conservative network bandwidth (e.g.\ 10~Gbps), both the storage and the network would quickly become bottlenecks when accelerating compute. Thus the increased throughput of a moderately accelerated end-to-end system creates a new AI tax that quickly overwhelms the communication substrate, counteracting the gains achieved through hardware acceleration of AI. \subsection{Increase in Waiting Time} \label{sec:accelerate:waiting} Furthermore, whereas the waiting time at 1$\times$ speed constitutes 64.6\% of the total latency of a frame (\Fig{fig:fr_speedup_frame}), it grows to 66.4\% at 2$\times$, 68.0\% at 4$\times$, and 79.1\% at 6$\times$. This trend can be partially understood by Kafka's automatic batching between producers, brokers, and consumers. A message from a producer can be held in the producer for a small amount of time until a larger group of messages has been accumulated to be sent as a batch. Similarly, when a consumer requests available messages from a broker, the broker can withhold messages until there exists some minimum amount of data. Thus, as compute time shrinks, time spent at the brokers grows in order to preserve batching efficiency. Both batching behaviors are limited by timeouts to ensure that neither producer nor consumer waits excessively long. We have tuned these parameters to find settings that ensure good behavior across a variety of experiments.
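For concreteness, the batching behaviors described above are controlled by client-side settings. The sketch below uses the kafka-python client; the parameter names are standard Kafka settings, but the values shown are illustrative rather than our tuned configuration:
\begin{verbatim}
from kafka import KafkaProducer, KafkaConsumer

# Producer-side batching: messages wait up to linger_ms, or until
# batch_size bytes accumulate, before being sent as one batch.
producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    linger_ms=5,
    batch_size=64 * 1024,
)

# Consumer-side batching: the broker withholds a fetch response
# until fetch_min_bytes are available or fetch_max_wait_ms elapses.
consumer = KafkaConsumer(
    "faces",
    bootstrap_servers="broker:9092",
    fetch_min_bytes=32 * 1024,
    fetch_max_wait_ms=100,
)
\end{verbatim}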
Nevertheless, the time spent waiting between producers and consumers approaches some lower limit beyond which no amount of tuning can help; in an application that has many more stages than \application{Face Recognition}, this minimum waiting time could accumulate across stages and prove prohibitively long. \section{Conclusion} \label{sec:conclusion} It is easy to get caught up in the excitement of AI and ML; this work has brought context to those advancements, elucidating an AI tax, and serves as a call to action to address performance limiters in realistic edge data center deployments of AI applications. Streaming AI applications are only possible with the support of pre- and post-processing code, which is far from trivial in both latency and compute cycles and relies almost exclusively on the CPU for all of the processing. AI applications will likely be composed of multiple inference stages, each with its own characteristics and overheads. And the enabling substrate for managing AI applications in a data center sees hot spots in both network and storage that could soon become bottlenecks if not addressed. \section{End-to-End Video Analytics Application} \label{sec:application} Given the importance of understanding the end-to-end AI application as deployed in a data center, we describe in this section the AI-centric \application{Face Recognition} application we developed and how we deployed it. It is based on Google's FaceNet algorithm~\cite{schroff2015facenet}. However, \application{Face Recognition} is much more than just FaceNet---it is a full data center application. Going from the algorithm to the full workload requires algorithm partitioning, containerization, work coordination, and communication management. We explain how the logical flow of \application{Face Recognition} is transformed into a functional streaming data center application. We chose a video analytics application for our studies of the AI tax due to the rising importance of this domain. The global market for video analytics is expected to hit US\$25 billion by 2026 due to rapid adoption of video technologies across industries such as retail, manufacturing, and smart cities~\cite{marketwatch}. Video analytics uses AI to provide cost-efficient business intelligence insights to its users. The domain is slated for deployment in edge data centers, as opposed to traditional cloud- or warehouse-scale systems, due to latency constraints, network bandwidth, and privacy regulations. In \Sec{sec:application:workload} we introduce and describe \application{Face Recognition}. \Sec{sec:application:setup} summarizes our edge data center setup. In \Sec{sec:application:stages}, we explore different ways of deploying it to the data center nodes. In \Sec{sec:application:kafka}, we introduce the Apache Kafka framework, which we use to coordinate the application steps and manage communication between them. \Sec{sec:application:container} explores the design space for individual containers while considering both latency and throughput constraints. In \Sec{sec:application:ratios}, we explain how \application{Face Recognition} is deployed at scale. We detail the deployment of a second data center-level application in \Sec{sec:discussion}. \input{tex/fig/fr_flow_algorithm} \subsection{Video Analytics Pipeline} \label{sec:application:workload} Video analytics is the automatic analysis of video data.
For our analysis, we developed and deployed a video analytics application for edge usage called \application{Face Recognition} (\Fig{fig:fr_flow_algorithm}). Though it uses machine learning, this application is strictly user-facing, i.e.\ it uses inference rather than training. Our implementation of \application{Face Recognition} relies heavily on artificial intelligence and machine learning algorithms implemented in Tensor\-Flow~\cite{abadi2016tensorflow}. Given a number of input video streams, the application parses the videos into individual frames, locates faces within the frames, and identifies each face as a certain individual. The video streams could represent a surveillance system's cameras~\cite{regazzoni2010video}, offline processing of recorded videos, a transaction-less shopping environment~\cite{amazongo}, or many other applications where multiple streams are concurrently being fed into the system. The \application{Face Recognition} application consists of four primary processing stages (\Fig{fig:fr_flow_algorithm}) and is built from MT-CNNs (multi-task cascaded convolutional networks)~\cite{zhang2016joint} along with Google's Face\-Net~\cite{schroff2015facenet, githubfacenet}. Like many real-world use cases, the application involves multiple inferences per query. \begin{enumerate} \item \textbf{Ingestion} is a pre-processing stage that ingests a video stream and parses it into individual frames. This stage is critical as FaceNet cannot operate directly on video. \item \textbf{Face detection (AI)} relies on MT-CNNs to detect any faces within a frame without making any effort to identify them. It determines a bounding box for, and produces a \texttt{160x160} thumbnail of, each face in a frame. \item \textbf{Feature extraction (AI)} is built using the Inception-Resnet~\cite{szegedy2017inception} architecture and produces a 128-byte vector of essential features that describe each face. \item \textbf{Classification (AI)} compares the feature vector for a face against a set of known face vectors to find the best match by means of a support vector machine (SVM), yielding an identification. \end{enumerate} \application{Face Recognition} is designed as a pipelined streaming application---it ingests video streams at or near their native frame rate, injects them into the pipeline, and yields facial identities for frames after some delay. Even though the overall latency may exceed the time between adjacent frames in a video stream, because the application is pipelined, the throughput is at or near the native frame rate. \subsection{Edge Data Center} \label{sec:application:setup} \input{tex/tbl/dc_details} For our experiments, we utilize a small edge data center built from high-performance servers (see \Tbl{tbl:dc_details}). We use over 2200 processor cores spread across 40+ nodes to ensure that we have a realistic deployment whose characteristics can scale to larger setups. Each node is equipped with 56 physical cores spread across two sockets, 384~GB of RAM, high-speed local NVMe storage, and 100~Gbps Ethernet. The nodes are connected in a fat tree topology~\cite{leiserson1985fat}. We rely on industry-standard, open-source tools for our deployment. After dividing the application algorithm into steps, we deploy the different steps in lightweight Docker~\cite{docker} containers. Depending on the resource requirements of a container, it may be deployed in isolation on a data center node or it may be deployed alongside other containers on the same node.
The deployment of the various containers is managed using Kubernetes~\cite{kubernetes}. As the application steps are separated from each other, data has to be communicated between them; for this, we rely on Apache Kafka~\cite{apachekafka}. We explore these and other details in the remainder of this section. \subsection{Stage Count} \label{sec:application:stages} To function as a legitimate data center application, \application{Face Recognition} separates its algorithmic steps into discrete stages that coordinate with one another while running independently on separate nodes. Separating the algorithm into multiple coordinated steps allows different stages of the application to adapt to the speed and requirements of other stages. Even though there are four logical steps in the \application{Face Recognition} algorithm (ingestion, face detection, feature extraction, and classification), in practice this reduces to three. Feature extraction and classification are tightly coupled and their code is hard to separate. We term the combined steps ``identification.'' This simplification results in a reduced design exploration space. \input{tex/fig/fr_stagecounts} In optimizing \application{Face Recognition}, we explored two designs for separating the algorithm into stages, shown in \Fig{fig:fr_stagecounts}. In \Fig{fig:fr_stagecounts:three}, the application is separated into its three logical components: ingestion, face detection, and identification. The ingestion stage, like the algorithmic step of the same name, is fed a video stream, which it parses into separate video frames. Because the ingestion and face detection stages exist in separate containers, likely on physically distinct nodes within the data center, the separated video frames must be transferred between them over the network. Similarly, the cropped face thumbnails produced by the face detection stage are sent over the network to the identification stage. The alternative deployment for \application{Face Recognition} is shown in \Fig{fig:fr_stagecounts:two}: ingestion and face detection are combined into a single \textit{ingest/detect} stage which operates along with identification. In this design, ingestion and face detection processes operate within the same container, so frames are transferred directly between them, leaving only the face thumbnails to be sent over the network to the identification stage. Beyond the obvious difference between the two- and three-stage designs (the three-stage design imposes greater demands on the network), there is a subtler and more profound difference that must be considered. In the three-stage design, the junction between the ingestion and face detection stages is very simple: ingestion always produces one frame at a time at a particular rate, and each frame must run through face detection exactly once. In contrast, the junction between face detection and identification transfers a variable number of faces, and this face count determines the amount of work to be done in identification. Hence, the compute demands placed on the identification stage will vary based on the nature of the video streams---a video stream that captures many faces will demand greater identification processing power. Thus, by decoupling face detection from identification, we create a point of flexibility where a single face detection (or ingest/detect) container can be serviced by potentially many identification containers---i.e.\ load balancing.
The junction between ingestion and face detection has no such requirement. Both to reduce the demands on the network and to combine processing steps where it makes sense to do so, we adopt the two-stage design (\Fig{fig:fr_stagecounts:two}). We also utilize one additional container, the \textit{broker}, which we discuss in \Sec{sec:application:kafka}. The \textit{ingest/detect} container runs two processes internally, one for ingestion and the other for face detection. Ingestion processes a video stream (in our experiments, we use a \texttt{1920x1080} video file for deterministic operation) and parses the stream into frames. It resizes the frames to \texttt{960x540} before passing them to the face detection process. Face detection produces a thumbnail for each face in a frame, if any (our video yields zero to five faces and averages 0.64 faces per frame, with face thumbnails averaging 37~kB each). If no faces are found, identification is not needed; otherwise the faces are transferred to the \textit{identification} container, which consists of a single process (the combined feature extraction and classification, as mentioned earlier) that processes face thumbnails to yield an identity. In accordance with industry practice, we execute all inference directly on the CPU~\cite{hazelwood2018applied}. This yields the lowest latency, which is critical in a user-facing application. \subsection{Apache Kafka} \label{sec:application:kafka} Communication between containers running on separate nodes within a data center requires careful coordination. We rely on Apache Kafka~\cite{apachekafka} to manage the communication between ingest/detect and identification, allowing for load balancing and offering rapid adaptation in the presence of node failures. Though there exists a variety of open source tools for building and managing streaming applications~\cite{apachestorm, apachesamza, apacheflink, apacheapex, apachedruid, kafkastreams}, we note that these tools tend to rely on a separate framework for enabling communication between containers. It is common in practice to rely on Apache Kafka to serve this purpose, and each of these projects has proponents extolling the benefits of using Kafka~\cite{datanami, storm-kafka, samza-kafka, flink-kafka, apex-kafka, druid-kafka}. We therefore use Kafka directly to coordinate communication between our containers. Apache Kafka implements the publish-subscribe pattern of communication~\cite{birman1987exploiting}. This pattern relies on an intermediate staging area for data, instead of data producers sending data directly to data consumers. The intermediate staging area is divided into \textit{topics} to distinguish different kinds of data. Data \textit{producers} publish data (send it to the intermediate staging area) without any knowledge of the data \textit{consumers} or even a guarantee that any consumers exist. They simply publish the data to a \textit{topic}. Similarly, consumers subscribe to a topic, oblivious to all details about the producers. As producers publish data to a given topic, the data become visible to the consumers subscribed to the same topic; consumers are then free to process the data. In Kafka, the intermediate staging area where topic data is stored is implemented in \textit{brokers} (these are the brokers mentioned earlier being deployed in their own containers). A topic is implemented by creating partitions---open file handles---typically spread across multiple brokers.
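To illustrate the pattern in code, a producer and a consumer for our ``faces'' topic might look as follows (a minimal kafka-python sketch; the topic and group names mirror our deployment, while the rest is illustrative rather than our production code):
\begin{verbatim}
from kafka import KafkaProducer, KafkaConsumer

face_thumbnail = b"..."  # stand-in for one cropped 160x160 face

# Producer side (ingest/detect): publish each face thumbnail to
# the "faces" topic with no knowledge of which consumers exist.
producer = KafkaProducer(bootstrap_servers="broker:9092")
producer.send("faces", value=face_thumbnail)
producer.flush()

# Consumer side (identification): subscribe to "faces"; a shared
# group_id lets Kafka spread partitions, and thus work, across
# many identification instances.
consumer = KafkaConsumer(
    "faces",
    bootstrap_servers="broker:9092",
    group_id="identification",
)
for message in consumer:
    face = message.value  # bytes of one thumbnail, ready for FaceNet
\end{verbatim}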
When a producer publishes data to a topic, it may send that data to any of the partitions, which the corresponding broker receives and writes to the open file. When a consumer requests data from a partition, the broker reads it from the same file. In contrast to producers, which may send to any partition, each partition may be read by at most one consumer within a consumer group. Thus an application should divide a topic into at least as many partitions as there are consumers in order to maximize parallelism. The topic partition also serves as the basic unit of replication. Kafka assumes and encourages data replication for reliability should a broker go down. Each partition has a ``leader'' and, in the presence of replication, some number of followers. After new data is written to a leader partition, it is replicated to the followers. Producers and consumers interact with the broker that holds the leader partition, while the follower partitions are spread among the remaining brokers. In the event of a broker failure, one of the follower partitions will become the new leader partition. Unlike partitions, there are no ``leader brokers'' or ``follower brokers''; both leader and follower partitions are spread among all available brokers; thus, no one broker is more important or heavily utilized than any other. \input{tex/fig/fr_flow_dc} Our data center deployment of \application{Face Recognition} is depicted in \Fig{fig:fr_flow_dc}. The ingest/detect containers function as producers, sending face thumbnails extracted from each frame to brokers under the ``faces'' topic. The identification containers are the corresponding consumers, subscribing to the ``faces'' topic. The placement of brokers between ingest/detect and identification containers was chosen to provide load balancing. As we will show in \Sec{sec:aitax}, the two containers have different latencies; we thus instantiate more identification than ingest/detect containers. By placing brokers between them, Kafka ensures that the work is spread among the consumers evenly. Each of the three container images is deployed a set number of times and distributed throughout the data center. Because of the extremely low network utilization relative to capacity (\Sec{sec:accelerate:bw}), the placement of containers relative to one another in the data center is unimportant. We use a minimum of three broker nodes in all cases to allow for three-way data replication, reflecting common practice in industry-quality deployments. When we experimented with the three-stage setup (four container types: ingestion, face detection, identification, and brokers), the communication of video frames between ingestion and face detection was also passed through the brokers. We simply created an additional topic---frames---within the same set of brokers. \subsection{Container Resources} \label{sec:application:container} \input{tex/fig/fr_corescaling} An important consideration in deploying containers is determining what resources each should be allocated. \Fig{fig:fr_corescaling} shows how the ingest/detect and identification containers perform with increasing core counts. With more cores, both containers complete their operations at lower latency, but they do not scale linearly. Doubling the core count from one to two yields only a 16\% reduction in latency in ingest/detect and a 36\% reduction in identification. At larger core counts, the computational latency actually increases for both containers. The ``correct'' allocation of cores to containers depends on the workload requirements.
Often in a data center application the key metric is latency-bounded throughput: maximize the throughput of the application as long as the latency of individual queries remains below some upper bound. This is a prominent metric used in MLPerf Inference~\cite{janapareddi2019mlperf}. For \application{Face Recognition}, where we lack a clear latency bound and given the very poor core-scaling behavior, we optimize for throughput by assigning a single core to each container and arbitrarily declare that the resulting latency is acceptable. This may not be the case in other applications and deployments. \subsection{Container Ratios} \label{sec:application:ratios} As discussed in \Sec{sec:application:stages}, identification can take a variable amount of time, depending on the number of faces that need processing. Furthermore, we will see in \Sec{sec:aitax} that identification takes significantly longer to perform its calculations than ingest/detect, even when identifying only a single face. To ensure that identification can keep up with ingest/detect, we allocate many more identification than ingest/detect containers. The precise ratio of identification to ingest/detect containers is dependent upon the characteristics of the video streams and the latency bound. If video streams never contain more than one face at a time, fewer identification containers are needed. However, if video streams show low face counts on average but have large spikes where many faces appear at once, these spikes can temporarily overwhelm the identification containers, requiring that more such containers be instantiated. We will see this in our experiments (\Sec{sec:aitax:latency_breakdown}). \section{AI Tax} \label{sec:aitax} We start our exposition of the AI tax by evaluating the end-to-end performance of \application{Face Recognition}. We aim to understand what fraction of the cycles in an AI application goes to AI processing versus the non-AI components. To this end, we examine the lifetime of a frame as it flows through the AI-centric \application{Face Recognition} application. In \Sec{sec:aitax:events}, we explain how we measure frame progress. \Sec{sec:aitax:latency_breakdown} breaks down the end-to-end progress of a frame in each stage of the pipeline and shows that AI computation is not as central as one would expect in an AI application. In \Sec{sec:aitax:process} we break down the application behavior in each container and reveal how much supporting compute is needed to enable AI processing. We show that it is vitally important to view AI application performance holistically, as it involves much more than just AI processing, and the supporting code and infrastructure tax has a profound impact on latency. \subsection{Instrumenting the End-to-End Execution} \label{sec:aitax:events} \input{tex/code/events} To really understand an AI application deployed even in an edge data center, we must raise the level of abstraction from how applications are traditionally evaluated. While we do not claim to have the right level of abstraction for all end-to-end workloads, for \application{Face Recognition}, we believe that we have identified a good level of abstraction for tracking and measuring application progress without perturbing the application's original behavior. Application progress is a sequence of unit steps that are necessary for a frame to progress through the application.
We term the units of application progress ``events''; these are high-level steps in the application and correspond to the stages described in \Sec{sec:application}: video ingestion, face detection, broker waiting time, and identification. \Lst{lst:events} shows simplified Python code demonstrating the operation of the face detection process with event-logging code inserted. The event-logging code in the listing is only slightly simplified from our actual code. Event-based logging lets us track end-to-end application progress. This higher level of abstraction is critical in enabling engineers to architect at a cluster level, where the complete application executes, instead of just at a node level. We log all the events during execution of the application using Elasticsearch~\cite{elasticsearch} and Logstash~\cite{logstash} running on a separate server. We measure the execution time of each step as well as the sizes of data that are transferred between stages. This is done using timestamps around the major regions of interest, e.g.\ the time to do the face identification \textit{excluding} the supporting code (e.g.\ iteration management). In essence, events capture the major steps that a frame goes through from beginning to end. We use built-in language functions to measure the size of the data that are transferred through the brokers. Due to the infeasibility of instrumenting a complex program such as Kafka, we approximate the broker waiting time event by calculating the time delta between the end of face detection and the beginning of identification. Our instrumentation method has negligible overhead and resource requirements, since we are only logging events. It has minimal impact on the application (see \Fig{fig:process_breakdowns}). \input{tex/fig/fr_latency_pie} \subsection{AI Applications Are More Than Just AI} \label{sec:aitax:latency_breakdown} One of the end user's primary concerns is latency. In \application{Face Recognition}, this is the total time of a frame progressing serially from ingestion through identification; the latency of any stage that is not performing AI contributes to the AI tax. We conduct experiments with 840 ingest/detect processes (producers) executing on 15 nodes (56 processes per node), 1680 identification processes (consumers) executing on 30 nodes (56 processes per node), and 3 brokers (each given its own node). We require the brokers to maintain 3$\times$ data replication, which is standard practice for disaster recovery. We measure an average face size of 37.3~kB and an end-to-end latency of 351~ms. While this latency may seem large, there are two points to remember. First, the throughput per stream is around 10 frames per second (FPS)---and a single stream could be divided among three ingest/detect instances for 30 FPS operation---regardless of the latency, since the application is pipelined; the output video still displays smoothly, just with a small delay. Second, there are multiple inferences per frame, performed sequentially, with the inference stages located on different nodes to improve performance~\cite{gupta2019architectural}; the communication between the stages imposes additional latency. \Fig{fig:fr_latency_pie} summarizes the average latency for each stage. Ingestion operates quickly, taking only 18.8~ms, while the AI stages, face detection and identification, take 74.8 and 131.5~ms, respectively. Remarkably, over a third of the end-to-end latency is spent waiting between stages, at 126.1~ms. 
As in any real-time application, tail latency is an important factor to consider. We measure a \nth{99} percentile tail latency of 2.21~s, with the standalone \nth{99} percentile tail latencies of ingestion, detection, waiting time, and identification at 27~ms, 1.84~s, 116~ms, and 380~ms, respectively. We remind the reader that these latencies can be improved somewhat by allocating additional cores to each stage of processing; in our implementation, however, we have chosen to optimize for throughput over latency. \input{tex/fig/fr_latency_timeline} From the tail latency breakdowns, we see that the end-to-end tail latency derives almost entirely from the waiting time in the brokers. This waiting time in turn results, at least in part, from congestion in the application. As shown in \Fig{fig:fr_latency_timeline}, when ingest/detect processes collectively produce a surplus of faces, identification has a hard time keeping up, leaving the faces in the brokers for a longer time. When there are almost no faces detected, identification containers are almost idle and so are able to fetch the few faces quickly. In summary, deep learning inference performance is more than just the performance of an individual node in the system. Even with a well-balanced system, there is a substantial AI tax latency imposed in managing the transfer of data between the nodes (i.e.\ the detection and identification stages). Without looking at the end-to-end latency, one would not realize that a large portion of time is spent waiting at brokers. This observation is \emph{not} unique to our application; any application built on Apache Kafka (or a similarly brokered communication mechanism) will face this reality. \input{tex/fig/process_breakdowns} \subsection{Overhead of Pre- and Post-Processing AI} \label{sec:aitax:process} Most AI research papers focus only on the core AI component, neglecting the other associated parts that are essential to end-to-end AI processing. However, there are pre- and post-processing steps that are unavoidable. Both play a critical role in the overall latency and both contribute to the AI tax. Pre-processing involves preparing the data for the AI kernel execution, while post-processing is loosely defined as any processing that is performed to convert the generated AI result(s) into something meaningful to the user or the next stage. To quantify the AI tax for pre- and post-processing, we refine our view of a frame's processing by looking at the time breakdown of each process using code profiling tools. \Fig{fig:process_breakdowns} shows, for each of the main processes (ingestion, detection, and identification), where time is spent. Ingestion is exclusively a pre-processing stage. It shows a nearly even split between frame extraction and frame resizing (\Fig{fig:process_breakdowns:ingestion}). Extraction refers to parsing the incoming video stream into individual frames. Resizing converts frames from \texttt{1920x1080} to \texttt{960x540} for the detection stage. The remaining time is split between the overhead of event logging and other supporting code, including transferring frames to the co-located detection process. Face detection (\Fig{fig:process_breakdowns:detection}), despite being an AI-centric stage, spends only 42\% of its time executing the AI algorithm in TensorFlow.
Cropping and resizing faces (to \texttt{160x160}) for identification takes 25\% of the time; supporting TensorFlow and NumPy code (pre- and post-processing for each frame) take 6\% and 4\%, respectively; and ``other'' code takes a whopping 13\% of the time. Code in the ``other'' category includes inter-process communication (from the ingest stage), additional matrix manipulation, loop management, bounding box calculation, image encoding, etc. The AI-centric identification stage has a markedly different breakdown. It spends 88\% of its time directly executing AI algorithms; Kafka code, though, takes 8\% of the time. The remaining components contribute little to the total time. Beyond the pre-processing of the ingestion stage, end-to-end \application{Face Recognition} requires substantial pre- and post-processing within the AI-centric stages. In face detection, non-AI computation constitutes 57.6\% of the compute cycles. In identification, that figure drops to 12.4\%, which is still far from trivial. In a complex and diverse AI-centric application such as \application{Face Recognition}, AI computation constitutes 55.2\% of end-to-end cycles, with the remainder going to supporting code: 17.8\% to resizing, 9.0\% to networking, 5.2\% to tensor preparation, 3.6\% to Kafka processing (outside of the brokers), and the rest to other supporting tasks. In summary, despite the massive excitement surrounding AI algorithms, AI workloads are more than just tensors and neural networks: without the supporting code, AI is impotent. We emphasize that the supporting code, far from being a minor player in a complete application, constitutes over 40\% of the compute cycles, not counting the compute time in the brokers. The pre- and post-processing code is executed on the general-purpose CPU, so it motivates the need to understand the role of the CPU as AI acceleration increases. \section{Generalizability of Findings} \label{sec:discussion} We recognize that our research is a case study of a single application. While case studies are often undervalued---despite shedding light on an application that is valuable in its own right and pioneering an evaluation approach~\cite{pingali2019case}---we nevertheless acknowledge that evaluation of additional applications is beneficial. We therefore discuss some findings on a second application, \application{Object Detection}, that was deployed similarly using Kafka in our edge data center. While we do not study \application{Object Detection} in the same detail as \application{Face Recognition}, we discuss here some results showing that this additional application faces AI tax bottlenecks as well. In \Sec{sec:discussion:object}, we describe the purpose and design of the application. We look at the AI tax in \application{Object Detection} when running natively (\Sec{sec:discussion:aitax}) and when accelerated (\Sec{sec:discussion:acceleration}). \subsection{Object Detection} \label{sec:discussion:object} Like \application{Face Recognition}, \application{Object Detection} analyzes video streams in real-time. Instead of recognizing faces, though, \application{Object Detection} uses an R-CNN~\cite{ren2015faster} to identify multiple objects in each frame. Also like \application{Face Recognition}, \application{Object Detection} is split into two stages with Kafka brokers serving to transfer data between them. The two stages are termed ingestion and detection. Ingestion ingests a video stream, parsing it into separate frames, and passes those frames through Kafka to detection. 
AI compute is performed exclusively in this latter stage, in the R-CNN. Unlike \application{Face Recognition}, wherein the presence or absence of faces in a frame dictates whether and how much data is sent through Kafka, in this application each frame is always sent. This decreases the variability in system load, as each detection instance always has to process precisely one frame at a time. \input{tex/fig/od_corescaling} As shown in \Fig{fig:od_corescaling}, the detection stage of \application{Object Detection} shows near linear speedups with increasing core count. Through testing, we settled on allocating 14 cores per container; this allows us to instantiate 4 detection containers per server. Despite this, the ingestion stage operates orders of magnitude faster than the detection stage (see \Sec{sec:discussion:aitax}); we limit the ingestion rate to 30 frames per second. To balance that ingestion rate, we instantiate 96 detection containers for each ingestion container. \subsection{AI Tax} \label{sec:discussion:aitax} \input{tex/fig/or_latency_pie} \Fig{fig:or_latency_pie} shows the end-to-end frame latency breakdown. The first stage, ingestion, performs no AI. As such, it completes very quickly, in only 4.5~ms. As stated, we limit the ingestion rate to 30~FPS, so for practical purposes this time is really 33.3~ms. In contrast, the final stage, detection, does all of the AI processing and weighs in at an impressive 687~ms. Waiting time in the brokers is nearly as long, averaging 629~ms. \subsection{Acceleration} \label{sec:discussion:acceleration} Using our acceleration emulation methodology, we explore the implications of accelerating \application{Object Detection}. Given the large number of cores needed for each detection instance, and given the limited size of our cluster, we assign only a single core to each detection instance for these experiments, allowing us to deploy significantly more ingestion instances than we would be able to otherwise. This is feasible because the emulation methodology does not care about the number of cores and because, in an accelerated data center setup, it may well be possible to support the higher instance count. We maintain the same ratio of producers to consumers as in our real setup, but by increasing the density of consumers we are able to scale up the experiments significantly. We use a single producer node but instantiate 21 producers on it, and we use 36 consumer nodes, each with 56 consumers. We continue to use three brokers. Since we already decided to limit the frame rate to 30~FPS, we continue this practice. With increasing acceleration, we increase the number of frames we send---effectively, the acceleration factor dictates the number of simultaneous video feeds each producer can process. Thus, at 2$\times$ acceleration, each producer sends frames at 30~FPS but it sends two frames at a time instead of one; similarly, at 8$\times$ acceleration, a producer sends eight frames at 30~FPS, and so on. \input{tex/fig/or_speedup_frame} \Fig{fig:or_speedup_frame} shows the results of acceleration. Despite the significantly lower count of producers compared to \application{Face Recognition}, we still see performance begin to degrade after 4$\times$ acceleration. By 12$\times$, the average latency is not yet infinite, though it does exceed 3000~ms. But at 16$\times$ and above, the average end-to-end latency again tends toward infinity.
While we see the broker waiting time begin to grow at 6$\times$ and particularly 12$\times$ acceleration (suggesting that the brokers are probably again facing a storage bottleneck), the real bottleneck in this application arises from a new category entirely. In \Fig{fig:or_speedup_frame}, we have added a ``Delay'' category for latency components; this component represents the time between when a frame (or set of frames) was supposed to start processing in ingestion and when it actually starts processing. This delay arises from a set of frames taking longer than 33.3~ms and delaying the start of the subsequent set of frames. This ingestion delay represents a new manifestation of the AI tax we had not seen previously. For every set of frames, we have opted to send each frame to the brokers separately; this ensures that they can be fully load balanced by the brokers. However, the time required to send the full set of frames rapidly grows with the larger set sizes, so that it soon exceeds the time allotted to each set. Kafka is well designed, however, so the producers and the brokers manage to intelligently batch the frames before sending them. But by 16$\times$ speedup, the AI tax from sending so many items so rapidly has overwhelmed the capacity of the producers to keep up. We see this bottleneck reflected in the throughput as well. At 1$\times$, the throughput is 630~FPS, as expected. That scales pretty well up to 8$\times$ speedup, but it falls short of what is expected at 12$\times$ and the system saturates by 16$\times$ speedup. \section{Related Work} \label{sec:related} We build on prior work that enabled and rapidly expanded AI and ML applications. Unlike most of the prior work, however, we explore the implications of accelerating AI computation and how it affects an end-to-end application flow. We present related work in five categories: (1)~AI and ML benchmarking, (2)~integrating AI and ML, (3)~end-to-end application flow studies, (4)~exploiting heterogeneity, and (5)~edge data centers. \paragraph{Benchmarking} \label{sec:related:benchmarking} MLPerf is one of the leading resources for benchmarking ML-related compute~\cite{janapareddi2019mlperf, mlperfinference}. It provides flexibility for benchmarking a variety of hardware across a variety of ML kernels, but it entirely ignores the issue of end-to-end application behavior and performance. In our work we demonstrate the central importance of understanding the end-to-end application, showing that each ML kernel can constitute a relatively small portion of the pipeline and that truly optimizing ML performance requires a more holistic view of the system. \paragraph{Integrated AI} \label{sec:related:ai} In presenting the scale and deployment of ML workloads at Facebook, Hazelwood et al.\ acknowledged the importance of pre-processing data for training and emphasized its stress on storage, network, and CPU~\cite{hazelwood2018applied}. They acknowledged the potentially high latency of inference for top quality models but did not expose the overhead of pre- and post-processing. Nor did they discuss the resource requirements of streaming inference workloads. We emphasize both of these to show how they can pose a barrier to overall performance improvement from AI acceleration. Microsoft recognizes the importance of latency in the data center particularly as it applies to deep neural networks~\cite{brainwave}. 
Chung et al.\ presented Microsoft's Project Brainwave, which implements DNNs largely in FPGAs distributed throughout the data center, emphasizing the importance of accelerating increasingly complex DNNs~\cite{chung2018serving}. In contrast, this work emphasizes the importance of the enabling code for AI and assumes that accelerating AI and ML will be successful, instead looking at its ultimate impact on the larger workflow. \paragraph{End-to-End Application Flows} \label{sec:related:endtoend} Though not specific to AI workloads, Kanev et al.\ offered a comprehensive look at the trends of warehouse-scale computing at Google~\cite{kanev2015profiling}. They quantified data center ``taxes''---overheads that come with applications but do not directly contribute to the end result, including compression, communication serialization, and remote procedure calls. We show that the brokers act as a tax, coordinating the activities of a distributed application. Other work has broken down the end-to-end latency of requests, at various levels of granularity, ranging from evaluation of Internet speed and programming language patterns to operating system scheduling and memory latency~\cite{chow2014mystery, li2014tales}. This more closely matches our contribution, though our analysis is restricted to latency within the data center and is focused specifically on the common communication base (Ap\-a\-che Kafka) of open source streaming frameworks in an effort to bring perspective to end-to-end AI application flows. \paragraph{Heterogeneous Execution} Prior works sought to exploit the heterogeneity in a data center, producing benefits in speed, energy consumption, and operating costs~\cite{mars2011heterogeneity, haque2017exploiting}. Where these papers sought to capitalize on unintentional heterogeneity (arising from workload co-location and, for example, later upgrades), we extol the benefits of intentionally designing an on-premise data center with heterogeneous servers and network. We thereby add hardware cost savings to the existing benefits of data center heterogeneity. \paragraph{Edge Data Centers} \label{sec:related:onprem} Hewlett Packard Enterprise recently demonstrated that cloud-based computing is often not the most cost-effective solution~\cite{hpe2018onprem}. Their analysis showed that, for a well-utilized edge data center, TCO can be drastically lower than for comparable capabilities in the cloud. Our work extends that idea, showing how the on-premise data center can be specifically tailored to the needs of AI applications. \section{AI in Edge Data Centers} \label{sec:aiscope} With the explosive growth of AI applications, much work has been done to try to characterize and understand them through numerous proposed benchmark suites. Ranging from device-level computations to complete applications, these benchmarking efforts cover a range of abstraction levels, each recognizing the need to understand AI applications from a different vantage point (\Fig{fig:aiscope}). In this section, we explore the range of abstractions and explain the need for the next level of abstraction: end-to-end AI applications in edge data centers. In \Sec{sec:aiscope:layers}, we look at benchmarking efforts aimed at understanding ML layers. \Sec{sec:aiscope:model} looks at complete ML models built from potentially many layers of compute. \Sec{sec:aiscope:task} considers entire AI tasks composed of potentially multiple ML models operating in tandem.
Finally, \Sec{sec:aiscope:dc} takes the next logical step and motivates the study of a complete AI application as it exists in an industry-quality edge data center. \input{tex/fig/aiscope} \subsection{ML Layers} \label{sec:aiscope:layers} At its heart, an AI application is a set of computations done on some piece of hardware, and, though this seems simplistic, it is critical to understand what computations compose the application and how they fit together (\Fig{fig:aiscope}, ``ML Layers''). DeepBench~\cite{deepbench} enables researchers to benchmark the primitive layers of ML models (matrix multiplies, convolutions, and recurrent operations) at the lowest level---on CPU and GPU using primitive machine learning libraries. It exists below any ML framework (such as TensorFlow~\cite{tensorflow} and Caffe~\cite{caffe}). Its goal is to elucidate the performance of the most common operations on various hardware devices. Taken in isolation, though, DeepBench is incomplete. Basic operations like matrix multiplication are essential in machine learning, but in practice they occur only in certain model layers and within specific data access patterns that can vary from one ML framework to another. These details, available only at the next level of abstraction, give important context for the basic operations of benchmarks like DeepBench. \subsection{ML Model} \label{sec:aiscope:model} The next level of abstraction is the ML model, built from fundamental operations and layers (\Fig{fig:aiscope}, ``ML Model''). A complete model will typically operate on its input (such as an image) to produce a useful evaluation (such as identifying the objects in the image). Benchmarking complete models is critical because it determines precisely which operations are to be performed, how common each operation is, and the computational intensity of each operation. There are numerous benchmark suites that address this level of abstraction. AI Matrix~\cite{zhang2019ai,AIMatrix} introduces numerous complete ML models, covering such diverse application domains as image classification and neural machine translation, and also provides for synthetic models that are generated to mimic desired workload characteristics. MLMark, an EEMBC benchmark, is focused on benchmarking ML inference on embedded or edge devices~\cite{torelliMLMark}. AI-Benchmark~\cite{AIBenchmark} is designed specifically to target smartphone performance~\cite{ignatov2018ai, ignatov2019ai}. AIIA-Benchmark targets accelerator devices and aims to provide a useful means of comparison. While these benchmark suites are indispensable, they are also incomplete. ML models do not exist or execute in a vacuum but rely on supporting compute. In practice, this often means that, unlike these benchmarks, ML model execution cannot operate uninterrupted but must be supported by additional models or even non-ML compute. \subsection{AI Task} \label{sec:aiscope:task} For ML to be useful, it needs the help of supporting compute, i.e.\ pre- and post-processing. Additionally, a model is often only one piece of a series of models working as a unit to produce a result. Together, these compose an AI task (\Fig{fig:aiscope}, ``AI Task''). For example, before a model can operate on an input image, the image must be transformed into the proper size, with the proper color encoding, in the proper layout, to match the requirements of the model; the model output must similarly undergo transformation to be prepared for the next stage of processing or to be returned to the user.
At this level of abstraction, the real behavior of an AI application starts to become apparent. Without the pre- and post-processing steps, the AI kernel is basically useless. Even so, we are aware of no benchmarking suite that captures this more complete, context-aware, AI task-level view. While an AI task executing on a single device may represent a complete application, much of the AI compute in practice takes place as real-time services provided by industry players for end users. In such a scenario, an AI task is itself incomplete, as it will be part of a larger application that spans multiple servers and interacts over the network both internally and with end users. \subsection{End-to-End Application} \label{sec:aiscope:dc} \input{tex/tbl/benchmarks} The highest level of abstraction is the data center level, where we finally see the end-to-end application (\Fig{fig:aiscope}, ``End-to-End Application''). The various AI tasks are deployed to different nodes throughout the data center along with their supporting compute. Additionally, some nodes may be entirely dedicated to non-AI, supporting compute. \textit{Of critical importance, at the data center level, the networking equipment, storage devices, communication protocols, data center management software, and application coordination software all become part of the AI application.} The numerous past ML benchmarking efforts are invaluable for understanding the heart of AI applications; however, they all fall short of this highest level of abstraction. \Tbl{tbl:benchmarks} illustrates, for a small sampling of benchmarks, how much of an AI application is \textit{not captured} and \textit{not understood} as a result of failing to rise to this level of abstraction. All the benchmarks we have mentioned in this section do a good job of benchmarking ML compute details, but they all leave gaping holes in the understanding of end-to-end application performance. DAWNBench~\cite{DAWNBench} and MLPerf~\cite{mlperf} are noteworthy for trying to introduce greater realism into benchmarking ML models; unfortunately, they do not go far enough. DAWNBench recognizes the importance of batch sizing~\cite{coleman2017dawnbench}, which is widely known and adopted in realistic deployments of AI applications to maximize performance. MLPerf takes this further and introduces a number of ``scenarios'' under which ML models can be evaluated. These scenarios incorporate the essential concept of latency-bounded throughput---maximizing throughput while honoring latency constraints. They try to mimic a setup that would exist in a real-world data center where AI applications are deployed~\cite{janapareddi2019mlperf}. Ultimately, however, even though these benchmarks acknowledge that ML models are executed in a server-like scenario, they sidestep the issue. MLPerf, for example, attempts to mimic a server setup by having requests arrive at random intervals according to a Poisson distribution, but this ignores the portion of the pipeline that actually provides and pre-processes the requests. AIBench is an ambitious undertaking that recognizes the need for end-to-end benchmarks~\cite{gao2019aibench}. Designed to be easily extensible by building from a common framework, AIBench has implemented two scenarios: e-commerce searching and online language translation. Both scenarios operate in a data center, utilize multiple stages of compute and inference coordinated by orchestration software, rely on network and storage, and utilize pre- and post-processing.
However, as it stands, AIBench is insufficient to cover the breadth and depth of AI applications and has not undergone the extensive and holistic evaluation for which we argue in this paper. For these reasons, we consider it more of a standalone application than a true benchmark. To fully understand an AI application requires taking a holistic view at this highest level of abstraction. We are not aware of any effort to capture this complete understanding. While we do not present a new benchmark suite, in this work we undertake to present a thorough, forward-looking, end-to-end evaluation of a complete AI application. \section{Introduction} \label{sec:intro} Artificial intelligence (AI), especially the field of machine learning (ML), is transforming the marketplace. Sparked by advances in computer system design, enterprises are leveraging AI in every possible manner to provide unprecedented new services to their end users, ranging from recommendation-based online shopping and personalized social network services to virtual personal assistants and better health care. To enable ML, there has been a flurry of work at two extremes. At one extreme is the effort that focuses on hardware acceleration of ML kernels~\cite{habana, chen2014diannao, tpu1, intelncs2, nvidiaai}. At the other extreme is the effort that focuses on engineering the system and its supporting infrastructure, such as the associated networking and storage. The former is essential for enabling microprocessor advancements, while the latter is essential for allowing cloud-scale deployment. But recent years have seen a shift in the needs of the industry. While much research has been dedicated to maximizing and accelerating machine learning performance, recent industry perspectives have urged a more holistic understanding of machine learning development and performance. Facebook, for example, has discussed some of the challenges it has faced running AI at scale and encouraged research on mitigating those challenges~\cite{hazelwood2018applied}. Instead of focusing solely on the AI kernel computation time, there is a need to look at the bigger picture. Enabling AI applications involves several stages: ingesting the data, pre-processing the data, offloading the data to an AI accelerator, waiting for the results, post-processing the result, etc., all of which affect the requests' end-to-end latency and total system throughput. At the same time, the industry is witnessing AI services migrate from warehouse-scale systems to smaller purpose-built data centers located at the edge, closer to end-users~\cite{hpe2018onprem}. These edge data centers complement existing cloud- or large-scale services by being physically closer to the data source, which enables faster responses for latency-sensitive or bandwidth-hungry application services~\cite{vxchnge-edgelatency}. Edge data centers also help address data sovereignty and regulatory compliance rules that safeguard data privacy~\cite{cloud-sovereignty}. Moreover, many mid-size organizations find it more economical to invest in on-premise data centers that are purpose-built for executing a particular type of task~\cite{data-economy}. So despite the continued growth in public cloud solutions, spending on edge data centers is predicted to increase~\cite{emconit}.
In this work, we study the intersection of user-facing AI computing---the inference side of machine learning---and smaller, edge data centers to reveal the often overlooked ``AI tax'': the additional compute cycles, infrastructure, and latency required to support the AI at the application's heart. In the context of a data center, execution of a fully developed, deployment-ready AI-centric application relies on more than just AI algorithms. End users' requests demand pre-processing to ready them for the pipeline; intermediate data must be communicated between stages, often over a network using custom protocols; the communication framework often has built-in data reliability safeguards which impose overheads on data movement; and each stage faces its own overhead for moving data. All of these components together add to the overhead of executing AI. We study a full deployment of \application{Face Recognition}, an end-to-end video analytics AI-centric application at the edge. Our setup is an industry deployment of the Google FaceNet~\cite{schroff2015facenet} architecture in an edge data center. Our application is entirely focused on the inference side of machine learning, where it is exposed to end-users and faces associated latency constraints. As an AI-centric application, \application{Face Recognition} is a good choice as it employs three distinct artificial intelligence algorithms, including two neural networks and a classification algorithm. Furthermore, it is representative of the reality of a considerable portion of AI and ML applications: many AI applications exist as streaming services, deployed in data centers, serving real-time needs of consumers. Coordinating the many activities required to transform raw data into useful, easily consumable conclusions requires the intricacies and nuances of any distributed application: networking equipment, storage devices, coordination, data durability, power distribution, cooling, communication protocols, data compression, etc.~\cite{kanev2015profiling}. We find that in today's edge data centers, the communication framework alone can already account for over 33\% of the latency of the application. \application{Face Recognition} is built on top of Apache Kafka~\cite{apachekafka}, which is widely adopted both directly and as a fabric upon which advanced streaming frameworks are built~\cite{datanami, storm-kafka, samza-kafka, flink-kafka, apex-kafka, druid-kafka}. Kafka is also representative of alternative frameworks that utilize communication hot spots. The simplicity and impressive performance of Apache Kafka have established it as a common denominator for many industry-quality projects. Despite this, requests can spend substantial time passing through the framework. Moreover, we show that as accelerator technologies advance and integrate into production environments, the supporting portions of the pipeline will soon supplant AI as the primary determinant of performance. We measure the implications of greater AI inference acceleration. Apache Kafka becomes increasingly stressed to move the vastly increased volume of data ingested by the application. Even at relatively low acceleration factors, the added stress will quickly overwhelm Kafka's current capabilities. We demonstrate that at a very modest 8$\times$ acceleration factor, Kafka overwhelms the capabilities of its underlying storage.
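To make the broker-mediated data flow concrete, the following minimal sketch (Python, using the kafka-python client) shows how a producer stage might publish detected-face thumbnails to a Kafka topic and how a downstream inference stage consumes them. The broker addresses, topic name, and payload format are illustrative assumptions rather than details of our deployment.

\begin{verbatim}
# Minimal sketch of one broker-mediated pipeline hop (kafka-python).
# Broker addresses, topic name, and payload format are assumptions.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["broker-1:9092", "broker-2:9092", "broker-3:9092"]

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"))

def publish_face(face_id, thumbnail_b64):
    # The thumbnail is durably written to broker storage before any
    # consumer (e.g., the embedding stage) can read it.
    producer.send("detected-faces", {"id": face_id,
                                     "jpeg": thumbnail_b64})

consumer = KafkaConsumer(
    "detected-faces",
    bootstrap_servers=BROKERS,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")))

for msg in consumer:   # inference stage
    face = msg.value   # run_facenet(face["jpeg"]) would follow here
\end{verbatim}

Every request in the measured pipeline traverses hops of this form, which is why the brokers figure so prominently in the latency and storage results reported in this paper.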
These findings present a unique opportunity on the compute research spectrum: rather than neglecting the execution context of AI and without moving into the realm of cloud compute where resources must be generic and homogeneous enough to handle all kinds of workloads, we show a proof-of-concept for the economic value of edge data centers. We demonstrate how a data center that is custom-built for the needs of a streaming AI workload can accommodate the anticipated requirements of accelerated AI without over-provisioning, thereby realizing an overall decrease in the total cost of ownership (TCO) in excess of 15\% over a homogeneous edge data center. Although our deep-dive analysis primarily focuses on edge video analytics with \application{Face Recognition} (as the poster child application), we also conduct a basic analysis of a second application, \application{Object Detection}, deployed similarly to \application{Face Recognition}, and show that it faces comparable AI tax challenges. In other words, our analyses and conclusions are not specific to one application; we further discuss how the underlying infrastructure of an end-to-end AI application will present largely the same bottlenecks regardless of the AI application. In summary, our main contributions and insights are as follows: \begin{enumerate} \item Where much focus is devoted to tuning and accelerating AI inference to enable faster compute, we instead evaluate the larger \textbf{system-level implications of end-to-end AI applications and expose the AI tax}; \item We show that the \textbf{general-purpose CPU performance remains a significant determinant of overall request performance} because processing an end-user request requires more than just AI kernel computation; \item The \textbf{communication layer of an AI application imposes a large overhead} on the latency of processing; \item The \textbf{increased throughput from AI acceleration will overwhelm the communication substrate}; and \item A \textbf{purpose-built data center can adapt to the upcoming challenges of accelerated AI at a lower TCO} than a generic, homogeneous data center. \end{enumerate} There are, of course, innumerable ways to deploy a data center AI application. In this research, we focus on a scheme that concentrates communication in a few data brokers. We expect that our main takeaways will be applicable to any deployment that utilizes some sort of data broker. Namely, with the increasing data throughput of increasingly effective AI accelerators, the brokers will become a point of failure. The amount of data moving through the application can quickly overwhelm the capabilities of the brokers. The proper solutions to the AI tax may reside in a customized edge data center, as we suggest, or they may depend on a different deployment scheme. In any case, our work clearly demonstrates the criticality of understanding AI applications from an end-to-end viewpoint. Without that perspective, we would have no comprehension of the AI tax nor any reason to suspect that it would become the limiting factor to performance. The remainder of this paper is structured as follows. In \Sec{sec:aiscope}, we motivate the study of AI applications in a full, end-to-end, data center context. In \Sec{sec:application}, we introduce our primary application, \application{Face Recognition}, and explain its deployment and optimization in our edge data center. In \Sec{sec:aitax}, we elucidate the AI tax, characterizing the performance and limitations of the end-to-end AI application.
In \Sec{sec:accelerate}, we conduct a forward-looking analysis of \application{Face Recognition} under accelerated AI compute and identify significant impediments to improving performance. In \Sec{sec:discussion}, we show that the AI tax exposition is not unique to \application{Face Recognition} by similarly deploying and evaluating a second application. In \Sec{sec:solution}, we show that an edge data center can be purpose-built to address the upcoming challenges of AI while reducing TCO. We then distinguish our work from prior art in \Sec{sec:related} and conclude in \Sec{sec:conclusion}. \section{AI-Centric Data Center Design} \label{sec:solution} \input{tex/fig/fr_speedup_frame_fixed} As future accelerators emerge, we seek to unlock higher speedups. In \Sec{sec:accelerate}, we saw that in an accelerated AI environment, the AI tax overwhelms the communication mechanism; in particular, the storage medium is quickly saturated at relatively modest emulated compute acceleration speeds. So in \Sec{sec:solution:unlocking} we explore three avenues to overcoming this bottleneck. Implementing these solutions to the tax translates into actual monetary cost, as shown in \Sec{sec:solution:homogeneous}. We show in \Sec{sec:solution:purposebuilt} that a purpose-built edge data center can address the tax with increased capacity to handle accelerated compute at lower total cost of ownership (TCO). \subsection{Unlocking Higher Speedups} \label{sec:solution:unlocking} There are three ways to deal with the limitation in the storage bandwidth: (1)~utilize faster storage in the existing brokers, either through a faster storage medium (e.g.\ Intel Optane~\cite{intel_optane}) or through multiple drives operating in parallel; (2)~create more storage bandwidth by allocating additional brokers; or (3)~decrease the size of the face thumbnails, thus demanding less bandwidth. We explore all three methods (\Fig{fig:fr_speedup_frame_fixed}), first increasing the installed drive count from one to four to provide greater bandwidth to each broker node, then increasing the broker count from three to eight across distinct broker nodes, and finally decreasing the face thumbnails down to one-eighth their original size. \paragraph{Increasing the Bandwidth} \label{sec:solution:bandwidth} The effect of additional storage bandwidth on the existing nodes is captured in \Fig{fig:fr_speedup_frame_fixed:drives}. For these experiments, we instantiated additional broker instances on each broker node (one for each drive) to ensure that each drive is given the same access to compute and memory resources; in practice, only one broker should be instantiated per node to avoid replicating data on the same node. In the figure, we start with 8$\times$ speedup---the speedup that sent latency to infinity in the previous experiments---and increase the emulated speedup to 32$\times$. With just one NVMe drive, the average end-to-end frame latency is infinite (depicted by the latency bar extending beyond the limits of the chart) at 8$\times$ and all higher speedups. These experiments rely on additional drives being installed only in the brokers---in our case, there are three of them; the remainder of the servers remain unaltered from their original configuration. Increasing the storage bandwidth from one drive to two ``unlocks'' both the 8$\times$ and 12$\times$ speedups---the system gains the ability to support compute of these speeds.
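The scaling pattern in these drive-count experiments follows from back-of-the-envelope arithmetic, sketched below in Python under the simplifying assumption that broker write traffic grows linearly with the acceleration factor; the baseline traffic and per-drive bandwidth constants are illustrative assumptions, not our measured values.

\begin{verbatim}
# Back-of-the-envelope model of broker storage saturation.
# Both constants are illustrative assumptions, not measurements.
BASELINE_WRITE_GBPS = 1.0  # per-broker write traffic at 1x compute
DRIVE_WRITE_GBPS = 2.0     # sustained write bandwidth of one drive

def max_supported_speedup(drives_per_broker, thumbnail_scale=1.0):
    # Write demand is assumed to grow linearly with the acceleration
    # factor and to shrink proportionally with smaller thumbnails
    # (thumbnail_scale < 1), mirroring the levers explored here.
    capacity = drives_per_broker * DRIVE_WRITE_GBPS
    demand_at_1x = BASELINE_WRITE_GBPS * thumbnail_scale
    return capacity / demand_at_1x

for drives in (1, 2, 3, 4):
    print(drives, "drive(s): up to",
          max_supported_speedup(drives), "x acceleration")
\end{verbatim}

Under this linear model, doubling either the drive count or the broker count doubles the unlockable speedup; the measured results deviate somewhat from strict linearity.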
With three drives, the system supports up to 24$\times$ speedup, and with four drives, 32$\times$ is unlocked. \paragraph{Spreading the Load} \label{sec:solution:brokers} Rather than installing additional drives in each of the brokers, we can instantiate additional brokers in the data center. This spreads out the load on storage to more brokers and hence more drives. Returning each broker to its default storage configuration (one NVMe drive), we repeat our experiments with four, six, and eight brokers. With three brokers, as we saw before, any acceleration factor at or above 8$\times$ leads to infinite latency. A small 33\% increase in the broker count (going from three to four brokers), however, allows the system to handle the 8$\times$ factor, while a 2$\times$ broker increase allows for up to 16$\times$ acceleration. At eight brokers, the system can handle a 32$\times$ factor. We find an important distinction between adding additional drives to existing brokers and adding additional brokers: the latter is more efficient. To achieve the ability for the system to support 32$\times$ accelerated AI compute, we had to increase the number of drives by a factor of four; in contrast, we had to increase the broker count by only 2.7$\times$ (going from three brokers to eight) for the same performance achievement. The significantly lower increase in storage bandwidth in the increased-brokers approach indicates that brokers may also benefit from having additional compute capacity, memory bandwidth, or network bandwidth available. \paragraph{Decreasing the Demand} \label{sec:solution:sizes} There exists one additional possibility for reining in the bandwidth demands on the storage: decrease the volume of data that needs to be stored. Rather than spreading the data among additional brokers, the data volume can be reduced by decreasing the average size of face thumbnails. \Fig{fig:fr_speedup_frame_fixed:sizes} shows the effect of face sizes at one-half, one-quarter, and one-eighth their original size. Similar to increasing the bandwidth in each broker, we see that the smaller face sizes use a smaller portion of the available bandwidth and so increase the maximum supportable speedup, but without instantiating additional brokers or installing additional storage devices. This solution, however, comes with serious trade-offs. Decreasing face size using compression would require additional compute time, potentially offsetting much or all of the accelerator gains. Decreasing face size by using smaller thumbnails changes the algorithm and can detrimentally impact accuracy. Given the severe limitations of this approach, we focus on the previous two solutions. \subsection{The Cost of the AI Tax in the Data Center} \label{sec:solution:homogeneous} A typical and simple approach that customers rely on to build an edge data center is to aim for homogeneity across servers (i.e., all of the server components are literally identical across the machines). But in a specialized application domain, such as edge video analytics, this ignores the unique characteristics of the applications and either significantly over-provisions some resources or severely handicaps application performance, leading to suboptimal TCO. \input{tex/tbl/equipment_homogeneous} \Tbl{tbl:equipment_homogeneous} shows the basic computing and networking equipment needed to build a homogeneous 1024-node edge data center similar in compute capabilities to our own setup.
This design gives each node comparable equipment to that used in our experiments: two 28-core processors, 384~GB of RAM, 100~Gbps interconnect, and a single NVMe drive. The nodes are connected in a three-level fat-tree topology using 32-port Mellanox Ethernet switches. This topology ensures full-speed non-blocking network connectivity to each node. Using an open source TCO calculator from Coolan~\cite{coolantcomodel} to account for power (servers, networking equipment, cooling), rack equipment, cabling, and related costs, and assuming a three-year amortization life, we estimate a yearly cost of US\$10.2 million for server equipment, US\$1.3 million for network equipment, and US\$1.4 million for power, for a total yearly cost of US\$12.9 million. While common wisdom regarding data centers suggests that the majority of the TCO is spent on power (including powering cooling equipment), simple analysis shows this is not necessarily the case. Each of the servers in our hypothetical data center is equipped with a 750~watt power supply, while Mellanox reports that its routers can consume a maximum of 398~watts~\cite{mellanoxspecs}. This yields a total maximum power consumption of 921~kW. Cooling is estimated to require approximately as much power as the compute resources~\cite{hpepoweradvisor, dataspancoolingcosts}, bringing the total to 1842~kW. Assuming US\$0.10 per kilowatt hour, operating the data center would cost US\$184 per hour or US\$1.61 million per year under maximum load. To accommodate up to 32$\times$ accelerated compute in AI, we must either install three additional drives in each node (to maintain homogeneity) or designate a large number of the nodes as brokers. Adding the additional NVMe drives costs US\$1.23 million. Instead, we designate 157 of the nodes as brokers, 289 as producers, and 578 as consumers. This maintains the ratio of each node type as in our original \application{Face Recognition} experiments (15 producer and 30 consumer nodes, though with 8 brokers instead of 3) to enable support for 32$\times$ accelerated AI. Extrapolating from \Fig{fig:fr_speed_bw_util:network}, we estimate each producer and consumer node will consume approximately 4~Gbps of network bandwidth and each broker node 24~Gbps. The broker nodes demand less than 9~Gbps (or 1.1~GB/s) of storage write bandwidth. \subsection{AI-Specific Edge Data Center} \label{sec:solution:purposebuilt} \input{tex/tbl/equipment_ai} \input{tex/fig/onprem_network} The homogeneous data center was designed to be generic, capable of executing a variety of application classes; hence, we had to adapt the application to the data center, resulting in hugely over-provisioned network and storage. The producers and consumers constitute over 84\% of the data center and use only 4\% of the available network capacity and essentially none of the storage bandwidth. The brokers use a respectable 24\% of the network capacity and basically all of the storage bandwidth but use very little of the compute capacity. This shows extremely inefficient allocation of limited resources in the data center. But we can do better. Instead of forcing the application to fit into an existing data center, we propose building a data center that fits the application. We recommend a \emph{purpose-built data center} that specifically targets the broker-specific AI tax (the demand for storage bandwidth). The AI tax, if not accounted for, can translate to non-trivial real-world costs that can drastically affect end users' needs.
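As a concrete sanity check on the homogeneous design's power and energy figures quoted above, the short sketch below reproduces them; the switch count is backed out from the stated 921~kW total rather than taken from the topology description, so it should be read as an assumption of the sketch.

\begin{verbatim}
# Sanity check of the homogeneous design's power and energy costs.
# N_SWITCHES is backed out of the stated 921 kW total; it is an
# assumption of this sketch, not a design parameter from the text.
N_SERVERS = 1024
SERVER_W = 750        # per-server power supply rating (watts)
SWITCH_W = 398        # maximum draw of one Mellanox switch (watts)
N_SWITCHES = 384      # approx. (921000 - 1024 * 750) / 398
PRICE_PER_KWH = 0.10  # US dollars

compute_kw = (N_SERVERS * SERVER_W + N_SWITCHES * SWITCH_W) / 1000.0
total_kw = 2 * compute_kw  # cooling assumed equal to compute power

hourly_usd = total_kw * PRICE_PER_KWH
yearly_usd = hourly_usd * 24 * 365
print(round(compute_kw), "kW compute;", round(total_kw), "kW total")
print("about", round(hourly_usd), "USD/hour;",
      round(yearly_usd / 1e6, 2), "million USD/year")
# -> roughly 921 kW, 1842 kW, US$184/hour, US$1.61M/year
\end{verbatim}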
In contrast, by understanding the AI tax and designing to it, we demonstrate that a moderately-sized edge data center can be purpose-built to better address the AI tax while yielding meaningful cost savings. In \Tbl{tbl:equipment_ai} we see the equipment needed for this setup, designed to support up to 32$\times$ AI acceleration. In this scenario, we utilize the same highly parallel servers as in \Tbl{tbl:equipment_homogeneous} for producers and consumers but limit the network bandwidth on these nodes to only 10~Gbps and install only basic storage for operating each server. The broker nodes, in contrast, are built on far less parallel but still impressive CPUs, while enjoying 50~Gbps network connections and four NVMe SSDs. We illustrate in \Fig{fig:onprem_network} a simple network solution that could provide the designated bandwidths to each server. At its heart, the network is still a fat-tree built from 100~Gbps Mellanox switches, but, using Mellanox splitter cables and slower 40~Gbps switches, broker nodes are provided with 50~Gbps connections while producer and consumer nodes get 10~Gbps connections. A single edge switch can connect 32 broker nodes or 128 producer/consumer nodes. We can thus build the complete data center using a two-level fat-tree of just 28 100~Gbps switches (12 edge and 16 core); seven edge switches connect to a total of fourteen 40~Gbps switches and five connect to the 157 brokers. In designing this purpose-built data center, we wanted to avoid limiting potential advancements or upgrades during the lifetime of the data center. We designed it with double the anticipated requirements for network and storage bandwidth. The brokers were designed to accommodate the 32$\times$ speedup in two separate ways. First, we maintained the higher ratio of brokers to compute nodes, just as we did in the homogeneous design; second, we allocated four times the storage devices and bandwidth to each broker. Either one of these solutions on its own would have been adequate to accommodate the compute speedup. Furthermore, by giving 50~Gbps and 10~Gbps network connections to the broker and compute nodes, respectively, we have allowed them to grow to double their anticipated needs. In combination, we have given the data center the ability to adapt to unanticipated application speedups during its intended lifetime. Our purpose-built data center incurs an equipment cost of US\$27.9 million with a yearly power cost of US\$1.4 million for a three-year amortized yearly total cost of ownership of US\$10.8 million. This is 16.6\% lower than the TCO of the homogeneous data center while being better equipped to handle future accelerated compute.
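As a sanity check on the network sizing just described, the short sketch below reproduces the edge-switch counts from the stated per-switch capacities; the core-switch count is taken directly from the text rather than derived.

\begin{verbatim}
# Edge-switch sizing for the purpose-built design; the per-switch
# capacities are the figures stated in the text.
import math

BROKERS, PRODUCERS, CONSUMERS = 157, 289, 578
BROKERS_PER_EDGE = 32   # 100G ports split into 2 x 50G broker links
COMPUTE_PER_EDGE = 128  # via 40G switches and 10G splitter cables

broker_edges = math.ceil(BROKERS / BROKERS_PER_EDGE)        # 5
compute_edges = math.ceil(
    (PRODUCERS + CONSUMERS) / COMPUTE_PER_EDGE)             # 7
CORE_SWITCHES = 16      # stated for the two-level fat-tree

print("edge:", broker_edges + compute_edges,                # 12
      "core:", CORE_SWITCHES,
      "total:", broker_edges + compute_edges + CORE_SWITCHES)  # 28
\end{verbatim}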
\section{Introduction} \IEEEPARstart{I}{mage} captioning, i.e., automatically generating descriptive sentences of images, has received increasing attention in the fields of vision and language in recent years. Compared with other image semantic analysis tasks such as object detection \cite{zhu2019attention,chen2016disc} or fine-grained image recognition \cite{peng2018object,chen2018knowledge}, image captioning provides a deeper and more comprehensive understanding of the images and extends to a wide range of applications, including image retrieval, scene graph generation and video captioning. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{motivation.pdf} \caption{An example of how our proposed method aids in generating more fine-grained and accurate descriptions. The proposed method consists of a global discriminative constraint and a local discriminative constraint. The local discriminative constraint employs a reward reweighting mechanism to increase the rewards of some informative phrases (such as ``using a computer'' and ``an old man'') and decrease the rewards of some inaccurate or universal phrases (such as ``sitting'' and ``in front of an umbrella''). } \label{fig:motivation} \end{figure} With the advancement of deep learning, the existing approaches \cite{rennie2017self,gu2017stack} generally employ a neural-network-based encoder-decoder architecture \cite{vinyals2015show} and resort to a reinforcement learning method \cite{williams1992simple} to optimize this task. Despite acknowledged successes, the captions generated by these leading methods \cite{gu2017stack,anderson2018bottom} are often overly rigid and tend to repeat the words/phrases that frequently appear in the training set. Thus, these captions can hardly describe the corresponding images with the desired accuracy. The reasons are primarily twofold: (1) the conservative characteristic of traditional training objectives (e.g., maximum likelihood estimation (MLE) or consensus-based image description evaluation (CIDEr)), which tends to encourage generating overly universal captions that can hardly be used to discriminate between similar images; and (2) the uneven word distribution of ground-truth captions, which encourages generating highly frequent words/phrases while suppressing the less frequent but more concrete ones. For example, given an image shown in Figure \ref{fig:motivation}, existing methods are inclined to generate the high-frequency and common phrase ``a man'' that provides little discriminative or informative cues. Obviously, the less frequent phrase ``an old man'' is a more accurate choice. Another issue is that some images share similar contents, and these methods driven by conservative training objectives tend to pay more attention to such contents, thus resulting in similar or even identical captions for such images. In recent years, some works have generated diverse and discriminative captions with the help of adversarial learning \cite{shetty2017speaking,dai2017towards} or ranking loss \cite{luo2018discriminability}. However, these methods focus more on diversity and may not balance diversity and accuracy well. In this work, we propose a novel global-local discriminative objective to train the captioning model. From the global perspective, we first design a global discriminative constraint that encourages the generated caption to better describe the corresponding image rather than other similar images.
To this end, we employ a ranking loss that pulls the generated caption to match the corresponding image while pushing the caption away from other similar images. From a local perspective, we develop a local discriminative constraint that pays more attention to finer-grained words/phrases. As suggested in \cite{vedantam2015cider}, fine-grained words/phrases generally occur less frequently because they describe distinct and detailed contents of specific images. Thus, we implement this constraint by assigning higher rewards to less frequent words. This helps address the uneven word distribution issue and capture more informative visual details of the images. The two constraints are based on the reference model to facilitate generating discriminative captions while simultaneously improving the accuracy. An example of how our proposed method aids in generating finer-grained and more accurate descriptions is presented in Figure \ref{fig:motivation}. Fine-grained detail and accuracy are correlated: when our model generates accurate sentences, they tend to reflect more fine-grained details, and discriminative captions in turn improve accuracy. The two properties reinforce each other. The main contributions of this work are summarized as follows. First, we propose a new global-local discriminative objective to optimize the image captioning model, which encourages generating fine-grained and discriminative descriptions from both global and local perspectives. This objective can readily be applied to improve existing captioning models. Second, the proposed global discriminative constraint incorporates a reliable term that enables stable and effective training. Third, our local discriminative constraint establishes a novel word-level content-sensitive reward function. Moreover, we conduct extensive experiments on the widely used MS-COCO dataset, which demonstrate that our approach outperforms the baseline methods by a sizable margin and achieves competitive performance over existing leading approaches. We also perform self-retrieval experiments \cite{dai2017contrastive} and show that our proposed method exhibits superior discriminability over existing leading and baseline methods. \emph{We have released our codes and trained models at \url{https://github.com/HCPLab-SYSU/Fine-Grained-Image-Captioning}.} \section{Related Work} Recently, image captioning \cite{cho2015describing} has been a popular research task in the field of artificial intelligence, which attempts to generate natural language sentences that describe the visual contents. In real life, image captioning has a great impact, for instance, by aiding visually impaired users in understanding their visual surroundings. Recent progress in image captioning has also motivated the exploration of its applications for video captioning \cite{yang2018video} and question answering \cite{xue2018better,xue2017unifying}. Recent advances in image captioning have benefited from the encoder-decoder pipeline that adopts a convolutional neural network (CNN) \cite{he2016deep} to encode a semantic image representation and a recurrent neural network (RNN) \cite{hochreiter1997long} to decode the representation into a descriptive sentence.
As a pioneering work, \cite{vinyals2015show} simply used a GoogLeNet \cite{ioffe2015batch} model to extract an image feature, which was then fed into a long short-term memory (LSTM) network to generate the sentence. Additionally, attention mechanisms \cite{bahdanau2014neural} have recently enjoyed widespread use in computer vision tasks \cite{du2018recurrent,wang2017multi,chen2018recurrent,chen2019learning,chen2018fine}. Some researchers introduced attention mechanisms \cite{xu2015show,lu2017knowing,gu2017stack,tan2019comic} or learned more discriminative image features \cite{dong2018predicting,yan2019stat} and thus substantially improved the captioning performance. For example, \cite{xu2015show} further learned to locate attentional regions that were highly related to the semantic content to help prediction. \cite{yao2018exploring,yang2019auto} explored visual relationships \cite{chen2019knowledge} between objects with graph convolutional networks to generate accurate captions. MLE, which aims to maximize the likelihood of the ground-truth word at time step $t$ given the ground-truth words of the previous $t-1$ time steps, has traditionally been applied to optimize the captioning models \cite{xu2015show}. However, these models suffered from an exposure bias issue, resulting in poor captioning performance \cite{ranzato2015sequence}. To address these issues, recent works \cite{ranzato2015sequence,rennie2017self,gao2019self} introduced the policy-gradient-based reinforcement learning technique \cite{williams1992simple} for sequence-level training, which directly optimized the discrete and non-differentiable evaluation metrics for the tasks. For example, Ranzato et al. \cite{ranzato2015sequence} defined the reward based on a sequence-level metric (e.g., bilingual evaluation understudy (BLEU) \cite{papineni2002bleu} or recall-oriented understudy for gisting evaluation (ROUGE) \cite{lin2004rouge}) that was used as an evaluation metric during the test stage to train the captioning model, thus leading to a notable performance improvement. Similarly, Zhang et al. \cite{zhang2017actor} designed an actor-critic algorithm that incorporated a per-token advantage function and a value estimation strategy into the reinforcement-learning-based captioning model to directly optimize non-differentiable quality metrics of interest. Rennie et al. \cite{rennie2017self} proposed a self-critical sequence training approach that normalized the rewards using the output of its own test-time inference algorithm for steadier training. Chen et al. \cite{chen2017temporal} introduced the temporal-difference (TD) learning method to further account for the correlation between temporally successive actions. Although these methods have achieved impressive successes over the past several years, they tend to generate overly rigid sentences that are generally composed of the most frequent words/phrases, leading to inaccurate and indistinguishable descriptions. Zhang et al. \cite{zhang2018high} created a mechanism of fine-grained and semantic-guided visual attention to generate captions of high accuracy, completeness, and diversity. This attention mechanism can accurately correlate relevant visual information with each semantic unit in the text. In addition, a series of efforts were devoted to exploring the generation of diverse and discriminative descriptions because diversity and discriminability are also considered to be important properties for the generated captions \cite{dai2017towards,luo2018discriminability}.
Motivated by adversarial learning \cite{NIPS2014_5423}, existing methods \cite{dai2017towards,shetty2017speaking,yang2018video} also adopted this technique to generate human-like and natural captions. For instance, \cite{dai2017towards} developed a conditional generative adversarial network (CGAN) \cite{mirza2014conditional} framework that jointly learned a generator to produce descriptions conditioned on images and an evaluator to assess how good and natural the generated captions were. Shetty et al. \cite{shetty2017speaking} adopted adversarial training to enable the distribution of the generated captions to better match that of humans. Although these methods could generate diverse and human-like sentences, they primarily focused on diversity and naturalness, and they suffered from a performance drop on evaluation metrics such as CIDEr \cite{vedantam2015cider} and BLEU \cite{papineni2002bleu}. \begin{figure*}[t] \centering \includegraphics[width=0.95\linewidth]{model.pdf} \caption{An illustration of the proposed objective. It consists of a global discriminative constraint and a local discriminative constraint that are formulated on top of a reference model to encourage generating more accurate and fine-grained descriptions.} \label{fig:model} \end{figure*} Ranking algorithms \cite{faghri2017vse++,chen2016deep} were designed to pull the matched instances into close proximity with each other and to push the unmatched instances apart \cite{chen2016deep}. A series of works applied these algorithms to facilitate the diversity of various generation tasks, including visual question answering \cite{goyal2017making}, image generation \cite{saquil2018ranking,diba2017object}, and video forecasts \cite{xiong2018learning,yang2018text2video}. For instance, Xiong et al. \cite{xiong2018learning} proposed a multistage dynamic generative adversarial network and designed an adversarial ranking loss to optimize this network, encouraging the generated video to be closer to the real one while being farther away from the input video. To enable generating diverse and discriminative sentences, recent works \cite{luo2018discriminability,dai2017contrastive} also formulated the ranking loss as an additional constraint on top of current captioning models. \cite{luo2018discriminability} introduced a hard negative triplet loss \cite{schroff2015facenet} as an extra constraint to train the captioning model, enabling it to generate more discriminative captions. However, this kind of method may lead to unstable training and model degradation. The reason is that it measures discriminability only among the samples of a minibatch during training, and the reward built on the minibatch can be uninformative when the images in the minibatch are not sufficiently similar. More seriously, the captioning model will be misled into presuming that the generated caption is discriminative. Such cases occur more often when the size of the minibatch becomes smaller. Consequently, such methods require training with a relatively large minibatch size to ensure discriminability. In contrast to the aforementioned works, our method overcomes the above problem by improving the discriminability from a truly global perspective. The proposed global discriminative constraint incorporates a term that uses the most similar image in the whole dataset to provide a distinctive reward. This term serves as a basic and reliable reward to enable stable and effective training.
Furthermore, our method introduces a local discriminative constraint, which pays more attention to the less frequent words and encourages describing more detailed and fine-grained content of the input image. In this way, our method can generate discriminative captions and simultaneously enhance the overall performance on evaluation metrics. \section{Methodology} \subsection{Overview} Currently, advanced and typical image captioning methods adopt the encoder-decoder framework and generally resort to the reinforcement learning (RL) method for optimization. In this work, we also utilize this encoder-decoder pipeline \cite{xu2015show} as our reference model. Specifically, the pipeline involves a CNN \cite{he2016deep} to encode the input image $I$ into a semantic feature representation and an LSTM network \cite{hochreiter1997long} to decode this representation into the target descriptive sentence $c$. This process can be formulated as \begin{align} v = \phi(I); \quad c=\psi(v), \end{align} where $\phi$ denotes the CNN encoder and $\psi$ represents the LSTM decoder. During training, the sequential word generation process is formulated as a sequential decision-making problem, and the RL method is introduced to learn a policy network for decision making. Let $\theta$ denote the parameters of the captioning model and $p_\theta=p(c|I;\theta)$ be the conditional distribution over captions. Then, RL commonly aims to minimize the negative expected reward, formulated as \begin{align} \mathcal{L}(\theta)=-\mathbb{E}_{\tilde{c} \sim p_\theta}[R_{C}(\tilde{c})], \end{align} where $\tilde{c}=\{w^s_1, \dots, w^s_t, \dots, w^s_T\}$ is a caption sampled from the conditional distribution $p_\theta$ (i.e., $\tilde{c} \sim p_\theta$), and $w^s_t$ is the word at time step $t$ in $\tilde{c}$. $R_{C}(\tilde{c})$ is a reward defined for the caption $\tilde{c}$. Training with the reinforcement learning algorithm requires designing an appropriate reward function. Currently, the reward $R_{C}(\tilde{c})$ is often defined based on the CIDEr score \cite{vedantam2015cider} because it can well measure the quality of the generated captions. However, this reward can hardly enable generating discriminative captions for similar images. Moreover, most existing works use $R_{C}(\tilde{c})$ to provide the same caption-level reward for each word, i.e., \begin{align} R(w^s_t)=R_{C}(\tilde{c}), \quad t=1,...,T \end{align} which runs contrary to appropriate credit assignment. Hence, this setting is susceptible to the uneven word distribution that encourages generating highly frequent words/phrases while suppressing the less frequent but more fine-grained ones. To solve these issues, we design a global-local discriminative objective for the reward, which is formulated as two constraints based on the above-described reference model, as shown in Figure \ref{fig:model}. Concretely, to encourage the generated captions to describe the corresponding images well, we develop a global discriminative constraint that pulls the generated caption to match the corresponding image while pushing the caption away from other similar images via a ranking loss. Furthermore, the local discriminative constraint provides higher rewards for the less frequent but fine-grained words via a word-level reward reweighting mechanism instead of treating all words equally. In this way, the model will pay more attention to these words and thereby alleviate the strong bias in the generated words/phrases.
Therefore, the reward $R(w^s_t)$ can be defined as \begin{equation} R(w^s_t) = R_{\mathrm{GD}}(I, \tilde{c}) + R_{\mathrm{LD}}(w^s_t), \end{equation} where $R_{\mathrm{GD}}$ and $R_{\mathrm{LD}}$ are the two rewards defined according to the global and local discriminative constraints, respectively. We introduce these two rewards in detail in the next subsections. Consequently, we aim to minimize the following objective: \begin{align}\label{eq:sumloss} \mathcal{L}(\theta)=-\mathbb{E}_{\tilde{c} \sim p_\theta}[\sum_{t=1}^{T}R(w^s_t)]. \end{align} \subsection{Global Discriminative Constraint} Discriminability is essential for fine-grained image captioning. Some works \cite{vedantam2017context, mao2016generation} focus on designing different loss functions to generate discriminative sentences. In this paper, we design a global discriminative constraint that resorts to the visual-semantic embedding module \cite{faghri2017vse++} to act as an evaluator measuring the uniqueness of captions. This constraint is designed to pull the generated captions to better match the corresponding image rather than the others. To this end, we first introduce a function $s(I,c)$ that measures the similarity of an image $I$ and sentence $c$. The detail of this score function will be described in Section \ref{sec:setting}. Then, given an input image $I$ and its sampled caption $\tilde{c}$, it is expected that the score $s(I,\tilde{c})$ is higher than the score $s(I_a,\tilde{c})$ for any image $I_a$ taken from the training image set $\mathcal{I}$. Here, because it is impractical to compute $s(I_a,\tilde{c})$ for all images during training, we approximate this target by enabling $s(I,\tilde{c})$ to be higher than $s(I_g,\tilde{c})$, in which $I_g$ is the image most similar to $I$, formulated as \begin{equation} R_{\mathrm{H}}(I,\tilde{c}) = -[\epsilon +s(I_g,\tilde{c})-s(I,\tilde{c})]_+, \label{eq:rh} \end{equation} where $[x]_+$ is a ramp function defined by $\mathrm{max}(0,x)$. To obtain the most similar image $I_g$ for each image $I$, we extract the image feature using ResNet-101 \cite{he2016deep} pretrained on the ImageNet dataset \cite{russakovsky2015imagenet} and compute the Euclidean distance between the features of $I$ and all other images $I_a$. The image with the smallest distance in the entire training set is selected as $I_g$. We can retrieve the most similar image for each image before training, so this process hardly incurs any additional training cost. \begin{figure*}[t] \centering \includegraphics[width=0.95\linewidth]{1-gram.pdf} \caption{The TF-IDF weights for some words (1-gram) in the MS-COCO dataset. Words are sorted by TF-IDF weights.} \label{fig:1gram} \end{figure*} This reward setting contributes to improving the discriminability from a global perspective, but it merely considers one reference image (i.e., the most similar image). In fact, some other images also share very similar content with the given image. Incorporating these images into the reward definition further improves discriminability.
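A minimal implementation of the reward in Equation (\ref{eq:rh}) is a single hinge, sketched below before we turn to the additional minibatch-level term; here \texttt{s} stands for the learned visual-semantic similarity function, and the margin value is an illustrative placeholder rather than the value used in our experiments.

\begin{verbatim}
# Sketch of the hinge reward in Eq. (6); `s` is the visual-semantic
# similarity function and `eps` is a placeholder margin value.
def R_H(s, image, hardest_global_image, sampled_caption, eps=0.1):
    # Penalize the caption whenever the globally most similar image
    # I_g matches it nearly as well as (or better than) I itself;
    # the reward is zero once the margin is satisfied.
    violation = (eps + s(hardest_global_image, sampled_caption)
                     - s(image, sampled_caption))
    return -max(0.0, violation)
\end{verbatim}

The minibatch term introduced next reuses the same hinge form, with the hardest negative image and caption drawn from the current batch instead of from the whole training set.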
Inspired by \cite{luo2018discriminability, liu2018show}, we introduce another ranking target defined on the minibatch during training: \begin{equation} \begin{split} R_{\mathrm{B}}(I,\tilde{c})= -[\epsilon +s(I,c')-s(I,\tilde{c})]_+ \\ -[\epsilon +s(I',\tilde{c})-s(I,\tilde{c})]_+ , \end{split} \label{eq:rb} \end{equation} where $I'= \arg\max_{I'\ne I}s(I',\tilde{c})$ is the hardest negative image from the current minibatch and $c'= \arg\max_{c'\ne c}s(I,c')$ is the hardest negative caption in the minibatch. Thus, $(I, c')$ and $(I', \tilde{c})$ are the hard negative pairs defined on the minibatch. The training data are completely shuffled to select the batch samples in each iteration. With sufficiently many iterations, this approximates using the entire training set, so the minibatch discriminative loss can be considered part of the global discriminative constraint. Furthermore, a random minibatch can introduce a high degree of data diversity to facilitate effective training. Finally, we sum the two terms to obtain the global discriminative reward: \begin{equation} R_{\mathrm{GD}}(I,\tilde{c}) = R_{\mathrm{H}}(I,\tilde{c}) + R_{\mathrm{B}}(I,\tilde{c}) . \label{eq:gd} \end{equation} \subsection{Local Discriminative Constraint} The local discriminative constraint is content-sensitive and expected to assign higher rewards to the words/phrases that concretely describe the visual contents of given images. Thus, we adopt a reward reweighting mechanism to provide higher rewards to these words/phrases. In general, phrases that describe the distinct and detailed contents of specific images, such as ``doing tricks on the ramp'', occur less frequently in the dataset. To this end, we resort to the term frequency-inverse document frequency (TF-IDF) \cite{robertson2004understanding} weights to compute the frequency of each n-gram (n = 1, 2, 3, 4) phrase in the dataset. Then we adopt a two-stage mechanism to select and reweight the less frequent but informative words. The two-stage mechanism is designed based on the following assumptions: 1) Fine-grained and detailed phrases are selected according to the computed TF-IDF weights, and these weights are assigned to the corresponding words to increase their rewards. 2) Some frequently occurring common words, such as ``a'' and ``on'', should be identified and their original rewards retained, since they are the basic building blocks of almost all sentences. Below, we describe this mechanism in detail. In the first stage, we follow \cite{vedantam2015cider} to compute a TF-IDF weight for each n-gram phrase $\omega_k$ in the sampled sentence $\tilde{c}$: \begin{equation} g_{\omega_k}(\tilde{c})= \frac{n_{\omega_k}(\tilde{c})}{\sum_{\omega \in \Omega}n_{\omega}(\tilde{c})}\mathrm{log}(\frac{|\mathcal{I}|}{\sum _{I_p \in \mathcal{I}} \mathrm{min}(1, \sum_{q}n_{\omega_k}(s_{pq}))}), \end{equation} where $\Omega$ is the vocabulary of all n-grams and $\mathcal{I}$ is the set of all images in the dataset. $n_{\omega}(\tilde{c})$ denotes the number of times the n-gram $\omega$ occurs in the sentence $\tilde{c}$. $s_{pq}$ is the $q$-th ground-truth sentence for image $I_p$. The TF-IDF weight $g_{\omega_k}(\tilde{c})$ reflects the saliency of the n-gram $\omega_k$ in the dataset, and a higher weight indicates that this n-gram occurs less frequently across all images in the dataset.
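These weights can be computed offline by simple counting, as in the sketch below; the document-frequency table is assumed to be precomputed over all ground-truth captions, and, following CIDEr, each n-gram order is normalized separately.

\begin{verbatim}
# Sketch of the TF-IDF weights in Eq. (9). doc_freq[g] counts the
# training images whose reference sentences contain n-gram g (it is
# assumed to be precomputed offline); n_images is |I|.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n])
            for i in range(len(tokens) - n + 1)]

def tfidf_weights(caption_tokens, doc_freq, n_images, max_n=4):
    weights = {}
    for n in range(1, max_n + 1):
        counts = Counter(ngrams(caption_tokens, n))
        total = sum(counts.values()) or 1
        for g, c in counts.items():
            # doc_freq already embodies the min(1, .) clipping of
            # Eq. (9); max(.., 1) guards against unseen n-grams.
            idf = math.log(n_images / max(doc_freq.get(g, 0), 1))
            weights[g] = (c / total) * idf
    return weights
\end{verbatim}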
\begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{n-gram.pdf} \caption{The probability distribution and some examples of TF-IDF weights for n-grams (n = 1, 2, 3, 4) in the MS-COCO dataset.} \label{fig:ngram} \end{figure} We show the probability distribution of TF-IDF weights for n-grams in the MS-COCO dataset along with some example $n$-grams. The results are summarized in Figure \ref{fig:ngram}. From Figure \ref{fig:ngram}, we can see that the distribution concentrates on larger TF-IDF weights as $n$ increases. To determine which n-grams should be assigned a higher reward, we introduce a threshold $\lambda$ to select the fine-grained n-grams with TF-IDF weights higher than $\lambda$. This threshold should retain the informative n-grams, such as ``red stop sign'' and ``a brick road'', while filtering out frequently occurring n-grams, such as ``a person with'' and ``next to each other''. Based on the above analysis and the observations in Figure \ref{fig:ngram}, we fix $\lambda$ to 5 in this paper. Note that not all words in the selected n-gram phrases are informative, particularly some articles and conjunctions. For example, ``in the grass'' is a less frequent $n$-gram, but the article ``the'' and the preposition ``in'' occur frequently in the dataset and are generally less relevant to the image content. Thus, in the second stage, we utilize the TF-IDF weights to select these words using another threshold $\eta$. These are sentence-structure words, which should retain their original rewards. We summarize the TF-IDF weights of some words in Figure \ref{fig:1gram}. As depicted in Figure \ref{fig:1gram}, some less-informative words (such as ``that'' and ``it'') have small TF-IDF weights while some fine-grained words (such as ``blue'' and ``vegetables'') have larger weights. In our work, $\eta$ is set to 1 according to the observations in Figure \ref{fig:1gram}. More qualitative and quantitative analysis of these two thresholds is summarized in the experimental results. Finally, we also utilize the computed TF-IDF weights to determine the increase in the reward of each informative word. The reward definition is inspired by the CIDEr metric \cite{vedantam2015cider}. Concretely, the reward for the local discriminative constraint can be defined as: \begin{equation} \begin{split} & R_{\mathrm{LD}}(w^s_t)= \\ & \sum _{w^s_t \in \omega_k} \sum _{j} \frac {\mathrm{min}(g_{\omega_k}(\tilde{c}), g_{\omega_k}(s_{j})) \cdot g_{\omega_k}(s_{j})} {\Vert g_{\omega_k}(\tilde{c}) \Vert \Vert g_{\omega_k}(s_{j})\Vert} + R_{\mathrm{C}}(\tilde{c}) \\ & \text{if~~} g_{\omega_k}(\tilde{c}) > \lambda \text{~~and~~} g_{w^s_t}(\tilde{c}) > \eta, \end{split} \label{eq:LD} \end{equation} where $w^s_t$ is the word at time step $t$ in the sampled sentence $\tilde{c}$. $g_{w^s_t}(\tilde{c})$ denotes the 1-gram TF-IDF weight for word $w^s_t$ and $g_{\omega_k}(s_{j})$ denotes the TF-IDF weight for n-gram $\omega_k$ in the ground-truth sentence $s_{j}$. $s_j$ denotes the $j$-th ground-truth sentence for the input image. $R_{\mathrm{C}}(\tilde{c})$ is the original caption-level reward \cite{rennie2017self} defined based on the CIDEr score. The factor $\mathrm{min}(g_{\omega_k}(\tilde{c}), g_{\omega_k}(s_{j}))$ penalizes the condition where specific n-grams are repeated continually until the desired sentence length is achieved. Equation (\ref{eq:LD}) illustrates the word selection and reward reweighting procedures simultaneously.
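In code, the selection-and-reweighting logic of Equation (\ref{eq:LD}) takes roughly the following form; the normalization terms of Equation (\ref{eq:LD}) are elided for brevity, so this is a simplified illustration using the thresholds fixed above rather than a faithful implementation.

\begin{verbatim}
# Simplified sketch of the word-level reward in Eq. (10); the norm
# terms are elided, and LAMBDA/ETA follow the thresholds in the text.
LAMBDA, ETA = 5.0, 1.0

def R_LD(word, covering_ngrams, g_sample, g_refs, R_C):
    # covering_ngrams: n-grams of the sampled caption containing word
    # g_sample[g]:     TF-IDF weight of n-gram g in the sample
    # g_refs:          list of per-reference TF-IDF dictionaries
    # R_C:             the original caption-level CIDEr reward
    if g_sample.get((word,), 0.0) <= ETA:  # common word: keep R_C
        return R_C
    bonus = 0.0
    for g in covering_ngrams:
        if g_sample.get(g, 0.0) > LAMBDA:  # informative phrase
            for ref in g_refs:
                bonus += min(g_sample[g], ref.get(g, 0.0)) \
                         * ref.get(g, 0.0)
    return bonus + R_C
\end{verbatim}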
The condition $g_{\omega_k}(\tilde{c}) > \lambda$ filters out less-informative phrases, while $g_{w^s_t}(\tilde{c}) > \eta$ excludes common words. If a word does not satisfy these two conditions, the sampled word $w^s_t$ obtains its original reward $R_{\mathrm{C}}(\tilde{c})$. \begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{reward1.pdf} \caption{The local discriminative reward (the red line) and the traditional CIDEr reward (the blue line).} \label{fig:cs-reward} \end{figure} Figure \ref{fig:cs-reward} illustrates the local discriminative reward for each word in two examples. We find that it is in good agreement with the above-described assumption, in which the article ``a'' and the commonly used word ``man'' retain the original reward, while detailed words such as ``slope'' and ``hydrant'' that describe the most distinguishable content enjoy higher rewards. It accurately reflects the relationship between image contents and their rewards. In this way, the local discriminative constraint provides an appropriate credit assignment for each word. \subsection{Optimization} At the training stage, we minimize the objective (\ref{eq:sumloss}) to obtain the caption model. In practice, we utilize a sampling mechanism to approximate the expectation and introduce the algorithm known as REINFORCE \cite{williams1992simple}, \cite{mnih2014recurrent} to compute the gradients, formulated as: \begin{equation} \begin{split} \nabla \mathcal{L}(\theta)&=-\mathbb{E}_{w^s \sim p_\theta}[\sum_{t=1}^{T}R(w^s_t)\nabla\mathrm{log}(p_\theta(w^s_t))] \\ &=-\frac{1}{M}\sum_{m=1}^{M}\sum_{t=1}^{T}R(w^s_{mt})\nabla\mathrm{log}(p_\theta(w^s_{mt})), \end{split} \label{eq:L} \end{equation} where $M$ is the number of sampled sentences. The gradient estimated by the above approximation is of high variance, which is not conducive to the convergence of the model. To solve this problem, we follow \cite{chen2017temporal} to introduce a baseline sentence to obtain an unbiased low-variance gradient estimation. Specifically, for each sentence $c_m^s$, we adopt the current model to generate a baseline sentence $c_m^b=\{w^b_{m1}, w^b_{m2}, \dots, w^b_{mT}\}$. We compute the reward of this baseline sentence and normalize the gradient in Equation (\ref{eq:L}) by \begin{align} \nabla \mathcal{L}(\theta)&=-\frac{1}{M}\sum_{m=1}^{M}\sum_{t=1}^{T}[R(w^s_{mt})-R(w^b_{mt})]\nabla\mathrm{log}(p_\theta(w^s_{mt})). \label{eq:gradient} \end{align} The difference between $R(w^s_{mt})$ and $R(w^b_{mt})$ is small since the two sentences are both sampled from the same distribution. Thus, the gradient variance is low, leading to more stable updating of parameters in the training process \cite{chen2017temporal}. Moreover, as shown in Equation (\ref{eq:gradient}), samples with higher rewards will be given larger probabilities, while inferior samples will be suppressed. \begin{table*}[t] \centering \vspace{6pt} \caption{Performance (\%) of our proposed and existing state-of-the-art methods on the MS-COCO dataset using the Karpathy test split. We report our results using the more advanced TDA baseline. ``-'' indicates that the corresponding result is not available.
The best and second best results are highlighted in \textcolor{red} {\textbf{red}} and \textcolor{blue} {\underline{blue}} fonts (Best viewed in color).} \begin{tabular}{c|c|c|c|c|c|c|c|c} \toprule Method & BLEU4& BLEU3& BLEU2& BLEU1 & ROUGEL & METEOR &SPICE &CIDEr\\ \hline GoogleNIC\cite{vinyals2015show} (CVPR2015) &24.6&32.9&46.1&66.6&-&-&-&- \\ MRNN\cite{mao2014deep} (ICLR2015)&27.0&36.4&50.0&68.0&50.0&23.1&-&86.4 \\ SoftAtt\cite{xu2015show} (ICML2015)&24.3&34.4&49.2&70.7&-&23.9&-&- \\ HardAtt\cite{xu2015show} (ICML2015)&25.0&35.7&50.4&71.8&-&23.0&-&- \\ SemATT\cite{you2016image} (CVPR2016)&30.4&40.2&53.7&70.9&-&24.3&-&- \\ SCACNN\cite{chen2016sca} (CVPR2017)&30.2&40.4&54.2&71.2&52.4&24.4&-&91.2 \\ AdaAtt\cite{lu2017knowing} (CVPR2017)&33.2&44.5&59.1&74.2&-&26.6&-&108.5 \\ Rennie\cite{rennie2017self} (CVPR2017)&34.2&-&-&-&55.7&26.7&-&114.0 \\ MSM\cite{yao2017boosting} (ICCV2017) &32.5&42.9&56.5&73.0&53.8&25.1&-&98.6 \\ ALT-ALTM\cite{ye2018attentive} (TIP2018) &35.5&45.7&59.0&75.1&55.9&27.4&20.3&110.7 \\ TD-ATT\cite{chen2017temporal} (AAAI2018) &34.0&45.6&60.3&76.5&55.5&26.3&-&111.6 \\ Stack-cap \cite{gu2017stack} (AAAI2018) &36.1&47.9&62.5&78.6&\textcolor{blue} {\underline{56.9}}&27.4&20.9&\textcolor{blue} {\underline{120.4}} \\ Up-down\cite{anderson2018bottom} (CVPR2018) &\textcolor{red} {\textbf{36.3}}&-&-&\textcolor{red} {\textbf{79.8}}&\textcolor{blue} {\underline{56.9}}&\textcolor{blue} {\underline{27.7}}&\textcolor{blue} {\underline{21.4}}&120.1 \\ OPR-MCM\cite{zhang2019more} (TIP2019) &35.6&46.0&59.6&75.8&56.0&27.3&-&110.5 \\ N-step SCST\cite{gao2019self} (CVPR2019) &35.0 &46.8 &61.5 &77.9 &56.3 &26.9 & 20.4 &115.2 \\ KMSL\cite{li2019know} (TMM2019) &\textcolor{red} {\textbf{36.3}}&\textcolor{red} {\textbf{48.3}}&\textcolor{red} {\textbf{63.2}}&\textcolor{blue} {\underline{79.2}}&56.8&27.6&\textcolor{blue} {\underline{21.4}}&120.2 \\ TDA+GLD (Ours) &\textcolor{blue} {\underline{36.1}}&\textcolor{blue} {\underline{48.0}}&\textcolor{blue} {\underline{62.6}}&78.8&\textcolor{red} {\textbf{57.1}}&\textcolor{red} {\textbf{27.8}}&\textcolor{red} {\textbf{21.6}}&\textcolor{red} {\textbf{121.1}}\\ \bottomrule \end{tabular} \label{table:result} \end{table*} \section{Experiments} In this section, we conduct extensive experiments to compare our method with existing state-of-the-art approaches both quantitatively and qualitatively. We also perform ablation studies to discuss and analyze the contribution of each component of the proposed method. \subsection{Experimental Settings}\label{sec:setting} \label{sec:Experiment Settings} \subsubsection{Datasets} MS-COCO \cite{chen2015microsoft} is a widely used benchmark for the image captioning task. This dataset contains 123,287 images, each equipped with five reference sentences provided by human annotators via Amazon Mechanical Turk \cite{rashtchian2010collecting}. In this work, we follow the Karpathy split \cite{karpathy2015deep} that divides the dataset into a training set of 113,287 images, a validation set of 5,000 images, and a test set of 5,000 images for evaluation. We also submit our results to the online MS-COCO test server (\url{https://www.codalab.org/competitions/3221\#results}) for public comparison with the published methods.
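As referenced in the optimization subsection, we record here a minimal PyTorch-style sketch of how the self-critical gradient of Equation (\ref{eq:gradient}) can be estimated in practice; the tensor names, shapes, and mask convention are illustrative assumptions, not our exact released implementation.
\begin{verbatim}
# Hedged sketch of the self-critical policy-gradient loss, Eq. (eq:gradient).
import torch

def self_critical_loss(logprobs, r_sample, r_base, mask):
    # logprobs : (M, T) log p_theta(w^s_mt) of sampled words (requires grad)
    # r_sample : (M, T) word-level rewards R(w^s_mt) of sampled captions
    # r_base   : (M, T) word-level rewards R(w^b_mt) of baseline captions
    # mask     : (M, T) 1.0 for valid tokens, 0.0 for padding
    advantage = ((r_sample - r_base) * mask).detach()
    # negative policy-gradient objective, averaged over the M samples;
    # detaching the advantage lets gradients flow only through logprobs
    return -(advantage * logprobs).sum() / logprobs.size(0)
\end{verbatim}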
\subsubsection{Evaluation Metrics} We evaluate our method, the baseline methods, and other competitors on widely used metrics, including BLEU \cite{papineni2002bleu}, ROUGEL \cite{lin2004rouge}, METEOR \cite{banerjee2005meteor}, semantic propositional image caption evaluation (SPICE) \cite{anderson2016spice}, and CIDEr \cite{vedantam2015cider}. We introduce these metrics in detail as follows. \noindent\textbf{BLEU }\cite{papineni2002bleu} is defined as the geometric mean of $n$-gram precision scores, further multiplied by a brevity penalty factor BP to penalize short captions. It is effective for measuring the fraction of $n$-grams (up to 4-grams) that are in common between a hypothesis and a reference. \noindent\textbf{ROUGEL }\cite{lin2004rouge} evaluates captions based on the co-occurrence information of $n$-grams in sentences. Specifically, ROUGEL uses an F1 score based on the longest common subsequence (LCS) of tokens between a hypothesis and a reference. \noindent\textbf{METEOR }\cite{banerjee2005meteor} is designed to explicitly address the weaknesses of BLEU. It evaluates a translation by computing the harmonic mean of unigram precision and recall based on an explicit word-to-word matching. \noindent\textbf{SPICE }\cite{anderson2016spice} takes semantic propositional content into account to assess the quality of image captions. Reference and candidate captions are mapped through dependency parse trees to semantic scene graphs. Caption quality is determined using an F-score calculated over tuples in the candidate and reference scene graphs. \noindent\textbf{CIDEr }\cite{vedantam2015cider} is an evaluation metric developed specifically for the task of image captioning. CIDEr measures the similarity of a generated sentence against a set of ground-truth sentences, capturing the notions of grammaticality, saliency, importance, and accuracy inherently by sentence similarity. CIDEr shows high agreement with human consensus, and it is widely regarded as the most authoritative metric for this task. \begin{table*}[t] \centering \vspace{6pt} \caption{Performance (\%) of our proposed and existing state-of-the-art methods on the online MS-COCO test server. We report our results using the more advanced TDA baseline. $\dag$ indicates the results of ensemble models (bottom part of the table).
The best and second best results are highlighted in \textcolor{red} {\textbf{red}} and \textcolor{blue} {\underline{blue}} fonts (Best viewed in color).} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \toprule \ \multirow{2}*{Method} & \multicolumn{2}{c}{BLEU1} & \multicolumn{2}{c}{BLEU2}& \multicolumn{2}{c}{BLEU3} & \multicolumn{2}{c}{BLEU4}& \multicolumn{2}{c}{METEOR} & \multicolumn{2}{c} {ROUGEL} & \multicolumn{2}{c}{CIDEr} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}\cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(lr){12-13} \cmidrule(lr){14-15} \ & c5 & c40 & c5 & c40 & c5 & c40 & c5 & c40 & c5 & c40 & c5 & c40 & c5 & c40 \\ \midrule SCACNN\cite{chen2016sca} (CVPR2017) &72.5&89.2&55.6&80.3&41.4&69.4&30.6&58.2&24.6&32.9&52.8 &67.2&91.1&92.4\\ MSM\cite{yao2017boosting} (ICCV2017) &75.1 &92.6 &58.8 &85.1 &44.9 &75.1 &34.3 &64.6 &26.6 &36.1 &55.2 &70.9 &104.9 &105.3\\ AdaAtt\cite{lu2017knowing} (CVPR2017) &74.8&92.0&58.4&84.5&44.4&74.4&33.6&63.7&26.4&35.9&55.0&70.5&104.2&105.9\\ TD-ATT\cite{chen2017temporal} (AAAI2018) &75.7&91.3&59.1&83.6&44.1&72.6&32.4&60.9&25.9&34.2&54.7&68.9&105.9&109.0 \\ Stack-cap\cite{gu2017stack} (AAAI2018) &77.8&93.2&\textcolor{blue} {\underline{61.6}}&86.1&46.8&\textcolor{blue} {\underline{76.0}}&34.9&64.6&27.0&35.6&\textcolor{blue} {\underline{56.2}}&70.6&114.8&\textcolor{blue} {\underline{118.3}} \\ OPR-MCM\cite{zhang2019more} (TIP2019) &74.9&92.7&58.7&85.3&45.0&75.5&34.5&\textcolor{blue} {\underline{65.0}}&26.9&\textcolor{red} {\textbf{36.6}}&55.3&\textcolor{blue} {\underline{71.3}}&105.0&105.6\\ KMSL\cite{li2019know} (TMM2019) &\textcolor{red} {\textbf{79.2}}&\textcolor{red} {\textbf{94.4}}&\textcolor{red} {\textbf{62.6}}&\textcolor{red} { \textbf{87.2}}&\textcolor{blue} {\underline{47.5}}&\textcolor{red} {\textbf{77.1}} &\textcolor{blue} {\underline{35.4}}&\textcolor{red} {\textbf{65.8}}&\textcolor{blue} {\underline{27.3}}&36.1&\textcolor{blue} {\underline{56.2}}&71.2&\textcolor{blue} {\underline{115.1}}&117.3 \\ TDA+GLD (Ours) &\textcolor{blue} {\underline{78.7}}&\textcolor{blue} {\underline{93.7}}&\textcolor{red} {\textbf{62.6}}&\textcolor{blue} {\underline{86.9}}&\textcolor{red} {\textbf{47.8}}&\textcolor{red} {\textbf{77.1}} &\textcolor{red} {\textbf{35.9}}&\textcolor{red} {\textbf{65.8}}&\textcolor{red} {\textbf{27.5}}&\textcolor{blue} {\underline{36.2}}&\textcolor{red} {\textbf{56.9}}&\textcolor{red} {\textbf{71.6}}&\textcolor{red} {\textbf{116.9}}&\textcolor{red} {\textbf{119.5}} \\ \midrule GoogleNIC \cite{vinyals2015show} (CVPR2015) \dag &71.3&89.5&54.2&80.2&40.7&69.4&30.9&58.7&25.4&34.6 &53.0&68.2&94.3&94.6\\ SemATT \cite{you2016image} (CVPR2016) \dag &73.1&90.0&56.5&81.5&42.4&70.9&31.6&59.9&25.0&33.5&53.5&68.2&94.3&95.8\\ Rennie \cite{rennie2017self} (CVPR2017) \dag &78.1&93.1&61.9&86.0&47.0&75.9&35.2&64.5&27.0&35.5&\textcolor{blue} {\underline{56.3}}&70.7&\textcolor{blue} {\underline{114.7}}&116.7 \\ ALT-ALTM \cite{ye2018attentive} (TIP2018) \dag &74.2&92.2&57.7&84.3&44.3&74.3&34.1&63.9&27.0&37.0&55.2&71.2&105.3&105.9\\ Up-down \cite{anderson2018bottom} (CVPR2018) \dag &\textcolor{red} {\textbf{80.2}} &\textcolor{red} {\textbf{95.2}} &\textcolor{red} {\textbf{64.1}} &\textcolor{red} {\textbf{88.8}} &\textcolor{red} {\textbf{49.1}} &\textcolor{red} {\textbf{79.4}} &\textcolor{red} {\textbf{36.9}} &\textcolor{red} {\textbf{68.5}} &\textcolor{blue} {\underline{27.6}} &\textcolor{red} {\textbf{36.7}} &\textcolor{red} {\textbf{57.1}} &\textcolor{red} {\textbf{72.4}} &\textcolor{red} {\textbf{117.9}} &\textcolor{red} {\textbf{120.5}}\\ 
N-step SCST \cite{gao2019self} (CVPR2019) \dag & 77.6 &93.1 &61.3 &86.1 &46.5 &76.0 &34.8 &64.6 &26.9 &35.4 &56.1 &70.4 &117.4 &119.0 \\ TDA+GLD (Ours) \dag &\textcolor{blue} {\underline{79.0}}&\textcolor{blue} {\underline{94.0}}&\textcolor{blue} {\underline{63.0}}&\textcolor{blue} {\underline{87.4}}&\textcolor{blue} {\underline{48.2}}&\textcolor{blue} {\underline{77.7}} &\textcolor{blue} {\underline{36.3}}&\textcolor{blue} {\underline{66.6}}&\textcolor{red} {\textbf{27.7}}&\textcolor{blue} {\underline{36.6}}&\textcolor{red} {\textbf{57.1}}&\textcolor{blue} {\underline{71.9}}&\textcolor{red} {\textbf{117.9}}&\textcolor{blue} {\underline{120.4}} \\ \bottomrule \end{tabular} \label{table:result-sever} \end{table*} \subsubsection{Implementation details} We utilize two typical and advanced methods as our reference models, i.e., Show-Tell (ST) \cite{vinyals2015show} and Top-Down Attention (TDA) \cite{anderson2018bottom}. Both models are trained with the RL-based approach in \cite{rennie2017self}. For ST, we exactly follow the details described in \cite{vinyals2015show} for implementation. For TDA, the original implementation involves an extra detector trained on the Visual Genome~\cite{krishna2017visual} dataset. For an efficient implementation, we remove this component and simply apply spatially adaptive max-pooling to the image feature maps to obtain the final image feature. Both baseline methods adopt the encoder-decoder pipeline, and we follow existing methods \cite{rennie2017self} in using ResNet-101 \cite{he2016deep} for image encoding and an LSTM with a hidden state size of 512 for decoding captions. We train the models with the Adam \cite{kingma2014adam} optimizer. Specifically, we follow previous works \cite{rennie2017self} and train the model with the MLE loss for the first 20 epochs and then switch to the RL loss to continue training. The batch size is set to 16, the learning rate is initialized to $5\times 10^{-4}$ and annealed by a factor of 0.8 every 3 epochs, and the model is trained for 120 epochs in total. During inference, we use beam search with a beam size of 3 to decode the captions. The reward definitions of Equations (\ref{eq:rh}) and (\ref{eq:rb}) involve computing the similarity score $s(I,c)$ of a given image and caption pair. In this work, we resort to the visual-semantic embedding method (VSE++) \cite{faghri2017vse++} to obtain the similarity score. The hinge-based triplet ranking loss is adopted to train the VSE++ model and learn joint visual-semantic embeddings. When training the caption generators, the parameters of VSE++ are frozen. We first extract the image feature $f_I$ using ResNet-101 \cite{he2016deep} and the sentence feature $f_c$ using a gated recurrent unit (GRU) encoder \cite{cho2014learning}. Then, $f_I$ and $f_c$ are mapped into the joint VSE++ space by two linear transformations, and the cosine similarity between them gives the final score. \subsection{Comparison with State-of-the-art Methods} The MS-COCO dataset \cite{chen2015microsoft} is the most widely used benchmark to evaluate captioning models, and most competitive works have reported their results on the Karpathy test split \cite{karpathy2015deep}. In this part, we first compare the performance of our proposed method with the following 15 state-of-the-art methods on this split. 1) GoogleNIC \cite{vinyals2015show}, which adopts a CNN-RNN architecture to directly translate image pixels to natural language descriptions. 2) MRNN \cite{mao2014deep}, which combines representations from multiple modalities.
3) SoftAtt \cite{xu2015show} and HardAtt \cite{xu2015show}, which integrate the ``soft'' deterministic attention mechanism and the ``hard'' stochastic attention mechanism for learning content-related representations. 4) SCACNN \cite{chen2016sca}, which incorporates spatial and channel-wise attentions to dynamically modulate the sentence generation context in multilayer feature maps. 5) SemATT \cite{you2016image}, which learns to selectively attend to semantic proposals and feeds them to the RNN. 6) AdaAtt \cite{lu2017knowing}, which automatically decides when to focus on the image and when to rely on the language model to generate the next word. 7) MSM \cite{yao2017boosting}, which exploits mid-level attributes as complementary information to the image representation. 8) ALT-ALTM \cite{ye2018attentive}, which learns various relevant feature abstractions by attending to the high-dimensional transformation matrix from the image feature space to the context vector space. 9) OPR-MCM \cite{zhang2019more}, which adaptively re-weights the loss of different samples and uses a two-stage optimization strategy to detect more semantic concepts. 10) TD-ATT \cite{chen2017temporal}, which adopts the temporal-difference learning method to take the correlation between temporally successive actions into account when defining the reward. 11) Rennie \cite{rennie2017self}, which presents the self-critical sequence training algorithm to normalize the rewards and reduce variance in reinforcement learning. 12) Stack-cap \cite{gu2017stack}, which proposes a coarse-to-fine multistage prediction framework to produce increasingly refined image descriptions. 13) Up-down \cite{anderson2018bottom}, which combines bottom-up and top-down attention mechanisms to enable attending to semantic object cues and image features. 14) N-step SCST \cite{gao2019self}, which proposes an $n$-step reformulated advantage function to increase the absolute value of the mean advantage while lowering variance. 15) KMSL \cite{li2019know}, which takes advantage of the object entities and pairwise relationships in scene graphs for generating natural language descriptions.
\begin{table*}[t] \centering \vspace{6pt} \caption{Ablation studies on our method using the baselines ST and TDA.} \begin{tabular}{c|c|c|c|c|c|c|c} \toprule \multirow{2}*{Model Variants} & \multicolumn{5}{c}{Evaluation Metrics (\%)} & \multicolumn{2}{c}{Fine-Granularity} \\ \cmidrule(lr){2-6} \cmidrule(lr){7-8} &BLEU4 & ROUGEL & METEOR& SPICE& CIDEr & UniCap & AvgLen \\ \midrule ST &32.8&54.7&25.7&19.1&103.1&2713&9.20 \\ ST-Strengthen &32.6&54.8&25.7&19.2&102.8&2682&9.19 \\ ST+GD &32.9&54.8&25.8&19.0&104.0&3040&9.28 \\ ST+LD-Diff &32.9&54.8&25.8&19.0&105.9&2738&9.20 \\ ST+LD &\textbf{33.1}&\textbf{55.0}&25.8&19.0&107.2&2765&9.22 \\ ST+GLD &33.0&54.9&\textbf{25.9}&\textbf{19.3}&\textbf{107.7}&\textbf{3140} &\textbf{9.29} \\ \midrule TDA &36.1&57.1&27.5&21.0&117.0&3589 &9.33 \\ TDA-Strengthen &36.2&57.0&27.4&21.2&116.8&3582 &9.29 \\ TDA+GD &36.1&57.1&27.6&21.3&117.9&3612 &9.52 \\ TDA+LD-Diff &36.2&57.1&27.6&21.3&{119.3}&3513 &9.38 \\ TDA+LD &\textbf{36.3}&\textbf{57.2}&\textbf{27.8}&21.4&{121.0}&3448 &9.41 \\ TDA+GLD &36.1&57.1&\textbf{27.8}&\textbf{21.6}&\textbf{121.1}&\textbf{3797} &\textbf{9.56}\\ \bottomrule \end{tabular} \label{table:baseline} \end{table*} \begin{table*}[t] \centering \vspace{6pt} \caption{Performance of TDA+LD using different parameter settings.} \begin{tabular}{c|c|c|c|c|c|c|c} \toprule \multirow{2}*{Parameter Settings} & \multicolumn{5}{c}{Evaluation Metrics (\%)} & \multicolumn{2}{c}{Fine-Granularity} \\ \cmidrule(lr){2-6} \cmidrule(lr){7-8} \ &BLEU4 & ROUGEL & METEOR& SPICE& CIDEr & UniCap & AvgLen \\ \midrule TDA+LD ($\lambda$=0; $\eta$=0) &36.1&57.1&27.5&21.0&117.0&3589&9.33 \\ TDA+LD ($\lambda$=2; $\eta$=0) &35.1&56.3&27.3&20.6&119.7&3165&9.29 \\ TDA+LD ($\lambda$=5; $\eta$=0) &36.0&57.0&\textbf{27.8}&21.2&120.5&3340&9.38 \\ TDA+LD ($\lambda$=8; $\eta$=0) &35.7&56.8&27.6&21.0&120.2&3276&9.34 \\ \hline TDA+LD ($\lambda$=5; $\eta$=0) &36.0&57.0&\textbf{27.8}&21.2&120.5&3340&9.38 \\ TDA+LD ($\lambda$=5; $\eta$=1) &\textbf{36.3}&\textbf{57.2}&\textbf{27.8}&\textbf{21.4}&\textbf{121.0}&\textbf{3448} &\textbf{9.41} \\ TDA+LD ($\lambda$=5; $\eta$=2) &36.1&57.0&\textbf{27.8}&21.2&120.7&3394&9.38 \\ \bottomrule \end{tabular} \label{table:Parameters} \end{table*} The performance results on the Karpathy test split are shown in Table \ref{table:result}. As can be observed, the previous leading methods are Stack-cap \cite{gu2017stack} and Up-down \cite{anderson2018bottom}, which obtain CIDEr scores of 120.4\% and 120.1\%, respectively. Our approach outperforms these competitors in terms of ROUGEL, METEOR, SPICE, and CIDEr. Furthermore, our approach improves the CIDEr score to 121.1\%. For a more comprehensive comparison, we also submit our result to the online MS-COCO test server for evaluation, and we summarize the results of our method and those of the published leading competitors in Table \ref{table:result-sever}. When using a single model, our method achieves the best performance among the competitors on most evaluation metrics, as exhibited in the upper part of Table \ref{table:result-sever}. Some methods also report the results of ensembling several models \cite{anderson2018bottom,rennie2017self}. By simply ensembling four models, our method achieves competitive performance with the existing state-of-the-art ensemble method \cite{anderson2018bottom}, ranking first or second among the competitors on all the metrics.
Notably, \cite{anderson2018bottom} achieves better results than our method on some evaluation metrics because it additionally utilizes an object-detection-based attention mechanism. The work of \cite{anderson2018bottom} detects objects in the image via a pre-trained detector and employs an attention mechanism to infer the useful semantic regions. It can thus locate the semantic regions more accurately and improve captioning performance. However, this method relies on a detector that requires additional annotations, and it is more complicated because the detector must also be executed during inference. In contrast, our method directly applies the attention mechanism to the final convolutional feature maps and introduces no additional annotations or computational overhead. \subsection{Ablation Study} The proposed global-local discriminative (GLD) objective is formulated based on existing reference models and consists of two components, i.e., a global discriminative (GD) constraint and a local discriminative (LD) constraint. In this section, we first conduct experiments against the baseline models to demonstrate the overall effectiveness of the proposed GLD objective and then perform further analysis to assess the contribution of each component. \subsubsection{Overall Contribution of the Global-Local Discriminative (GLD) Objective} We perform an in-depth comparative analysis with the reference models to demonstrate the effectiveness of our GLD objective. Specifically, we compare with ST \cite{vinyals2015show} and TDA \cite{anderson2018bottom}, the two baseline reference models that we use, and present the comparison results on the MS-COCO dataset in Table \ref{table:baseline}. As can be observed, our method exhibits notable improvements on all metrics, e.g., improving the CIDEr score by 4.6\% when using the ST baseline and by 4.1\% when using the TDA baseline. To better illustrate the comparison with the baseline methods, we further show the curves of CIDEr score v.s. training iteration in Figure \ref{fig:CIDER}. Because results on the MS-COCO test server are obtained via online submission and the number of allowed submissions is limited, we cannot obtain a result for each training iteration. Hence, Figure \ref{fig:CIDER} reports results on the Karpathy test split, which is a common test set for this task. We find that our method converges faster and consistently outperforms the baseline method by a sizable margin during the entire training process. To evaluate the fine-granularity of the generated captions, two quantitative metrics (i.e., UniCap and AvgLen) are introduced. The UniCap metric denotes the number of unique captions generated by a tested method on the test set. The larger the UniCap is, the more powerful the model is at generating fine-grained captions that discriminate among similar images. A sufficiently good caption model should generate a distinct caption for each different image. The AvgLen metric denotes the average length of all the captions generated by a tested method on the test set. It reflects the fine-granularity of captions to some extent, since a more fine-grained caption is generally longer. From the results summarized in Table \ref{table:baseline}, it can be inferred that our method encourages generating more discriminative and fine-grained descriptions than the baselines. Specifically, on the 5,000 test images, our method improves the UniCap from 2,713 to 3,140 for the ST baseline and from 3,589 to 3,797 for the TDA baseline.
This verifies that our method has a significant effect on discriminative caption generation. With respect to the average length of captions, our method achieves an increase of 0.09 for ST and 0.23 for TDA, which demonstrates the effectiveness of our method from another perspective. These results also show that our proposed objective achieves a consistent improvement on different baselines, indicating its good adaptability to various caption models. \begin{figure}[t] \centering \subfigure[]{ \includegraphics[width=0.46\linewidth]{ST_CIDER.pdf}} \subfigure[]{ \includegraphics[width=0.46\linewidth]{TDA_CIDER.pdf}} \caption{Average CIDEr score curve of our method and the baseline for the (a) ST and (b) TDA reference models.} \label{fig:CIDER} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{vis1-bk.pdf} \caption{Captions generated by the TDA baseline and our TDA+GLD.} \label{fig:vis1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{vis2-bk.pdf} \caption{Captions generated by the TDA baseline and our TDA+GLD.} \label{fig:vis2} \end{figure} We further visualize some representative results to provide a more direct qualitative comparison. Figure \ref{fig:vis1} exhibits a few test images, the corresponding ground-truth (GT) captions, and the captions generated by the baseline TDA and our method (TDA+GLD). It can be seen that our method generates more fine-grained and detailed captions that better describe the given images. Taking the image in the upper left of Figure \ref{fig:vis1} as an example, the TDA baseline ignores the content of the grass, but our method describes this detail well. Moreover, our method generates the more fine-grained and accurate phrase ``red double decker bus'', while TDA predicts only the common word ``bus''. A similar phenomenon can also be observed in the other examples. Moreover, whereas the baseline method tends to generate the same caption for similar images, our method generates discriminative captions. Some examples are presented in Figure \ref{fig:vis2}. For the two images in the second row of Figure \ref{fig:vis2}, the TDA baseline merely describes the shared content ``flying kites in the sky'' and generates the sentence ``a group of kites flying in the sky'' for both images. In contrast, besides describing the shared content, our method further captures ``parked truck'' for the first image and ``people crowd and bench'' for the second image. This shows that our method captures more details and thus generates more fine-grained and distinguishable captions. \subsubsection{Contribution of Global Discriminative (GD) Constraint} We incorporate only the GD constraint into the reference model to evaluate its contribution. As shown in Table \ref{table:baseline}, it improves most of the evaluation metrics. For the more comprehensive metric (CIDEr), the score is increased by 0.5\% with the ST baseline and by 0.9\% with the TDA baseline. Furthermore, the GD constraint significantly increases the fine-granularity of generated captions. Specifically, it increases the number of unique sentences from 2,713 to 3,040 for the ST baseline and from 3,589 to 3,612 for the TDA baseline. \begin{figure}[t] \centering \subfigure[]{ \includegraphics[width=0.46\linewidth]{TSNE_1.pdf}} \subfigure[]{ \includegraphics[width=0.46\linewidth]{TSNE_2.pdf}} \caption{Embedding of the caption space visualized using t-SNE: (a) TDA and TDA+GD; (b) TDA+(Luo et al.) and TDA+GD.} \label{fig:tsne} \end{figure} To further demonstrate the effectiveness of the global discriminative constraint, we use the t-distributed stochastic neighbor embedding (t-SNE) \cite{maaten2008visualizing} visualization technique to analyze the discriminability of captions in Figure \ref{fig:tsne}. Specifically, we use the proposed model to generate captions for the 5,000 images in the Karpathy test set, and apply the GRU encoder from visual-semantic embedding \cite{faghri2017vse++} to extract a 1,024-dimensional representation vector for each generated caption. Then, the t-SNE \cite{maaten2008visualizing} method is applied to reduce the vector dimension and visualize the caption representation distribution in Figure \ref{fig:tsne}. Intuitively, we can observe that the caption representation cluster produced by TDA+GD is more dispersed than those of TDA and TDA+(Luo et al.) \cite{luo2018discriminability}. This shows that the captions generated by TDA and TDA+(Luo et al.) have less variance and more similarity, and indicates that the captions provided by our TDA+GD are more discriminative. In Section \ref{sec:Self-Retrieval}, further quantitative evaluation is provided to demonstrate the effectiveness of our GD constraint. \subsubsection{Contribution of Local Discriminative (LD) Constraint} We evaluate the contribution of the LD constraint by incorporating only it into the reference model. As shown in Table \ref{table:baseline}, it leads to a clear performance improvement, e.g., a CIDEr increase of 4.1\% and 4.0\% over the ST and TDA reference models, respectively. As described above, our LD constraint works by introducing a word-level reward assignment mechanism, which assigns higher rewards to the more fine-grained and content-sensitive words/phrases that describe the visual details of given images. Below, we investigate the formulation of our LD constraint further. First, our LD constraint strengthens the TF-IDF weights of the less frequent but informative words. A question that may arise is whether the benefit of our LD constraint could come from simply strengthening the TF-IDF weights in CIDEr \cite{vedantam2015cider}. Thus, we design a new baseline with such a TF-IDF weight adjustment. Specifically, the logarithmic base $e$ in CIDEr is replaced by 2 to strengthen the TF-IDF weights, and the original caption-level reward is still used for each word on the ST/TDA baselines. The new baselines are denoted ST-Strengthen/TDA-Strengthen. As exhibited in Table \ref{table:baseline}, the performance of ST-Strengthen/TDA-Strengthen is merely comparable to that of the original ST/TDA and is even worse on some metrics. This shows that the key factor is not that the TF-IDF weighting in CIDEr is too weak; simply strengthening the TF-IDF weights in CIDEr cannot account for the benefit of our LD constraint. Second, our LD constraint is built on the idea of providing each word with a word-level CIDEr reward. To better verify the effectiveness of this idea, we design a simpler variant of our LD constraint, termed ST+LD-Diff/TDA+LD-Diff. Specifically, we calculate the difference between the previous and the current caption scores when each new word is generated and appended, and this difference is used as a word-level reward for the newly appended word.
Consequently, Equation (\ref{eq:LD}) becomes $R_{\mathrm{LD2}}(w^s_t)= R_{\mathrm{C}}(c_{1:t}) - R_{\mathrm{C}}(c_{1:t-1}) + R_{\mathrm{C}}(\tilde{c})$, where $c_{1:t}$ denotes the sentence fragment from time step 1 to $t$. From Table \ref{table:baseline}, we can see that ST+LD-Diff/TDA+LD-Diff perform better than ST/TDA, achieving CIDEr increases of 2.8\% and 2.3\%, respectively. This demonstrates that the idea of word-level rewards does play a significant role in our LD constraint. It can also be easily observed that, by using our LD constraint, ST+LD/TDA+LD obtain still better results than ST+LD-Diff/TDA+LD-Diff, further increasing the CIDEr by 1.3\% and 1.7\%, respectively. The reason is that our LD constraint also strengthens the weights of less frequent but informative words and takes phrases into account, while the difference-based word-level reward only considers the effect of single words. Third, our LD constraint adopts two thresholds to help select the fine-grained words, i.e., $\eta$ and $\lambda$ in Equation (\ref{eq:LD}). Therefore, we conduct experiments to analyze the influence of different threshold settings on performance. We first fix $\eta=0$ and increase $\lambda$ from 2 to 8. As shown in Table \ref{table:Parameters}, the performance first increases when increasing $\lambda$ from 2 to 5, but becomes worse with a further increase of $\lambda$. Thus, setting $\lambda=5$ roughly achieves the best performance. Then, we fix $\lambda$ at 5 and increase $\eta$ from 0 to 2. The results are also presented in Table \ref{table:Parameters}. A similar performance change is observed, and we find that setting $\eta=1$ roughly achieves the best performance. Based on the above analysis, we set $\lambda=5$ and $\eta=1$ in all the experiments. To further qualitatively validate the effect of $\lambda$ and $\eta$, we randomly choose two test images and show the results with different threshold settings in Figure \ref{fig:threshold}. We find that setting too small or too large a value for $\lambda$ and $\eta$ tends to generate less-informative captions, which is consistent with the quantitative analysis. The possible reason is as follows: in the training stage, the word-level rewards are increased for almost all words when $\lambda$ and $\eta$ take small values, and some fine-grained phrases are ignored when $\lambda$ and $\eta$ take large values; neither case treats the fine-grained phrases properly. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{threshold.pdf} \caption{Captions generated by TDA baseline with different thresholds.} \label{fig:threshold} \end{figure} \subsection{Evaluation on Self-Retrieval} \label{sec:Self-Retrieval} In this subsection, we follow previous work \cite{dai2017contrastive} and conduct self-retrieval experiments to assess the discriminability of the proposed approach. Specifically, we first randomly select 5,000 images $\{I_1, I_2, \dots, I_{5000}\}$ from the MS-COCO test set and use the caption model to generate the corresponding 5,000 sentences $\{c_1, c_2, \dots, c_{5000}\}$. Then, for each sentence $c_i$, we use it as a query and compute the probabilities conditioned on each image, i.e., $\{p(c_i|I_1), p(c_i|I_2), \dots, p(c_i|I_{5000})\}$. If the conditional probability $p(c_i|I_i)$ is within the top-K highest probabilities, the image $I_i$ is considered to be a top-K recalled image.
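This selection procedure can be sketched as follows; here \texttt{log\_p} stands for an assumed scorer returning $\log p(c|I)$ under the trained caption model, and the names are illustrative rather than part of our released code.
\begin{verbatim}
# Hedged sketch of the self-retrieval evaluation (Recall@K).
import numpy as np

def recall_at_k(log_p, captions, images, k):
    # captions[i] is the caption generated from images[i]; N = 5000 here
    n, hits = len(images), 0
    for i, cap in enumerate(captions):
        scores = np.array([log_p(cap, img) for img in images])
        topk = np.argsort(-scores)[:k]   # indices of K highest p(c_i|I_j)
        hits += int(i in topk)           # is I_i a top-K recalled image?
    return 100.0 * hits / n              # Recall@K in percent
\end{verbatim}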
The Recall@K (R@K) metric is used to measure the model's discriminability, and it is defined as the fraction of top-K recalled images among all 5,000 images. A higher Recall@K indicates that more images can be retrieved by their corresponding generated sentences and, thus, that the caption model better captures the distinctiveness of images. The evaluation results are reported in Table \ref{table:retrieval}. Since previous work (Luo et al.) \cite{luo2018discriminability} also employed a ranking loss to improve discriminability, we implement their loss on the ST and TDA baselines for comparison. As shown in Table \ref{table:retrieval}, by introducing the ranking loss, ST/TDA+\cite{luo2018discriminability} outperform the baseline methods and achieve impressive retrieval performance. The better-performing model TDA+\cite{luo2018discriminability} obtains Recall@1, Recall@5, and Recall@10 of 73.60\%, 93.04\%, and 96.52\%, respectively. Compared with \cite{luo2018discriminability}, which only uses a ranking loss on the minibatch, our GD constraint additionally contains a ranking loss defined on the entire training set. It can be observed that our GD constraint achieves better results than \cite{luo2018discriminability} on both the ST and TDA baselines. Moreover, we note that \cite{luo2018discriminability} requires a large batch size during training to ensure performance. In Figures \ref{fig:r1} and \ref{fig:r10}, Recall@1 and Recall@10 are plotted with respect to different batch sizes. ST/TDA+(Luo et al.) \cite{luo2018discriminability} suffer a significant performance drop when the batch size is decreased. In contrast, our GD method achieves more steady results across different batch sizes. \begin{table}[!t] \centering \vspace{6pt} \caption{Performance comparison on self-retrieval.} \newcommand {\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{c|c|c|c} \toprule \multirow{2}*{Model} & \multicolumn{3}{c}{Performance (\%)} \\ \cline{2-4} \ &R@1 & R@5 & R@10 \\ \hline \ ST+\cite{luo2018discriminability} &60.24&86.33&93.18 \\ \hline \ ST &50.84&78.54&87.58\\ \ ST+LD &53.60&82.24&90.38 \\ \ ST+GD &61.74&87.12&93.94\\ \ ST+GLD &\textbf{62.08}&\textbf{88.24}&\textbf{94.36}\\ \midrule TDA+\cite{luo2018discriminability} &73.60&93.04&96.52 \\ \hline \ TDA &66.40&88.46&94.26 \\ \ TDA+LD &68.82&90.90&95.80 \\ \ TDA+GD &74.53&93.67&97.03 \\ \ TDA+GLD &\textbf{76.24}&\textbf{94.50}&\textbf{97.90}\\ \bottomrule \end{tabular} \label{table:retrieval} \end{table} \begin{figure}[!t] \centering \subfigure[]{ \includegraphics[width=0.46\linewidth]{ST_R1.pdf}} \subfigure[]{ \includegraphics[width=0.46\linewidth]{TDA_R1.pdf}} \caption{Recall@1 comparison between our GD method and Luo et al. \cite{luo2018discriminability} with different batch sizes using the (a) ST and (b) TDA baselines.} \label{fig:r1} \end{figure} \begin{figure}[!t] \centering \subfigure[]{ \includegraphics[width=0.46\linewidth]{ST_R10.pdf}} \subfigure[]{ \includegraphics[width=0.46\linewidth]{TDA_R10.pdf}} \caption{Recall@10 comparison between our GD method and Luo et al. \cite{luo2018discriminability} with different batch sizes using the (a) ST and (b) TDA baselines.} \label{fig:r10} \end{figure} \begin{table*}[t] \centering \caption{Per-batch training time (s) for some baselines.
``-'' denotes that the model does not require this step.} \newcommand {\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{c|c|c|c} \toprule \ Model & Finding Similar Images & Reward Reweighting & Backpropagation \\ \hline TDA+\cite{luo2018discriminability} & 0.106 &-& 0.368 \\ \hline \ TDA &-&-& 0.323 \\ \ TDA+LD &-& 0.177 & 0.362 \\ \ TDA+GD &0.139&-&0.372 \\ \ TDA+GLD &0.139&0.177&0.385 \\ \bottomrule \end{tabular} \label{table:cost} \end{table*} We further compare the variants of our method that use only the GD or the LD constraint. As shown in Table \ref{table:retrieval}, both the GD and LD constraints achieve a notable performance gain over the baseline methods and thus help improve the model's discriminability. It is also easily observed that the GD constraint performs better than the LD constraint. For example, ST+GD improves over ST by 10.90\%, 8.58\%, and 6.36\% on Recall@1, Recall@5, and Recall@10, respectively, compared to 2.76\%, 3.70\%, and 2.80\% for ST+LD. The reason is that our GD and LD constraints focus on different perspectives. The GD constraint directly models the contrast among similar images and guides the caption model to generate sentences that describe the major discriminative image content, while the LD constraint drives the caption model to add more fine-grained phrases that are sensitive to content details and improves discriminability indirectly. By combining both, our overall GLD objective further improves the discriminative performance, as demonstrated in Table \ref{table:retrieval}. \subsection{Computational Costs} The training procedure for TDA+GLD consists of three steps: 1) finding similar images to compute the ranking loss, 2) reward reweighting to obtain the word-level rewards, and 3) backpropagation to update the parameters. To evaluate the computational cost of each step, we report the per-batch training times of TDA+\cite{luo2018discriminability}, TDA, TDA+LD, TDA+GD, and TDA+GLD in Table \ref{table:cost}. All algorithms are implemented in PyTorch and run on an NVIDIA TITAN card with a minibatch size of 16. As outlined in Table \ref{table:cost}, we have the following observations: (i) TDA+\cite{luo2018discriminability}, TDA+GD, and TDA+GLD indeed incur some computational cost in finding similar images. However, this cost is negligible, as the similarity matrix of images can be precomputed. (ii) Reward reweighting consumes more computational resources, but the TF-IDF weights of the $n$-grams can also be precomputed to save time. (iii) TDA+GLD provides more accurate and discriminative captions, and it achieves a better tradeoff between computational cost and caption quality. \subsection{Discussion and Limitations} \begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{fail.pdf} \caption{Two inaccurate captions generated by the TDA+GLD.} \label{fig:vis3} \end{figure} In this work, we address the fine-grained image captioning task. However, there is no unified evaluation metric for this task, so we introduce several metrics to measure ``fine-grained captioning''. First, inspired by \cite{luo2018discriminability}, UniCap and AvgLen are introduced to reveal the fine-granularity of the generated captions to some extent. Second, we follow works from natural language processing \cite{maaten2008visualizing} and visualize the caption representation distribution via the t-SNE technique, where a more dispersed cluster indicates that the generated captions are more discriminative. Third, we follow previous work \cite{dai2017contrastive} and perform self-retrieval experiments that quantitatively measure the discriminability of the generated captions. The proposed metrics may not fully capture fine-grained captioning, but they are relevant and reliable, and our extensive qualitative and quantitative analysis demonstrates fine-grained captioning to a large extent. We believe that our work provides an important early exploration of this task. There are some limitations in our approach to fine-grained image captioning. We find that our method tends to generate sentences that are more discriminative than others, and thus some generated sentences do not match the ground-truth descriptions of the images well. In Figure \ref{fig:vis3}, we present some examples in which our method generates unsatisfactory captions. One possible explanation for this phenomenon is that the global discriminative (GD) constraint pulls all the generated sentences away from each other, and thus the model tends to generate captions describing discriminative content. In fact, discriminability varies across images. Thus, we will explore an adaptive GD constraint that assigns different balance factors to the GD constraint for different images to avoid this phenomenon. \section{Conclusion} To generate fine-grained and discriminative captions, we propose a global-local discriminative objective, which is formulated as two constraints based on a reference model. Specifically, the global discriminative constraint pulls the generated caption toward better describing the distinctiveness of the corresponding image, thus improving discriminability, and the local discriminative constraint focuses more on the less frequent words and thus enables describing more detailed and fine-grained contents of the input image. Extensive experimental evaluation and comparison with existing leading methods on the MS-COCO dataset demonstrate the effectiveness of the proposed method. However, the proposed global-local discriminative objective has limitations that should be addressed in the future. For example, the thresholds of the local discriminative constraint are set empirically, and how to adaptively adjust these thresholds remains to be studied. Moreover, our discriminative objective helps with informative but less frequent words, but it may also suppress informative and highly frequent words. Therefore, a more general and adaptive discriminative objective needs to be investigated. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} In his letter \cite{Deligne} to B. Malgrange, P. Deligne introduced the notion of a sheaf with a filtration indexed by a local system of ordered sets in order to express the Stokes phenomenon of a linear differential equation of one complex variable in a sheaf-theoretic way. The notion has been developed and extended to arbitrary dimensions (see \cite{Sabbah} and references therein) and is now called a \textit{Stokes filtered sheaf}. Here, the term ``sheaf'' is often replaced by a more precise term like local system, perverse sheaf, and so on. In this paper, we introduce an analogous notion of a Stokes filtered sheaf to express the Stokes phenomenon of a differential-\textit{difference} module of two complex variables in a sheaf-theoretic way. Although we only treat a special class of differential-difference modules, we expect that this approach gives a clue to investigating more general difference equations in a sheaf-theoretic way. To clarify the analogy, in \S \ref{Intro recall} we will briefly recall some parts of the theory of Stokes filtered local systems for differential equations. We also give a class of examples of Stokes filtered local systems constructed in a geometric way, since we will mainly consider the analogue of such examples. We then explain our notion of Stokes filtered ``quasi-local systems'' for differential-difference modules in \S \ref{Intro q-Stokes} and the main results of this paper in \S \ref{Intro main}. A further direction related to mirror symmetry and Dubrovin's conjecture will be discussed in \S \ref{Intro mirror}. \subsection{Stokes filtered local systems for differential equations}\label{Intro recall} Let us briefly recall the theory of Stokes filtered local systems on $S^1$ following \cite{Sabbah}. Let $\scr{I}_1$ denote the constant sheaf on $S^1$ with fiber $\lambda^{-1}\C[\lambda^{-1}]$ equipped with the order $\leqslant$ depending on the point $e^{{\sqrt{-1}}\theta}\in S^1$ defined as follows: For $\varphi,\psi\in \lambda^{-1}\C[\lambda^{-1}]$ and $e^{{\sqrt{-1}}\theta}\in S^1$, $ \varphi\leqslant_{\theta}\psi$ (resp. $\varphi<_\theta\psi$) if and only if $\exp(\varphi(\lambda)-\psi(\lambda))$ is of moderate growth (resp. rapid decay) when $\lambda$ tends to zero satisfying $\arg(\lambda)=\theta$. Let $\cal{L}$ be a local system on the circle $S^1$. A non-ramified pre-Stokes filtration $\cal{L}_{\leqslant}$ on $\cal{L}$ is a family of subsheaves $\cal{L}_{\leqslant\varphi}\subset \cal{L}$ indexed by $\varphi\in \lambda^{-1}\C[\lambda^{-1}]$ satisfying the following condition: For any $e^{{\sqrt{-1}}\theta}\in S^1$, $\varphi\leqslant_\theta\psi$ implies $\cal{L}_{\leqslant\varphi,\theta}\subset \cal{L}_{\leqslant\psi,\theta}$. We may naturally define the non-ramified pre-Stokes filtration ${\mathrm{gr}}\cal{L}_\leqslant$ on ${\mathrm{gr}}\cal{L}\coloneqq \bigoplus_\varphi{\mathrm{gr}}_\varphi\cal{L}$, ${\mathrm{gr}}_\varphi\cal{L}\coloneqq \cal{L}_{\leqslant \varphi}/\cal{L}_{<\varphi}$. Then a non-ramified pre-Stokes filtration is called a non-ramified Stokes filtration if $(\cal{L},\cal{L}_\leqslant)$ is locally isomorphic to $({\mathrm{gr}}\cal{L},{\mathrm{gr}}\cal{L}_\leqslant)$. A morphism of non-ramified Stokes filtered local systems is defined in an obvious way. The following theorem is fundamental (cf. \cite[Theorem 3.1, Theorem 3.5]{Sabbah}): \begin{theorem}\label{diff. abel} The category of non-ramified Stokes filtered local systems is abelian. \end{theorem} \begin{remark} This theorem holds for the more general Stokes filtered local systems explained below.
More generally, the notion of Stokes filtered sheaves can be defined in higher dimensions. In the higher-dimensional case, the notion of ``goodness'' plays an important role in establishing the abelianity of the category. \end{remark} We may also consider (ramified) Stokes filtered local systems. In their definition, the index sheaf $\scr{I}_1$ is replaced by $\scr{I}=\bigcup_{d\geqslant 1}\scr{I}_d$, where, roughly speaking, $\scr{I}_d$ denotes the local system on $S^1$ with fiber $\lambda^{-1/d}\C[\lambda^{-1/d}]$ and monodromy $\exp(-2\pi{\sqrt{-1}}/d)$. Let $\scr{I}^{\acute{e}t}$ denote the \'etale space of $\scr{I}$ and $\tau \colon \scr{I}^{\acute{e}t}\to S^1$ denote the projection. The (pre-)Stokes filtration $\cal{L}_\leqslant$ is then defined as a subsheaf of $\tau^{-1}\cal{L}$ satisfying certain conditions (see \cite{Sabbah} for more details). \begin{theorem}[{Deligne \cite{Deligne}, Malgrange \cite{Mal}, see also \cite[Theorem 5.8]{Sabbah}}]\label{Riemann-Hilbert} There is a functor \[\cal{H}\mapsto \mathrm{RH}(\cal{H})= \(\mathrm{RH}(\cal{H}),\mathrm{RH}_\leqslant(\cal{H})\)\] from the category of germs of meromorphic connections on $(\C,0)$ to the category of Stokes filtered local systems on $S^1$. \qed \end{theorem} \begin{remark} In this theorem, the functor $\mathrm{RH}$ is called the Riemann-Hilbert functor. The local system ${\mathrm{RH}}(\cal{H})$ on $S^1$ is denoted by ${\mathscr{H}}^0(\widetilde{{\mathrm{DR}}}(\cal{H}))$ and the Stokes filtration ${\mathrm{RH}}_{\leqslant}(\cal{H})$ on it is denoted by ${\mathscr{H}}^0({\mathrm{DR}}_\leqslant(\cal{H}))$ in \cite{Sabbah}. \end{remark} We can construct interesting examples of Stokes filtered local systems and the corresponding differential equations in a geometric way. For simplicity, we restrict ourselves to the one-dimensional case. Let $X$ be a compact Riemann surface. Let $f\colon X\to \mathbb{P}^1$ be a meromorphic function on it. Put $P=f^{-1}(\infty)$. Assume that $f$ has only $A_1$-singularities on $U\coloneqq X\setminus P$. Put $\mathfrak{X}\coloneqq \C^*_\lambda\times X$ and $\mathfrak{P}\coloneqq \C^*_\lambda\times P$. Consider the meromorphic connection $\cal{M}(f)=({\mathscr{O}}_{\mathfrak{X}}(*\mathfrak{P}),d+d(\lambda^{-1}f))$ on $\mathfrak{X}$. Then, we obtain a meromorphic connection on $\C^*_\lambda$ by taking the pushforward \[\cal{H}_{\rm dR}^1(f)\coloneqq {\mathrm{Cok}}\left[\pi_{\mathfrak{X}*}{\mathscr{O}}_{\mathfrak{X}}(*\mathfrak{P})\xrightarrow{d_{\mathfrak{X}/\C^*}+\lambda^{-1}df} \pi_{\mathfrak{X}*}\Omega_{\mathfrak{X}/\C^*}^1(*\mathfrak{P})\right] \] where $\pi_{\mathfrak{X}}\colon \mathfrak{X}\to \C^*_\lambda$ denotes the projection and $d_{\mathfrak{X}/\C^*}\colon {\mathscr{O}}_{\mathfrak{X}}\to\Omega_{\mathfrak{X}/\C^*}^1$ denotes the relative differential. It is easy to see that $\cal{H}^1_{\rm dR}(f)$ has singularities only at $\{0,\infty\}$. $\cal{H}_{\rm dR}^1(f)$ can be seen as a filtered de Rham cohomology studied in the theory of primitive forms by Kyoji Saito (see \cite{ST} and references therein). Applying the Riemann-Hilbert functor in Theorem \ref{Riemann-Hilbert} to the germ of $\cal{H}_{\rm dR}^1(f)$, we obtain the Stokes filtered local system ${\mathrm{RH}}(\cal{H}_{\rm dR}^1(f))$. It is well known that ${\mathrm{RH}}(\cal{H}_{\rm dR}^1(f))$ is non-ramified and moreover of exponential type (cf. \cite{Pham} for the case where $U=\C^n$, and \cite{Sabbah-Saito} for the general case).
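For indices of exponential type, the order recalled above becomes completely explicit; as a direct computation from the definition of $\leqslant_\theta$, for $c, c'\in\C$ and $\lambda=re^{{\sqrt{-1}}\theta}$ one has
\begin{align*}
-\frac{c}{\lambda}\leqslant_\theta -\frac{c'}{\lambda}
\iff {\mathrm{Re}}\((c-c')e^{-{\sqrt{-1}}\theta}\)\geqslant 0,
\end{align*}
since $\left|\exp\((c'-c)/\lambda\)\right|=\exp\({\mathrm{Re}}\((c'-c)e^{-{\sqrt{-1}}\theta}\)/r\)$ is of moderate growth as $r\to 0$ exactly when the exponent is non-positive.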
In other words, exponential type means that the graded part ${\mathrm{gr}}_\varphi{\mathrm{RH}}(\cal{H}_{\rm dR}^1(f))$ is non-zero if and only if $\varphi=-c/\lambda$ for a critical value $c$ of $f$. There is a geometric description of $\mathrm{RH}(\cal{H}_{\rm dR}^1(f))$. For each $\lambda\in \C^*$, let $H^{\rm rd}_1(U, f/\lambda)$ denote the rapid decay homology of Bloch-Esnault-Hien (\cite{BE}, \cite{Hien}) of the meromorphic connection $({\mathscr{O}}_X(*P),d+\lambda^{-1}df)$. As proved in \cite{HR} (in a more general setting), $\cal{H}_1^{\rm rd}(f)\coloneqq\bigcup_{\lambda\in \C^*}H^{\rm rd}_1(U, f/\lambda)$ is a local system, and the period integral \begin{align*} {\rm Per}\colon\cal{H}_1^{\rm rd}(f)\longrightarrow {\mathscr{H}\! \! om}_{\scr{D}_{\C^*}}(\cal{H}_{\rm dR}^1(f)^\vee,{\mathscr{O}}_{\C^*}), \quad [\gamma\otimes e^{-f/\lambda}]\mapsto (\omega\mapsto \int_{\gamma}e^{-f/\lambda}\omega) \end{align*} gives an isomorphism. Here, we have put $\cal{H}_{\rm dR}^1(f)^\vee\coloneqq {\mathscr{H}\! \! om}_{\mathscr{O}}(\cal{H}_{\rm dR}^1(f),{\mathscr{O}}_{\C^*})$, and hence ${\rm Per}$ induces an inclusion $\cal{H}^{\rm rd}_1(f)\hookrightarrow \cal{H}^1_{\rm dR}(f)$. Take the real blowup ${\mathrm{Bl}}^\mathbb{R}_0(\C)$ of $\C$ at the origin and consider the inclusions $S^1\xrightarrow{i_{S^1}}{\mathrm{Bl}}^\mathbb{R}_0(\C)\xleftarrow{j_{\C^*}}\C^*.$ Then \[\cal{L}^{\rm rd}(f)\coloneqq i_{S^1}^{-1}j_{\C^**}\cal{H}^{\rm rd}_1(f)\] is a local system on $S^1$. Take any meromorphic basis $e_1,\dots, e_r$ of $\cal{H}^1_{{\rm dR}}(f)$. Then we can define a filtration on $\cal{L}^{\rm rd}(f)$ as follows: For a section $\varepsilon\in\cal{L}^{\rm rd}(f)$, take a representative $\widetilde{\varepsilon}\in \cal{H}^{\rm rd}_1(f)$. Then there is an expression \begin{align*} \widetilde{\varepsilon}=\sum_{i=1}^r h_i(\lambda)e_i \end{align*} where the $h_i(\lambda)$ denote holomorphic functions on a sector in $\C^*$. Then $\varepsilon \in \cal{L}^{\rm rd}_{\leqslant \varphi}(f)$ if and only if $e^{-\varphi}h_i(\lambda)$ is of moderate growth when $\lambda$ tends to zero on the sector for all $i=1,\dots, r$. Then we can prove that $(\cal{L}^{\rm rd}(f),\cal{L}^{\rm rd}_\leqslant(f))\simeq ({\mathrm{RH}}(\cal{H}_{\rm dR}^1(f)),{\mathrm{RH}}_\leqslant (\cal{H}_{\rm dR}^1(f)))$. Let $\omega_1,\dots,\omega_r$ denote sections of $\pi_{\mathfrak{X}*}\Omega^1_{\mathfrak{X}/\C^*}(*\mathfrak{P})$ which represent the dual basis $e_1^\vee,\dots, e_r^\vee$ of $e_1,\dots,e_r$ in $\cal{H}^1_{\rm dR}(f)^\vee$. Assume that $\varepsilon=\gamma\otimes e^{-f/\lambda}$ for some family of paths $\gamma$. Then we have \[h_i(\lambda)=\int_{\gamma}e^{-f/\lambda}\omega_i.\] By the saddle point method, we can directly obtain the following theorem without using the Riemann-Hilbert correspondence (Theorem \ref{Riemann-Hilbert}). \begin{theorem}\label{L(f)} The pair $(\cal{L}^{\rm rd}(f), \cal{L}^{\rm rd}_\leqslant(f))$ is a non-ramified Stokes filtered local system on $S^1$ such that ${\mathrm{gr}}_\varphi\cal{L}^{\rm rd}(f)\neq 0$ iff $\varphi=-c/\lambda$ for a critical value $c$ of $f$. \qed \end{theorem} \subsection{Stokes filtered quasi-local systems}\label{Intro q-Stokes} The purpose of this paper is to introduce an analogous notion of a Stokes filtered sheaf to express the asymptotic behavior of a differential-difference module of two variables. Before explaining the relation to differential-difference modules, we would like to explain the idea behind the definition of such Stokes filtered sheaves.
Let $T=(S^1)^2\simeq \{(\theta_u,\theta_v)\in (\mathbb{R}/2\pi\mathbb{Z})^2\}$ be the torus considered as the corner of the real blowing up \[\varpi_{B}\colon \widetilde{B}={\mathrm{Bl}}^\mathbb{R}_Z(B)\longrightarrow B=\C^2 \] along the divisor $Z=\{(u, v)\in \C^2\mid uv=0\}$. Put $B^*\coloneqq B\setminus Z$ and let $T\xrightarrow{\imath_T} \widetilde{B}\xleftarrow{\jmath_B}B^*$ denote the inclusions. As the counterpart of $\scr{I}_1$ (or $\lambda^{-1}\C_{S^1}\subset \scr{I}_1$), we consider the index sheaf ${\mathscr{Q}}$, defined as the restriction to $T$ of the subsheaf of $\jmath_{B*}{\mathscr{O}}_{B^*}$ generated by the sections of the form \begin{align*} u^{-1}\(n\log v+\frac{h(v)}{v}\)\quad\quad(n\in\mathbb{Z}, h(v)\in {\mathscr{O}}_{\C}) \end{align*} (see Definitions \ref{tilde Q} and \ref{IQ definition} for a more precise formulation). The sheaf ${\mathscr{Q}}$ admits a sheaf of orders $\leqslant$ on $T$ (see Definition \ref{IQ order}). Let $\k$ be a field. As a counterpart of a $\k$-local system on $S^1$, we consider a quasi-local system of $\k[q^{\pm1}]$-modules on $T$. Here, by a quasi-local system of $\k[q^{\pm 1}]$-modules on $T$, we mean an $\mathbb{R}$-constructible sheaf $\L$ of finite-rank free $\k[q^{\pm 1}]$-modules on $T$ such that $\L(q)=\L\otimes_{\k[q^{\pm 1}]}\k(q)$ is a local system of $\k(q)$-vector spaces. We moreover assume that $\L$ is constructible with respect to the stratification $\Theta=(T_{\mathbb{R}_+}, T_{\mathbb{R}_-}, T_{+}, T_-)$ of $T$, where $T_{\mathbb{R}_\pm}=\{e^{{\sqrt{-1}}\theta_u}\in \mathbb{R}_\pm\}$ and $T_{\pm}=\{\pm{\mathrm{Im}} e^{{\sqrt{-1}}\theta_u}>0\}$. Let $\tau\colon {\mathscr{Q}}^{\acute{e}t}\to T$ denote the \'etale space of ${\mathscr{Q}}$. Then, a pre-Stokes filtration is defined as a subsheaf $\L_\leqslant$ of $\k$-modules in $\tau^{-1}\L$ with properties similar to those in \S \ref{Intro recall} (see Definition \ref{pre-q}). The new property we add here is the compatibility of the filtration with the action of $\k[q^{\pm 1}]$. More precisely, we impose the condition that \begin{align*} q\cdot \L_{\leqslant \varphi}= \L_{\leqslant\varphi+2\pi{\sqrt{-1}} u^{-1}} \end{align*} for any local section $\varphi$ of ${\mathscr{Q}}$. By this compatibility, we can induce the \textit{coarse} filtration on $\L$, which consists of $\k[q^{\pm 1}]$-submodules of $\L$ (see \S \ref{2.2}). Then, we can define the notion of a (good) Stokes filtered quasi-local system $(\L,\L_\leqslant)$ by imposing the existence of local isomorphisms to its graded part (see Definition \ref{Def q-Stokes} for a more precise formulation). For some technical reasons, we also impose conditions on the graded part with respect to the coarse filtrations. The following is the main theorem of this part, which is an analogue of Theorem \ref{diff. abel}: \begin{theorem}[See Theorem \ref{STRICTNESS} for a more precise statement] The category of good Stokes filtered quasi-local systems on $T$ is an abelian category. \end{theorem} \subsection{Geometric construction}\label{Intro main} In this paper, we do not try to formulate an analogue of Theorem \ref{Riemann-Hilbert}. Instead, we shall give an analogue of Theorem \ref{L(f)}. \subsubsection{De Rham cohomology}\label{intro dR} As in \S \ref{Intro recall}, let $X$ be a compact Riemann surface and $f$ a meromorphic function on it. We also consider a meromorphic function $g$ on $X$. Let $D$ be the union of $P$ and the poles of $g^{-1}dg$. Let $S= \C^2$ be the surface with coordinates $(\lambda,\mu)$.
We put ${\mathcal{X}}=S\times X$ and $\cal{D}=S\times D$. We then define the module ${\mathscr{M}}(f,g)={\mathscr{O}}_{{\mathcal{X}}}(*\cal{D})$ with operators $\nabla$, $\nabla_{\mathfrak{a}}$, and ${\mathbb{S}}$. Here, $\nabla=d+\lambda^{-1}(df-\mu g^{-1}dg)$ denotes the relative connection, $\nabla_{\mathfrak{a}}$ denotes the differential operator corresponding to the vector field ${\mathfrak{a}}=\lambda^2\partial_\lambda+\lambda\mu \partial_\mu$ defined by $\nabla_{\mathfrak{a}}(1)=-f$, and ${\mathbb{S}}$ denotes the difference operator corresponding to the shift of parameters $\sigma\colon (\lambda,\mu)\mapsto(\lambda,\mu-\lambda)$ determined by ${\mathbb{S}}(1)=g$ (see Definition \ref{MFG} for a precise definition). Since $(\nabla,\nabla_{\mathfrak{a}},{\mathbb{S}})$ satisfies a kind of integrability condition (Lemma \ref{INT}), the de Rham cohomology group (or pushforward) ${\mathscr{H}}^1_{\rm dR}(f,g)_{|S^\circ}$ of ${\mathscr{M}}(f, g)$ restricted to $S^\circ=\{(\lambda,\mu)\in S\mid \lambda\neq 0\}$ is naturally equipped with the operators $\nabla_{\mathfrak{a}}$ and ${\mathbb{S}}$ (see \S \ref{DR2} for the definition). Roughly speaking, the differential-difference module ${\mathscr{H}}^1_{\rm dR}(f,g)_{|S^\circ}$ is a counterpart of $\cal{H}^1_{\rm dR}(f)$. However, some difficulties appear when $E=D\setminus P$ is not empty (as we will see below, this case contains an interesting example). In this case, ${\mathscr{H}}^1_{\rm dR}(f,g)(*\lambda)$ is not locally free of finite rank over ${\mathscr{O}}_S(*\lambda)$, although we can show that ${\mathscr{H}}^1_{\rm dR}(f,g)_{|S^\circ}$ is locally free over ${\mathscr{O}}_{S^\circ}$ (see Theorem \ref{dRT}). This means that we cannot take a meromorphic frame of ${\mathscr{H}}^1_{\rm dR}(f,g)$ around $\lambda=0$, which causes a problem since we would like to investigate the asymptotic behavior as $\lambda\to0$. To avoid this problem, we take subsheaves ${\mathscr{H}}^1_{{\rm dR}, a, b}(f,g)$ of ${\mathscr{H}}^1_{{\rm dR}}(f,g)$ of free ${\mathscr{O}}_S$-modules indexed by two integers $(a, b)\in \mathbb{Z}^2$ (see \S \ref{S lattice}). These two integers correspond to the pole orders along the components of $E=E_0\sqcup E_\infty$, where $E_0=g^{-1}(0)\setminus P$ and $E_\infty=g^{-1}(\infty)\setminus P$. If $a\leqslant a'$ and $b\leqslant b'$, then we have the inclusion ${\mathscr{H}}^1_{{\rm dR}, a, b}(f,g)\subset {\mathscr{H}}^1_{{\rm dR},a',b'}(f,g)$. The operators $\nabla_{\mathfrak{a}}$ and ${\mathbb{S}}$ act as \[\nabla_{\mathfrak{a}}\colon {\mathscr{H}}_{{\rm dR}, a, b}^1(f,g)\to{\mathscr{H}}_{{\rm dR},a, b}^1(f,g), \text{ and }{\mathbb{S}}\colon {\mathscr{H}}^1_{{\rm dR}, a, b}(f,g)\to \sigma_*{\mathscr{H}}^1_{{\rm dR},a+1,b-1}(f,g).\] The limit $\lim_{a, b\to\infty}{\mathscr{H}}^1_{{\rm dR},a, b}(f,g)$ is isomorphic to ${\mathscr{H}}^1_{{\rm dR}}(f,g)$, and the other limits as $(a, b)\to (\infty,-\infty), (-\infty,\infty)$, and $(-\infty,-\infty)$ also exist and have geometric meanings concerning the asymptotic behavior of the de Rham complexes along the divisor $E$ (see Proposition \ref{limits} for a more precise statement). \subsubsection{Betti homology}\label{intro Betti} Put $Y=X\setminus D$ and $\cal{Y}^\circ=S^\circ\times Y$. Let $\k$ be a subfield of $\C$. Then we consider the following local system of $\k[q^{\pm1}]$-modules on $\cal{Y}^\circ$: \begin{align*} {\mathscr{K}}(f,g)\coloneqq \k[q^{\pm 1}]e^{-f/\lambda}g^{\mu/\lambda}\subset {\mathscr{M}}(f,g)_{|\cal{Y}^\circ}, \end{align*} where we put $q=\exp(2\pi{\sqrt{-1}}\mu/\lambda)$.
Although $g^{\mu/\lambda}$ is a multivalued function, the submodule ${\mathscr{K}}(f,g)$ is well defined, since the analytic continuation of $g^{\mu/\lambda}$ along a loop in $Y$ multiplies it by an integer power of $q$. We regard ${\mathscr{K}}(f,g)$ as a counterpart of the local system of flat sections of $\cal{M}(f)_{|\mathfrak{X}\setminus \mathfrak{P}}$. In the case where $E=D\setminus P$ is empty, we consider the family of rapid decay homology groups to obtain a local system ${\mathscr{H}}^{\rm rd}_1(f,g)$ of $\k[q^{\pm 1}]$-modules on $S^\circ$, which we regard as a counterpart of $\cal{H}^{\rm rd}_1(f)$. In the general case where $E$ is not necessarily empty, as in \S \ref{intro dR}, we should consider moderate growth or rapid decay conditions on the components $E_0$ and $E_\infty$ and construct four kinds of local systems ${\mathscr{H}}^{\rm mod}_1(f,g)$, ${\mathscr{H}}^{\rm Be}_{1,E_0!}(f, g)$, ${\mathscr{H}}^{\rm Be}_{1,E_\infty!}(f, g)$ and ${\mathscr{H}}^{\rm rd}_1(f,g)$ of $\k[q^{\pm 1}]$-modules on $S^\circ$. Using the relations between (co)homology intersection pairings and period integrals (cf. \cite{MMT}, \cite{FSY}, and the review in \S \ref{3.4} for this case), we obtain inclusions of the four sheaves above into the four limits considered in \S \ref{intro dR}. In particular, we have the inclusion ${\mathscr{H}}^{\rm mod}_1(f,g)\hookrightarrow {\mathscr{H}}^1_{{\rm dR}}(f,g)$. Glueing the sheaves ${\mathscr{H}}^{\rm mod}_1(f,g)$, ${\mathscr{H}}^{\rm Be}_{1,E_0!}(f, g)$, ${\mathscr{H}}^{\rm Be}_{1,E_\infty!}(f, g)$ and ${\mathscr{H}}^{\rm rd}_1(f,g)$ on $S^*\coloneqq S^\circ\setminus \{\mu=0\}$, we obtain a quasi-local system ${\mathscr{H}}^{\rm Be}_{1,\bm{0}}$ on $S^*$ (see \S \ref{Glueing Betti}). Put ${\mathscr{H}}^1_{{\rm dR},\bm0}={\mathscr{H}}^1_{{\rm dR},0,0}$. The inclusions defined above induce an inclusion ${\mathscr{H}}^{\rm Be}_{1,\bm{0}}\hookrightarrow {\mathscr{H}}^1_{{\rm dR},\bm0|S^*} $, which in turn induces an isomorphism ${\mathscr{H}}^{\rm Be}_{1,\bm{0}}\otimes{\mathscr{O}}_{S^*}\simeq {\mathscr{H}}^1_{{\rm dR},\bm0|S^*}$ (see \S \ref{Glueing Betti}). This inclusion plays a similar role as the inclusion $\cal{H}^{\rm rd}_1(f)\hookrightarrow \cal{H}^1_{\rm dR}(f)$. \subsubsection{Main theorem and an example} Take the holomorphic map $\phi_S\colon B\to S$ defined by $\phi_S(u, v)=(uv, v)$, which induces an isomorphism $\phi_S\colon B^*\xrightarrow{\sim} S^*$. Put \begin{align*} \L^{\rm Be}(f,g)\coloneqq \imath_T^{-1}\jmath_{B*}\phi_{S}^{-1}{\mathscr{H}}^{\rm Be}_{1,\bm0}(f,g), \end{align*} which is a quasi-local system on $T$. Using a local frame of $\phi_S^*{\mathscr{H}}^1_{{\rm dR},\bm0}$ around the origin of $B$, we can define the pre-Stokes filtration $\L^{\rm Be}_{\leqslant}(f,g)$ on $\L^{\rm Be}(f,g)$. Let ${\mathrm{Crit}}(f)$ denote the set of critical points of $f_{|U}$. Assume $E\cap {\mathrm{Crit}}(f)=\emptyset$. Under this assumption, we can define the goodness condition on $(f,g)$ (see Definition \ref{good pair}). The following is the main theorem of this paper: \begin{theorem}[Theorem \ref{main theorem}] Assume that $(f,g)$ is good. Then the pair \[(\L^{\rm Be}(f,g),\L^{\rm Be}_{\leqslant}(f,g))\] is a good Stokes filtered quasi-local system. \end{theorem} As a simple example, we consider the case $X=\mathbb{P}^1$, $f=z$ and $g=z$, where $z$ denotes the non-homogeneous coordinate on the projective line $\mathbb{P}^1$. In this case, $E=E_0=\{0\}$ is non-empty, and the differential-difference equation essentially corresponds to the difference equation for the gamma function (see Example \ref{EXG}).
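Before describing the answer, let us indicate, at least heuristically, how the gamma function enters; the following elementary computation is only meant as an illustration, the precise statements being Example \ref{EXG}, Example \ref{GAMMA FUN} and \S \ref{last examples}. The integral \begin{align*} I(\lambda,\mu)\coloneqq \int_0^\infty e^{-z/\lambda}z^{\mu/\lambda}\frac{dz}{z}=\lambda^{\mu/\lambda}\Gamma(\mu/\lambda), \end{align*} which converges for ${\mathrm{Re}}(\lambda)>0$ and ${\mathrm{Re}}(\mu/\lambda)>0$, pairs the flat section $e^{-z/\lambda}z^{\mu/\lambda}$ with the form $dz/z$. The functional equation $\Gamma(s)=(s-1)\Gamma(s-1)$ with $s=\mu/\lambda$ translates into the difference equation \begin{align*} I(\lambda,\mu)=(\mu-\lambda)\, I(\lambda,\mu-\lambda) \end{align*} with respect to the shift $\sigma(\lambda,\mu)=(\lambda,\mu-\lambda)$. Moreover, in the coordinates $(u, v)$ with $\phi_S(u, v)=(uv, v)=(\lambda,\mu)$, we have $\lambda^{\mu/\lambda}\Gamma(\mu/\lambda)=u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1})$ (with appropriate choices of branches), which is the section appearing in the following description.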
The Stokes filtered quasi-local system $\L^{\rm Be}(z, z)$ is naturally isomorphic to $(\L_{\Gamma},\L_{\Gamma,\leqslant})$ defined as follows (see Example \ref{GAMMA FUN} and \S \ref{last examples}): Let $\L_\Gamma$ be a sub-sheaf of $\imath_T^{-1}\jmath_{B*}{\mathscr{O}}_{B^*}$ defined by \begin{align*} \L_{\Gamma}(V)\coloneqq \begin{cases} \C[q^{\pm 1}]u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1})& (V\cap T_{\mathbb{R}_-}= \emptyset)\\ \C[q^{\pm 1}]u^{u^{-1}}v^{u^{-1}}(1-q)\Gamma(u^{-1})& (V\cap T_{\mathbb{R}_-}\neq \emptyset) \end{cases} \end{align*} where $V$ is a small open subset of $T$ and $\Gamma(u^{-1})$ denotes the gamma function. Then the moderate growth condition on $\imath_T^{-1}\jmath_{B*}{\mathscr{O}}_{B^*}$ induces the filtration $\L_{\Gamma,\leqslant}$. By the Stirling formula and the reflection formula for the gamma function, the graded part is given by \begin{align*} \tau_!{\mathrm{gr}}\L_{\Gamma}\simeq \C[q^{\pm 1}]u^{1/2}v^{u^{-1}}e^{-u^{-1}}. \end{align*} In this way, the theory of Stokes filtered quasi-local systems contains a sheaf-theoretic expression of the asymptotic behavior of solutions to some classical difference equations. We will also deal with an example related to cylindrical functions. \subsection{Further direction related to mirror symmetry}\label{Intro mirror} One of the motivations of this study is to formulate the equivariant version of Dubrovin's conjecture from the viewpoint of the author's previous study with F. Sanda \cite{Sanda-Shamoto}. See \cite{FIMS}, \cite{TV}, and \cite{CV} for pioneering works on this topic. From the viewpoint of mirror symmetry, the differential-difference module considered here corresponds to the equivariant quantum cohomology with the grading operator (as a differential operator) and the shift operator (as a difference operator). The example of the gamma function corresponds to the affine line $\mathbb{A}^1$ with the canonical $\C^*$-action. The example related to cylindrical functions corresponds to the projective line $\mathbb{P}^1$ with the $\C^*$-action. To relate our Stokes filtered quasi-local systems to the equivariant derived categories, it seems important to formulate the notion of Stokes data, which should be described in terms of modules over $\k[q^{\pm 1}]$. \subsection{Notations} For a complex number $\alpha$, ${\mathrm{Re}}(\alpha)$ and ${\mathrm{Im}}(\alpha)$ denote the real and imaginary parts of $\alpha$, respectively. For a complex manifold $M$, ${\mathscr{O}}_M$ denotes the sheaf of holomorphic functions on $M$. For a meromorphic function $F$ on $M$, $(F)_0$ and $(F)_\infty$ denote the zero divisor of $F$ and the pole divisor of $F$, respectively. $|(F)_0|$ and $|(F)_\infty|$ denote their supports. For a hypersurface $N\subset M$, ${\mathscr{O}}_M(*N)$ denotes the sheaf of meromorphic functions whose poles are contained in $N$. \section{Stokes filtered quasi-local systems}\label{stokes} \subsection{A sheaf of ordered abelian groups} Let $B=\C^2$ denote the complex surface with coordinates $(u, v)$. Let ${Z}$ denote the divisor $|(uv)_0|$ in $B$. We take the real blowing up \begin{align*} \varpi_B\colon \widetilde{B}={\mathrm{Bl}}^\mathbb{R}_{{Z}}(B)\longrightarrow B \end{align*} of $B$ along the normal crossing divisor ${Z}$, which is identified with the projection \begin{align*} \(\mathbb{R}_{\geq 0}\times (\mathbb{R}/2\pi\mathbb{Z})\)^2\longrightarrow \C^2, \quad ((r_u,\theta_u),(r_v,\theta_v))\mapsto (u, v)=(r_u e^{{\sqrt{-1}} \theta_u}, r_v e^{{\sqrt{-1}} \theta_v}).
\end{align*} Let $B^*\coloneqq B\setminus {Z}$ be the complement of ${Z}$ in $B$. Let $\widetilde{\jmath}_B\colon B^*\to \widetilde{B}$ denote the inclusion. Let $\varpi_{v}\colon \widetilde{B}\to \C$ denote the projection to the $v$-plane. \begin{definition}\label{tilde Q} Let $\widetilde{{\mathscr{Q}}}$ denote the sheaf of $\mathbb{Z}$-submodules of $\widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*}$ locally generated by the sections of the form \begin{align*} &u^{-1}\(n\log v+\frac{h(v)}{v}\)&(n\in\mathbb{Z}, h(v)\in \varpi_v^{-1}{\mathscr{O}}_{\C}) \end{align*} where the branch of $\log v$ is locally defined. \end{definition} Let $\widetilde{\jmath}_{B*}{\mathscr{O}}^{\rm lb}_{B^*}$ denote the subsheaf of $\widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*}$ whose sections are locally bounded. In other words, for an open subset $V\subset \widetilde{B}$, a section $\varphi\in\widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*}(V)={\mathscr{O}}_{B^*}(V\cap B^*)$ is a section of $\widetilde{\jmath}_{B*}{\mathscr{O}}^{\rm lb}_{B^*}$ if and only if the following condition is satisfied: for any compact subset $K\subset V$, there exists a positive constant $C_K>0$ such that $|\varphi(u, v)|<C_K$ for any $(u, v)\in K\cap B^*$. Let $T$ be the corner of $\widetilde{B}$, which is identified with $(\mathbb{R}/2\pi\mathbb{Z})^2$. Let $\imath_T\colon T\hookrightarrow \widetilde{B}$ denote the inclusion. \begin{definition}\label{IQ definition} We set ${\mathscr{Q}}\coloneqq \imath_T^{-1}\widetilde{{\mathscr{Q}}}/(\widetilde{{\mathscr{Q}}}\cap \widetilde{\jmath}_{B*}{\mathscr{O}}^{\rm lb}_{B^*})$. \end{definition} Let ${\mathscr{A}}_{\widetilde{B}}^{\leqslant {Z}}$ denote the subsheaf of $\widetilde{\jmath}_{B*} {\mathscr{O}}_{B^*}$ whose sections have moderate growth along $\partial \widetilde{B}$ (\cite[\S 8.3]{Sabbah}). Recall that a section $\varphi$ of $\widetilde{\jmath}_{B*} {\mathscr{O}}_{B^*}(V)$ for an open subset $V\subset \widetilde{B}$ is in ${\mathscr{A}}_{\widetilde{B}}^{\leqslant {Z}}(V)$ if and only if, for any compact subset $K\subset V$, there exist $N_K\geqslant 0$ and $C_K>0$ such that \[|\varphi|<C_K|uv|^{-N_K}\] on $K\cap B^*$. Let $\log {\mathscr{A}}_{\widetilde{B}}^{\leqslant {Z}}$ denote the subsheaf of $\widetilde{\jmath}_{B*} {\mathscr{O}}_{B^*}$ consisting of the sections whose exponentials have moderate growth along $\partial \widetilde{B}$. In other words, we put \[\log {\mathscr{A}}_{\widetilde{B}}^{\leqslant {Z}}\coloneqq \exp^{-1}({\mathscr{A}}_{\widetilde{B}}^{\leqslant {Z}})\] where $\exp\colon \widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*}\to \widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*}$, $\varphi\mapsto e^\varphi$ denotes the exponential map. \begin{definition}\label{IQ order} Let ${\mathscr{Q}}_{\leqslant 0}$ be the subsheaf of ${\mathscr{Q}}$ defined as the restriction of the quotient $ (\widetilde{{\mathscr{Q}}}\cap\log {\mathscr{A}}_{\widetilde{B}}^{\leqslant Z})/ (\widetilde{{\mathscr{Q}}}\cap \widetilde{\jmath}_{B*} {\mathscr{O}}_{B^*}^{\rm lb})$ to ${{{T}}}$. Note that we have ${\mathscr{Q}}_{\leqslant 0}\cap (-{\mathscr{Q}}_{\leqslant 0})=0$. For two sections $\varphi, \psi\in {\mathscr{Q}}(V)$ for an open subset $V\subset {{{T}}}$, we define the order $\leqslant_V$ on ${\mathscr{Q}}(V)$ by \begin{align*} \varphi\leqslant_V\psi\overset{\text{def}}{\Longleftrightarrow} \varphi-\psi\in {\mathscr{Q}}_{\leqslant 0}(V). \end{align*} We also write $\varphi <_V \psi $ if and only if $\varphi\leqslant_V \psi$ and $\varphi\neq \psi$.
\end{definition} In the following, we regard ${\mathscr{Q}}=({\mathscr{Q}},\leqslant)$ as a sheaf of ordered abelian groups. For every $\bm{\theta}\in{{T}}$, we also use the following notation: For two germs $\varphi_{\bm{\theta}}, \psi_{\bm{\theta}}\in{\mathscr{Q}}_{\bm{\theta}}$, we write $\varphi_{\bm{\theta}}\leqslant_{\bm{\theta}} \psi_{\bm{\theta}}$ if and only if there exist representatives $\varphi, \psi\in {\mathscr{Q}}(V)$ on an open neighborhood $V$ of $\bm{\theta}$ such that $\varphi\leqslant_V\psi$. For $n\in \mathbb{Z}$ and $h(v)\in {\mathscr{O}}_{\C,0}$, we have the following sub-sheaf of sets in ${\mathscr{Q}}$: \begin{align*} \Phi_{n, h(v)}\coloneqq \left[u^{-1}\(n \log v +\frac{h(v)}{v}+2\pi{\sqrt{-1}}\mathbb{Z}\) \right]. \end{align*} \begin{definition}\label{GOODNESS} A finite disjoint union $\Phi=\bigsqcup_{j=1}^m \Phi_{n_j, h_j(v)}$ $(n_j\in \mathbb{Z}, h_j(v)\in {\mathscr{O}}_{\C,0})$ is called a \textit{good factor} if $h_i(0)\neq h_j(0)$ or $n_i-n_j\neq 0$ for $i\neq j$. \end{definition} \begin{definition} Let $\Phi=\bigsqcup_{j=1}^m \Phi_{n_j, h_j(v)}$ $(n_j\in \mathbb{Z}, h_j(v)\in {\mathscr{O}}_{\C,0})$ be a good factor. For each pair $i, j$ with $i, j=1,\dots, m$, $h_i(0)-h_j(0)\neq 0$, we define \textit{the Stokes line} $\mathrm{St}_{i j}(\Phi)$ by \begin{align*} \mathrm{St}_{i j}(\Phi) \coloneqq \left\{(\theta_u, \theta_v)\in {{T}}\middle| {\mathrm{Re}}\(e^{-{\sqrt{-1}}(\theta_u+\theta_v)}(h_i(0)-h_j(0))\)=0\right\}. \end{align*} For a pair $i, j$ with $h_i(0)=h_j(0)$ and $n_i-n_j\neq 0$, we set \begin{align*} \mathrm{St}_{i j}(\Phi)\coloneqq \{(\theta_u, \theta_v)\in {{T}}\mid \theta_u=\pm \pi/2 \}. \end{align*} For each $i=1,\dots,m$, we set \begin{align*} {\mathrm{St}}_{ii}(\Phi)\coloneqq \{(\theta_u, \theta_v)\in {{T}}\mid \theta_u= 0 \text{ or }\pi \}. \end{align*} \end{definition} \begin{remark} For two sections $\varphi, \psi\in {\mathscr{Q}}(V)$ on an open subset $V\subset T$, we set \begin{align*} V_{\varphi\leqslant \psi}\coloneqq \{t\in V\mid \varphi_t \leqslant_t \psi_t \}, \end{align*} which is an open subset of $V$. Let ${\mathrm{St}}(\varphi, \psi)$ denote the boundary of $V_{\varphi\leqslant \psi}$ in $V$. Assume that $\varphi$ and $\psi$ are sections of $\Phi_{n_i, h_i(v)}$ and $\Phi_{n_j, h_j(v)}$ for a good factor $\Phi=\bigsqcup_{j=1}^m \Phi_{n_j, h_j(v)}$, respectively. Then, we have \begin{align*} {\mathrm{St}}(\varphi, \psi)={\mathrm{St}}_{i, j}(\Phi)\cap V. \end{align*} Indeed, if $h_i(0)\neq h_j(0)$, then the equality holds because $\lim_{ v\to 0} v\log v=0 $. If $h_i(0)=h_j(0)$ and $i\neq j$, then we must have $n_{i j}\coloneqq n_i-n_j\neq 0$ by the goodness of $\Phi$. The following expression implies the equality: \begin{align*} \varphi-\psi=r_u^{-1} e^{-{\sqrt{-1}}\theta_u}(n_{ij}\log r_ v+n_{ij}{\sqrt{-1}}\theta_ v +\gamma( v)) \end{align*} where $\gamma( v)\in {\mathscr{O}}_{\C,0}$. If $i=j$, then we have $\varphi-\psi=u^{-1}2\pi {\sqrt{-1}} k$ for some $k\in\mathbb{Z}$, which also implies ${\mathrm{St}}(\varphi, \psi)={\mathrm{St}}_{i, j}(\Phi)\cap V$. In this sense, ${\mathrm{St}}_{i, j}(\Phi)$ describes the Stokes lines for sections of $\Phi$. \end{remark} \subsection{Pre-Stokes filtrations}\label{2.2} Let ${\tau}\colon {\mathscr{Q}}^{\acute{e}t}\to T$ denote the \'etale space of ${\mathscr{Q}}$. Note that ${\mathscr{Q}}^{\acute{e}t}$ is a Hausdorff space. For $m\in \mathbb{Z}$, let $\rho(m)$ denote the class in ${\mathscr{Q}}(T)$ represented by $2\pi{\sqrt{-1}} m u^{-1}$.
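Note that, for $q=\exp(2\pi{\sqrt{-1}}u^{-1})$ (cf. Example \ref{GAMMA FUN} below), the class $\rho(m)$ is represented by $\log(q^m)$; condition (2) in the following definition encodes the compatibility of the filtration with the action of $\k[q^{\pm 1}]$ mentioned in the introduction.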
\begin{definition}\label{pre-q} Let $\k$ be a field. Let $\L$ be a sheaf of $\k[q^{\pm 1}]$-modules on $T$. A \textit{pre-Stokes filtration} on $\L$ is a subsheaf $\L_{\leqslant}\subset {\tau}^{-1}\L$ of $\k$-vector spaces satisfying the following conditions: \begin{enumerate} \item For each $\bm{\theta}\in T$ and $\psi_{\bm{\theta}}, \varphi_{\bm{\theta}}\in {\tau}^{-1}(\bm{\theta})$ with $\psi_{\bm{\theta}}\leqslant_{\bm{\theta}}\varphi_{\bm{\theta}}$, we have \begin{align*} \L_{\leqslant \psi_{\bm{\theta}}}\subset\L_{\leqslant \varphi_{\bm{\theta}}} \end{align*} as subsets in $({\tau}^{-1}\L)_{\psi_{\bm{\theta}}}=({\tau}^{-1}\L)_{\varphi_{\bm{\theta}}}=\L_{\bm{\theta}}$. \item For each $\bm{\theta}\in T$, $\varphi_{\bm{\theta}}\in {\tau}^{-1}(\bm{\theta})$, and $m\in \mathbb{Z}$, we have \begin{align*} q^m\cdot \L_{\leqslant \varphi_{\bm{\theta}}}= \L_{\leqslant \varphi_{\bm{\theta}}+\rho(m)} \end{align*} also as subsets in $\L_{\bm{\theta}}$. The action of $q^m$ on the left hand side comes from the $\k[q^{\pm 1}]$-module structure on $\L_{\bm{\theta}}$. \end{enumerate} \end{definition} Since ${\mathscr{Q}}^{\acute{e}t}$ is a Hausdorff space, there exists a unique subsheaf $\L_{<}$ of $\L_{\leqslant}$ such that, for any $\bm{\theta}\in T$ and $\varphi_{\bm{\theta}}\in {\mathscr{Q}}_{\bm \theta}$, we have $\L_{<\varphi_{\bm\theta}}=\sum_{\psi_{\bm\theta}<_{\bm\theta}\varphi_{\bm\theta}}\L_{\leqslant \psi_{\bm\theta}}$. \begin{definition}[c.f. {\cite[Definition 1.34]{Sabbah}}]\label{GR} For a pre-Stokes filtration $ \L_\leqslant$ on a sheaf $\L$ of $\k[q^{\pm 1}]$-modules, let ${\mathrm{gr}}\L$ denote the quotient sheaf $\L_{\leqslant}/\L_{<}$. For a point $\varphi\in {\mathscr{Q}}^{\acute{e}t}$, the stalk of ${\mathrm{gr}}\L$ at $\varphi$ is denoted by ${\mathrm{gr}}_\varphi\L$. \end{definition} By condition (2) in Definition \ref{pre-q}, the proper push-forward ${\tau_!}{\mathrm{gr}}\L$ is naturally equipped with the structure of a sheaf of $\k[q^{\pm 1}]$-modules. For each point $\bm{\theta}\in {{T}}$, the action of $\k[q^{\pm 1}]$ on $({\tau_!}{\mathrm{gr}}\L)_{\bm\theta}$ is described as follows: we have \begin{align*} ({\tau_!}{\mathrm{gr}}\L)_{\bm{\theta}}=\bigoplus_{\varphi_{\bm\theta}\in{\tau}^{-1}(\bm\theta)} {\mathrm{gr}}_{\varphi_{\bm\theta}}\L. \end{align*} By condition (2) in Definition \ref{pre-q}, the action $q^m\colon \L_{\leqslant \varphi_{\bm\theta}}\to \L_{\leqslant \varphi_{\bm\theta}+\rho(m)}$ induces $q^m\colon {\mathrm{gr}}_{\varphi_{\bm{\theta}}}\L\to {\mathrm{gr}}_{\varphi_{\bm\theta}+\rho(m)}\L$ for $m\in \mathbb{Z}$. A discussion similar to that in \cite[Example 1.35]{Sabbah} shows that ${\tau_!}{\mathrm{gr}}\L$ is naturally equipped with a pre-Stokes filtration $({\tau_!}{\mathrm{gr}}\L)_{\leqslant}$. We have \begin{align*} ({\tau_!}{\mathrm{gr}}\L)_{\leqslant \varphi_{\bm\theta}}=\bigoplus_{\psi_{\bm\theta}\leqslant_{\bm\theta}\varphi_{\bm\theta}} {\mathrm{gr}}_{\psi_{\bm\theta}}\L \end{align*} at each point $\varphi_{\bm\theta}\in {\tau}^{-1}(\bm\theta)$, $(\bm\theta\in {{T}})$. We have the identification ${\mathrm{gr}}({\tau_!}{\mathrm{gr}}\L)={\mathrm{gr}} \L$. \begin{definition}\label{q-covering} Let $\L_\leqslant$ be a pre-Stokes filtration on a sheaf $\L$ of $\k[q^{\pm 1}]$-modules on $T$. Let $\Phi(\L,\L_{\leqslant})$ denote the support of ${\mathrm{gr}}\L$ on ${\mathscr{Q}}^{\acute{e}t}$. We call $\Phi(\L,\L_{\leqslant})$ the \textit{exponential factor} of the pair $(\L, \L_{\leqslant})$.
\end{definition} Fix a finite disjoint union $\Phi=\bigsqcup_{j=1}^m \Phi_j$ of $\Phi_j=\Phi_{n_j, h_j(v)}$ with $n_j\in \mathbb{Z}$, $h_j(v)\in {\mathscr{O}}_{\C,0}$. Let $(\L,\L_\leqslant)$ be a pre-Stokes filtered sheaf of $\k[q^{\pm 1}]$-modules with $\Phi(\L,\L_\leqslant)\subset \Phi$. For each $j$, put \begin{align*} &\L_{<\Phi_{j}}(V)\coloneqq \bigcap_{\varphi\in \Gamma(V,\Phi_{j})}\L_{\leqslant \varphi}(V), &\L_{\leqslant \Phi_{j}}(V)\coloneqq \sum_{\varphi\in \Gamma(V,\Phi_{j})}\L_{\leqslant \varphi}(V) \end{align*} for contractible open subsets $V\subset T$. We then obtain the family of sub-sheaves $\L_{<\Phi_{j}}\subset \L_{\leqslant\Phi_{j}}\subset \L$ of $\k[q^{\pm 1}]$-modules, which we will call \textit{coarse filtrations}. We then put \begin{align*} {\mathrm{Gr}}_{\Phi_{j}}(\L )\coloneqq \L_{\leqslant \Phi_{j}}/\L_{<\Phi_{j}}. \end{align*} The pre-Stokes filtration on $\L$ induces a pre-Stokes filtration on ${\mathrm{Gr}}_{\Phi_{j}}(\L)$. \subsection{Stokes filtrations}\label{QLS} An $\mathbb{R}$-constructible sheaf ${\mathscr{F}}$ of locally finitely generated free $\k[q^{\pm 1}]$-modules on a real analytic manifold will be called a \textit{quasi-local system} in this paper if it becomes a local system after the localization, i.e. if the tensor product \begin{align*} {\mathscr{F}}(q)\coloneqq {\mathscr{F}}\otimes_{\k[q^{\pm 1}]} \k(q) \end{align*} is a local system. Recall that $T$ is identified with the torus $\{(\theta_u,\theta_v)\mid \theta_u,\theta_v\in \mathbb{R}/2\pi \mathbb{Z}\}$. We set \begin{align*} &T_{\mathbb{R}_+}\coloneqq \{(\theta_u,\theta_v)\in T\mid e^{-{\sqrt{-1}}\theta_u}\in \mathbb{R}_{>0}\}, &&T_{\mathbb{R}_-}\coloneqq\{ e^{-{\sqrt{-1}}\theta_u}\in \mathbb{R}_{<0}\},\\ &T_+\coloneqq\{{\mathrm{Im}} (e^{-{\sqrt{-1}}\theta_u})>0\}, &&T_-\coloneqq\{{\mathrm{Im}} (e^{-{\sqrt{-1}}\theta_u})<0\}. \end{align*} Then $\Theta\coloneqq\{T_{\mathbb{R}_+}, T_{\mathbb{R}_-},T_+, T_-\}$ is a stratification of $T$. \begin{definition}\label{Q-local} By \textit{a quasi-local system on $(T,\Theta)$}, we mean a quasi-local system on $T$ which is constructible with respect to $\Theta$. In other words, a sheaf $\L$ of locally finitely generated free $\k[q^{\pm 1}]$-modules on $T$ is called a \textit{quasi-local system on $(T,\Theta)$} if it satisfies the following conditions: \begin{enumerate} \item There is a non-negative integer $r$ (\textit{the rank of $\L$}) such that the restriction $\L_{|T_\star}$ to each stratum $T_\star\in \Theta$ is a local system of free $\k[q^{\pm 1}]$-modules of rank $r$. \item For any two connected open subsets $W, V\subset T$ with $W \subset V$, the restriction map $\L(V)\to \L(W)$ is injective. \end{enumerate} \end{definition} \begin{lemma}\label{Glueing L} Let $\L$ be a quasi-local system on $(T,\Theta)$. Let $r$ be the rank of $\L$. Then, there exists a unique pair of local systems $\L^{+}$ on $T\setminus T_{\mathbb{R}_-}$ and $\L^{-}$ on $T\setminus T_{\mathbb{R}_+}$ of free $\k[q^{\pm 1}]$-modules of rank $r$ with the following properties: \begin{enumerate} \item $\L^{+}\subset \L_{|T\setminus T_{\mathbb{R}_-}}$ and $\L^-\subset \L_{|T\setminus T_{\mathbb{R}_+}}$. \item $\L_{|T_{\mathbb{R}_+}}=\L^+_{|T_{\mathbb{R}_+}}$ and $\L_{|T_{\mathbb{R}_-}}=\L^-_{|T_{\mathbb{R}_-}}$.\qed \end{enumerate} \end{lemma} \begin{definition} A quasi-local system $\L$ on $(T,\Theta)$ is called \textit{saturated} if we have $\L(V)=\L^+(V)+\L^-(V)$ for any open $V\subset T_+\cup T_-$. If we have $\L(V)=\L^+(V)$ (resp.
$\L(V)=\L^-(V)$), then $\L$ is called \textit{$+$saturated} (resp. \textit{$-$saturated}). \end{definition} Let $\L_{\leqslant}$ be a pre-Stokes filtration on a quasi-local system $\L$ on $(T,\Theta)$. Then, $\L_{\leqslant}$ induces filtrations on $\L^\pm$ in a natural way: \begin{align*} \L^\pm_{\leqslant }\coloneqq \tau_{|\tau^{-1}(T\setminus T_{\mathbb{R}_\mp})}^{-1}\L^\pm\cap \L_{\leqslant |\tau^{-1}(T\setminus T_{\mathbb{R}_\mp})}. \end{align*} We define $\L^\pm_{<}$ and ${\mathrm{gr}}\L^\pm$ in the same way as above. \begin{lemma} The natural morphisms \begin{align*} {\mathrm{gr}}\L^\pm\to {\mathrm{gr}}\L_{|\tau^{-1}(T\setminus T_{\mathbb{R}_\mp})} \end{align*} are isomorphisms. \end{lemma} \begin{proof} We shall see that the morphisms are injective and surjective on $T_+\cup T_-$ (on $T_{\mathbb{R}_\pm}$, we have $\L^\pm=\L$, so there is nothing to prove). We consider the morphism ${\mathrm{gr}}\L^+\to {\mathrm{gr}}\L$ (the case of the morphism ${\mathrm{gr}}\L^-\to {\mathrm{gr}}\L$ can be proved in a similar way). Firstly, we shall see that the morphism is injective. If a section $\mathfrak{s}\in \L_{\leqslant\varphi}^{+}$ is in $\L_{<\varphi}$, then it is contained in $\L^+_{<\varphi}$ by the definition of the filtration. This proves the injectivity. Next, we shall see that the morphism is surjective. Let $\mathfrak{s}$ be a section of $\L_{\leqslant\varphi}$ on an open subset in $T_+$. Then, there exists a polynomial $P(q)\in\k[q]$ such that $P(0)=1$ and $P(q)\mathfrak{s}\in \L^+$. By the condition (2) in Definition \ref{pre-q}, we obtain $P(q)\mathfrak{s}\in \L^+_{\leqslant \varphi}$ and ${\mathrm{gr}}_\varphi(P(q)\mathfrak{s})={\mathrm{gr}}_\varphi(\mathfrak{s})$, which proves the surjectivity. \end{proof} \begin{definition}[Stokes filtration]\label{Def q-Stokes} Let $\Phi=\bigsqcup_{j=1}^m \Phi_j$ be a finite disjoint union of $\Phi_j=\Phi_{n_j, h_j(v)}$ $(n_j\in \mathbb{Z}, h_j(v)\in {\mathscr{O}}_{\C,0})$. Let $\L$ be a saturated quasi-local system on $(T,\Theta)$. Let $\L_{\leqslant}$ be a pre-Stokes filtration on $\L$ with $\Phi(\L,\L_\leqslant)\subset \Phi$. Then, $\L_{\leqslant}$ is called a \textit{Stokes filtration} if the following conditions are satisfied: \begin{enumerate} \item ${\mathrm{gr}}\L$ is a local system of $\k$-modules on $\Phi(\L,\L_{\leqslant})$. \item For each point $\bm\theta\in T$ (resp. $\bm\theta_\pm\in T\setminus T_{\mathbb{R}_\mp}$), there exists an isomorphism \begin{align*} &\eta_{{\bm\theta}}\colon \tau_!{\mathrm{gr}}\L_{\bm\theta}\xrightarrow{\sim} \L_{\bm\theta}, &(\text{resp. } \eta_{\bm\theta_\pm}\colon \tau_!{\mathrm{gr}}\L_{\bm\theta_\pm} \xrightarrow{\sim} \L^\pm_{\bm\theta_\pm}) \end{align*} such that \begin{enumerate} \item The isomorphism $\eta_{\bm\theta}$ (resp. $\eta_{\bm\theta_\pm}$) preserves the filtration, i.e. for any germ $\varphi \in {\mathscr{Q}}_{\bm\theta}$ (resp. $\varphi_\pm\in {\mathscr{Q}}_{\bm\theta_\pm}$), it induces the morphism \begin{align*} &\eta_{\bm\theta}\colon {\mathrm{gr}}_\varphi\L\to \L_{\leqslant \varphi} &(\text{resp. } \eta_{\bm\theta_\pm}\colon {\mathrm{gr}}_{\varphi_\pm}\L \to \L^\pm_{\leqslant \varphi_\pm}) \end{align*} \item The associated morphism \begin{align*} &{\mathrm{gr}}_\varphi(\eta_{\bm\theta})\colon {\mathrm{gr}}_\varphi\L\to {\mathrm{gr}}_\varphi\L &(\text{resp. } {\mathrm{gr}}_{\varphi_\pm}(\eta_{\bm\theta_\pm}) \colon {\mathrm{gr}}_{\varphi_\pm}\L\to {\mathrm{gr}}_{\varphi_\pm}\L) \end{align*} is the identity for any $\varphi$ (resp. $\varphi_\pm$).
\end{enumerate} The morphisms $\eta_{{\bm\theta}}$ and $\eta_{{\bm\theta_\pm}}$ are called splittings of the filtrations. \item Each coarse graded part ${\mathrm{Gr}}_{\Phi_j}(\L)$ is a $+$saturated (resp. $-$saturated) quasi-local system on $(T,\Theta)$ if $n_j\in \mathbb{Z}_{\geq 0}$ (resp. $n_j\in \mathbb{Z}_{\leq 0}$). \end{enumerate} The pair $(\L, \L_{\leqslant})$ of a saturated quasi-local system $\L$ and a Stokes filtration $\L_{\leqslant}$ on it is called a \textit{Stokes filtered quasi-local system}. \end{definition} \begin{remark} The notion of Stokes filtered quasi-local systems might be defined in a more general setting, considering ramifications etc. In that case, we should call the notion defined above non-ramified or of exponential type. We will not pursue such generalizations in the present paper, and do not add such adjectives. \end{remark} \begin{definition} A Stokes filtered quasi-local system is called \textit{good} if its exponential factor $\Phi(\L,\L_{\leqslant})=\bigsqcup_{j=1}^m \Phi_{n_j, h_j(v)}$ is good. The \textit{Stokes lines} of $(\L,\L_{\leqslant})$ are defined as those of $\Phi(\L,\L_{\leqslant})$ and denoted by ${\mathrm{St}}_{i, j}(\L,\L_{\leqslant})$ $(i, j=1,\dots, m)$. \end{definition} \begin{example}\label{trivial q} Let $(\k[q^{\pm 1}]_T,\k[q^{\pm 1}]_{T\leqslant})$ be the pair of the constant sheaf $\k[q^{\pm 1}]_T$ on $T$ and the filtration $\k[q^{\pm 1}]_{T\leqslant}$ characterized by the conditions \begin{align*} \Phi(\k[q^{\pm 1}]_T,\k[q^{\pm 1}]_{T\leqslant})=[u^{-1}2\pi{\sqrt{-1}}\mathbb{Z}] \end{align*} and \begin{align*} \k[q^{\pm 1}]_{T\leqslant 0}= \begin{cases} \k[q]_{T_+} & \text{ on $T_+$}\\ \k[q^{-1}]_{T_-}&\text{ on $T_-$}\\ \k_{T_\star} q^0&\text{ on $T_{\mathbb{R}_{+}}\cup T_{\mathbb{R}_{-}}$}. \end{cases} \end{align*} Then $(\k[q^{\pm 1}]_T,\k[q^{\pm 1}]_{T\leqslant})$ is a Stokes filtered quasi-local system. \end{example} \begin{example}[Gamma function]\label{GAMMA FUN} Define a subsheaf $\L_{\Gamma}\subset \imath_T^{-1}\widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*}$ by \begin{align*} \L_{\Gamma}(V)\coloneqq \begin{cases} \C[q^{\pm 1}]u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1})& (V\cap T_{\mathbb{R}_-}= \emptyset)\\ \C[q^{\pm 1}]u^{u^{-1}}v^{u^{-1}}(1-q)\Gamma(u^{-1})& (V\cap T_{\mathbb{R}_-}\neq \emptyset) \end{cases} \end{align*} where $V$ is a small open subset of $T$, $q=\exp(2\pi{\sqrt{-1}} u^{-1})$, and $\Gamma(z)$ denotes the gamma function. Define the filtration $\L_{\Gamma,\leqslant}$ as follows: \begin{align*} &\L_{\Gamma,\leqslant \varphi}\coloneqq \L_{\Gamma}\cap \imath_T^{-1}(e^\varphi\scr{A}_{\widetilde{B}}^{\leqslant Z}) &(\varphi\in {\mathscr{Q}}). \end{align*} We also consider a subsheaf $\scr{G}_\Gamma\coloneqq\C[q^{\pm 1}]u^{1/2}v^{u^{-1}}e^{-u^{-1}} \subset \imath_T^{-1}\widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*}$, which is also equipped with the Stokes filtration in a similar way. It is easy to see that $\scr{G}_\Gamma$ is a local system and that $(\scr{G}_\Gamma, \scr{G}_{\Gamma,\leqslant})$ is a Stokes filtered quasi-local system with $\tau_!{\mathrm{gr}}\scr{G}_\Gamma=\scr{G}_\Gamma$. By the Stirling formula, we have \begin{align*} \Gamma(u^{-1}) =e^{-u^{-1}}u^{-u^{-1}+1/2}\sqrt{2\pi}\left\{1+O(u)\right\} \end{align*} when $|u|\to 0$ with $-\pi+\delta<\arg(u)<\pi-\delta$ for any $\delta>0$.
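In particular, multiplying both sides by $u^{u^{-1}}v^{u^{-1}}$, we obtain \begin{align*} u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1})=u^{1/2}v^{u^{-1}}e^{-u^{-1}}\sqrt{2\pi}\left\{1+O(u)\right\} \end{align*} on the same sector, so that the generators of $\L_\Gamma$ and $\scr{G}_\Gamma$ differ there by a factor which is bounded together with its inverse.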
It follows that we have an isomorphism \begin{align*} \eta_{T\setminus T_{\mathbb{R}_-}}\colon (\scr{G}_{\Gamma},\scr{G}_{\Gamma,\leqslant})_{|T\setminus T_{\mathbb{R}_-}}\to (\L_\Gamma,\L_{\Gamma,\leqslant})_{|T\setminus T_{\mathbb{R}_-}} \end{align*} defined as \begin{align*} \eta_{T\setminus T_{\mathbb{R}_-}}\(u^{1/2}v^{u^{-1}}e^{-u^{-1}}\sqrt{2\pi}\) =u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1}), \end{align*} where the branches of $\log u$ and $\log v$ are defined in a suitable way. By the reflection formula for the gamma function, we have \begin{align*} (1-q)\Gamma(u^{-1})=2\pi{\sqrt{-1}}\frac{e^{\pi{\sqrt{-1}} u^{-1}}}{\Gamma(1-u^{-1})} \end{align*} and hence \begin{align*} (1-q)u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1})= u^{1/2}e^{-u^{-1}}v^{u^{-1}}\sqrt{2\pi}\{1+O(u)\} \end{align*} when $|u|\to 0$ with $\delta<\arg(u)<2\pi-\delta$. It then also follows that we have an isomorphism \begin{align*} \eta^-_{T\setminus T_{\mathbb{R}_+}}\colon (\scr{G}_{\Gamma},\scr{G}_{\Gamma,\leqslant})_{|T\setminus T_{\mathbb{R}_+}}\to (\L_\Gamma^-,\L_{\Gamma,\leqslant}^-) \end{align*} defined as \begin{align*} \eta^-_{T\setminus T_{\mathbb{R}_+}}(u^{1/2}v^{u^{-1}}e^{-u^{-1}}\sqrt{2\pi}) \coloneqq (1-q)u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1}) \end{align*} where the branches of $\log u$ and $\log v$ are defined in a suitable way. In conclusion, $(\L_{\Gamma},\L_{\Gamma,\leqslant})$ is a Stokes filtered quasi-local system such that \begin{align*} (\tau_!{\mathrm{gr}}\L_{\Gamma},(\tau_!{\mathrm{gr}}\L_{\Gamma})_{\leqslant})\simeq (\scr{G}_{\Gamma},\scr{G}_{\Gamma,\leqslant}) \end{align*} and $\Phi(\L_{\Gamma},\L_{\Gamma,\leqslant})=\left[u^{-1}(\log v-1+2\pi{\sqrt{-1}}\mathbb{Z})\right]$ hold. \end{example} \subsection{Strictness and Abelianity--Statement} Let $(\L,\L_\leqslant)$ and $(\L',\L'_{\leqslant})$ be Stokes filtered quasi-local systems. \begin{definition} \textit{A morphism $\xi\colon (\L,\L_\leqslant)\to (\L',\L'_{\leqslant})$ of Stokes filtered quasi-local systems} is a morphism $\xi\colon \L\to \L'$ of sheaves of $\k[q^{\pm 1}]$-modules such that the pull back $\tau^{-1}\xi$ induces the morphism $\tau^{-1}\xi\colon \L_{\leqslant}\to \tau^{-1}\L'_{\leqslant}.$ The category of Stokes filtered quasi-local systems is denoted by $\bm{{\mathrm{St}}}^q$. \end{definition} \begin{definition} Let $\xi\colon (\L,\L_\leqslant)\to(\L',\L'_\leqslant)$ be a morphism in $\bm{{\mathrm{St}}}^q$. $\xi$ is called \textit{strict} if it satisfies $\xi(\L_{\leqslant \varphi})=\xi(\L_{\bm\theta})\cap \L'_{\leqslant\varphi}$ for any ${\bm\theta}\in T$ and $\varphi\in \tau^{-1}(\bm{\theta})$. \end{definition} Let $\Phi$ be a good factor. Let $\bm{{\mathrm{St}}}^q_\Phi$ denote the full sub-category of $\bm{{\mathrm{St}}}^q$ whose objects $(\L,\L_\leqslant)$ satisfy $\Phi(\L,\L_\leqslant)\subset \Phi$. The following is the main theorem of this section: \begin{theorem}\label{STRICTNESS} $\bm{{\mathrm{St}}}^q_\Phi$ is an abelian category and every morphism in $\bm{{\mathrm{St}}}^q_\Phi$ is strict. \end{theorem} The proof will be finished in \S \ref{end of the strictness}. \begin{lemma}\label{first vanishing} Let $I$ be an open interval. Let $A_+$ and $A_-$ be principal ideal domains. Take $t\in I$ and let $I_+$ and $I_-$ be the connected components of $I\setminus\{t \}$. Let ${\mathscr{F}}$ be a sheaf of modules such that the restrictions ${\mathscr{F}}_{|I_+}$ and ${\mathscr{F}}_{|I_-}$ are local systems of finite rank free modules over $A_+$ and $A_-$, respectively. Then we have $H^1(I, {\mathscr{F}})=0$.
\end{lemma} \begin{proof} By the same discussion as \cite[Lemma 2.8]{Sabbah}, it reduces to showing that for an inclusion $\iota\colon (a, b)\hookrightarrow (a, b]$ with $a, b\in\mathbb{R}$, $a<b$ and a principal ideal domain $A$, we have \begin{align*} H^1\((a, b], \iota_!A_{(a, b)}\)=0. \end{align*} This is well known. \end{proof} For a given $\bm\theta=(\theta_u,\theta_v)\in T$ and $a\in \mathbb{Z}$, take a circle \begin{align*} \ell_a^{\bm\theta}\colon S^1\to T,\quad e^{{\sqrt{-1}}\theta}\mapsto (\theta_u+\theta,\theta_v+a\theta). \end{align*} The image $\ell_a^{\bm\theta}(S^1)$ is also denoted by $\ell_a^{\bm\theta}$. We first assume that $\Phi(\L,\L_\leqslant)=\Phi_{n, h(v)}$ for some $n\in \mathbb{Z}$ and $h(v)\in{\mathscr{O}}_{\C,0}$. \begin{lemma}\label{Split 1} Let $(\L, \L_\leqslant)$ be an object in $\bm{\mathrm{St}}_{\Phi}^q$ with $\Phi= \Phi_{n, h(v)}$. Let $I\subset \ell^{\bm\theta}_a$ be an open interval such that $\mathrm{card}(I\cap (T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-}))\leq 1$. Then, there exists a splitting \begin{align*} \eta_I\colon H^0(I, \tau_!{\mathrm{gr}}\L)\xrightarrow{\sim} H^0(I, \L) \end{align*} of the filtration. If $\mathrm{card}(I\cap (T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-}))=1$, then the splitting is unique. \end{lemma} \begin{proof} We first note that $\Phi$ is trivial over $I$. The existence of such a splitting follows from Definition \ref{Def q-Stokes} and Definition \ref{Q-local}. We shall prove the uniqueness and give another proof of the existence using Lemma \ref{first vanishing}. Consider the exact sequence \begin{align*} 0\longrightarrow \L_{<\varphi}\longrightarrow \L_{\leqslant \varphi}\longrightarrow {\mathrm{gr}}_\varphi\L \longrightarrow 0 \end{align*} for $\varphi\in \Gamma(I,\Phi)$. Taking the sections over $I$, we obtain the long exact sequence \begin{align*} 0\longrightarrow H^0(I,\L_{<\varphi})\to H^0(I,\L_{\leqslant\varphi })\to H^0(I,{\mathrm{gr}}_\varphi\L) \to H^1(I, \L_{<\varphi})\to\cdots. \end{align*} Put $I_{\pm}\coloneqq I\cap T_{\pm}$. Then $\L_{<\varphi|I_{+}}$ (resp. $\L_{<\varphi|I_{-}}$) is a local system of free $\k[q]$-modules (resp. $\k[q^{-1}]$-modules) of finite rank. Hence $H^1(I, \L_{<\varphi})=0$ by Lemma \ref{first vanishing}. We also have $H^0(I, \L_{<\varphi})=0$ when $I\cap (T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-})\neq \emptyset$ since $\Phi=\Phi_{n,h(v)}$. Therefore, the morphism $ H^0(I,\L_{\leqslant\varphi })\to H^0(I,{\mathrm{gr}}_\varphi\L)$ is an isomorphism. This fact implies the existence and the uniqueness of the splitting. \end{proof} \begin{lemma}\label{m=1} Theorem $\ref{STRICTNESS}$ holds when $\Phi=\Phi_{n, h(v)}$ for $n\in\mathbb{Z}$ and $h(v)\in {\mathscr{O}}_{\C,0}$. \end{lemma} \begin{proof} Let $\lambda\colon (\L,\L_{\leqslant})\to (\L',\L'_{\leqslant})$ be a morphism in $\bm{\mathrm{St}}_\Phi^q$. For each point $\bm\theta\in T$, there exists an open interval $I\subset \ell_0^{\bm\theta}(S^1)$ such that (i) $\bm\theta\in I$, (ii) the restriction maps \begin{align*} \Gamma(I,\L)\to \L_{\bm\theta},\quad \Gamma(I,\L')\to \L'_{\bm\theta} \end{align*} are isomorphisms, and (iii) $I\cap (T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-})$ consists of exactly one point (we have used the condition (3) in Definition \ref{Def q-Stokes}).
Then we have the following diagram by Lemma \ref{Split 1}: \begin{align*} \xymatrix{ \tau_!{\mathrm{gr}}\L_{\bm\theta}\ar@{.>}[d] &\ar[l]\Gamma(I,\tau_!{\mathrm{gr}}\L)\ar[r]^{\eta_I}\ar@{.>}[d] &\Gamma(I,\L)\ar[d]^{\Gamma(I,\lambda)}\ar[r] &\L_{\bm\theta}\ar[d]^{\lambda_{\bm\theta}}\\ \tau_!{\mathrm{gr}}\L'_{\bm\theta} &\ar[l]\Gamma(I,\tau_!{\mathrm{gr}}\L')\ar[r]^{\eta'_I} &\Gamma(I,\L')\ar[r]&\L'_{\bm\theta} } \end{align*} where $\eta_I$ and $\eta'_I$ denote the unique splittings and the dotted arrows denote the induced maps. Then, all morphisms preserve the filtrations, and by the condition (iii) above, we obtain that the dotted arrows coincide with \begin{align*} \Gamma(I, \tau_!{\mathrm{gr}}(\lambda))\colon \Gamma(I,\tau_!{\mathrm{gr}}\L)\to \Gamma(I,\tau_!{\mathrm{gr}}\L'), \text{ and } \tau_!{\mathrm{gr}}(\lambda)_{\bm\theta}\colon \tau_!{\mathrm{gr}}\L_{\bm\theta}\to \tau_!{\mathrm{gr}}\L'_{\bm\theta}. \end{align*} In other words, $\lambda_{\bm\theta}$ is diagonalized, and hence we obtain the strictness. It also follows that we have \begin{align*} \mathrm{Cok}(\lambda_{\bm\theta})\simeq \mathrm{Cok}({\mathrm{gr}}_\varphi(\lambda))\otimes_\k \k[q^{\pm 1}] \end{align*} for any $\varphi\in \Phi_{\bm\theta}$. This implies that $\mathrm{Cok}(\lambda)$ is again a saturated quasi-local system which satisfies (3) in Definition \ref{Def q-Stokes}. Then the proof of abelianity is the same as in the case of usual Stokes structures. \end{proof} \subsection{Strictness and Abelianity--Proof}\label{end of the strictness} We shall consider the general case of Theorem \ref{STRICTNESS}. For a good factor $\Phi=\bigsqcup_{j=1}^m\Phi_{n_j, h_j(v)}$, we put $\Phi_j\coloneqq \Phi_{n_j, h_j(v)}$ and \begin{align*} {\mathrm{Gr}}_{\Phi}(\L)\coloneqq \bigoplus_{j=1}^m{\mathrm{Gr}}_{\Phi_j}(\L) \end{align*} for an object $(\L,\L_{\leqslant})\in \bm{\mathrm{St}}_{\Phi}^q$. Note that ${\mathrm{Gr}}_{\Phi_j}(\L)\in \bm{\mathrm{St}}_{\Phi_j}^q$, ${\mathrm{Gr}}_{\Phi}(\L)\in \bm{\mathrm{St}}_{\Phi}^q$, and ${\mathrm{gr}}({\mathrm{Gr}}_{\Phi}(\L))={\mathrm{gr}}\L$. By the condition (2) in Definition \ref{Def q-Stokes}, there is a filtered isomorphism \begin{align*} \xi_{\bm\theta}\colon {\mathrm{Gr}}_{\Phi}(\L)_{\bm\theta}\xrightarrow{\sim} \L_{\bm\theta} \end{align*} such that ${\mathrm{Gr}}_{\Phi_j}(\xi_{\bm\theta})$ is the identity. \begin{lemma}\label{key lemma1} If an interval $I\subset \ell_a^{\bm\theta}(S^1)$ satisfies the conditions that \begin{itemize} \item $I\cap {\mathrm{St}}_{i, j}(\L)$ contains at most one point for any $i\neq j$, \item $I\cap (T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-})=\emptyset$, \end{itemize} then there exists a splitting of the coarse filtration \begin{align*} \xi_I\colon \Gamma\(I, {\mathrm{Gr}}_{\Phi}(\L)\)\xrightarrow{\sim} \Gamma(I,\L), \end{align*} i.e. $\xi_I$ preserves the coarse filtrations and the induced morphism \begin{align*} {\mathrm{Gr}}_{\Phi}(\xi_I)\colon \Gamma(I, {\mathrm{Gr}}_{\Phi}(\L))\to\Gamma(I, {\mathrm{Gr}}_{\Phi}(\L)) \end{align*} is the identity. Moreover, if $I\cap {\mathrm{St}}_{i, j}(\L)$ contains exactly one point for any $i\neq j$, then such $\xi_{I}$ is unique and in that case $\xi_{I}$ also preserves the Stokes filtration. \end{lemma} \begin{proof} We first show that the first claim of this lemma (existence of the splitting of the coarse filtrations) is equivalent to the condition that $H^1(I, \L_{<\Phi_j})=0$ for every $j=1,\dots, m$.
Consider the exact sequence \begin{align*} 0\longrightarrow \L_{<{\Phi_j}}\longrightarrow \L_{\leqslant {\Phi_j}}\longrightarrow {\mathrm{Gr}}_{\Phi_j}\L\longrightarrow 0. \end{align*} Taking the sections over $I$, we obtain the long exact sequence \begin{align*} \cdots \to H^0(I, \L_{<{\Phi_j}}) \to H^0(I, \L_{\leqslant {\Phi_j}})\to H^0(I, {\mathrm{Gr}}_{\Phi_j} \L)\to H^1(I, \L_{<{\Phi_j}})\to \cdots. \end{align*} Then $H^1(I, \L_{<\Phi_j})=0$ if and only if the morphism $H^0(I, \L_{\leqslant \Phi_j})\to H^0(I, {\mathrm{Gr}}_{\Phi_j} \L)$ is surjective. Let $e_1^{(j)},\dots, e_{r_j}^{(j)}$ be a free basis of $H^0(I, {\mathrm{Gr}}_{\Phi_j} \L)$. In other words, we have $H^0(I, {\mathrm{Gr}}_{\Phi_j} \L)=\bigoplus_{k=1}^{r_j}\k[q^{\pm 1}]e_k^{(j)}$. If $H^1(I, \L_{<\Phi_j})=0$, there exist sections $\widetilde{e}^{(j)}_1,\dots, \widetilde{e}^{(j)}_{r_j}\in H^0(I, \L_{\leqslant \Phi_j})$ such that $\widetilde{e}^{(j)}_k$ maps to ${e}^{(j)}_k$ for every $k=1,\dots, r_j$. Then we obtain the coarse splitting $\xi_I\colon \Gamma(I, {\mathrm{Gr}}_\Phi(\L))\to \Gamma(I,\L)$ defined as \begin{align*} \xi_I({e}^{(j)}_k)=\widetilde{e}^{(j)}_k. \end{align*} To see that $\xi_I$ is an isomorphism, we take a point $\bm{\theta}\in I\setminus \bigcup_{i\neq j}{\mathrm{St}}_{i, j}(\L)$. Then the morphism \[{\mathrm{Gr}}_\Phi(\L)_{\bm\theta}\to {\mathrm{Gr}}_\Phi(\L)_{\bm\theta}\] given by \begin{align*} {\mathrm{Gr}}_\Phi(\L)_{\bm\theta}\xleftarrow{\sim} \Gamma(I,{\mathrm{Gr}}_{\Phi}(\L))\xrightarrow{\xi_I} \Gamma(I,\L)\xleftarrow{\sim} \L_{\bm\theta} \xrightarrow{\xi_{\bm\theta}^{-1}} {\mathrm{Gr}}_\Phi(\L)_{\bm{\theta}} \end{align*} is upper triangular with identity diagonal with respect to the decomposition ${\mathrm{Gr}}_{\Phi}(\L)=\bigoplus_j {\mathrm{Gr}}_{\Phi_j}\L$ and the total order on $\{\Phi_j\}_j$ at $\bm{\theta}$, hence is an isomorphism. Conversely, if we have a coarse splitting $\xi_I$, then the sections $\widetilde{e}^{(j)}_k\coloneqq\xi_I({e}^{(j)}_k)$ are lifts of the ${e}^{(j)}_k$, so the morphism $H^0(I, \L_{\leqslant \Phi_j})\to H^0(I, {\mathrm{Gr}}_{\Phi_j} \L)$ is surjective. We shall prove the claim by induction on the number $N$ of elements in the set $I\cap \(\bigcup_{i\neq j}{\mathrm{St}}_{i j}(\L)\).$ If $N=1$, we have $H^1(I, \L_{<\Phi_j})=0$ by Lemma \ref{first vanishing}. We then consider the case $N>1$. Take a connected component $I_0$ of $I\setminus \(\bigcup_{i\neq j}{\mathrm{St}}_{i j}(\L)\)$ such that the closure of $I_0$ in $S^1$ is contained in $I$. Take the covering $I=I_1\cup I_2$ such that each $I_\ell$ is the image of an open interval in $S^1$ and $I_1\cap I_2=I_0$. The boundary $\partial I_0$ consists of two points $\{i_1, i_2\}$ such that $i_1\in I_1$ and $i_2\in I_2$. By the induction assumption, $H^1(I_\ell,\L_{<{\Phi_j}})=0$ for every $j=1,\dots, m$ and $\ell=1,2$. Hence we may use the covering $I=I_1\cup I_2$ to compute $H^1(I, \L_{<{\Phi_j}})$ by \v Cech cohomology. Then, what we need to show is that the morphism \begin{align*} \delta_{\Phi_j}\colon H^0(I_1, \L_{<{\Phi_j}})\oplus H^0(I_2,\L_{<{\Phi_j}})\longrightarrow H^0(I_0, \L_{<{\Phi_j}}) \end{align*} defined by $\delta_{\Phi_j}(v_1, v_2)=v_{1|I_0}-v_{2|I_0}$ is surjective. By the induction assumption, we have the coarse splittings \begin{align*} \xi_{I_\ell}\colon \Gamma(I_\ell, {\mathrm{Gr}}_{\Phi}(\L)) \xrightarrow{\sim} \Gamma(I_\ell,\L) \end{align*} for $\ell=0,1,2$.
By these splittings, $H^0(I_\ell, \L_{<\Phi_j})$ is identified with \begin{align*} \bigoplus_{\Phi_i<_{I_\ell}\Phi_j}H^0(I_\ell, {\mathrm{Gr}}_{\Phi_i}\L) \end{align*} for each $\ell$, where $\Phi_i<_{I_\ell}\Phi_j$ denotes the condition that $\varphi_i<_{I_\ell}\varphi_j$ for any sections $\varphi_i\in \Gamma(I_\ell, \Phi_i)$ and $\varphi_j\in \Gamma(I_\ell, \Phi_j)$. Under this identification, $\delta_{\Phi_j}$ is translated into a morphism \begin{align}\label{F delta} \bigoplus_{\Phi_i<_{I_1}\Phi_j}H^0(I_1, {\mathrm{Gr}}_{\Phi_i}\L) \oplus \bigoplus_{\Phi_k<_{I_2}\Phi_j}H^0(I_2, {\mathrm{Gr}}_{\Phi_k}\L)\to \bigoplus_{\Phi_m <_{I_0}\Phi_j} H^0(I_0, {\mathrm{Gr}}_{\Phi_m}\L). \end{align} The component \begin{align*} H^0(I_\ell, {\mathrm{Gr}}_{\Phi_i}\L)\to H^0(I_0, {\mathrm{Gr}}_{\Phi_i}\L)\quad (\ell=1,2,\ \Phi_i <_{I_\ell}\Phi_j) \end{align*} of (\ref{F delta}) coincides with $(-1)^{\ell-1}$ times the restriction map, and the component \begin{align*} H^0(I_\ell, {\mathrm{Gr}}_{\Phi_i}\L)\to H^0(I_0, {\mathrm{Gr}}_{\Phi_k}\L)\quad (\ell=1,2,\ \Phi_i <_{I_\ell}\Phi_k<_{I_\ell}\Phi_j) \end{align*} of (\ref{F delta}) is zero. Since $I\cap {\mathrm{St}}_{k, m}(\L)$ contains at most one point for $k\neq m$ by the assumption of the claim, $\Phi_k<_{I_0}\Phi_m$ if and only if either $\Phi_k<_{I_1}\Phi_m$ or $\Phi_k<_{I_2}\Phi_m$. Therefore, the morphism (\ref{F delta}), and hence $\delta_{\Phi_j}$, is surjective. When $I\cap {\mathrm{St}}_{i,j}(\L)$ consists of exactly one point for any $i\neq j$, we have $H^0(I,\L_{<\Phi_j})=0$ for $j=1,\dots, m$. It follows that $H^0(I,\L_{\leqslant\Phi_j})\to H^0(I,{\mathrm{Gr}}_{\Phi_j}\L)$ is an isomorphism, which proves the uniqueness. We also note that this isomorphism preserves the Stokes filtration, and so does the inverse. \end{proof} Recall that $\L^+$ and $\L^-$ are the local systems of free $\k[q^{\pm 1}]$-modules on $T\setminus T_{\mathbb{R}_-}$ and $T\setminus T_{\mathbb{R}_+}$, respectively (see Lemma \ref{Glueing L}). Then, the Stokes filtration on $\L$ induces filtrations on $\L^+$ and $\L^-$. We may also define the coarse filtrations on $\L^+$ and $\L^-$ in a natural way. The coarse gradings ${\mathrm{Gr}}_{\Phi}(\L^+)$ (resp. ${\mathrm{Gr}}_{\Phi}(\L^-)$) are local systems on $T\setminus T_{\mathbb{R}_-}$ (resp. $T\setminus T_{\mathbb{R}_+}$), and there exists a splitting \begin{align*} &\xi_{\bm\theta}^+\colon {\mathrm{Gr}}_{\Phi}(\L^+)_{\bm\theta}\xrightarrow{\sim} \L^+_{\bm\theta} &(\text{resp. }\xi_{\bm\theta}^-\colon {\mathrm{Gr}}_{\Phi}(\L^-)_{\bm\theta}\xrightarrow{\sim} \L^-_{\bm\theta}) \end{align*} for any $\bm\theta\in T\setminus T_{\mathbb{R}_-}$ (resp. $\bm\theta\in T\setminus T_{\mathbb{R}_+}$) by the condition (2) in Definition \ref{Def q-Stokes}. The proof of the following lemma is essentially the same as that of Lemma \ref{key lemma1}: \begin{lemma}\label{key lemma2} If an interval $I\subset \ell_a^{\bm\theta}(S^1)$ satisfies the conditions that \begin{itemize} \item $I\cap {\mathrm{St}}_{i, j}(\L)$ contains at most one point for any $i\neq j$, \item $I\cap T_{\mathbb{R}_-}=\emptyset$ $($resp. $I\cap T_{\mathbb{R}_+}=\emptyset$$)$, \end{itemize} then there exists a splitting of the coarse filtration \begin{align*} &\xi_I^+\colon \Gamma\(I, {\mathrm{Gr}}_{\Phi}(\L^+)\)\xrightarrow{\sim} \Gamma(I,\L^+), &(\text{resp. }\xi_I^- \colon \Gamma\(I, {\mathrm{Gr}}_{\Phi}(\L^-)\)\xrightarrow{\sim} \Gamma(I,\L^-)) \end{align*} Moreover, if $I\cap {\mathrm{St}}_{i, j}(\L)$ contains exactly one point for any $i\neq j$, then such $\xi_{I}^+$ $($resp.
$\xi^-_I$$)$ also preserves the Stokes filtration.\qed \end{lemma} We now return to the proof of the main theorem of this section: \begin{proof}[Proof of Theorem $\ref{STRICTNESS}$] Let $\lambda\colon(\L,\L_{\leqslant})\to (\L',\L'_{\leqslant})$ be a morphism in $\bm{\mathrm{St}}_{\Phi}^q$. For $\bm\theta\in T\setminus (T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-})$, we take $a\in \mathbb{Z}$ and $I\subset \ell^{\bm\theta}_a(S^1)$ so that \begin{itemize} \item $I\cap {\mathrm{St}}_{i, j}(\L)$ consists of one point for any $i\neq j$, \item $I\cap (T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-})=\emptyset$. \end{itemize} Then, by a discussion similar to the proof of Lemma \ref{m=1}, using Lemma \ref{key lemma1}, we obtain the commutative diagram \begin{align}\label{Diagram} \begin{split} \xymatrix{ {\mathrm{Gr}}_{\Phi}(\L)_{\bm\theta}\ar[d]^{{\mathrm{Gr}}_{\Phi}(\lambda)_{\bm\theta}} &\ar[l]\Gamma(I,{\mathrm{Gr}}_{\Phi}(\L))\ar[r]^{\ \ \xi_{I}}\ar[d]^{\Gamma(I,{\mathrm{Gr}}_{\Phi}(\lambda))} &\Gamma(I, \L)\ar[r]\ar[d]^{\Gamma(I,\lambda)}& \L_{\bm\theta}\ar[d]^{\lambda_{\bm\theta}}\\ {\mathrm{Gr}}_{\Phi}(\L')_{\bm\theta}&\ar[l]\Gamma(I,{\mathrm{Gr}}_{\Phi}(\L'))\ar[r]^{\ \ \xi'_{I}}&\Gamma(I, \L')\ar[r]& \L'_{\bm\theta} } \end{split} \end{align} where $\xi_I$ and $\xi'_{I}$ are the unique splittings in Lemma \ref{key lemma1}, and ${\mathrm{Gr}}_{\Phi}(\lambda)$ is the coarse graded morphism associated to $\lambda$. Then (\ref{Diagram}) means that $\lambda_{\bm\theta}$ is graded with respect to the coarse filtration. Then, for each graded part ${\mathrm{Gr}}_{\Phi_j}(\lambda)$ $(j=1,\dots, m)$, we can apply the discussion in the proof of Lemma \ref{m=1} to obtain that ${\mathrm{Gr}}_{\Phi_j}(\lambda)$ is again graded with respect to the Stokes filtration, which implies the strictness of $\lambda$ at $\bm\theta$. For $\bm\theta\in T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-}$, we apply Lemma \ref{key lemma2} to obtain a diagram similar to (\ref{Diagram}), replacing $\L$ with $\L^+$ or $\L^-$. Then for each coarse graded part, we can take a splitting by the condition (2) in Definition \ref{Def q-Stokes}, whose uniqueness follows from the condition $\bm\theta\in T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-}$. Then, we obtain the strictness of $\lambda$ at ${\bm\theta}\in T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-}$ (details are left to the reader). The proof of abelianity is then given in the same way as in the case of usual Stokes structures (see also the proof of Lemma \ref{m=1}), which is left to the reader. \end{proof} \subsection{Skew symmetric quasi-duality pairing} Let $\L$ be a quasi-local system on $(T,\Theta)$. Let $\iota\colon T\to {{T}}$ be the involution defined by $\iota(\theta_u,\theta_v)=(\theta_u+\pi,\theta_v)$. We set \[\iota^*\L\coloneqq \k[q^{\pm 1}]\otimes_{\k[q^{\pm 1}]}\iota^{-1}\L\] where $\k[q^{\pm 1}]\to \k[q^{\pm 1}]$ is given by $q\mapsto q^{-1}.$ Then $\iota^*\L$ is again a quasi-local system such that $(\iota^*\L)^\pm=\iota^*(\L^\mp)\coloneqq \k[q^{\pm 1}]\otimes_{\k[q^{\pm 1}]}\iota^{-1}\L^\mp$ where the tensor product is defined in a similar way. \begin{definition} \textit{A skew symmetric quasi-duality pairing} on $\L$ is a pair of non-degenerate pairings \begin{align*} \<\cdot,\cdot\>_\pm\colon \L^\pm\otimes (\iota^*\L)^{\pm}\to \k[q^{\pm 1}]_{T\setminus T_{\mathbb{R}_\mp}} \end{align*} such that \begin{enumerate} \item $\<\cdot,\cdot\>_+=\<\cdot,\cdot\>_-$ on the mutual domain, and \item $\iota^*\<\cdot,\cdot\>_+=-\<\cdot,\cdot\>_-\circ\mathrm{ex}$ where $\mathrm{ex} $ denotes the exchange of the factors.
\end{enumerate} \end{definition} Let $\L_{\leqslant}$ be a pre-Stokes filtration on $\L$. It induces a pre-Stokes filtration $\iota^*\L_{\leqslant}$ on $\iota^*\L$ as follows: \[(\iota^*\L)_{\leqslant\varphi}\coloneqq \k 1\otimes_{\k}\iota^{-1}\L_{\leqslant \iota^*\varphi} \subset \iota^*\L\] where $\iota^*\varphi(u, v)\coloneqq \varphi(-u,v)$. \begin{definition} A skew symmetric quasi-duality pairing $\<\cdot,\cdot\>_{\pm }$ on $\L$ is compatible with $\L_{\leqslant}$ if for any $\mathfrak{s}\in\L_{\leqslant\varphi}^+$, $\mathfrak{s}'\in (\iota^*\L)_{\leqslant\varphi'}^+$, $\mathfrak{t}\in\L^-_{\leqslant\psi}$, and $\mathfrak{t}'\in (\iota^*\L)^-_{\leqslant \psi'}$ we have \begin{align*} &\<\mathfrak{s},\mathfrak{s}'\>_+\in \k[q^{\pm 1}]_{\leqslant \varphi+\varphi'}, &\<\mathfrak{t},\mathfrak{t}'\>_-\in\k[q^{\pm 1}]_{\leqslant \psi+\psi'}, \end{align*} where the filtration on $\k[q^{\pm 1}]$ is defined as in Example \ref{trivial q}. \end{definition} \begin{example}[Continuation of Example \ref{GAMMA FUN}] Let $(\L_\Gamma,\L_{\Gamma,\leqslant})$ be the Stokes filtered quasi-local system in Example \ref{GAMMA FUN}. Let $\iota\colon \widetilde{B}\to \widetilde{B}$ be the involution defined by $\iota(u, v)=(-u, v)$ on $B^*$ and extended continuously. We then define the quasi-duality pairing $\<\cdot,\cdot\>_{\Gamma\pm}$ by the multiplication divided by $2\pi{\sqrt{-1}} u$: \begin{align*} &\<u^{u^{-1}}v^{u^{-1}}\Gamma(u^{-1}),\iota^*\( u^{u^{-1}}v^{u^{-1}}(1-q)\Gamma(u^{-1})\)\>_{\Gamma +}\\ &\coloneqq (2\pi{\sqrt{-1}} u)^{-1} e^{\pi{\sqrt{-1}} u^{-1}}\Gamma(u^{-1})(1-q^{-1})\Gamma(-u^{-1})\\ &=-1, \end{align*} where we have used the reflection formula, and the branches of the multivalued functions are taken in a suitable way. The factor $(2\pi{\sqrt{-1}} u)^{-1}$ induces the skew symmetry. \end{example} \section{De Rham cohomology groups}\label{DR} \subsection{Geometric setting} Let $X$ be a compact connected Riemann surface. Let $f$ and $g$ be meromorphic functions on $X$. Assume that $g$ is not identically zero. We set $P\coloneqq f^{-1}(\infty)$, $E_0\coloneqq g^{-1}(0)\setminus P$, $E_\infty\coloneqq g^{-1}(\infty)\setminus P$ and $E\coloneqq E_0\cup E_\infty$. We also use the notations $D\coloneqq P\cup E$, $Y\coloneqq X\setminus D$ and $U\coloneqq X\setminus P$. There are inclusions $Y\subset U\subset X$. Let $S=\C^2$ denote a complex surface with coordinates $(\lambda,\mu)$. Set ${\mathcal{X}}\coloneqq S\times X$. We also use the notations $\cal{D}\coloneqq S\times D$, $\cal{P}\coloneqq S\times P$, etc. Let $\sigma\colon S\to S$ be an automorphism of $S$ defined as $\sigma(\lambda,\mu)\coloneqq (\lambda, \mu-\lambda)$. The induced automorphism on ${\mathcal{X}}$ is also denoted by $\sigma$. Set ${\mathfrak{a}}\coloneqq \lambda^2\partial_\lambda+\lambda\mu\partial_\mu$. \begin{definition}\label{MFG} Let ${\mathscr{M}}$ be a trivial ${\mathscr{O}}_{\mathcal{X}}(*\cal{D})$-module of rank one. Fix a global frame ${\bm{e}}\in H^0({\mathcal{X}},{\mathscr{M}})$, which induces an isomorphism ${\mathscr{M}}\xrightarrow{\sim} {\mathscr{O}}_{{\mathcal{X}}}(*\cal{D}), {\bm{e}}\mapsto 1$.
We consider the following operations on ${\mathscr{M}}$ associated with the pair $(f,g)$: \begin{itemize} \item \textit{A relative connection} $\nabla\colon {\mathscr{M}}\to {\mathscr{M}}\otimes_{{\mathscr{O}}_{\mathcal{X}}}\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1$ defined by \begin{align*} \nabla(h{\bm{e}})\coloneqq {\bm{e}}\otimes d_{{\mathcal{X}}/S}(h)+ h{\bm{e}}\otimes \lambda^{-1}(df-\mu g^{-1}dg) \end{align*} where $h$ is a local section of ${\mathscr{O}}_{\mathcal{X}}(*\cal{D})$ and $d=d_{{\mathcal{X}}/S}\colon {\mathscr{O}}_{\mathcal{X}}(*\cal{D})\to \Omega^1_{{\mathcal{X}}/S}(*\cal{D})$ denotes the relative differential. \item \textit{A differential operator} \begin{align*} \nabla_{\mathfrak{a}}\colon {\mathscr{M}}\to {\mathscr{M}},\quad\nabla_{\mathfrak{a}}(h{\bm{e}})=([{\mathfrak{a}}, h]-hf){\bm{e}}, \end{align*} where $[{\mathfrak{a}},\cdot]\colon {\mathscr{O}}_{\mathcal{X}}(*\cal{D})\to {\mathscr{O}}_{\mathcal{X}}(*\cal{D})$ denotes the Lie derivative along ${\mathfrak{a}}$. \item \textit{A difference operator} \begin{align*} {\mathbb{S}}\colon{\mathscr{M}}\longrightarrow \sigma_{*}{\mathscr{M}},\quad {\mathbb{S}}(h{\bm{e}})\coloneqq \sigma^\#(h)g{\bm{e}}, \end{align*} where $\sigma^\#\colon {\mathscr{O}}_{\mathcal{X}}(*\cal{D})\to\sigma_*{\mathscr{O}}_{\mathcal{X}}(*\cal{D})$ is defined by $\sigma^\#(h)=h\circ\sigma$. Note that ${\bm{e}}$ on the right hand side denotes the global section of $\sigma_*{\mathscr{M}}$. \end{itemize} In the following, ${\mathscr{M}}={\mathscr{M}}(f, g)$ will denote the module equipped with the operations $\nabla,\nabla_{\mathfrak{a}},{\mathbb{S}}$ defined above. \end{definition} Let $p_{\mathcal{X}}\colon {\mathcal{X}}\to X$ denote the projection. Then $\Omega^1_{{\mathcal{X}}/S}={\mathscr{O}}_{\mathcal{X}}\otimes_{p_{\mathcal{X}}^{-1}{\mathscr{O}}_X}p_{\mathcal{X}}^{-1}\Omega_X^1$. We have the morphisms $[{\mathfrak{a}},\cdot]\colon \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\to \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1$ and $\sigma^\#\colon \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\to \sigma_*\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1$. We then define \[\nabla_{\mathfrak{a}}^1\colon {\mathscr{M}}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1 \to {\mathscr{M}}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\] by $\nabla_{\mathfrak{a}}^1\coloneqq \nabla_{\mathfrak{a}}\otimes \mathrm{id} +\mathrm{id}\otimes [{\mathfrak{a}}, \cdot]$. We also define \[{\mathbb{S}}^1\colon {\mathscr{M}}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\to \sigma_*({\mathscr{M}}\otimes\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1)\] by ${\mathbb{S}}^1\coloneqq {\mathbb{S}}\otimes \sigma^\#$. The following lemma, which can be proved by straightforward calculations, shows that the operations $\nabla,\nabla_{\mathfrak{a}},{\mathbb{S}}$ satisfy a kind of integrability. \begin{lemma}\label{INT} $\nabla^1_{\mathfrak{a}}\circ\nabla=\nabla\circ\nabla_{\mathfrak{a}}$, ${\mathbb{S}}^1\circ\nabla=(\sigma_*\nabla)\circ {\mathbb{S}}$, and ${\mathbb{S}}\circ\nabla_{\mathfrak{a}}=(\sigma_*\nabla_{\mathfrak{a}})\circ {\mathbb{S}}$.\qed \end{lemma} \subsection{De Rham cohomology groups}\label{DR2} We consider \begin{align*} {\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}})\coloneqq\left[ {\mathscr{M}}\xrightarrow{\nabla} {\mathscr{M}}\otimes_{{\mathscr{O}}_{\mathcal{X}}}\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\right] \end{align*} as a complex placed in degrees zero and one. Let $\pi_{\mathcal{X}}\colon {\mathcal{X}}\to S$ denote the projection. \begin{definition} Let $k$ be an integer.
The $k$-th cohomology group \begin{align*} \mathbb{R}^k\pi_{{\mathcal{X}}*}{\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}}) \end{align*} of the push-forward of ${\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}})$ by $\pi_{{\mathcal{X}}}$ will be denoted by ${\mathscr{H}}_{{\rm dR}}^k$ or ${\mathscr{H}}_{\rm dR}^k(f,g)$. \end{definition} Put $S^\circ\coloneqq S\setminus \{\lambda=0\}\simeq \C^*\times \C$. The following theorem will be proved in \S \ref{PRO}. \begin{theorem}\label{dRT} Assume that $f$ is not a constant function. Then \begin{enumerate} \item ${\mathscr{H}}^k_{{\rm dR}}=0$ for $k\neq 1$, and \item ${\mathscr{H}}^1_{{\rm dR}|S^\circ}$ is a locally free ${\mathscr{O}}_{S^\circ}$-module. \end{enumerate} If we moreover assume that $E=\emptyset$, i.e. if the zeros and the poles of $g$ are contained in the poles of $f$, then ${\mathscr{H}}^1_{{\rm dR}}$ is locally free over ${\mathscr{O}}_{S}$. \end{theorem} By Lemma \ref{INT}, we have the two natural operations \begin{align*} &\nabla_{\mathfrak{a}}\colon {\mathscr{H}}^1_{{\rm dR}|S^\circ}\to {\mathscr{H}}^1_{{\rm dR}|S^\circ}, &{\mathbb{S}}\colon{\mathscr{H}}^1_{{\rm dR} |S^\circ}\to \sigma_*{\mathscr{H}}^1_{{\rm dR}|S^\circ}, \end{align*} which satisfy ${\mathbb{S}}\circ\nabla_{\mathfrak{a}}=\sigma_*\nabla_{\mathfrak{a}}\circ{\mathbb{S}}$. \begin{example}\label{Bessel} Consider the case $X=\mathbb{P}^1$, $f=z+z^{-1}$ and $g=z$, where $z$ denotes the coordinate on $\C=X\setminus\{\infty\}$. In this case, $P=\{0,\infty\}$, $E=\emptyset$ and hence $D=P$. We have \begin{align*} {\mathscr{H}}_{{\rm dR}}^1\simeq {\mathrm{Cok}}\left[{\mathscr{O}}_S[z^{\pm 1}]\xrightarrow{\nabla}\lambda^{-1}{\mathscr{O}}_S[z^{\pm 1}]dz\right] \end{align*} with $\nabla(h z^n)=h(n z^{n-1}+\lambda^{-1}(z^n-z^{n-2}-\mu z^{n-1}))dz$ for $h\in{\mathscr{O}}_S, n\in \mathbb{Z}$. Let $e_n$ denote the class represented by $\lambda^{-1}z^{n}d z$ for $n\in\mathbb{Z}$. We have ${\mathscr{H}}_{\rm dR}^1= \bigoplus_{i=0,1}{\mathscr{O}}_Se_{n+i}$ and $e_n=(\mu-n\lambda)e_{n-1}+e_{n-2}$ for each $n\in \mathbb{Z}$. The action of $(\nabla_{\mathfrak{a}},{\mathbb{S}})$ is given as follows: \begin{align*} &\nabla_{\mathfrak{a}}(e_n)=e_{n+1}+e_{n-1}-\lambda e_n, &{\mathbb{S}}(e_n)=e_{n+1}. \end{align*} \end{example} \begin{example}\label{EXG} Consider the case $X=\mathbb{P}^1$ and $f=g=z$. In this case, we have $P=\{\infty\}$, $E=\{0\}$, and $D=\{0,\infty\}$. We have \begin{align*} {\mathscr{H}}_{{\rm dR}}^1\simeq {\mathrm{Cok}}\left[{\mathscr{O}}_{S}[z^{\pm 1}]\xrightarrow{\nabla}\lambda^{-1}{\mathscr{O}}_{S}[z^{\pm 1}]dz\right] \end{align*} with $\nabla(h z^n)=h(n z^{n-1}+\lambda^{-1}(z^n-\mu z^{n-1}))dz$ for $h\in{\mathscr{O}}_S, n\in \mathbb{Z}$. Let $e_n$ denote the class represented by $\lambda^{-1}z^{n}d z$. We have the relation \[e_{n}=(\mu-n\lambda)e_{n-1}.\] For each point $s\in S^\circ$, there exists an open neighborhood ${\mathrm{nb}}(s)\subset S^\circ$ of $s$ and $n_s\in\mathbb{Z}$ such that ${\mathscr{H}}_{{\rm dR}|{\mathrm{nb}}(s)}^1={\mathscr{O}}_{{\mathrm{nb}}(s)}e_{n_s}$, which implies that ${\mathscr{H}}_{{\rm dR}|S^\circ}^1$ is locally free. The action of $(\nabla_{\mathfrak{a}},{\mathbb{S}})$ is given as follows: \begin{align*} &\nabla_{\mathfrak{a}}(e_n)=e_{n+1}-\lambda e_n, &{\mathbb{S}}(e_n)=e_{n+1}. \end{align*} \end{example} \begin{remark} The module ${\mathscr{H}}_{{\rm dR}}^1$ in Example \ref{EXG} is \textit{not} locally finitely generated over ${\mathscr{O}}_S$.
Indeed, for any point $s_0=(0,\mu_0)\in S$ with $\mu_0\in\C$, any open neighborhood ${\mathrm{nb}}(s_0)$ of $s_0$, and any $n_0\in\mathbb{Z}$, there are infinitely many $n<n_0$ such that \[\{(\lambda,\mu)\in S\mid \mu-n\lambda=0\}\cap{\mathrm{nb}}(s_0)\neq \emptyset,\] which implies that no $e_{n_0}$ can generate $ {\mathscr{H}}_{{\rm dR}}^1$ on ${\mathrm{nb}}(s_0)$. This proof also indicates that ${\mathscr{H}}^1_{{\rm dR}}(*\{\lambda\mu=0\})$ is \textit{not} finitely generated over ${\mathscr{O}}_S(*\{\lambda\mu=0\})$. \end{remark} \subsection{Submodules}\label{S lattice} Take a submodule ${\mathscr{M}}_{\bm{0}}\coloneqq {\mathscr{O}}_{\mathcal{X}}(*\cal{P}){\bm{e}}\subset {\mathscr{M}}.$ For $(a, b)\in\mathbb{Z}^2$, set \begin{align*} {\mathscr{M}}_{a, b}\coloneqq {\mathscr{M}}_{\bm{0}}(a\cal{E}_0+b\cal{E}_\infty)={\mathscr{M}}_{\bm{0}}\otimes_{{\mathscr{O}}_{\mathcal{X}}}{\mathscr{O}}_{\mathcal{X}}(a\cal{E}_0+b\cal{E}_\infty) \end{align*} which is also a submodule of ${\mathscr{M}}$ (Recall that $\cal{E}_0=S\times E_0$ and $\cal{E}_\infty=S\times E_\infty$). The operations $\nabla,\nabla_{\mathfrak{a}},{\mathbb{S}}$ induce the following morphisms: \begin{align*} &\nabla\colon{\mathscr{M}}_{a, b}\longrightarrow {\mathscr{M}}_{a+1,b+1}\otimes\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1,\\ &\nabla_{\mathfrak{a}}\colon {\mathscr{M}}_{a, b}\longrightarrow {\mathscr{M}}_{a, b},\\ &{\mathbb{S}}\colon {\mathscr{M}}_{a, b}\longrightarrow \sigma_*{\mathscr{M}}_{a-1,b+1}. \end{align*} \begin{lemma}\label{dR} For $(a, b)\in \mathbb{Z}^2$ and $n\in \mathbb{Z}$, set \[{\mathscr{M}}_{a, b}^n\coloneqq {\mathscr{O}}_{\mathcal{X}}(n(f)_\infty+a\cal{E}_0+b\cal{E}_\infty)\bm{e}\subset {\mathscr{M}}_{a, b}.\] Then, the inclusion of the complexes \begin{align*} \begin{split} \xymatrix{ {\mathscr{M}}_{a, b}^n\ar[d]\ar[r]&{\mathscr{M}}_{a+1,b+1}^{n+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1(\cal{P})\ar[d]\\ {\mathscr{M}}_{a,b}\ar[r]^{\nabla\ \ \ \ \ \ \ \ }&{\mathscr{M}}_{a+1,b+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1 } \end{split} \end{align*} is a quasi-isomorphism. \end{lemma} \begin{proof} We need to check the claim around the points in $\cal{P}=S\times P$. For a point $p\in P$, take a local coordinate $(V, z)$ centered at $p$ such that we have $f_{|V}=z^{-m_p}$ for some $m_p\in \mathbb{Z}_{>0}$. Note that we have $g^{-1}dg_{|V}=h(z)z^{-1}dz$ for some holomorphic function $h$ on $V$. Then, on $\cal{V}=S\times V$, by $\nabla$, $z^\ell\bm{e}\in {\mathscr{M}}_{a, b}$ $(\ell\in \mathbb{Z})$ maps to \begin{align*} \bm{e}\otimes\ell z^{\ell-1}dz+\bm{e}\otimes \lambda^{-1}\(-(m_p+1)z^{\ell-1-m_p}dz-\mu h(z)z^{\ell-1}dz\). \end{align*} Since $m_p>0$, the morphism \begin{align*} {\mathscr{M}}^{n+1}_{a+1, b+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1(\cal{P}) \longrightarrow \frac{{\mathscr{M}}_{a+1,b+1}\otimes \lambda^{-1}\Omega^1_{{\mathcal{X}}/S}}{\nabla ({\mathscr{M}}_{a,b})} \end{align*} is surjective. This implies the lemma. \end{proof} \begin{definition} For $a, b\in\mathbb{Z}$, let ${\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}}_{a, b})$ denote the complex \begin{align*} {\mathscr{M}}_{a, b}\xrightarrow{\nabla}{\mathscr{M}}_{a+1, b+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1 \end{align*} placed at degree zero and one. The $k$-th cohomology group of the derived push forward $\mathbb{R}\pi_{{\mathcal{X}}*}{\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}}_{a, b})$ is denoted by ${\mathscr{H}}_{{\rm dR},a, b}^k$ or ${\mathscr{H}}_{{\rm dR}, a, b}^k(f, g)$. 
\end{definition} \begin{theorem}\label{LF} Assume that $f$ is not a constant function. Then, for $(a, b)\in \mathbb{Z}^2$, \begin{enumerate} \item ${\mathscr{H}}^k_{{\rm dR}, a, b}=0$ if $k\neq 1$, and \item ${\mathscr{H}}^1_{{\rm dR}, a, b}$ is a locally free ${\mathscr{O}}_S$-module. \end{enumerate} \end{theorem} \begin{proof} For $a, b, n, \ell\in \mathbb{Z}$, we set \begin{align*} V_{a, b}^{n, \ell}&\coloneqq H^\ell\(X,{\mathscr{O}}_X(n(f)_\infty+aE_0+bE_\infty)\),\\ W_{a, b}^{n, \ell}&\coloneqq H^\ell\(X,\Omega_X^1(n(f)_\infty+aE_0+bE_\infty)\). \end{align*} Since $f$ is not constant, given $(a, b)\in \mathbb{Z}^2$, there exists $m=m(a, b)>0$ such that \[V_{a, b}^{n,\ell}=W_{a, b}^{n,\ell}=0\] for $n>m$ and $\ell\neq 0$. Then, by Lemma \ref{dR}, $\mathbb{R}\pi_{{\mathcal{X}}*}{\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}}_{a, b})$ is quasi-isomorphic to the following complex for $n>m$: \begin{align*} V_{a, b}^{n,0}\otimes {\mathscr{O}}_S\xrightarrow{d+\lambda^{-1}(df-\mu g^{-1}dg)} W_{a, b}^{n,0}\otimes \lambda^{-1}{\mathscr{O}}_S. \end{align*} For any complex numbers $(\lambda,\mu)\in \C^2$, the linear map \begin{align*} \lambda d+df-\mu g^{-1}dg\colon V_{a, b}^{n,0} \to W_{a, b}^{n,0} \end{align*} is injective, which implies the theorem. \end{proof} The operators $\nabla_{\mathfrak{a}}$ and ${\mathbb{S}}$ on ${\mathscr{M}}_{a, b}$ induce the operators \begin{align*} &\nabla_{\mathfrak{a}}\colon {\mathscr{H}}_{{\rm dR}, a, b}^1\longrightarrow {\mathscr{H}}^1_{{\rm dR}, a, b}, &{\mathbb{S}}\colon {\mathscr{H}}_{{\rm dR}, a, b}^1\longrightarrow \sigma_*{\mathscr{H}}^1_{{\rm dR}, a-1, b+1}, \end{align*} which satisfies ${\mathbb{S}}\circ\nabla_{\mathfrak{a}}=\sigma_*\nabla_{\mathfrak{a}}\circ{\mathbb{S}}$. \begin{example}[Continuation of Example \ref{EXG}]\label{Ga} In the case of Example \ref{EXG}, since $E_\infty=\emptyset$, we may omit the subscript $b$. We then obtain ${\mathscr{H}}^1_{{\rm dR},a}={\mathscr{O}}_Se_{-a-1}$. \end{example} \subsection{Proof of Theorem \ref{dRT}}\label{PRO} For $a, n\in \mathbb{Z}$, put $H_{a,n}\coloneqq \{(\lambda,\mu)\in S\mid a\lambda+n\mu=0\}$. \begin{lemma}\label{SUP} We have the following exact sequences of ${\mathscr{O}}_S$-modules: \begin{align*} &0\longrightarrow {\mathscr{H}}_{{\rm dR}, a-1,b}^1\longrightarrow {\mathscr{H}}^1_{{\rm dR}, a, b} \longrightarrow \bigoplus_{e\in E_0}{\mathscr{O}}_{H_{a,n_e}} \longrightarrow 0,\\ &0\longrightarrow {\mathscr{H}}_{{\rm dR}, a, b-1}^1\longrightarrow {\mathscr{H}}^1_{{\rm dR}, a, b}\longrightarrow \bigoplus_{e\in E_\infty}{\mathscr{O}}_{H_{b,n_e}} \longrightarrow 0. \end{align*} Here, $n_e\in \mathbb{Z}$ $(e\in E)$ denote the order of $g$ at $e$ $($$n_e>0$ for $e\in E_0$$)$. \end{lemma} \begin{proof} For $a, b\in \mathbb{Z}$, put ${\mathscr{M}}_{\overline{a}, b}\coloneqq \mathrm{Cok}[{\mathscr{M}}_{a-1,b}\hookrightarrow {\mathscr{M}}_{a,b}] $, ${\mathscr{M}}'_{a, b}\coloneqq{\mathscr{M}}_{a, b}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1$, and ${\mathscr{M}}'_{\overline{a},b}\coloneqq{\mathscr{M}}_{\overline{a}, b}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1$. 
Then we have \[{\mathscr{M}}_{\overline{a},b}\simeq {\mathscr{O}}_{\cal{E}_0} =\bigoplus_{e\in E_0}{\mathscr{O}}_{S\times\{e\}}\] and \begin{align*} \xymatrix{ 0\ar[r]& {\mathscr{M}}_{a-1, b}\ar[d]^\nabla\ar[r]&{\mathscr{M}}_{a, b}\ar[r] \ar[d]^\nabla & {\mathscr{M}}_{\overline{a}, b}\ar[d]^{\overline{\nabla}}\ar[r]&0\\ 0\ar[r]&{\mathscr{M}}'_{a, b+1}\ar[r] &{\mathscr{M}}'_{a+1, b+1}\ar[r]& {\mathscr{M}}'_{\overline{a+1}, b}\ar[r]&0 }\end{align*} whose rows are exact. We shall describe $\overline{\nabla}$. Take $e\in E_0$. We may take a local coordinate $(V, z)$ centered at $e$ such that $g_{|V}=z^{n_e}$ and $f_{|V}$ is holomorphic. Then, if we set $\cal{V}=S\times V$, we have ${\mathscr{M}}_{\overline{a}, b|\cal{V}}={\mathscr{O}}_{S\times\{e\}}[z^{-a}]$, ${\mathscr{M}}'_{\overline{a+1}, b|\cal{V}}={\mathscr{O}}_{S\times\{e\}}[z^{-a-1}]\otimes \lambda^{-1}dz$ and \begin{align*} \overline{\nabla}([z^{-a}])=-\lambda^{-1}(a\lambda+n_e\mu )[z^{-a-1}]dz \end{align*} where $[z^{-a}]$ and $[z^{-a-1}]$ are the classes represented by $z^{-a}\bm{e}$ and $z^{-a-1}{\bm{e}}$, respectively. By this expression, taking the pushing forward, we obtain the first exact sequence by Theorem \ref{LF}. We can also obtain the second exact sequence in a similar way. \end{proof} \begin{proof}[Proof of Theorem \ref{dRT}] If $E=\emptyset$, the statement directly follows from Theorem \ref{LF}. We also note that (1) is straightforward. It remains to show (2) when $E\neq \emptyset$. For each $(a, b)\in \mathbb{Z}^2$, we have the injective morphisms ${\mathscr{H}}_{{\rm dR}, a, b}^1\to{\mathscr{H}}^1_{{\rm dR}}$ which are compatible with the inclusions in Lemma \ref{SUP}. In this sense, we have \begin{align*} {\mathscr{H}}^1_{{\rm dR}}=\lim_{a, b\to\infty}{\mathscr{H}}^1_{{\rm dR}, a, b}. \end{align*} For each point $s\in S^\circ$, by Lemma \ref{SUP}, there exists an open neighborhood ${\mathrm{nb}}(s)$ where the limit terminate at a finite term. In other words, ${\mathscr{H}}_{{\rm dR}|{\mathrm{nb}}(s)}^1\simeq {\mathscr{H}}^1_{{\rm dR}, a, b|{\mathrm{nb}}(s)}$ for some $(a, b)$. By Theorem \ref{LF}, this implies the theorem. \end{proof} \subsection{Some pairings}\label{j} Let $\iota\colon S\to S$ be the involution defined by $\iota(\lambda,\mu)\coloneqq (-\lambda,\mu)$. The induced involution on ${\mathcal{X}}$ is also denoted by $\iota$. On the pull back $\iota^*{\mathscr{M}}={\mathscr{O}}_{\mathcal{X}}\otimes_{\iota^{-1}{\mathscr{O}}_{\mathcal{X}}} \iota^{-1}{\mathscr{M}}$, we naturally obtain the operators \begin{align*} &\iota^*\nabla\colon \iota^*{\mathscr{M}}\longrightarrow \iota^*{\mathscr{M}}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1,\\ &\quad\iota^*\nabla(h\iota^*{\bm{e}})\coloneqq \iota^*{\bm{e}}\otimes d_{{\mathcal{X}}/S}(h)-h\iota^*{\bm{e}}\otimes \lambda^{-1}(df-\mu g^{-1}dg),\\ &\iota^*\nabla_{\mathfrak{a}}\colon \iota^*{\mathscr{M}}\to \iota^*{\mathscr{M}},\quad \iota^*\nabla_{\mathfrak{a}}(h\iota^*{\bm{e}})\coloneqq([\iota_*{\mathfrak{a}}, h]+hf)\iota^*{\bm{e}},\\ &\iota^*{\mathbb{S}}\colon \iota^*{\mathscr{M}}\to \iota^*\sigma_*{\mathscr{M}}=(\sigma^{-1})_*\iota^*{\mathscr{M}}, \quad \iota^*{\mathbb{S}}(h\iota^*{\bm{e}})\coloneqq(\sigma^{-1})^\#(h)g^{-1}\iota^*{\bm{e}} \end{align*} where $h\in{\mathscr{O}}_{\mathcal{X}}(*\cal{D})$, $\iota^*{\bm{e}}=1\otimes \iota^{-1}{\bm{e}}$, and $\iota_*{\mathfrak{a}}=-{\mathfrak{a}}$. 
Define a pairing \begin{align*} \<\cdot,\cdot\>\colon {\mathscr{M}}\otimes_{{\mathscr{O}}_{\mathcal{X}}(*\cal{D})}\iota^*{\mathscr{M}}\to{\mathscr{O}}_{\mathcal{X}}(*\cal{D}) \end{align*} by $\<{\bm{e}},\iota^*{\bm{e}}\>=1$ and the ${\mathscr{O}}_{\mathcal{X}}(*\cal{D})$-linearity. \begin{lemma}\label{DUA} For $\bm{v}\in {\mathscr{M}}$ and $\bm{w}\in\iota^*{\mathscr{M}}$, the equalities \begin{align*} d_{{\mathcal{X}}/S}\<\bm{v},\bm{w}\>&=\<\nabla \bm{v},\bm{w}\>+\<\bm{v},\iota^*\nabla\bm{w}\>,\\ [{\mathfrak{a}}, \<\bm{v},\bm{w}\>]&=\<\nabla_{\mathfrak{a}}\bm{v},\bm{w}\>+\<\bm{v},\iota^*\nabla_{\mathfrak{a}}\bm{w}\>, \end{align*} hold. For $\bm{v}\in {\mathscr{M}}$ and $\bm{w}\in \sigma_*\iota^*{\mathscr{M}}$, the equality \begin{align*} \sigma_*\<{\mathbb{S}}\bm{v},\bm{w}\>=\sigma^\#(\<\bm{v},\sigma_*\iota^*{\mathbb{S}}\bm{w}\>) \end{align*} holds. Note that $\sigma_*\iota^*{\mathbb{S}}\colon \sigma_*\iota^*{\mathscr{M}}\to\iota^*{\mathscr{M}}$. \qed \end{lemma} Consider \begin{align*} {\mathrm{DR}}_{{\mathcal{X}}/S}(\iota^*{\mathscr{M}})\coloneqq \left[ \iota^*{\mathscr{M}}\xrightarrow{\iota^*\nabla} \iota^*{\mathscr{M}}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\right] \end{align*} as a complex placed at degree zero and one. We then naturally obtain \[\mathbb{R}^k\pi_{{\mathcal{X}}*}{\mathrm{DR}}_{{\mathcal{X}}/S}(\iota^*{\mathscr{M}})\simeq \iota^*{\mathscr{H}}^k_{{\rm dR}}.\] This isomorphism is compatible with the actions of $\iota^*\nabla_{\mathfrak{a}}$ and $\iota^*{\mathbb{S}}$ on both sides. Similar facts hold for $\iota^*{\mathscr{M}}_{a, b}$ and $\iota^*{\mathscr{H}}^1_{{\rm dR}, a, b}$. \subsection{Pairings on cohomology groups}\label{pairing} Mimicking the discussion in \cite{Yu}, which goes back to the work of Deligne \cite{Del2}, we shall construct a morphism \begin{align*} \<\cdot,\cdot\>_{\rm dR}\colon{\mathscr{H}}_{{\rm dR}, a, b}^1\otimes\iota^*{\mathscr{H}}^1_{{\rm dR}, c, d}\longrightarrow \lambda^{-1}{\mathscr{O}}_S \end{align*} for $a, b, c, d\in \mathbb{Z}$ with $a+c=b+d=-1$. Use the notation in Lemma \ref{dR}. Set \begin{align*} &{\mathscr{M}}_{a, b}^{n,0}\coloneqq {\mathscr{M}}_{a,b}^n, &{\mathscr{M}}_{a, b}^{n,1}\coloneqq {\mathscr{M}}_{a+1,b+1}^{n+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1(\cal{P}). \end{align*} By Lemma \ref{dR}, ${\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}}_{a, b})$ is quasi-isomorphic to \[{\mathscr{M}}_{a, b}^{n,\bullet}\coloneqq [{\mathscr{M}}_{a, b}^{n,0}\xrightarrow{\nabla}{\mathscr{M}}_{a, b}^{n,1}].\] Similarly, ${\mathrm{DR}}_{{\mathcal{X}}/S}(\iota^*{\mathscr{M}})$ is quasi-isomorphic to \[\scr{N}_{c, d}^{m,\bullet}\coloneqq[\scr{N}_{c, d}^{m,0}\xrightarrow{\iota^*\nabla}\scr{N}_{c, d}^{m,1}]\] where \begin{align*} &{\scr{N}}_{c, d}^{m,0}\coloneqq \iota^*{\mathscr{M}}_{c, d}^{m}(-\cal{P}), &{\scr{N}}_{c, d}^{m,1}\coloneqq \iota^*{\mathscr{M}}_{c+1,d+1}^{m+1}\otimes\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1. \end{align*} Put ${\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{O}}_{\mathcal{X}})\coloneqq [{\mathscr{O}}_{\mathcal{X}}\xrightarrow{d_{{\mathcal{X}}/S}}\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1]$ placed at degree zero and one. 
We define the morphism of complexes \begin{align*} \<\cdot,\cdot\>_{{\rm dR}}^\bullet\colon{\mathscr{M}}_{a, b}^{n,\bullet}\otimes \scr{N}_{c, d}^{m,\bullet}\to {\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{O}}_{\mathcal{X}}) \end{align*} for $a+c=b+d=n+m=-1$ as follows: In degree zero, we have \begin{align*} \<\cdot,\cdot\>_{\rm dR}^0\colon {\mathscr{M}}_{a, b}^{n,0}\otimes{\scr{N}}_{c, d}^{m,0} \xrightarrow{\<\cdot,\cdot\>} {\mathscr{O}}_{{\mathcal{X}}}(-(f)_\infty-\cal{P})\hookrightarrow {\mathscr{O}}_{\mathcal{X}}, \end{align*} where $\<\cdot,\cdot\>$ is defined in \S \ref{j}. In degree 1, we may also define \begin{align*} \<\cdot,\cdot\>^1_{\rm dR}\colon\bigoplus_{i+j=1}{\mathscr{M}}_{a, b}^{n,i}\otimes \scr{N}_{c, d}^{m,j} \longrightarrow\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1 \end{align*} extending $\<\cdot,\cdot\>$ linearly with respect to the tensor product of sections in $\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1$. We set $\<\cdot,\cdot\>_{\rm dR}^2=0$ in degree 2. Then, by Lemma \ref{DUA}, $\<\cdot,\cdot\>_{{\rm dR}}^\bullet$ is a morphism of complexes. Taking the pushing forward with respect to $\pi_{\mathcal{X}}$, we obtain $\<\cdot,\cdot\>_{\rm dR}$. Here, we have fixed the isomorphism \begin{align*} \mathbb{R}^2\pi_{{\mathcal{X}}*}{\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{O}}_{\mathcal{X}})=H^1(X,\Omega_X^1)\otimes \lambda^{-1}{\mathscr{O}}_S\simeq \lambda^{-1}{\mathscr{O}}_S \end{align*} by $(2\pi{\sqrt{-1}})^{-1}\int_X\colon H^1(X,\Omega_X^1)\xrightarrow{\sim} \C$. \begin{remark} The pairing $\<\cdot,\cdot\>_{{\rm dR}}$ does not depend on the choice of the integer $n$ fixed in the construction. Moreover, they are compatible with the inclusions in Lemma \ref{SUP} in the following sense: For $a',b',c',d'$ with $a'<a$, $b'<b$ and $a'+c'=b'+d'=-1$, take $\bm{\upsilon}\in {\mathscr{H}}^1_{{\rm dR}, a', b'}$ and $\bm{\omega}\in\iota^* {\mathscr{H}}^1_{{\rm dR}, c, d}$. Then we have \begin{align*} \<\imath(\bm{\upsilon}),\bm{\omega}\>_{{\rm dR}}=\<\bm{\upsilon},\imath(\bm{\omega})\>_{\rm dR} \end{align*} where $\imath$ denotes the injection (i.e. the composition of inclusions in Lemma \ref{SUP}). \end{remark} By Lemma \ref{DUA}, we have the following: \begin{corollary} For $\bm{\upsilon}\in {\mathscr{H}}^1_{{\rm dR}, a, b}$ and $\bm{\omega}\in \iota^*{\mathscr{H}}_{{\rm dR},c,d}^1$ with $a+c=b+d=-1$, the following equality holds: \begin{align*} \left[{\mathfrak{a}},\<\bm{\upsilon},\bm{\omega}\>_{\rm dR}\right] =\<\nabla_{\mathfrak{a}}\bm{\upsilon},\bm{\omega}\>_{\rm dR}+\<\bm{\upsilon},\iota^*\nabla_{\mathfrak{a}}\bm{\omega}\>_{\rm dR}. \end{align*} For $\bm{\upsilon}\in {\mathscr{H}}^1_{{\rm dR}, a, b}$ and $\bm{\omega}\in \sigma_*\iota^*{\mathscr{H}}_{{\rm dR}, c+1,d-1}^1$ with $a+c=b+d=-1$, we have \begin{align*} \sigma_*\<{\mathbb{S}}\bm{\upsilon},\bm{\omega}\>_{\rm dR}= \sigma^\#(\<\bm{\upsilon},\sigma_*\iota^*{\mathbb{S}}\bm{\omega} \>_{\rm dR}) \end{align*} Note that ${\mathbb{S}} \bm{\upsilon}\in \sigma_*{\mathscr{H}}_{{\rm dR},a-1, b+1}^1$ and $\sigma_*\iota^*{\mathbb{S}}\bm{\omega}\in \iota^*{\mathscr{H}}_{{\rm dR}, c, d}^1$.\qed \end{corollary} \subsection{Perfectness of the pairings} In this subsection, we prove the following: \begin{theorem}[{c.f. \cite[Theorem 2.1]{Yu} }]\label{Perfect theorem} The pairing $\<\cdot,\cdot\>_{\rm dR}$ is perfect, i.e. the induced morphism \begin{align*} \cal{I}_{\rm dR}\colon {\mathscr{H}}_{{\rm dR}, a, b}^1\longrightarrow {\mathscr{H}\! \! 
om}_{{\mathscr{O}}_S}(\iota^*{\mathscr{H}}^1_{{\rm dR}, c, d},\lambda^{-1}{\mathscr{O}}_S), \quad \bm{\upsilon} \mapsto (\bm{\omega}\mapsto\<\bm{\upsilon},\bm{\omega}\>_{\rm dR}) \end{align*} is an isomorphism for $a, b, c, d\in \mathbb{Z}$ with $a+c=b+d=-1$. \end{theorem} Let us introduce the following notations: \begin{align*} &(-)^\vee\coloneqq {\mathscr{H}\! \! om}_{{\mathscr{O}}_{\mathcal{X}}}(-,{\mathscr{O}}_{\mathcal{X}}), &(-)^\wedge\coloneqq {\mathscr{H}\! \! om}_{{\mathscr{O}}_{\mathcal{X}}}(-,\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1). \end{align*} We have \begin{align*} ({\scr{N}}_{c, d}^{m,0})^\wedge&=(\iota^*{\mathscr{M}}_{c, d}^{m})^\vee\otimes \lambda^{-1}\Omega^1_{{\mathcal{X}}/S}(\cal{P}),\\ ({\scr{N}}_{c, d}^{m,1})^\wedge&=(\iota^*{\mathscr{M}}_{c+1,d+1}^{m+1})^\vee. \end{align*} Then, the isomorphisms \begin{align*} \cal{J}_{\rm dR}^i\colon {\mathscr{M}}_{a, b}^{n,1+i}\longrightarrow ({\scr{N}}_{c, d}^{m, -i})^\wedge, \quad \bm{v}\mapsto (\bm{w}\mapsto \<\bm{v},\bm{w}\>_{\rm dR}^1), \quad(i=0,-1) \end{align*} induced from $\<\cdot,\cdot\>_{\rm dR}^1$ are identified with $\bm{v}\mapsto \<\bm{v},\cdot\> \in (\iota^*{\mathscr{M}}_{c+1,d+1}^{m+1})^\vee$ for $i=-1$, and $\bm{v}\otimes \omega \mapsto\<\bm{v},\cdot\> \otimes\omega\in (\iota^*{\mathscr{M}}_{c, d}^{-m})^\vee\otimes \lambda^{-1}\Omega^1_{{\mathcal{X}}/S}(\cal{P})$ for $i=0$, respectively. Define \begin{align*} (\iota^*\nabla)^\vee\colon (\iota^*{\mathscr{M}}_{c+1,d+1}^{m+1})^\vee\longrightarrow (\iota^*{\mathscr{M}}_{c, d}^{m})^\vee\otimes \lambda^{-1}\Omega^1_{{\mathcal{X}}/S}(\cal{P}) \end{align*} by the following formula: \begin{align*} d_{{\mathcal{X}}/S}{\<}\bm{\psi},\bm{w}{\>}_{\rm ev}= {\<}(\iota^*\nabla)^\vee(\bm{\psi}),\bm{w}{\>}_{\rm ev} +{\<}\bm{\psi},{\iota^*\nabla}\bm{w}{\>}_{\rm ev} \end{align*} where $\bm{\psi}\in (\iota^*{\mathscr{M}}_{c+1,d+1}^{m+1})^\vee$, $\bm{w}\in \iota^*{\mathscr{M}}_{c+1,d+1}^{m+1}$, and ${\<}\cdot,\cdot{\>}_{\rm ev}$ denotes the evaluation. We regard $(\scr{N}_{c, d}^{m,-\bullet})^\wedge$ as a complex with differential $-(\iota^*\nabla)^\vee$. Then $\cal{J}_{\rm dR}^\bullet$ induces an isomorphism of complexes ${\mathscr{M}}_{a, b}^{m, \bullet}[1]\xrightarrow{\sim} ({\scr{N}}_{c, d}^{m,-\bullet})^\wedge$. Taking the pushing forward functor, we obtain \begin{align*} \cal{J}_{\rm dR}\colon {\mathscr{H}}^1_{{\rm dR}, a, b}\xrightarrow{\sim} \mathbb{R}^0\pi_{{\mathcal{X}}*}(\scr{N}_{c, d}^{m,-\bullet})^\wedge. \end{align*} \begin{lemma}\label{Serre} There is a natural isomorphism \begin{align*} \eta\colon\mathbb{R}^0\pi_{{\mathcal{X}}*}(\scr{N}_{c, d}^{m,-\bullet})^\wedge\xrightarrow{\sim} {\mathscr{H}\! \! om}_{{\mathscr{O}}_S}\(\iota^*{\mathscr{H}}_{{\rm dR}, c, d}^1,\lambda^{-1}{\mathscr{O}}_S\). \end{align*} \end{lemma} \begin{proof} By Lemma \ref{dR}, we may assume that $m$ is sufficiently small so that \begin{align*} \mathbb{R}^k\pi_{{\mathcal{X}}*}({\scr{N}}_{c, d}^{m, i})^\wedge=\mathbb{R}^\ell\pi_{{\mathcal{X}}*}({\scr{N}}_{c, d}^{m, j})=0, \end{align*} for $j=0,1$, $k\neq 0$, and $\ell\neq 1$. By the Grothendieck-Serre duality, there are natural isomorphisms \begin{align*} \eta_j\colon\pi_{{\mathcal{X}}*}(\scr{N}_{c, d}^{m, j})^\wedge \xrightarrow{\sim} {\mathscr{H}\! \! om}_{{\mathscr{O}}_S}\(\mathbb{R}^1\pi_{{\mathcal{X}}*}(\scr{N}_{c, d}^{m ,j}),\lambda^{-1}{\mathscr{O}}_S\) \end{align*} for $j=0,1$. Using the notation \begin{align*} (-)^\vee_S\coloneqq {\mathscr{H}\! \! 
om}_{{\mathscr{O}}_S}(-,\lambda^{-1}{\mathscr{O}}_S), \end{align*} we obtain the following diagram: \begin{align*} \xymatrix{ 0\ar[r]&\pi_{{\mathcal{X}}*}({\scr{N}}_{c, d}^{m, 1})^\wedge\ar[r]\ar[d]^{\eta_1}& \pi_{{\mathcal{X}}*}({\scr{N}}_{c, d}^{m,0})^\wedge\ar[r]\ar[d]^{\eta_0}& \mathbb{R}^0\pi_{{\mathcal{X}}*}(\scr{N}_{c, d}^{m,-\bullet})^\wedge\ar[r]&0\\ 0\ar[r]&(\mathbb{R}^1\pi_{{\mathcal{X}}*}{\scr{N}}_{c, d}^{m,1})^\vee_S\ar[r]&(\mathbb{R}^1\pi_{{\mathcal{X}}*}{\scr{N}}_{c, d}^{m,0})^\vee_S \ar[r]&(\iota^*{\mathscr{H}}_{{\rm dR}, c, d}^1)^\vee_S\ar[r]&0.} \end{align*} Since the square in this diagram commutes, we obtain $\eta$. Commutativity of the square: The Grothendieck-Serre duality isomorphisms $\eta_j$ are induced from the following morphisms: \begin{align*} \mathbb{R}^1\pi_{{\mathcal{X}}*} {\scr{N}}\otimes \pi_{{\mathcal{X}}*}{\mathscr{H}\! \! om}({\scr{N}},\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1) &\xrightarrow{\<\cdot,\cdot\>_{\rm ev}} \mathbb{R}^1\pi_{{\mathcal{X}}*}(\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1)\\ &\xrightarrow{\frac{1}{2\pi{\sqrt{-1}}}\int_X} \lambda^{-1}{\mathscr{O}}_S \end{align*} where ${\scr{N}}={\scr{N}}_{c, d}^{m, j}$, and $j=0,1$. Then the commutativity follows from the equality \begin{align*} \frac{1}{2\pi{\sqrt{-1}}}\int_X\<\iota^*\nabla\bm{\upsilon},\bm{\omega}\>_{\rm ev} =\frac{1}{2\pi{\sqrt{-1}}}\int_X\<\bm{\upsilon},-(\iota^*\nabla)^\vee\bm{\omega}\>_{\rm ev} \end{align*} for $\bm{\upsilon}\in\mathbb{R}^1\pi_{{\mathcal{X}}*} {\scr{N}}_{c, d}^{m,0}$, and $\bm{\omega}\in \pi_{{\mathcal{X}}*}{\mathscr{H}\! \! om}({\scr{N}}_{c, d}^{m,1},\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1)$. This equality follows from the fact that the difference of this equality is given by \begin{align*} \frac{1}{2\pi{\sqrt{-1}}}\int_Xd_{{\mathcal{X}}/S}\<\bm{\upsilon},\bm{\omega}\>_{\rm ev}=0 \end{align*} where $\bm{\omega}$ is considered as the section of $\pi_{{\mathcal{X}}*}{\mathscr{H}\! \! om}({\scr{N}}_{c, d}^{m,0},{\mathscr{O}}_{\mathcal{X}})$. \end{proof} Composing $\eta$ in Lemma \ref{Serre} and the morphism $\cal{J}_{\rm dR}$ constructed above, we obtain \begin{align*} \eta\circ\cal{J}_{\rm dR}\colon {\mathscr{H}}^1_{{\rm dR}, a, b}\xrightarrow{\sim} {\mathscr{H}\! \! om}_{{\mathscr{O}}_S}\(\iota^*{\mathscr{H}}_{{\rm dR}, c, d}^1,\lambda^{-1}{\mathscr{O}}_S\). \end{align*} Theorem \ref{Perfect theorem} follows from the following: \begin{lemma} $\cal{I}_{\rm dR}=\eta\circ\cal{J}_{\rm dR}$. \end{lemma} \begin{proof} We may assume that $n$ is sufficiently large and hence $m$ is sufficiently small so that we have \begin{align*} {\mathscr{H}}_{{\rm dR}, a, b}^1&\simeq {\mathrm{Cok}}\left[\pi_{{\mathcal{X}}*}{\mathscr{M}}_{a, b}^{n,0}\xrightarrow{\nabla}\pi_{{\mathcal{X}}*}{\mathscr{M}}_{a, b}^{n,1}\right],\\ \iota^*{\mathscr{H}}_{{\rm dR}, c, d}^1&\simeq \mathrm{Ker}\left[\mathbb{R}^1\pi_{{\mathcal{X}}*}{\scr{N}}_{c, d}^{m,0} \xrightarrow{\iota^*\nabla}\mathbb{R}^1\pi_{{\mathcal{X}}*}{\scr{N}}_{c, d}^{m, 1}\right]. 
\end{align*} Hence a section $\bm{\upsilon}\in {\mathscr{H}}_{{\rm dR}, a, b}^1$ can be represented by a section $\upsilon$ of \[\pi_{{\mathcal{X}}*}{\mathscr{M}}_{a, b}^{n,1}=\pi_{{\mathcal{X}}*}{\mathscr{M}}_{a+1,b+1}^{n+1}\otimes\Omega_{{\mathcal{X}}/S}^1(\cal{P})\] and a section $\bm{\omega}\in \iota^*{\mathscr{H}}_{{\rm dR}, c, d}^1$ can be seen as a section $\omega$ of \[\mathbb{R}^1\pi_{{\mathcal{X}}*}{\scr{N}}_{c, d}^{m,0}=\mathbb{R}^1\pi_{{\mathcal{X}}*}(\iota^*{\mathscr{M}}_{c, d}^{m}(-\cal{P})).\] Using this expression, both $\<\cal{I}_{\rm dR}(\bm{v}), \bm{\omega}\>_{\rm ev}$ and $\<\eta\circ\cal{J}_{\rm dR}(\bm{v}),\bm{\omega}\>_{\rm ev}$ are given by \begin{align*} \frac{1}{2\pi{\sqrt{-1}}}\int_X\<\upsilon,\omega\> \end{align*} which implies the lemma. \end{proof} \subsection{Explicit description and examples of pairings} Take $\bm{\upsilon}\in {\mathscr{H}}^1_{{\rm dR},a, b}$ and $\bm{\omega}\in \iota^*{\mathscr{H}}^1_{{\rm dR}, c, d}$ with $a+c=b+d=-1$. As we have seen before, we can take sufficiently large $n$ so that $\bm{\upsilon}$ can be represented by a section $\upsilon$ of $\pi_{{\mathcal{X}}*}({\mathscr{M}}^{n+1}_{a+1, b+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1(\cal{P}))$. Similarly, if one take $m'$ large enough, then $\bm{\omega}$ is also represented by a section $\omega'$ of $\pi_{{\mathcal{X}}*}(\iota^*{\mathscr{M}}_{c+1, d+1}^{m'}\otimes\lambda^{-1} \Omega_{{\mathcal{X}}/S}^1)$. For each point $p\in P$, there exist an open disk $V_p$ centered at $p$ and a section $\alpha_p\in \Gamma(\cal{V}_p,\iota^*{\mathscr{M}}_{c, d})$ on $\cal{V}_p=S\times V_p$ such that \begin{align*} \omega'_{|\cal{V}_p}-\iota^*\nabla \alpha_p\in \(\iota^*{\mathscr{M}}_{c+1, d+1}^{m+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\)_{\big|\cal{V}_p}. \end{align*} where $m=-n-1$ (See the proof of Lemma \ref{dR}). \begin{proposition}[{c.f. \cite[p. 112]{Del2}}] Using the notations above, we have \begin{align*} \<\bm{\upsilon},\bm{\omega}\>_{\rm dR}=\sum_{p\in P}\mathrm{Res}_{S\times\{p\}}\<\upsilon,\alpha_p\>. \end{align*} \end{proposition} \begin{proof} Take a characteristic function $\psi_p$ of $(V_p, p)$ and put \[\alpha\coloneqq \sum_{p\in P}\psi_p\alpha_p.\] Then, $\omega'-(\iota^*\nabla+\overline{\partial}_X)\alpha$ also represents $\bm{\omega}$ in the relative Dolbeault resolution of the complex ${\scr{N}}_{c, d}^{m,\bullet}$, where $\overline{\partial}_X$ denotes the $\overline{\partial}$-operator along the $X$-direction. We have \begin{align*} \<\bm{\upsilon},\bm{\bm{\omega}}\>_{\rm dR}&=\frac{1}{2\pi{\sqrt{-1}}}\int_X\<\upsilon,\omega'-(\iota^*\nabla+\overline{\partial}_X)\alpha\>\\ &=\frac{1}{2\pi{\sqrt{-1}}}\sum_{p\in P}\int_X-\overline{\partial}_X\psi_p\<\upsilon,\alpha_p\>\\ &=\sum_{p\in P}\mathrm{Res}_{S\times\{p\}}\<\upsilon,\alpha_p\>. \end{align*} Hence we obtain the proposition. \end{proof} \begin{example}[Continuation of Example \ref{Bessel}]\label{BDual} In the case of Example \ref{Bessel}, we may omit the subscripts $a, b, c, d$ since $E=\emptyset$. We can take $n$ to be $0$ to obtain \begin{align*} {\mathscr{H}}^1_{{\rm dR}}\simeq {\mathrm{Cok}}\left[{\mathscr{O}}_S\xrightarrow{\lambda^{-1}(1-z^{-2}-\mu z^{-1})dz} \bigoplus_{-2\leq k\leq 0}\lambda^{-1}{\mathscr{O}}_Sz^k dz\right]. \end{align*} We have put $e_k\coloneqq [\lambda^{-1}z^k d z]$ $(k\in \mathbb{Z})$. We shall compute \begin{align*} \<e_k,\iota^* e_\ell\>_{\rm dR} \end{align*} for $-2\leq k,\ell\leq 0$. 
At $0\in P=\{0,\infty\}$, $\iota^*e_{0|\mathbb{P}^1\setminus\{\infty\}}$, \begin{align*} \iota^*e_{-1}|_{\mathbb{P}^1\setminus\{\infty\}}-\iota^*\nabla(z), \text{ and }\quad \iota^*e_{-2}|_{\mathbb{P}^1\setminus\{\infty\}}-\iota^*\nabla(1+\mu z) \end{align*} are in $ \(\iota^*{\mathscr{M}}^0\otimes\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\)|_{\mathbb{P}^1\setminus\{\infty\}}$. Near $\infty\in P$, take the coordinate $w=z^{-1}$. Then $\iota^*e_{-2|\mathbb{P}^1\setminus\{0\}}$, \begin{align*} \iota^*e_{-1}|_{\mathbb{P}^1\setminus\{0\}}-\iota^*\nabla(-w), \text{ and }\quad \iota^*e_0|_{\mathbb{P}^1\setminus\{0\}}-\iota^*\nabla(-1-\mu w) \end{align*} are in $ \(\iota^*{\mathscr{M}}^0\otimes\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\)|_{\mathbb{P}^1\setminus\{0\}}$. Hence we obtain \begin{align*} \<e_k,\iota^*e_\ell\>_{\rm dR}= \begin{cases} 0&(|k-\ell|=0)\\ \lambda^{-1}&(|k-\ell|=1)\\ \lambda^{-1}\mu&(|k-\ell|=2). \end{cases} \end{align*} \end{example} \begin{example}[Continuation of Examples \ref{EXG} and \ref{Ga}]\label{GDual} Consider the case of Example \ref{EXG} and \ref{Ga}. For each $a\in \mathbb{Z}$, take $n$ to be $-a-1$ to obtain \[{\mathscr{H}}_{{\rm dR},a}^1={\mathrm{Cok}}\left[0\to\lambda^{-1}{\mathscr{O}}_Sz^{-a-1}dz\right]=\lambda^{-1}{\mathscr{O}}_Sz^{-a-1}dz.\] Set $e_{-a-1}=\lambda^{-1}z^{-a-1}dz$. Take another coordinate $w=z^{-1}$. Since we have \[(\iota^*e_a)|_{\mathbb{P}^1\setminus0}-\iota^*\nabla(-w^{-a} \in\(\iota^*{\mathscr{M}}_{-a}^{a+1}\otimes \lambda^{-1}\Omega_{{\mathcal{X}}/S}^1\)_{\big|\mathbb{P}^1\setminus\{0\}}, \] we obtain \[\<e_{-a-1}, \iota^*e_a\>_{\rm dR}=\mathrm{Res}_{w=0} \((-w^{-a})\cdot \(-\lambda^{-1}w^{a}\frac{dw}{w}\)\)=\lambda^{-1}. \] \end{example} \subsection{Analytic description of cohomology groups} Let $\varpi_X\colon \widetilde{X}\to X$ denote the real blowing up of $X$ along $D$. For a subset $H\subset D$, we set $\widetilde{H}\coloneqq \varpi_{X}^{-1}(H)$. Let ${\mathscr{A}}_{\widetilde{X}}^{\leqslant D}$ denote the sheaf of holomorphic functions on $Y=\widetilde{X}\setminus \widetilde{D}$ which are of moderate growth along $\widetilde{D}$. For a subset $H\subset D$, let ${\mathscr{A}}_{\widetilde{X},\widetilde{D}}^{<H}$ denote the subsheaf of ${\mathscr{A}}_{\widetilde{X}}^{\leqslant D}$ whose section is of rapid decay along $\widetilde{H}$. If $H=D$, we also use the notation ${\mathscr{A}}_{\widetilde{X}}^{<D}\coloneqq {\mathscr{A}}_{\widetilde{X},\widetilde{D}}^{<D}$. If $H=\emptyset$, then ${\mathscr{A}}_{\widetilde{X},\widetilde{D}}^{\emptyset}={\mathscr{A}}_{\widetilde{X}}^{\leqslant D}$. We have a differential \[d\colon {\mathscr{A}}_{\widetilde{X},\widetilde{D}}^{<H}\longrightarrow {\mathscr{A}}_{\widetilde{X},\widetilde{D}}^{<H}\otimes_{\varpi_X^{-1}{\mathscr{O}}_X} \varpi_X^{-1}\Omega^1_X.\] Set $\widetilde{{\mathcal{X}}}\coloneqq S^\circ\times \widetilde{X}$, and ${{\mathcal{X}}}^\circ\coloneqq S^\circ\times X$. Let $\varpi_{{\mathcal{X}}}\colon \widetilde{{\mathcal{X}}}\to {\mathcal{X}}^\circ$, $p_{\widetilde{{\mathcal{X}}}}\colon \widetilde{{\mathcal{X}}}\to \widetilde{X}$, and $\pi_{\widetilde{{\mathcal{X}}}}\colon\widetilde{{\mathcal{X}}}\to S^\circ$ denote the projections. We also use the notations $\widetilde{\cal{D}}\coloneqq S^\circ\times \widetilde{D}$, e.t.c. We take the restrictions of sheaves on ${\mathcal{X}}$ to ${\mathcal{X}}^\circ$ (resp. $S$ to $S^\circ$) without a mention. 
Put \begin{align*} {\mathscr{A}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}\coloneqq \varpi_{{\mathcal{X}}}^{-1}{\mathscr{O}}_{{\mathcal{X}}^\circ} \otimes_{p_{\widetilde{{\mathcal{X}}}}^{-1}\varpi_{X}^{-1}{\mathscr{O}}_{X}} p_{\widetilde{{\mathcal{X}}}}^{-1}{\mathscr{A}}_{\widetilde{X},\widetilde{D}}^{<H}. \end{align*} There is the canonical relative differential \begin{align*} d_{{\mathcal{X}}/S}\colon {\mathscr{A}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H} \to {\mathscr{A}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}\otimes\varpi_{{\mathcal{X}}}^{-1}\Omega_{{\mathcal{X}}/S}^1. \end{align*} Then $\varpi_{{\mathcal{X}}}^{-1}\nabla\colon \varpi_{\mathcal{X}}^{-1}{\mathscr{M}}\to \varpi_{{\mathcal{X}}}^{-1}({\mathscr{M}}\otimes \lambda^{-1}\Omega^1_{{\mathcal{X}}/S})$ induces the connection \begin{align*} \nabla\colon{\mathscr{A}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}\otimes \varpi_{\mathcal{X}}^{-1}{\mathscr{M}} \longrightarrow {\mathscr{A}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}\otimes \varpi_{{\mathcal{X}}}^{-1}({\mathscr{M}}\otimes \lambda^{-1}\Omega^1_{{\mathcal{X}}/S}) \end{align*} by the Leibniz rule. \begin{definition} For a subset $H\subset D$, let \begin{align*} {\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}({\mathscr{M}}) \coloneqq \left[{\mathscr{A}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}\otimes \varpi_{\mathcal{X}}^{-1}{\mathscr{M}} \xrightarrow{\nabla} {\mathscr{A}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}\otimes \varpi_{{\mathcal{X}}}^{-1}({\mathscr{M}}\otimes \lambda^{-1}\Omega^1_{{\mathcal{X}}/S}) \right] \end{align*} denote the complex placed at degree $0$ and $1$. We then set \begin{align*} {\mathscr{H}}^k_{{\rm dR},H!}\coloneqq \mathbb{R}^k\pi_{\widetilde{{\mathcal{X}}}*}{\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}({\mathscr{M}}). \end{align*} \end{definition} Let $\widetilde{\mathbb{P}}^1$ denote the real blowing up of $\widetilde{\mathbb{P}}^1$ at $\infty$. The boundary of $\widetilde{\mathbb{P}}^1$ is denoted by $\widetilde{\infty}=\{e^{{\sqrt{-1}}\theta}\infty\mid \theta\in\mathbb{R}\}$ where $e^{{\sqrt{-1}}\theta}\infty$ denotes the limit $\underset{r\to \infty}{\lim}re^{{\sqrt{-1}}\theta}$ in $\widetilde{\mathbb{P}}^1$. Assume that $f$ is non-constant. The map $\lambda^{-1}f\colon {\mathcal{X}}^\circ\to \mathbb{P}^1$, $\((\lambda,\mu),x\)\mapsto \lambda^{-1}f(x)$ uniquely lifts to the continuous map \begin{align*} \widetilde{f/\lambda}\colon \widetilde{{\mathcal{X}}}\longrightarrow \widetilde{\mathbb{P}}^1. \end{align*} We define $\cal{P}^{\rm rd}\subset \widetilde{\cal{P}}=S^\circ\times \widetilde{P}$ as $(\widetilde{f/\lambda})^{-1}(\{e^{{\sqrt{-1}}\theta}\infty \mid -\pi/2<\theta<\pi/2\})$. For $H\subset D$, we set \[{\mathcal{X}}^{\rm rd}_H\coloneqq \cal{Y}^\circ\cup \cal{P}^{\rm rd}\cup \(S^\circ\times (E\setminus H)\),\] where $\cal{Y}^\circ\coloneqq S^\circ\times Y$. Let $\widetilde{\imath}^H\colon \cal{Y}^\circ \to {\mathcal{X}}^{\rm rd}_H$ and $\widetilde{\jmath}^H\colon {\mathcal{X}}^{\rm rd}_H\to \widetilde{{\mathcal{X}}}$ denote the inclusions. Let ${\mathscr{K}}_{\mathscr{O}}$ denote the kernel of $\nabla\colon {\mathscr{M}}_{|\cal{Y}^\circ}\to{\mathscr{M}}_{|{\cal{Y}}^\circ}\otimes \Omega_{\cal{Y}^\circ/S^\circ}^1$. 
\begin{lemma}\label{malgrange} On $\widetilde{{\mathcal{X}}}$, we obtain the following$\colon$ \begin{enumerate} \item The $k$-th cohomology group ${\mathscr{H}}^k({\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}({\mathscr{M}}))$ vanishes if $k\neq 0$. \item ${\mathscr{H}}^0({\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}({\mathscr{M}}))\simeq \widetilde{\jmath}^H_!\widetilde{\imath}^H_*{\mathscr{K}}_{\mathscr{O}}$.\qed \end{enumerate} \end{lemma} From this lemma, we have ${\mathrm{DR}}^{<H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}({\mathscr{M}}){\simeq} {\mathrm{DR}}^{<H'}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}({\mathscr{M}})$ in the derived category if $H\setminus P=H'\setminus P$. Hence, we may assume that $H\subset E=D\setminus P$. \begin{proposition}\label{limits} On $S^\circ$, we have the following natural isomorphisms: \begin{align*} &{\mathscr{H}}_{{\rm dR},\emptyset!}^1\simeq {\mathscr{H}}^1_{{\rm dR}}\simeq \lim_{a, b\to\infty}{\mathscr{H}}^1_{{\rm dR}, a, b}, &&{\mathscr{H}}_{{\rm dR},E!}^1\simeq \lim_{a, b\to-\infty}{\mathscr{H}}^1_{{\rm dR}, a, b}, \\ &{\mathscr{H}}_{{\rm dR},E_0!}^1\simeq \lim_{\substack{a\to-\infty,\\ b\to\infty }} {\mathscr{H}}_{{\rm dR}, a, b}^1, &&{\mathscr{H}}^1_{{\rm dR},E_\infty!}\simeq \lim_{\substack{a\to\infty,\\ b\to-\infty}}{\mathscr{H}}^1_{{\rm dR}, a, b}. \end{align*} \end{proposition} \begin{proof} We firstly consider the case $H=\emptyset$. Then, by Malgrange, we obtain that $\mathbb{R}^i\varpi_{{\mathcal{X}}*}({\mathscr{A}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{\leqslant D})=0$ for $i\neq 0$, and $\varpi_{{\mathcal{X}}*}({\mathscr{A}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{\leqslant D})={\mathscr{O}}_{{\mathcal{X}}}(*\cal{D})$. It follows that \[\mathbb{R}\varpi_{{\mathcal{X}}*}{\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<\emptyset}({\mathscr{M}})\simeq {\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}}),\] which implies the first isomorphism. In the case $H=E$, for each point $(s,e)\in S^\circ\times E$, we can take a neighborhood $\cal{V}_e={\mathrm{nb}}(s)\times V_e$ on which we have the following exact sequence: \[ 0\longrightarrow\jmath^e_!({\mathscr{K}}_{\mathscr{O}})_{|{\mathrm{nb}}(s)\times (V_e\setminus\{e\})} \longrightarrow ({\mathscr{M}}_{a, b})_{|\cal{V}_e}\longrightarrow ({\mathscr{M}}_{a, b}\otimes\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1)_{|\cal{V}_e} \longrightarrow 0. \] for sufficiently small $a$ or $b$, where $\jmath^e\colon{\mathrm{nb}}(s)\times (V_e\setminus\{e\})\to \cal{V}_e$ denote the inclusion. By Lemma \ref{malgrange}, we obtain \[\mathbb{R}\varpi_{{\mathcal{X}}*}{\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<E}({\mathscr{M}})_{|\cal{V}_e}\simeq {\mathrm{DR}}_{{\mathcal{X}}/S}({\mathscr{M}}_{a, b})_{|\cal{V}_e},\] in the derived category. By glueing these (quasi-)isomorphisms, we obtain the quasi-isomorphism of complexes and hence $ {\mathscr{H}}_{{\rm dR},E!}^1\simeq \lim_{a, b\to-\infty}{\mathscr{H}}^1_{{\rm dR}, a, b}$. The other cases $H=E_0,E_\infty$ can also be proved in a similar way. \end{proof} \subsection{Analytic description of the pairings} We may also define ${\mathrm{DR}}^{<H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}(\iota^*{\mathscr{M}})$ in a similar way, and obtain the similar results as in the previous sections. 
In particular we have ${\mathscr{H}}^k({\mathrm{DR}}^{<H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}(\iota^*{\mathscr{M}}))=0$ for $k\neq 0$, and \[{\mathscr{H}}^0({\mathrm{DR}}^{<H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}(\iota^*{\mathscr{M}}))\simeq \iota^{*}\widetilde{\jmath}^{H}_!\widetilde{\imath}_*^H{\mathscr{K}}_{\mathscr{O}}\] where $\iota$ denotes the involution on $\widetilde{{\mathcal{X}}}$ induced from that on $S^\circ$, and $\iota^*$ denotes the pull back as a $\pi_{\widetilde{{\mathcal{X}}}}^{-1}{\mathscr{O}}_{S^\circ}$-module, i.e. \[\iota^{*}\widetilde{\jmath}^{H}_!{\imath}_*^H{\mathscr{K}}_{\mathscr{O}}\coloneqq \pi_{\widetilde{{\mathcal{X}}}}^{-1}{\mathscr{O}}_{S^\circ} \otimes_{\iota^{-1}\pi_{\widetilde{{\mathcal{X}}}}^{-1}{\mathscr{O}}_{S^\circ}}\iota^{-1}\widetilde{\jmath}^{H}_!{\imath}_*^H{\mathscr{K}}_{\mathscr{O}}.\] For $H\subset E$, $\<\cdot,\cdot\>\colon {\mathscr{M}}\otimes \iota^*{\mathscr{M}}\to{\mathscr{O}}(*\cal{D})$ induces the duality pairing \begin{align*} \<\cdot,\cdot\>_{{\rm dR}}^H\colon {\mathrm{DR}}^{<H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}({\mathscr{M}}) \otimes {\mathrm{DR}}^{<E\setminus H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}(\iota^*{\mathscr{M}}) \longrightarrow {\mathrm{DR}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{<D}({\mathscr{O}}_{{\mathcal{X}}},d_{{\mathcal{X}}/S}) \end{align*} where ${\mathrm{DR}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{<D}({\mathscr{O}}_{{\mathcal{X}}},d_{{\mathcal{X}}/S})$ denotes the following complex: \begin{align*} {\mathscr{A}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{<D}\xrightarrow{d_{{\mathcal{X}}/S}} {\mathscr{A}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{<D}\otimes \varpi_{{\mathcal{X}}}^{-1}(\lambda^{-1}\Omega_{{\mathcal{X}}/S}^1). \end{align*} \begin{theorem}\label{analytic duality} The pairing $\<\cdot,\cdot\>_{{\rm dR}}^H$ induces the perfect pairing \begin{align*} \<\cdot,\cdot\>_{{\rm dR}} \colon {\mathscr{H}}^1_{{\rm dR},H!}\otimes \iota^*{\mathscr{H}}^1_{{\rm dR},(E\setminus H)!} \longrightarrow {\mathscr{O}}_{S^\circ}. \end{align*} If $H=E,E_0,$ or $E_\infty$, then the pairings are compatible with the isomorphisms in Proposition $\ref{limits}$, and pairings in $\S \ref{pairing}$. \end{theorem} \begin{proof} We also have ${\mathscr{H}}^k({\mathrm{DR}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{<D}({\mathscr{O}}_{{\mathcal{X}}},d_{{\mathcal{X}}/S^\circ}))=0$ for $k\neq 0$ and \[{\mathscr{H}}^0({\mathrm{DR}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{<D}({\mathscr{O}}_{{\mathcal{X}}},d_{{\mathcal{X}}/S}))=\widetilde{\jmath}_!\pi_{\cal{Y}^\circ}^{-1}{\mathscr{O}}_{S^\circ},\] where $\widetilde{\jmath}\colon \cal{Y}^\circ\to \widetilde{{\mathcal{X}}}$ denote the inclusion. Hence, in the derived category, to consider $\<\cdot,\cdot\>_{{\rm dR}}^H$ is equivalent to consider the pairing \begin{align*} \<\cdot,\cdot\>_{{\rm dR}}^H\colon \widetilde{\jmath}_!^H\widetilde{\imath}^H_*{\mathscr{K}}_{\mathscr{O}} \otimes \iota^*\widetilde{\jmath}_!^{H'}\widetilde{\imath}^{H'}_*{\mathscr{K}}_{\mathscr{O}} \longrightarrow \widetilde{\jmath}_!\pi_{\cal{Y}^\circ}^{-1}{\mathscr{O}}_{S^\circ} \end{align*} where $H'\coloneqq E\setminus H$. This pairing is perfect in the sense that the induced morphisms \begin{align*} \widetilde{\jmath}_!^H\widetilde{\imath}^H_*{\mathscr{K}}_{\mathscr{O}}\longrightarrow \mathbb{R}{\mathscr{H}\! \! 
om}_{\pi_{\widetilde{{\mathcal{X}}}}^{-1}{\mathscr{O}}_{S^\circ}}(\iota^*\widetilde{\jmath}_!^{H'}\widetilde{\imath}^{H'}_*{\mathscr{K}}_{\mathscr{O}}, \widetilde{\jmath}_!\pi_{\cal{Y}^\circ}^{-1}{\mathscr{O}}_{S^\circ})\\ \iota^*\widetilde{\jmath}_!^{H'}\widetilde{\imath}^{H'}_*{\mathscr{K}}_{\mathscr{O}}\longrightarrow \mathbb{R}{\mathscr{H}\! \! om}_{\pi_{\widetilde{{\mathcal{X}}}}^{-1}{\mathscr{O}}_{S^\circ}}(\widetilde{\jmath}_!^H\widetilde{\imath}^H_*{\mathscr{K}}_{\mathscr{O}}, \widetilde{\jmath}_!\pi_{\cal{Y}^\circ}^{-1}{\mathscr{O}}_{S^\circ}) \end{align*} are both isomorphisms (This can be proved by the same way as in \cite{Hien} considering the fact that $\widetilde{\cal{P}}\setminus \cal{P}^{\rm rd}$ is the closure of $\iota(\cal{P}^{\rm rd})$ in $\widetilde{\cal{P}}$). Then, by the Verdier duality, we obtain the theorem (c.f. \cite{Hien}). To check the compatibility when $H=E,E_0, E_\infty$ is left to the reader. \end{proof} \section{Betti homology groups and period integrals} \subsection{Preliminary} Let $M$ be a compact oriented smooth manifold with the boundary $\partial M$. For a non-negative integer $\ell$, let $C_\ell(M)=\bigoplus_{c}\mathbb{Q}\<c\>$ denote the $\mathbb{Q}$-vector space generated by piecewise smooth maps $c\colon \triangle_\ell\to M$ from a $\ell$-simplex \[\triangle_\ell=\{(t_1,\dots, t_\ell) \in \mathbb{R}^\ell\mid0\leq t_1\leq \cdots\leq t_\ell\leq 1 \}\] to $M$. For a closed subset $A\subset M$, let $C_\ell(A)$ denote the subspace of $C_\ell(M)$ generated by the maps whose image is contained in $A$. We put $C_\ell(M, A)\coloneqq C_\ell(M)/ C_\ell(A)$. Then, we define $\mathscr{C}_{M, \partial M}^{-\ell}$ as a sheaf associated to the presheaf \begin{align*} V\mapsto C_\ell \(M, (M\setminus V)\cup \partial M \), \quad \end{align*} where $V$ is an open subset in $\widetilde{X}$. Together with the usual boundary operator, we obtain a co-chain complex $\mathscr{C}_{M, \partial M}^{\bullet}$ of sheaves of $\mathbb{Q}$-vector spaces on $M$. It is known that $\mathscr{C}_{M, \partial M}^{\bullet}$ is a homotopically fine resolution of $\mathbb{Q}_M[\dim M]$ (See \cite{Swan},\cite{Hien}). Let $(N,\partial N)$ be another compact oriented smooth manifold with boundary. Let $h\colon (N,\partial N)\to (M,\partial M)$ be a closed embedding. \begin{lemma}\label{push} For a $\mathbb{Q}_M$-module ${\mathscr{F}}$, we have a natural morphism \begin{align*} h_*\colon h_*(\scr{C}_{N,\partial N}^\bullet\otimes h^{-1}{\mathscr{F}}) \longrightarrow \scr{C}_{M,\partial M}^\bullet \otimes {\mathscr{F}}. \end{align*} \end{lemma} \begin{proof} We firstly consider the case ${\mathscr{F}}=\mathbb{Q}_{M}$. In this case, \[h_*\colon h_*\scr{C}_{N,\partial N}^{-\ell}\to \scr{C}_{M,\partial M}^{-\ell}\] is given in the usual way; for a piecewise smooth map $c$ from $\ell$-simplex to $N$, take $h_*\<c\>\coloneqq \<h\circ c\>$. We then consider the general case. By the projection formula, we have \begin{align*} h_*(\scr{C}_{N,\partial N}^{-\ell}\otimes h^{-1}{\mathscr{F}}) \xleftarrow{\sim}&h_*({\scr{C}_{N,\partial N}^{-\ell}})\otimes {\mathscr{F}}. \end{align*} Then $h_*\otimes \mathrm{id}_{\mathscr{F}}$ defines the desired morphism. \end{proof} Let $I=[0,1]$ be the closed interval. Let $h_I\colon I\times N\to M$ be the $C^\infty$ family of closed embeddings $(N,\partial N)\hookrightarrow (M,\partial M)$. For $t\in I$, set $h_t\coloneqq h_{I|\{t\}\times N}\colon N\to M$. 
A sheaf $\scr{G}$ on $I\times N$ is said to be trivial along $I$ if the adjunction $\mathrm{pr}^{-1}\mathrm{pr}_{*}(\scr{G})\to \scr{G}$ is an isomorphism where $\mathrm{pr}\colon I\times N\to N$ denotes the projection. \begin{lemma}\label{homotopy} Let ${\mathscr{F}}$ be a $\mathbb{Q}_M$-module. Assume that $h_I^{-1}{\mathscr{F}}$ is trivial along $I$. Then the morphisms \[h_{i*}\colon \Gamma(N,\scr{C}_{N,\partial N}^{\bullet}\otimes h_i^{-1}{\mathscr{F}}) \longrightarrow \Gamma(M,\scr{C}_{M,\partial M}^\bullet \otimes {\mathscr{F}}) \quad (i=0,1)\] are chain homotopic to each other under the identification $h_0^{-1}{\mathscr{F}}\simeq h_1^{-1}{\mathscr{F}}$. \end{lemma} \begin{proof} We have \[h_\star^{(\ell)}\colon h_{I*}(\mathrm{pr}^{-1}\scr{C}_{N,\partial N}^{-\ell})\longrightarrow \scr{C}_{M,\partial M}^{-\ell+1}\] in the usual way: $h^{(\ell)}_\star(\<c\>)\coloneqq \sum_i (-1)^i \<h_I\circ (\mathrm{id}_I\times c)\circ s_i\>$, where $s_i\colon \triangle_{\ell}\hookrightarrow I\times \triangle_{\ell-1}$ is defined as $s_i(t_1,\dots, t_{\ell})=(t_i,( t_1,\dots,t_{i-1},t_{i+1},\dots, t_{\ell})) $. We then consider $h_\star^{(\ell)}\otimes \mathrm{id}_{{\mathscr{F}}}$. Taking $\Gamma(M,-)$, we obtain \begin{align*} h^{(\ell)}_\star\colon \Gamma(N,\scr{C}_{N,\partial N}^{-\ell}\otimes\mathrm{pr}_*h_I^{-1}{\mathscr{F}} )\to \Gamma(M,\scr{C}^{-\ell+1}_{M,\partial M}\otimes {\mathscr{F}}) \end{align*} Since $h_I^{-1}{\mathscr{F}}$ is trivial along $I$, we have natural isomorphisms \begin{align*} \Gamma(N,\scr{C}_{N,\partial N}^{-\ell}\otimes\mathrm{pr}_*h_I^{-1}{\mathscr{F}} ) \xrightarrow{\sim} \Gamma(N,\scr{C}_{N,\partial N}^{\bullet}\otimes h_i^{-1}{\mathscr{F}}). \end{align*} We then obtain the lemma by the usual calculation. \end{proof} \subsection{Betti homology groups}Fix a sub-field $\k\subset \C$. We shall use the notations in \S \ref{DR}. Put ${q}\coloneqq \exp(2\pi{\sqrt{-1}} \mu/\lambda)$, which is a holomorphic function on $S^\circ$. \begin{definition} Let ${\mathscr{K}}={\mathscr{K}}(f, g)$ be the subsheaf of ${\mathscr{M}}_{|\cal{Y}^\circ}$ defined as follows: \begin{align*} {\mathscr{K}}\coloneqq \k[{q}^{\pm 1}]_{\cal{Y}^\circ}e^{-f/\lambda}g^{\mu/\lambda}\bm{e}. \end{align*} \end{definition} Although $g^{\mu/\lambda}$ is a multivalued function, ${\mathscr{K}}$ is well defined since the ratio of every two distinct values of $g^{\mu/\lambda}$ is given by $q^m$ for some $m\in\mathbb{Z}$. ${\mathscr{K}}$ is a local system of free $\k[q^{\pm 1}]$-modules of rank one. We have ${\mathscr{K}}\subset {\mathscr{K}}_{\mathscr{O}}$ and ${\mathscr{K}}\otimes_{\k[q^{\pm 1}]}\pi_{\cal{Y}^\circ}^{-1}{\mathscr{O}}_{S^\circ}={\mathscr{K}}_{\mathscr{O}}$. We also note that ${\mathscr{K}}$ is trivial along $S^\circ$ in the sense that the adjunction \begin{align}\label{ADJK} p_{\cal{Y}^\circ}^{-1}p_{\cal{Y}^\circ *}{\mathscr{K}}\longrightarrow {\mathscr{K}} \end{align} is an isomorphism, where $p_{\cal{Y}^\circ}\colon \cal{Y}^\circ\to Y$ denotes the projection. 
\begin{definition} For a subset $H\subset E$, we set ${\mathscr{K}}_{H!}\coloneqq \widetilde{\jmath}_!^H\widetilde{\imath}_*^{H}{\mathscr{K}}$, \[\scr{C}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}({\mathscr{M}}) \coloneqq p_{\widetilde{{\mathcal{X}}}}^{-1} \scr{C}_{\widetilde{X},\widetilde{D}}^\bullet\otimes {\mathscr{K}}_{H!},\] and \begin{align*} {\mathscr{H}}^{{\rm Be}}_{\ell,H!}={\mathscr{H}}^{{\rm Be}}_{\ell,H!}(f,g;\k)\coloneqq \mathbb{R}^{-\ell}\pi_{\widetilde{{\mathcal{X}}}*} \scr{C}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}({\mathscr{M}}). \end{align*} We also use the notations ${\mathscr{H}}^{\rm rd}_\ell\coloneqq {\mathscr{H}}^{\rm Be}_{\ell,E!}$, and ${\mathscr{H}}^{\rm mod}_{\ell}\coloneqq {\mathscr{H}}^{\rm Be}_{\ell,\emptyset!}$. \end{definition} Since $\scr{C}_{\widetilde{X},\widetilde{D}}^\bullet$ is homotopically fine, we have \[{\mathscr{H}}^{\rm Be}_{\ell,H!}={\mathscr{H}}^{-\ell}\(\pi_{\widetilde{{\mathcal{X}}}*}\(p_{\widetilde{{\mathcal{X}}}}^{-1} \scr{C}_{\widetilde{X},\widetilde{D}}^\bullet\otimes {\mathscr{K}}_{H!}\)\).\] \begin{lemma} ${\mathscr{H}}^{{\rm Be}}_{\ell,H!}$ is a locally trivial $\k[q^{\pm 1}]_{S^\circ}$-module. \end{lemma} \begin{proof} This lemma easily follows from the facts that the adjunction (\ref{ADJK}) is an isomorphism and that the projection ${\mathcal{X}}_H^{\rm rd}\to S^\circ$ is locally trivial. \end{proof} \begin{lemma}\label{BEV} If $f$ is not constant, then ${\mathscr{H}}^{\rm Be}_{\ell,H!}=0$ for $\ell\neq 1$. \end{lemma} \begin{proof} It is enough to see that ${\mathscr{H}}^{\rm Be}_{\ell,H!}=0$ for $\ell=0,2$. We have \[{\mathscr{H}}^{\rm Be}_{0, H!}=\mathrm{Cok} \(\pi_{\widetilde{{\mathcal{X}}}*} (\scr{C}^1_{\widetilde{X},\widetilde{D}}\otimes {\mathscr{K}}_H) \to \pi_{\widetilde{{\mathcal{X}}}*} (\scr{C}^0_{\widetilde{X},\widetilde{D}}\otimes {\mathscr{K}}_H)\)=0\] since any point in $Y$ can be connected by a path with a boundary point in $\cal{P}^{\rm rd}$. We also have \begin{align*} {\mathscr{H}}^{\rm Be}_{2,H!}=\mathbb{R}^{-2}\pi_{\widetilde{{\mathcal{X}}}*}({\mathscr{K}}_H[2])=\pi_{\widetilde{{\mathcal{X}}}*}{\mathscr{K}}_H=0 \end{align*} since the section should be locally constant and zero on $\cal{P}^{\rm rd}$. Both facts relies on the assumption that $P\neq \emptyset$. \end{proof} We put \begin{align*} &\iota^*{\mathscr{H}}_{\ell,H!}^{\rm Be}\coloneqq \k[q^{\pm 1}]_{S^\circ}\otimes_{\k[q^{\pm1}]}\iota^{-1}{\mathscr{H}}^{\rm Be}_{\ell,H!}, \end{align*} where the tensor product is given by using the morphism $\k[q^{\pm 1}]\xrightarrow{\sim} \k[q^{\pm 1}],q\mapsto q^{-1}$. \begin{proposition} For $H\subset E$, there is a natural perfect pairing \begin{align*} \<\cdot,\cdot\>_{{\rm Be}}\colon{\mathscr{H}}^{\rm Be}_{\ell,H!}\otimes \iota^*{\mathscr{H}}^{{\rm Be}}_{2-\ell,(E\setminus H)!} \longrightarrow \k[q^{\pm 1}]_{S^\circ}. \end{align*} \end{proposition} \begin{proof} Put \begin{align*} \iota^*{\mathscr{K}}_{H!}\coloneqq \k[q^{\pm 1}]_{\widetilde{{\mathcal{X}}}}\otimes_{\k[q^{\pm1}]}\iota^{-1}{\mathscr{K}}_{H!} \end{align*} where the tensor product is given by using the morphism $\k[q^{\pm 1}]\xrightarrow{\sim} \k[q^{\pm 1}],q\mapsto q^{-1}$. 
Then, the pairing $\<\cdot,\cdot\>_{{\rm dR}}^H$ in the proof of Theorem \ref{analytic duality} is restricted to the following pairing: \begin{align*} \<\cdot,\cdot\>^H_{{\rm Be}}\colon {\mathscr{K}}_{H!} \otimes \iota^*{\mathscr{K}}_{H'!} \longrightarrow \widetilde{\jmath}_!\k[q^{\pm 1}]_{\cal{Y}^\circ} \end{align*} where $H'\coloneqq E\setminus H$. This pairing is perfect in a similar sense as in the proof of Theorem \ref{analytic duality} (the proof is also similar). Noting that $\scr{C}_{\widetilde{X},\widetilde{D}}^\bullet \simeq \mathbb{Q}_{\widetilde{X}}[2]$ in the derived category, by the Verdier duality and the universal coefficient theorem, we obtain the following exact sequence: \begin{align*} 0\longrightarrow \scr{E}\!xt^1({\mathscr{H}}^{\rm Be}_{0,H!}, \k[q^{\pm 1}]_{S^\circ}) \longrightarrow \iota^*{\mathscr{H}}^{\rm Be}_{1,H'!} \longrightarrow {\mathscr{H}\! \! om}({\mathscr{H}}^{\rm Be}_{1,H!},\k[q^{\pm 1}]_{S^\circ})\longrightarrow 0 \end{align*} Then we obtain the proposition by Lemma \ref{BEV}. \end{proof} \begin{corollary} ${\mathscr{H}}^{\rm Be}_{1,H!}$ is torsion free.\qed \end{corollary} We note that $\<\cdot,\cdot\>_{{\rm Be}}\colon {\mathscr{H}}_{1,H!}^{\rm Be}\otimes\iota^*{\mathscr{H}}^{\rm Be}_{1,H'!}\to {\mathscr{O}}_{S^\circ}$ is skew symmetric, i.e. satisfies the relation $\<\cdot,\cdot\>_{\rm Be}=-\iota^*\<\cdot,\cdot\>_{\rm Be}\circ\mathrm{ex}$, where $\mathrm{ex}\colon {\mathscr{H}}_{1,H!}^{\rm Be}\otimes\iota^*{\mathscr{H}}^{\rm Be}_{1,H'!} \to \iota^*{\mathscr{H}}^{\rm Be}_{1,H'!}\otimes {\mathscr{H}}_{1,H!}^{\rm Be}$ denote the exchange and $\iota^*\<\cdot,\cdot\>_{\rm Be}\colon \iota^*{\mathscr{H}}^{\rm Be}_{1,H'!}\otimes {\mathscr{H}}_{1,H!}^{\rm Be}\to \k[q^{\pm 1}]$ denotes the pull back. For $e\in E$, let $n_e$ be the order of $g$ at $e$, i.e. $g_{|V_e}=z^{n_e}$ for some coordinate neighborhood $(V_e, z)$ centered at $e$. \begin{lemma}\label{cokernel} For $H_1\subset H_2\subset E$, there is a canonical exact sequence \[0\longrightarrow{\mathscr{H}}^{\rm Be}_{1,H_2!}\longrightarrow {\mathscr{H}}^{\rm Be}_{1,H_1!}\longrightarrow \bigoplus_{e\in H_2\setminus H_1}\(\frac{\k[q^{\pm 1}]}{(1-q^{n_e})}\)_{S^\circ}\longrightarrow 0.\] \end{lemma} \begin{proof} Let $\kappa\colon \widetilde{\jmath}^{H_2}_!\widetilde{\imath}^{H_2}_*{\mathscr{K}} \to \widetilde{\jmath}^{H_1}_!\widetilde{\imath}^{H_1}_*{\mathscr{K}}$ be the canonical extension. Then, ${\mathrm{Cok}}(\kappa)$ is supported on $\bigsqcup_{e\in H_2\setminus H_1}S^\circ\times \widetilde{e}$, where $\widetilde{e}\coloneqq \varpi_X^{-1}(e)$ for each $e\in H_2\setminus H_1$. On $S^\circ\times \widetilde{e}$, ${\mathrm{Cok}}(\kappa)$ is a local system trivial along $S^\circ$ and the monodromy around $\widetilde{e}$ is $q^{n_e}$. Hence \begin{align*} \mathbb{R}^{-\ell}\pi_{\widetilde{{\mathcal{X}}}*}\(p_{\widetilde{X}}^{-1}\scr{C}_{\widetilde{X},\widetilde{D}}\otimes {\mathrm{Cok}}(\kappa)\) &\simeq \mathbb{R}^{-\ell}\pi_{\widetilde{{\mathcal{X}}}*}\(\mathbb{Q}_{\widetilde{{\mathcal{X}}}}[2]\otimes {\mathrm{Cok}}(\kappa)\)\\ &\simeq \begin{cases} \bigoplus_{e\in H_2\setminus H_1}\(\frac{\k[q^{\pm 1}]}{(1-q^{n_e})}\)_{S^\circ} &(\ell=1)\\ 0&(\ell\neq 1),\end{cases} \end{align*} which implies the lemma. \end{proof} \subsection{Local sections of Betti homology group sheaves} For an open subset $V\subset S^\circ$, put $I_V\coloneqq V\times I$, and $\widetilde{{\mathcal{X}}}_{|V}\coloneqq V\times \widetilde{X}$. 
Let \[\gamma\colon I_V\to \widetilde{{\mathcal{X}}}_{|V}, \quad (s, t)\mapsto (s, \gamma_s(t))\] be a family of piecewise smooth closed embeddings $\gamma_s\colon I\to \widetilde{X}$ over $V$ such that $\gamma_s(\partial I)\subset \widetilde{D}$. Let $p_I\colon I_V\to I$ and $\pi_I\colon I_V\to V$ denote the projections. Take $H\subset E$ and a global section $\varepsilon\in \Gamma(I_V, \gamma^{-1}{\mathscr{K}}_{H!})$. Then $\<\mathrm{id}_I\>\otimes \varepsilon$ defines a section of $(-1)$-th cohomology group \begin{align*} {\mathscr{H}}_{1}(I,\gamma^{-1}{\mathscr{K}}_{H!})\coloneqq {\mathscr{H}}^{-1}\(\pi_{I*}\(p_I^{-1}\scr{C}^{\bullet}_{I,\partial I}\otimes \gamma^{-1}{\mathscr{K}}_{H!}\)\). \end{align*} The section will be denoted by $[\mathrm{id}\otimes \varepsilon]$. By Lemma \ref{push}, we have \[\gamma_{s*}\colon {\mathscr{H}}_{1}(I,\gamma^{-1}{\mathscr{K}}_{H!})_s \to ({\mathscr{H}}_{1,H!}^{{\rm Be}})_s\] for $s\in V$. The following lemma follows from Lemma \ref{homotopy} by the standard argument: \begin{lemma}\label{section kousei} There exists a unique section $[\gamma\otimes \varepsilon]\in{\mathscr{H}}_{1,H!}^{{\rm Be}}(V)$ such that \[[\gamma\otimes \varepsilon]_s=\gamma_{s*}([\mathrm{id}\otimes \varepsilon]_s)\] for any $s\in V$.\qed \end{lemma} Let $\gamma'\colon I_{V'}\to \widetilde{{\mathcal{X}}}_{|V'}$ be another family of paths over $V'\coloneqq \iota(V)$. Let $\varepsilon'$ be a global section in $\Gamma(I_{V'}, \gamma'^{-1}{\mathscr{K}}_{H'})$, where $H'\coloneqq E\setminus H$. In a similar way as above, we have a section $[\gamma'\otimes\varepsilon']\in \iota^*{\mathscr{H}}_{1,H'}^{\rm Be}(V)$. \subsection{Period pairings}\label{3.4} The contents of this subsection is mostly contained in the works \cite{MMT}, \cite{matsubara}, \cite{FSY} (see also \cite{Hien}) in a more general setting. We recall a part of their results in our (trivially) relative setting. Let ${\mathfrak{Db}}_{\widetilde{X}}^{{\rm rd}, -r}$ denote the sheaf of rapid decay distributions on $\widetilde{X}$, i.e. the sheaf whose section on small open $V\subset \widetilde{X}$ are the distributions \begin{align*} \psi\in \mathrm{Hom}_{\rm cont}(\Gamma_c(V, \Omega_{\widetilde{X}}^{\infty, r}),\C) \end{align*} on the space $\Omega_{\widetilde{X}}^{\infty, r}$ of $C^\infty$ differential forms on $\widetilde{X}$ of degree $r$ with compact support in $V$ satisfying the rapid decay condition along $V\cap \widetilde{D}$ (c.f \cite{Hien2}). We set \[{\mathfrak{Db}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{{\rm rd},-r}\coloneqq \varpi_{{\mathcal{X}}}^{-1}{\mathscr{O}}_{{\mathcal{X}}^\circ} \otimes_{p_{\widetilde{X}}^{-1}\varpi_{X}^{-1}{\mathscr{O}}_X}\(p_{\widetilde{X}}^{-1}\varpi_{X}^{-1}{\mathfrak{Db}}_{\widetilde{X}}^{{\rm rd}, -r}\).\] We then obtain the local period pairing \begin{align*} \<\cdot,\cdot\>_{{\rm Per}}^H\colon \scr{C}^{<H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}({\mathscr{M}}) \otimes {\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H'}(\iota^*{\mathscr{M}}) &\longrightarrow {\mathfrak{Db}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{{\rm rd},-\bullet},\\ (c\otimes \varepsilon)\otimes \omega&\mapsto (\eta\mapsto \int_c\eta\wedge\<\varepsilon,\omega\>), \end{align*} which induces the (family of) period pairing(s) \begin{align*} \<\cdot, \cdot\>_{\rm Per}\colon {\mathscr{H}}_{1,H!}^{{\rm Be}}\otimes_{\k[q^{\pm 1}]} \iota^*{\mathscr{H}}_{{\rm dR},H'!}^1\longrightarrow {\mathscr{O}}_{S^\circ}. 
\end{align*} Recall that there is a natural the injection $i_H\colon {\mathscr{H}}_{1,H!}^{{\rm Be}}\to {\mathscr{H}}_{{\rm dR},H!}^1$ induced from ${\mathscr{K}}\hookrightarrow {\mathscr{K}}_{\mathscr{O}}$. On the other hand, using the pairings $\<\cdot,\cdot\>'_{\rm dR}\coloneqq 2\pi{\sqrt{-1}}\<\cdot,\cdot\>_{\rm dR}$ and $\<\cdot,\cdot\>_{\rm Per}$, we also obtain an injection $i_H'\colon {\mathscr{H}}_{1,H!}^{{\rm Be}}\to {\mathscr{H}}_{{\rm dR},H!}^1$. \begin{lemma}\label{COMPATI} The injections $i_H$ and $i'_H$ defined above coincide with each other. \end{lemma} \begin{proof} The statement follows from the following commutative diagram: \begin{align*} \xymatrix{ {\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H}({\mathscr{M}})[2]\otimes {\mathrm{DR}}^{<H'}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}(\iota^*{\mathscr{M}}) \ar[r]^{\ \ \ \ \ \ \ \ \ \ \ \ \<\cdot,\cdot\>'_{\rm dR}}&{\mathrm{DR}}_{\widetilde{{\mathcal{X}}}/S}^{<D}({\mathscr{O}}_{{\mathcal{X}}},d_{{\mathcal{X}}/S})[2]\\ {\mathscr{K}}_H[2]\otimes \iota^*{\mathscr{K}}_{{\mathscr{O}},H'}\ar[r]^{\ \ \ \ \ \ \<\cdot,\cdot\>} \ar[u]\ar[d]&\widetilde{\jmath}!\pi_{{\cal{Y}^\circ}}^{-1}{\mathscr{O}}_{S^\circ}[2] \ar[u]\ar[d]\\ \scr{C}^{<H}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}({\mathscr{M}})\otimes {\mathrm{DR}}_{\widetilde{{\mathcal{X}}},\widetilde{\cal{D}}/S^\circ}^{<H'}(\iota^*{\mathscr{M}}) \ar[r]^{\ \ \ \ \ \ \ \ \<\cdot,\cdot\>_{\rm Per}}&{\mathfrak{Db}}_{\widetilde{{\mathcal{X}}}/S^\circ}^{{\rm rd},-\bullet} } \end{align*} where we have put $\iota^*{\mathscr{K}}_{{\mathscr{O}},H'}\coloneqq\iota^{*}\widetilde{\jmath}^{H}_!\widetilde{\imath}_*^H{\mathscr{K}}_{\mathscr{O}}$. \end{proof} We also note that $i_H=i'_H$ induces an isomorphism \[\mathrm{i}_H\colon {\mathscr{H}}_{1,H!}^{\rm Be}\otimes_{\k[q^{\pm 1}]} {\mathscr{O}}_{S^\circ} \xrightarrow{\sim} {\mathscr{H}}^1_{{\rm dR},H!}.\] This isomorphism trivialize the actions of $\nabla_{\mathfrak{a}}$ and ${\mathbb{S}}$ in the following sense: if we define $\nabla_{\mathfrak{a}}=\mathrm{id}\otimes [{\mathfrak{a}},\cdot]$ and ${\mathbb{S}}=\mathrm{id}\otimes \sigma^\#$ on ${\mathscr{H}}_{1,H!}^{\rm Be}\otimes_{\k[q^{\pm 1}]} {\mathscr{O}}_{S^\circ}$, then $\mathrm{i}_H$ is compatible with these operators. By the similar arguments as in the proof of Lemma \ref{COMPATI}, we obtain the following: \begin{lemma}\label{compatible} We have the following commutative diagram: \begin{align*} \xymatrix{ {\mathscr{H}}^{\rm Be}_{1,H!}\otimes \iota^*{\mathscr{H}}^{\rm Be}_{1,H'!}\ar[r]^{\ \ \ \ \ \ \ \<\cdot,\cdot\>_{\rm Be}}\ar[d]&\k[q^{\pm}]_{S^\circ}\ar[d]\\ {\mathscr{H}}_{{\rm dR},H!}^1\otimes \iota^*{\mathscr{H}}^1_{{\rm dR},H'!}\ar[r]^{\ \ \ \ \ \ \ \<\cdot,\cdot\>'_{\rm dR}}&{\mathscr{O}}_{S^\circ} } \end{align*} where the vertical arrows are the inclusions.\qed \end{lemma} \section{Geometric construction of Stokes filtered quasi-local systems} \subsection{Glueing Betti homology groups}\label{Glueing Betti} Set $S^*\coloneqq S\setminus|(\lambda\mu)_0|\simeq (\C^*)^2$. Define subsets $S_{\mathbb{R}_+}^*$ and $S_{\mathbb{R}_-}^*$ of $S^*$ by \begin{align*} &S_{\mathbb{R}_+}^*\coloneqq\{(\lambda,\mu)\in S^*\mid \mu/\lambda\in\mathbb{R}_{>0} \}\\ &S_{\mathbb{R}_-}^*\coloneqq\{(\lambda,\mu)\in S^*\mid \mu/\lambda\in \mathbb{R}_{<0} \}. \end{align*} Put $S_\mathbb{R}^*\coloneqq S_{\mathbb{R}_+}^*\cup S_{\mathbb{R}_-}^*$. 
We then define a quasi-local system on $S^*$ (see \S \ref{QLS}) constructible with respect to the stratification $S^*=S_{\mathbb{R}_+}^*\sqcup S_{\mathbb{R}_-}^* \sqcup (S^*\setminus S_\mathbb{R}^*) $ as follows: \begin{definition} Let ${\mathscr{H}}^{\rm Be}_{1,\bm{0}}={\mathscr{H}}^{\rm Be}_{1,\bm{0}}(f,g)$ be the sheaf of $\k[q^{\pm 1}]$-modules on $S^*$ defined as follows: For an open subset $V\subset S^*$, we set \begin{align*} {\mathscr{H}}^{\rm Be}_{1,\bm{0}}(V)\coloneqq \begin{cases} {\mathscr{H}}^{\rm mod}_{1}(V)&(V\cap S^*_\mathbb{R}=\emptyset)\\ {\mathscr{H}}^{{\rm Be}}_{1,E_0!}(V)&(V\cap S^*_{\mathbb{R}_+}= \emptyset, V\cap S^*_{\mathbb{R}_-}\neq\emptyset)\\ {\mathscr{H}}^{\rm Be}_{1,E_\infty!}(V)&(V\cap S^*_{\mathbb{R}_+}\neq \emptyset, V\cap S^*_{\mathbb{R}_-}=\emptyset)\\ {\mathscr{H}}^{\rm rd}_1(V)&(V\cap S^*_{\mathbb{R}_+}\neq \emptyset, V\cap S^*_{\mathbb{R}_-}\neq\emptyset). \end{cases} \end{align*} The restrictions are defined as the usual restrictions or the canonical morphisms. \end{definition} By Lemma \ref{cokernel}, ${\mathscr{H}}_{1,\bm{0}}^{\rm Be}$ is a quasi-local system on $S^*$. Set ${\mathscr{H}}^1_{{\rm dR},\bm{0}}\coloneqq {\mathscr{H}}^1_{{\rm dR},0,0}$. \begin{corollary} There is a natural isomorphism \begin{align*} \mathrm{i}_{\bm{0}}\colon{\mathscr{H}}^{\rm Be}_{1,\bm{0}}\otimes_{\k[q^{\pm 1}]}{\mathscr{O}}_{S^*}\xrightarrow{\sim} {\mathscr{H}}^1_{{\rm dR},\bm{0}|S^*}. \end{align*} \end{corollary} \begin{proof} By Proposition \ref{limits} and Lemma \ref{SUP}, we have the natural isomorphisms \begin{align*} {\mathscr{H}}_{{\rm dR},\bm{0}}^1\simeq \begin{cases} {\mathscr{H}}_{{\rm dR},E_0!}^1&\text{on }S^*\setminus S^*_{\mathbb{R}_+}\\ {\mathscr{H}}_{{\rm dR},E_\infty!}^1&\text{on }S^*\setminus S^*_{\mathbb{R}_-}\\ {\mathscr{H}}_{{\rm dR}}^1&\text{on }S^*\setminus S^*_\mathbb{R}. \end{cases} \end{align*} Then the isomorphisms $\mathrm{i}_H$ in \S \ref{3.4} induce the desired isomorphism $\mathrm{i}_{\bm{0}}$. \end{proof} Let $B$ be the complex plane considered in \S \ref{stokes}. Let $\phi_S\colon B\to S$ be the morphism defined by $\phi_S(u, v)=(uv, v)$, i.e. $\lambda=uv$ and $\mu=v$. It induces an isomorphism $\phi_S\colon B^*\xrightarrow{\sim} S^*$. \begin{definition} We set \begin{align*} \L^{\rm Be}=\L^{\rm Be}{(f,g)}\coloneqq \imath_T^{-1}\widetilde{\jmath}_{B*}\phi_S^{-1}{\mathscr{H}}_{1,\bm{0}}^{\rm Be}, \end{align*} which is a quasi-local system on $(T,\Theta)$. \end{definition} \begin{lemma} $\L^{\rm Be}$ is saturated. \end{lemma} \begin{proof} Since the sum morphism ${\mathscr{K}}_{E_0!}\oplus {\mathscr{K}}_{E_\infty!}\to {\mathscr{K}}_{\emptyset !}$ is surjective, we obtain the lemma. \end{proof} Let $e_1,\dots, e_r$ be a frame of $\phi_S^*{\mathscr{H}}_{{\rm dR},\bm{0}}^1$ around the origin of $B$. \begin{definition} For an open subset $V\subset T$ and a section $\varphi\in \Gamma(V,{\mathscr{Q}})$, we define the subsheaf $\L^{\rm Be}_{\leqslant \varphi}\subset \L^{\rm Be}_{|V}$ as follows: Let $\mathfrak{s}$ be a local section of $\L^{\rm Be}_{|V}$. Then, there exist an open subset $\widetilde{V'}\subset \widetilde{B}$ with $T\cap \widetilde{V'}\subset V$ and a representative $\widetilde{\mathfrak{s}}\in \Gamma(\widetilde{V'}\setminus T, {\mathscr{H}}^{\rm Be}_{1,\bm{0}})$ of $\mathfrak{s}$. We have the expression \begin{align*} \widetilde{\mathfrak{s}}=\sum_{i=1}^rh_i(u,v)e_i \end{align*} for some $h_i\in \Gamma(\widetilde{V'},\widetilde{\jmath}_{B*}{\mathscr{O}}_{B^*})$.
The section $\mathfrak{s}$ is a local section of $\L^{\rm Be}_{\leqslant \varphi}$ if and only if $e^{-\varphi}h_i\in {\mathscr{A}}_{B}^{\leqslant Z}$ for all $i=1,\dots, r$. \end{definition} \begin{lemma} $\L^{\rm Be}_{\leqslant}$ is a pre-Stokes filtration on $\L^{\rm Be}$.\qed \end{lemma} By Lemma \ref{compatible}, we have the following description of $\L^{\rm Be}_{\leqslant}$: \begin{lemma}\label{period criterion} A local section $\mathfrak{s}$ of $\L^{\rm Be}$ is in $\L^{\rm Be}_{\leqslant\varphi}$ if the period integral twisted by $e^{-\varphi}$, \begin{align*} e^{-\varphi}\<\mathfrak{s},\omega \>_{\rm Per}, \end{align*} is of moderate growth for any local section $\omega$ of $\iota^*{\mathscr{H}}_{{\rm dR},\bm{0}}^1$.\qed \end{lemma} We also note that we have the quasi-duality pairing $\<\cdot,\cdot\>_{{\rm Be},\pm}$ on $\L^{\rm Be}$ using $\<\cdot,\cdot\>_{{\rm Be}}$ (defined in the previous section) in a natural way. \begin{lemma} The quasi-duality pairing $\<\cdot,\cdot\>_{{\rm Be},\pm}$ on $\L^{\rm Be}$ is compatible with the pre-Stokes filtration $\L^{\rm Be}_{\leqslant}$. \qed \end{lemma} \subsection{Geometric conditions} In the following discussion, we will assume the following conditions: \begin{align} \label{A_1} &\text{The restriction $f_{|U}$ has at most $A_1$-singularities.}\\ \label{nowhere} &\text{The divisor $E$ and the critical points of $f_{|U}$ do not intersect.} \end{align} \begin{definition} Let $\Sigma(f, g)_Y$ be the zero locus of the relative one form \begin{align*} df-\mu g^{-1}dg\in H^0 \(\C \times Y, \Omega^1_{\C\times Y/\C}\). \end{align*} We then define $\Sigma(f, g)$ to be the closure of $\Sigma(f, g)_Y$ in $\C \times U$. \end{definition} \begin{lemma}\label{cover} The natural projection $\pi_\Sigma\colon\Sigma(f, g)\to \C$ is a non-ramified finite covering over the disc $\Delta_\mu(r)=\{\mu\in\C_\mu\mid |\mu|<r\}$ for sufficiently small $r>0$. \end{lemma} \begin{proof} There exist $r>0$ and a small neighborhood $V$ of $P\subset X$ such that we have $({\Delta_\mu(r)}\times V)\cap\Sigma(f, g)=\emptyset$. It follows that $\Sigma(f, g)\cap({\Delta_\mu(r)}\times U)$ is closed in ${\Delta_\mu(r)}\times X$. Hence the projection \begin{align*} \pi_{r}:\Sigma(f, g)\cap (\Delta_\mu(r)\times Y)\longrightarrow {\Delta_\mu(r)} \end{align*} is a finite morphism. It remains to prove that $\pi_r$ is non-ramified for sufficiently small $r>0$. Take a point $p$ in $\pi_\Sigma^{-1}(0)$. By Condition (\ref{nowhere}), we have $\pi_\Sigma^{-1}(0)=E\sqcup {\mathrm{Crit}}(f)$. Consider the case $p\in {\mathrm{Crit}}(f)$. By Condition (\ref{A_1}), there exists a chart $(V_p;x)$ on which we have $f(x)=x^2+f(p)$. Let $h(x)$ be the function on $V_p$ defined by $h(x)d x=g^{-1}dg$. By Condition (\ref{nowhere}), $h(x)$ is a holomorphic function. On ${\Delta_\mu(r)}\times V_p$, we have \begin{align*} df-\mu g^{-1}dg=(2x-\mu h(x))dx. \end{align*} Since $h(x)$ is holomorphic, \[\Sigma(f, g)\cap ({\Delta_\mu(r)}\times V_p)=\{(\mu,x) \in{\Delta_\mu(r)}\times V_p\mid 2x-\mu h(x)=0\}\] is smooth over ${\Delta_\mu(r)}$ for sufficiently small $r>0$. Consider the case $p\in E$. There exists a coordinate $(W_p, y)$ such that $y(p)=0$ and $g(y)=y^{n_p}$ for some $n_p\in \mathbb{Z}$ on $W_p$. Let $f'(y)$ be the holomorphic function on $W_p$ defined by $df=f'(y)dy$. By Condition (\ref{nowhere}), we may assume that $f'(y)$ is nowhere vanishing on $W_p$.
Then \begin{align*} \Sigma(f, g)\cap ({\Delta_\mu(r)}\times W_p)=\{(\mu, y)\in {\Delta_\mu(r)}\times W_p\mid y f'(y)-n_p\mu =0 \} \end{align*} is smooth over ${\Delta_\mu(r)}$ for sufficiently small $r>0$. Hence we obtain the lemma. \end{proof} We also note that under this assumption, we have the following: \begin{corollary} The rank of ${\mathscr{H}}^1_{{\rm dR}}$ over ${\mathscr{O}}_{S^\circ}$ is equal to the number of elements of $\pi_{\Sigma}^{-1}(0)=E\sqcup {\mathrm{Crit}}(f)$. \end{corollary} \begin{proof} By Theorem \ref{LF}, the rank is the same as the dimension of the fiber of ${\mathscr{H}}^1_{{\rm dR},0,0}$ at $(z, s)=(0,0)$. The fiber is canonically isomorphic to \[H^0(U,\omega_{U}(E)/{\mathscr{O}}_{U} df). \] Since the support of $\omega_{U}(E)/{\mathscr{O}}_{U} df$ is $\pi_\Sigma^{-1}(0)$ (the multiplicity at each point is one), we obtain the claim. \end{proof} Hence we also have: \begin{corollary} If $(f,g)$ satisfies $(\ref{A_1})$ and $(\ref{nowhere})$, then the rank of $\L^{\rm Be}$ is equal to the number of elements in $\pi_{\Sigma}^{-1}(0)=E\sqcup {\mathrm{Crit}}(f)$. \qed \end{corollary} For each point $p$ in $\pi_\Sigma^{-1}(0)$, let $\nu_{p}\colon{\Delta_\mu(r)} \xrightarrow{\sim} \Sigma_p$ denote the section of the projection $\pi_\Sigma$ to the sheet $\Sigma_p \subset \Sigma(f, g)\cap({\Delta_\mu(r)} \times U)$ which contains $p$. We will identify $\Sigma_p$ with its image in $U$ via the projection ${\Delta_\mu(r)}\times U\to U$. Then $\nu_p$ will be identified with the composition with the projection, i.e. it will be considered as a map $\nu_p\colon {\Delta_\mu(r)}\to U$. We then put $f_p \coloneqq f\circ\nu_{p} \colon {\Delta_\mu(r)} \to \C$ and $g_p\coloneqq g\circ \nu_{p} \colon {\Delta_\mu(r)} \to \C$. Let $n_p$ be the integer such that $g_p({\eq})$ is of the form ${\eq}^{n_p}h({\eq})$, where $h({\eq})$ is holomorphic with $h(0)\neq 0$. Using these notations, we define the goodness of $(f, g)$: \begin{definition}\label{good pair} We will say that the pair $(f, g)$ is \textit{good} if for any two distinct points $p, p'\in\pi_\Sigma^{-1}(0)$ with $(f_p, g_p)\neq (f_{p'}, g_{p'})$ as pairs of germs in ${\mathscr{O}}_{{\Delta_\mu(r)},0}$, either $f_p(0)\neq f_{p'}(0)$ or $n_p\neq n_{p'}$ holds. \end{definition} \begin{example}\label{spec bessel} Consider Example \ref{Bessel}. In this case, we have $E=\emptyset$, ${\mathrm{Crit}}(f)=\{\pm 1\}$, and \[\Sigma(f, g)=\{({\eq}, y)\in \C\times Y \mid y^2-{\eq} y-1=0\}.\] We have $f_{\pm 1}({\eq})=\pm \sqrt{{\eq}^2+4}$ and $g_{\pm 1}({\eq})=2^{-1}({\eq}\pm \sqrt{{\eq}^2+4})$. \end{example} \begin{example}\label{spec gamma} Consider Example \ref{EXG}. In this case, we have $E=E_0=\{0\}$, ${\mathrm{Crit}}(f)=\emptyset$ and \[\Sigma(f, g)=\{({\eq}, y)\in \C \times U \mid y-{\eq}=0\}.\] We have $f_{0}({\eq})=g_0({\eq})={\eq}$. \end{example} In the following, we always assume that $(f, g)$ is a good pair. \subsection{Statement of the main theorem} The main result of this paper is the following: \begin{theorem}\label{main theorem} Assume that the pair $(f, g)$ is good. In particular, it satisfies the conditions $(\ref{A_1})$ and $(\ref{nowhere})$. Then, $\L^{\rm Be}_{\leqslant}$ is a good Stokes filtration on $\L^{\rm Be}=\L^{\rm Be}(f,g)$.
Moreover, the exponential factor $\Phi(f, g)\coloneqq \Phi(\L^{\rm Be}, \L^{\rm Be}_{\leqslant})$ is given by \begin{align*} &\Phi(f,g)=\bigsqcup_{p\in \pi_{\Sigma}^{-1}(0)} \Phi_p(f,g), & \Phi_p(f, g)\coloneqq \left[ u^{-1}\( \log g_p(v)-\frac{f_p(v)}{v}+2\pi{\sqrt{-1}}\mathbb{Z}\) \right]. \end{align*} \end{theorem} The proof of this theorem will be completed in \S \ref{saddle point method}. The goodness of $\Phi(f,g)$ directly follows from the assumption that the pair $(f,g)$ is good in the sense of Definition \ref{good pair}. We first explain what will actually be shown in the proof. \begin{definition} A real number $\phi\in \mathbb{R}$ is called \textit{generic} (with respect to $(f, g)$) if \begin{align*} {\mathrm{Im}}(e^{-{\sqrt{-1}}\phi}(f(p)-f(p')))\neq 0 \end{align*} for $p, p'\in \pi_\Sigma^{-1}(0)$ with $f(p)\neq f(p')$. \end{definition} Remark that if $\phi$ is generic, then $\phi+\pi n$ ($n\in \mathbb{Z}$) is also generic. For a generic $\phi$, we set \begin{align*} &V^\phi\coloneqq \left\{(\theta_u,\theta_v)\in T\middle| \cos(\theta_u+\theta_v-\phi)>0 \right\},\\ &V^\phi_{+}\coloneqq \left\{(\theta_u,\theta_v)\in V^\phi\middle| \cos(\theta_v-\phi)>0\right\},\\ &V^\phi_{-}\coloneqq\left\{(\theta_u,\theta_v)\in V^\phi\middle| \cos(\theta_v-\phi)<0\right\}. \end{align*} Note that we have $V^\phi_+\cap T_{\mathbb{R}_+}\neq \emptyset$, $V^\phi_+\cap T_{\mathbb{R}_-}=\emptyset$, $V^\phi_-\cap T_{\mathbb{R}_-}\neq \emptyset$, and $V^\phi_-\cap T_{\mathbb{R}_+}=\emptyset$. We also have \begin{align*} &\iota(V^{\phi})=V^{\phi+\pi}, &\iota(V^{\phi}_{\pm})=V^{\phi+\pi}_{\mp}. \end{align*} Theorem \ref{main theorem} follows from the following: \begin{theorem}\label{WTS} For any generic $\phi$, we have the sections \begin{align*} &\mathfrak{s}_c^\phi\in \L^{\rm Be}(V^\phi),&&(c\in {\mathrm{Crit}}(f))\\ &\mathfrak{s}_{e,\pm}^\phi\in \L^{\rm Be}(V^\phi_{\pm}), &&(e\in E) \end{align*} with the following properties: \begin{enumerate} \item There are sections \begin{align*} &\varphi_c\in H^0(V^\phi, \Phi_c(f,g))&& (c\in {\mathrm{Crit}}(f)),\\ &\varphi_{e,\pm}\in H^0(V^\phi_{\pm},\Phi_e(f,g)), &&(e\in E) \end{align*} such that \begin{enumerate} \item $\mathfrak{s}_c^\phi\in \L_{\leqslant \varphi_c}^{\rm Be}(V^\phi)$ for $c\in {\mathrm{Crit}}(f)$, $\mathfrak{s}_{e,\pm}^\phi\in \L^{\rm Be}_{\leqslant \varphi_{e,\pm}}(V^\phi_{\pm})$ for $e\in E$, and \item the induced sections ${\mathrm{gr}}_{\varphi_c}(\mathfrak{s}_c^\phi)$, ${\mathrm{gr}}_{\varphi_{e,\pm}}(\mathfrak{s}_{e,\pm}^\phi)$ are non-zero. \end{enumerate} \item Let $\mathfrak{s}_{c,\pm}^\phi$ denote the restrictions of $\mathfrak{s}_{c}^\phi$ to $V^\phi_{\pm}$. Then the intersection matrix \[\(\<\mathfrak{s}^\phi_{p,+}, \mathfrak{s}^{\phi+\pi}_{p',-}\>_{{\rm Be}+}\)_{p, p'\in \pi_{\Sigma}^{-1}(0)}\] is the identity matrix. \item The quotient classes \begin{align*} &[\mathfrak{s}^\phi_{e,+}]_{\bm\theta}\in \L_{\bm\theta}^{\rm Be}/(\L^{\rm Be})^-_{\bm\theta} &&\text{for $e\in E_0$, $\bm{\theta}\in V^\phi_+\setminus T_{\mathbb{R}_+}$ and} \\ &[\mathfrak{s}^\phi_{e', -}]_{\bm\theta'}\in \L_{\bm\theta'}^{\rm Be}/(\L^{\rm Be})^+_{\bm\theta'} &&\text{for $e'\in E_\infty$, $\bm\theta'\in V^\phi_-\setminus T_{\mathbb{R}_-}$} \end{align*} are non-zero, while $[\mathfrak{s}^\phi_{e,-}]_{\bm\theta'}\in \L_{\bm\theta'}^{\rm Be}/(\L^{\rm Be})^+_{\bm\theta'}$ and $[\mathfrak{s}^\phi_{e', +}]_{\bm\theta}\in \L_{\bm\theta}^{\rm Be}/(\L^{\rm Be})^-_{\bm\theta}$ are zero.
\end{enumerate} \end{theorem} \begin{proof}[Proof of ``Theorem \ref{WTS} $\Rightarrow$ Theorem \ref{main theorem}''] Let $\bm{\theta}$ be a point in $T_+\cup T_-$. Take generic $\phi_+$ and $\phi_-$ such that both $V^{\phi_+}_+$ and $V^{\phi_-}_-$ contain $\bm\theta$. If we assume Theorem \ref{WTS}, we have the sections $\mathfrak{s}^{\phi_\pm}_{c}\in \Gamma(V^{\phi_\pm},\L^{\rm Be})$ for $c\in {\mathrm{Crit}}(f)$, and the sections $\mathfrak{s}_{e,\pm}^{\phi_\pm}\in \Gamma(V^{\phi_\pm}_\pm,\L^{\rm Be})$ for $e\in E$, which satisfy the conditions in Theorem \ref{WTS}. By these conditions, we have the following frames of $\L^+$, $\L^-$ and $\L$ on a neighborhood ${\mathrm{nb}}(\bm\theta)$ of $\bm\theta$ (we put $\L\coloneqq \L^{\rm Be}$ for simplicity of notation): \begin{align}\label{+-} &\L^+_{|{\mathrm{nb}}(\bm\theta)}= \bigoplus_{p\in\pi_{\Sigma}^{-1}(0)} \k[q^{\pm 1}]\mathfrak{s}^{\phi_+}_{p,+}, &\L^-_{|{\mathrm{nb}}(\bm\theta)}=\bigoplus_{p\in\pi_{\Sigma}^{-1}(0)} \k[q^{\pm 1}]\mathfrak{s}^{\phi_-}_{p,-}, \end{align} \begin{align*} &\L_{|{\mathrm{nb}}(\bm\theta)}=\(\bigoplus_{c\in {\mathrm{Crit}}(f)}\k[q^{\pm 1}]\mathfrak{s}^{\phi_+}_c\) \oplus \(\bigoplus_{e\in E_0}\k[q^{\pm 1}]\mathfrak{s}^{\phi_+}_{e,+}\) \oplus\(\bigoplus_{e'\in E_\infty}\k[q^{\pm 1}]\mathfrak{s}^{\phi_-}_{e',-}\). \end{align*} Using these frames, we obtain the conditions (1) and (2) in Definition \ref{Def q-Stokes} on ${\mathrm{nb}}(\bm\theta)$. (In the case $\bm\theta\in T_{\mathbb{R}_+}\cup T_{\mathbb{R}_-}$, we only need to consider (\ref{+-}) to see (1) and (2) in Definition \ref{Def q-Stokes}.) We also obtain (3) in Definition \ref{Def q-Stokes} from (3) in Theorem \ref{WTS}. \end{proof} \subsection{Families of paths}\label{properties of gamma} For $\phi\in\mathbb{R}$ and $\delta>0$, we set $\Delta^*_\mu(\delta)\coloneqq \Delta_\mu(\delta)\setminus\{0\}$ and \begin{align*} \cal{S}_\pm^\phi(\delta)\coloneqq \{\mu\in \Delta^*_\mu(\delta)\mid \pm {\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}\mu)>0\}. \end{align*} Note that $\cal{S}_+^{\phi+\pi}(\delta)=\cal{S}_-^\phi(\delta)$ and $\cal{S}^{\phi+\pi}_-(\delta)=\cal{S}_+^{\phi}(\delta)$. For a sufficiently small $\delta>0$ and a generic $\phi$, we shall construct the families of paths \begin{align*} &\gamma^\phi_c\colon \Delta_\mu(\delta)\times \mathbb{R}\to \Delta_\mu(\delta)\times Y, &&(\mu, t)\mapsto \(\mu,\gamma^\phi_{c,\mu}(t)\)\\ &\gamma^\phi_{e,\pm}\colon \cal{S}_\pm^\phi(\delta)\times \mathbb{R}\to \cal{S}_\pm^\phi(\delta)\times Y, &&(\mu, t)\mapsto \(\mu,\gamma^\phi_{e,\pm,\mu}(t)\) \end{align*} indexed by $c\in{\mathrm{Crit}}(f)$, $e\in E$, and $\pm$ with the following properties: \begin{enumerate} \item[$(\gamma1)$] $\gamma^\phi_{c,\mu}(0)=\nu_c(\mu)$, and $\gamma^\phi_{e,\pm,\mu}(0)=\nu_e(\mu)$. \item[$(\gamma2)$] For a fixed $\mu\in \cal{S}_\pm^\phi(\delta)$, the path $\gamma^\phi_{c,\mu}$ $(c\in {\mathrm{Crit}}(f))$ does not intersect with $\gamma^{\phi+\pi}_{c',\mu}$ $(c'\in {\mathrm{Crit}}(f), c'\neq c)$ or $\gamma^{\phi+\pi}_{e',\pm,\mu}$ $(e'\in E)$. Similarly, $\gamma^\phi_{e,\pm,\mu}$ $(e\in E)$ does not intersect with $\gamma^{\phi+\pi}_{c',\mu}$ or $\gamma^{\phi+\pi}_{e',\pm,\mu}$ ($e'\neq e\in E$). \item[$(\gamma3)$] For $c\in {\mathrm{Crit}}(f)$, take a branch of $\log g$ at $c$. For $e\in E$, take a branch of $\log g$ on $\{\nu_e(\mu)\in Y\mid \mu\in\cal{S}_\pm^\phi(\delta)\}$ (see \S \ref{Around E} below).
Then the function ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}(f-\mu \log g))$ monotonically increases along $\gamma^\phi_{c,\mu}$ and $\gamma^\phi_{e,\pm,\mu}$ as $|t|$ increases. \end{enumerate} The construction will be given in \S \ref{construction of gamma} after the preparations in \S \ref{dumbbell} and \S \ref{foliation}. Then we will use these paths to construct the sections in Theorem \ref{WTS} after some modifications. \subsection{Preliminaries}\label{dumbbell} Let $\phi\in \mathbb{R}$ be generic. For $c\in {\mathrm{Crit}}(f)$, let $L^\phi_c\colon\mathbb{R}\to Y$, $t\mapsto L^\phi_c(t)$ be a path which satisfies the following conditions: \begin{itemize} \item $L_c^\phi(0)=c$, and \item $f\circ L_c^\phi(t)=f(c)+e^{{\sqrt{-1}}\phi}t^2$ for $t\in\mathbb{R}$. \end{itemize} These conditions determine $L^\phi_c$ up to the re-parametrization $t\mapsto -t$. For $e\in E$, let $L^\phi_e\colon \mathbb{R}_{\geq 0}\to Y$, $t\mapsto L^\phi_e(t)$ be a path with the following conditions: \begin{itemize} \item $L_e^\phi(0)=e$, and \item $f\circ L_e^\phi(t)=f(e)+e^{{\sqrt{-1}}\phi}t$ for $t\geq 0$. \end{itemize} Since $\phi$ is generic, any two distinct paths among the $L_c^{\phi}$ $(c\in {\mathrm{Crit}}(f))$ and the $L_e^{\phi}$ $(e\in E)$ do not intersect each other. Moreover, if $p, p'\in {\mathrm{Crit}}(f)\cup E$ are different (i.e. $p\neq p'$), then $L_p^\phi$ and $L_{p'}^{\phi+\pi}$ do not intersect each other. There exist the following limits in $X$: \begin{align*} &p^\phi_{c,\pm}\coloneqq \lim_{t\to\pm \infty} L^\phi_c(t), &p_{e}^\phi\coloneqq \lim_{t\to \infty} L_e^\phi(t). \end{align*} Let $\Delta(p_{c,\pm}^\phi)$ be a coordinate disk centered at $p_{c,\pm}^\phi$. Take $R>0$ with \begin{align*} \text{ $L^\phi_c([-R,R])\cap \Delta(p_{c,+}^\phi)\neq \emptyset$ and $L^\phi_c([-R,R])\cap \Delta(p_{c,-}^\phi)\neq \emptyset$.} \end{align*} Then we take a tubular neighborhood $\bm{L}^\phi_c(R)$ of $L^\phi_c([-R,R])$ which is relatively compact in $Y$. We take a disk $\Delta(c)\subset \bm{L}^\phi_c(R)$ centered at $c$. We can take the tubular neighborhood $\bm{L}^\phi_c(R)$ and the disk $\Delta(c)$ so that the following condition holds: For any $x\in \bm{L}^\phi_c(R)\setminus \Delta(c)$, the path $\ell_x(t)$ given by \begin{align*} \text{$\ell_x(0)=x$, and $f\circ\ell_x(t)=f(x)+e^{{\sqrt{-1}}\phi}t$} \end{align*} intersects with the boundary of $\bm{L}^\phi_c(R)\setminus \(\Delta(c)\cup \Delta(p^\phi_{c,+})\cup \Delta(p^\phi_{c,-})\)$ only at the boundary of $\Delta(c)\cup \Delta(p_{c,+}^\phi)\cup \Delta(p_{c,-}^\phi)$. Similarly, let $\Delta(p_e^\phi)$ and $\Delta(e)$ be coordinate disks centered at $p_e^\phi$ and $e$, respectively. Take $r,R>0$ such that $L^\phi_e([r,R])\cap \Delta(e)\neq\emptyset$ and $L^\phi_e([r,R])\cap \Delta(p_{e}^\phi)\neq \emptyset$. Let $\bm{L}^\phi_e(r,R)$ be a relatively compact tubular neighborhood of $L_e^\phi([r,R])$ with a similar property as $\bm{L}_c^\phi(R)$, i.e. for any $y\in \bm{L}_e^\phi(r,R)$, the path $\ell_y^\phi$ defined by the same condition intersects with the boundary of $\bm{L}^\phi_e(r,R)\setminus \(\Delta(e)\cup\Delta(p^\phi_e)\)$ only at the boundary of $\Delta(e)\cup\Delta(p^\phi_e)$. Since $\bm{L}^\phi_e(r,R)$ and $\bm{L}_c^\phi(R)$ are assumed to be relatively compact, we have: \begin{lemma}\label{GDG} For given $R>0,r>0$, fix the branches of $\log g$ on $\bm{L}^\phi_e(r,R)\setminus \Delta(e)$ and on $\bm{L}_c^\phi(R)$.
There exists $\delta>0$ such that if $|\mu|<\delta$, then the following property holds: for any $x\in \bm{L}^\phi_c(R)\setminus \Delta(c)$, and any $y\in \bm{L}_e^\phi(r,R)\setminus \Delta(e)$, the functions \begin{align*} &{\mathrm{Re}}\(e^{-{\sqrt{-1}}\phi}(f-\mu \log g)\)\circ \ell_x^\phi (t), &{\mathrm{Re}}\(e^{-{\sqrt{-1}}\phi}(f-\mu\log g)\)\circ\ell_y^\phi (t) \end{align*} monotonically increase as $t$ increases, as long as $\ell_x^\phi(t)$ and $\ell_y^\phi(t)$ are in the closures of $\bm{L}^\phi_c(R)\setminus\( \Delta(c)\cup\Delta(p^\phi_{c,+})\cup \Delta(p^\phi_{c,-})\)$ and $\bm{L}_e^\phi(r,R)\setminus \(\Delta(e)\cup \Delta(p_e^\phi)\)$, respectively. \qed \end{lemma} In the following discussion, when we consider a coordinate function $z$ on a disk $\Delta$ in $X$, we always assume that $z$ is defined on a neighborhood of the closure of $\Delta$. \subsection{Foliations of Abelian differentials around the singular points}\label{foliation} For each $\mu\in\C$, we set \begin{align*} \alpha_\mu\coloneqq df-\mu g^{-1}dg. \end{align*} We shall describe the straight arcs of the Abelian differential $\alpha_\mu$ around the singular points of $\alpha_\mu$ when $|\mu|>0$ is sufficiently small. Here, by the term ``straight arc'', we mean a path along which the function ${\mathrm{Im}} (e^{-{\sqrt{-1}}\phi}(f-\mu\log g))$ is constant and ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}(f-\mu \log g))$ increases for some $\phi$. When we specify $\phi$, we also use the term $\phi$-arc. The pole locus of $\alpha_\mu$ is $D=P\sqcup E$, and the zero locus of $\alpha_\mu$ is $\{\nu_p(\mu)\mid p\in \pi_{\Sigma}^{-1}(0)=E\sqcup {\mathrm{Crit}}(f)\}$ by definition. \subsubsection{Around $P$} A maximal straight arc (in a direction) is called \textit{a trajectory}. \begin{lemma}\label{trap} Fix a positive real number $\delta>0$. Then there exists an open neighborhood $\Delta(p)$ of $p$ such that any trajectory of $\alpha_\mu$ which intersects $\Delta(p)$ tends to $p$ for any $\mu\in \Delta_\mu(\delta)$. \end{lemma} \begin{proof} The residue of $\alpha_\mu$ at $p$ is $\mu$ times a constant and is hence bounded on $\Delta_\mu(\delta)$. Then we obtain the claim by the proof of \cite[Theorem 7.4]{Strebel}. \end{proof} \subsubsection{Around ${\mathrm{Crit}}(f)$}\label{around c} \begin{lemma}\label{w_c} There exist $\delta>0$, a neighborhood $\Delta(c)$ of $c$, and an open embedding $w\colon \Delta_\mu(\delta)\times \Delta(c)\to \Delta_\mu(\delta)\times \C$ over $\Delta_\mu(\delta)$ such that for each $\mu\in \Delta_\mu(\delta)$, $w_\mu\coloneqq w_{|\{\mu\}\times \Delta(c)}$ is a coordinate on $\Delta(c)$ centered at $\nu_c(\mu)$ with $\alpha_\mu=2w_\mu d w_\mu$. \end{lemma} \begin{proof} We can take a neighborhood ${\mathrm{nb}}(c)$ of $c$ so that $\log g$ is single-valued on ${\mathrm{nb}}(c)$. Take a coordinate $z=z_c$ on ${\mathrm{nb}}(c)$ centered at $c$. Then we set \[F(\mu,z)\coloneqq f(z)-\mu \log g(z).\] Note that we have $dF_\mu=\alpha_\mu$ on ${\mathrm{nb}}(c)$ for each $\mu$, where $F_\mu(z)=F(\mu,z)$. Then let $G(\mu,z)$ be the function defined by the following equation: \begin{align*} F(\mu,z)-F(\mu,\nu_c(\mu))=(z-\nu_c(\mu))^2G(\mu,z), \end{align*} where we denote $z\circ\nu_c(\mu)$ by $\nu_c(\mu)$ for simplicity. Since $G(\mu,\nu_c(\mu))= \frac{1}{2}\partial_z^2F(\mu,\nu_c(\mu))\neq 0$, taking $\delta>0$ and $\Delta(c)\subset {\mathrm{nb}}(c)$ small enough, we may assume that $G(\mu, z)$ is nowhere vanishing on $\Delta_\mu(\delta)\times \Delta(c)$.
Then we take \[w_\mu(z)\coloneqq (z-\nu_c(\mu))\sqrt{G(\mu, z)}\] and $w(\mu,z)=(\mu, w_\mu(z))$. Since $w_\mu^2=F_\mu-F_\mu(\nu_c(\mu))$, we have $2w_\mu d w_\mu=dF_\mu=\alpha_\mu$, as desired. \end{proof} \begin{remark} There is a sign ambiguity in $w_\mu$. \end{remark} \subsubsection{Around $E$}\label{Around E} Take a disk $\Delta(e)$ with a coordinate $z=z_e$ centered at $e$ such that $f(z)=z+f(e)$ on $\Delta(e)$. We have $g(z)=z^{n_e}h(z)$, where $n_e\in \mathbb{Z}\setminus \{0\}$ and $h$ is a nowhere vanishing holomorphic function on $\Delta(e)$ (taking $\Delta(e)$ smaller if necessary). In this coordinate, we have \[\alpha_\mu=d z-\mu n_e \frac{dz}{z}-\mu d\log h,\] and hence $\nu_e(\mu)=z(\nu_e(\mu))$ is characterized by the following equation: \begin{align}\label{nu_e} \(1-\mu\frac{1}{h(\nu_e(\mu))}\frac{dh}{dz}(\nu_e(\mu))\)\nu_e(\mu)=n_e\mu. \end{align} Roughly speaking, (\ref{nu_e}) implies that $\nu_e(\mu)$ is close to $n_e\mu$ when $|\mu|$ is small. Let $\cal{S}\subset \C^*$ be a proper sector, i.e. a subset of the form $\cal{S}=\{\mu\in \C\mid a<\arg(\mu)<b\}$ with $|a-b|<2\pi$. We set $\cal{S}(\delta)\coloneqq \cal{S}\cap \Delta_\mu(\delta)$. Such intersections are also called proper sectors. The equation (\ref{nu_e}) implies the following: \begin{lemma}\label{sector} For any proper sector $\cal{S}$, there exists $\delta>0$ such that $\nu_e(\cal{S}(\delta))$ is contained in a proper sector in $\Delta(e)$. \qed \end{lemma} In a similar way as Lemma \ref{w_c}, we obtain: \begin{lemma}\label{w_e} Let $\cal{S}$ be a proper sector in $\C_\mu$. Then, there exist $\delta>0$, an open neighborhood $\mathcal{U}_{e}\subset \cal{S}(\delta)\times \Delta(e)$ of the graph of $\nu_e$ on the proper sector $\cal{S}(\delta)\subset \Delta_{\mu}^*(\delta)$, and an open embedding $w\colon \cal{U}_{e}\to \cal{S}(\delta)\times \C$ over $\cal{S}(\delta)$ such that we have $\alpha_\mu=2w_\mu d w_\mu$ on the fiber $\cal{U}_{e,\mu}=\cal{U}_e\times_{\cal{S}(\delta)}\{\mu\}$. \end{lemma} \begin{proof} By Lemma \ref{sector}, we can take a branch of $\log g$ on a neighborhood of the graph of $\nu_e$ on $\cal{S}(\delta)$. Then the remaining proof is similar to that of Lemma \ref{w_c}. \end{proof} For each $\mu\in\Delta^*_\mu(\delta)$, let $\varphi_\mu\colon \Delta(e)\to\C$ be the holomorphic function defined by \[\varphi_\mu(z)\coloneqq z\exp\(-\frac{z}{n_e\mu}+\frac{1}{n_e}\int_0^z d\log h\).\] Then we have \[-n_e\mu d\varphi_\mu=\varphi_\mu \alpha_\mu.\]
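Indeed, this identity follows from a direct computation of the logarithmic derivative, which we record for the reader's convenience: \begin{align*} -n_e\mu\, d\log\varphi_\mu =-n_e\mu\left(\frac{dz}{z}-\frac{dz}{n_e\mu}+\frac{1}{n_e}\,d\log h\right) =dz-\mu n_e\frac{dz}{z}-\mu\, d\log h =\alpha_\mu. \end{align*}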
By this formula, $\varphi_\mu$ is branched at $\nu_e(\mu)$ with ramification index two. We also have $\alpha_\mu=\varphi_\mu^*(-n_e\mu \zeta^{-1}d\zeta)$. Hence the straight arcs of $\alpha_\mu$ can be seen as the pull back of straight arcs of $-n_e\mu \zeta^{-1}d\zeta$. In particular, $\{z\in \Delta(e)\mid |\varphi_\mu(z)|=|\varphi_\mu(\nu_e(\mu))|\}$ defines a union of straight arcs in $\Delta(e)$. We shall consider the pull back $V_\mu\coloneqq \varphi_\mu^{-1}(\{\zeta\in \C\mid |\zeta|<|\varphi_\mu(\nu_e(\mu))|\})$. It has two connected components $V^0_\mu$ and $V^1_\mu$, where $e\in V^0_\mu$. \begin{lemma}\label{||=||} If $\delta>0$ is sufficiently small and $\mu\in \Delta^*_\mu(\delta)$, then the closure of $V^0_\mu$ in $X$ is in the interior of $\Delta(e)$. \end{lemma} \begin{proof} Take $R_e>0$ so that $\Delta(e)=\{z\mid |z|<R_e\}$. On the one hand, the system of equations \begin{align*} &|z|= R_e, &|\varphi_\mu(z)|=|\varphi_\mu(\nu_e(\mu))| \end{align*} has two solutions for each $\mu\in \Delta^*_\mu(\delta)$ if $\delta>0$ is sufficiently small. On the other hand, the closure $\overline{V}^1_\mu$ of $V^1_\mu$ intersects $\partial \Delta(e)$. Hence we have $\# \partial(\overline{V}^1_\mu\cap \partial\Delta(e))\geq 2$. It follows that the closure $\overline{V}^0_\mu$ does not intersect $\partial \Delta(e)$. \end{proof} Let $\gamma^\phi_{\mu}\colon (-\epsilon',\epsilon')\to \cal{U}_{e,\mu}$ ($\mu\in \cal{S}$ for a proper sector $\cal{S}$) be a path determined by the conditions (i) $\gamma^\phi_{\mu}(0)=\nu_e(\mu)$, and (ii) $w_\mu\circ\gamma_{\mu}^\phi(t)=e^{{\sqrt{-1}}\phi/2}t$ for $\phi\in \mathbb{R}$ and sufficiently small $\epsilon'>0$. In the case ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}n_e\mu)>0$, the composition $\varphi_\mu\circ\gamma^\phi_\mu$ satisfies the inequality \begin{align*} |\varphi_\mu(\gamma_\mu^\phi(t))|\leq |\varphi_\mu(\nu_e(\mu))|, \end{align*} where the equality holds if and only if $t=0$. In this case, choosing the sign of $w_\mu$ properly, by Lemma \ref{||=||}, the path $\gamma^\phi_\mu$ extends to a path $\gamma^\phi_\mu\colon (-\infty, \epsilon]\to X$ ($\epsilon>\epsilon'$) such that \begin{itemize} \item the path is the $\phi$-arc of $\alpha_\mu$ on $(-\infty,\epsilon)\setminus \{0\}$, and that \item the image $\gamma^\phi_\mu((-\infty, \epsilon))$ is in $\Delta(e)$ with $\gamma^\phi_\mu(\epsilon)\in\partial\Delta(e)$. \end{itemize} We have $\lim_{t\to -\infty} \gamma^\phi_\mu(t)=e$, and ${\mathrm{Re}}\(e^{-{\sqrt{-1}}\phi}z(\gamma^\phi_\mu(\epsilon))\)>0$ for $\mu\in \Delta_\mu(\delta)$ with $\delta>0$ sufficiently small. In the case ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}n_e\mu)<0$, the composition $\varphi_\mu\circ\gamma^\phi_\mu$ satisfies the inequality \begin{align*} |\varphi_\mu(\gamma_\mu^\phi(t))|\geq |\varphi_\mu(\nu_e(\mu))|, \end{align*} where the equality holds if and only if $t=0$. In this case, the path $\gamma^\phi_\mu$ extends to a path $\gamma^\phi_\mu\colon [-\epsilon_0, \epsilon_1]\to X$ ($\epsilon_0,\epsilon_1>\epsilon'$) such that \begin{itemize} \item the path is the $\phi$-arc of $\alpha_\mu$ on $(-\epsilon_0,\epsilon_1)\setminus\{0\}$, and that \item the image $\gamma^\phi_\mu((-\epsilon_0,\epsilon_1))$ is in $\Delta(e)$ with $\gamma^\phi_\mu(-\epsilon_0), \gamma^\phi_\mu(\epsilon_1)\in \partial\Delta(e)$. \end{itemize} For sufficiently small $\delta>0$, we have ${\mathrm{Re}}\(e^{-{\sqrt{-1}}\phi}z(\gamma^\phi_\mu(t))\)>0$ at both end points $t=-\epsilon_0,\epsilon_1$ (for $\mu\in\Delta_\mu(\delta)$). \subsection{Construction of the families of paths}\label{construction of gamma} Let $\phi\in \mathbb{R}$ be generic. We shall construct the paths explained in \S \ref{properties of gamma}. \subsubsection{Construction of $\gamma^\phi_c$}\label{Cc} Take a family of coordinates $w\colon \Delta_\mu(\delta)\times \Delta(c)\to \Delta_\mu(\delta)\times\C$ which satisfies the conditions of Lemma \ref{w_c}. Then, there exist a family of closed intervals $I_\mu=[-\epsilon_\mu^-,\epsilon_\mu^+]$ ($\mu\in \Delta_\mu(\delta)$, $\epsilon_\mu^\pm>0$) and a family of paths $\gamma^\phi_{c,\mu}\colon I_\mu\to Y$ such that $\gamma_{c,0}^\phi$ coincides with $L^\phi_c$ (see \S \ref{dumbbell}) restricted to $[-\epsilon^-_0, \epsilon^+_0]$, and such that \begin{itemize} \item $\gamma^\phi_{c,\mu}((-\epsilon_\mu^-,\epsilon_\mu^+))\subset \Delta(c)$, $\gamma^\phi_{c,\mu}(0)=\nu_c(\mu)$, \item $w_\mu\circ\gamma^\phi_{c,\mu}(t)=e^{{\sqrt{-1}}\phi/2}t$ for $t\in (-\epsilon_\mu^-,\epsilon_\mu^+)$, and that \item $\gamma^\phi_{c,\mu}(\pm\epsilon_\mu^\pm)\in \partial\Delta(c)$. \end{itemize} Take $\bm{L}_c^\phi(R)$ as in \S \ref{dumbbell}.
Then we can extend $\gamma^\phi_{c,\mu}$ by connecting it with $\ell^\phi_{x^+_\mu}$ and $\ell^{\phi+\pi}_{x^{-}_\mu}$, where we put $x^\pm_\mu\coloneqq \gamma^\phi_{c,\mu}(\pm\epsilon_\mu^\pm)$ (we take linear re-parametrizations if necessary). Then, we can take $R^\pm_{c,\mu}>0$ smoothly depending on $\mu$ such that $\gamma^\phi_{c,\mu}(\pm R^\pm_{c,\mu})\in \Delta(p_{c,\pm}^\phi)$. We then extend the path so that it is the $\phi$-arc of $\alpha_\mu$ for $t>R^+_{c,\mu}$, and the $(\phi+\pi)$-arc of $\alpha_\mu$ for $t<-R^-_{c,\mu}$. By Lemma \ref{trap}, taking $\delta>0$ sufficiently small, we have \begin{align*} \lim_{t\to \pm \infty}\gamma^\phi_{c,\mu}(t)=p^\phi_{c,\pm} \end{align*} for each $\mu\in \Delta_\mu(\delta)$. In summary, we define the family of paths $\gamma^\phi_{c,\mu}\colon \mathbb{R}\to Y$ by the following conditions: \begin{enumerate} \item On $(-\epsilon^{-}_{\mu},\epsilon^+_{\mu})$, we have $w_\mu(\gamma_{c,\mu}^\phi(t))=e^{{\sqrt{-1}}\phi/2}t$. \item On $[\epsilon_\mu^+,R_{c,\mu}^+]$ (resp. $[-R^-_{c,\mu}, -\epsilon^-_\mu]$), $\gamma^\phi_{c,\mu}$ is a $\phi$-arc (resp. $(\phi+\pi)$-arc) of $\alpha_0=df$. \item On $(R_{c,\mu}^+,\infty)$ (resp. $(-\infty, -R_{c,\mu}^-)$), $\gamma^\phi_{c,\mu}$ is a $\phi$-arc (resp. $(\phi+\pi)$-arc) of $\alpha_\mu$. \end{enumerate} \begin{lemma}\label{case of c} The family of paths $\gamma^\phi_c\colon \Delta_\mu(\delta)\times \mathbb{R}\to \Delta_\mu(\delta)\times Y$, $(\mu, t)\mapsto (\mu, \gamma^\phi_{c,\mu}(t))$ defined above satisfies conditions $(\gamma 1)$ and $(\gamma 3)$ in \S $\ref{properties of gamma}$. \end{lemma} \begin{proof} The condition $(\gamma 1)$ follows directly from the definition. The condition $(\gamma 3)$ follows from the construction and Lemma \ref{GDG}. \end{proof} \subsubsection{Construction of $\gamma^\phi_{e,\pm}$}\label{construct gamma e} If $e\in E_0$, the integer $n_e$ is positive. Hence we have ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}n_e\mu)>0$ on $\cal{S}^\phi_+(\delta)$ and $-{\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}n_e\mu)>0$ on $\cal{S}^\phi_-(\delta)$. Then we put $\gamma^\phi_{e,+,\mu}(t)=\gamma^\phi_{\mu}(t)$ for $t\in (-\infty,\epsilon]$, $\mu\in \cal{S}^\phi_{+}(\delta)$ and $\gamma^\phi_{e,-,\mu}(t)=\gamma^\phi_{\mu}(t)$ for $t\in [-\epsilon_0,\epsilon_1]$, $\mu\in \cal{S}_-^\phi(\delta)$. Similarly to the case of $\gamma^\phi_c$, we extend $\gamma^\phi_{e,+,\mu}$ to $(-\infty, R_{e,+,\mu}]$ so that $\gamma^\phi_{e,+,\mu}$ is the $\phi$-arc of $\alpha_0=df$ on $(\epsilon, R_{e,+,\mu})$ (here, we note that we may assume $\gamma^\phi_{e,+,\mu}((\epsilon, R_{e,+,\mu}))\subset \bm{L}^\phi_e(r, R)$), and $\gamma^\phi_{e,+,\mu}(R_{e,+,\mu})\in \Delta(p_{e}^\phi)$. Then on $(R_{e,+,\mu},\infty)$, we extend $\gamma^\phi_{e,+,\mu}$ as the $\phi$-trajectory ray of $\alpha_\mu$. Then, $\underset{t\to \infty}{\lim}\gamma^\phi_{e,+,\mu}(t)=p_e^\phi$ by Lemma \ref{trap}. Similarly, we extend $\gamma^\phi_{e,-,\mu}$ to $[-R^-_{e,-,\mu}, R^+_{e,-,\mu}]$ so that $\gamma^\phi_{e,-,\mu}$ is the $\phi$-arc (resp. $(\phi+\pi)$-arc) of $\alpha_0=df$ on $(\epsilon_1, R^{+}_{e,-,\mu})$ (resp. $(-R_{e,-,\mu}^-,-\epsilon_0)$), and $\gamma^\phi_{e,-,\mu}(\pm R_{e,-,\mu}^\pm)\in \Delta(p_e^\phi)$. Then, we extend it to $\mathbb{R}$ by considering the $\phi$-arc and the $(\phi+\pi)$-arc on $t>R^+_{e,-,\mu}$ and $t<-R^-_{e,-,\mu}$, respectively. We also have $\underset{t\to\pm \infty}{\lim}\gamma^\phi_{e,-,\mu}(t)=p_e^\phi$. If $e'\in E_\infty$, we have $\mp{\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}n_{e'} \mu)>0$ for $\mu\in\cal{S}^\phi_{\pm}(\delta)$.
Hence we can construct $\gamma^\phi_{e',\pm}$ in the same way as $\gamma^\phi_{e,\mp}$ $(e\in E_0)$. \begin{lemma} The families of paths $\gamma^\phi_{c}$ $(c\in {\mathrm{Crit}}(f))$ and $\gamma^\phi_{e,\pm}$ $(e\in E)$ satisfy the conditions $(\gamma1)$, $(\gamma2)$, and $(\gamma3)$ in \S $\ref{properties of gamma}$. \end{lemma} \begin{proof} The conditions $(\gamma1)$ and $(\gamma3)$ for $\gamma^\phi_c$ are proved in Lemma \ref{case of c}, and the same conditions for $\gamma^\phi_{e,\pm}$ can be proved in the same way. The condition ($\gamma 2$) follows from the fact that, for generic $\phi$, we can take $\bm{L}^\phi_c(R_c)$, $\bm{L}_{e}^\phi(r_e,R_e)$, and $\bm{L}_e^{\phi+\pi}(r'_e,R'_e)$ so that they do not have nontrivial intersections, and from the fact that on $\Delta(p)$ the paths are taken to be the $\phi$- (or $(\phi+\pi)$-)arcs of $\alpha_\mu$. \end{proof} \subsection{End of the proof of Theorem \ref{WTS}}\label{saddle point method} We shall construct the sections $\mathfrak{s}^\phi_c$ for $c\in {\mathrm{Crit}}(f)$ and $\mathfrak{s}^\phi_{e,\pm}$ for $e\in E$ in Theorem \ref{WTS}, i.e. finish the proof. \subsubsection{Construction of $\mathfrak{s}^\phi_c$}\label{construct s_c} Let $\gamma_c^\phi$ be the path constructed in \S \ref{Cc}. Put $\overline{\mathbb{R}}\coloneqq \mathbb{R}\cup\{\pm \infty\}$, which is homeomorphic to the 1-simplex $[0,1]$. Then, we can extend it to the family of paths $\widetilde{\gamma}^\phi_c\colon \Delta_\mu(\delta)\times \overline{\mathbb{R}}\to \Delta_\mu(\delta)\times \widetilde{X}$. For $(\lambda,\mu)\in S^\circ$, set \[P_\lambda^{\rm rd}\coloneqq \{(\lambda,\mu)\}\times_{S^\circ}\cal{P}^{\rm rd}\subset \widetilde{X}.\] Then we can easily see that if ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}\lambda)>0$, then $\widetilde{\gamma}^\phi_{c,\mu}(\pm \infty)\in P^{\rm rd}_\lambda$. Hence, extending $\widetilde{\gamma}^\phi_c$ to $\C^*_\lambda\times \Delta_\mu(\delta)$ trivially, and taking a branch of $\log g$ on $\nu_c(\Delta_\mu(\delta))$, we obtain a section \begin{align*} {\mathfrak{s}}^\phi_c\coloneqq \widetilde{\gamma}_c^\phi\otimes e^{-f/\lambda}g^{\mu/\lambda}\bm{e} \end{align*} of ${\mathscr{H}}^{\rm rd}_{1}$, which can also be seen as a section in $\Gamma(V^\phi,\L^{\rm Be})$. \begin{lemma}\label{c-case} The section $\mathfrak{s}^\phi_c$ $(c\in {\mathrm{Crit}}(f))$ constructed above satisfies the condition $(1)$ in Theorem \ref{WTS}. \end{lemma} \begin{proof} Let $\omega$ be a section of $\pi_{{\mathcal{X}}*}\iota^*{\mathscr{M}}\otimes \Omega^1_{X}$ which represents a section $[\omega]$ of $\iota^*{\mathscr{H}}^1_{{\rm dR},\bm{0}}$. Then we have \begin{align*} \<\mathfrak{s}_{c}^\phi, [\omega]\>_{\rm Per}=\int_{\gamma^\phi_{c,\mu}}e^{-f/\lambda}g^{\mu/\lambda}\omega. \end{align*} Then, by the condition $(\gamma 3)$, the description in \S \ref{around c}, and the standard arguments of the saddle point method (see e.g. \cite{AGV}, \cite{Mochizuki}), we obtain that \[\exp(-\varphi_c(\lambda,\mu))\<\mathfrak{s}_{c}^\phi, [\omega]\>_{\rm Per}\] is of moderate growth (and not of rapid decay) when $(\lambda,\mu)\to (0,0)$ in a neighborhood of $V^\phi$ in $\widetilde{B}$, where $\varphi_c(\lambda,\mu)$ corresponds to the branch of $\log g$ fixed when $\mathfrak{s}^\phi_c$ was defined. This implies the condition (1) of Theorem \ref{WTS} by Lemma \ref{period criterion}.
\end{proof} \subsubsection{Construction of $\mathfrak{s}_{e,-}^\phi$ $(e\in E_0)$ and $\mathfrak{s}^\phi_{e',+}$ $(e'\in E_\infty)$} \label{construct s_e 1} For $e\in E_0$, let $\gamma^\phi_{e,-}$ be the family of paths constructed in \S \ref{construct gamma e}; for $e'\in E_\infty$, let $\gamma^\phi_{e',+}$ be the family constructed there as well. In both cases, the paths are extended to the families of paths \begin{align*} &\widetilde{\gamma}^\phi_{e,-}\colon \cal{S}^\phi_-(\delta)\times \overline{\mathbb{R}}\to\cal{S}^\phi_-\times \widetilde{X}, &\widetilde{\gamma}^\phi_{e',+}\colon \cal{S}^\phi_+(\delta)\times \overline{\mathbb{R}}\to \cal{S}^\phi_+\times \widetilde{X} . \end{align*} As in \S \ref{construct s_c}, the end points $\widetilde{\gamma}^\phi_{e,-}(\pm\infty)$ are in $P^{\rm rd}_{\lambda}$ if ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}\lambda)>0$. By Lemma \ref{sector}, we can take a branch of $\log g$ on $\nu_e(\cal{S}^\phi_-(\delta))$ and on $\nu_{e'}(\cal{S}^\phi_+(\delta))$. Then we obtain the sections \begin{align*} &\mathfrak{s}^\phi_{e,-}\coloneqq \widetilde{\gamma}_{e,-}^\phi\otimes e^{-f/\lambda}g^{\mu/\lambda}\bm{e}, &\mathfrak{s}^\phi_{e',+}\coloneqq \widetilde{\gamma}_{e',+}^\phi\otimes e^{-f/\lambda}g^{\mu/\lambda}\bm{e}. \end{align*} \begin{lemma} The sections $\mathfrak{s}^\phi_{e,-}$ and $\mathfrak{s}^\phi_{e',+}$ satisfy the conditions $(1)$ and $(3)$ in Theorem \ref{WTS}. \end{lemma} \begin{proof} The proof of the condition (1) is similar to that of Lemma \ref{c-case}. The condition (3) is trivial by the construction. \end{proof} \subsubsection{Construction of $\mathfrak{s}_{e,+}^\phi$ $(e\in E_0)$ and $\mathfrak{s}^\phi_{e',-}$ $(e'\in E_\infty)$} For $e\in E_0$, let $\gamma^\phi_{e,+}$ be the family of paths constructed in \S \ref{construct gamma e}. Then, the limit ${\gamma}^\phi_{e,+}(\infty)\coloneqq \lim_{t\to \infty}\gamma^\phi_{e,+,\mu}(t)$ exists in $\widetilde{X}$, and similarly to \S \ref{construct s_c} and \S \ref{construct s_e 1}, the limit ${\gamma}^\phi_{e,+}(\infty)$ is in $P^{\rm rd}_\lambda$ if ${\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}\lambda)>0$. However, the limit $\lim_{t\to-\infty}\gamma^{\phi}_{e,+,\mu}(t)$ does not exist (in general) in $\widetilde{X}$. Hence we modify the path by taking $\widetilde{\gamma}^\phi_{e,+,\mu}\colon \overline{\mathbb{R}}\to \widetilde{X}$ as follows: \begin{itemize} \item $\widetilde{\gamma}^\phi_{e,+,\mu}(t)=\gamma^{\phi}_{e,+,\mu}(t)$ for $t\in [-\epsilon', \infty]$ (see \S \ref{Around E} for the definition of $\epsilon'$). \item $\widetilde{\gamma}^\phi_{e,+,\mu}([-\infty,-\epsilon'])$ is a line segment of constant argument with respect to the coordinate $z=z_e$ which connects $\widetilde{\gamma}^\phi_{e,+,\mu}(-\epsilon')$ and a point in $\varpi_{X}^{-1}(e)$. \end{itemize} In a similar way as in \S \ref{construct s_e 1}, we obtain a section \begin{align*} \mathfrak{s}^\phi_{e,+}\coloneqq \widetilde{\gamma}^\phi_{e,+}\otimes e^{-f/\lambda}g^{\mu/\lambda}\bm{e}. \end{align*} In the same way, with the roles of $+$ and $-$ suitably interchanged, we also obtain the family of paths $\widetilde{\gamma}^\phi_{e',-}$ and the section \begin{align*} \mathfrak{s}^\phi_{e',-}\coloneqq \widetilde{\gamma}^\phi_{e',-}\otimes e^{-f/\lambda}g^{\mu/\lambda}\bm{e} \end{align*} for $e'\in E_\infty$. \begin{lemma} The sections $\mathfrak{s}^{\phi}_{e,+}$ $(e\in E_0)$ and $\mathfrak{s}^\phi_{e',-}$ $(e'\in E_\infty)$ satisfy the conditions $(1)$ and $(3)$ in Theorem \ref{WTS}. \end{lemma} \begin{proof} The condition $(3)$ is trivial by the construction.
We shall check the condition $(1)$ in the case $e\in E_0$ (the case $e'\in E_\infty$ can be treated in the same way). Let $[\omega]$ be a section of $\iota^*{\mathscr{H}}^1_{{\rm dR},\bm{0}}$ on a neighborhood of $(0,0)\in S$ represented by $\omega\in \pi_{{\mathcal{X}}*}{\mathscr{M}}\otimes \Omega^{1}_{{\mathcal{X}}/S}$. Set \[\widetilde{V}^\phi_+\coloneqq \(\{\lambda\in \C\mid {\mathrm{Re}}(e^{-{\sqrt{-1}}\phi}\lambda)>0\} \times\cal{S}^\phi_+(\delta)\) \subset{S}^*. \] Then, we have $\iota^*{\mathscr{H}}^1_{{\rm dR},\bm{0}}(\widetilde{V}^\phi_+) =\iota^*{\mathscr{H}}^1_{{\rm dR}, E_0!}(\widetilde{V}^\phi_+)=\iota^*(\lim{\mathscr{H}}^{1}_{{\rm dR}, a ,b})$, and hence $[\omega]$ is also represented by $\omega_a\in\iota^*\pi_*({\mathscr{M}}_{a, b}(*\cal{P})\otimes \Omega^1_{{\mathcal{X}}/S})$, where $a$ is arbitrarily small. Taking $a$ sufficiently small, the period pairing can be computed as follows: \begin{align*} \<\mathfrak{s}^\phi_{e,+}, [\omega]\>_{\rm Per} &=\int_{\widetilde{\gamma}^\phi_{e,+}}e^{-f/\lambda}g^{\mu/\lambda}\omega_a\\ &=\int_{\gamma^\phi_{e,+}}e^{-f/\lambda}g^{\mu/\lambda}\omega_a&(\text{limit Stokes theorem})\\ &=\int_{\gamma^\phi_{e,+}}e^{-f/\lambda}g^{\mu/\lambda}\omega&(\text{limit Stokes theorem}) \end{align*} Then, we can use the same argument as in the proof of Lemma \ref{c-case} to see (1). \end{proof} \begin{proof}[Proof of Theorem \ref{WTS}] It remains to see that the sections constructed above satisfy the condition (2) in Theorem \ref{WTS}. This follows from the following observations: \begin{itemize} \item The paths $\gamma_{c,\mu}^{\phi}$ and $\gamma^\phi_{e,\pm,\mu}$ do not intersect with $\gamma_{c',\mu}^{\phi+\pi}$ or $\gamma_{e',\pm,\mu}^{\phi+\pi}$ if $c\neq c'$ and $e\neq e'$, where $c,c'\in {\mathrm{Crit}}(f)$ and $e, e'\in E$ (condition $(\gamma 2)$). \item The paths $\gamma_{c,\mu}^\phi$ and $\gamma_{c,\mu}^{\phi+\pi}$ (resp. $\gamma_{e,\pm,\mu}^\phi$ and $\gamma^{\phi+\pi}_{e,\mp,\mu}$) intersect transversally only at $\nu_c(\mu)$ (resp. $\nu_e(\mu)$), at which we can take the branch of $\log g$ so that $\<\mathfrak{s}_c^\phi,\mathfrak{s}_c^{\phi+\pi}\>=1$ (resp. $\<\mathfrak{s}_{e,\pm}^\phi, \mathfrak{s}^{\phi+\pi}_{e,\mp}\>=1$). \end{itemize} (See \cite{MMT} and \cite{FSY}.) \end{proof} \subsection{Examples}\label{last examples} In this last subsection, we shall give two examples of $\L^{\rm Be}(f,g)$. The first one is related to the gamma function, and the second one is related to cylindrical functions. \subsubsection{Gamma function} We consider the case treated in Examples \ref{EXG}, \ref{Ga}, \ref{GDual}, and \ref{spec gamma}. In this case, $\pi_{\Sigma}^{-1}(0)=E_0=\{0\}$. Any $\phi\in \mathbb{R}$ is generic. For $(\lambda,\mu)\in \widetilde{V}^\phi_+$, we have \begin{align*} \<\widetilde{\mathfrak{s}}^\phi_{0,+},\iota^*e_{-1}\>_{\rm Per} &=\int_{\gamma^{\phi}_{0,+,\mu}}e^{-z/\lambda}z^{\mu/\lambda}\lambda^{-1}\frac{dz}{z}\\ &=\lambda^{-1}\int_{\gamma^{\phi}_{0,+,\mu}}e^{-\zeta}(\zeta\lambda)^{\mu/\lambda}\frac{d\zeta}{\zeta}\\ &=\lambda^{\mu/\lambda-1}\Gamma(\mu/\lambda). \end{align*} By the computation in Example \ref{GDual}, we obtain \[\widetilde{\mathfrak{s}}^\phi_{0,+}=(2\pi{\sqrt{-1}})^{-1}\lambda^{\mu/\lambda}\Gamma(\mu/\lambda)e_{0}\] in ${\mathscr{H}}^1_{{\rm dR},\bm{0}|S^*}$. Note that we can directly check the relations $[\nabla_{\mathfrak{a}},\widetilde{\mathfrak{s}}^\phi_{0,+}]=0$ and ${\mathbb{S}}(\widetilde{\mathfrak{s}}^\phi_{0,+})=\widetilde{\mathfrak{s}}^\phi_{0,+}$ in this case.
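For the reader's convenience, we spell out the substitution $z=\lambda\zeta$ behind the period computation above: \begin{align*} e^{-z/\lambda}z^{\mu/\lambda}\frac{dz}{z}\Big|_{z=\lambda\zeta} =e^{-\zeta}\lambda^{\mu/\lambda}\zeta^{\mu/\lambda}\frac{d\zeta}{\zeta}, \end{align*} so that \begin{align*} \lambda^{-1}\int e^{-\zeta}(\zeta\lambda)^{\mu/\lambda}\frac{d\zeta}{\zeta} =\lambda^{\mu/\lambda-1}\int e^{-\zeta}\zeta^{\mu/\lambda-1}\,d\zeta =\lambda^{\mu/\lambda-1}\Gamma(\mu/\lambda), \end{align*} where the last equality is Euler's integral for the gamma function, evaluated along the image of the path $\gamma^\phi_{0,+,\mu}$.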
In a similar way, using Hankel's contour integral representation of the gamma function, we also obtain \[\widetilde{\mathfrak{s}}^{\phi}_{0,-}=(2\pi{\sqrt{-1}})^{-1}(1-q)\lambda^{\mu/\lambda}\Gamma(\mu/\lambda)e_{0}\] on $V^\phi_-$. Hence we obtain the following: \begin{proposition} We have the natural isomorphism \begin{align*} (\L^{\rm Be}(f,g),\L^{\rm Be}_{\leqslant}(f,g))\simeq (\L_\Gamma,\L_{\Gamma\leqslant}) \end{align*} where $f, g$ are as in Example \ref{EXG} and $\L_\Gamma$ is as in Example \ref{GAMMA FUN}.\qed \end{proposition} \subsubsection{Cylindrical functions} We consider the case treated in Examples \ref{Bessel}, \ref{BDual}, and \ref{spec bessel}. In this case, we have $\pi_{\Sigma}^{-1}(0)={\mathrm{Crit}}(f)=\{\pm 1\}$. We put $c_1=1$ and $c_2=-1$. A real number $\phi$ is generic if and only if $e^{{\sqrt{-1}}\phi}\notin\mathbb{R}$. Take $\phi$ so that ${\mathrm{Im}} (e^{{\sqrt{-1}}\phi})>0$. Let $H^{(1)}_\nu(z)$ and $H^{(2)}_\nu(z)$ $(\nu, z \in \C)$ denote the first and second Hankel functions. We then put \[C^{(i)}(\lambda,\mu)\coloneqq \pi{\sqrt{-1}} e^{\pi{\sqrt{-1}} \mu /2\lambda}H^{(i)}_{\mu/\lambda}(2{\sqrt{-1}} \lambda^{-1})\] for $(\lambda,\mu)\in S^*$ and $i=1,2$. Then we have \begin{align*} \<\mathfrak{s}_{c_i}^\phi, \iota^*e_{j}\>_{\rm Per} &=\lambda^{-1}\int_{\gamma^\phi_{c_i}}e^{-({z+z^{-1}})/\lambda}z^{\mu/\lambda-j-1}\frac{dz}{z}\\ &=\lambda^{-1}e^{\pi{\sqrt{-1}}(\mu/\lambda-j-1)/2}\int_{\gamma^\phi_{c_i}}e^{-(\zeta-\zeta^{-1})/\lambda}\zeta^{\mu/\lambda-j-1}\frac{d\zeta}{\zeta}\\ &=\lambda^{-1}C^{(i)}(\lambda, \mu-(j+1)\lambda) \end{align*} for $i=1,2$ and $j=0,-1$. By the computation given in Example \ref{BDual}, we obtain \begin{align*} 2\pi{\sqrt{-1}}\mathfrak{s}_{c_i}^\phi= C^{(i)}(\lambda,\mu)e_0+C^{(i)}(\lambda,\mu-\lambda) e_{-1} \quad (i=1,2). \end{align*} We can directly check that $\nabla_{\mathfrak{a}}(\mathfrak{s}_{c_i}^\phi)=0$ and ${\mathbb{S}}(\mathfrak{s}_{c_i}^\phi)=\mathfrak{s}_{c_i}^\phi$. Note that in the coordinate $(u, v)$, we have \[C^{(i)}(uv, v)= \pi{\sqrt{-1}} e^{\pi{\sqrt{-1}} /2u}H^{(i)}_{u^{-1}}(2{\sqrt{-1}} u^{-1}v^{-1}).\] Hence Theorem \ref{main theorem} in this case seems to be closely related to the asymptotic analysis of Olver \cite{Olver} on Bessel functions. \subsection*{Acknowledgement} The author is grateful to Fumihiko Sanda, Jeng-Daw Yu, Saiei-Jaeyeong Matsubara-Heo, Keiji Matsumoto, Mikhail Kapranov, and Tatsuki Kuwagaki for valuable discussions. The discussion with Saiei-Jaeyeong Matsubara-Heo gave the author an idea about the choice of basis in the proof of Theorem \ref{WTS}. Keiji Matsumoto told the author about the paper \cite{MMT}. He would like to express his gratitude to Takuro Mochizuki and Kyoji Saito for their kindness and encouragement. Takuro Mochizuki also kindly pointed out some mistakes in an early draft. He would also like to thank his son, Hinata, who joined his family when he was writing the first draft of this paper, for giving him happiness and pleasure. This work was supported by JSPS KAKENHI Grant Numbers JP18H05829, 19K21021, and 20K14280, and partially by 18H01116. This work was also supported by World Premier International Research Center Initiative (WPI), MEXT, Japan.
\section*{Acknowledgements} This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no 812997. \bibliographystyle{splncs04} \section{Conclusion and Future Work} In this paper, we have presented our solutions for two tasks in CLEF CheckThat! 2020. In the first task, we used syntactic and contextual features with an SVM for predicting the check-worthiness of tweets in Arabic and English. For syntactic features, we evaluated Parts-of-Speech tags, named entities and syntactic dependency relations, and used the best feature sets for both languages. In the case of contextual features, we evaluated different word embeddings, BERT models and sentence-transformers to capture the semantics of each tweet or sentence. For future work, we would like to evaluate the possibility of using relevant metadata and other modalities like images and videos present in tweets for predicting a claim's check-worthiness. In the second task, we evaluated monolingual and multilingual sentence-transformers to retrieve verified claims for the query tweet. We found that an off-the-shelf multilingual sentence-transformer is much better suited to the semantic textual similarity task than the monolingual BERT models. \section{Experiments and Results} \subsection{Task-1: Tweet Check-Worthiness Prediction} \subsubsection{Dataset and Training Details} The English dataset consists of training, development (dev) and test splits with 672, 150 and 140 tweets respectively, all on the topic of COVID-19. We perform a grid search using the development set to find the best parameters. The Arabic dataset consists of training and test splits with 1500 tweets on 3 topics and 6000 tweets on 12 topics respectively, with 500 tweets per topic. For validation purposes, we keep 10\% (150 samples) of the training data as a development set. The official ranking of submitted systems for this task is based on Mean Average Precision (MAP) and Precision@30 (P@30) for the English and Arabic datasets, respectively. To train the SVM models for both English and Arabic, we perform a grid search over the PCA energy (\%) conservation, the regularization parameter \textit{C} and the RBF kernel's \textit{gamma}. The PCA energy ranges from 100\% (original features) to 95\% in decrements of 1\%, and both \textit{C} and \textit{gamma} vary between -3 and 3 on a log-scale with 30 steps. For faster training on a large grid search, we use ThunderSVM \cite{wenthundersvm18}, which takes advantage of a GPU or a multi-core system to speed up SVM training. \subsubsection{Results} Our submissions used the best models that we obtained from the grid search and are briefly discussed below. \textbf{English}: We made 3 submissions in total. Our primary (Run-1) and $2^{nd}$ contrastive (Run-3) submissions use sentence embeddings computed from BERT-large word embeddings as discussed in the proposed work section. In addition, both submissions use POS tag and dependency relation features. Interestingly, we found that the best performing sentence embeddings did not include stop-words. The primary submission (Run-1) uses an ensemble of predictions from three models trained on the concatenation of the last 4 hidden layers, the average of the last 4 hidden layers, and the $2^{nd}$-to-last hidden layer. The $2^{nd}$ contrastive submission (Run-3) uses predictions from the model trained on the best performing sentence embedding, computed by concatenating the last 4 hidden layers.
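To make this setup concrete, the following minimal sketch illustrates the Task-1 pipeline just described: a tweet embedding obtained by concatenating the last 4 hidden layers of BERT and averaging over tokens, concatenated with the precomputed syntactic features, followed by the PCA + RBF-kernel SVM grid search. The model name, the number of cross-validation folds and the use of scikit-learn's \texttt{SVC} in place of ThunderSVM (which exposes a scikit-learn-compatible interface) are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
# Illustrative sketch of the Task-1 pipeline; model name, CV folds and the
# use of sklearn's SVC instead of ThunderSVM are assumptions, not the exact setup.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
encoder = AutoModel.from_pretrained("bert-large-uncased",
                                    output_hidden_states=True)

def tweet_embedding(text):
    """Token-averaged concatenation of the last 4 hidden layers (4 x 1024 dims).
    Stop-word filtering of tokens (used for the English runs) is omitted here."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).hidden_states  # tuple: embeddings + 24 layers
    last4 = torch.cat(hidden[-4:], dim=-1)        # (1, seq_len, 4096)
    return last4.mean(dim=1).squeeze(0).numpy()   # (4096,)

def featurize(tweets, syntactic_feats):
    """Concatenate contextual embeddings with POS/dependency feature vectors."""
    emb = np.stack([tweet_embedding(t) for t in tweets])
    return np.hstack([emb, syntactic_feats])

pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA()),
                 ("svm", SVC(kernel="rbf"))])
grid = {
    # None keeps all components; floats keep the given fraction of variance.
    "pca__n_components": [None, 0.99, 0.98, 0.97, 0.96, 0.95],
    "svm__C": np.logspace(-3, 3, 30),
    "svm__gamma": np.logspace(-3, 3, 30),
}
search = GridSearchCV(pipe, grid, scoring="average_precision", cv=3)
# search.fit(featurize(train_tweets, train_syntactic), train_labels)
\end{verbatim}
Ranking tweets by the decision value of the best estimator then yields the scores from which MAP and P@K are computed.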
Our $1^{st}$ contrastive submission (Run-2) uses an ensemble of predictions from three models trained with 25-, 50- and 100-dimensional GloVe \cite{pennington2014glove} Twitter embeddings, together with the same POS tag and dependency relation features. We use majority voting to get the final prediction and the mean of the decision values to get the final decision value. We found that removing the stop-words before computing the average of the word embeddings actually degraded the performance, and hence we included them in the average. We also report additional results showing the effect of stop-words, POS tags, named entities, dependency relations and ensemble predictions in Table~\ref{tab:eng_t1}. The effect of stop-words can be clearly seen in the alternative runs of Run-1 and Run-3, where the MAP drops by 1-2 points. Similarly, the negative effect of removing POS tag and dependency relation features can be seen in the rest of the alternative runs. Lastly, adding named entity features to the original submissions also decreases the precision by 1-2 points. This might be because the tweets contain very few named entities, which are therefore not useful to distinguish between check-worthy and not check-worthy claims. For comparison with other teams in the challenge, we show the top 3 results at the bottom of the table for reference. Team Accenture \cite{clef-checkthat-williams:2020} fine-tuned a RoBERTa model with an extra mean pooling and a dropout layer to prevent overfitting. Team Alex \cite{clef-checkthat-Nikolov:2020} experimented with different tweet pre-processing techniques and various transformer models together with logistic regression and SVM. Their main submission used logistic regression trained on 5-fold predictions from RoBERTa concatenated with tweet metadata. Team QMUL-SDS \cite{clef-checkthat-Alkhalifa:2020} fine-tuned a BERT model pre-trained specifically on COVID twitter data. \bgroup \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{1.5} \begin{table}[ht] \caption{Task-1 Check-Worthiness English Results, MAP (Mean Average Precision), DRel (dependency relations), NE (named entities), *Primary Submission} \label{tab:eng_t1} \centering \begin{tabular}{|l|cccccc|c|} \hline \textbf{Run} & \textbf{Stopwords} & \textbf{Ensemble} & \textbf{POS} & \textbf{DRel} & \textbf{NE} & \textbf{Embedding} & \textbf{MAP} \\ \hline Run-$1^*$ & & \checkmark & \checkmark & \checkmark & & BERT & \textbf{0.7217} \\ \hline Run-2 & \checkmark & \checkmark & \checkmark & \checkmark & & GloVe & 0.6249 \\ \hline Run-3 & & & \checkmark & \checkmark & & BERT & 0.7139 \\ \hline \hline Run-1-1 & \checkmark & \checkmark & \checkmark & \checkmark & & BERT & 0.7102 \\ \hline Run-1-2 & & \checkmark & \checkmark & & & BERT & 0.6965 \\ \hline Run-1-3 & & \checkmark & & & & BERT & 0.7094 \\ \hline Run-1-4 & & \checkmark & \checkmark & \checkmark & \checkmark & BERT & 0.7100 \\ \hline Run-3-1 & \checkmark & & \checkmark & \checkmark & & BERT & 0.6889 \\ \hline Run-3-2 & & & \checkmark & & & BERT & 0.7074 \\ \hline Run-3-3 & & & & & & BERT & 0.6981 \\ \hline Run-3-4 & & & \checkmark & \checkmark & \checkmark & BERT & 0.6940 \\ \hline \hline \cite{clef-checkthat-williams:2020} & - & - & - & - & - & - & 0.8064 \\ \hline \cite{clef-checkthat-Nikolov:2020} & - & - & - & - & - & - & 0.8034 \\ \hline \cite{clef-checkthat-Alkhalifa:2020} & - & - & - & - & - & - & 0.7141 \\ \hline \end{tabular} \end{table} \egroup
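For completeness, a minimal sketch of the averaged word-embedding features used in Run-2 and of the ensembling scheme follows; the use of \texttt{gensim}'s downloader and the exact tokenization are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch of averaged GloVe features and the ensemble voting.
import numpy as np
import gensim.downloader as api

glove25 = api.load("glove-twitter-25")   # also used: glove-twitter-50 / -100

def avg_embedding(tokens, kv):
    """Mean of word vectors; stop-words are kept, as this worked better here."""
    vecs = [kv[t] for t in tokens if t in kv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(kv.vector_size)

def ensemble(models, X):
    """Majority vote for labels; mean of decision values for the final score."""
    votes = np.stack([m.predict(X) for m in models])
    scores = np.mean([m.decision_function(X) for m in models], axis=0)
    labels = (votes.sum(axis=0) > len(models) / 2).astype(int)
    return labels, scores
\end{verbatim}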
\textbf{Arabic}: We made a total of four submissions for this task. Our best performing submission (Run-1) uses 100-dimensional word2vec Arabic embeddings trained on a Twitter corpus \cite{soliman2017aravec} in combination with POS tag features. Our second and third submissions are redundant in terms of feature use, so we only mention the second one (Run-2) here. In addition to the features used in the first submission, it uses dependency relation features and 300-dimensional Twitter embeddings instead of 100-dimensional ones. Our last submission (Run-3) uses only a pre-trained multilingual sentence-transformer\footnote{\url{https://github.com/UKPLab/sentence-transformers}} \cite{reimers-2020-multilingual-sentence-bert} trained on 10 languages including Arabic. In the first three submissions, we removed the stop-words from all the features, as keeping them resulted in poorer performance. \textit{Precision@K} and \textit{Average Precision} (AP) results on the test set are shown in the same order in Table~\ref{tab:arab_t1}. The official ranking metric is P@30. For comparison with other teams in the challenge, we show the top 3 results at the bottom of the table for reference. Team Accenture \cite{clef-checkthat-williams:2020} experimented with and fine-tuned three different pre-trained Arabic BERT models and used external data to increase the number of positive instances. Team TOBB-ETU \cite{clef-checkthat-Kartal:2020} used logistic regression and experimented with Arabic BERT and word embeddings together to classify tweets. Team UB\_ET \cite{clef-checkthat-Hasanain:2020} used a multilingual BERT for ranking tweets by check-worthiness. \bgroup \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1.5} \begin{table}[htp] \caption{Task-1 Check-Worthiness Arabic Results, P@K (Precision@K) and AP (Average Precision), *Primary Submission} \label{tab:arab_t1} \centering \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \textbf{Run ID} & \textbf{P@5} & \textbf{P@10} & \textbf{P@15} & \textbf{P@20} & \textbf{P@25} & \textbf{P@30} & \textbf{AP} \\ \hline Run-$1^*$ & \textbf{0.6000} & \textbf{0.6083} & \textbf{0.5944} & \textbf{0.6000} & \textbf{0.5900} & \textbf{0.5778} & \textbf{0.4949} \\ \hline Run-2 & 0.5500 & 0.5667 & 0.5611 & 0.5417 & 0.5433 & 0.5361 & 0.4649 \\ \hline Run-3 & 0.4000 & 0.3917 & 0.4167 & 0.4292 & 0.4433 & 0.4472 & 0.4279 \\ \hline \hline \cite{clef-checkthat-williams:2020} & 0.7333 & 0.7167 & 0.7167 & 0.6875 & 0.6933 & 0.7000 & 0.6232 \\ \hline \cite{clef-checkthat-Kartal:2020} & 0.7000 & 0.7000 & 0.7000 & 0.6625 & 0.6500 & 0.6444 & 0.5816\\ \hline \cite{clef-checkthat-Hasanain:2020} & 0.6833 & 0.6417 & 0.6667 & 0.6333 & 0.6367 & 0.6417 & 0.5511 \\ \hline \end{tabular} \end{table} \egroup \subsection{Task-2: Claim Retrieval} \subsubsection{Dataset and Training Details} The dataset in this task has 1,003 tweets for training and 200 tweets for testing. These tweets are to be matched against a set of 10,373 verified claims. From the training set, 197 tweets are kept for validation. To fine-tune the sentence-transformer network with the triplet loss, we use a batch size of eight and train the network for two epochs. The official ranking for this task is based on Mean Average Precision@5 (MAP@5). All tweets and verified claims are in English. Our primary (Run-1) and $2^{nd}$ contrastive (Run-3) submissions use BERT-base and BERT-large models pre-trained on the SNLI dataset, with sentence embeddings pooled using the \emph{CLS}-token and \emph{MAX} pooling strategies, respectively. We fine-tune these two networks with the triplet loss.
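As a concrete illustration of this setup, the sketch below fine-tunes a pre-trained sentence-transformer with a triplet loss and then retrieves the top-5 verified claims per tweet with a KD-tree over L2-normalized embeddings, so that nearest Euclidean neighbours coincide with the highest cosine similarity. The model name and the use of SciPy's \texttt{cKDTree} are illustrative assumptions; \texttt{triplets}, \texttt{verified\_claims} and \texttt{test\_tweets} are placeholders for the task data.
\begin{verbatim}
# Illustrative sketch of the Task-2 pipeline; model name, cKDTree and the
# placeholder variables are assumptions, not the exact submitted system.
import numpy as np
from scipy.spatial import cKDTree
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# A BERT-base NLI sentence-transformer (Run-1 used CLS-token pooling).
model = SentenceTransformer("bert-base-nli-cls-token")

# One triplet per training tweet: (tweet, matching claim, non-matching claim).
train_examples = [InputExample(texts=[tweet, pos_claim, neg_claim])
                  for tweet, pos_claim, neg_claim in triplets]
loader = DataLoader(train_examples, shuffle=True, batch_size=8)
model.fit(train_objectives=[(loader, losses.TripletLoss(model))], epochs=2)

def embed(texts):
    # L2-normalize so nearest Euclidean neighbours = highest cosine similarity.
    emb = model.encode(texts, convert_to_numpy=True)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

tree = cKDTree(embed(verified_claims))
_, top5 = tree.query(embed(test_tweets), k=5)  # indices per tweet for MAP@5
\end{verbatim}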
In contrast, our $1^{st}$ contrastive submission (Run-2) uses a multilingual DistilBERT model \cite{reimers-2020-multilingual-sentence-bert} trained on 10 languages including English. This model is used directly, in order to test the pre-trained embeddings. \subsubsection{Results} Interestingly, the pre-trained embeddings extracted from the multilingual DistilBERT without any fine-tuning turn out to be better for semantic similarity than the fine-tuned monolingual BERT models. Having said that, the fine-tuned monolingual BERT models do perform better than their own extracted pre-trained embeddings; the difference can be seen in \textit{Run-1-2} and \textit{Run-3-2} in Table~\ref{tab:eng_t2}. We also tried fine-tuning the multilingual model, which drastically decreases the retrieval performance. The decrease can be attributed to the pre-training process \cite{reimers-2020-multilingual-sentence-bert}, in which the model was trained in a teacher-student knowledge distillation framework and on multiple languages. As stated in the proposed approach section, we conduct a second evaluation that retrieves the claims with the highest cosine similarity without KD-Tree search; the results are significantly better, as shown in Table~\ref{tab:eng_t2}. For comparison with other teams in the challenge, we show the top 3 primary submissions at the bottom of the table for reference. Team Buster.AI \cite{clef-checkthat-Bouziane:2020} investigated sentence similarity using transformer models, and experimented with multimodality and data augmentation. Team UNIPI-NLE \cite{clef-checkthat-Passaro:2020} fine-tuned a sentence-BERT in two steps: first to predict the cosine similarity of positive and negative pairs, followed by a binary classification of whether a tweet-claim pair is a correct match or not. Team UB\_ET \cite{clef-checkthat-Thuma:2020} experimented with three different models to rank the verified claims; their main submission used a DPH Divergence from Randomness (DFR) term weighting model. \bgroup \setlength{\tabcolsep}{3pt} \renewcommand{\arraystretch}{1.5} \begin{table}[htp] \caption{Task-2 Claim Retrieval Results, MAP (Mean Average Precision), *Primary Submission} \label{tab:eng_t2} \centering \begin{tabular}{|l|cc|c|c|c|c|} \hline \textbf{Run} & \textbf{Fine-tuned} & \textbf{KD-Search} & \textbf{MAP@1} & \textbf{MAP@3} & \textbf{MAP@5} & \textbf{MAP@10} \\ \hline Run-$1^*$ & \checkmark & \checkmark & 0.6520 & 0.6900 & 0.6950 & 0.7000 \\ \hline Run-2 & & \checkmark & \textbf{0.8280} & \textbf{0.8680} & \textbf{0.8730} & \textbf{0.8740} \\ \hline Run-3 & \checkmark & \checkmark & 0.7180 & 0.7460 & 0.7540 & 0.7600 \\ \hline \hline Run-1-1 & \checkmark & & 0.7030 & 0.7430 & 0.7560 & 0.7600 \\ \hline Run-1-2 & & & 0.5270 & 0.5840 & 0.5890 & 0.5940 \\ \hline Run-2-1 & & & 0.8580 & 0.8920 & 0.8940 & 0.8960 \\ \hline Run-3-1 & \checkmark & & 0.7180 & 0.7640 & 0.7700 & 0.7720 \\ \hline Run-3-2 & & & 0.5320 & 0.5690 & 0.5760 & 0.5850 \\ \hline \hline \cite{clef-checkthat-Bouziane:2020} & - & - & 0.8970 & 0.9260 & 0.9290 & 0.9290 \\ \hline \cite{clef-checkthat-Passaro:2020} & - & - & 0.8770 & 0.9070 & 0.9120 & 0.9130 \\ \hline \cite{clef-checkthat-Thuma:2020} & - & - & 0.8180 & 0.8620 & 0.8640 & 0.8660 \\ \hline \end{tabular} \end{table} \egroup \section{Introduction} Social media is increasingly becoming the main source of news for many people. According to a 2018 survey report \cite{socialnews2018}, of around 2.5 billion Internet users, 12\% receive breaking news from Twitter instead of traditional media.
Fake news in general can be defined \cite{tandoc2018defining} as the fabrication and manipulation of information and facts with the main intention of deceiving the reader. As a result, fake news can have several undesired and negative consequences. For example, recent news around the COVID-19 pandemic with the unverified claim that masks lead to a rise in carbon dioxide levels caused an online movement against wearing masks. With the ease of accessing and sharing news on Twitter, news spreads much faster from the moment an event occurs anywhere in the world. Although the survey report \cite{socialnews2018} found that almost 60\% of users expect news on social media to be inaccurate, that still leaves millions of people who spread fake news believing it to be true. Considering the vast amount of news that spreads every day, there has been a rise in independent fact-checking projects like \textit{Snopes, Alt News, Our.News}, which investigate the news that spreads online and publish the results for public use. Most of these independent projects rely on time-consuming manual efforts, which makes it hard to keep up with the rate of news production. Therefore, it has become very important to develop tools that can process news at a rapid rate and provide news consumers with some kind of authenticity measure that reflects the correctness of claims in the news. In this paper, we focus on two sub-problems in CheckThat! 2020 \cite{clef-checkthat-en:2020}\footnote{\url{https://sites.google.com/view/clef2020-checkthat/}} that are part of a larger fact-checking ecosystem. In the first task, we focus on learning a model that can recognize check-worthy claims on Twitter. We present a solution that works for both English \cite{clef-checkthat-en:2020} and Arabic \cite{clef-checkthat-ar:2020} tweets. Some example tweets with claims, classified as check-worthy or not, are shown in Table~\ref{tab:samp_t1}. One can see that the claims which are not check-worthy look like personal opinions and do not pose any threat to a larger audience. We explore the fusion of syntactic features and deep Bidirectional Encoder Representations from Transformers (BERT) embeddings to classify the check-worthiness of a tweet. We use Part-of-Speech (POS) tags, named entities, and dependency relations as syntactic features, and a combination of hidden layers in BERT to compute the tweet embedding. Before learning the model with a Support Vector Machine (SVM) \cite{suykens1999least}, we use Principal Component Analysis (PCA) \cite{wold1987principal} for dimensionality reduction. In the second task, we focus on learning a model that can accurately retrieve verified claims w.r.t.\ a query claim, where the query claim is a tweet and the verified claims are snippets from actual documents. The verified claim is true and thus acts as the evidence or support for the query tweet. Some example pairs of tweets and claims can be seen in Table~\ref{tab:samp_t2}, which shows that the pairs share substantial contextual information, making this task a semantic textual similarity problem. For this reason, we explore the pre-trained embeddings from a Siamese network transformer model (sentence-transformers) specifically trained for semantic textual similarity and perform KD-Tree search to retrieve claims. We share the source code for both tasks publicly with the community.\footnote{\url{https://github.com/cleopatra-itn/claim_detection}} The remainder of the paper is organized as follows.
Section 2 briefly discusses previous work on fake news detection and the CheckThat! tasks in particular. Section 3 presents the details of our approach for Task-1 and Task-2. Section 4 describes the experimental details and results. Section 5 summarizes our conclusions and future research directions. \begin{table}[!ht] \caption{Sample tweets for Task-1 Check-Worthiness Prediction} \label{tab:samp_t1} \centering \begin{tabular}{|l|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Tweet}} & \textbf{Claim} & \textbf{Check-Worthy} \\ \hline \begin{tabular}[c]{@{}l@{}}Dear @VP Pence: What are you hiding from the \\ American people? Why won’t you let the people \\ see and hear what experts are saying about the \\ \#CoronaOutbreak?\end{tabular} & 0 & 0 \\ \hline \begin{tabular}[c]{@{}l@{}}Greeting my good friends from the \#US the \\ \#Taiwan way. Remember: to better prevent the \\ spread of \#COVID19, say no to a handshake \& \\ yes to this friendly gesture. Check it out:\end{tabular} & 0 & 0 \\ \hline \begin{tabular}[c]{@{}l@{}}Corona got these flights cheap as hell I gotta job \\ interview in Greece Monday \end{tabular} & 1 & 0 \\ \hline \begin{tabular}[c]{@{}l@{}}My mum has a PhD on Corona Virus from \\ WhatsApp University\end{tabular} & 1 & 0 \\ \hline \begin{tabular}[c]{@{}l@{}}This is why the beaches haven't closed in \\ Florida, and why they've had minimal COVID-19 \\ prevention. Absolute dysfunction. \textless{}link\textgreater{}\end{tabular} & 1 & 1 \\ \hline \begin{tabular}[c]{@{}l@{}}COVID-19 cases in the Philippines jumped \\ from 24 to 35 in less than 12 hours. This is \\ seriously ALARMING. Stay safe everyone! \\ \textless{}link\textgreater{}\end{tabular} & 1 & 1 \\ \hline \end{tabular} \end{table} \begin{table}[!ht] \caption{Sample pairs of tweets and verified claims for Task-2 Claim Retrieval} \label{tab:samp_t2} \centering \begin{tabular}{|c|l|} \hline \textbf{Tweet} & \begin{tabular}[c]{@{}l@{}}A number of fraudulent text messages informing individuals \\ they have been selected for a military draft have circulated \\ throughout the country this week.\end{tabular} \\ \hline \textbf{Verified Claim} & \begin{tabular}[c]{@{}l@{}}The U.S. Army is sending text messages informing people \\ they've been selected for the military draft.\end{tabular} \\ \hline \hline \textbf{Tweet} & \begin{tabular}[c]{@{}l@{}}El Paso was NEVER one of the MOST dangerous cities in \\ the US. We've had a fence for 10 years and it has impacted \\ illegal immigration and curbed criminal activity. It is NOT \\ the sole deterrent. Law enforcement in our community \\ continues to keep us safe \#SOTU\end{tabular} \\ \hline \textbf{Verified Claim} & \begin{tabular}[c]{@{}l@{}}El Paso was one of the U.S. most dangerous cities before \\ a border fence was built there.\end{tabular} \\ \hline \hline \textbf{Tweet} & \begin{tabular}[c]{@{}l@{}}Hey @Always since today is \#TransVisibilityDay it’s \\ probably important to point out the fact that this new \\ packaging isn’t trans* friendly. Just a reminder that \\ Menstruation does not equal Female. Maybe rethink \\ this new look.
\textless{}link\textgreater{}\end{tabular} \\ \hline \textbf{Verified Claim} & \begin{tabular}[c]{@{}l@{}}In 2019, trans activists or ``the transgender lobby'' \\ forced Procter \& Gamble to remove the Venus symbol \\ from menstruation products.\end{tabular} \\ \hline \end{tabular} \end{table} \section{Proposed Approach} \subsection{Task-1: Tweet Check-Worthiness Prediction} Check-worthiness prediction is the task of predicting whether a tweet includes a claim that is of interest to a large audience. Our approach is motivated by the successful use of lexical, syntactic and contextual features in previous editions of the CheckThat! check-worthiness task for political debates. Given that this task provides a limited amount of training data, we approach the problem by creating a rich feature representation, reducing the dimensionality of the large feature set with PCA \cite{wold1987principal}, and then learning the model with an SVM. In doing so, our goal is also to understand which features are the most important for check-worthiness prediction from tweet content. As context is very important for downstream NLP tasks, we experiment with word embeddings (word2vec \cite{mikolov2013distributed}, GloVe \cite{pennington2014glove}) and BERT \cite{devlin2018bert} embeddings to create a sentence representation of each tweet. Our pre-processing and feature extraction are agnostic to the topic of the tweet, so they can be applied to any domain. Next, we provide details about all the features used, their extraction and the encoding process. Our overall approach can be seen in Figure~\ref{fig:app_task1}. \subsubsection{Pre-processing} We use two publicly available pre-processing tools for English and Arabic tweets. For English, we use the tool of Baziotis \textit{et al.} \cite{baziotis-pelekis-doulkeridis:2017:SemEval2} to apply the following normalization steps: tokenization, lower-casing, removal of punctuation, spell correction, normalization of \textit{hashtags, all-caps, censored, elongated and repeated} words, and of terms like \textit{URL, email, phone, user mentions}. We use the Stanford Stanza \cite{qi2020stanza} toolkit to pre-process Arabic tweets by applying the following normalization steps: tokenization, multi-word token expansion and lemmatization. In the case of extracting word embeddings from a transformer network, we use the raw text, as these networks have their own tokenization process. \subsubsection{Syntactic Features} We use the following syntactic features for the English and Arabic tasks: Part-of-Speech (POS) tags, named entities (NE) and dependency parse tree relations. We use the pre-processed text and run off-the-shelf tools to extract the syntactic information of tweets, and then convert each group of information into feature sets. We use spaCy~\cite{honnibal2017spacy} for English and Stanford Stanza \cite{qi2020stanza} for Arabic tweets to extract the following syntactic features. For all the features, we experiment with keeping and removing stop-words to evaluate their effect. \textbf{Part-of-Speech}: For both English and Arabic, we extract 16 POS tags in total, and through our empirical evaluation we find the following eight tags to be the most useful when used as features: NOUN, VERB, PROPN, ADJ, ADV, NUM, ADP, PRON. For Arabic, four additional tags are useful features: DET, INTJ, AUX, PART. We use the chosen set of POS tags for the respective language to encode the syntactic information of tweets.
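As an illustration, a minimal sketch of this POS-tag feature extraction for English with spaCy is given below, with the tags encoded as count histograms as detailed in the feature encoding description that follows; the function name is our own.
\begin{verbatim}
import spacy

nlp = spacy.load('en_core_web_sm')
POS_TAGS = ['NOUN', 'VERB', 'PROPN', 'ADJ', 'ADV', 'NUM', 'ADP', 'PRON']

def pos_histogram(tweet):
    # Count how often each selected universal POS tag occurs in the tweet.
    doc = nlp(tweet)
    counts = [sum(tok.pos_ == tag for tok in doc) for tag in POS_TAGS]
    # Normalize by the maximum value in the vector.
    m = max(counts) or 1
    return [c / m for c in counts]
\end{verbatim}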
\textbf{Named Entities}: We identified the following named entity types to be the most important features through our evaluation: (GPE, PERSON, ORG, NORP, LOC, DATE, CARDINAL, TIME, ORDINAL, FAC, MONEY) for English and (LOC, PER, ORG, MISC) for Arabic. While developing feature combinations, we also found that named entities do not add much value to the overall accuracy; hence our primary and contrastive submissions do not include them. \textbf{Syntactic Dependencies}: These features are constructed using the dependency relations between tokens in a given tweet. We use the dependency relation between two nodes in the parsed tree if the child and parent nodes' POS tags are one of the following: ADJ, ADV, NOUN, PROPN, VERB or NUM. All dependency relations that match this constraint are converted into triplets such as (\emph{child node-POS, dependency-relation, parent-POS}) and into pairs such as (\emph{child node-POS, dependency-relation}), where the parent POS is not part of the feature representation. This process is shown in Figure~\ref{fig:feat_extract}. We found that the pair-based features perform better than the triplet features. The dimensions of the feature vectors for English and Arabic are 133 and 134, respectively. To encode a feature group, we build a histogram vector that counts the occurrences of each type of tag, named entity or syntactic relation pair. The process of feature encoding is shown in Figure~\ref{fig:feat_extract}. Finally, we normalize each type of feature by the maximum value in the vector. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{images/feat_extraction.jpg} \caption{Syntactic feature extraction and encoding process. Feature vectors count the number of times each tag or relation is seen in the given sentence.} \label{fig:feat_extract} \vspace{-0.2cm} \end{figure} \subsubsection{Average Word Embeddings} One simple way to get a contextual representation of a sentence is to average the word embeddings of all tokens in the sentence. For this purpose, we experiment with three types of word embeddings pre-trained on three different sources for English: GloVe embeddings \cite{pennington2014glove} trained on Twitter and Wikipedia, word2vec embeddings \cite{mikolov2013distributed} trained on Google News, and FastText \cite{mikolov2017advances} embeddings trained on multiple sources. In addition, we also experiment with removing stop-words from the averaged word representation, as stop-words can dominate the average and result in a less meaningful sentence representation. For Arabic, we use word2vec embeddings that are trained on Arabic tweets and the Arabic Wikipedia \cite{soliman2017aravec}. \subsubsection{Transformer Features} Another way to extract contextual features is to use BERT \cite{devlin2018bert} embeddings, which are trained using the context of each word in a sentence. BERT is usually trained on a very large text corpus, which makes it very useful for off-the-shelf feature extraction and fine-tuning for downstream tasks in NLP. To get one embedding per tweet, we follow the observations made in \cite{devlin2018bert} that different layers of BERT capture different kinds of information, so an appropriate pooling strategy should be applied depending on the task. The paper also suggests that the last four hidden layers of the network are good for transfer learning tasks, and thus we experiment with 4 different combinations, i.e.,
the concatenation of the last 4 hidden layers, the average of the last 4 hidden layers, the last hidden layer, and the $2^{nd}$-to-last hidden layer. We normalize the final embedding so that the $\ell_2$ norm of the vector is 1. We also experimented with BERT's pooled sentence embedding that is encoded in the \emph{CLS} (class) token, which performed significantly worse than the pooling strategies we employed. For Arabic, we only experimented with a sentence-transformer \cite{reimers2019sentence} that is trained on a multilingual corpus and outputs a sentence embedding for each tweet/sentence. \textbf{Sentence Representation}: To get the overall representation of the tweet, we concatenate all the syntactic features together with either the average word embedding or the BERT-based transformer features, and then apply PCA for dimensionality reduction. An SVM classifier is trained on the feature vectors of tweets to output a binary decision (check-worthy or not). The overall process is shown in Figure~\ref{fig:app_task1}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/approach_task_1.jpg} \caption{Proposed Approach for Check-Worthiness Prediction} \label{fig:app_task1} \end{figure} \subsection{Task-2: Claim Retrieval} Claim retrieval is the task of retrieving the verified claim most similar to the query claim. For this task, it is important that the feature representation captures the meaning and context of words and phrases, so that the query matches the correct verified claim. Therefore, we rely on a triplet-network setting, where the network is trained with triplets consisting of an anchor sample \textit{a}, a positive sample \textit{p} and a negative sample \textit{n}. We use the triplet loss to fine-tune a pre-trained sentence embedding network, such that the distance between \textit{a} and \textit{p} is smaller than the distance between \textit{a} and \textit{n}, using the following loss function: \begin{align} Loss = \sum_{i=1}^{N} \left[\lVert S_i^a - S_i^p\rVert_2^2 - \lVert S_i^a - S_i^n\rVert_2^2 + m\right]_+ \end{align} where $S_i^a$, $S_i^p$ and $S_i^n$ are the triplet sentence embeddings, $m$ is the margin (set to 1), and $N$ is the number of triplets in the batch. As each verified claim is a tuple consisting of text and title, we create two triplets for every true tweet-claim pair, i.e., (anchor tweet, true claim text, negative claim text) and (anchor tweet, true claim title, negative claim title). This increases the number of positive samples for training, as there are only 800 samples and one true claim for every tweet. To get negative claims, we select the 3 claims with the highest cosine similarity that are not the true claims for the anchor tweet, using the pre-trained sentence-transformer embeddings. For pre-processing, we use Baziotis \textit{et al.}'s \cite{baziotis-pelekis-doulkeridis:2017:SemEval2} tool to remove \textit{URL, email, phone, user mentions} from the tweets, as the claim text and title do not contain any such information. As retrieval is a search task, we use KD-Tree search to find the most similar verified claim, i.e., the one with the minimum Euclidean distance to the query claim. The sentence embeddings extracted from the network are used to build a KD-Tree, and for each query claim the top 1000 verified claims are retrieved from the tree for evaluation. For building the KD-Tree, we average the sentence embeddings of the claim text and the claim title, as this performs better than using either the text or the title alone.
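A minimal sketch of this indexing and retrieval step, assuming scikit-learn's KDTree and a sentence-transformer encoder (the model identifier and the lists \texttt{claim\_texts}, \texttt{claim\_titles} and \texttt{tweets} are our own illustrative assumptions), is:
\begin{verbatim}
from sklearn.neighbors import KDTree
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bert-base-nli-cls-token')  # illustrative

# Index: average of claim-text and claim-title embeddings.
text_emb = model.encode(claim_texts)     # shape (n_claims, d)
title_emb = model.encode(claim_titles)   # shape (n_claims, d)
tree = KDTree((text_emb + title_emb) / 2.0)

# Query: the 1000 verified claims with minimum Euclidean
# distance to each tweet embedding.
dist, idx = tree.query(model.encode(tweets), k=1000)
\end{verbatim}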
In our ablation study, we directly compute the cosine similarity between each query tweet and all the verified claims, and pick the top 1000 verified claims (highest cosine similarity) for evaluation. We conduct this second evaluation because building a KD-Tree can affect the retrieval accuracy. \subsubsection{Sentence Transformers for Textual Similarity} As the backbone network for extracting sentence embeddings and fine-tuning with the triplet loss, we use the recently proposed Sentence-BERT \cite{reimers2019sentence}, which learns the embeddings in Siamese (pair) and triplet network settings. We experiment with the pre-trained Siamese network models trained on the SNLI (Stanford Natural Language Inference) \cite{bowman2015large} and STSb (Semantic Textual Similarity benchmark) \cite{cer2017semeval} datasets, which have been shown to perform very well for semantic textual similarity. \section{Related Work} Fake news has been studied from different perspectives in the last five years, such as factuality or credibility detection \cite{popat2016credibility,giachanou2019leveraging,samadi2016claimeval,hassan2015detecting,hassan2017toward}, rumour detection \cite{zubiaga2017exploiting,zubiaga2018detection,sicilia2018twitter,zhao2015enquiring}, propagation in networks \cite{liu2018early,monti2019fake,shu2019studying,papanastasiou2020fake}, the use of multiple modalities \cite{khattar2019mvae,wang2018eann,singhal2019spotfake}, and also as an ecosystem of smaller sub-problems, as in CheckThat! \cite{nakov2018overview,elsayed2019overview,clef-checkthat-lncs:2020}. For social media in particular, Shu \textit{et al.} \cite{shu2017fake} provided a comprehensive review of fake news detection, with characterizations from psychology and social science and existing computational algorithms from a data mining perspective. Since Twitter has become a source of news for so many people, researchers have extensively used the platform to formulate problems, extract data and test their algorithms. For instance, Zubiaga \textit{et al.} \cite{zubiaga2017exploiting} extracted tweets around breaking news and used Conditional Random Fields to exploit context during the sequential learning process for rumour detection. Buntain \textit{et al.} \cite{buntain2017automatically} studied three large Twitter datasets and developed models to predict accuracy assessments of fake news by crowd-sourced workers and journalists. While many approaches rely on tweet content for detecting fake news, there has been a rise in methods that exploit user characteristics and metadata to model the problem as fake news propagation. For example, Liu \textit{et al.} \cite{liu2018early} modeled the propagation path of each news story as a multivariate time series over the users who engaged in spreading the news via tweets. They further classified fake news using Gated Recurrent Unit (GRU) and Convolutional Neural Network (CNN) models to capture the global and local variations of user characteristics, respectively. Monti \textit{et al.} \cite{monti2019fake} went a step further and used a hybrid feature set including user characteristics, social network structure and tweet content. They modeled the problem as binary prediction using a Graph CNN, resulting in a highly accurate fake news detector. Besides fake news detection, the subtask of predicting the check-worthiness of claims has also been explored recently, mostly in the political context. For example, Hassan \textit{et
al.} \cite{hassan2015detecting,hassan2016comparing} proposed a system that predicts the check-worthiness of statements made by presidential candidates using an SVM \cite{suykens1999least} classifier and a combination of lexical and syntactic features. They also compared their results with those of fact-checking organizations like CNN\footnote{\url{http://www.cnn.com}} and PolitiFact\footnote{\url{https://www.politifact.com/}}. Later, in CheckThat! 2018 \cite{nakov2018overview}, several methods were proposed to improve check-worthiness prediction for claims in political debates. The best methods used a combination of lexical and syntactic features like Bag of Words (BoW), Parts-of-Speech (POS) tags, named entities, sentiment, topic modeling, dependency parse trees and word embeddings \cite{mikolov2013distributed}. Various classifiers were built using either Recurrent Neural Networks (RNN) \cite{hansen2018copenhagen,zuo2018hybrid}, gradient boosting \cite{yasser2018bigir}, k-nearest neighbors \cite{ghanem2018upv} or SVM \cite{zuo2018hybrid}. In the 2019 edition of CheckThat! \cite{elsayed2019overview}, in addition to using lexical and syntactic features \cite{gkasior2019ipipan}, the top approaches relied on learning richer content embeddings and utilized external data for better performance. For example, Hansen \textit{et al.} \cite{hansen2019neural} used word embeddings and syntactic dependency features as input to an LSTM network, enriched the dataset with additional samples from the ClaimBuster system \cite{hassan2017claimbuster}, and trained the network with a contrastive ranking loss. Favano \textit{et al.} \cite{favano2019theearthisflat} trained a neural network with Standard Universal Sentence Encoder (SUSE) \cite{cer2018universal} embeddings of the current sentence and the previous two sentences as context. Another approach, by Su \textit{et al.} \cite{su2019entity}, used co-reference resolution to replace pronouns with named entities in order to get a feature representation with bag of words, named entity similarity and relatedness. Beyond political debates, Jaradat \textit{et al.} \cite{jaradat2018claimrank} proposed an online multilingual check-worthiness system that works for different sources (debates, news articles, interviews) in English and Arabic. They use annotated data from reputable fact-checking organizations and the best performing feature representations from previous approaches. For tweets in particular, Majithia \textit{et al.} \cite{majithia2019claimportal} proposed a system to monitor, search and analyze factual claims in political tweets, with ClaimBuster \cite{hassan2017claimbuster} at the backend for check-worthiness. Lastly, Dogan \textit{et al.} \cite{dogan2015detecting} also conducted a detailed study on detecting check-worthy tweets in U.S. politics and proposed a real-time system to filter them.
\section{Introduction} Most real-world data comes with a long-tailed nature: a few high-frequency classes (or head classes) contribute most of the observations, while a large number of low-frequency classes (or tail classes) are under-represented in data. Taking the instance segmentation dataset LVIS~\citep{gupta2019lvis} as an example, the number of instances in the \textit{banana} class can be thousands of times larger than that in the \textit{bait} class. In practice, the number of samples per class generally decreases exponentially from head to tail classes. Under the power law, the tails can be undesirably heavy. A model that minimizes empirical risk on long-tailed training datasets often underperforms on a class-balanced test dataset. As datasets are scaling up nowadays, the long-tailed nature poses critical difficulties to many vision tasks, e.g., visual recognition and instance segmentation. An intuitive solution to the long-tailed task is to re-balance the data distribution. Most state-of-the-art (SOTA) methods use class-balanced sampling or loss re-weighting to ``simulate'' a balanced training set~\citep{byrd2018effect, wang2017learning}. However, they may under-represent the head classes or suffer from gradient issues during optimization. Cao et al.~\citep{LDAM} introduced the Label-Distribution-Aware Margin Loss (LDAM) from the perspective of the generalization error bound: given fewer training samples, a tail class should have a higher generalization error bound during optimization. Nevertheless, LDAM is derived from the hinge loss under a binary classification setup and is not directly suitable for multi-class classification. We propose \textit{Balanced Meta-Softmax} (BALMS) for long-tailed visual recognition. We first show that the Softmax function is intrinsically biased under the long-tailed scenario. We derive a Balanced Softmax function from the probabilistic perspective that explicitly models the test-time label distribution shift. Theoretically, we find that optimizing for the Balanced Softmax cross-entropy loss is equivalent to minimizing the generalization error bound. Balanced Softmax generally improves long-tailed classification performance on datasets with moderate imbalance ratios, e.g., CIFAR-10-LT~\citep{Krizhevsky09cifa10} with a maximum imbalance factor of 200. However, for datasets with an extremely large imbalance factor, e.g., LVIS~\citep{gupta2019lvis} with an imbalance factor of 26,148, the optimization process becomes difficult. Complementary to the loss function, we introduce the \textit{Meta Sampler}, which learns by meta-learning how to re-sample so as to achieve high validation accuracy. The combination of Balanced Softmax and Meta Sampler can efficiently address long-tailed classification tasks with high imbalance factors. We evaluate BALMS on both long-tailed image classification and instance segmentation on five commonly used datasets: CIFAR-10-LT~\citep{Krizhevsky09cifa10}, CIFAR-100-LT~\citep{Krizhevsky09cifa10}, ImageNet-LT~\citep{Liu2019LargeScaleLR}, Places-LT~\citep{zhou2017places} and LVIS~\citep{gupta2019lvis}. On all datasets, BALMS outperforms state-of-the-art methods. In particular, BALMS outperforms all SOTA methods on LVIS, which has an extremely high imbalance factor, by a large margin.
We summarize our contributions as follows: 1) we theoretically analyze the incapability of the Softmax function in long-tailed tasks; 2) we introduce the Balanced Softmax function, which explicitly considers the label distribution shift during optimization; 3) we present the Meta Sampler, a meta-learning based re-sampling strategy for long-tailed learning. \section{Related Works} \textbf{Data Re-Balancing.} Pioneering works focus on re-balancing during training. Specifically, re-sampling strategies~\citep{Kubat97addressingthe,Chawla02smote:synthetic,Chawla02bordersmote,he2009lfid,shen2016relay,buda2018systematic,recardo2009} try to restore the true distribution from the imbalanced training data. Re-weighting, i.e., cost-sensitive learning~\citep{wang2017learning,huang2016learning,huang2019deep,dean2013drw}, assigns a cost weight to the loss of each class. However, it is argued that over-sampling inherently overfits the tail classes and under-sampling under-represents the head classes' rich variations. Meanwhile, re-weighting tends to cause unstable training, especially when the class imbalance is severe, because abnormally large gradients arise when the weights are very large. \textbf{Loss Function Engineering.} Tan et al.~\citep{Tan2020EqualizationLF} point out that randomly dropping some scores of tail classes in the Softmax function can effectively help, by balancing the positive and negative gradients flowing through the score outputs. Cao et al.~\citep{LDAM} show that the generalization error bound can be minimized by increasing the margins of tail classes. Hayat et al.~\citep{Hayat2019ICCVbayessian} modify the loss function based on Bayesian uncertainty. Li et al.~\citep{li2019gradient} propose two novel loss functions to balance the gradient flow. Khan et al.~\citep{khan2017cost} jointly learn the model parameters and the class-dependent loss function parameters. Ye et al.~\citep{ye2020identifying} force a large margin for minority classes to prevent feature deviation. We advance this line of work by introducing probabilistic insights that also bring empirical improvements. We show in this paper that an ideal loss function should be unbiased under the long-tailed scenario. \textbf{Meta-Learning.} Many approaches~\citep{Jamal2020RethinkingCM,Ren2018LearningTR,shu2019meta} have been proposed to tackle the long-tailed issue with meta-learning. Many of them~\citep{Jamal2020RethinkingCM,Ren2018LearningTR} focus on optimizing the weight per sample as a learnable parameter, which appears as a hyper-parameter in sample-based re-weighting approaches. This group of methods requires a clean and unbiased dataset as a meta set, i.e., a development set, which is usually a fixed subset of the training images, and uses bi-level optimization to estimate the weight parameters. \textbf{Decoupled Training.} Kang et al.~\citep{Kang2020DecouplingRA} point out that decoupled training, a simple yet effective solution, can significantly improve generalization on long-tailed datasets; the classifier is the only under-performing component when training on imbalanced datasets. However, in our experiments, we found this technique inadequate for datasets with extremely high imbalance factors, e.g., LVIS~\citep{gupta2019lvis}. Interestingly, in our experiments we observed that decoupled training is complementary to our proposed BALMS, and combining them results in additional improvements.
\section{Balanced Meta-Softmax} The major challenge for long-tailed visual recognition is the mismatch between the imbalanced training data distribution and the balanced metrics, e.g., mean Average Precision (mAP), that encourage minimizing the error on a balanced test set. Let $\mathcal{X}=\{x_i, y_i\}, i\in\{1, \cdots, n\}$ be the balanced test set, where $x_i$ denotes a data point and $y_i$ denotes its label. Let $k$ be the number of classes and $n_j$ be the number of samples in class $j$, where $\sum_{j=1}^k n_j = n$. Similarly, we denote the long-tailed training set as $\hat{\mathcal{X}} = \{\hat{x}_i, \hat{y}_i\}, i\in \{1, \dots, n\}$. Normally, we have $\forall i, p(\hat{y}_i)\neq p(y_i)$. Specifically, for a tail class $j$, $p(\hat{y}_j)\ll p(y_j)$, which makes generalization under long-tailed scenarios extremely challenging. We introduce Balanced Meta-Softmax (BALMS) for long-tailed visual recognition. It has two components: 1) a Balanced Softmax function that accommodates the label distribution shift between training and testing; 2) a Meta Sampler that learns to re-sample the training set by meta-learning. We denote a feature extractor function as $f$ and a linear classifier's weight as $\theta$. \subsection{Balanced Softmax} \textbf{Label Distribution Shift.} We begin by revisiting multi-class Softmax regression, where we are generally interested in estimating the conditional probability $p(y|x)$, which can be modeled as a multinomial distribution $\phi$: \begin{equation} \phi = \phi_1^{\mathbf{1}\{y=1\}}\phi_2^{\mathbf{1}\{y=2\}} \cdots \phi_k^{\mathbf{1}\{y=k\}};\quad \phi_j = \frac{e^{\eta_j}}{\sum_{i=1}^ke^{\eta_i}};\quad \sum_{j=1}^k\phi_j=1 \end{equation} where $\mathbf{1}(\cdot)$ is the indicator function and the Softmax function maps the model's class-$j$ output $\eta_j =\theta_j^Tf(x)$ to the conditional probability $\phi_j$. From the Bayesian inference perspective, $\phi_j$ can also be interpreted as: \begin{equation} \phi_j = p(y=j|x) = \frac{p(x|y=j)p(y=j)}{p(x)} \end{equation} where $p(y=j)$ is of particular interest under the class-imbalanced setting. Assuming that all instances in the training dataset and the test dataset are generated from the same process $p(x|y=j)$, there can still be a discrepancy between training and testing, given different label distributions $p(y=j)$ and evidence $p(x)$. With a slight abuse of notation, we re-define $\phi$ to be the conditional distribution on the balanced test set and define $\hat{\phi}$ to be the conditional probability on the imbalanced training set. As a result, the standard Softmax provides a biased estimate of $\phi$. \textbf{Balanced Softmax.} To eliminate the discrepancy between the posterior distributions of training and testing, we introduce Balanced Softmax. We use the same model outputs $\eta$ to parameterize two conditional probabilities: $\phi$ for testing and $\hat{\phi}$ for training. \begin{theorem} \label{th:loss} Assume $\phi$ to be the desired conditional probability on the balanced dataset, with the form $\phi_j={p}(y=j|x) =\frac{p(x|y=j)}{{p}(x)}\frac{1}{k}$, and $\hat{\phi}$ to be the desired conditional probability on the imbalanced training set, with the form $\hat{\phi}_j = {\hat{p}}(y=j|x) = \frac{p(x|y=j)}{\hat{p}(x)}\frac{n_j}{\sum_{i=1}^k n_i}$. If $\phi$ is expressed by the standard Softmax function of the model output $\eta$, then $\hat{\phi}$ can be expressed as \begin{equation} \hat{\phi}_j =\frac{n_je^{\eta_j}}{\sum_{i=1}^k n_ie^{\eta_i}}.
\end{equation} \end{theorem} We use the exponential family parameterization to prove Theorem~\ref{th:loss}; the proof can be found in the supplementary materials. Theorem~\ref{th:loss} essentially shows that applying the following Balanced Softmax function naturally accommodates the label distribution shift between the training and test sets. We define the Balanced Softmax loss as \begin{equation}\label{eq:our_loss} \hat{l}(\theta) = -\log(\hat{\phi}_y) =-\log\left(\frac{n_ye^{\eta_y}}{\sum_{i=1}^k n_ie^{\eta_i}}\right). \end{equation} We further investigate the improvement brought by the Balanced Softmax in the following sections. Many vision tasks, e.g., instance segmentation, may use multiple binary logistic regressions instead of a multi-class Softmax regression. By virtue of Bayes' theorem, a similar strategy can be applied to multiple binary logistic regressions; the detailed derivation is left to the supplementary materials. \textbf{Generalization Error Bound.} A generalization error bound gives an upper bound on a model's test error, given its training error. With dramatically fewer training samples, the tail classes have much higher generalization error bounds than the head classes, which makes good classification performance on tail classes unlikely. In this section, we show that optimizing Eqn.~\ref{eq:our_loss} is equivalent to minimizing the generalization upper bound. Margin theory provides a bound based on the margins \citep{kakade2009complexity}. Margin bounds usually correlate negatively with the magnitude of the margin, i.e., a larger margin leads to lower generalization error. Consequently, given a constraint on the sum of margins over all classes, there is a trade-off between minority classes and majority classes \citep{LDAM}. Locating such an optimal margin for multi-class classification is non-trivial. The bound investigated in \citep{LDAM} was established for binary classification using the hinge loss. Here, we develop the margin bound for multi-class Softmax regression. Given the previously defined $\phi$ and $\hat{\phi}$, we derive $\hat{l}(\theta)$ by minimizing the margin bound. A margin bound commonly bounds the 0-1 error: \begin{equation}\label{eq:01_loss} err_{0,1} = \mathrm{Pr}\left[\theta_y^Tf(x)<\max_{i\neq y}\theta_i^Tf(x)\right]. \end{equation} However, directly using the 0-1 error as the loss function is not ideal for optimization; instead, the negative log-likelihood (NLL) is generally considered more suitable. With a continuous relaxation of Eqn.~\ref{eq:01_loss}, we have \begin{equation} err(t) = \mathrm{Pr}[t<\log(1+\sum_{i\neq y} e^{\theta_i^Tf(x) -\theta_y^Tf(x)})] = \mathrm{Pr}\left[l_y(\theta)>t\right], \end{equation} where $t\geq 0$ is any threshold and $l_y(\theta)$ is the standard negative log-likelihood with Softmax, i.e., the cross-entropy loss. This new error is still a counting error, but it describes how likely the test loss is to exceed a given threshold. Naturally, we define our margin for class $j$ to be \begin{equation} \gamma_j = t - \max_{(x,y)\in S_j} l_j(\theta), \end{equation} where $S_j$ is the set of all class-$j$ samples. If we force a large margin $\gamma_j$ during training, i.e., force the training loss to be much lower than $t$, then $err(t)$ will be reduced.
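As a practical aside, the loss in Eqn.~\ref{eq:our_loss} is simple to implement: since $\hat{\phi}_j \propto e^{\eta_j + \log n_j}$, it amounts to adding $\log n_j$ to the $j$-th logit before a standard Softmax cross-entropy. A minimal PyTorch-style sketch (the function and tensor names are ours) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, labels, class_counts):
    # logits: (B, k) model outputs eta; class_counts: (k,) samples per class.
    # Shift each logit by log n_j, then apply the standard cross-entropy.
    adjusted = logits + torch.log(class_counts.float())
    return F.cross_entropy(adjusted, labels)
\end{verbatim}
Only the training loss changes; at test time the unshifted logits $\eta$ are used with the standard Softmax.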
Based on this margin definition, Theorem 2 in \citep{kakade2009complexity} can be directly generalized as \begin{theorem} Let $t\geq 0$ be any threshold, for all $\gamma_j >0$, with probability at least $1-\delta$, we have \begin{equation}\label{eq:bound} err_{bal}(t) \lesssim \frac{1}{k}\sum_{j=1}^k\Big(\frac{1}{\gamma_j}\sqrt{\frac{C}{n_j}}+\frac{\log n}{\sqrt{n_j}}\Big) ; \quad \gamma^*_j = \frac{\beta n_j^{-1/4}}{\sum_{i=1}^{k} n_i^{-1/4}}, \end{equation} where $err_{bal}(t)$ is the error on the balanced test set, $\lesssim$ is used to hide constant terms and $C$ is some measure of complexity. With a constraint $\sum_{j=1}^k \gamma_j = \beta$, the Cauchy-Schwarz inequality gives us the optimal $\gamma^*_j$. \end{theorem} The optimal $\gamma^*$ suggests that we need a larger $\gamma$ for the classes with fewer samples. In other words, to achieve the optimal generalization ability, we need to focus on minimizing the training loss of the tail classes. To enforce the optimal margin, for each class $j$, the desired training loss $\hat{l}^*_j(\theta)$ is \begin{equation} \hat{l}^*_j(\theta) = l_j(\theta) + \gamma^*_j, \quad \text{where} \quad l_j(\theta) = -\log (\phi_j), \end{equation} \begin{corollary} $\hat{l}_j^*(\theta) =l_j(\theta) +\gamma_j^* =l_j(\theta)+\frac{\beta n_j^{-1/4}}{\sum_{i=1}^{k} n_i^{-1/4}} $ can be approximated by $\hat{l}_j(\theta)$ when: \begin{equation} \label{eq:bound_loss} \hat{l}_j(\theta) = -\log(\hat{\phi}_j); \quad \hat{\phi}_j = \frac{e^{\eta_j-\log \gamma_j^*}}{\sum_{i=1}^k e^{\eta_i-\log \gamma_i^*}}= \frac{n_j^{\frac{1}{4}}e^{\eta_j}}{\sum_{i =1}^k n_i^{\frac{1}{4}}e^{\eta_i}} \end{equation} \label{corollary:bound} \end{corollary} We provide a proof sketch of the corollary in the supplementary materials. Notice that, compared to Eqn.~\ref{eq:our_loss}, we have an additional exponent of $1/4$. We empirically find that setting $1/4$ to $1$ leads to the optimal results, which may suggest that Eqn.~\ref{eq:bound} is not necessarily tight. At this point, the label distribution shift and the generalization bound of multi-class Softmax regression lead us to the same loss form: Eqn.~\ref{eq:our_loss}. \subsection{Meta Sampler} \label{sec:3.2} \textbf{Re-sampling.} Although Balanced Softmax accommodates the label distribution shift, the optimization process is still challenging on large datasets with an extremely imbalanced data distribution. For example, in LVIS, the \textit{bait} class may appear only once while the \textit{banana} class appears thousands of times, making it difficult for the \textit{bait} class to contribute to model training due to its low sample rate. Re-sampling is usually adopted to alleviate this issue, by increasing the number of minority class samples in each training batch. Recent works~\citep{soudry2018implicit, byrd2018effect} show that the global minimum of Softmax regression is independent of the mini-batch sampling process; our visualization in the supplementary material confirms this finding. As a result, a suitable re-sampling strategy can simplify the optimization landscape of Balanced Softmax under extremely imbalanced data distributions. \textbf{Over-balance.} The class-balanced sampler (CBS) is a common re-sampling strategy: CBS balances the number of samples for each class in a mini-batch. It effectively helps to re-train the linear classifier in the decoupled training setup~\citep{Kang2020DecouplingRA}. However, in our experiments, we find that naively combining CBS with Balanced Softmax may worsen the performance.
We first theoretically analyze the cause of the performance drop. When the linear classifier's weight $\theta_j$ for class $j$ converges, i.e., $\sum_{s=1}^B\frac{\partial L^{(s)}}{\partial \theta_j} = 0$, we should have: \begin{equation}\label{eq:gradient} \sum_{s=1}^{B} \frac{\partial L^{(s)}}{\partial \theta_j} = \sum_{s=1}^{B/k} f(x^{(s)}_{y=j})(1-\hat{\phi}^{(s)}_j) - \sum_{i \neq j}^k\sum_{s=1}^{B/k}f(x^{(s)}_{y=i})\hat{\phi}^{(s)}_j = 0, \end{equation} where $B$ is the batch size and $k$ is the number of classes; CBS ensures that there are $B/k$ samples per class. We notice that $\hat{\phi}_j$, the output of Balanced Softmax, exerts a varying, minority-favoring effect on the importance of each class. We use an extreme case to demonstrate this effect. When the classification loss converges to 0, the conditional probability of the correct class $\hat{\phi}_y$ is expected to be close to 1. For any positive sample $x^+$ and negative sample $x^-$ of class $j$, we have $\hat{\phi}_j (x^+) \approx \phi_j (x^+)$ and $\hat{\phi}_j (x^-) \approx \frac{n_j}{n_i}\phi_j (x^-)$ when $\hat{\phi}_y \to 1$. Eqn.~\ref{eq:gradient} can then be rewritten as \begin{equation}\label{eq:gradient_final} \frac{1}{n_j^2} \mathbb{E}_{(x^+,y=j)\sim D_{train}}[f(x^+)(1-\phi_j)] - \sum_{i\neq j}^k \frac{1}{n_i^2} \mathbb{E}_{(x^-,y=i)\sim D_{train}}[f(x^-)\phi_j] \approx 0 \end{equation} where $D_{train}$ is the training set. The formal derivation of Eqn.~\ref{eq:gradient_final} is in the supplementary materials. Compared to the inverse loss weight, i.e., $1/n_j$ for class $j$, combining Balanced Softmax with CBS leads to the over-balance problem, i.e., an effective weight of $1/n_j^2$ for class $j$, which deviates from the optimal distribution. Although re-sampling does not affect the global minimum, an over-balanced, tail-class-dominated optimization process may lead to local minima that favor the minority classes. Moreover, Balanced Softmax's effect on the optimization process depends on the model's output, which makes hand-crafting a re-sampling strategy infeasible. \textbf{Meta Sampler.} To cope with CBS's over-balance issue, we introduce the Meta Sampler, a learnable version of CBS based on meta-learning, which explicitly learns the optimal sample rates. We first define the empirical loss obtained by sampling from a dataset $D$ as $L_{D}(\theta)=\mathbb{E}_{(x,y)\sim D}[l(\theta)]$ for the standard Softmax, and $\hat{L}_{D}(\theta)=\mathbb{E}_{(x,y)\sim D}[\hat{l}(\theta)]$ for Balanced Softmax, where $\hat{l}(\theta)$ is defined in Eqn.~\ref{eq:our_loss}. To estimate the optimal sample rates for different classes, we adopt a bi-level meta-learning strategy: we update the parameter $\psi$ of the sample distribution $\pi_\psi$ in the inner loop and update the classifier parameters $\theta$ in the outer loop, \begin{equation} \pi^*_\psi = \arg \min_{\psi}L_{D_{meta}}(\theta^*(\pi_\psi)) \quad s.t.\quad \theta^*(\pi_\psi) = \arg \min_{\theta}\hat{L}_{D_{q(x,y;\pi_\psi)}}(\theta), \end{equation} where $\pi_\psi^j = p(y=j;\psi)$ is the sample rate for class $j$, $D_{q(x,y;\pi_{\psi})}$ is the training set with class sample distribution $\pi_\psi$, and $D_{meta}$ is a meta set we introduce to supervise the inner loop optimization. We create the meta set by class-balanced sampling from the training set $D_{train}$; empirically, we found this sufficient for the inner loop optimization.
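As a preview of the three update steps detailed below, we give a schematic PyTorch-style sketch of one meta-update. All names are our own; for simplicity, the sketch uses the re-weighting variant (the Meta Reweighter employed in our LVIS experiments) with a single \texttt{nn.Linear} classifier on fixed features and a leaf tensor \texttt{psi} with \texttt{requires\_grad=True}, since differentiable discrete sampling additionally requires the Gumbel-Softmax trick discussed below.
\begin{verbatim}
import torch
import torch.nn.functional as F

def meta_update(model, psi, feats, labels, meta_feats, meta_labels,
                class_counts, lr_theta=0.1, lr_psi=0.01):
    log_n = torch.log(class_counts.float())
    pi = torch.softmax(psi, dim=0)   # learnable per-class sample rates
    w = pi[labels]                   # per-instance rates (re-weighting proxy)

    # Step 1: one differentiable SGD step on the weighted Balanced Softmax
    # loss, giving a surrogate classifier (w_tilde, b_tilde).
    inner = (w * F.cross_entropy(model(feats) + log_n, labels,
                                 reduction='none')).mean()
    grads = torch.autograd.grad(inner, list(model.parameters()),
                                create_graph=True)
    w_tilde, b_tilde = [p - lr_theta * g
                        for p, g in zip(model.parameters(), grads)]

    # Step 2: standard Softmax cross-entropy of the surrogate on the meta
    # set; back-propagate through the inner step to update psi.
    meta_loss = F.cross_entropy(meta_feats @ w_tilde.t() + b_tilde,
                                meta_labels)
    psi_grad, = torch.autograd.grad(meta_loss, psi)
    with torch.no_grad():
        psi -= lr_psi * psi_grad

    # Step 3: update the classifier itself with the Balanced Softmax loss.
    outer = (w.detach() * F.cross_entropy(model(feats) + log_n, labels,
                                          reduction='none')).mean()
    theta_grads = torch.autograd.grad(outer, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), theta_grads):
            p -= lr_theta * g
\end{verbatim}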
The intuition behind this bi-level optimization strategy is the following: we want to learn the sample distribution parameter $\psi$ such that the network, parameterized by $\theta$, achieves the best performance on the meta dataset $D_{meta}$ when trained with samples drawn from $\pi_\psi$. We first compute the per-instance sample rate $\rho_i = \pi_\psi^{c(i)} / \sum_{i=1}^{n} \pi_\psi^{c(i)}$, where $c(i)$ denotes the label class of instance $i$ and $n$ is the total number of training samples, and sample a training batch $B_\psi$ from the parameterized multinomial distribution $\rho$. Then we optimize the model in a meta-learning setup by \begin{enumerate} \item sample a mini-batch $B_\psi$ given the distribution $\pi_\psi$ and perform one gradient descent step to get a surrogate model parameterized by $\tilde{\theta}$, via $\tilde{\theta} \leftarrow \theta - \nabla_{\theta} \hat{L}_{B_\psi}(\theta)$. \item compute the loss $L_{D_{meta}}(\tilde{\theta})$ of the surrogate model on the meta dataset $D_{meta}$, using the standard cross-entropy loss with Softmax, and optimize the sample distribution parameter by $\psi \leftarrow \psi-\nabla_{\psi} L_{D_{meta}}(\tilde{\theta})$. \item update the model parameters $\theta \leftarrow \theta - \nabla_{\theta} \hat{L}_{B_\psi} (\theta)$ with Balanced Softmax. \end{enumerate} However, sampling from a discrete distribution is not differentiable by nature. To allow end-to-end training of the sampling process, we apply the Gumbel-Softmax reparameterization trick~\citep{categorical} when forming the mini-batch $B_\psi$. A detailed explanation can be found in the supplementary materials. \section{Experiments} \subsection{Experimental Setup} \textbf{Datasets.} We perform experiments on long-tailed image classification datasets, including CIFAR-10-LT~\citep{Krizhevsky09cifa10}, CIFAR-100-LT~\citep{Krizhevsky09cifa10}, ImageNet-LT~\citep{Liu2019LargeScaleLR} and Places-LT~\citep{zhou2017places}, and on one long-tailed instance segmentation dataset, LVIS~\citep{gupta2019lvis}. We define the imbalance factor of a dataset as the number of training instances in the largest class divided by that of the smallest. Details of the datasets are given in Table~\ref{tab:dataset}. \begin{wraptable}{r}{0cm} \centering \fontsize{8}{11}\selectfont \centering \begin{tabular}{lcc} \toprule Dataset & \#Classes & Imbalance Factor\\ \hline CIFAR-10-LT~\citep{Krizhevsky09cifa10} & 10 & 10-200 \\ CIFAR-100-LT~\citep{Krizhevsky09cifa10} & 100 & 10-200 \\ ImageNet-LT~\citep{Liu2019LargeScaleLR} & 1,000 & 256 \\ Places-LT~\citep{zhou2017places} & 365 & 996 \\ LVIS~\citep{gupta2019lvis} & 1,230 & 26,148 \\ \bottomrule \end{tabular} \caption{Details of long-tailed datasets. For both CIFAR-10 and CIFAR-100, we report results with different imbalance factors.} \label{tab:dataset} \end{wraptable} \textbf{Evaluation Setup.} For classification tasks, after training on the long-tailed dataset, we evaluate the models on the corresponding balanced test/validation dataset and report top-1 accuracy. We also report accuracy on three splits of the set of classes: Many-shot (more than 100 images), Medium-shot (20 $\sim$ 100 images), and Few-shot (less than 20 images). Since results on small datasets, i.e., CIFAR-10/100-LT, tend to show large variance, we report the mean and standard error over 3 repeated experiments. We give the details of the long-tailed dataset generation in the supplementary materials. For LVIS, we use the official training and testing splits.
Average Precision (AP) in COCO style~\citep{Lin2014MicrosoftCC} is reported for both bounding boxes and instance masks. Our implementation details can be found in the supplementary materials. \begin{figure*}[!htb] \centering \begin{tabular}{c c c} \includegraphics[width=0.3\linewidth]{imgs/imba10.pdf}& \includegraphics[width=0.3\linewidth]{imgs/imba100.pdf}& \includegraphics[width=0.3\linewidth]{imgs/imba200.pdf} \\ Imbalance Factor = 10 & Imbalance Factor = 100 & Imbalance Factor = 200 \end{tabular} \centering \caption{Experiment on CIFAR-100-LT. The x-axis shows the class labels with decreasing numbers of training samples and the y-axis shows the marginal likelihood $p(y)$ on the test set. We use end-to-end training for this experiment. Balanced Softmax is more stable under a high imbalance factor compared to the Softmax baseline and the SOTA method, Equalization Loss (EQL). } \label{fig:smooths} \end{figure*} \subsection{Long-Tailed Image Classification} We present the results for long-tailed image classification in Table~\ref{tab:cifar} and Table~\ref{tab:imaagenet}. On all datasets, BALMS achieves SOTA performance compared with all end-to-end training and decoupled training methods. In particular, we notice that BALMS demonstrates a clear advantage in two cases: 1) when the imbalance factor is high; for example, on CIFAR-10 with an imbalance factor of 200, BALMS outperforms the SOTA method, LWS~\citep{Kang2020DecouplingRA}, by 3.4\%; 2) when the dataset is large; BALMS achieves comparable performance with cRT on ImageNet-LT, which is a relatively small dataset, but significantly outperforms cRT on a larger dataset, Places-LT. In addition, we study the robustness of the proposed Balanced Softmax compared to the standard Softmax and the SOTA loss function for long-tailed problems, EQL~\citep{Tan2020EqualizationLF}. We visualize the marginal likelihood $p(y)$, i.e., the sum of scores for each class, on the test set with different losses and different imbalance factors in Fig.~\ref{fig:smooths}. Balanced Softmax clearly gives a more balanced likelihood under different imbalance factors. Moreover, we show the Meta Sampler's effect on $p(y)$ in Fig.~\ref{fig:over-balance}. Compared to CBS, the Meta Sampler significantly relieves the over-balance issue. \begin{figure*}[htb!] \centering \begin{tabular}{c c} \includegraphics[width=0.4\linewidth]{imgs/py_cifar10.pdf}& \includegraphics[width=0.4\linewidth]{imgs/py_cifar100.pdf} \\ CIFAR-10-LT & CIFAR-100-LT \end{tabular} \caption{ Visualization of $p(y)$ on the test set with the Meta Sampler and CBS. The x-axis shows the class labels with decreasing numbers of training samples and the y-axis shows the marginal likelihood $p(y)$ on the test set. The result is on CIFAR-10/100-LT with imbalance factor 200. We use decoupled training for this experiment. BS: Balanced Softmax. BS + CBS shows a clear bias towards the tail classes, especially on CIFAR-100-LT. Compared to BS + CBS, BS + Meta Sampler effectively alleviates the over-balance problem.
} \label{fig:over-balance} \end{figure*} \begin{table}[t] \begin{center} \fontsize{8}{11}\selectfont \centering \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}|ccc|ccc} \toprule Dataset & \multicolumn{3}{c|}{CIFAR-10-LT} & \multicolumn{3}{c}{CIFAR-100-LT} \\ \hline Imbalance Factor & 200 & 100 & 10 & 200 & 100 & 10 \\ \hline End-to-end training \\ \hline Softmax & 71.2 $\pm$ 0.3 & 77.4 $\pm$ 0.8 & 90.0 $\pm$ 0.2 & 41.0 $\pm$ 0.3 & 45.3 $\pm$ 0.3 & 61.9 $\pm$ 0.1 \\ CBW & 72.5 $\pm$ 0.2 & 78.6 $\pm$ 0.6 & 90.1 $\pm$ 0.2 & 36.7 $\pm$ 0.2 & 42.3 $\pm$ 0.8 & 61.4 $\pm$ 0.3 \\ CBS & 68.3 $\pm$ 0.3& 77.8 $\pm$ 2.2 & 90.2 $\pm$ 0.2 & 37.8 $\pm$ 0.1 &42.6 $\pm$ 0.4 &61.2 $\pm$ 0.3 \\ Focal Loss~\citep{Lin2017FocalLF} & 71.8 $\pm$ 2.1 & 77.1 $\pm$ 0.2 & 90.3 $\pm$ 0.2 & 40.2 $\pm$ 0.5 &43.8 $\pm$ 0.1 & 60.0 $\pm$ 0.6 \\ Class Balanced Loss~\citep{Cui2019ClassBalancedLB} ~& 72.6 $\pm$ 1.8 & 78.2 $\pm$ 1.1 & 89.9 $\pm$ 0.3 & 39.9 $\pm$ 0.1 & 44.6 $\pm$ 0.4 & 59.8 $\pm$ 1.1\\ LDAM Loss~\citep{LDAM} & 73.6 $\pm$ 0.1 & 78.9 $\pm$ 0.9 & 90.3 $\pm$ 0.1 & 41.3 $\pm$ 0.4 & 46.1 $\pm$ 0.1 & 62.1 $\pm$ 0.3 \\ Equalization Loss~\citep{Tan2020EqualizationLF}~ & 74.6 $\pm$ 0.1 & 78.5 $\pm$ 0.1 & 90.2 $\pm$ 0.2 & 43.3 $\pm$ 0.1 & 47.4 $\pm$ 0.2 & 60.5 $\pm$ 0.6 \\ \hline Decoupled training \\ \hline cRT~\citep{Kang2020DecouplingRA} & 76.6 $\pm$ 0.2 & 82.0 $\pm$ 0.2 & 91.0 $\pm$ 0.0 & 44.5 $\pm$ 0.1 & 50.0 $\pm$ 0.2 & 63.3 $\pm$ 0.1 \\ LWS~\citep{Kang2020DecouplingRA} & 78.1 $\pm$ 0.0 & 83.7 $\pm$ 0.0 & 91.1 $\pm$ 0.0 & 45.3 $\pm$ 0.1 & 50.5 $\pm$ 0.1 & \textbf{63.4} $\pm$ 0.1 \\ \hline BALMS & \textbf{81.5} $\pm$ 0.0 & \textbf{84.9} $\pm$ 0.1 & \textbf{91.3} $\pm$ 0.1 & \textbf{45.5} $\pm$ 0.1& \textbf{50.8} $\pm$ 0.0 & 63.0 $\pm$ 0.1\\ \hline \end{tabular*} \end{center} \caption{ Top-1 accuracy for CIFAR-10/100-LT. Softmax: the standard cross-entropy loss with Softmax. CBW: class-balanced weighting. CBS: class-balanced sampling. LDAM Loss: LDAM loss without DRW. Results of Focal Loss, Class Balanced Loss, LDAM Loss and Equalization Loss are reproduced with the optimal hyper-parameters reported in their original papers. BALMS generally outperforms SOTA methods, especially when the imbalance factor is high. Note that for all compared methods, we reproduce higher accuracy than reported in the original papers.
Comparison with their originally reported results is provided in the supplementary materials.} \label{tab:cifar} \end{table} \begin{table}[t] \fontsize{8}{11}\selectfont \begin{center} \centering \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}|cccc|cccc} \toprule Dataset & \multicolumn{4}{c|}{ImageNet-LT} & \multicolumn{4}{c}{Places-LT} \\ \hline Accuracy & Many & Medium & Few & Overall & Many & Medium & Few & Overall \\ \hline End-to-end training \\ \hline Lifted Loss~\citep{songCVPR16lifted} & 35.8 & 30.4 & 17.9 & 30.8 & 41.1 & 35.4 & 24.0 & 35.2 \\ Focal Loss~\citep{Lin2017FocalLF} & 36.4 & 29.9 & 16.0 & 30.5 & 41.1 & 34.8 & 22.4 & 34.6 \\ Range Loss~\citep{zhang2017rangeloss} & 35.8 & 30.3 & 17.6 & 30.7 & 41.1 & 35.4 & 23.2 & 35.1 \\ OLTR~\citep{Liu2019LargeScaleLR} & 43.2 & 35.1 & 18.5 & 35.6 & \textbf{44.7} & 37.0 & 25.3 & 35.9 \\ Equalization Loss~\citep{Tan2020EqualizationLF}~ & - & - & - & 36.4 & - & - & - & - \\ \hline Decoupled training \\ \hline cRT~\citep{Kang2020DecouplingRA} & - & - & - & \textbf{41.8} & 42.0 & 37.6 & 24.9 & 36.7 \\ LWS~\citep{Kang2020DecouplingRA} & - & - & - & 41.4 & 40.6 & 39.1 & 28.6 & 37.6 \\ \hline BALMS & \textbf{50.3} & \textbf{39.5} & \textbf{25.3} & \textbf{41.8} & 41.2 & \textbf{39.8} & \textbf{31.6} & \textbf{38.7} \\ \hline \end{tabular*} \end{center} \caption{Top-1 accuracy on ImageNet-LT and Places-LT. We present results with ResNet-10~\citep{Liu2019LargeScaleLR} for ImageNet-LT and an ImageNet pre-trained ResNet-152 for Places-LT. Baseline results are taken from the original papers. BALMS generally outperforms the SOTA models.} \label{tab:imaagenet} \end{table} \subsection{Long-Tailed Instance Segmentation} The LVIS dataset is one of the most challenging datasets in the vision community. As shown in Table~\ref{tab:dataset}, it has a much higher imbalance factor than the rest (26,148 vs. less than 1,000) and contains many very few-shot classes. Compared to the image classification datasets, which are relatively small and have lower imbalance factors, the LVIS dataset gives a more reliable evaluation of the performance of long-tailed learning methods. Since one image might contain multiple instances from several categories, we here use the Meta Reweighter, a re-weighting version of the Meta Sampler, instead of the Meta Sampler. As shown in Table~\ref{tab:lvis}, BALMS achieves the best results among all the approaches and outperforms the others by a large margin, especially on rare classes, where BALMS achieves an average precision of 19.6 while the best of the rest is 14.6. The results suggest that, with the Balanced Softmax function and the learnable Meta Reweighter, BALMS is able to provide more balanced gradients and to tackle extremely imbalanced long-tailed tasks. In particular, LVIS is composed of images of complex daily scenes with naturally long-tailed categories. For this reason, we believe BALMS is applicable to real-world long-tailed visual recognition challenges.
\begin{table}[t] \fontsize{8}{11}\selectfont \begin{center} \centering \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}} c c c c c} \toprule Method & AP\textsubscript{m} & AP\textsubscript{f} & AP\textsubscript{c} & AP\textsubscript{r} & AP\textsubscript{b} \\ \hline Softmax & 23.7 & 27.3 & 24.0 & 13.6 & 24.0 \\ Sigmoid & 23.6 & 27.3 & 24.0& 12.7 & 24.0\\ Focal Loss~\citep{Lin2017FocalLF} & 23.4 & 27.5 & 23.5 & 12.8 & 23.8 \\ Class Balanced Loss~\citep{Cui2019ClassBalancedLB} & 23.3 & 27.3 & 23.8 & 11.4 & 23.9 \\ LDAM~\citep{LDAM} & 24.1 & 26.3 & 25.3 & 14.6 & 24.5 \\ LWS~\citep{Kang2020DecouplingRA} & 23.8 & 26.8 & 24.4 & 14.4 & 24.1 \\ Equalization Loss~\citep{Tan2020EqualizationLF} & 25.2 & 26.6 & 27.3 & 14.6 & 25.7 \\ \hline Balanced Softmax$^\dagger$ & 26.3 & \textbf{28.8} & 27.3 & 16.2 & 27.0 \\ BALMS & \textbf{27.0} & 27.5 & \textbf{28.9} & \textbf{19.6} & \textbf{27.6}\\ \hline \end{tabular*} \end{center} \caption{ Results for LVIS dataset. AP\textsubscript{m} denotes Average Precision of masks. AP\textsubscript{b} denotes Average Precision of bounding box. AP\textsubscript{f}, AP\textsubscript{c} and AP\textsubscript{r} denote Average Precision of masks on frequent classes, common classes and rare classes. $\dagger$: the multiple binary logistic regression variant of Balanced Softmax, more details in the supplementary material. BALMS significantly outperforms SOTA models given high imbalance factor in LVIS. All compared methods are reproduced with higher AP than reported in the original papers.} \label{tab:lvis} \end{table} \begin{table}[!htb] \fontsize{8}{11}\selectfont \begin{center} \centering \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}|ccc|ccc} \toprule Dataset &\multicolumn{3}{c|}{CIFAR-10-LT} & \multicolumn{3}{c}{CIFAR-100-LT} \\ \hline Imbalance Factor & 200 & 100 & 10 & 200 & 100 & 10 \\ \hline End-to-end training \\ \hline (1) Softmax & 71.2 $\pm$ 0.3 & 77.4 $\pm$ 0.8 & 90.0 $\pm$ 0.2 & 41.0 $\pm$ 0.3 & 45.3 $\pm$ 0.3 & 61.9 $\pm$ 0.1 \\ (2) Balanced Softmax $\frac{1}{4}$ & 71.6 $\pm$ 0.7 & 78.4 $\pm$ 0.9 & 90.5 $\pm$ 0.1 & 41.9 $\pm$ 0.2& 46.4 $\pm$ 0.7 & 62.6 $\pm$ 0.3\\ (3) Balanced Softmax & 79.0 $\pm$ 0.8& 83.1 $\pm$ 0.4 & 90.9 $\pm$ 0.4& \textbf{45.9} $\pm$ 0.3& 50.3 $\pm$ 0.3 & 63.1 $\pm$ 0.2\\ \hline Decoupled training \\ \hline (4) Balanced Softmax $\frac{1}{4}$+DT & 72.2 $\pm$ 0.1 & 79.1 $\pm$ 0.2 & 90.2 $\pm$ 0.0 & 42.3 $\pm$ 0.0& 46.1 $\pm$ 0.1 & 62.5 $\pm$ 0.1\\ (5) Balanced Softmax $\frac{1}{4}$+DT+MS ~& 76.2 $\pm$ 0.4 & 81.4 $\pm$ 0.1 & 91.0 $\pm$ 0.1 & 44.1 $\pm$ 0.2& 49.2 $\pm$ 0.1 & 62.8 $\pm$ 0.2\\ (6) Balanced Softmax+DT & 78.6 $\pm$ 0.1& 83.7 $\pm$ 0.1 & 91.2 $\pm$ 0.0& 45.1 $\pm$ 0.0& 50.4 $\pm$ 0.0 & 63.4 $\pm$ 0.0\\ (7) Balanced Softmax+CBS+DT & 80.6 $\pm$ 0.1 & 84.8 $\pm$ 0.0 &91.2 $\pm$ 0.1 & 42.0 $\pm$ 0.0 & 47.4 $\pm$ 0.2 & 62.3 $\pm$ 0.0\\ (8) DT+MS & 73.6 $\pm$ 0.2 & 79.9 $\pm$ 0.4 &90.9 $\pm$ 0.1 & 44.2 $\pm$ 0.1 & 49.2 $\pm$ 0.1 & 63.0 $\pm$ 0.0\\ (9) Balanced Softmax+DT+MR & 79.2 $\pm$ 0.0 & 84.1 $\pm$ 0.0 &91.2 $\pm$ 0.1 & 45.3 $\pm$ 0.3& \textbf{50.8} $\pm$ 0.0 & \textbf{63.5} $\pm$ 0.1\\ \hline (10) BALMS & \textbf{81.5} $\pm$ 0.0 & \textbf{84.9} $\pm$ 0.1 & \textbf{91.3} $\pm$ 0.1 & 45.5 $\pm$ 0.1& \textbf{50.8} $\pm$ 0.0 & 63.0 $\pm$ 0.1\\ \hline \end{tabular*} \end{center} \caption{Component Analysis on CIFAR-10/100-LT. CBS: class-balanced sampling. DT: decoupled training without CBS. MS: Meta Sampler. MR: Meta Reweighter. Balanced Softmax $\frac{1}{4}$: the loss variant in Eqn.~\ref{eq:bound_loss}. 
Balanced Softmax and Meta Sampler both contribute to the final performance. } \label{tab:cifarablation} \end{table} \subsection{Component Analysis} We conduct an extensive component analysis on the CIFAR-10/100-LT datasets to further understand the effect of each proposed component of BALMS. The results are presented in Table~\ref{tab:cifarablation}. \textbf{Balanced Softmax.} Comparing (1), (2) with (3), and (5), (8) with (10), we observe that Balanced Softmax gives a clear improvement to the overall performance, under both the end-to-end and the decoupled training setups. It successfully accommodates the distribution shift between training and testing. In particular, we observe that Balanced Softmax $\frac{1}{4}$, which we derive in Eqn.~\ref{eq:bound_loss}, does not yield ideal results compared to our proposed Balanced Softmax in Eqn.~\ref{eq:our_loss}. \textbf{Meta Sampler.} From (6), (7), (9) and (10), we observe that Meta Sampler generally improves the performance, compared with both not using it and using its variants. We notice that the performance gain is larger when the imbalance factor is higher, which is consistent with our observation in the LVIS experiments. In (9) and (10), Meta Sampler generally outperforms Meta Reweighter, suggesting that the discrete sampling process leads to more efficient optimization. Comparing (7) and (10), we can see that Meta Sampler addresses the over-balancing issue discussed in Section 3.2. \textbf{Decoupled Training.} Comparing (2) with (4) and (3) with (6), we see that the decoupled training scheme and Balanced Softmax are two orthogonal components, and we can benefit from both at the same time. \section{Conclusion} We have introduced BALMS for long-tailed visual recognition tasks. BALMS tackles the distribution shift between training and testing by combining meta-learning with generalization error bound theory: it optimizes a Balanced Softmax function which theoretically minimizes the generalization error bound, and it improves the optimization on large long-tailed datasets by learning an effective Meta Sampler. BALMS generally outperforms SOTA methods on 4 image classification datasets and 1 instance segmentation dataset by a large margin, especially when the imbalance factor is high. However, Meta Sampler is computationally expensive in practice and the optimization on large datasets is slow. In addition, the Balanced Softmax function only approximately guarantees a generalization error bound. Future work may extend the current framework to a wider range of tasks, e.g., machine translation, and correspondingly design tighter bounds and computationally efficient meta-learning algorithms. \section{Acknowledgements} This work is supported in part by the General Research Fund through the Research Grants Council of Hong Kong under grants (\textit{Nos.} CUHK14208417, CUHK14207319), in part by the Hong Kong Innovation and Technology Support Program (\textit{No.} ITS/312/18FX). \section*{Broader Impact} Due to the Zipfian distribution of categories in real life, algorithms and models with exceptional performance on research benchmarks may not remain powerful in the real world. BALMS, as a light-weight method, only adds minimal computational cost during training and is compatible with most of the existing works for visual recognition. As a result, BALMS could help bridge the gap between research benchmarks and industrial applications for visual recognition. However, there can be some potential negative effects.
As BALMS empowers deep classifiers with stronger recognition capability on long-tailed distributions, the application of such classification algorithms can be further extended to more real-life scenarios. We should be cautious about potential misuse of the proposed method; depending on the scenario, it might cause negative effects, for example on privacy.
\thispagestyle{empty}

\begin{center} {\Large \bf Graded off-diagonal Bethe ansatz solution of the $SU(2|2)$ spin chain model with generic integrable boundaries} \end{center}

\begin{center} \large \bf Xiaotian Xu${}^{a}$, Junpeng Cao${}^{a,b,c,d}$, Yi Qiao${}^{a,e}$, Wen-Li Yang${}^{d,e,f,g}$\footnote{Corresponding author: wlyang@nwu.edu.cn}, Kangjie Shi${}^{e}$ and Yupeng Wang${}^{a,d,h}$\footnote{Corresponding author: yupeng@iphy.ac.cn} \end{center}

\begin{center} ${}^a$ Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China\\ ${}^b$ Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China \\ ${}^c$ School of Physical Sciences, University of Chinese Academy of Sciences, Beijing, China\\ ${}^d$ Peng Huanwu Center for Fundamental Theory, Xian 710127, China\\ ${}^e$ Institute of Modern Physics, Northwest University, Xian 710127, China\\ ${}^f$ Shaanxi Key Laboratory for Theoretical Physics Frontiers, Xian 710127, China\\ ${}^g$ Physics school, Northwest University, Xian 710127, China\\ ${}^h$ The Yangtze River Delta Physics Research Center, Liyang, Jiangsu, China \end{center}

\vspace{1cm}

\begin{abstract} \bigskip The graded off-diagonal Bethe ansatz method is proposed to study supersymmetric quantum integrable models (i.e., quantum integrable models associated with superalgebras). As an example, the exact solutions of the $SU(2|2)$ vertex model with both periodic and generic open boundary conditions are constructed. By generalizing the fusion techniques to the supersymmetric case, a closed set of operator product identities for the transfer matrices is derived, which allows us to give the eigenvalues in terms of homogeneous or inhomogeneous $T-Q$ relations. The method and results provided in this paper can be generalized to other high-rank supersymmetric quantum integrable models.

\vspace{1truecm} \noindent {\it PACS:} 75.10.Pq, 02.30.Ik, 71.10.Pm

\noindent {\it Keywords}: Bethe Ansatz; Lattice Integrable Models; $T-Q$ Relation \end{abstract}

\newpage

\section{Introduction} \label{intro} \setcounter{equation}{0}

Quantum integrable models \cite{Bax82} play important roles in theoretical physics, condensed matter physics, field theory and mathematical physics, since the exact solutions of those models provide useful benchmarks for understanding a variety of many-body problems. During the past several decades, much attention has been paid to obtaining exact solutions of integrable systems with unusual boundary conditions. With the development of topological physics and string theory, the study of off-diagonal boundaries has become an interesting issue, and many interesting phenomena such as edge states, Majorana zero modes, and topological excitations have been found. Due to the off-diagonal elements contained in the boundary terms, the particle numbers with different intrinsic degrees of freedom are no longer conserved and the usual $U(1)$ symmetry is broken.
This leads to the absence of a proper reference state, which is crucial in the conventional Bethe ansatz scheme. To overcome this problem, several interesting methods \cite{Alc87,Skl88, Nep02, Cao03, Yan04, Gie05, Yan07, Bas06, Bas07, Bas10, Bas13,Fra08, Fra11, Nic13,cao13, wang15, Bel13, Bel15, Pim15, Ava15} have been proposed. A remarkable one is the off-diagonal Bethe ansatz (ODBA) \cite{cao13, wang15}, which allows us to construct the exact spectrum systematically. The nested ODBA has also been developed to deal with models associated with different Lie algebras such as $A_n$ \cite{Cao14, Cao15}, $A_2^{(2)}$ \cite{Hao14}, $B_2$ \cite{ Li_119}, $C_2$ \cite{Li_219} and $D_3^{(1)}$ \cite{Li_319}. Nevertheless, there exists another kind of high-rank integrable models, namely those related to superalgebras \cite{Fra00}, such as the $SU(m|n)$ model, the Hubbard model, and the supersymmetric $t-J$ model. The $SU(m|n)$ model has many applications in the AdS/CFT correspondence \cite{Mal99,Bei12}, while the Hubbard and supersymmetric $t-J$ models have many applications in the theory of strongly correlated electrons. These models with $U(1)$ symmetry have been studied extensively \cite{yb, Ess05,Perk81,Vega91,Yue96,Yue1,Yue2,Yue3,Yue4,Yue5,Mar97}. A general method to approach such kinds of models with off-diagonal boundaries is still missing. In this paper, we develop a graded version of the nested ODBA to study supersymmetric integrable models (integrable models associated with superalgebras). As an example, the $SU(2|2)$ model with both periodic and off-diagonal boundaries is studied. The structure of the paper is as follows. In section 2, we study the $SU(2|2)$ model with periodic boundary condition. A closed set of operator identities is constructed by using the fusion procedure. These identities allow us to characterize the eigenvalues of the transfer matrices in terms of homogeneous $T-Q$ relations. In section 3, we study the model with generic open boundary conditions. It is demonstrated that similar identities can be constructed and the spectrum can be expressed in terms of an inhomogeneous $T-Q$ relation. Section 4 is devoted to concluding remarks. Some technical details can be found in the appendices. \section{$SU(2|2)$ model with periodic boundary condition} \label{c2} \setcounter{equation}{0} \subsection{The system} Let ${V}$ denote a $4$-dimensional graded linear space with a basis $\{|i\rangle|i=1,\cdots,4\}$, where the Grassmann parities are $p(1)=0$, $p(2)=0$, $p(3)=1$ and $p(4)=1$; this endows $V$ with the structure of the fundamental representation of the $SU(2|2)$ Lie superalgebra. The dual space is spanned by the dual basis $\{\langle i|\,\,|i=1,\cdots,4\}$ with the inner product $\langle i|j\rangle=\delta_{ij}$. Let us further introduce the ${Z_{2}}$-graded $N$-tensor space ${V}\otimes {V}\otimes\cdots\otimes{V}$, which has a basis $\{|i_1,i_2,\cdots,i_N\rangle=|i_N\rangle_N\,\cdots|i_2\rangle_2\,|i_1\rangle_1\,|\,i_l=1,\cdots,4;\,l=1,\cdots,N\}$, and its dual with a basis $\{\langle i_1,i_2,\cdots,i_N|=\langle i_1|_1\,\langle i_2|_2\,\cdots\langle i_N|_N\,|\,i_l=1,\cdots,4;\,l=1,\cdots,N\}$. For a matrix $A\in {\rm End}({V})$, $A_j$ denotes the super embedding operator in the ${Z_{2}}$-graded $N$-tensor space ${V}\otimes {V}\otimes\cdots\otimes{V}$, which acts as $A$ on the $j$-th space and as the identity on the other factor spaces. Similarly, for a matrix $R_{ij}\in {\rm End}({V_i}\otimes {V_j})$, $R_{ij}$ is the super embedding operator in the ${Z_{2}}$-graded tensor space, which acts as the identity on the factor spaces except for the $i$-th and $j$-th ones.
The super tensor product of two operators is the graded one satisfying the rule\footnote{For $A=\sum_{\alpha,\,\beta}A_{\beta}^{\alpha}{|\beta\rangle}{\langle\alpha|}$ and $B=\sum_{\delta,\,\gamma}B_{\delta}^{\gamma}{|\delta\rangle}{\langle\gamma|}$, the super tensor product $A\otimes B=\sum_{\a,\b,\gamma,\delta}(A_{\beta}^{\alpha}{|\beta\rangle}_1{\langle\alpha|}_{1})\,\, (B_{\delta}^{\gamma}{|\delta\rangle}_2{\langle\gamma|}_2)=\sum_{\a,\b,\gamma,\delta}(-1)^{p(\delta)[p(\alpha)+p(\beta)]}A_{\beta}^{\alpha}B_{\delta}^{\gamma}{|\delta\rangle}_2 {|\beta\rangle}_1\,{\langle\alpha|}_{1}{\langle\gamma|}_2$.} $(A\otimes B)_{\beta \delta}^{\alpha \gamma}=(-1)^{[p(\alpha)+p(\beta)]p(\delta)}A^{\alpha}_{\beta}B^{\gamma}_{\delta}$ \cite{Gra13}. The supersymmetric $SU(2|2)$ model is described by the $16\times 16$ $R$-matrix \begin{equation} \label{rm} R_{12}(u) =\left( \begin{array}{cccc|cccc|cccc|cccc} u+\eta & & & & & & & & & & & & & & & \\ & u & & & \eta & & & & & & & & & & & \\ & & u & & & & & & \eta & & & & & & & \\ & & & u & & & & & & & & & \eta & & & \\ \hline & \eta & & & u & & & & & & & & & & & \\ & & & & & u+\eta & & & & & & & & & & \\ & & & & & & u & & & \eta & & & & & & \\ & & & & & & & u & & & & & & \eta & & \\ \hline & & \eta & & & & & & u & & & & & & & \\ & & & & & & \eta & & & u & & & & & & \\ & & & & & & & & & & u-\eta & & & & & \\ & & & & & & & & & & & u & & & -\eta & \\ \hline & & & \eta & & & & & & & & & u & & & \\ & & & & & & & \eta & & & & & & u & & \\ & & & & & & & & & & & -\eta & & & u & \\ & & & & & & & & & & & & & & & u-\eta \\ \end{array} \right), \end{equation} where $u$ is the spectral parameter and $\eta$ is the crossing parameter. The $R$-matrix (\ref{rm}) enjoys the following properties \begin{eqnarray} {\rm regularity}&:&R _{12}(0)=\eta P_{12},\nonumber\\[4pt] {\rm unitarity}&:&R_{12}(u)R_{21}(-u) = \rho_1(u)\times {\rm id},\nonumber\\[4pt] {\rm crossing-unitarity}&:&R_{12}^{st_1}(-u)R_{21}^{st_1}(u)=\rho_2(u)\times {\rm id},\nonumber \end{eqnarray} where $P_{12}$ is the $Z_2$-graded permutation operator with the definition \begin{eqnarray} P_{\beta_{1}\beta_{2}}^{\alpha_{1}\alpha_{2}}=(-1)^{p(\alpha_{1})p(\alpha_{2})} \delta_{\alpha_{1}\beta_{2}} \delta_{\beta_{1}\alpha_{2}}, \end{eqnarray} $R_{21}(u)=P_{12}R_{12}(u)P_{12}$, $st_i$ denotes the super transposition in the $i$-th space $(A^{st_i})_{ij}=A_{ji}(-1)^{p(i)[p(i)+p(j)]}$, and the functions $\rho_1(u)$ and $\rho_2(u)$ are given by \begin{eqnarray} \rho_1(u)=-({u}-\eta)({u}+\eta), \quad \rho_2(u)=-u^2. \end{eqnarray} The $R$-matrix (\ref{rm}) satisfies the graded Yang-Baxter equation (GYBE) \cite{Kul1, Kul86} \begin{eqnarray} R_{12}(u-v)R_{13}(u)R_{23}(v)=R_{23}(v)R_{13}(u)R_{12}(u-v)\label{YBE}. \end{eqnarray} In terms of the matrix entries, GYBE (\ref{YBE}) reads \bea &&\sum_{\beta_1,\beta_2,\beta_3}R(u-v)_{\beta_1\beta_2}^{\alpha_1\alpha_2}R(u)_{\gamma_1\beta_3}^{\beta_1\alpha_3} R(v)_{\gamma_2\gamma_3}^{\beta_2\beta_3}(-1)^{(p(\beta_1)+p(\gamma_1))p(\beta_2)}\no\\[4pt] &&=\sum_{\beta_1,\beta_2,\beta_3}R(v)_{\beta_2\beta_3}^{\alpha_2\alpha_3}R(u)_{\beta_1\gamma_3}^{\alpha_1\beta_3} R(u-v)_{\gamma_1\gamma_2}^{\beta_1\beta_2}(-1)^{(p(\alpha_1)+p(\beta_1))p(\beta_2)}. 
\eea For the periodic boundary condition, we introduce the ``row-to-row" (or one-row) monodromy matrix $T_0(u)$ \begin{eqnarray} T_0 (u)=R _{01}(u-\theta_1)R _{02}(u-\theta_2)\cdots R _{0N}(u-\theta_N),\label{T1} \end{eqnarray} where the subscript $0$ denotes the auxiliary space $V_0$, the remaining tensor space $V^{\otimes N}$ is the physical or quantum space, $N$ is the number of sites and $\{\theta_j|j=1,\cdots,N\}$ are the inhomogeneous parameters. In the auxiliary space, the monodromy matrix (\ref{T1}) can be written as a $4\times 4$ matrix with operator-valued elements acting on ${\rm V}^{\otimes N}$. The explicit forms of the elements of the monodromy matrix (\ref{T1}) are \bea \Big\{[T_0(u)]^{a}_b\Big\}_{\beta_1\cdots\beta_N}^{\alpha_1\cdots\alpha_N}&=&\sum_{c_2,\cdots,c_N}R_{0N}(u)_{c_N\beta_N}^{a\alpha_N}\cdots R_{0j}(u)_{c_j\beta_j}^{c_{j+1}\alpha_j}\cdots R_{01}(u)_{b\beta_1}^{c_2\alpha_1}\no\\[4pt] &&\times (-1)^{\sum_{j=2}^{N}(p(\alpha_j)+p(\beta_j))\sum_{i=1}^{j-1}p(\alpha_i)}. \eea The monodromy matrix $T_0(u)$ satisfies the graded Yang-Baxter relation \begin{eqnarray} R _{12}(u-v)T_1 (u) T_2 (v) =T_2 (v)T_1 (u)R_{12}(u-v). \label{ybe} \end{eqnarray} The transfer matrix $t_p(u)$ of the system is defined as the super partial trace of the monodromy matrix in the auxiliary space \begin{eqnarray} t_p(u)=str_0\{T_0 (u)\}=\sum_{\alpha=1}^{4}(-1)^{p(\alpha)}[T_0(u)]_{\alpha}^{\alpha}. \end{eqnarray} From the graded Yang-Baxter relation (\ref{ybe}), one can prove that the transfer matrices with different spectral parameters commute with each other, $[t_p(u), t_p(v)]=0$. Thus $t_p(u)$ serves as the generating functional of all the conserved quantities, which ensures the integrability of the system. The model Hamiltonian is constructed as \cite{Yue1} \bea H_p=\frac{\partial \ln t_p(u)}{\partial u}|_{u=0,\{\theta_j\}=0}.\label{peri-Ham} \eea \subsection{Fusion} One remarkable property of the $R$-matrix is that it degenerates into projection operators at certain special points, which makes the fusion procedure possible \cite{Kul81, Kul82, Kar79, Kir86, Kir87, Tsu97}. It is easy to check that the $R$-matrix (\ref{rm}) has two degenerate points. The first one is $u=\eta$, at which we have \bea R _{12}(\eta)= 2\eta P_{12}^{(8)},\label{Int-R1}\eea where $P_{12}^{(8)}$ is an 8-dimensional supersymmetric projector \bea P_{12}^{(8)}=\sum_{i=1}^{8}|f_i\rangle \langle f_i|, \label{1-project}\eea and the corresponding basis vectors are \bea &&|f_1\rangle= |11\rangle, \quad |f_2\rangle =\frac{1}{\sqrt{2}}(|12\rangle +|21\rangle ), \quad|f_3\rangle =|22\rangle,\nonumber\\ &&|f_4\rangle=\frac{1}{\sqrt{2}}(|34\rangle -|43\rangle ),\quad |f_5\rangle=\frac{1}{\sqrt{2}}(|13\rangle +|31\rangle ),\quad|f_6\rangle=\frac{1}{\sqrt{2}}(|14\rangle +|41\rangle ),\nonumber\\ &&|f_7\rangle=\frac{1}{\sqrt{2}}(|23\rangle +|32\rangle ),\quad|f_8\rangle= \frac{1}{\sqrt{2}}(|24\rangle +|42\rangle ),\no \eea with the corresponding parities \bea p(f_1)=p(f_2)=p(f_3)=p(f_4)=0, \quad p(f_5)=p(f_6)=p(f_7)=p(f_8)=1. \no \eea The operator $P_{12}^{(8)}$ projects the original 16-dimensional tensor space $V_1\otimes V_2$ into a new 8-dimensional projected space spanned by $\{|f_i\rangle|i=1,\cdots,8\}$.
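As a quick check of (\ref{Int-R1}), note that one can read off from (\ref{rm}) the compact form
\bea
R_{12}(u)=u\,{\rm id}+\eta P_{12}\quad\Longrightarrow\quad R_{12}(\eta)=\eta\,({\rm id}+P_{12})=2\eta\, P_{12}^{(8)},\qquad P_{12}^{(8)}=\frac{1}{2}({\rm id}+P_{12}),\no
\eea
so that $P_{12}^{(8)}$ is precisely the graded symmetrizer. For example, $P_{12}|34\rangle=(-1)^{p(3)p(4)}|43\rangle=-|43\rangle$, hence $P_{12}^{(8)}|f_4\rangle=\frac{1}{2}({\rm id}+P_{12})\frac{1}{\sqrt{2}}(|34\rangle-|43\rangle)=|f_4\rangle$, confirming that $|f_4\rangle$ indeed lies in the projected space.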
Taking the fusion by the operator (\ref{1-project}), we construct the fused $R$-matrices \bea &&R_{\langle 12\rangle 3}(u)=(u+\frac{1}{2}\eta)^{-1}P^{ (8) }_{12}R _{23}(u-\frac{1}{2}\eta)R _{13}(u+\frac{1}{2}\eta)P^{ (8) }_{12}\equiv R_{\bar 1 3}(u), \label{hhgg-1}\\[4pt] &&R_{3 \langle 21\rangle}(u)=(u+\frac{1}{2}\eta)^{-1}P^{ (8) }_{21}R _{32}(u-\frac{1}{2}\eta)R _{31}(u+\frac{1}{2}\eta)P^{ (8) }_{21}\equiv R_{3\bar 1}(u), \label{hhgg-2} \eea where $P^{ (8) }_{21}$ can be obtained from $P^{ (8) }_{12}$ by exchanging $V_1$ and $V_2$. For simplicity, we denote the projected space as $V_{\bar 1}=V_{\langle12\rangle}=V_{\langle21\rangle}$. The fused $R$-matrix ${R}_{\bar{1}2}(u)$ is a $32\times 32$ matrix defined in the tensor space $V_{\bar 1}\otimes V_2$ and has the properties \bea &&R_{\bar{1}2}(u)R_{2\bar{1}}(-u)=\rho_3(u)\times {\rm id}, \no\\[4pt] &&R_{\bar{1}2}(u)^{st_{\bar 1}} R_{2\bar{1}}(-u)^{st_{\bar 1}}=\rho_4(u)\times {\rm id}, \eea where \bea \rho_3(u)=-(u+\frac{3}{2}\eta)(u-\frac{3}{2}\eta),\quad \rho_4(u)=-(u+\frac{1}{2}\eta)(u-\frac{1}{2}\eta). \eea From GYBE (\ref{YBE}), one can prove that the following fused graded Yang-Baxter equation holds \bea R_{\bar{1}2}(u-v) R_{\bar{1}3}(u) R_{23}(v) =R_{23}(v)R_{\bar{1}3}(u)R_{\bar{1}2}(u-v).\label{fuse-qybe1} \eea It is easy to check that the elements of the fused $R$-matrices $R_{\bar{1}2}(u)$ and $R_{2\bar{1}}(u)$ are degree-one polynomials in $u$. At the point $u=-\frac{3}{2}\eta$, the fused $R$-matrix $R_{\bar{1}2}(u)$ can also be written as a projector \bea R_{\bar{1}2}(-\frac{3}{2}\eta)= -3\eta P^{(20) }_{{\bar 1}2},\label{Fusion-5-4} \eea where $P^{(20) }_{\bar{1} 2}$ is a 20-dimensional supersymmetric projector \bea P^{(20) }_{\bar{1}2}=\sum_{i=1}^{20} |\phi_i\rangle \langle \phi_i|, \label{cjjc}\eea with the basis vectors \bea &&|\phi_1\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|f_1\rangle\otimes|2\rangle -|f_2\rangle\otimes|1\rangle),\quad |\phi_2\rangle=\frac{1}{\sqrt{3}}( |f_2\rangle\otimes|2\rangle -\sqrt{2}|f_3\rangle\otimes|1\rangle),\nonumber\\[4pt] &&|\phi_{3}\rangle=\frac{1}{\sqrt{6}}(2|f_6\rangle\otimes|3\rangle+|f_5\rangle\otimes|4\rangle +|f_4\rangle\otimes|1\rangle ),\quad|\phi_{4}\rangle=\frac{1}{\sqrt{2}}(|f_5\rangle\otimes|4\rangle -|f_4\rangle\otimes|1\rangle ),\nonumber\\[4pt] &&|\phi_{5}\rangle =\frac{1}{\sqrt{6}}(|f_8\rangle\otimes|3\rangle+2|f_4\rangle\otimes|2\rangle -|f_7\rangle\otimes|4\rangle ),\quad |\phi_{6}\rangle=\frac{1}{\sqrt{2}}(|f_7\rangle\otimes|4\rangle +|f_8\rangle\otimes|3\rangle ),\nonumber\\[4pt] &&|\phi_{7}\rangle =|f_5\rangle\otimes|3\rangle ,\quad |\phi_{8}\rangle=|f_7\rangle\otimes|3\rangle,\quad |\phi_{9}\rangle=|f_6\rangle\otimes|4\rangle ,\quad|\phi_{10}\rangle =|f_8\rangle\otimes|4\rangle,\nonumber\\[4pt] &&|\phi_{11}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|f_1\rangle\otimes|3\rangle -|f_5\rangle\otimes|1\rangle ),\quad |\phi_{12}\rangle =\frac{1}{\sqrt{3}}( \sqrt{2}|f_1\rangle\otimes|4\rangle -|f_6\rangle\otimes|1\rangle),\nonumber\\[4pt] &&|\phi_{13}\rangle =\frac{1}{\sqrt{6}}(|f_7\rangle\otimes|1\rangle+|f_2\rangle\otimes|3\rangle -2|f_5\rangle\otimes|2\rangle ),\quad |\phi_{14}\rangle=\frac{1}{\sqrt{2}}(|f_2\rangle\otimes|3\rangle -|f_7\rangle\otimes|1\rangle ),\nonumber\\[4pt] &&|\phi_{15}\rangle =\frac{1}{\sqrt{6}}(|f_8\rangle\otimes|1\rangle+|f_2\rangle\otimes|4\rangle -2|f_6\rangle\otimes|2\rangle ),\quad |\phi_{16}\rangle=\frac{1}{\sqrt{2}}(|f_2\rangle\otimes|4\rangle -|f_8\rangle\otimes|1\rangle ),\nonumber\\[4pt] &&|\phi_{17}\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|f_3\rangle\otimes|3\rangle -|f_7\rangle\otimes|2\rangle ),\quad|\phi_{18}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|f_3\rangle\otimes|4\rangle -|f_8\rangle\otimes|2\rangle ),\nonumber\\[4pt] &&|\phi_{19}\rangle =|f_4\rangle\otimes|3\rangle ,\quad |\phi_{20}\rangle=|f_4\rangle\otimes|4\rangle.\no \eea The corresponding parities of the basis vectors are \bea p(\phi_1)=p(\phi_2)=\cdots =p(\phi_{10})=0,\quad p(\phi_{11})=p(\phi_{12})=\cdots =p(\phi_{20})=1. \no \eea The operator $P^{(20)}_{{\bar 1}2}$ is a projector on the 32-dimensional product space $V_{\bar 1}\otimes V_2$ which projects $V_{\bar 1}\otimes V_2$ into its 20-dimensional subspace spanned by $\{|\phi_i\rangle, i=1,\cdots,20\}$. Taking the fusion by the projector $P^{(20)}_{{\bar 1}2}$, we obtain another new fused $R$-matrix \bea && {R}_{\langle {\bar 1}2\rangle 3}(u) =(u-\eta)^{-1}P^{(20) }_{2\bar{1}}R_{\bar{1}3}(u+\frac{1}{2}\eta) R_{23}(u-\eta)P^{(20)}_{2\bar{1} }\equiv {R}_{\tilde 1 3}(u), \label{fu-2}\\[4pt] &&{R}_{3\langle 2{\bar 1}\rangle}(u) =(u-\eta)^{-1}P^{(20) }_{{\bar 1}2}R_{3\bar{1}}(u+\frac{1}{2}\eta) R_{32}(u-\eta)P^{(20)}_{{\bar 1}2}\equiv {R}_{3\tilde 1}(u), \label{fu-22} \eea where $P^{(20)}_{2{\bar 1}}$ can be obtained from $P^{(20)}_{{\bar 1}2}$ by exchanging $V_{\bar 1}$ and $V_2$. For simplicity, we denote the projected subspace as $V_{\tilde 1}=V_{\langle\bar 12\rangle}=V_{\langle2\bar 1\rangle}$. The fused $R$-matrix $R_{\tilde{1}2}(u)$ is an $80\times 80$ matrix defined in the tensor space $V_{\tilde 1}\otimes V_2$ and satisfies the following graded Yang-Baxter equation \bea R_{{\tilde 1}2}(u-v) R_{{\tilde 1}3}(u) R_{{2}3}(v)= R_{{2}3}(v) R_{{\tilde 1}3}(u)R_{{\tilde 1}2}(u-v).\label{sdfu-22} \eea The elements of the fused $R$-matrix $R_{\tilde{1} 2}(u)$ are also degree-one polynomials in $u$. The second degenerate point of the $R$-matrix (\ref{rm}) is $u=-\eta$, at which we have \bea R_{12}(-\eta)= -2\eta \bar P_{12}^{(8)}=-2\eta(1-P_{12}^{(8)}),\label{2-pewrerroject} \eea where $\bar P_{12}^{(8)}$ is an 8-dimensional supersymmetric projector of the form \bea \bar P_{12}^{(8)}=\sum_{i=1}^{8}|g_i\rangle \langle g_i|, \label{2-project}\eea with \bea &&|g_1\rangle= \frac{1}{\sqrt{2}}(|12\rangle -|21\rangle ), \quad |g_2\rangle=|33\rangle, \quad |g_3\rangle= \frac{1}{\sqrt{2}}(|34\rangle +|43\rangle ),\nonumber\\ &&|g_4\rangle=|44\rangle,\quad |g_5\rangle =\frac{1}{\sqrt{2}}(|13\rangle -|31\rangle ), \quad |g_6\rangle=\frac{1}{\sqrt{2}}(|14\rangle -|41\rangle ),\nonumber\\ &&|g_7\rangle=\frac{1}{\sqrt{2}}(|23\rangle -|32\rangle ),\quad |g_8\rangle =\frac{1}{\sqrt{2}}(|24\rangle -|42\rangle ). \label{fuse-q1ybe2} \eea The corresponding parities are \bea p(g_1)=p(g_2)=p(g_3)=p(g_4)=0,\quad p(g_5)=p(g_6)=p(g_7)=p(g_8)=1. \no \eea The operator $\bar P_{12}^{(8)}$ projects the 16-dimensional product space $V_1\otimes V_2$ into a new 8-dimensional projected space spanned by $\{|g_i\rangle|i=1,\cdots,8\}$.
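In the same compact notation, $R_{12}(-\eta)=-\eta\,({\rm id}-P_{12})$, so that $\bar P^{(8)}_{12}=\frac{1}{2}({\rm id}-P_{12})$ is the graded antisymmetrizer complementary to $P^{(8)}_{12}$. For instance,
\bea
P_{12}|33\rangle=(-1)^{p(3)p(3)}|33\rangle=-|33\rangle\quad\Longrightarrow\quad \bar P^{(8)}_{12}|g_2\rangle=|g_2\rangle,\qquad P^{(8)}_{12}|g_2\rangle=0,\no
\eea
i.e., the purely fermionic diagonal states belong entirely to the second projected space.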
Taking the fusion by the projector $\bar P_{12}^{(8)}$, we obtain the fused $R$-matrices \bea &&R_{\langle 12\rangle^\prime 3}(u)=(u-\frac{1}{2}\eta)^{-1}\bar P^{ (8) }_{12}R _{23}(u+\frac{1}{2}\eta)R _{13}(u-\frac{1}{2}\eta)\bar P^{ (8) }_{12}\equiv R_{\bar{1}^\prime 3}(u), \label{hhgg-3}\\[4pt] &&R_{3 \langle 21\rangle^\prime}(u)=(u-\frac{1}{2}\eta)^{-1}\bar P^{ (8) }_{21}R _{32}(u+\frac{1}{2}\eta)R _{31}(u-\frac{1}{2}\eta)\bar P^{ (8) }_{21}\equiv R_{3\bar{1}^\prime }(u).\label{hhgg-4} \eea For simplicity, we denote the projected space as $V_{\bar 1^\prime}=V_{\langle12\rangle^\prime}=V_{\langle21\rangle^\prime}$. The fused $R$-matrix $R_{\bar{1}^\prime 2}(u)$ is a $32\times 32$ matrix defined in the product space $V_{\bar{1}^\prime }\otimes V_2$ and possesses the properties \bea &&R_{\bar{1}^\prime 2}(u) R_{2\bar{1}^\prime }(-u)=\rho_5(u)\times {\rm id}, \no\\[4pt] &&R_{\bar{1}^\prime 2}(u)^{st_{{\bar 1}^\prime }} R_{2\bar{1}^\prime }(-u)^{st_{{\bar 1}^\prime }}=\rho_6(u)\times {\rm id}, \no\\[6pt] &&R_{\bar{1}^\prime 2}(u-v) R_{\bar{1}^\prime 3}(u) R_{23}(v) =R_{23}(v)R_{\bar{1}^\prime 3}(u)R_{\bar{1}^\prime 2}(u-v),\label{fuse-qybe2} \eea where \bea \rho_5(u)=-(u-\frac{3}{2}\eta)(u+\frac{3}{2}\eta),\quad \rho_6(u)=-(u-\frac{1}{2}\eta)(u+\frac{1}{2}\eta). \eea Now, we consider the fusions of $R_{\bar{1}^\prime 2}(u)$, which include two different cases. One is the fusion in the auxiliary space $V_{\bar 1^\prime}$ and the other is the fusion in the quantum space $V_2$. Both are necessary to close the fusion processes. We first introduce the fusion in the auxiliary space. At the point $u=\frac{3}{2}\eta$, we have \bea R_{\bar{1}^\prime 2}(\frac{3}{2}\eta)= 3\eta P^{(20) }_{\bar{1}^\prime 2},\label{Fusion-20-1}\eea where $P^{(20) }_{\bar{1}^\prime 2}$ is a 20-dimensional supersymmetric projector of the form \bea P^{(20) }_{\bar{1}^\prime 2}=\sum_{i=1}^{20} |\tilde{\phi}_i\rangle \langle \tilde{\phi}_i|, \label{xiao1}\eea and the corresponding vectors are \bea &&|\tilde{\phi}_1\rangle =|g_1\rangle\otimes|1\rangle,\quad |\tilde{\phi}_2\rangle=|g_1\rangle\otimes|2\rangle,\nonumber\\[4pt] &&|\tilde{\phi}_3\rangle=\frac{1}{\sqrt{2}}(|g_3\rangle\otimes|1\rangle -|g_5\rangle\otimes|4\rangle ),\quad |\tilde{\phi}_{4}\rangle=\frac{1}{\sqrt{6}}( |g_5\rangle\otimes|4\rangle+|g_3\rangle\otimes|1\rangle-2|g_6\rangle\otimes|3\rangle ),\nonumber\\[4pt] &&|\tilde{\phi}_{5}\rangle=\frac{1}{\sqrt{2}}(|g_8\rangle\otimes|3\rangle -|g_7\rangle\otimes|4\rangle ) ,\quad |\tilde{\phi}_{6}\rangle=\frac{1}{\sqrt{6}}(2|g_3\rangle\otimes|2\rangle-|g_7\rangle\otimes|4\rangle -|g_8\rangle\otimes|3\rangle ),\nonumber\\[4pt] &&|\tilde{\phi}_{7}\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|g_2\rangle\otimes|1\rangle -|g_5\rangle\otimes|3\rangle ),\quad |\tilde{\phi}_{8}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|g_2\rangle\otimes|2\rangle -|g_7\rangle\otimes|3\rangle ),\nonumber\\[4pt] &&|\tilde{\phi}_{9}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|g_4\rangle\otimes|1\rangle -|g_6\rangle\otimes|4\rangle ),\quad|\tilde{\phi}_{10}\rangle =\frac{1}{\sqrt{3}}(\sqrt{2}|g_4\rangle\otimes|2\rangle -|g_8\rangle\otimes|4\rangle),\nonumber\\[4pt] &&|\tilde{\phi}_{11}\rangle=|g_5\rangle\otimes|1\rangle,\quad|\tilde{\phi}_{12}\rangle =|g_6\rangle\otimes|1\rangle,\nonumber\\[4pt] &&|\tilde{\phi}_{13}\rangle =\frac{1}{\sqrt{2}}(|g_7\rangle\otimes|1\rangle-|g_1\rangle\otimes|3\rangle ),\quad |\tilde{\phi}_{14}\rangle=\frac{1}{\sqrt{6}}(|g_7\rangle\otimes|1\rangle+2|g_5\rangle\otimes|2\rangle +|g_1\rangle\otimes|3\rangle ),\nonumber\\[4pt] &&|\tilde{\phi}_{15}\rangle=\frac{1}{\sqrt{2}}(|g_8\rangle\otimes|1\rangle -|g_1\rangle\otimes|4\rangle ),\quad |\tilde{\phi}_{16}\rangle=\frac{1}{\sqrt{6}}(|g_6\rangle\otimes|2\rangle +2|g_8\rangle\otimes|1\rangle +|g_1\rangle\otimes|4\rangle ),\nonumber\\[4pt] &&|\tilde{\phi}_{17}\rangle=|g_7\rangle\otimes|2\rangle,\quad |\tilde{\phi}_{18}\rangle=|g_8\rangle\otimes|2\rangle,\nonumber\\[4pt] &&|\tilde{\phi}_{19}\rangle=\frac{1}{\sqrt{3}}(|g_3\rangle\otimes|3\rangle-\sqrt{2}|g_2\rangle\otimes|4\rangle ),\quad|\tilde{\phi}_{20}\rangle=\frac{1}{\sqrt{3}}(\sqrt{2}|g_4\rangle\otimes|3\rangle -|g_3\rangle\otimes|4\rangle).\no \eea The parities read \bea p(\tilde\phi_1)=p(\tilde\phi_2)=\cdots=p(\tilde\phi_{10})=0,\quad p(\tilde\phi_{11})=p(\tilde\phi_{12})=\cdots=p(\tilde\phi_{20})=1.\no \eea The operator $P^{(20)}_{{\bar 1}^\prime 2}$ projects the 32-dimensional product space $V_{{\bar 1}^\prime} \otimes V_2$ into a 20-dimensional projected space spanned by $\{|\tilde \phi_i\rangle, i=1,\cdots,20\}$. Taking the fusion by the projector $P^{(20)}_{{\bar 1}^\prime 2}$, we obtain the following fused $R$-matrices \bea &&{R}_{\langle {\bar 1}^\prime 2\rangle 3}(u)= (u+\eta)^{-1}P^{(20) }_{2\bar{1}^\prime }R_{\bar{1}^\prime 3}(u-\frac{1}{2}\eta) R_{23}(u+\eta)P^{(20)}_{2\bar{1}^\prime }\equiv R_{{\tilde 1}^\prime 3}(u), \label{fu-4}\\[4pt] &&{R}_{3\langle 2{\bar 1}^\prime\rangle}(u)= (u+\eta)^{-1}P^{(20) }_{{\bar 1}^\prime 2}R_{3\bar{1}^\prime }(u-\frac{1}{2}\eta) R_{32}(u+\eta)P^{(20)}_{{\bar 1}^\prime 2}\equiv R_{3{\tilde 1}^\prime}(u). \label{fu-44} \eea For simplicity, we denote the projected space as $V_{\tilde 1^\prime}=V_{\langle \bar 1^\prime 2\rangle}=V_{\langle 2 \bar 1^\prime \rangle}$. The fused $R$-matrix $R_{\tilde{1}^\prime 2}(u)$ is an $80\times 80$ matrix defined in the product space $V_{{\tilde 1}^\prime}\otimes V_2$ and satisfies the following graded Yang-Baxter equation \bea R_{{\tilde 1}^\prime 2}(u-v) R_{{\tilde 1}^\prime 3}(u) R_{{2}3}(v) = R_{{2}3}(v) R_{{\tilde 1}^\prime 3}(u)R_{{\tilde 1}^\prime 2}(u-v).\label{fwusdfwa-44} \eea A remarkable fact is that, after identifying the basis vectors of the two projected spaces via the correspondence \bea |\phi_i\rangle\longleftrightarrow|\tilde\phi_i\rangle, \quad i=1,\cdots,20,\label{vec-corresp} \eea the two fused $R$-matrices $R_{\tilde 1 2}(u)$ given by (\ref{fu-2}) and $R_{{\tilde 1}^\prime 2}(u)$ given by (\ref{fu-4}) are identical, \bea R_{\tilde 1 2}(u)=R_{{\tilde 1}^\prime 2}(u),\label{peri-iden} \eea which allows us to close the recursive fusion processes. The fusion of $R_{\bar{1}^\prime 2}(u)$ in the quantum space is carried out by the projector $P_{23}^{(8)}$, and the resulting fused $R$-matrix is \bea R_{{\bar 1}^\prime \langle 23\rangle}(u)= (u+\eta)^{-1}P_{23}^{(8)}R_{{\bar 1}^\prime 3}(u-\frac{1}{2}\eta)R_{{\bar 1}^\prime 2}(u+\frac{1}{2}\eta)P_{23}^{(8)}\equiv R_{{\bar 1}^\prime \bar 2}(u), \eea which is a $64\times 64$ matrix defined in the space $V_{\bar{1}^\prime }\otimes V_{\bar 2}$ and satisfies the graded Yang-Baxter equation \bea R_{{\bar 1}^\prime\bar 2}(u-v)R_{{\bar 1}^\prime 3}(u)R_{\bar 2 3}(v)=R_{\bar 2 3}(v)R_{{\bar 1}^\prime 3}(u)R_{{\bar 1}^\prime\bar 2}(u-v),\label{sdfusdsd-22} \eea which will help us to find the complete set of conserved quantities. \subsection{Operator product identities} Now, we are ready to extend the fusion from one site to the whole system.
From the fused $R$-matrices given by (\ref{hhgg-1}), (\ref{fu-2}), (\ref{hhgg-3}) and (\ref{fu-4}), we construct the fused monodromy matrices as \begin{eqnarray} &&T_{\bar 0}(u)=R_{\bar 01}(u-\theta_1)R_{\bar 02}(u-\theta_2)\cdots R_{\bar 0N}(u-\theta_N), \no \\ &&T_{\bar 0^\prime}(u)=R_{\bar 0^\prime 1}(u-\theta_1)R_{\bar 0^\prime 2}(u-\theta_2)\cdots R_{\bar 0^\prime N}(u-\theta_N), \no \\ &&T_{\tilde 0}(u)=R_{\tilde 01}(u-\theta_1)R_{\tilde 02}(u-\theta_2)\cdots R_{\tilde 0N}(u-\theta_N), \no \\ &&T_{\tilde 0^\prime}(u)=R_{\tilde 0^\prime1}(u-\theta_1)R_{\tilde 0^\prime2}(u-\theta_2)\cdots R_{\tilde 0^\prime N}(u-\theta_N),\label{T6} \end{eqnarray} where the subscripts $\bar 0$, $\bar 0^\prime$, $\tilde 0$ and $\tilde 0^\prime $ denote the auxiliary spaces, while the quantum spaces in all the monodromy matrices are the same. By using the graded Yang-Baxter equations (\ref{fuse-qybe1}), (\ref{sdfu-22}), (\ref{fuse-qybe2}), (\ref{fwusdfwa-44}) and (\ref{sdfusdsd-22}), one can prove that the monodromy matrices satisfy the graded Yang-Baxter relations \begin{eqnarray} &&R_{\bar 12} (u-v) T_{\bar 1}(u) T_2(v)= T_2(v) T_{\bar 1}(u) R_{\bar 12}(u-v), \no \\ &&R_{\bar 1^\prime 2} (u-v) T_{\bar 1^\prime }(u) T_2(v)= T_2(v) T_{\bar 1^\prime }(u) R_{\bar 1^\prime 2} (u-v), \no \\ &&R_{\bar 1^\prime \bar 2} (u-v) T_{\bar 1^\prime }(u) T_{\bar 2}(v)= T_{\bar 2}(v) T_{\bar 1^\prime }(u) R_{\bar 1^\prime \bar 2} (u-v), \no \\ &&R_{\tilde 12} (u-v) T_{\tilde 1}(u) T_2(v)= T_2(v) T_{\tilde 1}(u) R_{\tilde 12}(u-v), \no \\ &&R_{\tilde 1^\prime 2} (u-v) T_{\tilde 1^\prime }(u) T_2(v)= T_2(v) T_{\tilde 1^\prime }(u) R_{\tilde 1^\prime 2} (u-v). \label{yybb4} \end{eqnarray} Using the property that the $R$-matrices in the above equations degenerate into the projectors $P^{(8)}_{12}$, $\bar P^{(8)}_{12}$, $P^{(20)}_{\bar{1}2} $, $P^{(20)}_{\bar{1}^\prime 2}$, together with the definitions (\ref{T6}), we obtain the following fusion relations among the monodromy matrices \bea &&P^{ (8) }_{12}T_2 (u)T_1 (u+\eta)P^{(8) }_{12}=\prod_{l=1}^N (u-\theta_l+\eta)T_{\bar 1}(u+\frac{1}{2}\eta), \no\\[4pt] &&\bar P^{ (8) }_{12}T_2 (u)T_1 (u-\eta)\bar P^{(8) }_{12} =\prod_{l=1}^N (u-\theta_l-\eta)T_{\bar 1^\prime}(u-\frac{1}{2}\eta), \no\\[4pt] &&P^{(20) }_{2\bar{1}} T_{\bar{1}} (u+\frac{1}{2}\eta) T_2(u-\eta)P^{(20)}_{2\bar{1}} =\prod_{l=1}^N (u-\theta_l-\eta){T}_{\tilde 1}(u),\no\\[4pt] &&P^{(20) }_{2\bar{1}^\prime } T_{\bar{1}^\prime } (u-\frac{1}{2}\eta)T_2(u+\eta)P^{(20) }_{2\bar{1}^\prime }=\prod_{l=1}^N (u-\theta_l+\eta){T}_{\tilde 1^\prime }(u).\label{fut-6} \eea The fused transfer matrices are defined as the super partial traces of the fused monodromy matrices in the auxiliary spaces \bea {t}^{(1)}_p(u)=str_{\bar 0} T_{\bar 0}(u), \; {t}^{(2)}_p(u)=str_{\bar 0^\prime} T_{\bar 0^\prime}(u), \; \tilde{t}^{(1)}_p(u)=str_{\tilde 0} T_{\tilde 0}(u), \; \tilde{t}^{(2)}_p(u)=str_{\tilde 0^\prime} T_{\tilde 0^\prime }(u).\no \eea From Eq.(\ref{fut-6}), we know that these fused transfer matrices must satisfy some intrinsic relations at certain fixed differences of the spectral parameters.
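As a consistency check of the first relation in (\ref{fut-6}), consider the simplest case $N=1$, where the monodromy matrices reduce to single $R$-matrices, $T_{1}(u)=R_{1q}(u-\theta_1)$, with the single quantum space denoted here by $V_q$. Then the definition (\ref{hhgg-1}), evaluated at the shifted argument $u-\theta_1+\frac{1}{2}\eta$, gives directly
\bea
P^{ (8) }_{12}T_2 (u)T_1 (u+\eta)P^{(8) }_{12}=P^{ (8) }_{12}R_{2q}(u-\theta_1)R_{1q}(u-\theta_1+\eta)P^{(8) }_{12} =(u-\theta_1+\eta)\,T_{\bar 1}(u+\frac{1}{2}\eta),\no
\eea
in agreement with (\ref{fut-6}); the general-$N$ relation follows by moving the projectors through the product site by site with the help of GYBE (\ref{YBE}).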
We first consider the quantity \bea \hspace{-1.2truecm}&&\hspace{-1.2truecm}t_p(u)t_p(u+\eta)=str_{12}\{T_1(u)T_2(u+\eta)\}\no\\[4pt] &&\hspace{8mm}=str_{12}\{(P_{12}^{(8)}+\bar P_{12}^{(8)})T_1(u)T_2(u+\eta)(P_{12}^{(8)}+\bar P_{12}^{(8)})\}\no\\[4pt] &&\hspace{8mm}=str_{12}\{P_{12}^{(8)}T_1(u)T_2(u+\eta)P_{12}^{(8)}\}+str_{12}\{\bar P_{12}^{(8)}\bar P_{12}^{(8)}T_1(u)T_2(u+\eta)\bar P_{12}^{(8)}\}\no\\[4pt] &&\hspace{8mm}=str_{12}\{P_{12}^{(8)}T_1(u)T_2(u+\eta)P_{12}^{(8)}\}+str_{12}\{\bar P_{12}^{(8)}T_2(u+\eta)T_1(u)\bar P_{12}^{(8)}\bar P_{12}^{(8)}\}\no\\[4pt] &&\hspace{8mm}=\prod_{j=1}^{N}(u-\theta_j+\eta) t_p^{(1)}(u+\frac{1}{2}\eta)+\prod_{j=1}^{N}(u-\theta_j) t_p^{(2)}(u+\frac{1}{2}\eta).\label{fui-3tan} \eea Here we give some remarks. Both $V_1$ and $V_2$ are 4-dimensional auxiliary spaces. From Eq.(\ref{fui-3tan}), we see that the 16-dimensional auxiliary space $V_{1}\otimes V_2$ can be decomposed into two 8-dimensional subspaces, $V_{1}\otimes V_2=V_{\langle12\rangle}\oplus V_{\langle12\rangle^\prime}$. One is obtained via the 8-dimensional projector $P_{12}^{(8)}$ defined in the subspace $V_{\langle12\rangle}\equiv V_{\bar 1}$, and the other via the 8-dimensional projector $\bar P_{12}^{(8)}$ defined in the subspace $V_{\langle 12\rangle^\prime}\equiv V_{\bar 1^\prime}$. The vectors in $P_{12}^{(8)}$ and those in $\bar P_{12}^{(8)}$ constitute a complete basis of $V_{1}\otimes V_2$, and all the vectors are orthogonal, \bea P_{12}^{(8)}+\bar P_{12}^{(8)}=1,~~P_{12}^{(8)}\bar P_{12}^{(8)}=0.\no \eea From Eq.(\ref{fui-3tan}), we also see that the product of two transfer matrices with this fixed spectral difference can be written as the sum of the two fused transfer matrices $t_p^{(1)}(u)$ and $ t_p^{(2)}(u)$ with certain coefficients. At the point $u=\theta_j-\eta$, the coefficient of the fused transfer matrix $ t_p^{(1)}(u)$ vanishes, while at the point $u=\theta_j$, the coefficient of the fused transfer matrix $ t_p^{(2)}(u)$ vanishes. Therefore, at these points, only one of them contributes.
Motivated by Eq.(\ref{fut-6}), we also consider the quantities \bea \hspace{-0.8truecm}&&\hspace{-0.8truecm} t_p^{(1)}(u+\frac{1}{2}\eta)t_p(u-\eta)=str_{\bar 12}\{(P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)})T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)(P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)})\}\no\\[4pt] &&=str_{\bar 12}\{P_{2\bar 1}^{(20)}T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)P_{2\bar 1}^{(20)}\}+str_{\bar 12}\{\tilde P_{2\bar 1}^{(12)}T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)\tilde P_{2\bar 1}^{(12)}\}\no\\[4pt] &&=\prod_{j=1}^{N}(u-\theta_j-\eta)\tilde t_p^{(1)}(u)+\prod_{j=1}^{N}(u-\theta_j)\bar{t}_p^{(1)}(u), \label{fui-3tan-1}\\ \hspace{-0.8truecm}&&\hspace{-0.8truecm} t_p^{(2)}(u-\frac{1}{2}\eta)t_p(u+\eta)=str_{\bar 1^\prime 2}\{(P_{2\bar 1^\prime }^{(20)} +\tilde P_{2\bar 1^\prime }^{(12)})T_{\bar 1^\prime }(u-\frac{1}{2}\eta)T_2(u+\eta)(P_{2\bar 1^\prime }^{(20)}+\tilde P_{2\bar 1^\prime }^{(12)})\}\no\\[4pt] &&=str_{\bar 1^\prime 2}\{P_{2\bar 1^\prime }^{(20)}T_{\bar 1^\prime }(u-\frac{1}{2}\eta)T_2(u+\eta)P_{2\bar 1^\prime }^{(20)}\} +str_{\bar 1^\prime 2}\{\tilde P_{2\bar 1^\prime }^{(12)}T_{\bar 1^\prime }(u-\frac{1}{2}\eta)T_2(u+\eta)\tilde P_{2\bar 1^\prime }^{(12)}\}\no\\[4pt] &&=\prod_{j=1}^{N}(u-\theta_j+\eta)\tilde t_p^{(2)}(u)+\prod_{j=1}^{N}(u-\theta_j)\bar{t}_p^{(2)}(u).\label{fui-3tan-2} \eea During the derivation, we have used the relations \bea P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)}=1,~~P_{2\bar 1}^{(20)}\tilde P_{2\bar 1}^{(12)}=0, ~~ P_{2\bar 1^\prime}^{(20)}+\tilde P_{2\bar 1^\prime}^{(12)}=1,~~P_{2\bar 1^\prime}^{(20)}\tilde P_{2\bar 1^\prime}^{(12)}=0.\no \eea From Eq.(\ref{fui-3tan-1}), we see that the 32-dimensional auxiliary space $V_{\bar 1}\otimes V_2$ can be decomposed into a 20-dimensional subspace $V_{\langle \bar 12\rangle}\equiv V_{\tilde 1}$ selected by the projector $P_{\bar 12}^{(20)}$ and a 12-dimensional subspace $V_{\overline{\langle \bar 12\rangle}}$ selected by the projector $\tilde P_{\bar 12}^{(12)}$, namely $V_{\bar 1}\otimes V_2=V_{\langle \bar 12\rangle} \oplus V_{\overline{\langle \bar 12\rangle}}$. The vectors in $P_{\bar 12}^{(20)}$ and $\tilde P_{\bar 12}^{(12)}$ form a complete and orthogonal basis. Eq.(\ref{fui-3tan-1}) also shows that the quantity $t_p^{(1)}(u+\frac{1}{2}\eta)t_p(u-\eta)$ is the sum of two new fused transfer matrices $\tilde t_p^{(1)}(u)$ and $\bar{t}_p^{(1)}(u)$ with certain coefficients. In Eq.(\ref{fui-3tan-2}), the 32-dimensional auxiliary space $V_{\bar 1^\prime}\otimes V_2$ is similarly decomposed into a 20-dimensional and a 12-dimensional subspace by the operators $P_{\bar 1^\prime 2}^{(20)}$ and $\tilde P_{\bar 1^\prime 2}^{(12)}$, respectively, so the quantity $t_p^{(2)}(u-\frac{1}{2}\eta)t_p(u+\eta)$ is the sum of the two fused transfer matrices $\tilde t_p^{(2)}(u)$ and $\bar{t}_p^{(2)}(u)$ with certain coefficients. At the point $u=\theta_j+\eta$, the coefficient of $\tilde t_p^{(1)}(u)$ in Eq.(\ref{fui-3tan-1}) vanishes, and at the point $u=\theta_j-\eta$, that of $\tilde t_p^{(2)}(u)$ in Eq.(\ref{fui-3tan-2}) vanishes, while at the point $u=\theta_j$, the coefficients of $\bar{t}_p^{(1)}(u)$ in Eq.(\ref{fui-3tan-1}) and of $\bar{t}_p^{(2)}(u)$ in Eq.(\ref{fui-3tan-2}) are both zero. Here, the explicit forms of $\tilde P_{\bar 12}^{(12)}$, $\tilde P_{\bar 1^\prime 2}^{(12)}$, $\bar{t}_p^{(1)}(u)$ and $\bar{t}_p^{(2)}(u)$ are omitted because we do not use them below.
Combining the above analysis, we obtain the operator product identities of the transfer matrices at the fixed points as \bea && t_p(\theta_j)t_p (\theta_j+\eta)=\prod_{l=1}^N (\theta_j-\theta_l+\eta) t^{(1)}_p(\theta_j+\frac{1}{2}\eta),\label{futp-4-1} \\[4pt] && t_p(\theta_j)t_p (\theta_j-\eta)=\prod_{l=1}^N (\theta_j-\theta_l-\eta) t^{(2)}_p(\theta_j-\frac{1}{2}\eta),\label{futp-4-2} \\[4pt] && t^{(1)}_p(\theta_j+\frac{1}{2}\eta)t_p (\theta_j-\eta)=\prod_{l=1}^N (\theta_j-\theta_l-\eta)\tilde t_{p}^{(1)}(\theta_j),\label{futp-4-3}\\[4pt] && t^{(2)}_p(\theta_j-\frac{1}{2}\eta)t_p (\theta_j+\eta)=\prod_{l=1}^N (\theta_j-\theta_l+\eta)\tilde t_{p}^{(2)}(\theta_j), \quad j=1, \cdots, N.\label{futp-4-4} \eea From the property (\ref{peri-iden}), we obtain that the fused transfer matrices $\tilde{t}^{(1)}_p(u)$ and $\tilde{t}^{(2)}_p(u)$ are equal, \bea \tilde{t}^{(1)}_p(u)=\tilde{t}^{(2)}_p(u). \label{futp-6} \eea With the help of Eqs. (\ref{futp-6}), (\ref{futp-4-3}) and (\ref{futp-4-4}), we can obtain the constraint among $t_p(u)$, $ t^{(1)}_p(u)$ and $ t^{(2)}_p(u)$, \bea t^{(1)}_p (\theta_j+\frac{1}{2}\eta) t_p(\theta_j-\eta) =\prod_{l=1}^N\frac{\theta_j-\theta_l-\eta}{\theta_j-\theta_l+\eta} t^{(2)}_p (\theta_j-\frac{1}{2}\eta) t_p(\theta_j+\eta).\label{peri-ope-3} \eea Then Eqs.(\ref{futp-4-1}), (\ref{futp-4-2}) and (\ref{peri-ope-3}) constitute a closed set of recursive fusion relations. From the definitions, we know that the transfer matrices $t_p(u)$, ${t}^{(1)}_p(u)$ and ${t}^{(2)}_p(u)$ are operator polynomials of $u$ with degree $N-1$. Then, the $3N$ conditions (\ref{futp-4-1}), (\ref{futp-4-2}) and (\ref{peri-ope-3}) are sufficient to solve for them. From the graded Yang-Baxter relations (\ref{yybb4}), the transfer matrices $t_p(u)$, ${t}^{(1)}_p(u)$ and ${t}^{(2)}_p(u)$ commute with each other, namely, \bea [t_p(u),{t}^{(1)}_p(u)]=[t_p(u),{t}^{(2)}_p(u)]=[{t}^{(1)}_p(u),{t}^{(2)}_p(u)]=0. \eea Therefore, they have common eigenstates and can be diagonalized simultaneously. Let $|\Phi\rangle$ be a common eigenstate. Acting with the transfer matrices on this eigenstate, we have \bea t_p(u)|\Phi\rangle=\Lambda_p(u)|\Phi\rangle,\quad t_p^{(1)}(u)|\Phi\rangle= \Lambda_p^{(1)}(u)|\Phi\rangle,\quad t_p^{(2)}(u)|\Phi\rangle=\Lambda_p^{(2)}(u)|\Phi\rangle,\no \eea where $\Lambda_p(u)$, ${\Lambda}^{(1)}_p(u)$ and ${\Lambda}^{(2)}_p(u)$ are the eigenvalues of $t_p(u)$, ${t}^{(1)}_p(u)$ and ${t}^{(2)}_p(u)$, respectively. Meanwhile, acting with the operator product identities (\ref{futp-4-1}), (\ref{futp-4-2}) and (\ref{peri-ope-3}) on the state $|\Phi\rangle$, we obtain the functional relations among these eigenvalues \bea && \Lambda_p(\theta_j)\Lambda_p (\theta_j+\eta)=\prod_{l=1}^N (\theta_j-\theta_l+\eta){\Lambda}^{(1)}_p(\theta_j+\frac{1}{2}\eta),\no \\[4pt] && \Lambda_p(\theta_j)\Lambda_p (\theta_j-\eta)=\prod_{l=1}^N (\theta_j-\theta_l-\eta){\Lambda}^{(2)}_p(\theta_j-\frac{1}{2}\eta),\no \\[4pt] && \Lambda^{(1)}_p (\theta_j+\frac{1}{2}\eta) \Lambda_p(\theta_j-\eta)=\prod_{l=1}^N\frac{\theta_j-\theta_l-\eta}{\theta_j-\theta_l+\eta} \Lambda^{(2)}_p (\theta_j-\frac{1}{2}\eta) \Lambda_p(\theta_j+\eta),\label{futpl-3} \eea where $j=1,2,\cdots N$. Because the eigenvalues $\Lambda_p(u)$, ${\Lambda}^{(1)}_p(u)$ and ${\Lambda}^{(2)}_p(u)$ are polynomials of $u$ with degree $N-1$, the above $3N$ conditions (\ref{futpl-3}) determine these eigenvalues completely.
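The counting behind this statement is straightforward: a polynomial of degree $N-1$ carries $N$ unknown coefficients, so the three eigenvalues involve
\bea
\underbrace{N}_{\Lambda_p(u)}+\underbrace{N}_{\Lambda^{(1)}_p(u)}+\underbrace{N}_{\Lambda^{(2)}_p(u)}=3N\no
\eea
unknowns in total, which exactly matches the number of independent relations in (\ref{futpl-3}).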
\subsection{$T-Q$ relations} Let us introduce the $z$-functions \begin{eqnarray} z_p^{(l)}(u)=\left\{ \begin{array}{ll} \displaystyle(-1)^{p(l)}Q^{(0)}_p(u)\frac{Q_p^{(l-1)}(u+\eta)Q_p^{(l)}(u-\eta)}{Q_p^{(l)}(u)Q_p^{(l-1)}(u)}, &l=1,2,\\[6mm] \displaystyle(-1)^{p(l)}Q^{(0)}_p(u)\frac{Q_p^{(l-1)}(u-\eta)Q_p^{(l)}(u+\eta)}{Q_p^{(l)}(u)Q_p^{(l-1)}(u)}, &l=3,4,\end{array} \right. \end{eqnarray} where the $Q$-functions are \bea &&Q_p^{(0)}(u)=\prod_{j=1}^{N}(u-\theta_j),\quad Q^{(m)}_p(u)=\prod_{j=1}^{L_m}(u-\lambda_j^{(m)}), \quad m=1, 2, 3,\quad Q_p^{(4)}(u)=1,\no \eea and $\{L_m|m=1,2,3\}$ are the numbers of the Bethe roots $\{\lambda_j^{(m)}\}$. According to the closed functional relations (\ref{futpl-3}), we construct the eigenvalues of the transfer matrices in terms of the homogeneous $T-Q$ relations \bea &&\Lambda_p (u)=\sum_{l=1}^{4}z_p^{(l)}(u), \no\\[4pt] &&\Lambda_p^{(1)}(u)=\Big[Q_p^{(0)}(u+\frac{1}{2}\eta)\Big]^{-1}\Big\{\sum_{l=1}^{2}z_p^{(l)}(u+\frac{1}{2}\eta)z_p^{(l)}(u-\frac{1}{2}\eta)\no\\ &&~~~~~~~~~~~~~~~+\sum_{l=2}^{4} \sum_{m=1}^{l-1}z_p^{(l)}(u+\frac{1}{2}\eta)z_p^{(m)}(u-\frac{1}{2}\eta)\Big\}, \no\\[4pt] &&\Lambda_p^{(2)}(u)=\Big[Q_p^{(0)}(u-\frac{1}{2}\eta)\Big]^{-1}\Big\{\sum_{l=3}^{4}z_p^{(l)}(u+\frac{1}{2}\eta)z_p^{(l)}(u-\frac{1}{2}\eta)\no\\[4pt] &&~~~~~~~~~~~~~~~+\sum_{l=2}^{4} \sum_{m=1}^{l-1}z_p^{(l)}(u-\frac{1}{2}\eta)z_p^{(m)}(u+\frac{1}{2}\eta)\Big\}.\label{ep-3} \eea The regularity of the eigenvalues $\Lambda_p(u)$, $\Lambda_p^{(1)}(u)$ and $\Lambda_p^{(2)}(u)$ requires that the Bethe roots $\{\lambda_j^{(m)}\}$ satisfy the Bethe ansatz equations (BAEs) \bea &&\frac{Q_p^{(0)}(\lambda_j^{(1)}+\eta)}{Q_p^{(0)}(\lambda_j^{(1)})}=-\frac{Q_p^{(1)}(\lambda_j^{(1)}+\eta)Q_p^{(2)}(\lambda_j^{(1)}-\eta)} {Q_p^{(2)}(\lambda_j^{(1)})Q_p^{(1)}(\lambda_j^{(1)}-\eta)},~~j=1,\cdots,L_1, \no\\ &&\frac{Q_p^{(1)}(\lambda_j^{(2)}+\eta)}{Q_p^{(1)}(\lambda_j^{(2)})}=\frac{Q_p^{(3)}(\lambda_j^{(2)}+\eta)}{Q_p^{(3)}(\lambda_j^{(2)})},~~j=1,\cdots,L_2, \no\\ &&\frac{Q_p^{(2)}(\lambda_j^{(3)}-\eta)}{Q_p^{(2)}(\lambda_j^{(3)})}=-\frac{Q_p^{(3)}(\lambda_j^{(3)}-\eta)}{Q_p^{(3)}(\lambda_j^{(3)}+\eta)},~~j=1,\cdots,L_3. \label{BAE-period-3}\eea We have verified that the above BAEs indeed guarantee that all the $T-Q$ relations (\ref{ep-3}) are polynomials and satisfy the functional relations (\ref{futpl-3}). Therefore, we arrive at the conclusion that $\Lambda_p(u)$, $\Lambda_p^{(1)}(u)$ and $\Lambda_p^{(2)}(u)$ given by (\ref{ep-3}) are indeed the eigenvalues of the transfer matrices $t_p(u)$, ${t}^{(1)}_p(u)$ and ${t}^{(2)}_p(u)$, respectively. The eigenvalues of the Hamiltonian (\ref{peri-Ham}) are \begin{eqnarray} E_p= \frac{\partial \ln \Lambda_p(u)}{\partial u}|_{u=0,\{\theta_j\}=0}. \end{eqnarray} \section{$SU(2|2)$ model with off-diagonal boundary reflections} \setcounter{equation}{0} \subsection{Boundary integrability} In this section, we consider the system with open boundary conditions. The boundary reflections are characterized by the reflection matrix $K^-(u)$ at one end and $K^+(u)$ at the other end. Integrability requires that $K^-(u)$ satisfies the graded reflection equation (RE) \cite{Che84, Bra98} \begin{equation} R _{12}(u-v){K^{-}_{ 1}}(u)R _{21}(u+v) {K^{-}_{2}}(v)= {K^{-}_{2}}(v)R _{12}(u+v){K^{-}_{1}}(u)R _{21}(u-v), \label{r1} \end{equation} while $K^+(u)$ satisfies the graded dual RE \begin{eqnarray} R_{12}(v-u)K_1^+(u)R_{21}(-u-v)K_2^+(v)=K_2^+(v)R_{12}(-u-v)K_1^+(u)R_{21}(v-u).
\label{r2} \end{eqnarray} The general solution of the reflection matrix $K_0^{-}(u)$ defined in the space $V_0$ and satisfying the graded RE (\ref{r1}) is \bea K_0^{-}(u)=\xi+uM,\quad M=\left(\begin{array}{cccc}1 &c_1&0&0\\[6pt] c_2&-1 &0&0\\[6pt] 0&0 &-1 &c_3\\[6pt] 0&0&c_4&1 \end{array}\right), \label{K-matrix-1}\eea and the dual reflection matrix $K^+(u)$ can be obtained by the mapping \begin{equation} K_0^{ +}(u)=K_0^{ -}(-u)|_{\xi,c_i\rightarrow \tilde{\xi},\tilde{c}_i }, \label{K-matrix-2} \end{equation} where $\xi$, $\tilde{\xi}$ and $\{c_i, \tilde{c}_i |i=1,\cdots,4\}$ are the boundary parameters describing the boundary interactions, and integrability requires \bea c_1c_2=c_3c_4,\quad \tilde{c}_1\tilde{c}_2=\tilde{c}_3\tilde{c}_4.\no \eea The reflection matrices (\ref{K-matrix-1}) and (\ref{K-matrix-2}) have off-diagonal elements, thus the numbers of ``quasi-particles" with different intrinsic degrees of freedom are not conserved during the reflection processes. Meanwhile, $K^-(u)$ and $K^+(v)$ do not commute, $[K^-(u),K^+(v)]\neq 0$, which means that they cannot be diagonalized simultaneously. Thus it is quite hard to derive the exact solutions of the system via the conventional Bethe ansatz because of the absence of a proper reference state. We will develop the graded nested ODBA to solve the system exactly. For the open case, besides the standard ``row-to-row" monodromy matrix $T_0(u)$ specified by (\ref{T1}), one needs to consider the reflecting monodromy matrix \begin{eqnarray} \hat{T}_0 (u)=R_{N0}(u+\theta_N)\cdots R_{20}(u+\theta_{2}) R_{10}(u+\theta_1),\label{Tt11} \end{eqnarray} which satisfies the graded Yang-Baxter relation \begin{eqnarray} R_{ 12} (u-v) \hat T_{1}(u) \hat T_2(v)=\hat T_2(v) \hat T_{ 1}(u) R_{12} (u-v)\label{haishi0}. \end{eqnarray} The transfer matrix $t(u)$ is defined as \begin{equation} t(u)= str_0 \{K_0^{ +}(u)T_0 (u) K^{ -}_0(u)\hat{T}_0 (u)\}\label{tru}. \end{equation} The graded Yang-Baxter relations (\ref{ybe}), (\ref{haishi0}) and the reflection equations (\ref{r1}), (\ref{r2}) lead to the fact that the transfer matrices with different spectral parameters commute with each other, $[t(u), t(v)]=0$. Therefore, $t(u)$ serves as the generating function of all the conserved quantities and the system is integrable. The model Hamiltonian with open boundary condition can be written in terms of the transfer matrix (\ref{tru}) as \begin{eqnarray} H=\frac{1}{2}\frac{\partial \ln t(u)}{\partial u}|_{u=0,\{\theta_j\}=0}. \label{hh} \end{eqnarray} The hermiticity of the Hamiltonian (\ref{hh}) further requires $c_1=c_2^{*}$ and $c_3=c_4^{*}$. \subsection{Fused reflection matrices} In order to solve the eigenvalue problem of the transfer matrix (\ref{tru}), we should study the fusion of the boundary reflection matrices \cite{Mez92, Zho96}. The main idea of the fusion for reflection matrices associated with a supersymmetric model is explained in Appendix A. Focusing on the supersymmetric $SU(2|2)$ model with the boundary reflection matrices (\ref{K-matrix-1}) and (\ref{K-matrix-2}), we can take the fusion according to Eqs.(\ref{oled-3})-(\ref{oled-4}) or (\ref{oled-13})-(\ref{oled-14}).
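Before constructing the fused reflection matrices, let us note, as a simple illustration not needed in what follows, how the integrability constraint $c_1c_2=c_3c_4$ acts on the matrix $M$ in (\ref{K-matrix-1}). Squaring its two $2\times 2$ blocks separately gives
\bea
\left(\begin{array}{cc}1 &c_1\\ c_2&-1\end{array}\right)^2=(1+c_1c_2)\,{\rm id},\qquad \left(\begin{array}{cc}-1 &c_3\\ c_4&1\end{array}\right)^2=(1+c_3c_4)\,{\rm id},\no
\eea
so precisely when $c_1c_2=c_3c_4$ one has $M^2=(1+c_1c_2)\,{\rm id}$ proportional to the identity, which provides a convenient consistency check when verifying the reflection equation (\ref{r1}) directly.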
The two 8-dimensional fusions associated with the super projectors $P_{12}^{(8)}$ (\ref{1-project}) and $\bar P_{12}^{(8)}$ (\ref{2-project}) give \bea && {K}^{-}_{\bar 1}(u)=(u+\frac{1}{2}\eta)^{-1}P_{21}^{(8)}K_1^{-}(u-\frac{1}{2}\eta)R_{21}(2u)K_2^{-}(u+\frac{1}{2}\eta)P_{12}^{(8)},\no\\[4pt] && {K}^{+}_{\bar 1}(u)=(u-\frac{1}{2}\eta)^{-1}P_{12}^{(8)}K_2^+(u+\frac{1}{2}\eta)R_{12}(-2u)K_1^+(u-\frac{1}{2}\eta)P_{21}^{(8)},\no\\[4pt] && {K}^{-}_{\bar 1^\prime }(u)=(u-\frac{1}{2}\eta)^{-1}\bar P_{21}^{(8)}K_1^{-}(u+\frac{1}{2}\eta)R_{21}(2u)K_2^{-} (u-\frac{1}{2}\eta)\bar P_{12}^{(8)},\no\\[4pt] && K^{+}_{\bar 1^\prime}(u)=(u+\frac{1}{2}\eta)^{-1}\bar P_{12}^{(8)}K_2^{+}(u-\frac{1}{2}\eta) R_{12}(-2u)K_1^{+}(u+\frac{1}{2}\eta)\bar P_{21}^{(8)}.\label{open-k4} \eea By direct calculation, one finds that all the fused $K$-matrices are $8\times8$ matrices whose elements are polynomials of $u$ with maximum degree two. The fused reflection $K$-matrices (\ref{open-k4}) satisfy the corresponding fused graded reflection equations. We can further fuse the reflection matrices $K_{\bar 1}^{\pm}(u)$ [or $K_{\bar 1^\prime }^{\pm}(u)$] and $K_2^{\pm}(u)$ by means of the $20$-dimensional projector $ P_{{\bar 1}2}^{(20)}$ (\ref{cjjc}) [or $P_{{\bar 1}^\prime 2}^{(20)}$ (\ref{xiao1})]. The resulting new fused reflection matrices are \bea && {K}^{-}_{\tilde 1}(u)=(u- \eta)^{-1} P_{2{\bar1}}^{(20)} K_{\bar{1}}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{{2}}\eta)K_{2}^{-}(u-\eta)P_{{\bar 1}2}^{(20)}, \no \\[4pt] && {K}^{+}_{\tilde 1}(u)=(2u+\eta)^{-1} P_{{\bar 1}2}^{(20)} K_{2}^{+}(u-\eta)R_{{\bar 1}2}(-2u+\frac{1}{{2}}\eta) K_{\bar{1}}^{+}(u+\frac{1}{2}\eta)P_{2{\bar 1}}^{(20)}, \no \\[4pt] && {K}^{-}_{\tilde 1^\prime }(u)=(u+ \eta)^{-1} P_{2{\bar1^\prime }}^{(20)} K_{\bar{1}^\prime }^{-}(u-\frac{1}{2}\eta)R_{2\bar 1^\prime}(2u+\frac{1}{{2}}\eta) K_{2}^{-}(u+\eta)P_{{\bar 1}^\prime 2}^{(20)},\no \\[4pt] && {K}^{+}_{\tilde 1^\prime }(u)=(2u-\eta)^{-1} P_{{\bar 1}^\prime 2}^{(20)} K_{2}^{+}(u+\eta)R_{{\bar 1}^\prime 2}(-2u-\frac{1}{{2}}\eta) K_{\bar{1}^\prime }^{+}(u-\frac{1}{2}\eta)P_{2{\bar 1^\prime }}^{(20)}.\label{fuseref4} \eea It is easy to check that the fused reflection matrices (\ref{fuseref4}) are $20\times 20$ matrices whose elements are polynomials of $u$ with maximum degree three. Moreover, keeping the correspondence (\ref{vec-corresp}) in mind, we have the important relations that the fused reflection matrices defined in the projected subspace $V_{ \tilde 1}$ and those defined in the projected subspace $V_{ \tilde 1^\prime}$ are equal, \bea {K}^{-}_{\tilde 1}(u)={K}^{-}_{\tilde 1^\prime }(u), \quad {K}^{+}_{\tilde 1}(u)={K}^{+}_{\tilde 1^\prime }(u), \label{k-iden} \eea which will be used to close the fusion processes with boundary reflections.
\subsection{Operator product identities} For the model with open boundary condition, besides the fused monodromy matrices (\ref{T6}), we also need the fused reflecting monodromy matrices, which are constructed as \begin{eqnarray} &&\hat{T}_{\bar 0}(u)=R_{ N\bar 0}(u+\theta_N)\cdots R_{2\bar 0}(u+\theta_2)R_{1\bar 0}(u+\theta_1), \no \\[4pt] &&\hat{T}_{\bar 0^\prime}(u)=R_{N\bar 0^\prime}(u+\theta_N)\cdots R_{2\bar 0^\prime}(u+\theta_2)R_{1\bar 0^\prime}(u+\theta_1).\label{openT6} \end{eqnarray} The fused reflecting monodromy matrices satisfy the graded Yang-Baxter relations \begin{eqnarray} &&R_{1\bar 2} (u-v) \hat{T}_1(u) \hat{ T}_{\bar 2}(v) = \hat{ T}_{\bar 2}(v) \hat{T}_1(u) R_{1\bar 2} (u-v), \no \\[4pt] &&R_{1\bar 2^\prime} (u-v) \hat{T}_1(u) \hat{T}_{\bar 2^\prime}(v) = \hat{ T}_{\bar 2^\prime}(v) \hat{T}_1(u) R_{1\bar 2^\prime} (u-v), \no \\[4pt] &&R_{\bar 1\bar 2^\prime} (u-v) \hat{T}_{\bar 1}(u) \hat{T}_{\bar 2^\prime}(v) = \hat{ T}_{\bar 2^\prime}(v) \hat{T}_{\bar 1}(u) R_{\bar 1\bar 2^\prime} (u-v).\label{yyBB222} \end{eqnarray} The fused transfer matrices are defined as \bea &&t^{(1)}(u)= str_{\bar 0}\{K^{+}_{\bar{0}}(u) T_{\bar 0}(u) K^{-}_{\bar{0}}(u) \hat{T}_{\bar 0}(u)\},\no \\[4pt] &&t^{(2)}(u)= str_{\bar 0^\prime}\{K^{+}_{\bar{0}^\prime}(u) T_{\bar 0^\prime}(u) K^{-}_{\bar{0}^\prime}(u) \hat{ T}_{\bar 0^\prime}(u)\}.\label{openTransfer-5}\eea Using the same method as in the periodic case, we can obtain the operator product identities among the fused transfer matrices as \bea && t (\pm\theta_j)t (\pm\theta_j+\eta)=-\frac{1}{ 4} \frac{(\pm\theta_j)(\pm\theta_j+\eta) }{(\pm\theta_j+\frac{1}{{2}}\eta)^2}\nonumber\\[4pt] &&\hspace{20mm}\times\prod_{l=1}^N (\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta) t^{(1)}(\pm\theta_j+\frac{1}{2}\eta),\label{openident1} \\[4pt] && t (\pm\theta_j)t (\pm\theta_j-\eta)=-\frac{1}{ 4} \frac{(\pm\theta_j)(\pm\theta_j-\eta) }{(\pm\theta_j-\frac{1}{{2}}\eta)^2}\nonumber\\[4pt] &&\hspace{20mm}\times\prod_{l=1}^N (\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta) t^{(2)}(\pm\theta_j-\frac{1}{2}\eta),\label{openident2}\\[4pt] && t (\pm\theta_j-\eta){ t}^{(1)}(\pm\theta_j+\frac{1}{{2}}\eta)=\frac{(\pm\theta_j+\frac{1}{2}\eta)^2(\pm\theta_j-\eta)}{(\pm\theta_j+\eta) (\pm\theta_j-\frac{1}{{2}}\eta)^2}\no\\[4pt]&&~~~~~\times \prod_{l=1}^N \frac{(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta) }{(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta)} t (\pm\theta_j+\eta){t}^{(2)}(\pm\theta_j-\frac{1}{{2}}\eta).\label{openident3} \eea The proof of the above operator identities is given in Appendix B. From the definitions, we know that the transfer matrix $t(u)$ is an operator polynomial of $u$ with degree $2N+2$, while the fused ones ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$ are operator polynomials of $u$ with degree $2N+4$. Thus they can be completely determined by $6N+13$ independent conditions. The recursive fusion relations (\ref{openident1}), (\ref{openident2}) and (\ref{openident3}) give $6N$ constraints; the remaining $13$ conditions can be obtained by analyzing the values of the transfer matrices at certain special points.
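The counting here parallels the periodic case: the polynomials $t(u)$, $t^{(1)}(u)$ and $t^{(2)}(u)$ carry
\bea
\underbrace{(2N+3)}_{t(u)}+\underbrace{(2N+5)}_{t^{(1)}(u)}+\underbrace{(2N+5)}_{t^{(2)}(u)}=6N+13\no
\eea
unknown coefficients in total, of which $6N$ are fixed by (\ref{openident1})-(\ref{openident3}) evaluated at the $2N$ points $\{\pm\theta_j\}$, and the remaining $13$ by the special values and asymptotic behaviors collected below.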
After some direct calculation, we have \bea && t(0)=0,\quad {t}^{(1)}(0)=0,\quad {t}^{(2)}(0)=0, \quad {t}^{(1)}(\frac{\eta}{2})=-2\xi \tilde{\xi} t(\eta), \no \\[4pt] && {t}^{(1)}(-\frac{\eta}{2})=-2\xi \tilde{\xi} t(-\eta), \quad {t}^{(2)}(\frac{\eta}{2})=2\xi \tilde{\xi} t(\eta), \quad {t}^{(2)}(-\frac{\eta}{2})=2\xi \tilde{\xi} t(-\eta),\no \\[4pt] && \frac{\partial {t}^{(1)}(u)}{\partial u}|_{u=0}+ \frac{\partial {t}^{(2)}(u)}{\partial u}|_{u=0}=0. \label{specialvalue4} \eea Meanwhile, the asymptotic behaviors of $t(u)$, $ t^{(1)}(u)$ and $ t^{(2)}(u)$ read \bea && t(u)|_{u\rightarrow\infty}=-[c_1\tilde{c}_2+\tilde{c}_1c_2-c_3\tilde{c}_4-\tilde{c}_3c_4] u^{2N+2}\times {\rm id} -\eta \hat U u^{2N+1}+\cdots, \no \\[4pt] && {t}^{(1)}(u)|_{u\rightarrow\infty}=-4\{2[c_3c_4\tilde{c}_3\tilde{c}_4-\tilde{c}_3c_4-c_3\tilde{c}_4-1]+(1+c_1\tilde{c}_2)^2+(1+\tilde{c}_1c_2)^2\no\\[4pt] &&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}\times{\rm id} -4\eta\hat Q u^{2N+3}+\cdots, \no \\[4pt] && {t}^{(2)}(u)|_{u\rightarrow\infty}=-4\{2[c_1c_2\tilde{c}_1\tilde{c}_2-\tilde{c}_1c_2-c_1\tilde{c}_2-1]+(1+c_3\tilde{c}_4)^2+(1+\tilde{c}_3c_4)^2\no\\[4pt] &&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}\times{\rm id} +\cdots.\label{openasym3} \eea Here the operator $\hat{U}$, which appears in the coefficient of the degree-$2N+1$ term of the transfer matrix $t(u)$, is given by \bea \hat U= \sum_{i=1}^{N}\hat U_i=\sum_{i=1}^{N}(M_i \tilde{M}_i+\tilde{M}_i M_i),\label{openasym5} \eea where $M_i$ is given by (\ref{K-matrix-1}), $\tilde M_i$ is determined by (\ref{K-matrix-2}) and the operator $\hat U_i$ is \bea \hat U_i=\left( \begin{array}{cccc} 2+c_1\tilde{c}_2+\tilde{c}_1c_2 & 0 & 0 & 0 \\ 0 & 2+c_1\tilde{c}_2+\tilde{c}_1c_2 & 0 & 0 \\ 0 & 0 & 2+c_3\tilde{c}_4+\tilde{c}_3c_4 & 0 \\ 0 & 0 & 0 & 2+c_3\tilde{c}_4+\tilde{c}_3c_4 \\ \end{array} \right)_i. \eea We note that $\hat U_i$ is an operator acting on the $i$-th physical space $V_i$ and is represented by a diagonal matrix with constant elements. The summation in Eq.(\ref{openasym5}) is a direct sum, each $\hat U_i$ acting nontrivially only on $V_i$, so the representation matrix of the operator $\hat U$ is also diagonal with constant elements. Moreover, the operator $\hat{Q}$, which appears in the coefficient of the degree-$2N+3$ term of the fused transfer matrix ${t}^{(1)}(u)$, is given by \bea \hat Q=\sum_{i=1}^{N}\hat Q_i,\label{openasym4} \eea where the operator $\hat Q_i$ is defined in the $i$-th physical space $V_i$ with the matrix form \bea &&\hat Q_i=\left( \begin{array}{cccc} \alpha & 0 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \beta \\ \end{array} \right)_i, \no \\[4pt] && \alpha=2-2\tilde{c}_1\tilde{c}_2+4c_1\tilde{c}_2+(c_1\tilde{c}_2)^2+4\tilde{c}_1c_2-2c_1c_2+(\tilde{c}_1c_2)^2,\no \\[4pt] && \beta=2-2\tilde{c}_3\tilde{c}_4-(c_1\tilde{c}_2)^2-(\tilde{c}_1c_2)^2-4c_1c_2\tilde{c}_1\tilde{c}_2+4c_3\tilde{c}_4+2c_1\tilde{c}_2c_3\tilde{c}_4\no\\[4pt] &&\hspace{10mm}+2\tilde{c}_1c_2c_3\tilde{c}_4 +4\tilde{c}_3c_4+2c_1\tilde{c}_2\tilde{c}_3c_4+2\tilde{c}_1c_2\tilde{c}_3c_4-2c_3c_4.\no \eea Again, the operator $\hat Q_i$ is a diagonal matrix with constant elements, and the summation over $\hat Q_i$ in Eq.(\ref{openasym4}) is likewise a direct sum.
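Since $\hat U$ is a sum of identical single-site diagonal operators, its spectrum can be checked mechanically. The following minimal sketch (Python with \texttt{numpy}; the boundary parameters are random placeholders and $N=3$ is chosen only to keep the matrix small) builds $\hat U=\sum_i \hat U_i$ on $(\mathbb{C}^4)^{\otimes N}$ and verifies that its distinct eigenvalues are the $N+1$ values quoted in (\ref{higher-1}) below.
\begin{verbatim}
import numpy as np

# hat{U}_i = diag(a, a, b, b) on site i, with
#   a = 2 + c1*tc2 + tc1*c2  and  b = 2 + c3*tc4 + tc3*c4.
# The sum over N sites then has the N + 1 distinct eigenvalues
#   N*a + k*(b - a),  k = 0, ..., N.
rng = np.random.default_rng(1)
c1, c2, c3, c4, tc1, tc2, tc3, tc4 = rng.normal(size=8)
a = 2 + c1 * tc2 + tc1 * c2   # bosonic diagonal entry
b = 2 + c3 * tc4 + tc3 * c4   # fermionic diagonal entry

N = 3
site = np.diag([a, a, b, b])
U = np.zeros((4 ** N, 4 ** N))
for i in range(N):
    term = np.eye(1)
    for j in range(N):
        term = np.kron(term, site if j == i else np.eye(4))
    U += term

spectrum = np.unique(np.round(np.diag(U), 10))
predicted = np.unique(np.round([N * a + k * (b - a)
                                for k in range(N + 1)], 10))
assert np.allclose(spectrum, predicted)
\end{verbatim}
The same construction with $\hat Q_i$ in place of $\hat U_i$ reproduces the eigenvalues in (\ref{higher-2}).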
So far, we have obtained the $6N+13$ relations (\ref{openident1}), (\ref{openident2}), (\ref{openident3}), (\ref{specialvalue4})-(\ref{openasym4}), which allow us to determine the eigenvalues of the transfer matrices $t(u)$, ${t}^{(1)}(u)$ and $ t^{(2)}(u)$. \subsection{Functional relations} From the graded Yang-Baxter relations (\ref{yybb4}), (\ref{yyBB222}) and the graded reflection equations (\ref{r1}) and (\ref{r2}), one can prove that the transfer matrices $t(u)$, ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$ commute with each other, namely, \bea [t(u), {t}^{(1)}(u)]=[t(u), {t}^{(2)}(u)]=[ {t}^{(1)}(u), {t}^{(2)}(u)]=0.\label{opencom} \eea Therefore, they have common eigenstates and can be diagonalized simultaneously. Let $|\Phi\rangle$ be a common eigenstate. Acting with the transfer matrices on this eigenstate, we have \bea &&t(u)|\Phi\rangle=\Lambda(u)|\Phi\rangle,\no\\ && t^{(1)}(u)|\Phi\rangle= \Lambda^{(1)}(u)|\Phi\rangle,\no\\ && t^{(2)}(u)|\Phi\rangle= \Lambda^{(2)}(u)|\Phi\rangle,\no \eea where $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ are the eigenvalues of $t(u)$, ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$, respectively. It is easy to check that the eigenvalue $\Lambda(u)$ is a polynomial in $u$ of degree $2N+2$, while both ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ are polynomials in $u$ of degree $2N+4$. Thus $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ can be determined by $6N+13$ independent conditions. Acting with the operator product identities (\ref{openident1}), (\ref{openident2}) and (\ref{openident3}) on the state $|\Phi\rangle$, we obtain the functional relations among the eigenvalues \bea && \Lambda(\pm\theta_j)\Lambda(\pm\theta_j+\eta)=-\frac{1}{ 4} \frac{(\pm\theta_j)(\pm\theta_j+\eta) }{(\pm\theta_j+\frac{1}{{2}}\eta)^2}\nonumber\\[4pt] &&\hspace{10mm}\times\prod_{l=1}^N (\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta) \Lambda^{(1)}(\pm\theta_j+\frac{1}{2}\eta), \no \\[4pt] && \Lambda (\pm\theta_j)\Lambda(\pm\theta_j-\eta)=-\frac{1}{ 4} \frac{(\pm\theta_j)(\pm\theta_j-\eta) }{(\pm\theta_j-\frac{1}{{2}}\eta)^2}\nonumber\\[4pt] &&\hspace{10mm}\times\prod_{l=1}^N (\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta) \Lambda^{(2)}(\pm\theta_j-\frac{1}{2}\eta), \no \\[4pt] && \Lambda (\pm\theta_j-\eta){ \Lambda}^{(1)}(\pm\theta_j+\frac{1}{{2}}\eta)= \frac{(\pm\theta_j+\frac{1}{2}\eta)^2(\pm\theta_j-\eta)}{(\pm\theta_j+\eta) (\pm\theta_j-\frac{1}{{2}}\eta)^2}\nonumber\\[4pt] &&\hspace{10mm}\times \prod_{l=1}^N \frac{(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta)}{(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta)} \Lambda (\pm\theta_j+\eta) {\Lambda}^{(2)}(\pm\theta_j-\frac{1}{{2}}\eta),\label{eigenident3} \eea where $j=1,2,\cdots,N$.
Acting with Eqs.(\ref{specialvalue4}) and (\ref{openasym3}) on the state $|\Phi\rangle$, we have \bea && \Lambda(0)=0,\quad {\Lambda}^{(1)}(0)=0,\quad {\Lambda}^{(2)}(0)=0, \quad {\Lambda}^{(1)}(\frac{\eta}{2})=-2\xi \tilde{\xi} \Lambda(\eta), \no \\[4pt] &&{\Lambda}^{(1)}(-\frac{\eta}{2})=-2\xi \tilde{\xi} \Lambda(-\eta), \quad {\Lambda}^{(2)}(\frac{\eta}{2})=2\xi \tilde{\xi} \Lambda(\eta),\quad {\Lambda}^{(2)}(-\frac{\eta}{2})=2\xi \tilde{\xi} \Lambda(-\eta),\no\\[4pt] && \frac{\partial {\Lambda}^{(1)}(u)}{\partial u}|_{u=0}+ \frac{\partial {\Lambda}^{(2)}(u)}{\partial u}|_{u=0}=0, \no \\[4pt] && \Lambda(u)|_{u\rightarrow\infty}=-[c_1\tilde{c}_2+\tilde{c}_1c_2-c_3\tilde{c}_4-\tilde{c}_3c_4] u^{2N+2}, \no \\[4pt] && {\Lambda}^{(1)}(u)|_{u\rightarrow\infty}=-4\{2[c_3c_4\tilde{c}_3\tilde{c}_4-\tilde{c}_3c_4-c_3\tilde{c}_4-1]+(1+c_1\tilde{c}_2)^2+(1+\tilde{c}_1c_2)^2\no\\[4pt] &&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}, \no \\[4pt] && {\Lambda}^{(2)}(u)|_{u\rightarrow\infty}=-4\{2[c_1c_2\tilde{c}_1\tilde{c}_2-\tilde{c}_1c_2-c_1\tilde{c}_2-1]+(1+c_3\tilde{c}_4)^2+(1+\tilde{c}_3c_4)^2\no\\[4pt] &&\hspace{30mm}-(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)\}u^{2N+4}.\label{openasym33} \eea Because the operators $\hat U$ given by (\ref{openasym5}) and $\hat Q$ given by (\ref{openasym4}) are represented by constant diagonal matrices, they commute with each other and with all the fused transfer matrices. Thus the state $|\Phi\rangle$ is also an eigenstate of $\hat U$ and $\hat Q$. A detailed calculation shows that the operator $\hat U$ has $N+1$ distinct eigenvalues \bea N(2+c_1\tilde{c}_2+\tilde{c}_1c_2)+k(c_3\tilde{c}_4+\tilde{c}_3c_4-c_1 \tilde{c}_2-\tilde{c}_1c_2), \quad k=0,1,\cdots,N.\label{higher-1} \eea Eq.(\ref{higher-1}) gives all the possible values of the degree-$2N+1$ coefficient of the polynomial $\Lambda(u)$; acting with the operator $\hat U$ on the state $|\Phi\rangle$ yields one of them. A direct calculation likewise shows that the operator $\hat Q$ has $N+1$ distinct eigenvalues \bea &&N\big[2-2\tilde{c}_1\tilde{c}_2+4c_1\tilde{c}_2+(c_1\tilde{c}_2)^2+4\tilde{c}_1c_2-2c_1c_2+(\tilde{c}_1c_2)^2\big]\nonumber\\[4pt] &&\hspace{10mm}+k\big[2(c_1\tilde{c}_2+\tilde{c}_1c_2)(c_3\tilde{c}_4+\tilde{c}_3c_4)-2(c_1\tilde{c}_2+\tilde{c}_1c_2)^2\nonumber\\[4pt] &&\hspace{10mm}+4(c_3\tilde{c}_4+\tilde{c}_3c_4-c_1\tilde{c}_2-\tilde{c}_1c_2)\big], \quad k=0,1,\cdots,N.\label{higher-2} \eea Eq.(\ref{higher-2}) gives all the possible values of the degree-$2N+3$ coefficient of the polynomial $\Lambda^{(1)}(u)$; acting with the operator $\hat Q$ on the state $|\Phi\rangle$ gives one of them. We thus conclude that the above $6N+13$ relations (\ref{eigenident3})-(\ref{higher-2}) enable us to completely determine the eigenvalues $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$, which are expressed as inhomogeneous $T$-$Q$ relations in the next subsection. \subsection{Inhomogeneous $T$-$Q$ relations} For simplicity, we define the functions $z^{(l)}(u)$, $x_1(u)$ and $x_2(u)$ by \begin{eqnarray} z^{(l)} (u)&=&\left\{ \begin{array}{ll} \displaystyle(-1)^{p(l)}\alpha_l(u)Q^{(0)}(u)K^{(l)}(u)\frac{Q^{(l-1)}(u+\eta)Q^{(l)}(u-\eta)}{Q^{(l)}(u)Q^{(l-1)}(u)}, &l=1,2,\\[6mm] \displaystyle(-1)^{p(l)}\alpha_l(u)Q^{(0)}(u)K^{(l)}(u)\frac{Q^{(l-1)}(u-\eta)Q^{(l)}(u+\eta)}{Q^{(l)}(u)Q^{(l-1)}(u)}, &l=3,4,\end{array} \right.
\no \\ x_1 (u)&=&u^2Q^{(0)}(u+\eta)Q^{(0)}(u)\frac{f^{(1)}(u)Q^{(2)}(-u-\eta)}{Q^{(1)}(u)},\no\\[4pt] x_2 (u)&=&u^2Q^{(0)}(u+\eta)Q^{(0)}(u)Q^{(0)}(-u)\frac{f^{(2)}(u)Q^{(2)}(-u-\eta)}{Q^{(3)}(u)}.\no \end{eqnarray} Here the structure factor $\alpha_{l}(u)$ is defined as \begin{eqnarray} \alpha_l(u)=\left\{ \begin{array}{ll} \displaystyle\frac{u}{u+\frac{1}{2}\eta}, &l=1,4,\\[6mm] \displaystyle\frac{u^2}{(u+\frac{1}{2}\eta)(u+\eta)}, &l=2,3.\end{array} \right.\no \end{eqnarray} The $Q$-functions are \bea &&Q^{(0)}(u)=\prod_{l=1}^{N}(u-\theta_l)(u+\theta_l),\quad Q^{(m)}(u)=\prod_{j=1}^{L_m}(u-\lambda_{j}^{(m)})(u+\lambda_{j}^{(m)}+m\eta), \quad m=1,2,\no \\ &&Q^{(3)}(u)=\prod_{j=1}^{L_3}(u-\lambda_{j}^{(3)})(u+\lambda_{j}^{(3)}+\eta),\quad Q^{(4)}(u)=1,\label{higher-3} \eea where $L_1$, $L_2$ and $L_3$ are non-negative integers giving the numbers of the Bethe roots $\lambda_{j}^{(1)}$, $\lambda_{j}^{(2)}$ and $\lambda_{j}^{(3)}$, respectively. The functions $K^{(l)}(u)$ are related to the boundary reflections and are given by \bea &&K^{(1)}(u)=(\xi+\sqrt{1+c_1c_2}u)(\tilde{\xi}+\sqrt{1+\tilde{c}_1\tilde{c}_2}u),\no\\[4pt] &&K^{(2)}(u)=(\xi-\sqrt{1+c_1c_2}(u+\eta))(\tilde{\xi}-\sqrt{1+\tilde{c}_1\tilde{c}_2}(u+\eta)),\no\\[4pt] &&K^{(3)}(u)=(\xi+\sqrt{1+c_1c_2}(u+\eta))(\tilde{\xi}+\sqrt{1+\tilde{c}_1\tilde{c}_2}(u+\eta)),\no\\[4pt] &&K^{(4)}(u)=(\xi-\sqrt{1+c_1c_2}u)(\tilde{\xi}-\sqrt{1+\tilde{c}_1\tilde{c}_2}u). \eea The polynomials $f^{(l)}(u)$ in the inhomogeneous terms $x_1(u)$ and $x_2(u)$ are \bea f^{(l)}(u)=g_lu(u+\eta)(u-\eta)(u+\frac{1}{2}\eta)^2(u+\frac{3}{2}\eta)(u-\frac{1}{2}\eta)(u+2\eta),\quad l=1,2,\label{func} \eea where the constants $g_l$ are given by \bea &&g_1=-2-\tilde{c}_1c_2-c_1\tilde{c}_2-2\sqrt{(1+c_1c_2)(1+\tilde{c}_1\tilde{c}_2)},\no\\[4pt] &&g_2=2+c_3\tilde{c}_4+\tilde{c}_3c_4+2\sqrt{(1+c_1c_2)(1+\tilde{c}_1\tilde{c}_2)}.
\eea Using the above functions and based on Eqs.(\ref{eigenident3})-(\ref{higher-2}), we construct the eigenvalues $\Lambda(u)$, ${\Lambda}^{(1)}(u)$ and ${\Lambda}^{(2)}(u)$ as the following inhomogeneous $T$-$Q$ relations: \bea &&\Lambda (u)=\sum_{l=1}^4 z^{(l)} (u)+x_1 (u)+x_2 (u),\no \\[4pt] &&\Lambda^{(1)}(u)=-4u^2[Q^{(0)}(u+\frac{1}{2}\eta)(u+\frac{1}{2}\eta)(u-\frac{1}{2}\eta)]^{-1}\Big\{\sum_{l=1}^4\sum_{m=1}^2 \tilde z^{(l)} (u+\frac{1}{2}\eta)\tilde z^{(m)}(u-\frac{1}{2}\eta)\no\\[4pt] &&\qquad\quad-z^{(1)}(u+\frac{1}{2}\eta)z^{(2)}(u-\frac{1}{2}\eta)+z^{(4)}(u+\frac{1}{2}\eta)z^{(3)}(u-\frac{1}{2}\eta)\Big\}, \no \\[4pt] &&\Lambda^{(2)}(u)=-4u^2[Q^{(0)}(u-\frac{1}{2}\eta)(u+\frac{1}{2}\eta)(u-\frac{1}{2}\eta)]^{-1}\Big\{\sum_{l=1}^4\sum_{m=3}^4 \tilde z^{(l)} (u+\frac{1}{2}\eta)\tilde z^{(m)}(u-\frac{1}{2}\eta)\no\\[4pt] &&\qquad\quad+z^{(1)}(u+\frac{1}{2}\eta)z^{(2)}(u-\frac{1}{2}\eta)-z^{(4)}(u+\frac{1}{2}\eta)z^{(2)}(u-\frac{1}{2}\eta)\Big\},\label{eigen3} \eea where \bea \tilde z^{(1)}(u)=z^{(1)}(u)+x_1 (u),~\tilde z^{(2)}(u)=z^{(2)}(u),~\tilde z^{(3)}(u)=z^{(3)}(u),~\tilde z^{(4)}(u)=z^{(4)}(u)+x_2 (u).\no \eea Since the eigenvalues are polynomials, the residues of Eq.(\ref{eigen3}) at the apparent poles must vanish, which gives the Bethe ansatz equations \bea &&1+\frac{\lambda_{l}^{(1)}}{\lambda_{l}^{(1)}+\eta}\frac{K^{(2)}(\lambda_{l}^{(1)})Q^{(0)}(\lambda_{l}^{(1)})}{K^{(1)}(\lambda_{l}^{(1)})Q^{(0)}(\lambda_{l}^{(1)}+\eta)} \frac{Q^{(1)}(\lambda_{l}^{(1)}+\eta)Q^{(2)}(\lambda_{l}^{(1)}-\eta)}{Q^{(1)}(\lambda_{l}^{(1)}-\eta)Q^{(2)}(\lambda_{l}^{(1)})}\no\\[4pt] &&\qquad=-\frac{\lambda_{l}^{(1)}(\lambda_{l}^{(1)}+\frac{1}{2}\eta)f^{(1)}(\lambda_{l}^{(1)})Q^{(0)}(\lambda_{l}^{(1)})Q^{(2)}(-\lambda_{l}^{(1)}-\eta)} {K^{(1)}(\lambda_{l}^{(1)})Q^{(1)}(\lambda_{l}^{(1)}-\eta)},\quad l=1,\cdots,L_1,\no\\[4pt] &&\frac{K^{(3)}(\lambda_{l}^{(2)})}{K^{(2)}(\lambda_{l}^{(2)})}\frac{Q^{(3)}(\lambda_{l}^{(2)}+\eta)}{Q^{(3)}(\lambda_{l}^{(2)})}= \frac{Q^{(1)}(\lambda_{l}^{(2)}+\eta)}{Q^{(1)}(\lambda_{l}^{(2)})},\quad l=1,\cdots,L_2,\no\\[4pt] &&\frac{\lambda_{l}^{(3)}(\lambda_{l}^{(3)}+\frac{1}{2}\eta)Q^{(0)}(\lambda_{l}^{(3)}+\eta)Q^{(0)}(-\lambda_{l}^{(3)})f^{(2)}(\lambda_{l}^{(3)})Q^{(2)}(-\lambda_{l}^{(3)}-\eta)} {K^{(4)}(\lambda_{l}^{(3)})Q^{(3)}(\lambda_{l}^{(3)}-\eta)}\no \\[4pt] &&\qquad =1+\frac{\lambda_{l}^{(3)}}{\lambda_{l}^{(3)}+\eta}\frac{K^{(3)}(\lambda_{l}^{(3)})}{K^{(4)}(\lambda_{l}^{(3)})}\frac{Q^{(2)}(\lambda_{l}^{(3)}-\eta)Q^{(3)}(\lambda_{l}^{(3)}+\eta)} {Q^{(2)}(\lambda_{l}^{(3)})Q^{(3)}(\lambda_{l}^{(3)}-\eta)},\quad l=1,\cdots,L_3.\label{open-BAE} \eea From the asymptotic behaviors and the next-to-leading-order coefficients of the corresponding polynomials, the numbers of Bethe roots must satisfy \bea L_1=L_2+N+4,\quad L_3=2N+L_2+4,\quad L_2=k, \quad k=0, 1, \cdots, N. \eea Some remarks are in order. The coefficient of the $u^{2N+1}$ term in the polynomial $\Lambda(u)$ and that of the $u^{2N+3}$ term in the polynomial $\Lambda^{(1)}(u)$ are not related to the Bethe roots. The constraints (\ref{higher-1}) and (\ref{higher-2}) require $L_2=k$, where $k=0,\cdots,N$ is related to the eigenvalues of the operators $\hat{U}$ and $\hat{Q}$. The Bethe ansatz equations (\ref{open-BAE}) can then describe all the eigenstates of the system. The second set of Bethe ansatz equations in Eq.(\ref{open-BAE}) is homogeneous. This is because the reflection matrices $K^{\pm}(u)$ are block-diagonal.
The matrix elements connecting bosonic (parity $0$) and fermionic (parity $1$) basis states vanish, since the integrability of the system requires that reflection processes from a bosonic basis state to a fermionic one, and vice versa, are forbidden. We note that the Bethe ansatz equations obtained from the regularity of $\Lambda(u)$ are the same as those obtained from the regularities of $\Lambda^{(1)}(u)$ and $\Lambda^{(2)}(u)$. Meanwhile, each Bethe root contributes two zeros to the corresponding function $Q^{(m)}(u)$, and both zeros give the same Bethe ansatz equation. We have checked that the inhomogeneous $T$-$Q$ relations (\ref{eigen3}) satisfy the above-mentioned $6N+13$ conditions (\ref{eigenident3})-(\ref{higher-2}). Therefore, $\Lambda(u)$, $\Lambda^{(1)}(u)$ and $\Lambda^{(2)}(u)$ are the eigenvalues of the transfer matrices $t(u)$, ${t}^{(1)}(u)$ and ${t}^{(2)}(u)$, respectively. Finally, the eigenvalues of the Hamiltonian (\ref{hh}) are obtained from $\Lambda(u)$ as \bea E=\frac{\partial \ln \Lambda(u)}{\partial u}|_{u=0,\{\theta_j\}=0}. \eea \section{Conclusion} In this paper, we develop a graded nested off-diagonal Bethe ansatz method and study the exact solutions of the supersymmetric $SU(2|2)$ model with both periodic and off-diagonal boundary conditions. After generalizing the fusion procedure to the supersymmetric case, we obtain closed sets of operator product identities. For the periodic case, the eigenvalues are given in terms of the homogeneous $T$-$Q$ relations (\ref{ep-3}), while for the open case they are given by the inhomogeneous $T$-$Q$ relations (\ref{eigen3}). This scheme can be generalized to other high-rank supersymmetric quantum integrable models. \section*{Acknowledgments} The financial supports from the National Program for Basic Research of MOST (Grant Nos. 2016YFA0300600 and 2016YFA0302104), the National Natural Science Foundation of China (Grant Nos. 11934015, 11975183, 11947301, 11774397, 11775178 and 11775177), the Major Basic Research Program of Natural Science of Shaanxi Province (Grant Nos. 2017KCT-12, 2017ZDJC-32), Australian Research Council (Grant No. DP 190101529), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB33000000), the National Postdoctoral Program for Innovative Talents (BX20180350) and the Double First-Class University Construction Project of Northwest University are gratefully acknowledged. \section*{Appendix A: Fusion of the reflection matrices} \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} The general fusion procedure for the reflection matrices was given in \cite{Mez92, Zho96}. We generalize the method developed in \cite{Hao14} to study the fusion of the reflection matrices for supersymmetric models, taking the $SU(2|2)$ model as an example. The (graded) reflection equation at a special point gives \begin{equation} R_{12}(-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(2u-\alpha) {K^{-}_{2}}(u)= {K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(-\alpha), \label{oled-1} \end{equation} where $R_{12}(-\alpha)=P_{12}^{(d)}S_{12}$, as defined previously.
Multiplying Eq.(\ref{oled-1}) by the projector $P_{12}^{(d)}$ from the left and using the property $P_{12}^{(d)} R_{12}(-\alpha)= R_{12}(-\alpha)$, we have \begin{eqnarray} &&R_{12}(-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(2u-\alpha) {K^{-}_{2}}(u) \no \\ &&\qquad \qquad =P_{12}^{(d)} {K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)R_{21}(-\alpha).\label{oled-2} \end{eqnarray} Comparing the right-hand sides of Eqs.(\ref{oled-1}) and (\ref{oled-2}), we obtain \begin{equation} P_{12}^{(d)} {K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)P_{21}^{(d)}= {K^{-}_{2}}(u)R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)P_{21}^{(d)}.\label{oled-3} \end{equation} Equation (\ref{oled-3}) gives the general principle of the fusion of the reflection matrices. If we define $P_{12}^{(d)} {K^{-}_{2}}(u)$ $R_{12}(2u-\alpha){K^{-}_{1}}(u-\alpha)P_{21}^{(d)}$ as the fused reflection matrix $K^-_{\langle 1 2\rangle}(u)\equiv K^-_{\bar 1}(u)$, where integrability requires the insertion of the $R$-matrix at this fixed spectral parameter, we can prove that the fused $K$-matrix $K^-_{\bar 1}(u)$ also satisfies the (graded) reflection equation: \begin{eqnarray} && R_{\bar 12}(u-v) K^{-}_{\bar 1}(u) R _{2\bar 1}(u+v) K^{-}_{2}(v) =P_{00'}^{(d)}R_{0'2}(u-v)R_{02}(u-v-\alpha)P_{00'}^{(d)}\no \\ &&\qquad \times P_{00'}^{(d)} {K^{-}_{0'}}(u)R_{00'}(2u-\alpha){K^{-}_{0}}(u-\alpha)P_{0'0}^{(d)} P_{0'0}^{(d)}R_{20'}(u+v)R_{20}(u+v-\alpha)P_{0'0}^{(d)} K^{-}_{2}(v) \no \\ && \quad =P_{00'}^{(d)}R_{0'2}(u-v)R_{02}(u-v-\alpha){K^{-}_{0'}}(u)R_{00'}(2u-\alpha) \no \\ &&\qquad \times {K^{-}_{0}}(u-\alpha)R_{20'}(u+v)R_{20}(u+v-\alpha) K^{-}_{2}(v)P_{0'0}^{(d)} \no \\ &&\quad =P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{02}(u-v-\alpha)R_{00'}(2u-\alpha)R_{20'}(u+v) \no \\ &&\qquad \times {K^{-}_{0}}(u-\alpha) R_{20}(u+v-\alpha) K^{-}_{2}(v)P_{0'0}^{(d)} \no \\ &&\quad =P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{20'}(u+v)R_{00'}(2u-\alpha) \no \\ &&\qquad \times R_{02}(u-v-\alpha){K^{-}_{0}}(u-\alpha) R_{20}(u+v-\alpha) K^{-}_{2}(v)P_{0'0}^{(d)} \no \\ &&\quad =P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{20'}(u+v)R_{00'}(2u-\alpha) \no \\ &&\qquad \times K^{-}_{2}(v) R_{02}(u+v-\alpha) {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)} \no \\ &&\quad =P_{00'}^{(d)}R_{0'2}(u-v){K^{-}_{0'}}(u)R_{20'}(u+v)K^{-}_{2}(v) \no \\ &&\qquad \times R_{00'}(2u-\alpha) R_{02}(u+v-\alpha) {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)} \no \\ &&\quad =P_{00'}^{(d)}K^{-}_{2}(v) R_{0'2}(u+v) {K^{-}_{0'}}(u) R_{20'}(u-v) \no \\ &&\qquad \times R_{00'}(2u-\alpha) R_{02}(u+v-\alpha) {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)} \no \\ &&\quad =K^{-}_{2}(v) P_{00'}^{(d)} R_{0'2}(u+v) {K^{-}_{0'}}(u) R_{02}(u+v-\alpha)R_{00'}(2u-\alpha) R_{20'}(u-v) \no \\ &&\qquad \times {K^{-}_{0}}(u-\alpha) R_{20}(u-v-\alpha) P_{0'0}^{(d)} \no \\ &&\quad =K^{-}_{2}(v) P_{00'}^{(d)} R_{0'2}(u+v) R_{02}(u+v-\alpha){K^{-}_{0'}}(u) R_{00'}(2u-\alpha) {K^{-}_{0}}(u-\alpha) \no \\ &&\qquad \times R_{20'}(u-v) R_{20}(u-v-\alpha) P_{0'0}^{(d)} \no \\ &&\quad =K^{-}_{2}(v) R_{\bar 12}(u+v) K^{-}_{\bar 1}(u) R _{2\bar 1}(u-v).
\end{eqnarray} In the derivation, we have used the relation \begin{equation} P_{21}^{(d)}R_{32}(u)R_{31}(u-\alpha)P_{21}^{(d)}= R_{32}(u)R_{31}(u-\alpha)P_{21}^{(d)}\equiv R_{3\bar 1}(u).\label{so112led-3} \end{equation} From the dual reflection equation (\ref{r2}), we obtain the general construction principle of the fused dual reflection matrices \begin{equation} P_{12}^{(d)} {K^{+}_{2}}(u)R_{12}(-2u-\alpha){K^{+}_{1}}(u+\alpha)P_{21}^{(d)}= {K^{+}_{2}}(u)R_{12}(-2u-\alpha){K^{+}_{1}}(u+\alpha)P_{21}^{(d)}.\label{oled-4} \end{equation} If $R_{12}(-\beta)=S_{12}P_{12}^{(d)}$, the corresponding fusion relations are \begin{eqnarray} &&P_{12}^{(d)} {K^{-}_{1}}(u-\beta)R_{21}(2u-\beta){K^{-}_{2}}(u)P_{21}^{(d)}= P_{12}^{(d)} {K^{-}_{1}}(u-\beta)R_{21}(2u-\beta){K^{-}_{2}}(u),\label{oled-13}\\[6pt] &&P_{12}^{(d)} {K^{+}_{1}}(u+\beta)R_{21}(-2u-\beta){K^{+}_{2}}(u)P_{21}^{(d)} \no \\[6pt] &&\qquad = P_{12}^{(d)} {K^{+}_{1}}(u+\beta)R_{21}(-2u-\beta){K^{+}_{2}}(u).\label{oled-14} \end{eqnarray} Finally, the fused $K$-matrices in subsection 3.2 can be constructed according to Eqs.(\ref{oled-3})-(\ref{oled-4}) or (\ref{oled-13})-(\ref{oled-14}). \section*{Appendix B: Proof of the operator product identities} \setcounter{equation}{0} \renewcommand{\theequation}{B.\arabic{equation}} We introduce the reflecting monodromy matrices \bea &&\hat{T}_{\tilde 0}(u)=R_{N\tilde 0}(u+\theta_N)\cdots R_{2\tilde 0}(u+\theta_2) R_{1\tilde 0}(u+\theta_1) \label{openT5}, \no \\[4pt] &&\hat{T}_{\tilde 0^\prime}(u)=R_{N\tilde 0^\prime}(u+\theta_N)\cdots R_{2\tilde 0^\prime}(u+\theta_2) R_{1\tilde 0^\prime}(u+\theta_1),\label{openT7}\eea which satisfy the graded Yang-Baxter equations \bea &&R_{1\tilde 2} (u-v) \hat{T}_1(u) \hat{ T}_{\tilde 2}(v) = \hat{ T}_{\tilde 2}(v) \hat{T}_1(u) R_{1\tilde 2} (u-v), \no \\[4pt] &&R_{1\tilde 2^\prime} (u-v) \hat{T}_1(u) \hat{T}_{\tilde 2^\prime}(v) = \hat{ T}_{\tilde 2^\prime}(v) \hat{T}_1(u) R_{1\tilde 2^\prime} (u-v).\label{yyBb333-1}\eea In order to solve the transfer matrix $t(u)$ (\ref{tru}), we also need the fused transfer matrices defined as \bea &&\tilde t^{(1)}(u)= str_{\tilde 0}\{{K}^{+}_{\tilde{0} }(u) T_{\tilde 0}(u) {K}^{-}_{\tilde{0} }(u) \hat{T}_{\tilde 0}(u)\}, \no \\[4pt] &&\tilde t^{(2)}(u)= str_{\tilde 0^\prime}\{{K}^{+}_{\tilde{0}^\prime }(u) T_{\tilde 0^\prime}(u) {K}^{-}_{\tilde{0}^\prime }(u) \hat{ T}_{\tilde 0^\prime}(u)\}.\label{openTransfer-6} \eea Similarly to the periodic case, using the property that the above $R$-matrices degenerate into projectors at special points, together with the definitions (\ref{openT6}) and (\ref{openT7}), we obtain the following fusion relations among the reflecting monodromy matrices: \bea &&P^{ (8) }_{21}\hat{T}_2 (u)\hat{T}_1 (u+\eta)P^{(8) }_{21}=\prod_{l=1}^N (u+\theta_l+\eta)\hat{ T}_{\bar 1}(u+\frac{1}{2}\eta),\no\\[4pt] &&\bar P^{ (8) }_{21}\hat{T}_2 (u)\hat{T}_1 (u-\eta)\bar P^{(8) }_{21}=\prod_{l=1}^N (u+\theta_l-\eta)\hat{T}_{\bar 1^\prime }(u-\frac{1}{2}\eta),\no\\[4pt] &&P^{(20) }_{{\bar 1}2} \hat{T}_{\bar{1}} (u+\frac{1}{2}\eta) \hat{T}_2(u-\eta)P^{(20) }_{{\bar 1}2}=\prod_{l=1}^N (u+\theta_l-\eta)\hat{T}_{\tilde 1}(u),\no\\[4pt] &&P^{(20) }_{{\bar 1}^\prime 2} \hat{T}_{\bar{1}^\prime} (u-\frac{1}{2}\eta)\hat{T}_2(u+\eta)P^{(20) }_{{\bar 1}^\prime 2}=\prod_{l=1}^N (u+\theta_l+\eta)\hat{T}_{\tilde 1^\prime }(u).\label{fut-66}\eea From the definitions, we see that the auxiliary spaces are traced out by the super partial traces, while the physical spaces are the same. We remark that these transfer matrices are not independent.
Substituting Eqs.(\ref{peri-iden}) and (\ref{k-iden}) into the definitions (\ref{openTransfer-6}), we obtain that the fused transfer matrices $\tilde{t}^{(1)}(u)$ and $\tilde{t}^{(2)}(u)$ are equal \bea \tilde{t}^{(1)}(u)=\tilde{t}^{(2)}(u).\label{id0} \eea Consider the quantity \bea &&t(u)t(u+\eta) =str_{12}\{K_1^+(u)T_1(u)K_1^-(u)\hat T_1(u)\no\\[4pt] &&\hspace{12mm}\times[T_2(u+\eta)K_2^-(u+\eta)\hat T_2(u+\eta)]^{st_2}[K_2^+(u+\eta)]^{st_2}\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{K_1^{+}(u)T_1(u)K_1^-(u)\hat T_1(u)\no\\[4pt] &&\hspace{12mm}\times[T_2(u+\eta)K_2^-(u+\eta)\hat T_2(u+\eta)]^{st_2}R_{21}^{st_2}(2u+\eta)R_{12}^{st_2}(-2u-\eta)[K_2^+(u+\eta)]^{st_2}\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)T_1(u)T_2(u+\eta)\no\\[4pt] &&\hspace{12mm}\times K_1^-(u)R_{21}(2u+\eta)K_2^-(u+\eta)\hat T_1(u)\hat T_2(u+\eta)\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{(P_{12}^{(8)}+\bar P_{21}^{(8)})K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)\no\\[4pt] &&\hspace{12mm}\times(P_{21}^{(8)}+\bar P_{12}^{(8)}) T_1(u)T_2(u+\eta)(P_{21}^{(8)}+\bar P_{12}^{(8)})K_1^-(u)\no\\[4pt] &&\hspace{12mm}\times R_{21}(2u+\eta)K_2^-(u+\eta)(P_{12}^{(8)}+\bar P_{21}^{(8)})\hat T_1(u)\hat T_2(u+\eta)(P_{12}^{(8)}+\bar P_{21}^{(8)})\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{[P_{12}^{(8)}K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)P_{21}^{(8)}]\no\\[4pt] &&\hspace{12mm}\times [P_{21}^{(8)} T_1(u)T_2(u+\eta)P_{21}^{(8)}]\no\\[4pt] &&\hspace{12mm}\times [P_{21}^{(8)} K_1^-(u)R_{21}(2u+\eta)K_2^-(u+\eta)P_{12}^{(8)}][P_{12}^{(8)}\hat T_1(u)\hat T_2(u+\eta) P_{12}^{(8)}]\}\no\\[4pt] &&\hspace{8mm}+[\rho_2(2u+\eta)]^{-1}str_{12}\{[\bar P_{21}^{(8)}K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)\bar P_{12}^{(8)}]\no\\[4pt] &&\hspace{12mm}\times [\bar P_{12}^{(8)} T_1(u)T_2(u+\eta)\bar P_{12}^{(8)}]\no\\[4pt] &&\hspace{12mm}\times[\bar P_{12}^{(8)}K_1^-(u) R_{21}(2u+\eta)K_2^-(u+\eta)\bar P_{21}^{(8)}][\bar P_{21}^{(8)}\hat T_1(u)\hat T_2(u+\eta)\bar P_{21}^{(8)}]\}\no\\[4pt] &&\hspace{8mm}=t_1(u)+t_2(u).\label{openident1-tan-1} \eea The first term is the fusion by the 8-dimensional projectors and the result is \bea &&t_1(u)=[\rho_2(2u+\eta)]^{-1}(u+\eta)(u)\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta)\no\\[4pt] &&\hspace{12mm}\times str_{\langle12\rangle}\{K_{\langle12\rangle}^{+}(u+\frac12\eta)T_{\langle12\rangle}^{(8)}(u+\frac12\eta)K_{\langle12\rangle}^{-}(u+\frac12\eta) \hat T_{\langle12\rangle}^{(8)}(u+\frac12\eta)\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}(u+\eta)u\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta) t^{(1)}(u+\frac12\eta). \eea The second term is the fusion by the other 8-dimensional projectors. 
Detailed calculation gives \bea &&t_2(u)=[\rho_2(2u+\eta)]^{-1}str_{12}\{\bar P_{21}^{(8)}[\bar P_{21}^{(8)}K_2^+(u+\eta)R_{12}(-2u-\eta)K_1^+(u)]\bar P_{12}^{(8)}\no\\[4pt] &&\hspace{12mm}\times \bar P_{12}^{(8)}[\bar P_{12}^{(8)} T_1(u)T_2(u+\eta)]\bar P_{12}^{(8)}\no\\[4pt] &&\hspace{12mm}\times\bar P_{12}^{(8)}[\bar P_{12}^{(8)}K_1^-(u) R_{21}(2u+\eta)K_2^-(u+\eta)]\bar P_{21}^{(8)}\no\\[4pt] &&\hspace{12mm}\times\bar P_{21}^{(8)}[\bar P_{12}^{(8)}\hat T_1(u)\hat T_2(u+\eta)]\bar P_{21}^{(8)}\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{\bar P_{21}^{(8)}[K_1^+(u)R_{21}(-2u-\eta)K_2^+(u+\eta)\bar P_{12}^{(8)}]\bar P_{12}^{(8)}\no\\[4pt] &&\hspace{12mm}\times \bar P_{12}^{(8)}[T_2(u+\eta)T_1(u)\bar P_{12}^{(8)}]\bar P_{12}^{(8)}\no\\[4pt] &&\hspace{12mm}\times\bar P_{12}^{(8)}[K_2^-(u+\eta)R_{12}(2u+\eta)K_1^-(u)\bar P_{21}^{(8)}]\bar P_{21}^{(8)}\no\\[4pt] &&\hspace{12mm}\times\bar P_{21}^{(8)}[\hat T_2(u+\eta)\hat T_1(u)\bar P_{12}^{(8)}]\bar P_{21}^{(8)}\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}str_{12}\{[\bar P_{21}^{(8)}K_1^+(u)R_{21}(-2u-\eta)K_2^+(u+\eta)\bar P_{12}^{(8)}]\no\\[4pt] &&\hspace{12mm}\times [\bar P_{12}^{(8)}T_2(u+\eta)T_1(u)\bar P_{12}^{(8)}]\no\\[4pt] &&\hspace{12mm}\times[\bar P_{12}^{(8)}K_2^-(u+\eta)R_{12}(2u+\eta)K_1^-(u)\bar P_{21}^{(8)}]\no\\[4pt] &&\hspace{12mm}\times[\bar P_{21}^{(8)}\hat T_2(u+\eta)\hat T_1(u)\bar P_{21}^{(8)}]\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}(u+\eta)u\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\no\\[4pt] &&\hspace{12mm}\times str_{\langle12\rangle^\prime}\{K_{\langle12\rangle^\prime}^{+}(u+\frac12\eta)T_{\langle12\rangle^\prime}(u+\frac 12 \eta)K_{\langle12\rangle^\prime}^{-}(u+\frac12\eta) \hat T_{\langle12\rangle^\prime}(u+\frac12\eta)\}\no\\[4pt] &&\hspace{8mm}=[\rho_2(2u+\eta)]^{-1}(u+\eta)u\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j) t^{(2)}(u+\frac12\eta). 
\eea In the derivation, we have used the relations \bea &&str_{12}\{A_{12}^{st_1}B_{12}^{st_1}\}=str_{12}\{A_{12}^{st_2}B_{12}^{st_2}\}=str_{12}\{A_{12}B_{12}\},\no\\[4pt] &&\hat T_1(u)R_{21}(2u+\eta)T_2(u+\eta)=T_2(u+\eta)R_{21}(2u+\eta)\hat T_1(u),\no\\[4pt] &&P_{12}^{(8)}+\bar P_{12}^{(8)}=1,~~P_{21}^{(8)}+\bar P_{21}^{(8)}=1,~~ P_{12}^{(8)}\bar P_{12}^{(8)}=P_{21}^{(8)}\bar P_{21}^{(8)}=0,~~ P_{12}^{(8)}=P_{21}^{(8)},~~\bar P_{12}^{(8)}=\bar P_{21}^{(8)}.\no \eea In addition, \bea && t^{(1)}(u+\frac{1}{2}\eta)t(u-\eta)=str_{\bar 12}\{K_{\bar 1}^{+}(u+\frac{1}{2}\eta)T_{\bar 1}(u+\frac{1}{2}\eta) K_{\bar 1}^{-}(u+\frac{1}{2}\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta)\no\\[4pt] &&\hspace{12mm}\times [T_2(u-\eta)K_2^-(u-\eta)\hat T_2(u-\eta)]^{st_2}[K_2^+(u-\eta)]^{st_2}\}\no\\[4pt] &&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{K_{\bar 1}^{+}(u+\frac{1}{2}\eta)T_{\bar 1} (u+\frac{1}{2}\eta)K_{\bar 1}^{-}(u+\frac{1}{2}\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta)\no\\[4pt] &&\hspace{12mm}\times [T_2(u-\eta)K_2^-(u-\eta)\hat T_2(u-\eta)]^{st_2}[R_{2\bar 1}(2u-\frac{1}{2}\eta)]^{st_2}\no\\[4pt] &&\hspace{12mm}\times[R_{\bar 12}(-2u+\frac{1}{2}\eta)]^{st_2}[K_2^+(u-\eta)]^{st_2}\}\no\\[4pt] &&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta)K_{\bar 1}^{+}(u+\frac{1}{2}\eta)T_{\bar 1}(u+\frac{1}{2}\eta)\no\\[4pt] &&\hspace{12mm}\times T_2(u-\eta)K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta) \hat T_2(u-\eta)\}\no\\[4pt] &&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{(P_{\bar 12}^{(20)}+\tilde P^{(12)}_{\bar 12})K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta) K_{\bar 1}^{+}(u+\frac{1}{2}\eta)\no\\[4pt] &&\hspace{12mm}\times (P_{2\bar 1}^{(20)}+\tilde P^{(12)}_{2\bar 1})T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)(P_{2\bar 1}^{(20)}+\tilde P^{(12)}_{2\bar 1})\no\\[4pt] &&\hspace{12mm}\times K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)(P_{\bar 12}^{(20)}+\tilde P^{(12)}_{\bar 12})\no\\[4pt] &&\hspace{12mm}\times \hat T_{\bar 1}(u+\frac{1}{2}\eta) \hat T_2(u-\eta)(P_{\bar 12}^{(20)}+\tilde P^{(12)}_{\bar 12})\}\no\\[4pt] &&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{P_{\bar 12}^{(20)}K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta)K_{\bar 1}^{+}(u+\frac{1}{2}\eta)P_{2\bar 1}^{(20)}\no\\[4pt] &&\hspace{12mm}\times T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)P_{2\bar 1}^{(20)}K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)\no\\[4pt] &&\hspace{12mm}\times P_{\bar 12}^{(20)}\hat T_{\bar 1}(u+\frac{1}{2}\eta) \hat T_2(u-\eta)P_{\bar 12}^{(20)}\}\no\\[4pt] &&\hspace{12mm}+\rho_4^{-1}(2u-\frac{1}{2}\eta)str_{\bar 12}\{\tilde P_{\bar 12}^{(12)}K_2^+(u-\eta)R_{\bar 12}(-2u+\frac{1}{2}\eta)K_{\bar 1}^{+}(u+\frac{1}{2}\eta) \tilde P_{2\bar 1}^{(12)}\no\\[4pt] &&\hspace{12mm}\times T_{\bar 1}(u+\frac{1}{2}\eta)T_2(u-\eta)\tilde P_{2\bar 1}^{(12)}K_{\bar 1}^{-}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)K_2^{-}(u-\eta)\no\\[4pt] &&\hspace{12mm}\times \tilde P_{\bar 12}^{(12)}\hat T_{\bar 1}(u+\frac{1}{2}\eta) \hat T_2(u-\eta)\tilde P_{\bar 12}^{(12)}\}\no\\[4pt] &&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j-\eta)(u+\theta_j-\eta)\no\\[4pt] &&\hspace{12mm}\times str_{\langle\bar 12\rangle}\{K_{\langle\bar 12\rangle}^{+}(u)T_{\langle\bar 12\rangle}(u) K_{\langle\bar 12\rangle}^{-}(u) \hat T_{\langle\bar 12\rangle}(u)\}\no\\[4pt] 
&&\hspace{8mm}+\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j) \no\\[4pt] &&\hspace{12mm}\times str_{\overline{\langle\bar 12\rangle}}\{ K_{\overline{\langle\bar 12\rangle}}^{+}(u)T_{\overline{\langle\bar 12\rangle}}(u) K_{\overline{\langle\bar 12\rangle}}^{-}(u) \hat T_{\overline{\langle\bar 12\rangle}}(u)\}\no\\[4pt] &&\hspace{8mm}=\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j-\eta)(u+\theta_j-\eta)\tilde t^{(1)}(u)\no\\ &&\hspace{12mm}+\rho_4^{-1}(2u-\frac{1}{2}\eta)(2u+\eta)(u-\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\bar{t}^{(1)}(u),\label{tan}\\ && t^{(2)}(u-\frac{1}{2}\eta)t(u+\eta)=\rho_6^{-1}(2u+\frac{1}{2}\eta)str_{\bar 1^\prime 2}\{K_2^+(u+\eta)R_{\bar 1^\prime 2}(-2u-\frac{1}{2}\eta)\no\\[4pt] &&\hspace{12mm}\times K_{\bar 1^\prime }^{+}(u-\frac{1}{2}\eta)T_{\bar 1^\prime }(u-\frac{1}{2}\eta) T_2(u+\eta)K_{\bar 1^\prime }^{-}(u-\frac{1}{2}\eta)\no\\[4pt] &&\hspace{12mm}\times R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)K_2^{-}(u+\eta)\hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta) \hat T_2(u+\eta)\}\no\\[4pt] &&\hspace{8mm}=\rho_6^{-1}(2u+\frac{1}{2}\eta)str_{\bar 1^\prime 2}\{(P_{\bar 1^\prime 2}^{(20)}+\tilde P^{(12)}_{\bar 1^\prime 2})K_2^+(u+\eta)R_{\bar 1^\prime 2}(-2u-\frac{1}{2}\eta)\no\\[4pt] &&\hspace{12mm}\times K_{\bar 1^\prime }^{+}(u-\frac{1}{2}\eta)(P_{2\bar 1^\prime }^{(20)}+\tilde P^{(12)}_{2\bar 1^\prime })T_{\bar 1^\prime }(u-\frac{1}{2}\eta) T_2(u+\eta)(P_{2\bar 1^\prime }^{(20)}+\tilde P^{(12)}_{2\bar 1^\prime })\no\\[4pt] &&\hspace{12mm}\times K_{\bar 1^\prime }^{-}(u-\frac{1}{2}\eta)R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)K_2^{-}(u+\eta)(P_{\bar 1^\prime 2}^{(20)} +\tilde P^{(12)}_{\bar 1^\prime 2})\no\\[4pt] &&\hspace{12mm}\times \hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta) \hat T_2(u+\eta)(P_{\bar 1^\prime 2}^{(20)}+\tilde P^{(12)}_{\bar 1^\prime 2})\}\no\\[4pt] &&\hspace{8mm}=\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta)\no\\[4pt] &&\hspace{12mm}\times str_{\langle \bar 1^\prime 2\rangle}\{ K_{\langle\bar 1^\prime 2\rangle}^{+}(u)T_{\langle\bar 1^\prime 2\rangle}(u) K_{\langle\bar 1^\prime 2\rangle}^{-}(u) \hat T_{\langle\bar 1^\prime 2\rangle}(u)\}\no\\[4pt] &&\hspace{12mm}+\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\no\\[4pt] &&\hspace{12mm}\times str_{\overline{\langle \bar 1^\prime 2\rangle}}\{K_{\overline{\langle \bar 1^\prime 2\rangle}}^{+}(u)T_{\overline{\langle \bar 1^\prime 2\rangle}}(u) K_{\overline{\langle \bar 1^\prime 2\rangle}}^{-}(u)\hat T_{\overline{\langle \bar 1^\prime 2\rangle}}(u)\}\no\\ &&\hspace{8mm}=\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j+\eta)(u+\theta_j+\eta)\tilde t^{(2)}(u)\no\\ &&\hspace{12mm}+\rho_6^{-1}(2u+\frac{1}{2}\eta)(2u-\eta)(u+\eta)\prod_{j=1}^{N}(u-\theta_j)(u+\theta_j)\bar{t}^{(2)}(u),\label{tan-09} \eea where we have used the relations \bea &&\hat T_{\bar 1}(u+\frac{1}{2}\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)T_2(u-\eta)=T_2(u-\eta)R_{2\bar 1}(2u-\frac{1}{2}\eta)\hat T_{\bar 1}(u+\frac{1}{2}\eta),\no\\[4pt] &&P_{\bar 12}^{(20)}+\tilde P_{\bar 12}^{(12)}=1,~~P_{2\bar 1}^{(20)}+\tilde P_{2\bar 1}^{(12)}=1,~~P_{\bar 12}^{(20)}\tilde P_{\bar 12}^{(12)}=0,~~P_{2\bar 1}^{(20)}\tilde P_{2\bar 1}^{(12)}=0, \no \\[4pt] &&\hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta)R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)T_2(u+\eta)=T_2(u+\eta)R_{2\bar 1^\prime }(2u+\frac{1}{2}\eta)\hat T_{\bar 1^\prime }(u-\frac{1}{2}\eta),\no\\[4pt] &&P_{\bar 1^\prime
2}^{(20)}+\tilde P_{\bar 1^\prime 2}^{(12)}=1,~~P_{2\bar 1^\prime }^{(20)}+\tilde P_{2\bar 1^\prime }^{(12)}=1, ~~P_{\bar 1^\prime 2}^{(20)}\tilde P_{\bar 1^\prime 2}^{(12)}=0,~~P_{2\bar 1^\prime }^{(20)}\tilde P_{2\bar 1^\prime }^{(12)}=0.\no \eea Evaluating at the special points introduced in the main text, we have \bea &&t(\pm\theta_j-\eta) t^{(1)}(\pm\theta_j+\frac{1}{2}\eta)=-\frac{1}{2}\frac{(\pm\theta_j+\frac{1}{2}\eta) (\pm\theta_j-\eta)}{(\pm\theta_j)(\pm\theta_j-\frac{1}{2}\eta)}\no\\ &&\hspace{20mm}\times\prod_{l=1}^{N}(\pm\theta_j-\theta_l-\eta)(\pm\theta_j+\theta_l-\eta)\tilde t^{(1)}(\pm\theta_j),\label{open-ope-1}\\ &&t(\pm\theta_j+\eta) t^{(2)}(\pm\theta_j-\frac{1}{2}\eta)=-\frac{1}{2}\frac{(\pm\theta_j-\frac{1}{2}\eta) (\pm\theta_j+\eta)}{(\pm\theta_j)(\pm\theta_j+\frac{1}{2}\eta)}\no\\ &&\hspace{20mm}\times\prod_{l=1}^{N}(\pm\theta_j-\theta_l+\eta)(\pm\theta_j+\theta_l+\eta)\tilde t^{(2)}(\pm\theta_j), \quad j=1,2,\cdots,N.\label{open-ope-2} \eea With the help of Eqs.~(\ref{id0}), (\ref{open-ope-1}) and (\ref{open-ope-2}), we can derive the relation (\ref{openident3}). Finally, we have proven the identities (\ref{openident1})-(\ref{openident3}).
\section{Introduction}\label{sec:introduction} The dilemma of exploration versus exploitation is common in scenarios involving decision-making in unknown environments. In these contexts, exploration means learning the environment, while exploitation means taking the empirically computed best actions. When finite-time performance is of concern, i.e., in scenarios in which one cannot learn indefinitely, ensuring a good balance of exploration and exploitation is the key to good performance. The MAB problem and its variations are prototypical models for these settings, and they are widely used in many areas such as network routing, recommendation systems and resource allocation; see~\cite[Chapter 1]{lattimore2020bandit}. The stochastic MAB problem was originally proposed by Robbins~\cite{robbins1952}. In this problem, at each time, an agent chooses an arm from a set of $K$ arms and receives the associated reward. The reward at each arm is a stationary random variable with an unknown mean. The objective is to design a policy that maximizes the expected cumulative reward or, equivalently, minimizes the \emph{expected cumulative regret}, defined as the expected cumulative difference between the maximum mean reward and the reward obtained using the policy. The worst-case regret is defined as the supremum of the expected cumulative regret computed over a class of reward distributions, e.g., sub-Gaussian distributions or distributions with bounded support. The \emph{minimax regret} is defined as the minimum worst-case regret, where the minimum is computed over all policies. By construction, the worst-case regret uses minimal information about the underlying distribution, and the associated regret bounds are called \emph{distribution-free bounds}. In contrast, the standard regret bounds depend on the difference between the mean rewards of the optimal and suboptimal arms, and the corresponding bounds are referred to as \emph{distribution-dependent bounds}. In their seminal work, Lai and Robbins~\cite{TLL-HR:85} establish that the expected cumulative regret admits an asymptotic distribution-dependent lower bound that is a logarithmic function of the time horizon $T$. Here, asymptotic refers to the limit $T \to +\infty$. They also propose a general method for constructing Upper Confidence Bound (UCB) based policies that attain the lower bound asymptotically. By assuming rewards to be bounded, or more generally sub-Gaussian, several subsequent works design simpler algorithms with finite-time performance guarantees, e.g., the UCB1 algorithm by Auer et al.~\cite{PA-NCB-PF:02}. By using Kullback-Leibler (KL) divergence based upper confidence bounds, Garivier and Capp{\'e}~\cite{AG-OC:11} designed KL-UCB, which is proved to have efficient finite-time performance as well as asymptotic optimality. In the worst-case setting, the lower and upper bounds are distribution-free. Assuming the rewards are bounded, Audibert and Bubeck~\cite{MOSS} establish an $\Omega(\sqrt{KT})$ lower bound on the minimax regret. They also studied a modified UCB algorithm called Minimax Optimal Strategy in the Stochastic case (MOSS) and proved that it achieves an order-optimal worst-case regret while maintaining a logarithmic distribution-dependent regret. Degenne and Perchet~\cite{moss_anytime} extend MOSS to an anytime version called MOSS-anytime. The assumption that rewards are bounded or sub-Gaussian is common; it gives the sample mean exponential concentration and simplifies the MAB problem.
However, in many applications, such as social networks~\cite{albert2002statistical} and financial markets~\cite{vidyasagar2010law}, the rewards are heavy-tailed. For the standard stochastic MAB problem, Bubeck et al.~\cite{bubeck2013bandits} relax the sub-Gaussian assumption by only assuming the rewards to have finite moments of order $ 1+\epsilon$ for some $\epsilon \in (0, 1]$. They present the robust UCB algorithm and show that it attains an upper bound on the cumulative regret that is within a constant factor of the distribution-dependent lower bound in the heavy-tailed setting. However, the solutions provided in~\cite{bubeck2013bandits} do not provably achieve an order-optimal worst-case regret; specifically, the factor of suboptimality is a poly-logarithmic function of the time horizon. In this paper, we study the minimax heavy-tailed bandit problem, in which the reward distributions admit moments of order $1+\epsilon$, with $\epsilon \in (0,1]$. We propose and analyze the Robust MOSS algorithm and show that it achieves a worst-case regret matching the lower bound while maintaining a logarithmic distribution-dependent regret. To the best of our knowledge, Robust MOSS is the first algorithm to achieve an order-optimal worst-case regret for heavy-tailed bandits. Our results build on techniques in~\cite{MOSS} and~\cite{bubeck2013bandits}, and augment them with new analysis based on maximal Bennett inequalities. The remaining paper is organized as follows. We describe the minimax heavy-tailed MAB problem and present some background material in Section~\ref{sec:background}. We present and analyze the Robust MOSS algorithm in Sections~\ref{sec:algo} and~\ref{sec:analysis}, respectively, and numerically compare it with the state of the art in Section~\ref{sec:simulation}. We conclude in Section~\ref{sec:conclusions}. \section{Background \& Problem Description}\label{sec:background} \subsection{Stochastic MAB Problem} In a stochastic MAB problem, an agent chooses an arm $\varphi_t$ from the set of $K$ arms $\until{K}$ at each time $t \in \until{T}$ and receives the associated reward. The reward at each arm $k$ is drawn from an unknown distribution $f_k$ with unknown mean $\mu_k$. Let the maximum mean reward among all arms be $\mu^*$. We use $\Delta_k =\mu^*-\mu_k$ to measure the suboptimality of arm $k$. The objective is to maximize the expected cumulative reward or, equivalently, to minimize the \emph{expected cumulative regret} defined by \[R_T : = \mathbb{E}\, \Bigg[\sum_{t=1}^T \left(\mu^* - X_{\varphi_t} \right)\Bigg] = \mathbb{E}\, \Bigg[\sum_{t=1}^T \Delta_{\varphi_t} \Bigg], \] which is the difference between the expected cumulative reward obtained by always selecting the arm with the maximum mean reward $\mu^*$ and that obtained by selecting arms $\varphi_1, \ldots, \varphi_T$. The expected cumulative regret $R_T$ is implicitly defined for a fixed set of reward distributions $\{f_1, \ldots, f_K\}$. The worst-case regret is the expected cumulative regret for the worst possible choice of reward distributions. In particular, \[ \supscr{R_T}{worst} = \sup_{\{f_1, \ldots, f_K\}} R_T. \] The regret associated with the policy that minimizes the above worst-case regret is called the \emph{minimax regret}. \subsection{Problem Description: Heavy-tailed Stochastic MAB} In this paper, we study the heavy-tailed stochastic MAB problem, which is the stochastic MAB problem with the following assumptions. \begin{assumption}\label{ass1} Let $X$ be a random reward drawn from any arm $k \in \until{K}$.
There exists a constant $u \in \mathbb{R}_{>0}$ such that $\mathbb{E}\, \big[\abs{X}^{1+\epsilon}\big] \leq u^{1+\epsilon}$ for some $\epsilon \in (0,1]$. \end{assumption} \begin{assumption}\label{ass2} Parameters $T$, $K$, $u$ and $\epsilon$ are known. \end{assumption} \subsection{MOSS Algorithm for Worst-Case Regret} We now present the MOSS algorithm proposed in~\cite{MOSS}. The MOSS algorithm is designed for the stochastic MAB problem with bounded rewards, and in this paper we extend it to design the Robust MOSS algorithm for heavy-tailed bandits. Suppose that arm $k$ has been sampled $n_k(t)$ times until time $t-1$ and that $ \bar \mu^k_{n_k(t)}$ is the associated empirical mean. Then, at time $t$, MOSS picks the arm that maximizes the following UCB: \[g^k_{n_k(t)} = \bar \mu^k_{n_k(t)} + \sqrt{\frac{ \max \left(\ln \left(\frac{T}{K n_k(t)}\right), 0\right)}{n_k(t)}}.\] If the rewards from the arms have bounded support $[0,1]$, then the worst-case regret for MOSS satisfies $\supscr{R_T}{worst} \leq 49\sqrt{KT}$, which is order-optimal~\cite{MOSS}. Meanwhile, MOSS maintains a logarithmic distribution-dependent regret bound. \subsection{A Lower Bound for Heavy-tailed Minimax Regret} We now present the lower bound on the minimax regret for the heavy-tailed bandit problem derived in~\cite{bubeck2013bandits}. \begin{theorem}[{\cite[Th. 2]{bubeck2013bandits}}] For any fixed time horizon $T$ and the stochastic MAB problem under Assumptions~\ref{ass1} and~\ref{ass2} with $u=1$, \[\supscr{R_T}{worst} \geq 0.01 K^{\frac{\epsilon}{1+\epsilon}} T^{\frac{1}{1+\epsilon}}. \] \end{theorem} \begin{remark} Since $R_T$ scales with $u$, the lower bound for the heavy-tailed bandit is $\Omega \big(u K^{\frac{\epsilon}{1+\epsilon}} T^{\frac{1}{1+\epsilon}}\big)$. This lower bound also indicates that, within a finite horizon $T$, it is almost impossible to differentiate the optimal arm from arm $k$ if $\Delta_k \in O \big(u (K/T)^{\frac{\epsilon}{1+\epsilon}} \big)$. As a special case, rewards with bounded support $[0,1]$ correspond to $\epsilon=1$ and $u=1$; then the lower bound $\Omega(\sqrt{KT})$ matches the regret upper bound achieved by MOSS. \end{remark} \section{A Robust Minimax Policy}\label{sec:algo} To deal with heavy-tailed reward distributions, we replace the empirical mean with a saturated empirical mean. Although the saturated empirical mean is a biased estimator, it has better concentration properties. We construct a novel UCB index to evaluate the arms, and at each time slot the arm with the maximum UCB index is picked. \subsection{Robust MOSS} In Robust MOSS, we consider a robust mean estimator called the saturated empirical mean, which is formally defined in the following subsection. Let $n_k(t)$ be the number of times that arm $k$ has been selected until time $t-1$. At time $t$, let $\hat \mu_{n_k(t)}^k$ be the saturated empirical mean reward computed from the $n_k(t)$ samples at arm $k$. Robust MOSS initializes by selecting each arm once and subsequently, at each time $t$, selects the arm that maximizes the following UCB: \[ g^k_{n_k(t)} = \hat \mu^k_{n_k(t)} + (1+\eta)c_{n_k(t)}, \] where $\eta >0$ is an appropriate constant, $c_{n_k(t)} = u \times \big[\phi(n_k(t))\big]^{\frac{\epsilon}{1+\epsilon}}$ and \[ \phi(n) = \frac{\ln_+ \big(\frac{T}{K n}\big)}{n}, \] where $\ln_+(x) := \max (\ln x, 1)$. Note that both $\phi(n)$ and $c_n$ are monotonically decreasing in $n$; a schematic implementation of the resulting index policy is sketched below.
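The following Python code is a minimal illustrative sketch, not the authors' implementation: helper names such as \texttt{choose\_eta} are ours, $T$, $K$, $u$ and $\epsilon$ are assumed known (Assumption~\ref{ass2}), and the saturated empirical mean $\hat\mu^k_n$ is implemented as defined in the next subsection. The constant $\eta$ is chosen as the smallest value satisfying $\eta\psi(2\eta/a)\geq 2a$, the condition used in the analysis of Section~\ref{sec:analysis}.
\begin{verbatim}
import math
import numpy as np

def psi(x):
    # psi(x) = (1 + 1/x) ln(1 + x) - 1, used in the analysis below
    return (1.0 + 1.0 / x) * math.log(1.0 + x) - 1.0

def choose_eta(a, lo=1e-6, hi=1e6):
    # smallest eta with eta * psi(2 eta / a) >= 2a, by bisection
    # (the left-hand side is increasing in eta)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if mid * psi(2 * mid / a) >= 2 * a else (mid, hi)
    return hi

def ln_plus(x):
    return max(math.log(x), 1.0)

def phi(n, T, K):
    return ln_plus(T / (K * n)) / n

def saturated_mean(rewards, T, K, u, eps, a):
    # saturate each sample at B_m before averaging (next subsection)
    m = len(rewards)
    h = a ** (math.floor(math.log(m, a)) + 1)          # h(m) >= m
    B = u * phi(h, T, K) ** (-1.0 / (1.0 + eps))       # saturation point
    x = np.asarray(rewards, dtype=float)
    return float(np.mean(np.sign(x) * np.minimum(np.abs(x), B)))

def robust_moss_index(rewards, T, K, u, eps, a, eta):
    n = len(rewards)
    c_n = u * phi(n, T, K) ** (eps / (1.0 + eps))      # exploration bonus
    return saturated_mean(rewards, T, K, u, eps, a) + (1.0 + eta) * c_n
\end{verbatim}
After pulling each arm once, the policy plays, at every time $t$, an arm maximizing \texttt{robust\_moss\_index} over the rewards observed so far; the sketch recomputes the estimator from scratch and omits obvious incremental updates.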
\subsection{Saturated Empirical Mean} The robust saturated empirical mean is similar to the truncated empirical mean used in~\cite{bubeck2013bandits}, which was employed to extend UCB1 to achieve a logarithmic distribution-dependent regret for the heavy-tailed MAB problem. Let $\seqdef{X_i}{i\in \until{m}}$ be a sequence of i.i.d. random variables with mean $\mu$ and $\mathbb{E}\, \big[\abs{X_i}^{1+\epsilon}\big] \leq u^{1+\epsilon}$, where $u>0$. Pick $a >1$ and let $h(m) = a^{\floor{\log_a \left(m\right)}+1}$, so that $h(m) \geq m$. Define the saturation point $B_m$ by \[ B_m := u \times \big[\phi\big(h(m)\big)\big]^{-\frac{1}{1+\epsilon}}.\] Then, the saturated empirical mean estimator is defined by \begin{equation}\label{def: sat_mean} \hat \mu_m := \frac{1}{m} \sum_{i=1}^m \sat (X_i,B_m), \end{equation} where $\sat (X_i,B_m) := \sign(X_i)\min \big\{\abs{X_i}, B_m \big\}.$ Define $d_i: = \sat(X_i,B_m)-\mathbb{E}\, [\sat(X_i,B_m)]$. The following lemma examines the estimator bias and provides an upper bound on the error of the saturated empirical mean. \begin{lemma}[Error of saturated empirical mean]\label{lemma: estimator_error} For an i.i.d. sequence of random variables $\seqdef{X_i}{i\in\until{m}}$ such that $\mathbb{E}\,[X_i] =\mu $ and $\mathbb{E}\, \big[X_i^{1+\epsilon}\big] \leq u^{1+\epsilon}$, the saturated empirical mean~\eqref{def: sat_mean} satisfies \[ \Bigg | {\hat \mu_m - \mu - \frac{1}{m} \sum_{i=1}^{m} d_i } \Bigg | \leq \frac{u^{1+\epsilon}}{B_m^\epsilon}. \] \end{lemma} \begin{proof} Since $\mu = \mathbb{E}\, \Big[ X_i \big(\boldsymbol{1}_{ \left\{ \abs{X_i} \leq B_m\right\} } + \boldsymbol{1}_{ \left\{ \abs{X_i} > B_m\right\} }\big)\Big]$, the error of the estimator $\hat \mu_m$ satisfies \begin{align*} \hat \mu_m - \mu = & \frac{1}{m} \sum_{i=1}^{m} \left(\sat(X_i,B_m) -\mu\right) \\ = & \frac{1}{m} \sum_{i=1}^{m} d_i + \frac{1}{m}\sum_{i=1}^{m} \left(\mathbb{E}\, [\sat(X_i,B_m)] - \mu\right), \end{align*} where the second term is the bias of $\hat \mu_m $. We now compute an upper bound on the bias: \begin{align*} \abs{\mathbb{E}\, [\sat(X_i,B_m)] - \mu} &\leq \mathbb{E}\, \left[ \abs{X_i} \boldsymbol{1}_{ \left\{ \abs{X_i} > B_m\right\} } \right]\\ &\leq \mathbb{E}\, \left[ \frac{\abs{X_i}^{1+\epsilon}}{(B_m)^\epsilon} \right] \leq \frac{u^{1+\epsilon}}{(B_m)^{\epsilon}}, \end{align*} which concludes the proof. \end{proof} We now establish properties of $d_i$. \begin{lemma}[Properties of $d_i$]\label{prop: d_i} For any $i\in \until{m}$, $d_i$ satisfies (i) $\abs{d_i}\leq 2 B_m$ and (ii) $\mathbb{E}\,[d_i^2] \leq u^{1+\epsilon}B_m^{1-\epsilon}$. \end{lemma} \begin{proof} Property (i) follows immediately from the definition of $d_i$, and property (ii) follows from \medskip $\qquad \displaystyle \mathbb{E}\,[d_i^2] \leq \mathbb{E}\,\big[\sat^2(X_i,B_m)\big] \leq \mathbb{E}\,\big[\abs{X_i}^{1+\epsilon}B_m^{1-\epsilon}\big] \leq u^{1+\epsilon}B_m^{1-\epsilon}.$ \end{proof} \section{Analysis of Robust MOSS}\label{sec:analysis} In this section, we analyze Robust MOSS to provide both distribution-free and distribution-dependent regret bounds. \subsection{Properties of the Saturated Empirical Mean Estimator} To derive the concentration property of the saturated empirical mean, we use a maximal Bennett-type inequality, stated in Lemma~\ref{max_inq_b}. \begin{lemma}[Maximal Bennett's inequality~{\cite{fan2012hoeffding}}] \label{max_inq_b} Let $\seqdef{X_i}{i\in \until{n}}$ be a sequence of bounded random variables with support $[-B,B]$, where $B\geq 0$.
Suppose that $\mathbb{E}\,[X_i |X_{1},\ldots,X_{i-1}] = \mu_i$ and $\Var[X_i|X_{1},\ldots,X_{i-1}] \leq v$. Let $S_m = \sum_{i=1}^{m} (X_i -\mu_i) $ for any $m\in \until{n}$. Then, for any $\delta \geq 0$, \begin{align*} &\mathbb{P}\left( \exists {m \in \until{n}}: S_m \geq \delta \right) \leq \exp \left( -\frac{\delta}{B}\psi \left (\frac{B\delta}{n v} \right) \right), \\ &\mathbb{P}\left(\exists {m \in \until{n}}: S_m \leq -\delta\right) \leq \exp \left(-\frac{\delta}{B}\psi \left (\frac{B\delta}{n v} \right)\right), \end{align*} where $\psi(x) = (1+1/x)\ln(1+x)-1 $. \end{lemma} \begin{remark} For $x\in (0,\infty)$, the function $\psi(x)$ is monotonically increasing in $x$. \end{remark} Now, we establish an upper bound on the probability that the UCB underestimates the mean at arm $k$ by an amount $x$. \begin{lemma}\label{lemma: suff_sample} For any arm $k\in \until{K}$, any $t \in \left\{K+1,\ldots,T\right\}$ and any $x > 0$, if $\eta\psi(2\eta /a) \geq 2a$, then the probability of the event $\big \{ g^k_{n_k(t)} \leq \mu_k - x \big \}$ is no greater than \[\frac{K}{T} \frac{a}{\ln(a)} \Gamma\left(\frac{1}{\epsilon}+2\right) \left(\frac{\psi \left ( 2\eta /a \right)}{2a} \frac{x}{u}\right)^{-\frac{1+\epsilon}{\epsilon}} .\] \end{lemma} \begin{proof} It follows from Lemma~\ref{lemma: estimator_error} that \begin{align*} &\mathbb{P} \left(g^k_{n_k(t)} \leq \mu_k - x \right) \\ \leq& \mathbb{P} \left(\exists m \in \until{T}: \hat{\mu}^k_m + (1+\eta)c_m \leq \mu_k - x \right)\\ \leq& \mathbb{P} \bigg(\exists m \in \until{T}: \sum_{i=1}^{m} \frac{d_i^k}{m} \leq \frac{u^{1+\epsilon}}{B_m^\epsilon} - (1+\eta) c_m -x \bigg) \\ \leq& \mathbb{P} \bigg(\exists m \in \until{T}: \frac{1}{m} \sum_{i=1}^{m} d_i^k \leq -x - \eta c_m \bigg), \end{align*} where $d_i^k$ is defined analogously to $d_i$ for the i.i.d. reward sequence at arm $k$, and the last inequality is due to \begin{equation} \frac{u^{1+\epsilon}}{B_m^\epsilon} = u \big[\phi\big(h(m)\big)\big]^{\frac{\epsilon}{1+\epsilon}} \leq u \big[\phi(m)\big]^{\frac{\epsilon}{1+\epsilon}} = c_m. \label{ineq: bc} \end{equation} Recall that $a>1$. We apply a peeling argument~\cite[Sec 2.2]{bubeck2010bandits} with the geometric grid $ a^s \leq m < a^{s+1}$ over the time interval $\until{T}$. Since $c_m$ is monotonically decreasing in $m$, \begin{align*} & \mathbb{P} \bigg(\exists m \in \until{T}: \frac{1}{m} \sum_{i=1}^{m} d_i^k \leq -x - \eta c_m \bigg)\\ \leq &\sum_{s\geq 0} \mathbb{P}\bigg(\exists m \in [a^s, a^{s+1}) : \sum_{i=1}^{m} d_i^k \leq -a^s \left(x + \eta c_{a^{s+1}} \right) \bigg). \end{align*} Also notice that $B_m = B_{a^s} $ for all $m \in [a^s, a^{s+1})$.
Then, using the properties in Lemma~\ref{prop: d_i}, we apply Lemma~\ref{max_inq_b} to get \begin{align} &\sum_{s\geq 0} \mathbb{P}\bigg(\exists m \in [a^s, a^{s+1}) : \sum_{i=1}^{m} d_i^k \leq -a^s \left(x + \eta c_{a^{s+1}} \right) \bigg) \nonumber\\ \leq &\sum_{s\geq 0} \exp \left( -\frac{a^{s} \left(x+ \eta c_{a^{s+1}}\right)}{2 B_{a^{s}}}\psi \left (\frac{ 2 B_{a^{s}} \left(x + \eta c_{a^{s+1}}\right)}{a u^{1+\epsilon}B_{a^{s}}^{1-\epsilon}} \right) \right)\nonumber\\ &\left(\text{since } \psi(x) \text{ is monotonically increasing}\right)\nonumber\\ \leq & \sum_{s\geq 0} \exp \left( -\frac{a^{s} \left(x+ \eta c_{a^{s+1}}\right)}{2 B_{a^{s}}}\psi \left (\frac{ 2 \eta B_{a^{s}}^\epsilon c_{a^{s+1}}}{a u^{1+\epsilon}} \right) \right) \nonumber\\ &\left(\text{plug in } c_{a^{s+1}},\ B_{a^{s}} \text{ and use } h(a^s)=a^{s+1}\right) \nonumber \\ =& \sum_{s\geq 1} \exp \left( -a^{s}\left(\frac{ x}{B_{a^{s-1}}} + \eta \phi(a^s)\right) \frac{\psi \left ( 2\eta /a \right)}{2a} \right) \nonumber \\ &\left(\text{plug in } \phi(a^s) \text{ and use } \eta\psi(2\eta/a)\geq 2a,\ \ln_+ (y) \geq \ln (y) \right)\nonumber\\ \leq & \sum_{s\geq 1} \exp \left( -a^{s} \frac{ x}{B_{a^{s-1}}} \frac{\psi \left ( 2\eta / a \right)}{2a} \right) \frac{K}{T} a^s. \label{sum:1} \end{align} Let $b = {x\psi \left ( 2\eta / a \right)}/ (2au)$. Since $B_{a^{s-1}} \leq ua^{\frac{s}{1+\epsilon}}$, we have \begin{align*} \eqref{sum:1}\leq & \frac{K}{T} \sum_{s\geq 1} a^s \exp \left( -b a^{\frac{\epsilon s}{1+\epsilon}} \right) \\ \leq & \frac{K}{T} \int_{1}^{+\infty} a^y \exp\big(- b a^{\frac{(y-1)\epsilon}{1+\epsilon}}\big) dy \\ = & \frac{K}{T} a \int_{0}^{+\infty} a^y \exp\big(-b a^{\frac{y\epsilon}{1+\epsilon}}\big) dy \\ &\left(\text{where we set } z=b a^{\frac{y\epsilon}{1+\epsilon}} \right)\\ = & \frac{K}{T} \frac{a}{\ln(a)}\frac{1+\epsilon}{\epsilon} {b^{-\frac{1+\epsilon}{\epsilon}}} \int_{b}^{+\infty} z^{\frac{1+\epsilon}{\epsilon}-1} \exp\big(- z \big) dz \\ \leq & \frac{K}{T} \frac{a}{\ln(a)} \Gamma\left(\frac{1}{\epsilon}+2\right)b^{-\frac{1+\epsilon}{\epsilon}}, \end{align*} which concludes the proof. \end{proof} The following is a straightforward corollary of Lemma~\ref{lemma: suff_sample}. \begin{corollary}\label{corollary: suff_sample} For any arm $k\in \until{K}$, any $t \in \left\{K+1,\ldots,T\right\}$ and any $x > 0$, if $\eta\psi(2\eta/a) \geq 2a$, then the probability of the event $\big\{g^k_{n_k(t)}-2(1+\eta)c_{n_k(t)}\geq \mu_k + x\big\}$ satisfies the same bound as in Lemma~\ref{lemma: suff_sample}. \end{corollary} \subsection{Distribution-free Regret Bound} The distribution-free upper bound for Robust MOSS, which is the main result of this paper, is presented in this section. We show that the algorithm achieves an order-optimal worst-case regret. \begin{theorem}\label{theorem:minimax bound} For the heavy-tailed stochastic MAB problem with $K$ arms and time horizon $T$, if $\eta$ and $a$ are selected such that $\eta\psi(2\eta /a) \geq 2a$, then Robust MOSS satisfies \[ \supscr{R_T}{worst} \leq C u K^{\frac{\epsilon}{1+\epsilon}} (T/e)^{\frac{1}{1+\epsilon}} + 2uK,\] where $C = \Gamma\left(1/\epsilon + 2\right) \left[a/\left(6+3\eta\right)\right]^{\frac{1}{\epsilon}} \left[{3}/{\psi \left (6+ 3\eta \right)} \right]^{\frac{1+\epsilon}{\epsilon}} + \epsilon \Gamma\left({1}/{\epsilon}+2\right) \left(6+3\eta\right)^{-\frac{1}{\epsilon}} \big[{6a}/{\psi (2 \eta/a )} \big]^{\frac{1+\epsilon}{\epsilon}} a/\ln(a) + \left(6+3\eta\right) \big [e+(1+\epsilon) e^{\frac{-\epsilon}{1+\epsilon}}\big]$.
\end{theorem} \begin{remark} The parameters $a$ and $\eta$, as inputs to Robust MOSS, can be selected by minimizing the leading constant $C$ in the upper bound on the regret in Theorem~\ref{theorem:minimax bound}. We have found that selecting $a$ slightly larger than $1$ and selecting the smallest $\eta$ that satisfies $\eta\psi(2\eta /a) \geq 2a$ yields good performance. \end{remark} \begin{proof} Since both the UCB and the regret scale with $u$ defined in Assumption~\ref{ass1}, to simplify the expressions, we assume $u=1$. Also notice that Assumption~\ref{ass1} implies $\abs{\mu_k}\leq u$, so $\Delta_k\leq 2$ for any $k \in \until{K}$. In the following, any term with superscript or subscript ``$*$'' or ``$k$'' is with respect to the best arm or the $k$-th arm, respectively. The proof is divided into four steps. \noindent \textbf{Step 1:} We follow a decoupling technique inspired by the proof of the regret upper bound for MOSS~\cite{MOSS}. Define the set of $\delta$-bad arms $\mathcal{B}_\delta$ as \begin{equation} \label{def: badarm} \mathcal{B}_{\delta} := \setdef{k \in \until{K}}{\Delta_k > \delta}, \end{equation} where we assign $\delta = \left(6+3\eta\right)\left(e K/T\right)^{\frac{\epsilon}{1+\epsilon}}$. Thus, \begin{align} R_T &\leq T \delta + \sum_{k=1}^{K} \Delta_k +\mathbb{E}\, \Bigg[\sum_{t=K+1}^{T} \mathbf{1} {\{\varphi_t \in \mathcal{B}_\delta \}} \left(\Delta_{\varphi_t} - \delta\right) \Bigg] \nonumber \\ &\leq \! T \delta + 2K + \mathbb{E}\, \Bigg[\sum_{t=K+1}^{T} \mathbf{1} {\{\varphi_t \in \mathcal{B}_\delta \}} \left(\Delta_{\varphi_t} - \delta\right) \!\Bigg]. \label{sum: regret} \end{align} Furthermore, we make the following decomposition: \begin{align} & \sum_{t=K+1}^T \mathbf{1}{\{\varphi_t \in \mathcal{B}_{\delta} \}} \left(\Delta_{\varphi_t}-\delta\right) \nonumber \\ = & \sum_{t = K+1}^T \mathbf{1} {\left\{\varphi_t \in \mathcal{B}_{\delta}, g_{n^* (t)}^* \leq \mu^* - \frac{\Delta_{\varphi_t}}{3} \right \}} \left(\Delta_{\varphi_t}-\delta\right) \label{error: under estimate}\\ & +\sum_{t=K+1}^T \mathbf{1}{\left\{\varphi_t \in \mathcal{B}_{\delta}, g_{n^* (t)}^* > \mu^* - \frac{\Delta_{\varphi_t}}{3} \right \}} \left(\Delta_{\varphi_t}-\delta\right).\nonumber \end{align} Notice that~\eqref{error: under estimate} describes the regret from underestimating the optimal arm $*$. For the second summand, since $g^{\varphi_t}_{ n_{\varphi_t} (t)} \geq g_{n^* (t)}^*$, \begin{align} &\sum_{t = K+1}^T \mathbf{1} {\left\{\varphi_t \in \mathcal{B}_{\delta}, g_{n^* (t)}^* > \mu^* - \frac{\Delta_{\varphi_t}}{3} \right \}} \left(\Delta_{\varphi_t}-\delta\right) \nonumber \\ \leq & \sum_{t = K+1}^T \mathbf{1} {\left\{\varphi_t \in \mathcal{B}_{\delta}, g^{\varphi_t}_{n_{\varphi_t}(t)} > \mu_{\varphi_t} + \frac{2\Delta_{\varphi_t}}{3} \right \}} \Delta_{\varphi_t} \nonumber \\ = & \sum_{k \in \mathcal{B}_\delta} \sum_{t = K+1}^T \mathbf{1}{\left\{\varphi_t=k, g^k_{n_k(t)} > \mu_k + \frac{2\Delta_{k}}{3} \right\}} \Delta_k, \label{error: over estimate} \end{align} which characterizes the regret caused by overestimating $\delta$-bad arms. \noindent \textbf{Step 2:} In this step, we bound the expectation of~\eqref{error: under estimate}. When the event $\left\{\varphi_t \in \mathcal{B}_{\delta}, g_{n^* (t)}^* \leq \mu^* - \Delta_{\varphi_t}/3 \right\}$ happens, we know \[\Delta_{\varphi_t} \leq 3\mu^* - 3g^*_{n^*(t)} \text{ and } g^*_{n^*(t)} < \mu^* - \frac{\delta}{3}.
\] Thus, we get \begin{align*} &\mathbf{1}{\left\{\varphi_t \in \mathcal{B}_{\delta}, g_{n^* (t)}^* \leq \mu^* - \frac{\Delta_{\varphi_t}}{3} \right \}} (\Delta_{\varphi_t}-\delta) \\ \leq &\mathbf{1} {\left\{ g_{n^*(t)}^* < \mu^* - \frac{\delta}{3} \right \}} \times \big(3\mu^* - 3 g_{n^*(t)}^* - \delta \big)=:Y_t. \end{align*} Since $Y_t$ is a nonnegative random variable, its expected value can be computed from its tail probabilities alone: \begin{align*} \mathbb{E}\, \left[Y_t\right] & = \int_{0}^{+\infty} \mathbb{P} \left( Y_t>x \right) dx \\ & \leq \int_{0}^{+\infty} \mathbb{P} \left( 3\mu^* - 3 g_{n^*(t)}^* -\delta > x \right) dx \\ &= \int_{\delta}^{+\infty} \mathbb{P} \Big( \mu^* - g_{n^*(t)}^* > \frac{x}{3} \Big) dx. \end{align*} Then we apply Lemma~\ref{lemma: suff_sample} at the optimal arm $*$ to get \[\mathbb{E}\,[Y_t] \leq \frac{K C_1}{T} \int_{\delta}^{+\infty} \frac{1}{\epsilon} x^{-\frac{1+\epsilon}{\epsilon}} dx = \frac{K C_1}{T\delta^{\frac{1}{\epsilon}}}, \] where $C_1 = \epsilon \Gamma\left({1}/{\epsilon}+2\right) \big[{6a}/{\psi (2 \eta /a )} \big]^{\frac{1+\epsilon}{\epsilon}} a/\ln(a)$. We conclude this step with \[\mathbb{E}\,[\eqref{error: under estimate}]\leq \sum_{t = K+1}^T \mathbb{E}\,[Y_t] \leq C_1 K\delta^{-\frac{1}{\epsilon}}. \] \noindent \textbf{Step 3:} In this step, we bound the expectation of~\eqref{error: over estimate}. For each arm $k \in \mathcal{B}_\delta$, \begin{align} & \sum_{t = K+1}^T \mathbf{1}{ \left\{\varphi_t=k, g_{n_{k}(t)}^k \geq \mu_k + \frac{2\Delta_{k}}{3} \right\}} \nonumber \\ = & \sum_{t = K+1}^T \sum_{m=1}^{t-K} \mathbf{1} \left\{\varphi_t=k, n_{k}(t) = m\right \} \mathbf{1} \left\{ g_m^k \geq \mu_k + \frac{2\Delta_k}{3} \right\} \nonumber \\ = & \sum_{m = 1}^{T-K} \mathbf{1} \left\{ g_m^k \geq \mu_k + \frac{2\Delta_k}{3} \right\} \! \sum_{t=m+K}^{T} \!\!\mathbf{1} \left\{\varphi_t=k, n_{k}(t) = m\right \} \nonumber \\ \leq & \sum_{m = 1}^T \mathbf{1} \left\{ g_m^k \geq \mu_k + \frac{2\Delta_k}{3} \right\} \nonumber \\ \leq & \sum_{m=1}^{T} \mathbf{1} \Bigg\{ \frac{1}{m} \sum_{i=1}^{m} d_i^k \geq \frac{2\Delta_k}{3} - (2+\eta) c_m \Bigg\}, \label{ineq: 6} \end{align} where in the last inequality we apply Lemma~\ref{lemma: estimator_error} and use the fact that ${u^{1+\epsilon}}/{B_m^\epsilon} \leq c_m$ from~\eqref{ineq: bc}. We set \[l_k = \ceil{\left(\frac{6+3\eta}{\Delta_k}\right)^{\frac{1+\epsilon}{\epsilon}} \ln \Bigg (\frac{T}{K} \left(\frac{\Delta_k}{6+3\eta}\right)^{\frac{1+\epsilon}{\epsilon}} \Bigg )}.\] Since $\Delta_k \geq \delta$, we get that $l_k$ is no less than \[\left(\frac{6+3\eta}{\Delta_k}\right)^{\frac{1+\epsilon}{\epsilon}} \ln \bigg (\frac{T}{K} \left(\frac{\delta}{6+3\eta}\right)^{\frac{1+\epsilon}{\epsilon}} \bigg ) =\left(\frac{6+3\eta}{\Delta_k}\right)^{\frac{1+\epsilon}{\epsilon}}. \] Furthermore, since $c_m$ is monotonically decreasing in $m$, for $m \geq l_k$, \begin{equation} c_m \leq c_{l_k} \leq \Bigg[\frac{\ln_+ \big(\frac{T}{K} \big(\frac{\Delta_k}{6+3\eta}\big)^{\frac{1+\epsilon}{\epsilon}}\big)}{l_k}\Bigg]^{\frac{\epsilon}{1+\epsilon}} \leq \frac{\Delta_k}{6+3\eta}. \label{ineq: 8} \end{equation} With this result and $l_k \geq 1$, we continue from~\eqref{ineq: 6} to get \begin{align} \mathbb{E}\,[\eqref{ineq: 6}]\leq &l_k-1 + \sum_{m = l_k}^T \mathbb{P} \Bigg\{ \frac{1}{m} \sum_{i=1}^{m} d_i^k \geq \frac{2\Delta_k}{3}- (2+\eta) c_m \Bigg\} \nonumber \\ \leq & l_k-1 + \sum_{m = l_k}^T \mathbb{P} \Bigg\{ \frac{1}{m} \sum_{i=1}^{m} d_i^k \geq \frac{\Delta_k}{3} \Bigg\}.
\label{ineq: 7} \end{align} Therefore, by using Lemma~\ref{max_inq_b} together with statement (ii) from Lemma~\ref{prop: d_i}, we get \begin{align} &\sum_{m = l_k}^T \mathbb{P} \Bigg\{ \frac{1}{m} \sum_{i=1}^{m} d_i^k \geq \frac{\Delta_k}{3} \Bigg\} \nonumber \\ \leq & \sum_{m = l_k}^T \exp \left(-\frac{m \Delta_k}{3 B_m} \psi\left(B_m^\epsilon \Delta_k \right) \right) \nonumber\\ \leq & \sum_{m = l_k}^T \exp \left(-\frac{m \Delta_k}{3 B_m} \psi\left(6+3\eta \right) \right), \label{newadd} \end{align} where the last step follows from the fact that $\psi(x)$ is monotonically increasing and $B_m^\epsilon \Delta_k \geq (6+3\eta) B_m^\epsilon c_m \geq 6+3\eta$ from~\eqref{ineq: 8} and~\eqref{ineq: bc}. Since $B_m = \phi \big(h(m)\big)^{-\frac{1}{1+\epsilon}} \leq \phi(am)^{-\frac{1}{1+\epsilon}}\leq (am)^{\frac{1}{1+\epsilon}}$, we have \begin{align*} \eqref{newadd} &\leq \sum_{m = 1}^T \exp \left(-m^{\frac{\epsilon}{1+\epsilon}} a^{-\frac{1}{1+\epsilon}} \psi\left(6+3\eta \right) \frac{\Delta_k }{3} \right) \\ &\leq \int_{0}^{+\infty} \exp \left(-\beta y^{\frac{\epsilon}{1+\epsilon}}\right) dy \\ &\big(\text{where we set } \beta = a^{-\frac{1}{1+\epsilon}} \psi \left ( 6+3\eta \right) \Delta_k /3\big)\\ &= \frac{1+\epsilon}{\epsilon} \beta^{-\frac{1+\epsilon}{\epsilon}} \int_{0}^{+\infty} z^{\frac{1+\epsilon}{\epsilon}-1} \exp \left(-z\right) dz \\ &\big(\text{where }z = \beta y^{\frac{\epsilon}{1+\epsilon}}\big)\\ &= \Gamma\left(\frac{1}{\epsilon}+2\right) \beta^{-\frac{1+\epsilon}{\epsilon}}. \end{align*} Plugging it into~\eqref{ineq: 7}, \begin{align*} \mathbb{E}\,[\eqref{ineq: 6}] & \leq C_2 \Delta_k^{-\frac{1+\epsilon}{\epsilon}} + C_3\Delta_k^{-\frac{1+\epsilon}{\epsilon}} \ln \Big (\frac{T}{K C_3} \Delta_k^{\frac{1+\epsilon}{\epsilon}} \Big ), \end{align*} where $C_2 = \Gamma\left({1}/{\epsilon}+2\right) a^{\frac{1}{\epsilon}} \left[{3}/{\psi \left (6+ 3\eta \right)} \right]^{\frac{1+\epsilon}{\epsilon}} $ and $C_3 = \left(6+3\eta\right)^{\frac{1+\epsilon}{\epsilon}} $. Putting this together with $\Delta_k \geq \delta$ for all $k \in \mathcal{B}_\delta$, \begin{align*} \mathbb{E}\,[\eqref{error: over estimate}] &\leq \sum_{k \in \mathcal{B}_\delta} C_2 \Delta_k^{-\frac{1}{\epsilon}} + C_3\Delta_k^{-\frac{1}{\epsilon}} \ln \left (\frac{T}{K C_3} \Delta_k^{\frac{1+\epsilon}{\epsilon}} \right )\\ & \leq C_2 K \delta^{-\frac{1}{\epsilon}} + (1+\epsilon) e^{\frac{-\epsilon}{1+\epsilon}} C_3 K \delta^{-\frac{1}{\epsilon}}, \end{align*} where we use the fact that $x^{-\frac{1}{\epsilon}} \ln \big (Tx^{\frac{1+\epsilon}{\epsilon}} /\left(K C_3\right) \big )$ takes its maximum at $x = \delta\exp(\epsilon^2/(1+\epsilon))$. \noindent \textbf{Step 4:} Plugging the results of steps $2$ and $3$ into~\eqref{sum: regret}, \[\supscr{R_T}{worst} \leq T\delta + \left[C_1 + C_2 + (1+\epsilon) e^{\frac{-\epsilon}{1+\epsilon}} C_3\right]K\delta^{-\frac{1}{\epsilon}}+2K.\] A straightforward calculation concludes the proof. \end{proof} \subsection{Distribution-dependent Regret Upper Bound} We now show that Robust MOSS also preserves a logarithmic upper bound on the {distribution-dependent} regret.
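Both Theorem~\ref{theorem:minimax bound} and the theorem below require the condition $\eta\psi(2\eta /a) \geq 2a$. As a practical aside, this condition is easy to check numerically when choosing the inputs; the following minimal NumPy sketch (illustrative only, with function and variable names of our own choosing) implements the selection rule suggested in the remark after Theorem~\ref{theorem:minimax bound} and recovers the choice $a = 1.1$, $\eta = 2.2$ used in Section~\ref{sec:simulation}.
\begin{verbatim}
import numpy as np

def psi(x):
    # psi(x) = (1 + 1/x) * ln(1 + x) - 1, as in the maximal inequality
    return (1.0 + 1.0 / x) * np.log(1.0 + x) - 1.0

def feasible(eta, a):
    # the condition required by both regret theorems
    return eta * psi(2.0 * eta / a) >= 2.0 * a

a = 1.1                                         # slightly larger than 1
etas = np.round(np.arange(0.1, 5.0, 0.1), 10)   # candidate grid for eta
print(min(e for e in etas if feasible(e, a)))   # prints 2.2
\end{verbatim}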
\begin{theorem}\label{theorem:distribution-dependent bound} For the heavy-tailed stochastic MAB problem with $K$ arms and time horizon $T$, if $\eta\psi(2\eta/a) \geq 2a$, then the regret $R_T$ of {Robust MOSS} is no greater than \[ \sum_{k: \Delta_k > 0 } \Big(\frac{u^{1+\epsilon}}{\Delta_k}\Big)^{\frac{1}{\epsilon}} \left[C_1\ln \bigg (\frac{T}{KC_1} \Big(\frac{\Delta_k}{u}\Big)^{\frac{1+\epsilon}{\epsilon}} \bigg ) + C_2 K \right] + \Delta_k, \] where $C_1 = \left(4+4\eta\right)^{\frac{1+\epsilon}{\epsilon}}$ and $C_2= \max \Big(e C_1, 2 \Gamma({1/\epsilon+2}) \left({8a}/{\psi ( 2\eta/a )}\right)^{\frac{1+\epsilon}{\epsilon}} {a}/{\ln(a)}\Big) $. \end{theorem} \begin{proof} Let $\delta = \left(4+4\eta\right)\left(e K/T\right)^{\frac{\epsilon}{1+\epsilon}}$ and define $\mathcal{B}_\delta $ in the same way as in~\eqref{def: badarm}. Since $\Delta_k \leq \delta$ for all $k \notin \mathcal{B}_\delta$, the regret satisfies \begin{align} R_T &\leq \sum_{k \notin \mathcal{B}_\delta} T \Delta_k + \mathbb{E}\,\Bigg[\sum_{t=1}^{T} \mathbf{1} {\{\varphi_t \in \mathcal{B}_\delta \}} \Delta_{\varphi_t}\Bigg] \nonumber \\ &\leq \sum_{k \notin \mathcal{B}_\delta} \! e K\left(\frac{4+4\eta}{\Delta_k}\right)^{\frac{1+\epsilon}{\epsilon}} \!\! \Delta_k + \sum_{k\in \mathcal{B}_\delta}\mathbb{E}\,\Bigg[\sum_{t=1}^T \mathbf{1}{\{\varphi_t = k \}}\Bigg] \Delta_{k}. \label{bound2: 1} \end{align} Pick an arbitrary $l_k \in \mathbb{Z}_{>0}$; then \begin{align*} \sum_{t=1}^T \mathbf{1} \{\varphi_t = k \} & \leq l_k + \!\!\!\sum_{t = K+1}^T \!\! \mathbf{1}\left\{ \varphi_t = k, n_k(t)\geq l_k \right \} \\ &\leq l_k + \!\!\! \sum_{t=K+1}^T \!\! \mathbf{1}{\left\{g^k_{n_{k}(t)} \geq g_{n^* (t)}^*, n_k(t)\geq l_k\right \}}. \end{align*} Observe that $g^k_{n_{k}(t)} \geq g_{n^* (t)}^*$ implies that at least one of the following is true: \begin{align} &g_{n^* (t)}^* \leq \mu^* -{\Delta_k}/{4}, \label{event:1}\\ &g^k_{n_{k}(t)} \geq \mu_{k} + {\Delta_{k}}/{4} + 2(1+\eta) c_{n_k(t)}, \label{event:2}\\ &(1+\eta) c_{n_k(t)} > {\Delta_k}/{4}. \label{event:3} \end{align} We select \[l_k = \ceil{\left(\frac{4+4\eta}{\Delta_k}\right)^{\frac{1+\epsilon}{\epsilon}} \ln \Bigg(\frac{T}{K} \left(\frac{\Delta_k}{4+4\eta}\right)^{\frac{1+\epsilon}{\epsilon}} \Bigg )}.\] Similarly to~\eqref{ineq: 8}, $n_k(t) \geq l_k$ implies $c_{n_k(t)} \leq \Delta_k/(4+4\eta)$, so~\eqref{event:3} is false. Then we apply Lemma~\ref{lemma: suff_sample} and Corollary~\ref{corollary: suff_sample} to get \begin{align*} &\mathbb{P} {\left\{g^k_{n_{k}(t)} \geq g_{n^* (t)}^*, n_k(t)\geq l_k\right \}}\\ \leq & \mathbb{P} \left( \text{\eqref{event:1} or~\eqref{event:2} is true} \right) \leq \frac{C_2' K}{T }\Delta_k^{-\frac{1+\epsilon}{\epsilon}}, \end{align*} where $C_2'= 2 \Gamma\left({1/\epsilon+2}\right) \left({8a}/{\psi (2\eta / a )}\right)^{\frac{1+\epsilon}{\epsilon}} {a}/{\ln(a)}$. Substituting this into~\eqref{bound2: 1}, $R_T$ is upper bounded by \begin{align*} & \sum_{k \notin \mathcal{B}_\delta} \! \frac{e C_1 K }{\Delta_k^{\frac{1}{\epsilon}}} + \! \sum_{k \in \mathcal{B}_\delta} \! \left[\frac{C_1 }{\Delta_k^{\frac{1}{\epsilon}}}\ln \bigg (\frac{T}{K C_1} \Delta_k^{\frac{1+\epsilon}{\epsilon}} \bigg ) + \frac{C_2' K}{\Delta_k^{\frac{1}{\epsilon}}} + \Delta_k \right]. \end{align*} Restoring the scaling factor $u$, the proof is concluded with a straightforward computation.
\end{proof} \section{Numerical Illustration}\label{sec:simulation} {In this section, we compare Robust MOSS with MOSS and Robust UCB (with the truncated empirical mean or Catoni's estimator)~\cite{bubeck2013bandits} in a $3$-armed heavy-tailed bandit setting.} The mean rewards are $\mu_1=-0.3$, $\mu_2=0$, and $\mu_3=0.3$, and sampling at arm $k$ returns a random reward equal to $\mu_k$ plus a sampling noise $\nu$, where $\abs{\nu}$ is a generalized Pareto random variable and the sign of $\nu$ is equally likely to be positive or negative. The PDF of the reward at arm $k$ is \[f_{k}(x) = \frac{1}{2\sigma}\left(1 + \frac{\xi \abs{x-\mu_k}}{\sigma}\right)^{-\frac{1}{\xi} - 1} \, \text{for } x \in (-\infty, +\infty),\] where we select $\xi = 0.33$ and $\sigma=0.32$. Thus, for a random reward $X$ from any arm, we know $\mathbb{E}\, [X^2] \leq 1$, which means $\epsilon=1$ and $u=1$. We select the parameters $a=1.1$ and $\eta = 2.2$ for Robust MOSS so that the condition $\eta\psi(2\eta/a) \geq 2a$ is met. Fig.~\ref{fig.comparison} shows the mean cumulative regret together with quantiles of the cumulative regret distribution as a function of time, computed from $200$ simulations of each policy. The simulation results show that there is a chance that MOSS loses stability in the heavy-tailed MAB and suffers linear cumulative regret, while the other algorithms work consistently and maintain sub-linear cumulative regret. {Robust MOSS slightly outperforms Robust UCB in this specific problem.} \begin{figure}[ht!] \begin{center} \includegraphics[width=0.49\textwidth]{comparison2.png} \vspace{-16pt} \caption{Comparison of $4$ algorithms in the heavy-tailed MAB: on each graph, the bold curve is the mean regret, while the light shaded and dark shaded regions correspond respectively to the upper $5\%$ and lower $95\%$ quantiles of the cumulative regret.} \vspace{-10pt} \label{fig.comparison} \end{center} \end{figure} \section{Conclusions and Future Direction}\label{sec:conclusions} We proposed the Robust MOSS algorithm for the heavy-tailed bandit problem. We evaluated the algorithm by deriving upper bounds on the associated distribution-free and distribution-dependent regrets. Our analysis showed that Robust MOSS achieves order-optimal performance in both scenarios. The saturated mean estimator is centered at zero, which makes the algorithm not translation-invariant. The exploration of a translation-invariant robust mean estimator in this context remains an open problem. \bibliographystyle{IEEEtran}
\section{\textsc{Melody UI}\xspace} \label{sec:system} Based on our design considerations in Section~\ref{sec:requirement} and our \textsc{Melody}\xspace algorithm in Section~\ref{sec:model}, we present \textsc{Melody UI}\xspace, an interactive system for helping users understand an ML model's decisions on an input dataset\footnote{The system can be accessed at \url{http://128.238.182.13:5004/}}. The interface consists of (A) a main explanation summary visualization, (B) an original data subset view from a selected summary, and (C) an instance view. In the following discussion, we focus on the main summary visualization and how the XAI workflow in Figure~\ref{fig:need} is established in a visual analytics fashion. \subsection{Explanation Summary Visualization Design} The explanation summary visualization (Figure~\ref{fig:summary_vis}) contains three visual components: the \textit{data flow}, the \textit{adjacency list}, and the \textit{legends}. The data flow shows how instances from different classes flow to different instance clusters through a Sankey diagram. The adjacency list displays the data summary from the local explanations. The legends display the features and their corresponding color encodings in the adjacency list. \subsubsection{Adjacency List} The main visual component of \textsc{Melody UI}\xspace is the adjacency list of the explanation summary. The explanation summary is a matrix of two sets: instance clusters and feature clusters. The intersection between an instance cluster and a feature cluster is a real-valued submatrix of the original explanations. Therefore, the simplest way to present the explanation summary is to directly show the original explanation matrix with rows and columns ordered according to their cluster memberships. However, we found the co-clusters hard to observe when the matrix is sparse (\textbf{C.2}). Since an ML model's decisions are usually diverse on different subsets of the input data, the clustering will also result in many different row and column clusters. Thus, it becomes difficult for users to notice small clusters. Also, we found the information obtained from the matrix hard to memorize when users perform multiple visual inspections and interactions on different widgets at the same time. For example, when a feature cluster is selected, users inspect the features inside it in a separate view. After the inspection, it becomes difficult to recall which feature cluster they have selected inside the matrix. These problems related to sparsity and visual stimuli have also been identified and thoroughly studied in previous literature \cite{ghoniem2005readability,hlawatsch2014visual,okoe2018node}. To explore relevant instances and features in a large sparse matrix (\textbf{C.1-2}), we design an adjacency list visualization (inspired by \cite{hlawatsch2014visual}) to present the explanation summary (Figure~\ref{fig:summary_vis}B). Each row in the adjacency list represents an instance cluster, and each color texture represents a feature cluster. The size of an instance cluster is encoded with text and height. For a feature cluster, the size is encoded with width. Thus, each intersection between an instance cluster and a feature cluster forms a block (i.e., a cell in $p(\hat{R},\hat{C})$). The blocks in each row are sorted by their values in $p(\hat{R},\hat{C})$. In this arrangement, we fix the instances' positions so that users can locate a subset of data easily.
Also, the features are color encoded so that users can reference an explanatory feature easily by its color, which helps navigate the features across different widgets (\textbf{C.6}). Furthermore, as the column position restriction is removed, the adjacency list becomes more compact. We acknowledge that a categorical color scheme might impose a scalability issue; thus, we combine the colors with textures \raisebox{-1ex}{\includegraphics[scale=0.15]{texture}} to increase the available selections. \noindent \textbf{Visualizing Local Explanation Values} Each block is a co-cluster between a group of instances and a group of features. Thus, it is also a sub-matrix of the original explanation matrix. As a sub-matrix contains a distribution of positive real numbers, we display such information as a histogram encoded by a color gradient (Figure~\ref{fig:summary_vis}\clabel{1}). The values in the sub-matrix are sorted from high to low and then encoded by a sequential color scheme. The sorting can provide better clarity on the quality of co-clusters under the sparse matrix clustering condition (\textbf{C.2}). \begin{figure*} \centering \includegraphics[width=\linewidth]{exp_quality} \caption{ Information loss of three datasets' explanation summaries after applying the heuristics from left to right (with the final loss reduction shown). } \label{fig:exp_quality} \end{figure*} \subsubsection{Data Flow} To provide a picture of how data and predictions are arranged in the summary, a Sankey diagram is displayed (Figure~\ref{fig:summary_vis}A) on the left of the adjacency list. A vertical rectangle is shown for each class with its height encoding the number of instances in the dataset. The amount of fill of each rectangle is proportional to the number of instances in the currently shown summary. The horizontal flows represent the portion of data falling into a designated instance cluster. Different colors in the flows represent the amount of data that is either correctly predicted (grey) or incorrectly predicted (red). This helps users assess the capability of the ML model: its performance on each class and the accuracy of each decision boundary (\textbf{C.3}). \subsubsection{Legends} The legends (Figure~\ref{fig:summary_vis}C) show the color and texture encodings of the clusters of explanatory features. The features are sorted based on their existence in the current summary. When we click on a feature, its distribution of explanation values in the dataset is shown as a histogram (Figure~\ref{fig:summary_vis}\clabel{2}) to allow the inspection of its global importance (\textbf{C.1}). \subsubsection{Interactions} The explanation summary can be filtered through various mechanisms. Besides explicitly selecting classes and explanatory features for filtering in the dropdown menus, statistical properties such as the size of the clusters and the average explanation values of a co-cluster allow the summary to be filtered by \textbf{sliding} different thresholds. Also, when clusters are selected, the values in the subset are shown in parallel coordinates so that the important instances from a sparse cluster (\textbf{C.2}) can be exported to the subset view by \textbf{brushing} the axes. \subsection{Visual Analytics Workflow of ML Model Explanation} We now describe how to leverage the explanation summary to complete a visual analytics workflow. The interactions between different explanations in Figure~\ref{fig:need} are consistent with \textsc{Melody UI}\xspace's views.
While the adjacency list acts as a global overview of the ML model explanation, components in the list can be selected and exported to a more focused class and instance inspection. In return, the adjacency list can use the findings from local explanations for verification or further insights. Thus, the workflow in the system forms a finite state transition among global, subset (class), and instance explanations. The discussion below mainly focuses on how the system helps circulate different XAI tasks. \subsubsection{Global $\longrightarrow$ Subset (Class) Explanation} After exploring the adjacency list, users can proceed to a subset of the clusters by clicking on a row cluster or a co-cluster (\textbf{zoom and filter}). After selecting an explanation subset and extracting the instances with significant values, users can proceed to understand the local decision logics from the behavior of the instances inside. To provide contextual explanations for tabular, image, and text data, we propose three different ways to visualize the subsets (\textbf{C.5}). \noindent \textbf{Tabular.} The system visualizes the tabular data in multiple sets of parallel coordinates (Figure~\ref{fig:case_tabular}\textbf{D}). Each set of parallel coordinates represents one class, and the lines inside represent the instances. The axes show the features in the original dataset, and the selected features are positioned at the front. The lines are colored based on whether their predictions are correct. There are also two histograms on each axis that represent the distributions of the correctly and incorrectly predicted instances. \noindent \textbf{Image.} For image data, the system shows the similarly explained instances in one column and their corresponding common visual representations in another column (Figure~\ref{fig:teaser}\textbf{B}). The instances shown are also grouped by their classes and are surrounded by colored frames that indicate the predictions. All instances' and features' images are displayed to provide a visual impression of similar images and explanations. \noindent \textbf{Text.} The system shows the number of selected instances as bar charts grouped by class and prediction outcome in the left column, and the topics and words that are used to explain the instances in the right column (Figure~\ref{fig:case_text}\textbf{C}). Users can understand what kinds of words are combined to make decisions on each prediction and further select individual words inside each topic to filter the bar charts. When a bar is clicked, the documents can be exported to the local explanation view. \subsubsection{Subset (Class) $\longrightarrow$ Local Explanation} After a subset of instances and features is inspected, users can drill down to inspect an instance in full detail for insights or hypotheses (\textbf{detail on demand}). Similarly, different arrangements are provided to inspect instances from tabular, image, and text data (\textbf{C.5}). \noindent \textbf{Tabular Instances.} The instances are selected by brushing the parallel coordinates in the subset view and rendered in the data table with the original features so that users can browse the exact numerical and categorical values. The color of each cell represents the prediction outcome. \noindent \textbf{Single Image.} An image and its top influencing features (image patches with the highest similarity scores) are displayed. The instance and features also have bounding boxes of neuron activations to inspect the relationships between different patches.
\noindent \textbf{Text Documents.} The full documents selected from the bar charts are shown. The words that are explanatory features in the document are highlighted by a sequential color map according to their explanation values. \begin{figure*} \centering \includegraphics[width=\linewidth]{tabular/case_tabular} \caption{ Use case of understanding a neural network for credit risk classification trained on tabular data. A, A1, A2: The explanation summary of the whole data, the counterfactual of the query, and similar neighbors of the query, respectively. B: Explanatory features with value distributions to understand their popularity in the dataset. C: Explanation details for filtering and zooming into significant explanations. D, D1, D2: Subset views of the selected subsets from the summary. } \label{fig:case_tabular} \end{figure*} \subsubsection{Local $\longrightarrow$ Global Explanation} Users might formulate insights and hypotheses throughout the top-down inspection. To test the hypotheses, the explanatory features and instances in the local explanation panel can be clicked, and their values become the conditions for filtering the adjacency list (\textbf{Query}) (\textbf{C.4}). For tabular data, when a cell in the data table is clicked, the logic that includes the cell's value will be included (e.g., when a categorical cell valued ``\textit{education}'' in column ``\textit{purpose}'' is clicked, the explanatory feature ``\textit{purpose = education}'' will be selected). For image and text data, the user clicks on the image or document for class queries and on the image patches or words for feature queries. Overall, users can filter the explanation summary by class, prediction outcome, and explanatory features. As a result, a new and refined overview summary is available to perform global explanation tasks again, which completes the loop. \section{Evaluation} To evaluate the scalability and the quality of \textsc{Melody}\xspace, we perform quantitative experiments and present use case scenarios on a variety of datasets. \subsection{Experimental Setup} The implementations are written in NumPy, and the experiments are run on a MacBook Pro with 2.4 GHz 8-Core Intel Core i9 CPUs and 32GB RAM. We use the following real-world datasets and ML models to conduct our experiments and use cases: \noindent\textbf{Caltech-UCSD Birds-200-2011 Images.} The dataset includes 11,788 images with 200 species of birds. We use a Convolutional Neural Network (CNN) with a prototype layer~\cite{chen2019looks}, which achieves the highest test accuracy of $73.63\%$. The explanation matrix is extracted from the prototype layer, which has 1330 visual explanatory features. \noindent\textbf{Home Equity Line of Credit (HELOC).} It contains binary classifications of risk performance (i.e., good or bad) on 10,459 samples with even class distributions. We train a two-layer neural network, which achieves the highest test accuracy of $72.59\%$. We extract 167 logics and use SHAP~\cite{lundberg2017unified} to construct our explanation matrix. \noindent\textbf{US Consumer Finance Complaints.} The dataset contains 22,200 documents with ten classes (e.g., debt, credit card, and mortgage). We train an LSTM neural network model, which achieves the highest test accuracy of $84.54\%$. We use IntGrad~\cite{sundararajan2017axiomatic} to generate explanations for the words in each document. We further combine the words by clustering their embeddings to generate 500 topics as the explanation features.
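To make the construction of the text explanation matrix more concrete, the sketch below shows one way to aggregate word-level attribution scores into topic-level explanatory features with NumPy and scikit-learn; the function and variable names are hypothetical, and the setup is a simplification of the pipeline described above.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# word_vecs: (V, d) word-embedding matrix; attributions: (m, V) matrix of
# per-document, per-word scores (e.g., from IntGrad); both assumed given.
def topic_explanations(word_vecs, attributions, n_topics=500):
    topics = KMeans(n_clusters=n_topics).fit_predict(word_vecs)
    E = np.zeros((attributions.shape[0], n_topics))
    for t in range(n_topics):
        # a document's score for a topic is the sum of its words' scores
        E[:, t] = attributions[:, topics == t].sum(axis=1)
    return E  # the (documents x topics) explanation matrix
\end{verbatim}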
\begin{table}[tb]\centering \small \ra{1} \begin{tabular}{@{}llcccc@{}}\toprule &\multicolumn{1}{c}{Algorithm} & \phantom{abc} & \multicolumn{1}{c}{Tabular} & \multicolumn{1}{c}{Image} & \multicolumn{1}{c}{Text}\\ \midrule \\[-3mm] &Baseline & & 33 mins & 21 mins & $>7$ hours\\ \\[-3mm] &Baseline + LSH & & \textbf{5s} & \textbf{13s} & \textbf{9s}\\ \bottomrule \\[-3mm] \end{tabular} \caption{Run time on different datasets.} \vspace{-3mm} \label{table:exp_runtime} \end{table} \subsection{Quantitative Evaluation} \label{sec:exp} \noindent\textbf{End-to-end quality evaluation.} To evaluate how each of our heuristics improves the quality of the summarization results, we report the quality (information loss) of the baseline implementation (i.e., straightforward minimization of Equation~\ref{eqn:kl}) as well as the effects of applying marginalization (Equation~\ref{eqn:loss}), smoothing, and pre-clustering from Section~\ref{sec:heuristics}. Overall, the heuristics significantly improve the quality of the result (Figure~\ref{fig:exp_quality}). The final reductions of information loss range from 78\% to 99\%. To visually assess the quality of the summarization results, we provide visual outcomes of the explanation summaries in Figure~\ref{fig:tabular_matrix}-\ref{fig:text_matrix} in the Appendix. \looseness=-1 \noindent\textbf{Effect of data sketches on run time performance.} We report the effect on the run time for the three datasets with the speedup strategies (Algorithm~\ref{algo:lsh}) in Table~\ref{table:exp_runtime}. The result clearly shows that by replacing the quadratic computation in the baseline approach (Algorithm~\ref{algo:coco}), it becomes possible to produce results in interactive time. We also observe that the calculation of the information loss does not scale linearly in run time, since there are many data slicing operations needed to compute the approximation matrix ($q(\hat{R},\hat{C})$). The results highlight the importance of limiting the number of candidate comparisons in the bottom-up process. \looseness=-1 \begin{figure*} \centering \includegraphics[width=\linewidth]{text/case_text} \caption{ Use case of understanding a neural network for document classification trained on text data. A: The explanation summary for customer loan complaints after filtering by class in \clabel{1}. B: By selecting a row cluster in \clabel{2}, the details of the explanation values are shown for users to select significant instances and features by brushing in \clabel{3}. C: The subset view displaying the distributions of documents in each class related to the explanatory topics. D: Clicking a blue bar in \clabel{4} shows all the correctly classified documents that are explained by the selected topics. \clabel{5} Clicking the words formulates a query that extracts all documents predicted by the words in the model. } \label{fig:case_text} \end{figure*} \subsection{Use Cases} \label{sec:usecase} We present a usage scenario of understanding deep neural networks for image recognition and two use cases regarding tabular and text classifications of financial data. Our goal is to demonstrate that our technique generalizes across different ML model interpretation challenges in understanding models and datasets. \subsubsection{Usage Scenario: Understanding an Image Classifier} We first describe a hypothetical walkthrough of understanding what a deep learning model has learned from a set of images (Figure~\ref{fig:teaser}). We use images as examples because the visual presentations are intuitive to understand.
Imagine Chris, an ornithologist, wants to study how birds' appearances distinguish their species. He downloads the data and runs the ML model to understand how the machine learns the visual features. \noindent \textbf{Understand the summary.} Chris uses \textsc{Melody}\xspace to generate an explanation summary consisting of 37 instance clusters and 49 feature clusters. He imports the result into \textsc{Melody UI}\xspace. After filtering out small clusters and clusters with low explanation values, Chris discovers three broad groups of birds with similar prediction logics (Figure~\ref{fig:teaser}\textbf{A}). Each group contains different visual explanatory features (i.e., color blocks), so he decides to go through the instance groups one by one. \noindent \textbf{Inspect an interesting subset.} Chris clicks the text box on the row to select all the instances and features from the instance cluster for a detailed inspection. From the subset view (Figure~\ref{fig:teaser}\textbf{B}), he realizes that the neural network learns to group birds with similar colors (yellowish birds) (Figure~\ref{fig:teaser}\clabel{B1}) for a coarse level of decision making. The images are then further classified by more detailed image patches such as the bird's head and belly. Chris notices that some classes, such as the yellow-throated vireo, have many wrong predictions (images with orange frames) in this subset. Therefore, he clicks on some of the images to examine an image and its classification logics in full detail. \noindent \textbf{Develop hypotheses by inspecting an instance.} Chris checks an image by clicking it in the subset view. The image and its top explanatory features are then shown in the instance inspection view (Figure~\ref{fig:teaser}\textbf{C}). He sees that a yellow-throated vireo is wrongly predicted as a blue-winged warbler because the feathers on its neck look similar to those of a blue-winged warbler. Chris finds the whole process enjoyable since he quickly identifies the reasoning processes of the model on hundreds of images within a simple journey of visual analysis. \subsubsection{Tabular Use Case: Understanding the Data Capability} We now present a use case about approaching the limit of predictability when training on a dataset. Understanding how the current features help to make predictions allows the financial worker to improve the current credit system. \noindent \textbf{Understand the summary.} After filtering by the value threshold and the number of instances in the clusters, the analyst obtains a visual explanation summary (Figure~\ref{fig:case_tabular}\textbf{A}). It shows that the blue-colored blocks occupy most of the rows; they consist mainly of items related to delinquency (Figure~\ref{fig:case_tabular}\textbf{B}). Then, he clicks and inspects the subsets and filters out some low explanation values by brushing the parallel coordinates (Figure~\ref{fig:case_tabular}\textbf{C}); the subsets show very similar behaviors: for customers who have no history of delinquency, the model labels them as ``good''. \noindent \textbf{Discovering more detailed logics in the model.} The analyst sees that such logics provide an approximate accuracy of around $73\%$ on more than half of the population. To understand how a ``bad'' decision is correctly predicted with a good delinquency record, he refines the summary by the classes. The summary shows another logic that influences the model outcome (Figure~\ref{fig:case_tabular}\clabel{A1}).
The pink blocks represent the features related to a low external risk estimate, which means that the customer would still be graded as ``bad'' if the external risk estimate is low (Figure~\ref{fig:case_tabular}\clabel{D1}). \noindent \textbf{Verifying insights.} From the data flow, the analyst sees that combining delinquency and risk estimate yields a good prediction result. To verify this hypothesis, he filters the summary by showing only the wrong predictions under the same condition. After adjusting the value threshold to a low level, he finds that the wrong predictions are mainly attributed to the fact that they do not have a low risk estimate (i.e., the pink blocks related to the risk estimate are missing when blue blocks are present) (Figure~\ref{fig:case_tabular}\clabel{A2}). Clicking the rows with missing pink blocks also reveals that the model fails to identify bad risks when the customer has a good delinquency record and a high external risk estimate (Figure~\ref{fig:case_tabular}\clabel{D2}). Throughout the visual analysis at different granularities, the analyst acquires an overview of the model: the model mainly decides by the history of delinquency on the first level of reasoning, and then further screens out the bad risks by a low external risk estimate. The query panel shows that the rows explained by either of these two logics cover more than $70\%$ of the whole dataset. \subsubsection{Text Use Case: Predicting Customer Complaints} We present a use case of exploring a text classification model to understand different types of customer complaints. Understanding how customers complain can improve the call center's services. Our financial analyst first uses \textsc{Melody}\xspace to acquire 28 instance clusters and 23 feature clusters. Then, as he is from the loan division of the company, he filters the explanation summary by the ``customer loans'' class to explore the customers' inquiries related to loans (Figure~\ref{fig:case_text}\clabel{1}). \noindent\textbf{Identify useful subsets.} The analyst first discovers that the explanation summary is very sparse (Figure~\ref{fig:case_text}\textbf{A}). Therefore, he clicks on a block to examine a more detailed view of the explanation subset (Figure~\ref{fig:case_text}\clabel{2}). The details of the value distributions in the selected block are shown in the explanation parallel coordinates (Figure~\ref{fig:case_text}\textbf{B}). The analyst discovers that the sparsity mainly comes from the low usage of many topics in the feature clusters. Thus, he brushes the axes of topics that have high values to acquire the subset of instances and topics that heavily correlate with each other. The result of the brushing is shown in the subset view (Figure~\ref{fig:case_text}\textbf{C}). \noindent\textbf{Discover interesting topics in the subset.} From the subset view, the analyst discovers that many complaints are related to words such as ``auto'', ``bmw'', and ``ford''. These words relate to automobiles and vehicles. Given the longer correctly labeled (blue) bars in the bar chart, these words contribute significantly to the model's correct prediction of customer loan complaints. Therefore, by clicking the blue bar, the analyst inspects the raw documents of these automobile-related instances (Figure~\ref{fig:case_text}\textbf{D}). \noindent\textbf{Insights from instances.} By browsing the documents, the analyst confirms that the loan payment complaints are related to vehicle purchases.
By clicking words like ``vehicles'' and ``cars'', he queries the global explanation summary to verify his findings (Figure~\ref{fig:case_text}\clabel{5}). The query results show that there are more than 120 complaints about customer loans that contain such phrases, with a correctness of 93\%. Thus, he concludes that automobile purchase is a popular topic when customers approach the financial institution, and the company should have these topics included in the call center training sessions. \section{Conclusion and Future Work} In this work, we present \textsc{Melody}\xspace, an interactive algorithm to construct an explanation summary of an ML model from local explanations of a dataset. The summary allows users to understand the decision rationale and data characteristics together for a holistic XAI experience. Alongside the algorithm, we also present \textsc{Melody UI}\xspace, an interactive visual analytics system to connect different granularities of XAI tasks. The versatility of our algorithm and system enables scalable visual explorations of generic ML model interpretations on tabular, image, and text data. Our future work includes: \noindent\textbf{Embed summarization into model training processes.} Instead of generating a summary after the training, we plan to integrate the summary as a layer in the deep neural network to increase the global explanation capability of the model. \noindent\textbf{User study.} Since many model developers use visualizations such as partial dependence plots or projections to understand the model, we plan to conduct a user study to see whether providing explanatory features and similarity among data at the same time improves productivity in practice. Also, we plan to conduct a longitudinal evaluation of \textsc{Melody UI}\xspace with ML researchers to investigate how the system affects model design, data engineering, and model debugging. \noindent\textbf{Application domain.} Apart from tabular, image, and text classification, there are also other data primitives such as time series and graph classification tasks. We plan to explore visual analytics approaches that apply our algorithm to explain ML tasks in these domains. \subsubsection{Speed Up Strategies With Data Sketches} While a randomized bottom-up algorithm scales linearly, Algorithm~\ref{algo:coco} is time-consuming as it needs an extra loop to compare all possible row or column clusters (line 7) in every iteration. However, if we look at the example matrix in Section~\ref{sec:def}, it is obvious that the first two rows (columns) are completely different from the last two rows (columns). Comparing candidates that are clearly different is of no use since such merges are unlikely to reduce the total cost. Thus, to speed up the algorithm, we propose a k-nearest-neighbor query strategy with a novel use of a locality sensitive hashing (LSH)~\cite{charikar2002similarity} scheme to encode row and column clusters. LSH defines a family of hash functions (i.e., sketches) $[h_{1}(v_i),h_{2}(v_i),...,h_{n}(v_i)]$ for a vector $v_i$ so that the probability of a hash collision between two vectors grows with their similarity, i.e., $sim(v_i,v_j)\sim Pr[h_k(v_i)=h_k(v_j)]$. Vectors with similar values thus tend to be stored in the same buckets of an LSH table. Furthermore, we can exploit this property to retrieve similar row (column) clusters. If two clusters have many similar vectors, then the number of hash collisions between them will be high. Therefore, the top-k clusters from the query will likely be similar neighbors.
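To illustrate the scheme, the sketch below implements a sign-random-projection LSH index in NumPy; the class and parameter names are ours, and grouping the hash bits into a few composite tables is a simplification of the per-function collision counting used in Algorithm~\ref{algo:lsh}.
\begin{verbatim}
import numpy as np
from collections import defaultdict

class LSHIndex:
    # Sign-random-projection LSH (cf. Charikar, 2002): each table keys a
    # vector by the signs of a few random projections, so nearby vectors
    # tend to share buckets.
    def __init__(self, dim, n_tables=8, bits_per_table=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.standard_normal((bits_per_table, dim))
                       for _ in range(n_tables)]
        self.tables = [defaultdict(set) for _ in range(n_tables)]

    def _keys(self, v):
        # one composite key (a tuple of sign bits) per table
        return [tuple((P @ v > 0).astype(int)) for P in self.planes]

    def insert(self, idx, v):
        for table, key in zip(self.tables, self._keys(v)):
            table[key].add(idx)

    def collisions(self, v):
        # count, for each stored entry, how many tables bucket it with v;
        # high counts indicate likely near neighbors
        counts = defaultdict(int)
        for table, key in zip(self.tables, self._keys(v)):
            for idx in table[key]:
                counts[idx] += 1
        return counts
\end{verbatim}
Averaging such collision counts over the members of each candidate cluster then yields the ranking used by the top-$k$ query.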
The query algorithm is illustrated in Algorithm~\ref{algo:lsh}. First, an LSH table needs to be built for the rows and the columns, respectively. Then, when a neighbor query is performed, we use the hash keys of the query's vectors to perform a table look-up that retrieves all the entries collided with the entries in the cluster (subroutine $query\_lsh\_table$ in line 2). We count the average number of collisions between the entries from the query cluster and those from each candidate cluster (line 5) and return the top $k$ clusters with the highest counts. This can drastically reduce the number of comparisons, and hence the running time, when the matrices are large (Section~\ref{sec:exp}). \subsubsection{Strategies Addressing Skewness and Sparsity} \label{sec:heuristics} Empirically, we observe two challenges when computing results on real datasets, for which we provide the following heuristics and demonstrate their effectiveness in Section~\ref{sec:exp}: \noindent\textbf{Smoothing the explanation values}: When an explanation model assigns values to the important features of an instance, the values can be very high (e.g., for extremely sensitive features). This could distort the calculation of the loss function (Equation~\ref{eqn:loss}) and prevent instances with similarly activated features from being grouped. Therefore, to obtain an even data distribution in the explanation matrix, we cap the maximum value at the knee point of the overall value distribution of the matrix using a knee-finding algorithm~\cite{satopaa2011finding}. \noindent\textbf{Pre-clustering for a cold start in a sparse environment}: Given a sparse explanation matrix, the bottom-up approach might face difficulties in clustering entries when the cost function is stuck in a local minimum. Also, as the matrix is sparse, it is hard for the algorithm to know whether there are cluster structures at the beginning. Both adversely affect the formation of significant clusters. To address this cold-start problem, we borrow from spectral graph partitioning \cite{dhillon2001co} to create relatively small partitions of rows and columns using their singular vectors from an SVD decomposition. Then, we can use our information-theoretic objective function to compress the matrix further.
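The two heuristics can be sketched as follows; the knee detector is a simplified stand-in for the algorithm of \cite{satopaa2011finding}, and the pre-clustering is a plain SVD-plus-$k$-means variant in the spirit of \cite{dhillon2001co}, so both should be read as illustrations rather than our exact implementation.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def cap_at_knee(E):
    # Sort the nonzero explanation values; with heavy-tailed scores the
    # normalized curve hugs zero and then shoots up, so the knee is the
    # point farthest below the chord from (0, 0) to (1, 1).
    vals = np.sort(E[E > 0])
    x = np.linspace(0.0, 1.0, vals.size)
    y = (vals - vals[0]) / (vals[-1] - vals[0])
    return np.minimum(E, vals[np.argmax(x - y)])

def pre_cluster(E, n_row, n_col, k=8):
    # Cold start: partition rows and columns by k-means on their
    # leading singular vectors (spectral co-clustering style).
    U, s, Vt = svds(E, k=k)   # E may also be a scipy.sparse matrix
    rows = KMeans(n_clusters=n_row).fit_predict(U)
    cols = KMeans(n_clusters=n_col).fit_predict(Vt.T)
    return rows, cols
\end{verbatim}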
\begin{algorithm}[tb] \SetKwFunction{cumprod}{cumprod} \SetKwFunction{length}{length} \SetKwFunction{zeros}{zeros} \SetKwFunction{ceil}{ceil} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \caption{Top-k Nearest Neighbor Query} \label{algo:lsh} \tcc{Initialize \xvbox{3mm}{$\xvar{T}_R$} $\leftarrow$ $build\_lsh\_table(\xvar{R})$ and \xvbox{3mm}{$\xvar{T}_C$} $\leftarrow$ $build\_lsh\_table(\xvar{C})$ after line 2 in Algo.~\ref{algo:coco}} \tcc{Replace $\xvar{R}$ with $query(\xvar{r},\xvar{R},\xvar{T}_R,k)$ in line 7 of Algo.~\ref{algo:coco}} \Input{% \xvbox{2mm}{$\xvar{v}$} -- query cluster\\ \xvbox{6mm}{$\xvar{V}$, $\xvar{T}_v$} -- remaining clusters and LSH table\\ \xvbox{2mm}{$k$} -- number of neighbors } \Output{% \xvbox{4mm}{$\xvar{knn}$} -- top k nearest neighbors } \xvbox{9mm}{$counter$} $\leftarrow$ $Counter()$ \tcc*{initialize counter} \xvbox{12mm}{$neighbors$} $\leftarrow$ $query\_lsh\_table(\xvar{v},\xvar{T}_v)$ \tcc*{get collided entries} \For{\xvbox{2mm}{$n$} $\leftarrow$ $neighbors$}{ \For{$\bar{\xvar{v}}$ in $\xvar{V}$}{ \If(\tcc*[h]{collision between the clusters}){$n$ in $\bar{\xvar{v}}$ } { \xvbox{12mm}{$counter[\bar{\xvar{v}}]$} $+= 1 / |\bar{\xvar{v}}|$ \\ break } } } \xvbox{4mm}{$knn$} $\leftarrow$ $counter.most\_common(k)$ \end{algorithm} \section{Introduction} ``All models are wrong, but some are useful.'' The application of machine learning (ML) models, including deep neural networks, is prevalent in all aspects of human activities and is nowadays the main driving force of technological advances such as self-driving cars, personal assistants, and medical diagnoses. While ever more new models and architectures are proposed to improve the accuracies of different tasks, such popularity also implies that there is no silver bullet for creating the best model. Hence, the creation of an ML model is a human-centric activity that involves a great deal of reasoning, brainstorming, and, most importantly, understanding. Understanding how a model works on one's own data improves task performance in a holistic scope, not limited to the model design but extending to the data preprocessing and feature engineering steps. Yet, modern ML models pose a challenging problem regarding their \textit{interpretability}. It has become so complex to understand what the models have learned that using them as black boxes could adversely affect people's safety, financial, or legal status \cite{voigt2017eu}. Thus, explainable Artificial Intelligence (XAI) has become an emerging research field in which much effort has been devoted to extracting the logic behind how the models make decisions. Overall, these logical models focus on the usage of \textit{decision trees}, \textit{rules}, and \textit{instance-level feature importance} to mimic or customize the behavior of the ML models \cite{guidotti2018survey} so that people can understand how the model works through decision paths or scoring systems. In particular, \textit{instance-level feature importance} explanations have become more popular for explaining sophisticated models. They produce accurate \textit{local} explanations as they only focus on a single instance. Such customization can even allow the explainer to be embedded in the ML model \cite{bien2011prototype,chen2019looks,kim2014bayesian,li2018deep}.
Thus, local explanations have been readily proposed not only for explaining tabular data classification \cite{lundberg2017unified,ribeiro2016should} but also for complex deep learning tasks in natural language processing \cite{poerner2018evaluating} and computer vision \cite{chen2019looks}. Of course, the ability to customize does not come as a free lunch. As the explanation is tailored towards an individual instance, the explanation model loses the advantage of providing aggregated explanations that generalize over the whole dataset. This limits its usage to providing simple textual information or visualizations that describe how the model works. Such a limit, however, is where visualization techniques come in handy. We observe that most feature importance based explanations in the current literature can be \textit{summarized} \cite{chandola2007summarization} into an explanation summary. The goal of summarization (Figure~\ref{fig:illustration}) is to find a compact description of the dataset with a minimum cost of information loss (i.e., an information-theoretic goal). In other words, it finds an explanation summary of the ML model, which consists of the groups of instances with similar explanations (colored regions) and the groups of features that are used to explain similar sets of instances (dashed lines). Thus, the summary is a compact \textit{global explanation} that enables effective visualization to communicate a model's general behavior. To fill the gaps of local explanation techniques on XAI tasks, we propose a scalable data summarization technique that only takes the generic form of the explanation information into account, so that we can leverage existing explanation techniques in different domains to provide useful visual data summaries. With \textsc{Melody UI}\xspace, we show that our implementation helps establish a holistic workflow for the XAI experience concerning tabular data, text, or images. In short, our contributions are as follows: \begin{itemize}[noitemsep,topsep=0pt,leftmargin=3mm] \item \textbf{\textsc{Melody}\xspace, a scalable algorithm that generates a compact data summary for an ML model and input data.} It takes any generic \textit{feature importance} based explanation from the model and works for both structured and unstructured data. The algorithm consists of (1) an information-theoretic model to determine the best data summary and (2) an efficient sketching technique to speed up the computation. In Section~\ref{sec:exp} we show that \textsc{Melody}\xspace produces meaningful results and scales to large data. \item \textbf{\textsc{Melody UI}\xspace, an interactive system for scalable interpretation and exploration of the input data and the ML model together.} By leveraging our algorithm to group similar instances and explanations, we enable a seamless workflow that connects the different needs regarding global, local, and class explanations in current XAI systems \cite{liao2020questioning}. \item \textbf{Three use cases covering ML model interpretations on tabular, image, and text data.} We demonstrate that our algorithm and system enhance the XAI user experience of model interpretability for three mainstream types of data analysis. \end{itemize} \section{Design Considerations} \label{sec:requirement} Based on Section~\ref{sec:tasks} and Section~\ref{sec:model}, we distill the main design considerations for an interactive visualization interface that addresses a holistic XAI workflow and the summary's characteristics.
Considerations in \boxtext{lavenderblush}{pink} address the tasks in Section~\ref{sec:tasks}, and those in \boxtext{lavender(web)}{blue} address the data perspective in Section~\ref{sec:model}. \begin{itemize}[noitemsep] \item[\textbf{C.1}]\boxtext{lavenderblush}{Visual Summary} \textbf{Synthesize instance and feature summaries.} Clusters of instances and clusters of features should be displayed together to understand the decision boundaries (\textit{knowledge of a model}) on different subsets (\textit{the influenced population}). \item[\textbf{C.2}]\boxtext{lavender(web)}{Sparse Summary} \textbf{Scalable visualization for large sparse data.} As the local explanations are highly customized and independent, the explanation summary will also be a sparse matrix. The visualization needs to highlight small but significant co-clusters. \item[\textbf{C.3}]\boxtext{lavenderblush}{Prediction Outcome} \textbf{Display instances' outcomes from the model.} Knowing when and where the model can fail is essential to understanding its capability. Thus, the prediction outcome should be embedded in the visual summary and explanations. \item[\textbf{C.4}]\boxtext{lavenderblush}{Filtering} \textbf{Filter the data summary by classes or features.} When a user collects insights from local or class explanations, the insights need to be verified on a larger population. Thus, filtering by classes or features acts as a query from a local analysis that refines the explanation summary in a global view. \item[\textbf{C.5}]\boxtext{lavender(web)}{Level-of-detail} \textbf{Different level-of-detail presentations for tabular, image, and text data.} Level-of-detail may come in different forms for different data primitives. Although the explanation summaries are the same, when users drill down into details, the presentations should be different. \item[\textbf{C.6}]\boxtext{lavenderblush}{Explanation in a Loop} \textbf{Connect local, global, and class explanations as a loop.} The three main themes of XAI should be connected for a complete ML model explanation (Figure~\ref{fig:need}). Different views related to different scopes of explanations should be tightly integrated. \end{itemize} \section{Machine Learning Model Explanation Summary} \label{sec:model} In this section, we describe the definition of a model explanation summary as well as the algorithms to compute it from the local explanations. \subsection{Generic Representation of Local Explanation} \label{sec:data} The most generic form of a local explanation $e_i$ is a feature vector over the $n$ explanatory features used in explaining the whole dataset. Each value $e_{ij}$ in the feature vector is the explanation importance of feature $j$ for instance $i$ (e.g., ``skin color'' for a cat image). For more details on local explanation techniques, we refer readers to Appendix~\ref{sec:background}. All $m$ instances' explanations from the whole dataset can thus be expressed as a real-valued matrix $E \in \mathbb{R}^{m,n}$. One important property of the matrix is that it is \textit{sparse}, i.e., $nm \gg nnz(E)$, where $nnz(E)$ is the number of nonzeros in $E$. This ensures that the explanation uses a small number of features to explain the model behavior so that the decision logic does not overwhelm the user. Also, to simplify the subsequent discussion, we assume $e_{ij} \geq 0$ and $\sum_{i,j}e_{ij} = 1$ \footnote{All matrices can satisfy these properties with a min-max scaler (on their absolute values if the sign does not matter).}.
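As a minimal illustration of this convention, the following sketch (with hypothetical names) turns a raw attribution matrix into the normalized form assumed above.
\begin{verbatim}
import numpy as np

def to_joint_distribution(E_raw):
    # Take absolute values when the sign of an attribution does not
    # matter, min-max scale to [0, 1], then normalize to sum to one.
    A = np.abs(E_raw)
    A = (A - A.min()) / (A.max() - A.min())
    return A / A.sum()  # now e_ij >= 0 and the entries sum to 1
\end{verbatim}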
\begin{table}[tb]\centering \ra{1.3} \begin{tabular}{@{}llcl@{}}\toprule &\multicolumn{1}{c}{Data Type} & \phantom{abc} & \multicolumn{1}{c}{Explanatory Features} \\ \midrule \\[-3mm] &Tabular & & \makecell{A set of \textbf{logics} (ranges). \\ e.g. It \underline{will} rain since \textcolor{amber(sae/ece)}{$\textit{precipitation} \geq 90\%$}.} \\ \\[-3mm] &Image & & \makecell{A set of \textbf{common visual representations}. \\ e.g. It is a \underline{pig \raisebox{-.3\height}{\coloremoji{🐷}}} since I notice its \textcolor{amber(sae/ece)}{\textit{nose}} \raisebox{-.3\height}{\coloremoji{🐽}}.} \\ \\[-3mm] &Text & & \makecell{A set of \textbf{topics}. \\ e.g. This review is \underline{positive} since there are \\ words like \textcolor{amber(sae/ece)}{\textit{excellent / fantastic / amazing}.}} \\ \bottomrule \end{tabular} \caption{Intrinsic nature of \textcolor{amber(sae/ece)}{explanations} for tabular, image, and text data. It affects how we construct the explanation matrices.} \label{table:nature} \end{table} \subsection{Explaining Tabular, Image, and Text Instances} Given the generic form of an instance explanation, we now drill down to an in-depth discussion of how these feature vectors can be applied to explain tabular, image, and text instances. Although all of them result in explanation matrices, their intrinsic nature, shown in Table~\ref{table:nature}, affects how we adapt our modeling pipelines to construct the features (i.e., columns) so as to acquire meaningful explanations. We provide end-to-end data modeling examples in Appendix~\ref{sec:pipeline}. \noindent\textbf{Tabular Data}. Logical models like decision trees and random forests discretize the attributes in the dataset into a set of \textit{logics}. Similarly, the explanatory features should be not only the original attributes but also their different ranges, for better diversity. For example, all cities (instances) rain based on their precipitation (attribute), but how each city rains at different percentages of precipitation (ranges) reveals different climates. \looseness=-1 \noindent\textbf{Image Data}. Users normally classify images by the common visual features of the same entity (e.g., stripes on zebras). Similarly, an image in an ML model can be explained with representative image patches collected from the original data, which unify the reasoning process with a limited set of features instead of the pixels of each image. \noindent\textbf{Text Data}. Multiple documents are usually explained with common topics instead of single phrases because data with similar meanings can consist of totally different words. For example, ``good'' and ``great'' represent similar sentiments. Thus, the explanatory features should be a set of topics instead of words to avoid an overly sparse matrix. \subsection{Problem Definition} \label{sec:def} The information-theoretic goal for summarizing the explanation matrix is to group similar instances and explanatory features simultaneously, balancing compactness against information loss. Let $R$ and $C$ be the sets of row (instance) and column (feature) vectors in $E$, respectively, such that $E$ is equivalent to a joint distribution between $R$ and $C$ (i.e., $p(R,C)$). Our goal is to find the optimal row and column clusters $\hat{R}$ and $\hat{C}$ so that they present the explanation summary in Figure~\ref{fig:illustration}. Therefore, the first question is: how should we measure the information loss?
For example, consider the following synthetic explanation matrix:
\begin{equation*} p(R,C) = \begin{bmatrix} .1 & .1 & 0 & 0\\ .1 & .1 & 0 & 0\\ 0 & 0 & .2 & .2\\ 0 & 0 & 0 & .2 \end{bmatrix} \end{equation*}
The natural grouping puts the rows into two clusters, $\hat{r}_1 = \{r_1,r_2\}$, $ \hat{r}_2 = \{r_3,r_4\}$, and the columns into two clusters, $\hat{c}_1 = \{c_1,c_2\}$, $\hat{c}_2 = \{c_3,c_4\}$. The information-theoretic definitions of the resulting compression $p(\hat{R},\hat{C})$ and the approximation matrix recovered from the compression $q(\hat{R},\hat{C})$ are as follows \cite{dhillon2003information}:
\begin{equation*} p(\hat{R},\hat{C}) = \begin{bmatrix} .4 & 0 \\ 0 & .6 \end{bmatrix} \textit{, } q(\hat{R},\hat{C}) = \begin{bmatrix} .1 & .1 & 0 & 0\\ .1 & .1 & 0 & 0\\ 0 & 0 & .133 & .267\\ 0 & 0 & .067 & .133 \end{bmatrix} \end{equation*}
Each entry in the approximation matrix $q(\hat{R},\hat{C})$ is calculated as follows:
\begin{equation} q(r,c) = p(\hat{r},\hat{c}) \times \frac{p_{R}(r)}{p_{\hat{R}}(\hat{r})} \times \frac{p_{C}(c)}{p_{\hat{C}}(\hat{c})} \end{equation}
For example, $q(3,4) = .6 \times (.4)/(.6) \times (.4)/(.6) = 0.267$. Thus, the compression loss can be expressed with metrics such as the Kullback-Leibler (KL) divergence of $p(R,C)$ from $q(\hat{R},\hat{C})$:
\begin{equation} D_{KL}(P,Q)=\sum_{x \in \chi }P(x)\log\left(\frac{P(x)}{Q(x)}\right) \label{eqn:kl} \end{equation}
Yet, we observe a shortcoming of directly using the KL divergence on the whole matrix. The $P(x)$ in Equation~\ref{eqn:kl} tells us that each entry's contribution to the result is not independent of the clusters to which it does not belong. Therefore, we propose a loss function $D(\hat{R},\hat{C})$ such that each entry's loss is marginal to its row and column cluster:
\begin{equation} \begin{aligned} D(\hat{R},\hat{C}) = \sum_{\hat{r} \in \hat{R}}D_{KL}(P(r \in \hat{r}, C),Q(r \in \hat{r}, C)) \\ +\sum_{\hat{c} \in \hat{C}}D_{KL}(P(R, c \in \hat{c}),Q(R, c \in \hat{c})) \label{eqn:loss} \end{aligned} \end{equation}
Such a marginalization prevents entries with high values from dominating the calculation, an improvement we demonstrate in Section~\ref{sec:exp}.
Once we can quantify the information loss, the next challenge is how to choose the number of row and column clusters. If we do not cluster any rows or columns at all, $D$ equals zero, whereas if we have only a single cluster, the loss is huge. Neither extreme is a summary of the data: the former simply reproduces the original matrix, while the latter is a summary of poor quality. To automatically determine the optimal partitions, the idea is to use the Minimum Description Length (MDL) principle, which states that the best model is the one that minimizes the total description length: the $model$ cost (i.e., the numbers of clusters $\left \| \hat{R} \right \|$ and $\left \| \hat{C} \right \|$) plus the $correction$ cost (i.e., the information loss $D$). Putting these together, we can now write the total cost function $T$ as:
\begin{equation} T(\hat{R};\hat{C}) = \beta_R \left \| \hat{R} \right \| + \beta_C \left \| \hat{C} \right \| + D(\hat{R},\hat{C}) \label{eqn:objective} \end{equation}
which we minimize over row and column partitions. $\beta_R$ and $\beta_C$ are user-defined parameters that penalize a large number of clusters. Users can increase these values to produce fewer clusters.
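To make these definitions concrete, the sketch below (a dense NumPy implementation with our own function names; the production algorithm operates on sparse matrices) evaluates the approximation $q$, the marginalized loss $D$ of Equation~\ref{eqn:loss}, and the total cost $T$ of Equation~\ref{eqn:objective}. On the synthetic matrix above it reproduces $q(3,4)\approx 0.267$.
\begin{verbatim}
import numpy as np

def approximate(p, rl, cl):
    """q(r,c) = p(rhat,chat) * p_R(r)/p_Rhat(rhat) * p_C(c)/p_Chat(chat)."""
    pR, pC = p.sum(axis=1), p.sum(axis=0)
    phat = np.zeros((rl.max() + 1, cl.max() + 1))
    np.add.at(phat, (rl[:, None], cl[None, :]), p)  # compressed p(rhat,chat)
    pRhat, pChat = phat.sum(axis=1), phat.sum(axis=0)
    return phat[rl][:, cl] * np.outer(pR / pRhat[rl], pC / pChat[cl])

def kl(p, q):
    m = p > 0  # entries with p(x) = 0 contribute nothing
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def loss(p, rl, cl):
    """Marginalized loss D: one KL term per row cluster and per column cluster."""
    q = approximate(p, rl, cl)
    d = sum(kl(p[rl == r], q[rl == r]) for r in np.unique(rl))
    return d + sum(kl(p[:, cl == c], q[:, cl == c]) for c in np.unique(cl))

def total_cost(p, rl, cl, beta_r, beta_c):
    """T = beta_R * |Rhat| + beta_C * |Chat| + D(Rhat,Chat)."""
    return beta_r * len(np.unique(rl)) + beta_c * len(np.unique(cl)) \
        + loss(p, rl, cl)

p = np.array([[.1, .1, 0, 0], [.1, .1, 0, 0],
              [0, 0, .2, .2], [0, 0, 0, .2]])
rl, cl = np.array([0, 0, 1, 1]), np.array([0, 0, 1, 1])
print(approximate(p, rl, cl)[2, 3])  # row 3, column 4 (0-indexed): ~0.267
\end{verbatim}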
\subsection{The \textsc{Melody}\xspace Algorithm}
We now present our \textsc{Melody}\xspace (\textbf{\underline{M}}achin\textbf{\underline{E}} \textbf{\underline{L}}earning M\textbf{\underline{O}}\textbf{\underline{D}}el Summar\textbf{\underline{Y}}) algorithm. In the previous section we formulated our goal: find the row and column clusters that minimize the cost function in Equation~\ref{eqn:objective} among all possible numbers of clusters and all possible row and column combinations. Yet, the equation itself does not tell us how to reach the solution efficiently. Since the matrix can be considered as a graph in which each entry is a weighted edge between a row node and a column node, we can use a graph summarization \cite{navlakha2008graph} approach to provide a baseline solution (Algorithm~\ref{algo:coco}). The overall idea is as follows:
\begin{enumerate}
\item Each row and column starts in its own cluster. Then, we put the row and column clusters into two separate lists (lines 1-2).
\item We first fix the column cluster assignment. From the row clusters in the list, we randomly select a row cluster (line 5).
\item We compare the selected row cluster with the remaining row clusters in the list as merge candidates (lines 7-12): we try merging the selected cluster with each remaining cluster and calculate the cost reduction by Equation~\ref{eqn:objective} (line 8). We choose the candidate that produces the lowest cost.
\item If merging the selected cluster and its best candidate reduces the total cost, we merge the two clusters in the list (lines 14-15). Otherwise, we remove the selected cluster from the list (line 17). Either way, the list will have one fewer item.
\item We repeat steps 2-4, but with the row clusters fixed, merging the column clusters instead. The algorithm stops when no clusters remain in either list.
\end{enumerate}
\begin{algorithm} \SetKwFunction{cumprod}{cumprod} \SetKwFunction{length}{length} \SetKwFunction{zeros}{zeros} \SetKwFunction{ceil}{ceil} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \caption{\textsc{Melody}\xspace (\textbf{\underline{M}}achin\textbf{\underline{E}} \textbf{\underline{L}}earning M\textbf{\underline{O}}\textbf{\underline{D}}el Summar\textbf{\underline{Y}})} \label{algo:coco} \Input{% \xvbox{5mm}{$R,C$} -- instances and explanatory features\\ \xvbox{8mm}{$\beta_R$, $\beta_C$} -- regularization terms } \Output{% \xvbox{5mm}{$\hat{R},\hat{C}$} -- row and column clusters } \BlankLine \xvbox{2mm}{$\xvar{R}$} $\leftarrow$ $[\{r_1\},\{r_2\},...,\{r_m\}]$, \xvbox{2mm}{$\hat{R}$} $\leftarrow$ $\{\}$ \tcc*{initialize rows} \xvbox{2mm}{$\xvar{C}$} $\leftarrow$ $[\{c_1\},\{c_2\},...,\{c_n\}]$, \xvbox{2mm}{$\hat{C}$} $\leftarrow$ $\{\}$ \tcc*{initialize columns} \xvbox{5mm}{$loss$} $\leftarrow$ $D($\xvar{R}$,$\xvar{C}$)$ \tcc*{initialize loss function} \While{ $size(\xvar{R}) > 0$ and $size(\xvar{C}) > 0$}{ \xvbox{2mm}{$\xvar{r}_0$} $\leftarrow$ $random\_pop(\xvar{R})$ \tcc*{randomly extract a cluster} \xvbox{7mm}{$\Delta L_{max}$} $\leftarrow$ 0, \xvbox{5mm}{$r_{max}$} $\leftarrow$ \textit{undefined} \For{\xvbox{2mm}{$\xvar{r}$} $\leftarrow$ $\xvar{R}$}{ \xvbox{4mm}{$\Delta L$} $\leftarrow$ $\beta_R - D_{KL}( \{\xvar{r} \cup \xvar{r}_0\},\xvar{C}\cup\hat{C})$ \If{$\Delta L > \Delta L_{max}$}{ \xvbox{8mm}{$\Delta L_{max}$} $\leftarrow$ $\Delta L$, \xvbox{5mm}{$r_{max}$} $\leftarrow$ $\xvar{r}$ } } \eIf{$\Delta L_{max} > 0$}{ \xvbox{5mm}{$r_{max}$} $\leftarrow$ $\{r_{max} \cup \xvar{r}_0\}$ \tcc*{merge two clusters} }{ \xvbox{2mm}{$\hat{R}$}.push($\xvar{r}_0$) \tcc*{push the cluster to final result} } \tcc{same procedure as for C...} } \end{algorithm}
Overall, in every iteration, a row (column) cluster needs to measure the cost reductions against the remaining candidates in the list, which has a maximum size of $||R||$ ($||C||$). Therefore, the time complexity of the basic algorithm is $O(||R||^{2} + ||C||^{2})$. As a quadratic algorithm is infeasible for any moderately sized data in exploratory visual analysis, we now propose a speed-up strategy to make our algorithm suitable for interactive performance.
\section{Appendix}
\label{sec:appendix}
\subsection{Background of Local Explanation Models}
\label{sec:background}
We provide background on the mainstream models that generate local explanations of a machine learning model's decisions on a dataset. Local explanations are popular because, unlike logical models such as decision trees or rules, these methods provide an independent and highly customized explanation for each instance. When explanations are not aggregated into general decisions or rules, they are more faithful to the original model. In general, to generate a local explanation for an instance, explanation algorithms usually take one of the following approaches:
\begin{enumerate}
\item \textit{Local Linear Models}: The algorithm searches the neighbors of an instance, then fits the subset to a linear model such that the higher the gradient of a feature in the linear model, the more important the feature is to the prediction of the selected instances. SHAP \cite{lundberg2017unified} and LIME \cite{ribeiro2016should} are examples that use neighbors to evaluate an instance.
\item \textit{Perturbation}: Instead of using other instances to generate explanations, one can perturb the values of an instance's attributes and observe whether the output changes significantly after removing, masking, or altering them. The sensitivity of a feature implies that its value lies on the decision boundary of the machine learning model. Thus, a feature that is sensitive under perturbation has high influential power on the instance. This method has been applied to Convolutional Neural Networks (CNNs) for image classification \cite{zeiler2014visualizing}.
\item \textit{Prototype}: The intuition is to use representative original training data (i.e., prototypes) to explain a prediction; the prototypes can be selected by clustering the latent representations in the ML model. Given the class labels of each prototype and their similarities with the input data, the prediction and reasoning process becomes a scoring system where the class with the highest score (i.e., the sum of similarities of prototypes belonging to the class) is the returned result. This technique has been readily incorporated into deep neural networks for image, text, and sequence predictions \cite{chen2019looks,li2018deep,ming2019interpretable}.
\item \textit{Backpropagation}: Since complex models like neural networks contain a series of weight propagations from the input to the output neurons to produce predictions, one can invert the process and backpropagate the neurons with large gradients from the output back to the input data, locating the portion of the original data that causes the neuron activations in the output. Such a portion reveals the meaningful features that explain the model's decision. Saliency Maps \cite{simonyan2014deep}, DeepLIFT \cite{shrikumar2017learning}, and Intgrad \cite{sundararajan2017axiomatic} are examples of such methods.
\end{enumerate}
\subsection{Tasks Breakdown of Explainable AI}
\label{sec:tasks_appendix}
In general, ML model explainability is achieved by three different types of tasks \textbf{(T)}: \textbf{Global} (\textit{general behavior of a model}), \textbf{Local} (\textit{behavior of a model on an instance}), and \textbf{Class} (\textit{behavior of a model on a class}) \cite{guidotti2018survey,liao2020questioning}. Liao \textit{et al.} \cite{liao2020questioning} additionally provide actionable suggestions \textbf{(A)} for each task. For each task and action, we identify opportunities (\textbf{O}) where an explanation summary can help achieve the task.
\begin{itemize}[noitemsep]
\item[\textbf{T.1}]\textbf{Global Explanation}. The goal is to understand the overall weights of features used by the model, to explain how the AI makes decisions on the dataset in general.
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=6mm]
\item[\textbf{A.1}] Users \textit{select the important features} that affect the whole dataset's outcome to uncover data-related issues such as data collection, bias, and privacy.
\item[\textbf{A.2}] They may also \textit{evaluate the limits and capability of the model}. By inspecting the main features, users can develop a mental model to interact with or improve the system.
\end{itemize}
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=6mm]
\item[\textbf{O.1}] An explanation summary can define the appropriate \textit{level of detail} to explain the model without losing too many details or overwhelming the users. Grouping the relevant features and instances allows interactions for users to prioritize the information shown at a time.
\item[\textbf{O.2}] Subsetting the information by similarity also decreases the complexity of the explanation, since the instances and features shown have many common properties. This makes the visualized global explanations more \textit{representative}.
\end{itemize}
\item[\textbf{T.2}] \textbf{Local Explanation}. The goal here is to inspect the model's behavior on a specific instance and understand how the instance's properties influence the outcome.
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=6mm]
\item[\textbf{A.3}] A popular activity is to explore different \textit{what-if} scenarios. Users observe the outcome if some features become different, which helps to explore more scenarios of applying the model and gain insights into the model's capability.
\item[\textbf{A.4}] Another action is to directly understand \textit{why} the instance belongs to a prediction and \textit{why not} it results in other outcomes. This helps to discover the local decision boundaries of the model.
\item[\textbf{A.5}] Presenting the \textit{original input/data} provides a more holistic view of system capability to understand a particular decision and accommodates users' understandings and interactions.
\end{itemize}
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=6mm]
\item[\textbf{O.3}] Grouping similar instances provides \textit{neighbors of the instance} that are explained similarly by the model, which increases the number of instances supporting users' insights and findings. Can we conclude that ``ears'' are important in the prediction of ``cats'' from what we see in a single image? We can be far more confident if there exist many cat images exhibiting similar characteristics. An explanation summary thus allows a large set of instances to be analyzed, avoiding spurious conclusions \cite{wu2019errudite}.
\item[\textbf{O.4}] Similar to grouping instances, grouping features allows users to \textit{prioritize important features} that explain an instance and its neighbors, which reduces the cognitive workload when deriving an understanding of the model's decision logic.
\end{itemize}
\item[\textbf{T.3}] \textbf{Class Explanation} (\textit{Counterfactual}). How the model treats a prediction (class) is also an important emphasis. It is similar to a global explanation but with a smaller granularity, focusing on a specific class. Yet, the actions to understand a class are more similar to instance explanations, which focus on the sensitivity of features to each prediction.
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=6mm]
\item[\textbf{A.6}] Testing the sensitivity of features towards a prediction is equivalent to testing different \textit{what-if} scenarios. By testing different ranges of features, users can understand the decision boundaries of a prediction class.
\item[\textbf{A.7}] Besides interaction, exploring the relevant features of a prediction also helps to understand the \textit{why} and \textit{why not} cases of a prediction, giving insights into the decision logic.
\item[\textbf{O.5}] With groups of similar instances and features, users can apply different \textit{levels of detail} to acquire more precise subsets.
\item[\textbf{O.6}] Extending the findings for an instance to its similar neighbors inside a class increases confidence in the insights.
\end{itemize}
\end{itemize}
\subsection{End-to-End Explanation Modeling Pipeline}
\label{sec:pipeline}
In this section, we describe example pipelines for handling tabular, image, and text data that result in explanation matrices with the explanatory features in Table~\ref{table:nature}. We explicitly categorize each pipeline into \textbf{preprocessing}, \textbf{ML modeling}, and \textbf{explanation modeling} stages. Notice that these are not the only ways to achieve the data engineering objectives; the explanation models can be interchanged as well. In addition, we provide a synthetic example illustrating how the whole explanation process works, as well as our goal of summarizing the whole explanation data, in Figure~\ref{fig:synthetic_workflow}.
\subsubsection{Tabular Data}
\textbf{Preprocessing.} To enable logics as the explanatory features for tabular data, we need to preprocess the original data into one-hot encodings of logics under each attribute. For numerical and ordinal data, the attributes are first discretized into different quantiles. Then, one-hot encoding can be applied to transform the quantiles into separate columns, where 0 indicates the data does not fall into the range and 1 indicates it does. Discretizing attributes can be as straightforward as choosing a fixed number of equal intervals, or can leverage statistical properties such as entropy. In our use case, we use Sturges' rule to determine the number of quantiles, and the ranges of the quantiles are determined by the training data. One-hot encoding can also be directly applied to categorical attributes.
\noindent\textbf{ML modeling.} Then, the transformed data is used to train a neural network so that the logics are the input features. This allows the logics to be evaluated by the explanation methods.
\noindent\textbf{Explanation Modeling.} As the input features of the ML model are a set of logics, methods such as LIME and SHAP can be directly applied to the model and dataset to generate feature vectors composed of a set of logics.
\subsubsection{Images}
\textbf{Preprocessing.} For images, we do not need much feature engineering, as the explanatory features come from the pixels themselves. We only need to apply standard image augmentation techniques (i.e., replicating training images with scaling, rotating, and mirroring) to increase the training data size for better model accuracy.
\noindent\textbf{ML modeling.} We apply prototype learning inside a Convolutional Neural Network \cite{chen2019looks}. It adds a prototype layer on the last layer of the original neural network. The training process results in a selection of a fixed number of image patches from the training data as prototypes, which are used to reason about the prediction of new data.
\noindent\textbf{Explanation Modeling.} As the explanation model is already incorporated as a layer in the ML model, when new data comes in, an $m \times n$ explanation matrix can be constructed, where $m$ is the number of tested instances and $n$ is the number of prototypes.
\subsubsection{Text}
\textbf{Preprocessing.} Similar to images, the explanation of text comes from the text inside the documents. Thus, we only need to apply standard text preprocessing steps, like removing stopwords and infrequent words, to ensure the explanation models do not return explanations with meaningless topics.
\noindent\textbf{ML modeling.} We can use common text models such as RNNs and LSTMs to generate predictions.
Notice that the first layer of these models is usually a word embedding of the whole dataset. We can leverage these word embeddings to extract the topics in the dataset by clustering them.
\noindent\textbf{Explanation Modeling.} For the trained ML model, we can examine each word's importance to the prediction with gradient-based explanation models such as DeepLIFT and Intgrad. This results in an extremely sparse matrix where each feature is a word that appears in at least one document. Also, words with similar meanings, such as ``good'' and ``excellent'', are treated as different features. To densify the explanation matrix so that similar words are grouped and more significant hidden structures can be produced, we can transform the local explanation from a feature vector of words to a feature vector of topics. The explanation importance of each topic for an instance can be determined by the maximum explanation importance among the words in the topic. This allows words with similar semantics to be grouped before the matrix is summarized.
\begin{figure*} \centering \includegraphics[width=\linewidth]{tabular_matrix} \caption{ Explanation summary matrix of the HELOC dataset's explanation matrix. Rows represent the instances and columns represent the explanatory features. Vertical lines represent row clusters and horizontal lines represent column clusters. The color reflects the explanation values. } \label{fig:tabular_matrix} \end{figure*}
\begin{figure*} \centering \includegraphics[width=\linewidth]{image_matrix} \caption{ Explanation summary matrix of the Caltech-UCSD Birds-200-2011 Images' explanation matrix. Rows represent the instances and columns represent the explanatory features. Vertical lines represent row clusters and horizontal lines represent column clusters. The color reflects the explanation values. } \label{fig:image_matrix} \end{figure*}
\begin{figure*} \centering \includegraphics[width=\linewidth]{text_matrix} \caption{ Explanation summary matrix of the US Consumer Finance Complaints' explanation matrix. Rows represent the instances and columns represent the explanatory features. Vertical lines represent row clusters and horizontal lines represent column clusters. The color reflects the explanation values. } \label{fig:text_matrix} \end{figure*}
\section{Tasks Analysis of XAI Systems}
\label{sec:tasks}
Before we propose our methodology to generate an explanation summary for feature-importance-based explanations, we first review the taxonomy of XAI tasks to establish \textit{how a visual explanation summary of an ML model can help}. By understanding the tasks, we can consolidate the design considerations needed to expand our techniques into an effective user interface. To explain the tasks and the use of a data summary systematically, we use a simple XAI workflow (Figure~\ref{fig:need}) to connect the essential relationships among the three main XAI tasks~\cite{liao2020questioning}: \textbf{Global}, \textbf{Local}, and \textbf{Class} explanations.
\noindent\textbf{T.1 Global Explanation.} The goal is to understand the overall weights of features used by the model, to explain how the AI makes decisions on the dataset in general. For example, imagine we have a model that tells what animal an image contains.
To understand the model, the first question a user might ask is \textit{what features does the model use to make a prediction?} An XAI technique might give us a sorted list of features based on their influence on the model (Figure~\ref{fig:need}.1) -- it tells us that ``skin color'' is the most important factor.
\noindent\textbf{T.2 Class Explanation.} Understanding how and whether the model works in each class allows users to understand the decision boundaries at a smaller granularity to develop insights. For example, from a global explanation, ``eyes'' are used to predict many cats. ``Eyes'' and ``cat'' are the key pieces of information for understanding the model's rationale in a local region.
\noindent\textbf{T.3 Local Explanation.} For verification and inspection in full detail, users need to inspect all explanatory features of a predicted instance (Figure~\ref{fig:need}.3). For example, why is a cat predicted as ``dog''? The differences between instances' behaviors can reveal important decision boundaries. By inspecting a single image, we learn that the cat in the image has white skin, which is a ``dog'' feature.
\noindent\textbf{Usefulness of Explanation Summary.} First, it can provide a global explanation with a better granularity (Figure~\ref{fig:need}.2). Instead of aggregating the whole dataset to rank the features, it tells us directly that ``eyes'' are used on many cats while ``ears'' are used on many dogs. This answer avoids the mirage of features aggregated over different subsets. Also, clustering instances allows users to go from local to global explanation. For example, after browsing a cat image and finding it has a wrong prediction due to its white skin, we might want to know whether all cat images with the ``white skin'' feature will be predicted as ``dog''. By inspecting all the cat images or images that have ``white skin'' (Figure~\ref{fig:need}.4), we return to the inspection of a group of images.
In detail, the tasks in the above workflow generalize the studies that consolidate the key user requirements of model explainability \cite{adadi2018peeking,carvalho2019machine,gilpin2018explaining,guidotti2018survey,mohseni2018survey,ras2018explanation}. Furthermore, there are plenty of empirical studies on the requirements of XAI from industry practitioners \cite{amershi2019guidelines,boukhelifa2017data,hohman2019gamut,holstein2019improving,liao2020questioning,muller2019data,rule2018exploration}. They provide good empirical evidence from real experts to outline guidelines for designing XAI systems. The details of \textbf{T.1-3} derived from these studies are provided in Appendix~\ref{sec:tasks_appendix}.
\section{Related Work}
\label{sec:relwork}
To facilitate human understanding of complex models through visualization, research mainly focuses on three aspects: model internals, logics induced from the models, and instance-level feature vectors that describe the behavior of the model.
\subsection{Visualization of Model Internals}
\label{sec:relwork_internals}
Visualization has been applied readily to understand and interact with deep learning neural networks. In fact, a survey of deep learning visual analytics by Hohman \textit{et al.}\xspace \cite{hohman2018visual} lists more than 40 representative works in this area from the last 5 years. We encourage readers to consult the survey paper for a deeper investigation of the subject.
The simplest form of a neural network can be represented by a node-link diagram in which each node represents a neuron and each link represents a connection weight between two neurons \cite{tzeng2005opening}. As the ways neurons are connected have become more sophisticated and opaque, various visual analytics approaches have been developed to understand different properties of the networks. RNNVis \cite{ming2017understanding} and LSTMVis \cite{strobelt2017lstmvis} address the understanding of recurrent neural networks (RNNs) by visualizing, respectively, the bipartite relationship between hidden memories and input data, and hidden memory dynamics with parallel coordinates. Sequence-to-sequence models are addressed by Seq2SeqVis, which proposes a bipartite graph visualization of the attention relationships between an input and its possible translations to enable model debugging \cite{strobelt2018s}. Another popular type of model for image classification is the Convolutional Neural Network (CNN). CNNVis \cite{liu2016towards}, Blocks \cite{bilal2017convolutional}, AEVis \cite{liu2018analyzing}, and Summit \cite{hohman2019s} are graph visualizations that aggregate similar neurons, connections, and similarly activated image patches to convey the visual representations learned by the model.
Besides visualizing the structures of a neural network, there are visual analytics systems that assist model development processes in industry. ActiVis \cite{kahng2017cti} is a visual analytics system used by Facebook to explore industrial deep learning models. Google has developed TensorFlow Graph \cite{wongsuphasawat2017visualizing} and the What-If Tool \cite{wexler2019if} to help developers understand and test the behavior of different ML models.
The work in this category mainly addresses visual analysis for model developers who have sufficient knowledge of the methodologies behind their models. However, a more general AI tool requires the assessment and involvement of end-users, decision-makers, and domain experts. Addressing the needs of a broader XAI user experience, our work focuses on providing general explanations of ML models to users without requiring them to know the architectures.
\subsection{Visualization of Logical Models}
\label{sec:relwork_surrogate}
Logical models like decision trees \cite{craven1996extracting} or rules \cite{martens2008decompositional,yang2017scalable} can address the interpretation of complex models by inferring an approximated model from any ML model. Given a set of test data, the original model gives the predictions and the logical models use them as labels to train another classifier. The resulting classifier can be used to mimic the behavior of the original model while providing good interpretability to the users. Through visualizing logical models, users gain knowledge of the model's capability. RuleMatrix \cite{ming2018rulematrix} builds and visualizes a surrogate rule list so that users can understand the model's behavior by interacting with the rules. Gamut \cite{hohman2019gamut} uses generalized additive models to construct a set of linear functions for each feature in the dataset, enabling model understanding through line charts. TreePOD \cite{muhlbacher2017treepod} and BaobabView \cite{van2011baobabview} visualize decision trees with different metrics incorporated for model understanding. iForest \cite{zhao2018iforest} visualizes random forests with data flow diagrams to interact with multiple decision paths.
For complex models, explaining them with logical models raises the consideration of \textit{fidelity} -- the accuracy of the explanation with respect to the original model. It creates an additional layer of performance concerns. Therefore, local explanation methods have been proposed to explore the possibility of providing accurate explanations, or even of being embedded in the original model training process. Yet, they only return results for a single instance and do not provide a global explanation of the whole dataset; our work addresses the challenge of visually constructing a global view from local explanations.
\subsection{Feature Vector Visualization}
\label{sec:relwork_feature}
Local explanation models give feature scores for each instance. The features can be features from the original data \cite{lundberg2017unified,ribeiro2016should,shrikumar2017learning} or a set of external information like concepts \cite{kim2018interpretability} or training data \cite{chen2019looks,li2018deep,ming2019interpretable}. Visual analysis can be directly applied to interact with the features \cite{ming2019protosteer}, or the feature vectors can be visualized as a matrix where rows represent the instances and columns represent the features \cite{sawada2019model}. Besides, the data coming out of a deep neural network can appear as embeddings, such that the linear distances between vectors represent their similarities under the model's rationale. The main visualization technique for understanding these feature vectors is projection \cite{grover2016node2vec,li2018embeddingvis,liu2017visual,pezzotti2017deepeyes,rauber2016visualizing,xiang2020interactive}. Treating the embeddings as high-dimensional data, projection techniques such as t-SNE, MDS, or PCA are applied to discover semantic groups inside the dataset from the resulting scatterplot. Users can assess the ML model and refine the original data through brushing and filtering interactions in the projection.
Our technique addresses the scalability and usability challenges in existing visualizations. The projection technique mainly suffers from clutter and from the lack of feature information in the visualization, which is crucial for a comprehensive explanation. \textsc{Melody}\xspace aims at providing compact representations of both data and features so that the visual information is more precise. Also, we address the need for explanation exploration at different granularities with the proposed analytic workflow illustrated by \textsc{Melody UI}\xspace, providing new ways to extend powerful local explanations to scalable visual analytics.
\section{Introduction}\setcounter{equation}{0}
There are many works that study gapped boundaries in 2+1 d bosonic topological orders that are characterized by anyon condensation \cite{bais_broken_2002, bais_condensate-induced_2009, bais_theory_2009, hung_ground_2015, lan_gapped_2015, hung_generalized_2015, kapustin_topological_2011, davydov_witt_2010, kitaev_models_2012, Barkeshli:2013yta, barkeshli_classification_2013, Kong:2013aya, fuchs2013bicategories, levin_protected_2013, 2018ARCMP...9..307B,cong_topological_2016}. More recently, it has been realized that some gapped boundaries of these bosonic orders can be characterized by ``fermion'' condensations -- physically, this corresponds to emergent fermions pairing up with local free fermions, which subsequently condense at the boundary \cite{Aasen:2017ubm,Bhardwaj:2016clt, Lan_2016, Wan:2016php,Bhardwaj:2016clt}. The boundary thus necessarily becomes sensitive to the spin structure. Mathematically, these gapped boundaries can be characterized by (super) Frobenius algebras in the tensor category describing the topological order concerned. \footnote{Let us emphasize here that gapped interfaces are also characterized by anyon condensation. However, every gapped interface can be understood in terms of a gapped boundary by the folding trick. The main difference between a gapped interface and a gapped boundary is that across the interface there are still non-trivial bulk excitations, while the phase is trivial across a gapped boundary. The discussion here, to avoid clutter, addresses directly the gapped boundaries. The discussion however can be easily turned around into a discussion of gapped interfaces. }
The signature set of physical properties of a gapped boundary includes the collection of topological excitations and defects it supports, and their quantum dimensions and fusion properties, which control the ground state degeneracies of the system on an open manifold. As alluded to above, these gapped boundaries can be understood in terms of (super) Frobenius algebras. The physics of the gapped boundaries should be encoded in the mathematics, from which it could in principle be extracted systematically. This is indeed the case, particularly for bosonic gapped boundaries -- except that the techniques are dispersed in the physics and mathematics literature, the latter of which is often shrouded in a language completely foreign to physicists, and the formal principles laid out may not be readily converted into a practical computation. A practical way of computing fusion rules of defects localized at junctions between different gapped boundaries has been elucidated in \cite{Shen_2019}. In the case of fermion condensation, which has received attention only more recently \cite{Aasen:2017ubm,Bhardwaj:2016clt, Lan_2016, Wan:2016php}, a systematic study including non-Abelian fermion condensation and junctions remains largely an open problem.
We propose that a super-commutative separable special Frobenius algebra in the bulk topological order is responsible for characterizing its fermionic boundary conditions -- to our knowledge this is the first systematic use of the ``super-commutative'' version of the Frobenius algebra to describe fermionic boundaries and through which to work out their properties. We elucidate properties of defects in a gapped boundary or junctions characterized by fermion condensations.
This includes obtaining the full collection of topological defects, identifying their endomorphisms, and computing their fusion rules, which can be summarized by a (super) defect Verlinde formula. We extend the results of \cite{Aasen:2017ubm,Bhardwaj:2016clt, Lan_2016, Wan:2016php} to include non-Abelian condensates, as well as the study of junctions between (fermionic) gapped boundaries. We also develop new ways to compute the half-linking numbers.
To understand these results, it is most convenient if the reader is familiar with the computational tools available in braided tensor categories and their algebra objects. We hope to convey the power of computations using category theory -- while some of the examples discussed in the current paper can be obtained using their explicit realizations in field theory or lattice models, such as the boundaries and junctions of the toric code order, category theory is really an elaborate and generalized group theory that allows one to work out the basic features of these gapped phases and their boundaries in a clean way, without getting bogged down by extra details that pertain to a specific realization of the topological order and are in fact not universal to the order. Category theory is a powerful tool that keeps track of the combinatoric data assuming only that there are conserved (topological) charges and that they fuse in an associative way -- clearly model-independent features of a topological order.
Relevant mathematical results are mostly scattered across many different places, which may be a major hindrance to entering the subject. We therefore collect the most relevant tools to make the paper self-contained. We give up some mathematical rigor to make the language more readily accessible to working physicists with minimal prior experience in tensor category theory -- i.e., we give a collection of mathematical definitions we deem immediately relevant for doing computations, at least in the current context that makes heavy use of algebras in categories, and explain how these mathematical results can be used in explicit computations illustrated in examples. This hopefully fills a gap in most of the mathematical literature, which is dense in definitions and theorems but scarce in making connections with explicit computations. A formal and proper introduction to the subject can be found in numerous places in the literature. Of particular use are \cite{Fuchs_2002, Fuchs_2004, kirillov}, and references therein.
The paper is organized as follows. In section 2 we first give a brief review of (super) braided tensor categories. We also review algebra objects in a category and their representations. These results are then extended to include super-commutative algebras. The computation of half-linking numbers and the fusion rules of modules and bi-modules, in addition to their endomorphisms, are discussed. In section 3, we illustrate the results obtained in section 2 in explicit examples, namely the toric code model and the $D(S_3)$ quantum double, where we explicitly obtain the Frobenius algebras and the bi-modules that describe fermionic boundaries. We also demonstrate how their fusion rules are computed. In section 4, we describe the connection of the current results to supersymmetric CFTs and their topological defects. We also discuss the twisted version of the Verlinde formula that produces the difference between fermion parity even and odd channels in the fusion of primaries.
In section 5, we conclude with some miscellaneous facts about fermion condensation and various open problems to be addressed in the future.
Several lengthy computations have been relegated to the appendix. In particular, the computation of the 6j symbols describing the associativity of fusion of the boundary excitations is explained and illustrated in detail there. We also include more sophisticated examples of fermion condensates in $SU(2)_{10}$ and $D(D_4)$, which involve condensates of multiple fermions. In particular, in the latter example some of these (super) modular invariants do not appear to correspond to a condensate that preserves fermion parity. Whether these examples have physical meaning should be explored in greater detail in the future. We present in appendix B the counting of Majorana modes localized at junctions using the Abelian Chern-Simons description of the toric code order, and compute the entanglement entropy on a cylinder with different fermionic/bosonic boundary conditions. Some topological data of the $D(S_3)$ quantum double is reviewed in appendix C.
\section{A Physicist's skeletal manual to tensor category and gapped boundaries -- review and generalizations to fermionic boundaries}
\label{sec:intro_cat}
Tensor categories cover a huge class of mathematical structures. To the author, the framework has a structure not unlike an onion, where extra structures can be included layer by layer, adding to the complexity of the situation. As far as 2+1 d topological order is concerned, the categories of interest are (braided, or in fact modular) fusion categories. \footnote{Let us emphasize here that we are considering the mathematical abstraction of the topological order itself. We note that the construction of explicit lattice models of topological order -- such as the Kitaev models and Levin-Wen models -- requires the data of a fusion category as input. That is often called the ``input category'', as opposed to the resultant topological order, which is called the ``output category''. Using this language, we are describing only the ``output'' category in the current paper. }
In the following, we collect the most important results that will actually be used in the rest of the paper. Rather than listing all the algebraic equations in one fell swoop, as if all the properties were supposed to appear together from the beginning, we present these results in a way that emphasizes that many of the properties are in fact independent. Each add-on property is an extra mathematical structure on the construct, and each such addition has to be made consistent with all the other qualifiers already included, very often leading to extra consistency constraints -- which is the origin of the many algebraic equations characterizing a given tensor category.
\subsection{The basics of braided fusion tensor categories}
\label{sec:introcat}
Let us summarize the most basic concepts below. \footnote{Materials here can be found reviewed for example in \cite{Bonderson_2008,Barkeshli_2019} and references therein. Our emphasis on explicit basis construction for morphisms can be attributed to \cite{Fuchs_2002}. }
\vspace{1cm}
{\bf \underline{Simple Objects}}\\
The most basic structure is the collection of objects. Simple objects describe the different species of point excitations in a 2+1 d topological order. Physically interesting theories are semi-simple categories, where every object can be decomposed as a direct sum of simple objects, which form a basis of elementary particles.
\begin{equation} a = \oplus_i m_{ai} c_i, \qquad a, c_i \in C, \qquad m_{ai} \in \mathbb{Z}_{\ge 0}. \end{equation}
The multiplicities $m_{ai}$ are non-negative integers.
\vspace{0.5cm}
{\bf \underline{Morphism}}\\
Morphisms, or homomorphisms, often denoted Hom($a,b$), are maps taking $a$ to $b$. Morphisms reveal structures of the objects. In a bosonic theory, a simple object, describing a point particle with {\it no internal structure}, has a 1-dimensional ``endomorphism'' space, i.e. there is a unique map taking a simple object to itself (an {\it endomorphism}): the identity map. Graphically it is often represented as a straight line. \\
There is no morphism between two distinct simple objects $a$ and $b$; such maps exist only when $a=b$. For example if we have
\begin{equation} a = \oplus_i m_{a i } c_i, \qquad b= \oplus_i m_{b i}c_i, \end{equation}
where $c_i $ are the simple objects in $C$, then
\begin{equation} \textrm{dim} [\textrm{Hom}(a,b)] = \sum_i m_{ai} m_{bi}. \end{equation}
We can construct a basis for these morphisms from the maps between the composite object $a$ and the simple objects $c_i$. This is illustrated in (\ref{eq:hombasis}), where the basis index $\alpha$ runs from 1 to $m_{ai}$.
\begin{figure}[h] \begin{equation} \label{eq:hombasis} \mathtikzS{0.7}{ \coordinate["$a$" right] (A) at (0,1cm); \coordinate (O) at (0,-0.2cm); \node[right] at (O) {$\alpha$}; \coordinate["$c_i$" right] (B) at (0,-1.5cm); \draw [thick] (A) -- (O); \draw [thick] (O) -- (B); \tri{0}{0}{270}; } \in\quad \Hom(c_i,a), \quad \alpha\in \{1,\dots,m_{ai}\} \end{equation} \end{figure}
Let us note that all these pictures, here and in the rest of the paper, have an orientation. One could think of them as the likes of Feynman diagrams, each describing a process, and the orientation here is chosen (unless otherwise specified) such that ``time'' flows from the bottom to the top. Flipping a diagram upside down is equivalent to taking the conjugate, where each anyon species is replaced by its dual and any coefficient by its complex conjugate.
States in a Hilbert space are constructed from bases of morphisms. In particular, when we attempt to count the number of states in a topological order with a given number of anyons, the Hilbert space is basically the space of morphisms that map the collection of anyons to appropriate objects -- e.g. to the trivial anyon if the system is on a closed manifold with no boundaries. These ``maps of a collection of anyons to another anyon'' are part of an extra mathematical structure -- namely fusion -- that is discussed below.
One important technical issue is a phase ambiguity in constructing a basis for morphisms. Rescaling a given basis in (\ref{eq:hombasis}) by $\zeta^a_{c_i}(\alpha)$ defines an equally good basis.
\begin{figure}[h] \begin{equation} \label{eq:hombasis_rescale} \mathtikzS{0.7}{ \coordinate["$a$" right] (A) at (0,1cm); \coordinate (O) at (0,-0.2cm); \node[right] at (O) {$\alpha$}; \coordinate["$c_i$" right] (B) at (0,-1.5cm); \draw [thick] (A) -- (O); \draw [thick] (O) -- (B); \tri{0}{0}{270}; } \to \zeta^a_{c_i}(\alpha) \mathtikzS{0.7}{ \coordinate["$a$" right] (A) at (0,1cm); \coordinate (O) at (0,-0.2cm); \node[right] at (O) {$\alpha$}; \coordinate["$c_i$" right] (B) at (0,-1.5cm); \draw [thick] (A) -- (O); \draw [thick] (O) -- (B); \tri{0}{0}{270}; } \end{equation} \end{figure}
\vspace{0.5cm}
{\bf \underline{Fusion}}\\
Anyons obey a commutative and associative fusion algebra:
\begin{equation} a\otimes b = \sum_{c} N_{ab}^c c, \label{eq:bulk_fusion} \end{equation}
where $N_{ab}^c=N_{ba}^c$ is a non-negative integer specifying the number of different ways in which anyons $a$ and $b$ can fuse to $c$. A special object $\mathbf{1}$, called the trivial object (vacuum), fuses trivially with all other objects: $\mathbf{1}\otimes a=a$.\footnote{In the appendix and in some literature $\mathbf{0}$ is also used to label the trivial object.} The building block of the states of the anyonic Hilbert space is the fusion basis, represented diagrammatically by a vertex:
\begin{equation} \label{eq:fusion_basis} \ket{a,b;c,\mu} \quad = \quad \left( \frac{d_c}{d_a d_b} \right)^{1/4} \quad \mathtikzS{0.5}{ \vertexIC{0}{0}{1cm}{black}; \node[right] at (0,0) {$\mu$}; \node[above] at (0,1cm) {$c$}; \node[below] at (1cm,-1cm) {$b$}; \node[below] at (-1cm,-1cm) {$a$}; } \end{equation}
where $\mu=1,\dots,N_{ab}^c$. The number $d_i$ is the quantum dimension of anyon $i$, which will be briefly reviewed later in this section. The collection of all fusion trees with the same input/output legs spans a subspace of the Hilbert space, namely the fusion space $V^{ab}_c$. Note that the construction of an explicit basis of these linear maps $V^{ab}_c$, or equivalently the definition of the states (\ref{eq:fusion_basis}), is ambiguous up to a phase $\xi^{ab}_c$; this is the same kind of rescaling of the morphism basis as we have seen in (\ref{eq:hombasis_rescale}). Larger fusion bases are constructed by taking tensor products of the building blocks in an appropriate order.
Associativity of anyon fusion is given by $(a\otimes b)\otimes c = a\otimes(b\otimes c)$. It follows that the corresponding fusion space $V^{abc}_d$ has two sets of bases with respect to the fusion order, and the basis transformation in this fusion space is captured by the $F$-symbols\footnote{For simplicity we have suppressed the vertex multiplicity label $\mu$, which can be easily restored in case of non-trivial fusion multiplicity.}, as shown in (\ref{eq:Fmove}).
\begin{equation} \mathtikzS{0.4}{ \coordinate["$i$" above] (A) at (-3cm,3cm); \coordinate["$j$" above] (B) at (-1cm,3cm); \coordinate["$k$" above] (C) at (1cm,3cm); \coordinate["$l$" below right] (O) at (0,0); \coordinate (D) at (-2cm,2cm); \coordinate (E) at (-1cm,1cm); \coordinate["$m$" below left] (M) at (-1.5cm,1.5cm); \draw[thick] (O) -- (A); \draw[thick] (D) -- (B); \draw[thick] (E) -- (C); } =\quad\sum_n\quad (F_l^{ijk})_{mn}^* \mathtikzS{0.4}{ \coordinate["$i$" above] (A) at (-3cm,3cm); \coordinate["$j$" above] (B) at (-1cm,3cm); \coordinate["$k$" above] (C) at (1cm,3cm); \coordinate["$l$" below right] (O) at (0,0); \coordinate (E) at (-1cm,1cm); \coordinate (F) at (0,2cm); \coordinate["$n$" below right] (N) at (-0.5cm,1.5cm); \draw[thick] (O) -- (A); \draw[thick] (C) -- (E); \draw[thick] (B) -- (F); } \label{eq:Fmove} \end{equation}
The $F$-symbols are not independent; they are related by a coherence condition known as the pentagon equation, as shown in figure \ref{fig:pentagon}. Pentagon equations are sufficient to solve for all $F$-symbols in an anyonic model.
\begin{figure} \centering \input{tikzPic/pentagon.tex} \caption{Pentagon equation w.r.t. the fusion space $(V^{abcd}_e)^*$.} \label{fig:pentagon} \end{figure}
Note that, because of the phase ambiguity mentioned above, the $F$-symbols are not invariant under these rescalings. This freedom allows one to fix some of the components of $F$.
\vspace{1cm}
{\bf \underline{Quantum dimension}}\\
A category is called \textit{pivotal} if every simple object $a$ has a unique dual $a^*$. Given any simple object $a$, the dual of $a$ is a simple object $a^*$ satisfying
\begin{equation} a\otimes a^* = \mathbf{1} + \cdots, \end{equation}
where $\mathbf{1}$ is the trivial object (vacuum). Diagrammatically, any line labeled by $a^*$ is equivalent to a line labeled by $a$ but with the direction reversed. The pivotal structure is essential in the definition of quantum dimension. The quantum dimension of a simple object is defined as the quantum trace (the pivotal trace) of the identity operator.
\begin{equation} d_a = \Tr(\id_a) = \mathtikzS{0.6}{ \draw[thick] (0,0) circle [radius=1cm]; \node[right] at (1cm,0) {$a$}; } \label{eq:qd_def} \end{equation}
The diagram in (\ref{eq:qd_def}) does not depend on the orientation of the loop, so we can freely replace $a$ by $a^*$; therefore $d_a=d_{a^*}$. The quantum dimension of the trivial object $\mathbf{1}$ is $d_{\mathbf{1}}=1$ in any anyonic model. Quantum dimension is conserved under anyon fusion; following the fusion rule (\ref{eq:bulk_fusion}) we have
\begin{equation} d_a d_b = \sum_c N_{ab}^c d_c. \end{equation}
The notion of quantum dimension can be generalized to an arbitrary object in the category $C$; in particular
\begin{equation} \dim(A)=\sum_a m_a d_a,\quad \text{for any object } A=\sum_a m_a a \in C. \end{equation}
The total quantum dimension of the category $C$ is defined as
\begin{equation} D_C=\sqrt{\sum_a d_a^2}. \end{equation}
\vspace{1cm}
{\bf \underline{Braiding and Twist (spin) }}\\
Using the fusion tree basis, the braiding exchange operator can be represented by the $R$-symbols shown below:
\begin{equation} \mathtikzS{0.35}{ \vertexI{0}{0}{1cm}; \crossingI{0cm}{-2cm}{1cm}; \node[below] at (-1cm,-3cm) {$a$}; \node[below] at (1cm,-3cm) {$b$}; \node[above] at (0,1cm) {$c$}; } =\quad R^{ab}_c \mathtikzS{0.5}{ \vertexI{0}{0cm}{1cm}; \node[below] at (-1cm,-1cm) {$a$}; \node[below] at (1cm,-1cm) {$b$}; \node[above] at (0,1cm) {$c$}; }.
\end{equation}
In the special case where $b=a^*$ and $c=\mathbf{1}$, the $R$-symbol reduces to the \textit{spin} of an anyon.
\begin{equation} \mathtikzS{0.5}{ \anticrossingI{0}{0}{0.5cm}; \draw[thick] (-0.5cm,-0.5cm) -- (-1cm,0) -- (-0.5cm,0.5cm); \draw[thick] (0.5cm,0.5cm) -- (0.5cm,1.5cm); \draw[thick] (0.5cm,-0.5cm) -- (0.5cm,-1.5cm); \node[right] at (0.5cm,-1.5cm) {$a$}; } = \quad \theta_a \mathtikzS{0.5}{ \draw[thick] (0,-1.5cm) -- (0,1.5cm); \node[right] at (0,-1.5cm) {$a$}; }. \end{equation}
Taking the trace of the above relation, we know from (\ref{eq:qd_def}) that the spin can be expressed as
\begin{equation} \theta_a = \frac{1}{d_a} \mathtikzS{0.8}{ \anticrossingI{0}{0}{0.5cm}; \draw[thick] (-0.5cm,-0.5cm) -- (-1cm,0) -- (-0.5cm,0.5cm); \draw[thick] (0.5cm,-0.5cm) -- (1cm,0) -- (0.5cm,0.5cm); \node[below] at (0.5cm,-0.5cm) {$a$}; }. \end{equation}
Each anyon has a definite spin: bosons have $\theta_a=1$ while fermions have $\theta_a=-1$. We require that braiding and fusion commute. Diagrammatically this means we can freely move lines across a vertex.
\begin{equation} \mathtikzS{0.5}{ \vertexI{0}{0}{-0.5cm}; \draw[thick] (-0.5cm,0.5cm) -- (-0.5cm,2.5cm); \draw[thick] (0.5cm,0.5cm) -- (0.5cm,2.5cm); \draw[thick] (0,-0.5cm) -- (0,-1cm); \draw[thick] (-1cm,-1cm) -- (-1cm,0.5cm) -- (-0.6cm,0.9cm); \draw[thick] (-0.4cm,1.1cm) -- (0.4cm,1.9cm); \draw[thick] (0.6cm,2.1cm) -- (1cm,2.5cm); } = \mathtikzS{0.5}{ \vertexI{0}{0}{-0.5cm}; \draw[thick] (-0.5cm,0.5cm) -- (-0.5cm,2.5cm); \draw[thick] (0.5cm,0.5cm) -- (0.5cm,2.5cm); \draw[thick] (0,-0.5cm) -- (0,-1cm); \draw[thick] (-0.5cm,-1cm) -- (-0.1cm,-0.6cm); \draw[thick] (0.1cm,-0.4cm) -- (1cm,0.5cm) -- (1cm,2.5cm); } \end{equation}
$R$-symbols and $F$-symbols are not independent. In order for braiding to be compatible with fusion, a coherence condition must be satisfied by the $F$-symbols and $R$-symbols, which may be expressed diagrammatically as the hexagon equation shown in figure \ref{fig:hexagon}. Given the solution of the $F$-symbols, the hexagon equations are sufficient to solve for all $R$-symbols.
\begin{figure}[H] \centering \input{tikzPic/hexagon.tex} \caption{Hexagon equation w.r.t. the fusion space $(V^{abc}_d)^*$.} \label{fig:hexagon} \end{figure}
\vspace{1cm}
{\bf \underline{A note on Super-fusion category}}
A useful discussion of super-fusion categories can be found in \cite{Gu_2015, Aasen:2017ubm}. The above discussion applies to a generic fusion category. In the presence of the fermion condensation in which we will be interested below, the resultant gapped boundary carries extra structure connected to the $\mathbb{Z}_2$ fermion parity. To accommodate that structure, we need to upgrade the notion of a fusion category to a super-fusion category.
There are many definitions of super-categories. At the level of the objects, one may decompose the objects into a direct sum of even and odd parity objects \footnote{See for example \cite{Brundan_2017} and also some of the references that appear in \cite{Aasen:2017ubm} }.
\begin{equation} \label{eq:decomposeC} C= C_0 \oplus C_1. \end{equation}
However, this definition is quite restrictive. For the gapped boundaries considered here, and more so when it comes to defects localized between boundaries, such a decomposition is not very clear.
Therefore, we will adopt the discussion in \cite{Gu_2015, Aasen:2017ubm} -- which keeps track of the fermion parity of morphisms and does not discuss a decomposition of the objects themselves as in (\ref{eq:decomposeC}).
The most distinctive characteristic of a super-fusion category is then the appearance of fermion parity odd morphisms.
First, the allowed endomorphism space of simple objects is enlarged. A simple object could potentially carry a ``fermion parity odd'' map to itself, in addition to the usual identity map, which carries even fermion parity. i.e. In this case, $\dim[\mathrm{Hom}(a,a)] =2$. Pictorially, the two ``basis maps'' to itself are represented as in the figure below. Simple objects having a two-dimensional endomorphism space are referred to as {\bf q-type objects} in the literature \cite{Aasen:2017ubm}.
\begin{equation} \mathtikzS{0.7}{ \coordinate["$a$" right] (A) at (0,1cm); \coordinate (O) at (0,-0.2cm); \node[right] at (O) {$\alpha$}; \coordinate["$a$" right] (B) at (0,-1.5cm); \draw [thick] (A) -- (O); \draw [thick] (O) -- (B); \tri{0}{0}{270}; } \in\quad \Hom(a,a), \quad \alpha\in\{1,2\} \end{equation}
Second, fusion, being a morphism from $C\otimes C \to C$, could also acquire both parity even and odd channels. They are illustrated in (\ref{eq:even_odd_fuse}). Odd channels are often represented with an extra red dot on the vertex.
\begin{equation} \label{eq:even_odd_fuse} \mathtikzS{0.6}{ \vertexIL{0}{0}{1cm}{$a$}{$b$}{$c$}; } \quad \text{and} \quad \mathtikzS{0.6}{ \vertexIL{0}{0}{1cm}{$a$}{$b$}{$c$}; \fill [fill=red] (0,0) circle [radius=0.1cm]; } \end{equation}
There is an ambiguity in the definition of the fusion coefficients. Consider the following fusion process:
\begin{equation} a\otimes b = \oplus_c \Delta_{ab}^c \, c \end{equation}
The fusion is an element in Hom$(a\otimes b, c)$. If $c$ is a q-type object that has a two-dimensional endomorphism space, the fusion map could be composed with a non-trivial endomorphism of $c$ and remain an element in Hom$(a\otimes b, c)$. In other words, while defining $\Delta_{ab}^c$, we have implicitly made a choice in discarding possible endomorphisms of $c$. This does not appear natural. Therefore, it is proposed in \cite{Aasen:2017ubm} to enlarge the fusion space to
\begin{equation} \label{eq:superfuse} V_{ab}^c = \Delta_{ab}^c \otimes \textrm{End}(c). \end{equation}
Under this definition, the fermionic ``dots'' can be freely moved from the vertex to the connecting anyon lines if they are q-type objects \cite{Aasen:2017ubm}.
Since fusion spaces are $\mathbb{Z}_2$ graded, and the odd channels essentially carry a Majorana mode, which leads to sign changes under the swapping of labels \cite{Gu_2015, Aasen:2017ubm}, the pentagon equation has to be upgraded to keep track of these labellings and signs. This leads to the super-pentagon equation illustrated in (\ref{fig:super_pent}).
\begin{figure}[H] \centering \begin{equation} \label{fig:super_pent} \input{tikzPic/super_pentagon.tex} \end{equation} \end{figure}
Note that the operation $K$ refers to the exchange of the labels of the vertices. Depending on their fermion parity (i.e., if both vertices carry odd parity), there is a sign change.
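Before turning to algebras, it is worth keeping the simplest examples in mind (the toric code will be our main example in section 3). Its anyon content and fusion rules are
\begin{equation}
\{\mathbf{1}, e, m, \psi\}, \qquad e\otimes e = m\otimes m = \psi\otimes\psi = \mathbf{1}, \qquad e\otimes m = \psi,
\end{equation}
with $d_{\mathbf{1}}=d_e=d_m=d_\psi=1$ and spins $\theta_e=\theta_m=1$, $\theta_\psi=-1$, so that $\psi$ is the emergent fermion whose condensation will concern us below. For an example with non-trivial quantum dimensions, the Fibonacci fusion rule $\tau\otimes\tau=\mathbf{1}\oplus\tau$, combined with the conservation of quantum dimension, gives $d_\tau^2 = 1+d_\tau$, i.e. $d_\tau=(1+\sqrt{5})/2$.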
\subsection{Gapped boundaries and (super)-Frobenius Algebra in tensor categories} \label{sec:frobeniusalgebra} It is well known that each bosonic gapped boundary of a non-chiral bosonic topological order $C$ in 2+1 dimensions is characterized by a (commutative separable symmetric) Frobenius algebra in $C$ \cite{Kong:2013aya, kirillov}. \footnote{It is pointed out to us that the relevant mathematics first appeared in \cite{B_ckenhauer_1999,B_ckenhauer_2000,bckenhauer2000longorehren}.} Physically the algebra encodes the condensation of a collection of bosons at the boundary. While there are already ample hints elsewhere, such as \cite{Aasen:2017ubm, Wan:2016php, Bhardwaj:2016clt}, of attempts to obtain the collection of excitations in the condensed child theory, the discussion there does not provide a systematic framework to compute the excitations in the child theory, not to mention excitations localized between different boundaries. Here, we propose that a fermionic gapped boundary is also encoded in a separable symmetric Frobenius algebra, except that ``commutativity'' is relaxed to ``super-commutativity'' to accommodate the condensation of fermionic anyons. To reiterate, the relevant mathematical structure is a super-commutative separable symmetric Frobenius algebra. Each of these labels will be discussed below, where we collect the necessary facts about Frobenius algebras and anyon condensation. In this section, we rely heavily on \cite{kirillov}, and particularly \cite{Fuchs_2002, Fuchs_2004}, which have developed many useful tools and proved numerous identities related to Frobenius algebras and their modules that assist us significantly in our quest -- the reason being that a super-commutative Frobenius algebra is a Frobenius algebra after all. Applications of these tools to understand super Frobenius algebras and their q-type modules/bi-modules are among the main goals of the current paper. \vspace{1cm} {\bf \underline{Algebra and co-algebra}} An {\bf algebra} in the category $C$ is a collection $\mathcal{A}$ of simple objects equipped with a product $\mu$ and unit $\iota_\mathcal{A}$. This collection is expressible as \cite{kirillov, Fuchs_2002} \begin{equation} \mathcal{A} = \oplus_i W_{ i 1} c_i, \qquad W_{ i 1} \in \mathbb{Z}_{\ge 0}, \qquad c_i \in C. \end{equation} This collection of anyons, when equipped with the appropriate set of structures that we will discuss below, would be identified with the set of anyons that condense at the gapped boundary. The product $\mu$ maps $\mathcal{A} \times \mathcal{A} \to \mathcal{A}$. It is associative (\ref{fig:protrivialass}). Further, we have already constructed a complete basis of maps (or homomorphisms) from $C\times C \to C$ in the previous section. Therefore this product $\mu$ must be expressible in terms of the basis constructed out of the simple objects.
\begin{equation} \centering \begin{tikzpicture}\label{fig:protrivialass} \draw [black, ultra thick] (0,0.5) to [out=90, in=90] (1,0.5); \draw [black, ultra thick] (0,0) -- (0,0.5); \draw [black, ultra thick] (1,0) -- (1,0.5); \draw [black, ultra thick] (0.5,1.5) to [out=90, in=90] (2,1.5); \draw [black, ultra thick] (0.5,0.8) -- (0.5,1.5); \draw [black, ultra thick] (2,0) -- (2,1.5); \draw [black, ultra thick] (1.25,1.92) -- (1.25,2.5); \draw [black, thick] (2.325,0.9) -- (2.675,0.9); \draw [black, thick] (2.325,1) -- (2.675,1); \draw [black, ultra thick] (4,0.5) to [out=90, in=90] (5,0.5); \draw [black, ultra thick] (4,0) -- (4,0.5); \draw [black, ultra thick] (5,0) -- (5,0.5); \draw [black, ultra thick] (3,1.5) to [out=90, in=90] (4.5,1.5); \draw [black, ultra thick] (4.5,0.8) -- (4.5,1.5); \draw [black, ultra thick] (3,0) -- (3,1.5); \draw [black, ultra thick] (3.75,1.92) -- (3.75,2.5); \node at (0.3,0) {$\mathcal{A}$} node at (1.3,0) {$\mathcal{A}$} node at (2.3,0) {$\mathcal{A}$} node at (3.3,0) {$\mathcal{A}$} node at (4.3,0) {$\mathcal{A}$} node at (5.3,0) {$\mathcal{A}$} node at (0.8,1) {$\mathcal{A}$} node at (4.8,1) {$\mathcal{A}$} node at (1.55,2.2) {$\mathcal{A}$} node at (4.05,2.2) {$\mathcal{A}$}; \end{tikzpicture} \end{equation}\\ This is illustrated in (\ref{eq:Aproduct}). The $\zeta$ in (\ref{eq:Aproduct}) labels the fusion channel $i\otimes j \rightarrow k$ in the bulk. \begin{figure}[h] \centering \begin{equation} \label{eq:Aproduct} \mathtikzS{0.8}{ \vertexI{0}{0}{1.5cm}; \node[right] at (0,0) {$\mu$}; \node[above] at (0,1.5cm) {$\mathcal{A}$}; \node[below] at (-1.5cm,-1.5cm) {$\mathcal{A}$}; \node[below] at (1.5cm,-1.5cm) {$\mathcal{A}$}; } =\quad\sum_{i,j,k\in \mathcal{A}}~\sum_{\alpha,\beta,\gamma,\zeta} \quad\mu_{(i\alpha)(j\beta)}^{(k\gamma);\zeta} \mathtikzS{0.8}{ \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {$\mathcal{A}$}; \node[below] at (-1.5cm,-1.5cm) {$\mathcal{A}$}; \node[below] at (1.5cm,-1.5cm) {$\mathcal{A}$}; \vertexIC{0}{0}{1cm}{green}; \tri{-1cm}{-1cm}{45}; \tri{0}{1.1cm}{270}; \tri{1cm}{-1cm}{135}; \node[right] at (0,0.5cm) {$k$}; \node[above] at (-0.65cm,-0.65cm) {$i$}; \node[above] at (0.65cm,-0.65cm) {$j$}; \node[right] at (0,1cm) {$\gamma$}; \node[left] at (-0.8cm,-0.8cm) {$\bar{\alpha}$}; \node[right] at (0.8cm,-0.8cm) {$\bar{\beta}$}; \filldraw [fill=yellow] (0,0) circle [radius=0.05cm]; \node[right] at (0,0) {$\zeta$}; } \end{equation} \end{figure} The multiplicity $W_{i 1}$ determines the dimension of the maps (homomorphisms) from $\mathcal{A}$ to the simple object $c_i$. We therefore introduce a label $\alpha$ as we did in (\ref{eq:hombasis}). Defining the product $\mu$ is equivalent to solving for the coefficients defining the linear combination of basis maps -- note that they are subjected to the same phase ambiguity as discussed in (\ref{eq:hombasis_rescale}). \begin{figure}[h] \centering \begin{equation} \label{fig:condensation_mapA} \mathtikzS{0.7}{ \coordinate["$\mathcal{A}$" right] (A) at (0,1cm); \coordinate (O) at (0,-0.2cm); \node[right] at (O) {$\alpha$}; \coordinate["$i$" right] (B) at (0,-1.5cm); \draw [thick] (A) -- (O); \draw [thick, green] (O) -- (B); \tri{0}{0}{270}; } \quad \alpha\in\{ 1,2,\dots,W_{i1} \} \end{equation} \end{figure} A {\bf unit} $\iota_\mathcal{A}$ is a morphism from the vacuum $\mathbf{1}$ to $\mathcal{A}$. 
This morphism is in fact an embedding $\iota_\mathcal{A}:\mathbf{1}\hookrightarrow\mathcal{A}$, so in any mathematical expression the unit can be simply understood as the vacuum object $\mathbf{1}$ despite its nature as a morphism. A more accurate yet pedagogical understanding of the unit is to think of it as ``taking out the vacuum $\mathbf{1}$ from $\mathcal{A}$''. Alternatively, when we do arithmetic, any number $x$ satisfies $x = 1\cdot x$; the unit map just means we can freely multiply any number by unity. The vacuum fuses trivially with all objects in the category; translating this back into a statement about morphisms, we see immediately that the unit has to satisfy the morphism equality $\mu\circ(\iota_\mathcal{A}\otimes \id_{\mathcal{A}})=\id_{\mathcal{A}}$. Here $\id_{\mathcal{A}}$ is the identity map in $\mathcal{A}$. This equality is illustrated in (\ref{eq:unit}).
\begin{equation} \mathtikzS{0.7}{ \draw[thick] (0,-1cm) -- (0,1cm); \draw[thick] (-0.5cm, -0.5cm) -- (0,0); \fill (-0.5cm,-0.5cm) circle [radius=0.05cm]; \node[below] at (0,-1cm) {$\mathcal{A}$}; \node[left] at (-0.5cm,-0.5cm) {$\iota_\mathcal{A}$}; } = \mathtikzS{0.7}{ \draw[thick] (0,-1cm) -- (0,1cm); \node[below] at (0,-1cm) {$\mathcal{A}$}; } \label{eq:unit} \end{equation}
Each algebra in a category has a unique unit, which means the vacuum appears exactly once in the algebra $\mathcal{A}$. A {\bf co-algebra} is a collection $\mathcal{A}$ of simple objects equipped with a co-product $\Delta$, which maps $\mathcal{A} \to \mathcal{A} \times \mathcal{A}$, and a counit $\varepsilon_\mathcal{A}$. While this co-product operation may look mysterious, we have a very familiar example in physics. Consider for example two electrons with spins $S_1$ and $S_2$ respectively. The action of spatial rotations on these spins is effected through the total angular momentum operator $\hat S = \hat S_1 + \hat S_2$. This is an example where we are ``splitting'' an $SU(2)$ group element into the product of two $SU(2)$ group elements, expressed as a sum of operators in the corresponding Lie algebra. As in the case of the product $\mu$, the map $\Delta$ can be expressed in terms of the basis maps constructed in $C$. This is illustrated in the following.
\begin{equation} \mathtikzS{0.8}{ \vertexIC{0}{0}{-1.5cm}{black}; \node[right] at (0,0) {$\Delta$}; \node[below] at (0,-1.5cm) {$\mathcal{A}$}; \node[above] at (1.5cm,1.5cm) {$\mathcal{A}$}; \node[above] at (-1.5cm,1.5cm) {$\mathcal{A}$}; } =\quad\sum_{i,j,k\in\mathcal{A}}~\sum_{\alpha,\beta,\gamma,\xi}\quad \Delta_{(k\gamma);\xi}^{(i\alpha)(j\beta)} \mathtikzS{0.8}{ \vertexIC{0}{0}{-1.5cm}{black}; \node[below] at (0,-1.5cm) {$\mathcal{A}$}; \node[above] at (1.5cm,1.5cm) {$\mathcal{A}$}; \node[above] at (-1.5cm,1.5cm) {$\mathcal{A}$}; \vertexIC{0}{0}{-1cm}{green}; \tri{1cm}{1cm}{225}; \tri{-1cm}{1cm}{315}; \tri{0cm}{-1.1cm}{90}; \node[right] at (0.5cm,0.5cm) {$j$}; \node[right] at (-0.5cm,0.5cm) {$i$}; \node[right] at (0,-0.5cm) {$k$}; %
\node[right] at (0.8cm,0.8cm) {$\beta$}; \node[left] at (-0.8cm,0.8cm) {$\alpha$}; \node[right] at (0,-1cm) {$\bar{\gamma}$}; %
\filldraw [fill=yellow] (0,0) circle [radius=0.05cm]; \node[right] at (0,0) {$\xi$}; } \end{equation}\\
Similar to the product, the co-product $\Delta$ is also co-associative (\ref{fig:coprotrivialass}).
\begin{equation} \centering \begin{tikzpicture}\label{fig:coprotrivialass} \draw [black, ultra thick] (0,-0.5) to [out=-90, in=-90] (1,-0.5); \draw [black, ultra thick] (0,0) -- (0,-0.5); \draw [black, ultra thick] (1,0) -- (1,-0.5); \draw [black, ultra thick] (0.5,-1.5) to [out=-90, in=-90] (2,-1.5); \draw [black, ultra thick] (0.5,-0.8) -- (0.5,-1.5); \draw [black, ultra thick] (2,0) -- (2,-1.5); \draw [black, ultra thick] (1.25,-1.92) -- (1.25,-2.5); \draw [black, thick] (2.325,-0.9) -- (2.675,-0.9); \draw [black, thick] (2.325,-1) -- (2.675,-1); \draw [black, ultra thick] (4,-0.5) to [out=-90, in=-90] (5,-0.5); \draw [black, ultra thick] (4,0) -- (4,-0.5); \draw [black, ultra thick] (5,0) -- (5,-0.5); \draw [black, ultra thick] (3,-1.5) to [out=-90, in=-90] (4.5,-1.5); \draw [black, ultra thick] (4.5,-0.8) -- (4.5,-1.5); \draw [black, ultra thick] (3,0) -- (3,-1.5); \draw [black, ultra thick] (3.75,-1.92) -- (3.75,-2.5); \node at (0.3,0) {$\mathcal{A}$} node at (1.3,0) {$\mathcal{A}$} node at (2.3,0) {$\mathcal{A}$} node at (3.3,0) {$\mathcal{A}$} node at (4.3,0) {$\mathcal{A}$} node at (5.3,0) {$\mathcal{A}$} node at (0.8,-1) {$\mathcal{A}$} node at (4.8,-1) {$\mathcal{A}$} node at (1.55,-2.2) {$\mathcal{A}$} node at (4.05,-2.2) {$\mathcal{A}$}; \end{tikzpicture} \end{equation}\\
A {\bf counit} $\varepsilon_\mathcal{A}$ is the direction-reversed analogue of the unit, namely a morphism from $\mathcal{A}$ to the vacuum $\mathbf{1}$. Like the unit, the counit satisfies a similar morphism equality: $(\varepsilon_\mathcal{A} \otimes \id_{\mathcal{A}})\circ\Delta = \id_{\mathcal{A}}$. The equality is illustrated by
\begin{equation} \mathtikzS{0.7}{ \draw[thick] (0,-1cm) -- (0,1cm); \draw[thick] (-0.5cm, 0.5cm) -- (0,0); \fill (-0.5cm,0.5cm) circle [radius=0.05cm]; \node[below] at (0,-1cm) {$\mathcal{A}$}; \node[left] at (-0.5cm,0.5cm) {$\varepsilon_\mathcal{A}$}; } = \mathtikzS{0.7}{ \draw[thick] (0,-1cm) -- (0,1cm); \node[below] at (0,-1cm) {$\mathcal{A}$}; }, \end{equation}
which is a direction reversed version of (\ref{eq:unit}). A {\bf Frobenius algebra} $\mathcal{A}$ is both an algebra and a co-algebra. It is equipped with a product and a co-product at the same time. By now, it should be familiar that every time an extra structure is introduced, we have to determine how the new structure and all the previous structures already in place should fit together. Its properties are conveniently summarised in the following picture.
\begin{equation} \mathtikzS{0.6}{ \vertexIC{0}{0}{-1cm}{black}; \node[left] at (0,0) {$\Delta$}; \vertexI{1cm}{1cm}{1cm}; \node[right] at (1cm,1cm) {$\mu$}; \draw[thick] (2cm,0cm) -- (2cm,-1cm); \draw[thick] (-1cm,1cm) -- (-1cm,2cm); \node[below] at (0,-1cm) {$\mathcal{A}$}; \node[below] at (2,-1cm) {$\mathcal{A}$}; \node[above] at (-1cm,2cm) {$\mathcal{A}$}; \node[above] at (1cm,2cm) {$\mathcal{A}$}; } = \mathtikzS{0.6}{ \vertexI{0}{0}{1cm}; \antivertexI{0}{1cm}{1cm}; \node[left] at (0,0) {$\mu$}; \node[left] at (0,1cm) {$\Delta$}; \node[below] at (-1cm,-1cm) {$\mathcal{A}$}; \node[below] at (1,-1cm) {$\mathcal{A}$}; \node[above] at (-1cm,2cm) {$\mathcal{A}$}; \node[above] at (1cm,2cm) {$\mathcal{A}$}; } = \mathtikzS{0.6}{ \vertexI{0}{0}{1cm}; \antivertexI{1cm}{-1cm}{1cm} \draw[thick] (-1cm,-1cm) -- (-1cm,-2cm); \draw[thick] (2cm,0cm) -- (2cm,1cm); \node[left] at (0,0cm) {$\mu$}; \node[right] at (1cm,-1cm) {$\Delta$}; \node[below] at (-1cm,-2cm) {$\mathcal{A}$}; \node[below] at (1,-2cm) {$\mathcal{A}$}; \node[above] at (0cm,1cm) {$\mathcal{A}$}; \node[above] at (2cm,1cm) {$\mathcal{A}$}; } \end{equation}
We have also included the conditions for the algebra being ``separable'' and symmetric. A Frobenius algebra $\mathcal{A}$ is called {\bf separable} if there exists a map $e:\mathcal{A}\rightarrow \mathcal{A}\otimes\mathcal{A}$ such that $\mu\circ e = \id_{\mathcal{A}}$. A Frobenius algebra $\mathcal{A}$ is called {\bf special} (a.k.a. strongly separable) if the product $\mu$ is the inverse of the coproduct $\Delta$ and the unit $\iota_{\mathcal{A}}$ is the inverse of the counit $\varepsilon_{\mathcal{A}}$ up to normalization, namely\footnote{This condition is sometimes called \textit{normalized special}.}
\begin{equation} \label{eq:separability} \mathtikzS{0.5}{ \antivertexI{0}{-1cm}{1cm}; \vertexI{0}{1cm}{1cm}; \node[below] at (0,-2cm) {$\mathcal{A}$}; \node[above] at (0,2cm) {$\mathcal{A}$}; \node[left] at (0,-1cm) {$\Delta$}; \node[left] at (0,1cm) {$\mu$}; } = \mathtikzS{0.5}{ \draw[thick] (0,-2cm) -- (0,2cm); \node[below] at (0,-2cm) {$\mathcal{A}$}; } \qquad \text{and} \qquad \mathtikzS{0.7}{ \draw[thick] (0,-1cm) -- (0,1cm); \fill (0,1cm) circle [radius=0.1cm]; \fill (0,-1cm) circle [radius=0.1cm]; \node[left] at (0,-1cm) {$\iota_{\mathcal{A}}$}; \node[left] at (0,1cm) {$\varepsilon_{\mathcal{A}}$}; } =\quad\dim\mathcal{A} \end{equation}
The separability of $\mathcal{A}$ allows a well-defined notion of simple objects in the representation category $\Rep \mathcal{A}$, and of non-simple objects as direct sums of simple objects.
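A simple example to keep in mind (a sketch, assuming the standard toric-code conventions): in the toric code $D(\mathbb{Z}_2)$ with anyons $\{\mathbf{1},e,m,\psi\}$, the object
\begin{equation}
\mathcal{A} = \mathbf{1}\oplus e, \qquad \dim\mathcal{A} = d_{\mathbf{1}} + d_{e} = 2,
\end{equation}
with the product inherited from the fusion rule $e\otimes e = \mathbf{1}$, furnishes a separable Frobenius algebra of the above kind; it encodes the gapped boundary on which the boson $e$ condenses.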
A Frobenius algebra $\mathcal{A}$ is called {\bf symmetric} if the product $\mu$ and the counit $\varepsilon_{\mathcal{A}}$ satisfy\footnote{This condition looks slightly different from (3.33) of \cite{Fuchs_2002}; there the authors have made implicit the composition of the unit $\iota_{\mathcal{A}}$ and the coproduct $\Delta$.}
\begin{equation} \mathtikzS{0.4}{ \vertexI{0}{0}{1cm}; \draw[thick] (-1cm,-1cm) -- (-1cm,-3cm); \draw[thick] (1cm,-1cm) -- (2cm,0) -- (2cm,2cm); \draw[thick] (1cm,-1cm) -- (1cm,-2cm); \fill (1cm,-2cm) circle [radius=0.1cm]; \node[below] at (1cm,-2cm) {$\iota_{\mathcal{A}}$}; \node[right] at (1cm,-1cm) {$\Delta$}; \fill (0,1cm) circle [radius=0.1cm]; \node[left] at (0,1cm) {$\varepsilon_{\mathcal{A}}$}; \node[below] at (-1cm,-3cm) {$\mathcal{A}$}; \node[above] at (2cm,2cm) {$\mathcal{A}$}; \node[left] at (0,0) {$\mu$}; } = \mathtikzS{0.4}{ \vertexI{0}{0}{1cm}; \draw[thick] (1cm,-1cm) -- (1cm,-3cm); \draw[thick] (-1cm,-1cm) -- (-2cm,0) -- (-2cm,2cm); \draw[thick] (-1cm,-1cm) -- (-1cm,-2cm); \fill (-1cm,-2cm) circle [radius=0.1cm]; \node[below] at (-1cm,-2cm) {$\iota_{\mathcal{A}}$}; \node[right] at (-1cm,-1cm) {$\Delta$}; \fill (0,1cm) circle [radius=0.1cm]; \node[left] at (0,1cm) {$\varepsilon_{\mathcal{A}}$}; \node[below] at (1cm,-3cm) {$\mathcal{A}$}; \node[above] at (-2cm,2cm) {$\mathcal{A}$}; \node[left] at (0,0) {$\mu$}; }. \end{equation}
A {\bf commutative algebra} is one where $\mu \circ R_{\mathcal{A}, \mathcal{A}} = \mu$. This is illustrated in the following figure. The collection of anyons $\mathcal{A}$ can condense physically if they are mutually local and bosonic. It turns out that the above condition is sufficient to imply both.
\begin{equation} \mathtikzS{0.6}{ \vertexI{0}{0}{0.7cm}; \crossingI{0cm}{-1.4cm}{0.7cm}; \node[below] at (0.7cm,-2.1cm) {$\mathcal{A}$}; \node[below] at (-0.7cm,-2.1cm) {$\mathcal{A}$}; \node[above] at (0,0.7cm) {$\mathcal{A}$}; } = \mathtikzS{0.6}{ \vertexI{0}{0}{1cm}; \node[above] at (0,1cm) {$\mathcal{A}$}; \node[below] at (-1cm,-1cm) {$\mathcal{A}$}; \node[below] at (1cm,-1cm) {$\mathcal{A}$}; }. \end{equation}
We have explained all the necessary qualifiers of the algebra object that describe a condensate. The condensate should ultimately behave like the vacuum, or trivial anyon, in the condensed phase, where intuitively it could be freely created or annihilated without causing any changes to the states. These mathematical structures introduced above are therefore physical requirements of the condensed anyons. To accommodate also fermionic anyons condensing, one has to relax the commutative condition to ``super-commutativity''. A {\bf super algebra} is one which is graded by the $\mathbb{Z}_2$ (fermion parity) symmetry. Therefore, the algebra $\mathcal{A}$ acquires a decomposition \cite{Creutzig:2017anl}
\begin{equation} \mathcal{A} = \mathcal{A}_0 \oplus \mathcal{A}_1. \end{equation}
An anyon $i$ belonging to $\mathcal{A}_{p}$, with $p\in\{0,1\}$, carries fermion parity $(-1)^p$; we denote this $\mathbb{Z}_2$ grading by $\sigma(i)=p$. This decomposition allows us to define {\bf super-commutativity}, which is given by \footnote{There are again many mathematics papers discussing super algebras. However, many of the structures are probably not suited for the purpose here. We will restrict to the bare minimum of structures which are actually used in the current paper. Our discussion of super-commutativity is inspired by \cite{Creutzig:2017anl}.
However, we have somewhat modified it to make it compatible with non-Abelian fermions taking part in $\mathcal{A}$, which is beyond \cite{Creutzig:2017anl} or \cite{Aasen:2017ubm}. }
\begin{equation} \label{eq:super_braid} \mu \circ R_{c_i, c_j} (-1)^{\sigma(c_i)\sigma(c_j)} = \mu, \qquad c_i, c_j \in \mathcal{A}, \end{equation}
where $R_{c_i,c_j}$ is the half-braid of the modular tensor category $C$. The mathematical definition has a simple physical interpretation -- this is precisely to bind the fermionic anyon with a ``free fermion'' so that the pair together behaves like a boson and condenses. The idea of pairing is also discussed in \cite{Aasen:2017ubm}. Here we have made it explicit that this is the physical realization of super-commutativity in the mathematics literature. The dimension $D_{\mathcal{A}}$ of the algebra is defined as
\begin{equation} D_{\mathcal{A}} = \sum_{i}W_{i1} d_i, \qquad i \in \mathcal{A}. \end{equation}
A {\bf (super)-Lagrangian algebra} is a (super)-commutative Frobenius algebra satisfying
\begin{equation} D_{\mathcal{A}} = D_C. \end{equation}
This is a necessary and sufficient condition for a bosonic algebra to recover a modular invariant, and apparently a sufficient condition to recover a super-modular category. As we will demonstrate in the example of $D(D_4)$ in the appendix \ref{sec:dd4}, there are examples where a super-modular invariant with positive integer coefficients does not admit an interpretation as a fermion condensate. In the following, we will study the representations (modules) of the (super)-algebra. It is the modules that form a super-fusion category in the sense defined in section \ref{sec:introcat}. These modules are the boundary excitations. Note that the braided structure is not preserved in this fusion category describing boundary excitations. \subsection{Defects via construction of modules} Having introduced the concept of an algebra in a category $C$ which plays the role of our condensate at the gapped boundary, we would like to obtain the collection of allowed defects, or excitations, at the boundary. When an anyon in the bulk phase approaches the gapped boundary, it would generally become an excitation at the boundary. However, since a bunch of anyons are ``condensed'' at the boundary, and can be freely created and annihilated there, a bulk anyon and its fusion product with a condensed anyon might no longer be distinguishable at the boundary. In other words, the bulk anyons form ``multiplets'' under fusion with the condensed anyons. The corresponding mathematical jargon would be that the boundary excitations are modules (or representations) of the condensate algebra $\mathcal{A}$. We note that for a (super)-commutative Frobenius algebra, these left/right modules form a fusion category. In physical terms -- excitations at the gapped boundary have well defined fusion rules. In this subsection, we summarise how to recover these modules. \subsubsection{Left (Right) Modules} Each module $M$ of $\mathcal{A}$ in $C$ is also a collection of anyons in $C$, i.e.,
\begin{equation} \label{eq:Mdecompose} M = \oplus_i W_{i M} c_i. \end{equation}
Again, there are maps from $M \to C$, as well as their dual maps from $C \to M$, which are illustrated in (\ref{fig:condensation_map}).
\begin{figure}[htbp] \centering \begin{equation} \label{fig:condensation_map} \begin{tikzpicture}[scale=0.7] \draw [green, thick] (0,2) -- (0,0); \draw [magenta, thick] (0,0) -- (0,-2); \tri{0}{-0.25cm}{90}; \node at (0.3,2) {$i$}; \node at (0.4,-2) {$M$}; \node at (0.4,0) {$\bar\alpha$}; \node at (-1,0) {$b_M^{(i\alpha)}:=$}; \begin{scope}[xshift=4cm,yshift=0cm]{ \draw [green, thick] (0,-2) -- (0,0); \draw [magenta, thick] (0,0) -- (0,2); \tri{0}{0.25cm}{270}; \node at (0.3,-2) {$i$}; \node at (0.4,2) {$M$}; \node at (0.4,0) {$\alpha$}; \node at (-1,0) {$b_{(i\alpha)}^M:=$}; } \end{scope} \begin{scope}[xshift=10cm,yshift=-1cm]{ \scalebox{0.7}[0.7]{ \draw [magenta, thick] (0,-2) -- (0,0); \draw [green, thick] (0,0) -- (0,2); \draw [magenta, thick] (0,2) -- (0,4); \tri{0}{-0.25cm}{90} \tri{0}{2.25cm}{270} \node at (0.3,1) {\Large{$i$}} node at (0.4,-2) {\Large{$M$}} node at (0.4,4) {\Large{$M$}} node at (-1,1) {\Large{$\sum_{i,\alpha}$}} node at (1.5,1) {\Large{$=$}} node at (0.4,2) {\Large{$\alpha$}} node at (0.4,0) {\Large{$\bar{\alpha}$}}; \draw [magenta, thick] (3,-2) -- (3,4); \node at (3.5,4) {\Large{$M$}}; } } \end{scope} \begin{scope}[xshift=17cm,yshift=-1cm]{ \scalebox{0.7}[0.7]{ \draw [green, thick] (0,-2) -- (0,0); \draw [magenta, thick] (0,0) -- (0,2); \draw [green, thick] (0,2) -- (0,4); \tri{0}{0.25cm}{270} \tri{0}{1.75cm}{90} \node at (-0.6,1) {\Large{$M$}} node at (0.4,4) {\LARGE{$i$}} node at (0.4,-2) {\LARGE{$i$}} node at (1.5,1) {\LARGE{$=$}} node at (0.4,0) {\Large{$\beta$}} node at (0.4,2) {\Large{$\bar{\alpha}$}}; \draw [green, thick] (4,-2) -- (4,4); \node at (2.8,1) {\LARGE{$\delta_{ij}\delta_{\alpha\beta}$}} node at (4.4,4) {\LARGE{$i$}}; } } \end{scope} \node at (0.8,-0.2) {,} node at (4.8,-0.2) {,} node at (10,0) {and}; \end{tikzpicture} \end{equation} \end{figure}
As a ``representation'' of the algebra $\mathcal{A}$, each module admits an action of $\mathcal{A}$ on it, i.e., there is a linear map $\rho^M_\mathcal{A} : \mathcal{A} \times M \to M$. Since these anyons have non-trivial mutual braiding, we should specify whether $\mathcal{A}$ is acting on the left or on the right of $M$ at the gapped boundary. Here we will assume that the action is on the left, making $M$ a ``left-module''. In the case of a commutative or super-commutative algebra $\mathcal{A}$, the right action can be generated from the left action, simply by composing the product with an R-crossing \cite{Fuchs_2002}. In the following, unless otherwise specified, we will explicitly discuss left actions, and the results apply automatically to right actions. Again, these (left or right) actions are linear maps which can be expressed in terms of the basis of morphisms $C \times C \to C$ (fusion) we have constructed in the previous section. The map $\rho^M_\mathcal{A}$ can thus be explicitly expressed in terms of this basis, as illustrated in (\ref{eq:reps}).
\begin{equation} \label{eq:reps} \mathtikzS{0.7}{ \draw[thick, magenta] (0,-2cm) -- (0,2cm); \draw[thick] (-2cm,-2cm) -- (0,0); \node[below] at (0,-2cm) {$M$}; \node[above] at (0,2cm) {$M$}; \node[below] at (-2cm,-2cm) {$\mathcal{A}$}; \node[left] at (0,0) {$\rho$}; } \quad=\quad \sum_{a,i,j,\alpha,\gamma,\beta} \sum_{\delta} \quad \rho^{M(i\gamma);\delta}_{(a\alpha)(j\beta)} \mathtikzS{0.7}{ \draw[thick, magenta] (0,-2cm) -- (0,2cm); \draw[thick] (-2cm,-2cm) -- (0,0); \node[below] at (0,-2cm) {$M$}; \node[above] at (0,2cm) {$M$}; \node[below] at (-2cm,-2cm) {$\mathcal{A}$}; \draw[thick,green] (0,-1.3cm) -- (0,1.3cm); \draw[thick,green] (-1.3cm,-1.3cm) -- (0,0); \node[right] at (0,0.5cm) {$i$}; \node[right] at (0,-0.5cm) {$j$}; \node[left] at (-0.5cm,-0.5cm) {$a$}; \filldraw [fill=yellow] (0,0) circle [radius=0.08cm]; \triL{0}{1.3cm}{270}{0.5}; \triL{0}{-1.3cm}{90}{0.5}; \triL{-1.3cm}{-1.3cm}{45}{0.5}; \node[right] at (0,0) {$\delta$}; \node[right] at (0,-1.2cm) {$\bar{\beta}$}; \node[left] at (-1.1cm,-1.1cm) {$\bar{\alpha}$}; \node[right] at (0,1.2cm) {$\gamma$}; } \end{equation} As a representation of the algebra $\mathcal{A}$, $\rho^M_{\mathcal{A}}$ must satisfy (\ref{eq:associativity_module}). This is nothing but a generalization of our familiar property of a group representation, in which \begin{equation} \label{eq:reps_eq} \rho^M(gh)_{ab} = \sum_{c} \rho^M (g)_{ac} \rho^M(h)_{cb}, \end{equation} where $\rho^M(g)_{ab}$ is the representation matrix of the group element $g\in G$. Besides, it is well known that the irreducible representations of a group satisfy an orthogonality relation \begin{equation} \frac{1}{|G|}\sum_{g\in G}(\rho^M(g)^*)^b_a(\rho^{M'}(g))^d_c=\frac{1}{dim(M)}\delta^d_a\delta^b_c\delta^{M,M'} \end{equation} where $M$ and $M'$ are two irreducible representations of the group $G$. A similar orthogonality relation is satisfied by the simple modules $M$, $M'$ of a special Frobenius algebra $\mathcal{A}$, as illustrated in (\ref{eq:orthogonality}) \cite{Fuchs_2002}. 
\begin{figure}[htbp] \begin{equation}\label{eq:orthogonality} \centering \begin{tikzpicture}[scale=0.8] \begin{scope}[xshift=11cm,yshift=0.5cm]{ \draw [thick, green] (-3,-2) -- (-3,3.5); \node at (-6.5,0.5) {$=\frac{dim(i)}{dim(M)}\delta^{(j,\beta)}_{(j',\beta')}\delta^{(i,\alpha)}_{(i',\alpha')}\delta_{M,M'}$}; \node at (-2.8,3.5) {\tiny{$j$}}; } \end{scope} \draw [thick, green] (0.3,2.5) -- (0.3,4); \node at (0.5,4) {\tiny{$j$}}; \node at (0.5,2.85) {\tiny{$\bar\beta$}}; \draw [thick, magenta] (0.3,3) -- (0.3,2); \draw [thick, green] (0.3,2) -- (0.3,1); \scalebox{0.5}[0.5]{\tri{0.6cm}{4.3cm}{270}} \scalebox{0.5}[0.5]{\tri{0.6cm}{5.7cm}{90}} \node at (0.2,1.6) {\tiny{$i$}}; \node at (0.5,2.15) {\tiny{$\alpha$}}; \draw [thick] (0.3,2.5) to [out=180,in=90] (-0.3,2); \draw [thick] (-0.3,2) to [out=-90,in=60] (-1,0.75); \node at (0,2.7) {\tiny{$\rho^{M}_A$}}; \draw [thick, green] (0.3,1) -- (0.3,0.5); \node at (0.5,0.9) {\tiny{$i'$}}; \node at (0.5,0.35) {\tiny{$\bar\alpha'$}}; \draw [thick, magenta] (0.3,0.5) -- (0.3,-0.5); \draw [thick, green] (0.3,-0.5) -- (0.3,-1.5); \scalebox{0.5}[0.5]{\tri{0.6cm}{-0.7cm}{270}} \scalebox{0.5}[0.5]{\tri{0.6cm}{0.7cm}{90}} \node at (0.1,-1.5) {\tiny{$j'$}}; \node at (0.5,-0.35) {\tiny{$\beta'$}}; \draw [thick] (0,-0.3) -- (-0.5,-0.8); \draw [thick] (-1,-0.3) -- (-0.5,-0.8); \draw [thick] (-0.5,-1.3) -- (-0.5,-0.8); \draw [fill] (-0.5,-1.3) circle [radius=0.05]; \draw [thick] (0,-0.3) to [out=45,in=180] (0.3,0); \draw [thick] (-1,-0.3) to [out=135,in=-120] (-1,0.75); \node at (-0.7,-0.2) {$\mathcal{A}$}; \node at (0,0.2) {\tiny{$\rho^{M'}_A$}}; \end{tikzpicture} \end{equation} \end{figure}
In fact, after the left actions $\rho^M$ are expanded in terms of the basis, they form a basis of $\Hom(\mathcal{A}\otimes j,k)$ \cite{Fuchs_2002}. Hence any morphism $\phi \in \Hom(\mathcal{A}\otimes j,k)$ can be expressed as a linear combination of the left actions on the modules, schematically as
\begin{equation} \phi=\sum_{M} \lambda_{M,\{\alpha\}} \rho^{M, \{\alpha\}}. \end{equation}
Here $\{\alpha\}$ labels the basis of $\rho^M$ when it is expanded explicitly as maps in $C$; this will be made explicit in the following. To extract the coefficients $\lambda$, we can use the orthogonality relation (\ref{eq:orthogonality}), which then gives (\ref{eq:extract_lambda}) \cite{Fuchs_2002}.
\begin{figure}[htbp] \begin{equation} \label{eq:extract_lambda} \centering \begin{tikzpicture} \begin{scope}[xshift=0.75cm,yshift=0.5cm]{ \draw [thick, green] (-4,-2) -- (-4,2); \node at (-4.7,0) {$\lambda_{M,\alpha}^\beta$}; \node at (-3,0) {$=\frac{dim(M)}{dim(i)}$}; \node at (-3.8,2) {\tiny{$j$}}; } \end{scope} \draw [thick, green] (0,2.5) -- (0,1.5); \draw [thick, green, fill=yellow] (-0.5,1.5)--(0.5,1.5)--(0.5,1)--(-0.5,1)--(-0.5,1.5); \node at (0.2,2.5) {\tiny{$j$}}; \node at (0,1.25) {$\phi$}; \draw [thick, green] (0.3,1) -- (0.3,0.5); \draw [thick] (-0.3,1) -- (-0.3,0.75); \node at (0.5,0.75) {\tiny{$i$}}; \node at (0.5,0.35) {\tiny{$\bar\alpha$}}; \draw [thick, magenta] (0.3,0.5) -- (0.3,-0.5); \draw [thick, green] (0.3,-0.5) -- (0.3,-1.5); \scalebox{0.5}[0.5]{\tri{0.6cm}{-0.7cm}{270}} \scalebox{0.5}[0.5]{\tri{0.6cm}{0.7cm}{90}} \node at (0.5,-1.5) {\tiny{$j$}}; \node at (0.5,-0.35) {\tiny{$\beta$}}; \draw [thick] (0,-0.3) -- (-0.5,-0.8); \draw [thick] (-1,-0.3) -- (-0.5,-0.8); \draw [thick] (-0.5,-1.3) -- (-0.5,-0.8); \draw [fill] (-0.5,-1.3) circle [radius=0.05]; \draw [thick] (0,-0.3) to [out=45,in=180] (0.3,0); \draw [thick] (-1,-0.3) to [out=135,in=-90] (-0.3,0.75); \node at (-0.7,-0.2) {$\mathcal{A}$}; \node at (0.1,0.2) {\tiny{$\rho^{M}_\mathcal{A}$}}; \end{tikzpicture} \end{equation} \end{figure}
Note that the abstract basis label $\{\alpha\}$ showing up above actually corresponds to the labels of the basis maps projecting the modules to anyons in $C$ in (\ref{eq:extract_lambda}), i.e. $\{\alpha\} \to \alpha, \beta$. This identity is very useful. We note that (super)-commutativity of the Frobenius algebra $\mathcal{A}$ allows us to work simply with left modules. It also ensures that the resultant collection of boundary excitations (modules) forms a (super) fusion category (i.e. the structure of fusion is well defined) \cite{kirillov, Fuchs_2002}. To a physicist, this means that it makes sense to look for edge excitations, which are always expressible as linear combinations of some basic excitations (the simple/irreducible representations) of the algebra, and that these excitations have well defined fusion rules. The 6j-symbols responsible for associativity of the fusion of the boundary excitations can be computed systematically, as soon as the precise multiplication $\mu$ of the condensate algebra $\mathcal{A}$ and the left/right action of the algebra on its modules are solved for. Since this is relatively tedious and lengthy, we relegate the computation to the appendix. In practice, $W_{c_i M}$ is the crucial data needed to work out $\rho^M_{\mathcal{A}}$ using equation (\ref{eq:associativity_module}). One important handle for solving for $W$ is the inspection of induced modules. Modules can be ``induced'' by fusion with the condensate $\mathcal{A}$. The product $\mu$ would automatically supply the correct structure to produce a left (right) action satisfying (\ref{eq:associativity_module}) described above. This is illustrated in (\ref{eq:ind_mod}) \cite{kirillov, Fuchs_2002}.
\begin{figure}[htbp] \begin{equation} \label{eq:associativity_module} \centering \begin{tikzpicture} \node at (-0.5,0) {\LARGE{$=$}}; \draw [thick] (-3,-0.4) -- (-2,0.6); \draw [thick] (-3,-1.4) -- (-2,-0.4); \draw [thick] (-0.3,-1.4) -- (1.7,0.6); \draw [thick] (0.7,-0.4) -- (0.7,-1.4); \draw [thick, magenta] (1.7,1.6) -- (1.7,-1.4); \draw [thick, magenta] (-2,1.6) -- (-2,-1.4); \node at (-3.2,-0.4) {$\mathcal{A}$} node at (-3.2,-1.4) {$\mathcal{A}$} node at (-0.5,-1.4) {$\mathcal{A}$} node at (0.5,-1.4) {$\mathcal{A}$} node at (0.5,-0.2) {$\mathcal{A}$} node at (-1.6,1.6) {$M$} node at (2.1,1.6) {$M$}; \end{tikzpicture} \end{equation} \end{figure}
\begin{equation} \label{eq:ind_mod} \mathtikzS{1}{ \draw[thick, purple] (0,-1cm) -- (0,1cm); \draw[thick] (-1cm,-1cm) -- (0,0); \node[right] at (0,0) {$\rho_{\Ind_{\mathcal{A}}(c_i)}$}; \node[below] at (0,-1cm) {$\mathcal{A}\otimes c_i$}; \node[above] at (0,1cm) {$\mathcal{A}\otimes c_i$}; \node[below] at (-1cm,-1cm) {$\mathcal{A}$}; } = \mathtikzS{1}{ \vertexI{0}{0}{1cm}; \node[left] at (0,0) {$\mu$}; \node[below] at (-1cm,-1cm) {$\mathcal{A}$}; \node[below] at (1cm,-1cm) {$\mathcal{A}$}; \node[above] at (0,1cm) {$\mathcal{A}$}; \begin{scope}[xshift=2cm] \draw[thick, green] (0,-1cm) -- (0,1cm); \node[below] at (0,-1cm) {$c_i$}; \node[above] at (0,1cm) {$c_i$}; \end{scope} } \end{equation}
These induced representations following from $\mathcal{A} \otimes c_i$, denoted Ind${}_{\mathcal{A}}(c_i)$, are generally reducible. They can be expressed in terms of the simple (irreducible) modules as \cite{Fuchs_2002}
\begin{equation} \textrm{Ind${}_{\mathcal{A}}(c_i)$} = \sum_{x} \lambda_x \rho^{M_x}_\mathcal{A}. \end{equation}
These parameters $\lambda$ can be solved for using the identity illustrated in (\ref{eq:extract_lambda}). This identity allows very efficient computation of the modules -- particularly when the induced module is itself simple. It is possible to generate all the simple modules by constructing induced modules. The deduction of the $W$-matrix is greatly facilitated by the identities relating quantum dimensions discussed in section \ref{sec:quantumdim} below. {\bf \underline{Endomorphisms}} As we have emphasized in multiple instances, one novel ingredient in a fermionic gapped system (describable by a super-fusion category) is that some of the modules have non-trivial {\it endomorphisms} -- i.e. the q-type objects we have referred to earlier. We need to identify which of the modules we obtained correspond to q-type objects. This is discussed in \cite{Aasen:2017ubm} in the context of Abelian fermion condensation. There, it is observed that anyons that are ``fixed points'' under fusion with the condensing fermion are q-type objects when considered as modules (or boundary excitations) of the condensate algebra $\mathcal{A}$, i.e. fixed-point anyons satisfy
\begin{equation} \label{eq:fixedpoint} \psi \otimes a = a, \end{equation}
where $\psi \in \mathcal{A}$, and $d_{\psi} =1$. In the case of non-Abelian condensation and where the defects are localized at the junctions, the above condition (\ref{eq:fixedpoint}) is not well defined. In these cases, endomorphisms of the modules can be deduced by applying identities discussed in \cite{Fuchs_2002} and also solving for the modules explicitly. These methods are to be reviewed and extended in the next two subsections. The identities applicable to junction defects will be discussed separately in section \ref{sec:junctions}.
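To illustrate the induced-module construction and the fixed-point criterion in the simplest settings (a sketch, assuming the standard toric-code and Ising data): for the bosonic condensate $\mathcal{A}=\mathbf{1}\oplus e$ in the toric code, the induced modules
\begin{equation}
\Ind_{\mathcal{A}}(\mathbf{1}) = \mathbf{1}\oplus e, \qquad \Ind_{\mathcal{A}}(m) = m\oplus\psi,
\end{equation}
already exhaust the simple modules -- the boundary vacuum and the confined $m$-type excitation -- each with a trivial endomorphism space. In contrast, for the fermionic condensate $\mathcal{A}=\mathbf{1}\oplus\psi$ in the Ising theory, $\sigma$ satisfies the fixed-point condition (\ref{eq:fixedpoint}), and correspondingly $\langle \Ind_{\mathcal{A}}(\sigma), \Ind_{\mathcal{A}}(\sigma)\rangle_{\mathcal{A}} = \langle \sigma, \mathcal{A}\otimes\sigma\rangle_C = 2$ by the reciprocity identities reviewed below, signalling a q-type boundary excitation.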
A necessary signature of a non-trivial endomorphism is that the {\it same} module acquires two independent left (right) actions. The situation can easily be confused with the case when a single anyon is {\it split} into two boundary excitations (i.e. two simple modules). This situation has been discussed for example in \cite{Eliens:2013epa}. These two situations are distinguished precisely using identities relating quantum dimensions in the parent and the condensed theory that we will discuss below. \subsubsection{Quantum dimension and endomorphism -- some useful identities} \label{sec:quantumdim} This section explains a novel application of various useful identities proved in \cite{Fuchs_2002, Fuchs_2004}. These identities connect the quantum dimension of the defect and those of the anyons composing it. Since they are very useful and powerful, we would like to reproduce some of them here. To simplify notation, let us follow \cite{Fuchs_2002, Fuchs_2004} and denote
\begin{equation} \dim[\textrm{Hom}(a,b)] = \langle a, b\rangle. \end{equation}
A module $M$ as a collection of anyons in $C$ has a quantum dimension in $C$ given by
\begin{equation} \textrm{dim}_C(M) = \sum_i W_{i M} d_{c_i}, \end{equation}
where now we can write
\begin{equation} W_{iM} = \langle c_i, M\rangle_C. \end{equation}
The subscript $C$ serves to remind us that this is counting the dimension of homomorphisms (or maps) from the point of view of $C$. The dimension of a module $M$ as an object in the representation category of $\mathcal{A}$ can be defined by the quantum trace in $\mathcal{A}$. This is illustrated in the following.
\begin{equation} \frac{1}{\dim\mathcal{A}} \mathtikzS{0.5}{ \draw[thick,magenta] (-0.4cm,1.6cm) -- (0,2cm) -- (1cm,1cm) -- (1cm,-1cm) -- (0,-2cm) -- (-1cm,-1cm) -- (-1cm,1cm) -- (-0.6cm,1.4cm); \draw[thick, black] (-1cm,0) -- (-2cm,-1cm) -- (-2cm,-2cm); \draw[thick, black] (1cm,0) -- (0.5cm,-0.5cm) -- (-0.5cm,0.5cm) -- (-0.5cm,2.5cm); \node[below] at (-2cm,-2cm) {$\mathcal{A}$}; \node[above] at (-0.5cm,2.5cm) {$\mathcal{A}$}; \node[left] at (-1cm,1cm) {$M$}; } =\quad \dim_{\mathcal{A}} (M) \mathtikzS{0.5}{ \draw[thick, black] (0,-2cm) -- (0,2cm); \node[below] at (0,-2cm) {$\mathcal{A}$}; } \end{equation}
This gives
\begin{equation} \label{eq:qd_defect} {\textrm{dim} }_\mathcal{A}(M) = \frac{{\textrm{dim}}_C (M) \times \langle M,M\rangle_{\mathcal{A}} }{{\textrm{dim}} \mathcal{A}}. \end{equation}
The shorthand $\langle M,M\rangle_\mathcal{A}$ denotes the dimension of the endomorphism space of $M$ as a ``simple'' object in the representation category of $\mathcal{A}$; i.e. it is equal to 2 for a q-type representation, and 1 otherwise. This is a generalization of the result in \cite{kirillov, Fuchs_2002}, allowing for the left (right) action coming in two independent copies for q-type excitations. There is also a very useful theorem, the {\bf Reciprocity theorem}. It states that
\begin{equation} \langle c_i, M\rangle_C = \langle \textrm{Ind${}_\mathcal{A}(c_i)$}, M\rangle_\mathcal{A}, \qquad \langle M, c_i \rangle_C = \langle M, \textrm{Ind${}_\mathcal{A}(c_i)$}\rangle_\mathcal{A}. \end{equation}
In words, it says the dimension of the space of maps between $M$ and $c_i$ in $C$ is the same as that of maps between $M$ and the induced module of $c_i$ when they are treated as objects in the representation category of $\mathcal{A}$, or in other words, as boundary excitations. This is proved in \cite{Fuchs_2002}. Two useful relations follow from the above theorem.
They are given by
\begin{equation} \label{eq:indM} {\textrm{Ind}}_\mathcal{A}(c_i) \cong \oplus_x W_{i M_x} M_x, \end{equation}
and
\begin{equation} \label{eq:indMdim} \textrm{dim}(\mathcal{A}) d_{c_i} = \sum_x \langle M_x ,M_x\rangle_{\mathcal{A}} \textrm{dim}_C (M_x) W_{i M_x} = \sum_{x,j} \langle M_x ,M_x\rangle_{\mathcal{A}} W_{i M_x} W_{j M_x} d_{c_j}. \end{equation}
Equation (\ref{eq:indMdim}) is a new result. It is a generalization of Corollary 4.14 in \cite{Fuchs_2002} allowing for q-type objects. The generalization follows from the fact that a q-type object carries two independent left actions, which should be implicitly summed over in $x$. Since the two sets of left actions belong to the same module $M_x$ for $M_x$ a q-type object, we replace the sum by the dimension of the endomorphism space of $M_x$. Physically, this has a very simple interpretation, which applies to both fermionic and bosonic condensation. It can be re-written as
\begin{equation} d_{c_i}= \sum_x W_{i M_x} \textrm{dim}_\mathcal{A}(M_x). \end{equation}
This means that quantum dimension is ``conserved'' as a bulk anyon is ``decomposed'' into boundary excitations in (\ref{eq:indM}). Moreover, (\ref{eq:indM}) together with (\ref{eq:Mdecompose}) imply that
\begin{equation} \label{eq:getW} \mathcal{A} \otimes c_i = \oplus_j \sum_x \, \langle M_x, M_x\rangle_{\mathcal{A}} W_{i M_x} W_{j M_x} c_j. \end{equation}
Equations (\ref{eq:qd_defect}), (\ref{eq:indMdim}) and (\ref{eq:getW}) are powerful handles for determining $W$, and also the endomorphisms -- specifically, for distinguishing a q-type excitation from the situation in which an anyon ``splits'' at the boundary, actually participating in two distinct representations. Specifically, when two independent solutions for an irreducible representation can be found involving the same collection of anyons, the dimension of the endomorphism space must be consistent with a quantum dimension of the resultant excitation that is greater than 1. This will be illustrated in the example section with explicit solutions. \subsubsection{Fermion parity and spin structures} We note that there are two other new ingredients in working with fermion condensation. Here, we extend the techniques in \cite{Fuchs_2002, cong_topological_2016} to accommodate these new ingredients. Firstly, one would expect to work out $\sigma_{c_i}^M$, which is the fermion parity assignment to the anyon $c_i$ in the representation $M$. In a non-Abelian theory, it is possible that $c_i$ participates in multiple modules. Rather than assigning a fermion parity to individual anyons, it is more appropriate to determine the fermion parity of the ``condensation channel'', i.e. among the $W_{i x}$ different ways a bulk anyon is mapped to the boundary excitation $M_x$, some of the maps have even parity and others odd parity. One can work out the fermion parity of these condensation channels systematically starting from the parity assignment of the condensate algebra $\mathcal{A}$. This follows from a twist of the relation (\ref{eq:getW}):
\begin{equation} \label{eq:getOmega} \tilde{\mathcal{A}} \otimes c_i = \oplus_j \sum_x \, \Omega_{i x} \Omega_{j x} c_j, \end{equation}
where $\Omega_{cx}$ gives the difference between the number of even and odd participation channels for $c$ in $x$, and
\begin{equation} \tilde{\mathcal{A}} \equiv \oplus_i W_{i1} \exp(2\pi i h_{c_i}) c_i, \end{equation}
i.e. there is a minus sign for every fermion in the condensate. For $x=1$ we actually have \begin{equation} \label{eq:Omega0} \Omega_{i1} = W_{i1} e^{2\pi i h_{c_i}}.
\end{equation} Since $\Omega_{ix}$ is the difference between the number of even and odd ``condensation'' channels of $c_i$ in $x$, one can see that for $x$ a q-type excitation, $\Omega_{ix} =0.$ This has an impact on the derivation of the ``twisted Verlinde formula'' to be discussed below. In the case where all $W_{i1} <2$, $\Omega_{i1}$ can be directly treated as the fermion parity $(-1)^{\sigma(i)}$ of the anyon $c_i$ used in defining the super-commutative algebra (\ref{eq:super_braid}). For cases where some $W_{i1}\ge 2$, at first sight we might have to assign multiple parities to the same anyon participating in the algebra $\mathcal{A}$. However, we suspect this could never happen -- i.e. a fermionic anyon could never enter a super Frobenius algebra $\mathcal{A}$ more than once, with $W_{i1} \ge 2$, since super-commutativity would be violated. We note that since $\Omega$ shows up quadratically on the r.h.s. of (\ref{eq:getOmega}), there is a sign ambiguity for $\Omega_{cx}$ for $x\neq 1$. Practically, in the examples we work with, we make a specific choice. We are not aware of a canonical choice at present.
\begin{figure}[h] \centering \begin{tikzpicture} \draw [thick] (-2,0) to [out=90,in=180] (0,1.5); \draw [thick] (0,1.5) to [out=0,in=90] (2,0); \draw [thick] (-2,0) to [out=270,in=180] (0,-1.5); \draw [thick] (0,-1.5) to [out=0,in=270] (2,0); \draw [thick] (-0.5,0) to [out=45,in=180] (0,0.25); \draw [thick] (0,0.25) to [out=0,in=135] (0.5,0); \draw [thick] (-0.7,0.2) to [out=-45,in=180] (0,-0.25); \draw [thick] (0,-0.25) to [out=0,in=-135] (0.7,0.2); \draw [thick, dashed] (-1.2,0) to [out=90,in=180] (0,0.9); \draw [thick, dashed] (0,0.9) to [out=0,in=90] (1.2,0); \draw [thick, dashed] (-1.2,0) to [out=270,in=180] (0,-0.9); \draw [thick, dashed] (0,-0.9) to [out=0,in=270] (1.2,0); \draw [thick] (0,-0.25) to [out=-150,in=150] (0,-1.5); \draw [thick] (0,-1.5) to [out=30,in=-30] (0,-0.25); \draw [thick, ->] (0.6,0.8) to [out=90,in=240] (1.2,1.7); \draw [thick, ->] (-0.25,-1.25) to [out=-90,in=100] (0.75,-2); \node at (1.4,2) {$NS/R\ bc$} node at (1,-2.2) {$anyon\ line\ c$}; \end{tikzpicture} \caption{A defect wrapping a cycle on the torus, while generating either periodic or anti-periodic boundary conditions for free fermions in the other cycle, determining the spin-structure on the torus.} \label{fig:spin_structure} \end{figure}
Secondly, in the presence of fermions, which are sensitive to spin structures, there are anyons that are responsible for Ramond (i.e. periodic) boundary conditions for the free fermions -- or in other words, they fit with the Ramond type spin structure when they are inserted in a closed manifold with a non-contractible cycle. This is illustrated in figure \ref{fig:spin_structure}. Under a fermion condensation, the boundary excitations can be responsible for either the Neveu-Schwarz (NS) type spin structure or the Ramond (R) type in a non-trivial cycle. This can be checked by computing the monodromy matrix:
\begin{equation} M_{\psi c_i}^{c_j} = -\frac{\theta_{c_j}}{\theta_{c_i}} \end{equation}
where $\psi \in \mathcal{A}$ has fermionic self-statistics, with $\theta_{\psi} = -1$, both $c_{i,j}$ belong to the same boundary excitation, with $W_{ix}$ and $W_{jx}$ non-vanishing for some $x$, and their $\Omega_{ix}$ and $\Omega_{jx}$ come with opposite signs. The boundary excitation $x$ is R type if the monodromy matrix defined above evaluates to $-1$ for all $i,j$ in $x$, and NS type in the case of $+1$.
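As a quick sanity check of this criterion (a sketch, assuming the standard toric-code data $\theta_e=\theta_m=1$ and $\theta_\psi=-1$): for the fermionic condensate $\mathcal{A}=\mathbf{1}\oplus\psi$ in the toric code, $e$ and $m$ belong to the same boundary excitation $x$, since $\psi\otimes e = m$, and the monodromy evaluates to
\begin{equation}
M_{\psi e}^{m} = -\frac{\theta_m}{\theta_e} = -1,
\end{equation}
so this boundary excitation would be of R type.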
This monodromy criterion generalizes the discussion in \cite{Wan:2016php,Aasen:2017ubm, Lan_2016} to accommodate non-Abelian fermionic condensates. We note, however, that in a condensate involving bosonic anyons not generated by fusion of two fermions, the confined anyons do not necessarily have a well defined spin structure -- since they are non-local with respect to the bosonic components of the new vacuum made up of the condensed anyons. \subsubsection{Fusion rules} \label{sec:b_fusion} Physically, when we observe a cluster of excitations from sufficiently far away, i.e. at a distance large compared to the separation between them, the cluster effectively appears as some point excitation. The fusion maps in the bulk are part of the data that defines the topological theory. The fusion rules between boundary excitations, however, are ``derived'' properties that can be worked out from the choice of the condensed anyons $\mathcal{A}$ and the bulk fusion rules. Mathematically, we have contended that boundary excitations are representations (or modules) of the condensed algebra $\mathcal{A}$. Therefore, the physical concept of fusion simply corresponds to fusion of representations. Diagrammatically, when we have a pair of representations, we should be able to define a new left (and right) action on the combined system. Directly analogous to the situation in combining spins, where we take $\hat S = \hat S_1 + \hat S_2$ -- as already explained when we introduced the co-product -- the new left action on the combined system should require the use of the co-product. This is illustrated diagrammatically in the middle figure in (\ref{eq:fuse_modules}) for fusion of left modules. An extra intermediate $\mathcal{A}$ line connecting $M_1$ and $M_2$ is introduced. As illustrated in (\ref{eq:fuse_modules}), it implements automatically
\begin{equation} \label{eq:modA} \rho_\mathcal{A}^{M_1}\otimes \id_{M_2}\sim (\id_{M_1}\otimes\rho_\mathcal{A}^{M_2})\circ R_{\mathcal{A},M_1}. \end{equation}
The intermediate $\mathcal{A}$-line indeed acts as a projector: if one puts in two parallel $\mathcal{A}$-lines between the modules $M_1, M_2$ as in (\ref{eq:modA}), then using the fact that the algebra $\mathcal{A}$ is Frobenius and separable (\ref{eq:separability}) and that $M_{1,2}$ both satisfy (\ref{eq:associativity_module}), one can show that the configuration can be reduced to having only one line. We leave this as a simple exercise for the reader.
\begin{figure}[h] \centering \begin{equation} \label{eq:fuse_modules} \begin{tikzpicture} \begin{scope}[xshift=0,yshift=0]{ \draw [thick, magenta] (0,0.1) -- (0,2); \draw [thick, magenta] (0,-1) -- (0,-0.1); \draw [thick, magenta] (1.5,-1) -- (1.5,2); \draw [thick] (0,1) to [out=225,in=180] (0,0); \draw [thick] (0,0) to [out=0,in=230] (1.5,1); \draw [thick] (-1,0.5) -- (0,1.5); \node at (0.4,2) {$M_1$} node at (1.9,2) {$M_2$} node at (-0.5,0.2) {$\mathcal{A}$} node at (-0.5,1.5) {$\mathcal{A}$}; } \end{scope} \begin{scope}[xshift=4cm,yshift=0]{ \draw [thick, magenta] (0,0.1) -- (0,2); \draw [thick, magenta] (0,-1) -- (0,-0.1); \draw [thick, magenta] (1.5,-1) -- (1.5,2); \draw [thick] (0,1) to [out=225,in=180] (0,0); \draw [thick] (0,0) to [out=0,in=230] (1.5,1); \draw [thick] (-1,-1) -- (0,0); \node at (0.4,2) {$M_1$} node at (1.9,2) {$M_2$} node at (-1.3,0.5) {\Large{$=$}}; } \end{scope} \begin{scope}[xshift=8cm,yshift=0]{ \draw [thick, magenta] (0,0.1) -- (0,2); \draw [thick, magenta] (0,-0.7) -- (0,-0.1); \draw [thick, magenta] (0,-1) -- (0,-0.9); \draw [thick, magenta] (1.5,-1) -- (1.5,2); \draw [thick] (0,1) to [out=225,in=180] (0,0); \draw [thick] (0,0) to [out=0,in=230] (1.5,1); \draw [thick] (-0.2,-1) -- (1.5,0.7); \node at (0.4,2) {$M_1$} node at (1.9,2) {$M_2$} node at (-1.3,0.5) {\Large{$=$}}; } \end{scope} \end{tikzpicture} \end{equation} \end{figure}
To summarise, the fusion map of modules of $\mathcal{A}$ is defined such that one mods out by the relation (\ref{eq:modA}). This fusion map is denoted $\otimes_{\mathcal{A}}$, and is practically implemented using the projector involving the $\mathcal{A}$-line introduced in (\ref{eq:fuse_modules}). Note, however, that the module resulting from the fusion is generally not irreducible. It is important to recover the decomposition of the fusion product in terms of irreducible representations. This has been considered in \cite{Fuchs_2002}. The decomposition coefficients can be computed using (\ref{eq:extract_lambda}). To recover only the fusion coefficients, however, we can make use of the identity \cite{kirillov} (which has been generalized here to accommodate non-trivial endomorphisms):
\begin{align} \label{eq:evenfuse} \mathcal{A} \otimes c_i\otimes c_j &= \oplus_l \sum_{x,k} \langle M_x, M_x\rangle_\mathcal{A} \, W_{i M_x} W_{k M_x} N_{k j}^{l} \,c_l= \oplus_{l} \sum_{k,x}\langle M_x, M_x\rangle_\mathcal{A} \, N_{i j}^{k} W_{k M_x}W_{l M_x} c_l \nonumber \\ &= \oplus_l \sum_{x,y,z} W_{i M_x} W_{j M_y} n_{xy}^z W_{l M_z} c_l \end{align}
where $n_{xy}^z$ are the fusion coefficients counting the total number of fusion channels mapping $M_x\times M_y$ to $M_z$ in the boundary, as defined in (\ref{eq:superfuse}) for a super-fusion category, i.e.
\begin{equation} n_{xy}^z = \dim[\textrm{Hom}_{\mathcal{A}}(M_x\otimes M_y, M_z)]. \end{equation}
As reviewed already in the previous section, the fusion channels in the condensed phase, describable by a super fusion category, also come with even or odd fermion parities. The above equation should thus be refined. We found a twisted version of the above relation, using also (\ref{eq:getOmega}):
\begin{align} \label{eq:tfusionM} \tilde{\mathcal{A}} \otimes c_i\otimes c_j & = \oplus_l \sum_{x,k} \langle M_x, M_x\rangle_\mathcal{A}^{\delta} \, \Omega_{k x} \Omega_{i x} N_{k j}^{l} \,c_l \nonumber \\ &= \oplus_l \sum_{x,y,z} \, \Omega_{ix} \Omega_{jy} \tilde{n}_{xy}^z \Omega_{l z} \, c_l.
\end{align} The twisted fusion coefficient $\tilde{n}_{xy}^z$ is the difference between the number of even and odd fusion channels taking $x\otimes y$ to $z$. Here, $\langle M_x, M_x\rangle_{\mathcal{A}}^\delta$ denotes the difference between the number of even and odd endomorphisms of $M_x$. For example, a q-type object with one even and one odd endomorphism satisfies $\langle M_x, M_x\rangle_{\mathcal{A}}^\delta=0$. As discussed previously, the dimension of the endomorphism space of a simple object is either 1 (non q-type) or 2 (q-type) in a super-fusion category. Also, $\Omega_{ix}$ vanishes for $x$ a q-type object. Therefore, the sum over $x$ in (\ref{eq:tfusionM}) might as well be restricted to non-q-type excitations, to give
\begin{align} \label{eq:tfusionM2} \tilde{\mathcal{A}} \otimes c_i\otimes c_j & = \oplus_l \sum_{x \neq \textrm{q-type},k} \, \Omega_{k x} \Omega_{i x} N_{k j}^{l} \,c_l \nonumber \\ &= \oplus_l \sum_{x,y,z \neq \textrm{q-type}} \, \Omega_{ix} \Omega_{jy} \tilde{n}_{xy}^z \Omega_{l z} \, c_l. \end{align}
Now this totally parallels (\ref{eq:evenfuse}). \subsubsection{(twisted) Defect Verlinde formula} \label{sec:twistedDVF} In \cite{Shen_2019}, we described a formula relating the fusion coefficients of the boundary excitations to the ``half-link'' between the boundary excitations and the condensed anyons. Here we would like to generalize it to the case accommodating fermion condensation, and also to express the ``half-link'' in terms of a trace of the different linear maps whose basis we have constructed explicitly in the previous section. First, let us obtain the pair of (twisted) defect Verlinde formulae for a given gapped boundary characterized by $\mathcal{A}$. The untwisted one can be derived using (\ref{eq:evenfuse}), and takes an identical form for fermionic condensates as for the bosonic ones discussed in \cite{Shen_2019}:
\begin{equation} \label{eq:defect_verlinde_e} n_{xy}^{z} = \sum_{i}\frac{\langle M_z, M_z\rangle_\mathcal{A} \,V_{x i}V_{y i}V^{-1}_{i z}}{S_{1i}}, \qquad V^{-1}_{i x} = \sum_{k} \bar{S}_{i k} W_{k x}. \end{equation}
The matrix $V$ is invertible -- the first index $i$ runs only over $c_i \in \mathcal{A}$ and the index $x$ enumerates the boundary excitations. As we argued in \cite{hung_ground_2015, Shen_2019}, the number of anyons in $\mathcal{A}$ is always equal to the number of boundary excitations, so that $V$ is a square matrix. As observed in \cite{Shen_2019}, the matrix $V$ is related to the ``half-linking'' number as follows:
\begin{equation} \label{eq:Vgamma} \frac{\gamma_{xi}}{\gamma_{1i}} = V_{ix}^{-1}, \qquad \gamma_{1i} = \sqrt{S_{1i}}. \end{equation}
Here, we would also like to express the half-linking number in terms of the basic defining properties of the condensate algebra $\mathcal{A}$ and the modules, in the incarnation of a quantum trace that is illustrated in (\ref{eq:gamma_pic}).
\begin{figure}[h] \centering \begin{equation} \label{eq:gamma_pic} \begin{tikzpicture}[scale=0.8] \draw [thick, green] (0,1) -- (0,0.1); \draw [thick, green] (0,-0.1) -- (0,-2); \draw [thick, green] (-0.5,1) -- (-0.5,0.5); \draw [thick, green] (-0.5,-1.5) -- (-0.5,-2); \draw [thick, green] (-0.5,0.5) to [out=270, in=180] (0.1,0); \draw [thick, green] (0.1,0) to [out=0, in=0] (0.1,-1); \draw [thick, green] (-0.1,-1) to [out=180, in=90] (-0.5,-1.5); \draw [thick, green] (0,1) to [out=90, in=90] (1.5,1); \draw [thick, green] (-0.5,1) to [out=90, in=90] (2.5,1); \draw [thick, green] (0,-2) to [out=270, in=270] (1.5,-2); \draw [thick, green] (-0.5,-2) to [out=270, in=270] (2.5,-2); \draw [thick, green] (1.5,1) -- (1.5,-2); \draw [thick, green] (2.5,1) -- (2.5,-2); \scalebox{0.5}[0.5]{\tri{3cm}{0}{90}} \scalebox{0.5}[0.5]{\tri{5cm}{0}{90}} \scalebox{0.5}[0.5]{\tri{3cm}{-2cm}{270}} \scalebox{0.5}[0.5]{\tri{5cm}{-2cm}{270}} \draw [thick] (2.5,0) -- (2.5,-1); \draw [thick, magenta] (1.5,0) -- (1.5,-1); \draw [thick] (1.5,-1/3) to [out=-45, in=225] (2.5,-1/3); \draw [thick] (2,-0.55) -- (2,-0.8); \draw [thick, fill] (2,-0.85) circle [radius=0.05]; \node at (2.7,-0.5) {$\mathcal{A}$} node at (1.3,-0.5) {$x$} node at (2,-0.3) {$\mathcal{A}$} node at (2.7,0.5) {$c$} node at (1.3,0.5) {$i$} node at (-2.3,-0.5) {\Large{$\gamma_{xc} = \mathcal{N}^x_c \sum_i$}}; \end{tikzpicture} \end{equation} \end{figure}
We observe that the normalization constant takes the following form
\begin{equation} \label{eq:NAB} \mathcal{N}^x_c = \frac{1}{ \sqrt{D_{bulk}d_c} }. \end{equation}
In a super fusion category, fusion channels can acquire even or odd fermion parities. The defect Verlinde formula given above relates the total number of fusion channels to the half-linking numbers. There is an independent equation that relates the difference between the number of even and odd fusion channels to a ``twisted'' half-linking number. This can be derived from (\ref{eq:tfusionM}) using techniques very similar to those in the derivation of (\ref{eq:defect_verlinde_e}). We first define the matrix $v^{-1}$
\begin{equation} \label{eq:littlev} v^{-1}_{ix} = \sum_j \bar{S}_{ij} \Omega_{jx}. \end{equation}
This is the analogue of the $V$ matrix defined in (\ref{eq:defect_verlinde_e}). As already noted earlier when $\Omega_{jx}$ was defined, $\Omega_{jx} =0$ for $x$ a q-type excitation. Therefore, in the matrix $v_{ix}$, $x$ only runs over the gapped excitations that are not q-type. The other index $j$ now runs over the anyons $c_j$ belonging to the gapped excitation $x_f$ responsible for generating the fermion parity, i.e. there is a special gapped boundary excitation $x_f$ such that the monodromy with the condensate produces a $+1$ on all the bosonic condensed anyons, and a $-1$ on all the fermionic ones. This will be further discussed in section \ref{sec:CFT} below, where this special excitation can be readily worked out by a simple modular transformation in the bulk using (\ref{eq:work_xf}). Surprisingly, $v$ is also a square matrix -- there is always an equal number of anyons in the special boundary excitation $x_f$ as there are non-q-type boundary excitations! Indeed, once $x$ is restricted to non-q-type objects, equations (\ref{eq:tfusionM2}) and (\ref{eq:evenfuse}) take the same form.
Thus we obtain the following {\it twisted Verlinde formula} simply by replacing $V$ by $v$ in (\ref{eq:defect_verlinde_e}), which gives \begin{equation} \label{eq:twisted_VLF} \tilde{n}_{xy}^z = \sum_{i\in x_f}\frac{v_{x i}v_{y i}v^{-1}_{i z}}{S_{1i}}, \qquad x,y,z \neq \, \textrm{q-type}. \end{equation} The sum over $i$ runs over the anyons participating in $x_f$. We note that this is not imposed by hand, but simply follows from the properties of $v$. This is one of the main results of this paper. \subsection{Defects at junctions and bimodules} \label{sec:junctions} In the previous section, we focussed on excitations in a given gapped boundary where $\mathcal{A}$ is condensed. Here, we would like to extend the discussion to excitations localized at the junction of two different gapped boundaries characterized by condensate algebras $\mathcal{A}$ and $\mathcal{B}$. Such excitations should correspond to irreducible {\bf left-right bi-modules} of $\mathcal{A}$ and $\mathcal{B}$. Each bi-module is again a collection of anyons in $C$. The left and right actions of $\mathcal{A}$ and $\mathcal{B}$ respectively should commute. This is illustrated in (\ref{eq:bimodule}). \begin{figure}[h] \centering \begin{equation} \label{eq:bimodule} \begin{tikzpicture} \draw [thick, magenta] (-2,-1.5) -- (-2,2); \draw [thick, magenta] (2,-1.5) -- (2,2); \draw [thick] (-2,0.5) -- (-4,-1.5); \draw [thick] (-2,-0.5) -- (-1,-1.5); \draw [thick] (2,0.5) -- (4,-1.5); \draw [thick] (2,-0.5) -- (1,-1.5); \node at (0,0) {\Large{$=$}} node at (-3,0) {$\mathcal{A}$} node at (-1.4,2) {$M^{\mathcal{A}|\mathcal{B}}$} node at (-1,-1) {$\mathcal{B}$} node at (1,-1) {$\mathcal{A}$} node at (2.6,2) {$M^{\mathcal{A}|\mathcal{B}}$} node at (3,0) {$\mathcal{B}$}; \end{tikzpicture} \end{equation} \end{figure} It has been shown that the bimodules together form a semi-simple fusion category \cite{Fuchs_2002}. Exactly as in the case of left (right) modules, one can generate induced modules from any anyon $c_i \in C$ by sandwiching $c_i$ between $\mathcal{A}$ and $\mathcal{B}$ on the left and right respectively, i.e.\ repeating (\ref{eq:ind_mod}) with a copy of $\mathcal{B}$ on the right as well. The induced bimodules so obtained are generically reducible (not simple), and can thus be decomposed in terms of simple ones. By inspecting the fusion $\mathcal{A} \otimes c_i \otimes \mathcal{B}$ that generates the induced bi-module, it is possible to isolate all the independent simple (irreducible) modules and recover the W-matrix. Without the simple conservation formula for quantum dimensions as in (\ref{eq:indMdim}) and the analogue of (\ref{eq:evenfuse}), it is not apparent whether there is a simple formula for the W-matrix. Moreover, as in the case of left (right) modules, one has to work out the endomorphisms of a given module. An identity particularly useful for this purpose is the following \cite{Fuchs_2004}: \begin{equation} \textrm{Hom}_{\mathcal{A}|\mathcal{B}}(\textrm{Ind}_{\mathcal{A}|\mathcal{B}}(c_i), \textrm{Ind}_{\mathcal{A}|\mathcal{B}}(c_j)) \cong \textrm{Hom}(c_i, \mathcal{A} \otimes c_j \otimes \mathcal{B}) \end{equation} where we use $\cong$ loosely to mean that the two sides are isomorphic. This also implies \begin{equation} \label{eq:double_reciprocity2} \textrm{End}_{\mathcal{A}|\mathcal{B}} (\textrm{Ind}_{\mathcal{A}|\mathcal{B}} (c_i)) \cong \textrm{Hom}(c_i, \mathcal{A} \otimes c_i \otimes \mathcal{B}).
\end{equation} As we will see in the example of $D(S_3)$ in section \ref{sec:bfjunction_take2}, this formula assists us in determining non-trivial endomorphisms of a junction excitation. There is a new complication in the presence of fermionic condensates. As we have discussed, free fermions have been introduced into the system to enrich the theory into a spin-TQFT, and the condensate algebra is then a super-Frobenius algebra. In a spin TQFT, it is possible to introduce localized Majorana modes. Therefore, every excitation at the junctions becomes $\mathbb{Z}_{\geq 0}$ graded -- the non-negative integer ``grading'' keeping track of the number of Majorana modes that have been added to the spot. This has been observed in \cite{barkeshli_classification_2013,Barkeshli:2013yta}, where gapped boundaries of Abelian spin TQFTs were discussed. For each extra Majorana mode that is added, the quantum dimension of the defect is raised by a factor of $\sqrt{2}$. We note that when we add a pair of Majorana modes to the same spot, they can pair up into a Dirac fermion mode and be gapped out by a local Hamiltonian. Therefore, the grading is not topologically robust, and can be reduced to a $\mathbb{Z}_2$ structure. In the current paper, where we focus on bosonic bulk topological orders, we observe that fusing defects at bosonic junctions with defects at bosonic-fermionic junctions can generate different flavours (or gradings) of the excitations. \subsubsection{Fusion rules and the Defect Verlinde formula} \label{sec:fusion_junc} Fusion of bi-modules (or excitations localized at junctions) follows a playbook similar to the fusion of modules. For $\mathcal{A}, \mathcal{B}, \mathcal{C} \subset C$, the fusion map is given by \begin{equation} M^{\mathcal{A}|\mathcal{B}} \otimes_{\mathcal{B}} M^{\mathcal{B}|\mathcal{C}} = M^{\mathcal{A}|\mathcal{C}}. \end{equation} Practically, $\otimes_{\mathcal{B}}$, which we have already discussed while defining the fusion map for left (right) modules, can be implemented by inserting a $\mathcal{B}$ line. This is illustrated in (\ref{eq:fuse_bimod}). \begin{figure}[h] \centering \begin{equation} \label{eq:fuse_bimod} \begin{tikzpicture} \draw [thick, magenta] (0,-1) -- (0,2); \draw [thick, magenta] (2,-1) -- (2,2); \draw [thick] (0,1) to [out=-45,in=180] (1,0); \draw [thick] (1,0) to [out=0,in=230] (2,1); \draw [thick] (-1,-0.5) -- (0,0.5); \draw [thick] (2,0.5) -- (3,-0.5); \node at (0.6,2) {$M^{\mathcal{A}|\mathcal{B}}$} node at (2.6,2) {$M^{\mathcal{B}|\mathcal{C}}$} node at (-0.5,0.5) {$\mathcal{A}$} node at (1,0.5) {$\mathcal{B}$} node at (2.5,0.5) {$\mathcal{C}$}; \end{tikzpicture} \end{equation} \end{figure} Again, we can decompose the resultant $\mathcal{A}|\mathcal{C}$ bimodule in terms of irreducible (or simple) $\mathcal{A}|\mathcal{C}$ modules. This can be done by using equations (\ref{eq:orthogonality}, \ref{eq:extract_lambda}) again. We note that these equations work equally well for a bimodule -- we simply need to view the bimodule as a left module of the algebra $\mathcal{A} \otimes \mathcal{C}^{\textrm{rev}}$, where the superscript {\it rev} refers to folding $\mathcal{C}$. In practice, since the left and right actions of $\mathcal{A}$ and $\mathcal{C}$ respectively commute, one simply includes in (\ref{eq:orthogonality}) an extra $\mathcal{C}$ loop on the right.
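To see how (\ref{eq:double_reciprocity2}) is used in practice, the following is a minimal numerical sketch for an Abelian (group-like) bulk theory, where all fusion multiplicities are $0$ or $1$ and fermion parities play no role in the counting. The labels anticipate the toric code example of section \ref{sec:toric}, and the function names are ours rather than standard:
\begin{verbatim}
from itertools import product

# Toric-code anyons labelled by Z2 x Z2 charges:
# 1=(0,0), e=(1,0), m=(0,1), f=(1,1)
label = {(0, 0): '1', (1, 0): 'e', (0, 1): 'm', (1, 1): 'f'}

def fuse(a, b):
    # Abelian (group-like) fusion: charges add mod 2
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def end_dim(A, B, c):
    # dim End(Ind_{A|B}(c)) = dim Hom(c, A x c x B):
    # count pairs (a, b) in A x B with a.c.b = c
    return sum(1 for a, b in product(A, B)
               if fuse(fuse(a, c), b) == c)

A_e = [(0, 0), (1, 0)]   # electric condensate 1 + e
A_f = [(0, 0), (1, 1)]   # fermionic condensate 1 + f

for c in label:
    print(label[c], end_dim(A_e, A_f, c))
\end{verbatim}
The sketch returns $1$ for every $c_i$: each induced $\mathcal{A}_e|\mathcal{A}_f$ bimodule has a one-dimensional endomorphism space and is therefore simple; in fact they all coincide, as we will see explicitly in section \ref{sec:bfjunction}.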
In actual applications, one is often only interested in working out the fusion coefficients. As such, we might hope to adopt a strategy similar to (\ref{eq:evenfuse}). However, for bi-modules we do not know of a simple analogue. For an Abelian bulk topological order, however, we can work it out as follows. For simplicity, we will assume that $\mathcal{A} \cap \mathcal{B} = 1$, i.e.\ the trivial anyon in $C$ is the only anyon in the intersection of the two condensates. In this case, every induced bimodule Ind${}_{\mathcal{A}|\mathcal{B}}(c_i)$ is also simple. The strategy for working out the fusion coefficients is that the fusion operation $\otimes_{\mathcal{B}}$ can be implemented by modding out redundant copies of the condensates as the induced bimodules fuse, i.e. \begin{align} \label{eq:bimodule_fuse} &\textrm{Ind}_{\mathcal{A}|\mathcal{B}}(c_i) \otimes_{\mathcal{B}} \textrm{Ind}_{\mathcal{B}|\mathcal{C}}(c_j) = \mathcal{A} \otimes c_i \otimes \mathcal{B} \otimes_{\mathcal{B}} \mathcal{B} \otimes c_j \otimes \mathcal{C} \nonumber \\ &= \mathcal{A} \otimes c_i \otimes \mathcal{B} \otimes c_j \otimes \mathcal{C} \nonumber \\ & = \oplus_k (\sum_{l,m} N_{il}^m N_{mj}^k W_{l1}^{\mathcal{B}}) \, \mathcal{A} \otimes c_k \otimes \mathcal{C}. \end{align} We have included extra superscripts on the W-matrix to distinguish the data of the different condensates. The above is clearly an $\mathcal{A}|\mathcal{C}$ bimodule, which can then be decomposed into simple bimodules. \begin{itemize} \item {\bf Extra trapped Majorana modes at junctions:} The above rules apply to bosonic condensates $\mathcal{A}, \mathcal{B}, \mathcal{C}$. There is a caveat when fermionic condensates are involved. As already mentioned in the previous sub-section, junctions between boundaries can host Majorana zero modes in the presence of free fermions, so that every simple bimodule that we work out by seeking representations of the algebras $\mathcal{A},\mathcal{B}$ comes in an infinite number of versions -- differing by the number of extra Majorana zero modes that they host. When considering fusion of junctions, different versions are often generated, even if we start with a {\it canonical} choice obtained via the induced modules $\mathcal{A} \otimes c_i\otimes \mathcal{B}$. To accommodate this complication, we note the following. The fermionic anyons that condensed have in fact formed Cooper pairs with free fermions introduced at the gapped boundary. Therefore it is more proper to write the condensate algebra as \begin{equation} \label{eq:Ap} \mathcal{A}' = \oplus_i W_{i 1}\, c_i \otimes \psi_0^{\sigma(c_i)}, \end{equation} where $\psi_0$ denotes the free fermion that pairs up with the condensed fermionic anyons, and $\sigma(c_i)$ equals $1$ for fermionic $c_i$ and $0$ otherwise. An extra Majorana mode trapped at a junction corresponds to an extra factor of $(1 \oplus \psi_0)$ introduced there. This is an appropriate way of keeping track of these Majorana modes: a Majorana mode can absorb or release a Dirac fermion, and so behaves as a genuine fermion condensate at a point, whose only module is the ``condensate'' itself, $\chi \equiv 1\oplus \psi_0$. When we fuse two such modes (which are modules of $\chi$) we should get \begin{equation} \label{eq:fusemajorana} (1\oplus \psi_0)_{\textrm{0 d}} \otimes_{\chi} (1\oplus \psi_0)_{\textrm{0 d}} = 1\oplus \psi_0. \end{equation} On the rhs, it is understood that the fermion is no longer localized at a 0-dimensional junction.
This then correctly recovers the fusion rule of Majorana modes, which produces the direct sum of the trivial state and a single-fermion state. When considering general fusion of junctions with possible extra Majorana modes, it is key to keep track of copies of $\chi$. We will illustrate this technicality in the example sections \ref{sec:bfjunction} and \ref{sec:bfjunction_take2}; it is crucial for keeping track of the quantum dimensions of junction excitations. \end{itemize} If $\mathcal{C} = \mathcal{A}$, the resultant gapped boundary excitations should be further reduced from an $\mathcal{A}|\mathcal{A}$ bimodule to a left (right) module (recall that the left and right modules can be generated from each other since we are considering (super)commutative Frobenius algebras). Then we have \begin{align} &\textrm{Ind}_{\mathcal{A}|\mathcal{B}}(c_i) \otimes_{\mathcal{B}} \textrm{Ind}_{\mathcal{B}|\mathcal{A}}(c_j) = \mathcal{A} \otimes c_i \otimes \mathcal{B} \otimes_{\mathcal{B}} \mathcal{B} \otimes c_j \otimes \mathcal{A} \nonumber \\ &= \mathcal{A} \otimes c_i \otimes \mathcal{B} \otimes c_j \otimes \mathcal{A} \nonumber \\ & \xlongleftrightarrow{\textrm{reducing to left-modules}} \oplus_k (\sum_{l,m} N_{il}^m N_{mj}^k W_{l1}^{\mathcal{B}}) \, \mathcal{A} \otimes c_k \nonumber \\ & = \oplus_x \sum_{l,m,k} N_{il}^m N_{mj}^k \langle M^\mathcal{A}_x, M^\mathcal{A}_x\rangle_{\mathcal{A}}W_{l1}^{\mathcal{B}} W^{\mathcal{A}}_{kx} M^\mathcal{A}_x, \end{align} where we have made use of (\ref{eq:getW}, \ref{eq:Mdecompose}) in the last equality. In \cite{Shen_2019}, we obtained a defect Verlinde formula describing the fusion of bi-modules. In the presence of both bosonic and fermionic condensates one can write down such a formula too, although, given the extra complication of Majorana fermion modes, we will have to fix an ambiguity in it. The half-linking number across a junction, expressed as a quantum trace, is illustrated in (\ref{eq:gamma_pic_2}). The junction excitations involved here are the ``canonical'' choice obtainable from the induced modules, and our defect Verlinde formula describes the fusion of these canonical junction excitations. The defect Verlinde formula that describes the canonical fusion coefficients as defined above takes exactly the same form as in \cite{Shen_2019}. For completeness, we reproduce it here: \begin{equation} \label{eq:defect_verlinde_junction} n_{x y}^z = \sum_c \sum_{\alpha_{\mathcal{A}}, \beta_{\mathcal{B}},\beta'_{\mathcal{B}}, \sigma_{\mathcal{C}}}\langle M_z,M_z\rangle_{\mathcal{A}|\mathcal{C}}~\gamma_{xc_{\alpha_{\mathcal{A}}, \beta_{\mathcal{B}}}}^{(\mathcal{A}|\mathcal{B})} (M^{\mathcal{B}}_c)^{-1}_{\beta_{\mathcal{B}} \beta'_{\mathcal{B}}} \gamma^{(\mathcal{B}|\mathcal{C})}_{yc_{\beta'_{\mathcal{B}}, \sigma_{\mathcal{C}}}} (\gamma^{(\mathcal{A}|\mathcal{C})})^{-1}_{c_{\sigma_{\mathcal{C}}, \alpha_{\mathcal{A}}}z}, \end{equation} where \begin{equation} (M^\mathcal{B}_{c})_{\alpha, \beta} = \gamma^\mathcal{B}_{1c_{\alpha, \beta}}, \end{equation} and the inverse of $M$ is taken by treating it as a matrix with indices $\alpha,\beta$, while the inverse of $\gamma$ is taken with respect to the index pair $\{c_{\alpha,\beta}, z\}$, i.e.\ the number of values taken by $z$ equals that of the composite index $c_{\alpha,\beta}$. We have taken extra pains to include subscripts on $\alpha, \beta$ to indicate the precise condensate to which these condensation channels are related.
It should be clear that $x$ lives at the junction between $\mathcal{A}$ and $\mathcal{B}$, $y$ between $\mathcal{B}$ and $\mathcal{C}$, and finally $z$ between $\mathcal{A}$ and $\mathcal{C}$. \begin{figure}[h] \centering \begin{equation}\label{eq:gamma_pic_2} \begin{tikzpicture} \draw [thick, green] (-0.45,0.5) -- (0.7,0.5); \draw [thick] (0.7,0.5) to [out=0,in=120] (1.5,-0.5); \draw [thick] (0.5,-0.5) -- (1.7,-0.5); \draw [thick, green] (-0.55,0.5) -- (-0.7,0.5); \draw [thick] (-0.7,0.5) to [out=180,in=60] (-1.5,-0.5); \draw [thick] (-0.5,-0.5) -- (-1.7,-0.5); \draw [thick, magenta] (-0.5,0) -- (-0.5,-1); \draw [thick, green] (-0.5,0) -- (-0.5,1); \draw [thick, green] (0.5,0) -- (0.5,0.45); \draw [thick, green] (0.5,1) -- (0.5,0.55); \draw [thick, magenta] (0.5,0) -- (0.5,-1); \draw [thick, green] (-0.5,1) to [out=90,in=90] (0.5,1); \draw [thick, green] (-0.5,-1) to [out=270,in=270] (0.5,-1); \draw [thick, fill] (1.75,-0.5) circle [radius=0.05]; \draw [thick, fill] (-1.75,-0.5) circle [radius=0.05]; \scalebox{0.4}[0.4]{\tri{-1*1.25cm}{0}{90}} \scalebox{0.4}[0.4]{\tri{1*1.25cm}{0}{90}} \scalebox{0.4}[0.4]{\tri{-1*1.25cm}{-1.6*1.25cm}{270}} \scalebox{0.4}[0.4]{\tri{1*1.25cm}{-1.6*1.25cm}{270}} \scalebox{0.4}[0.4]{\tri{-1.7*1.25cm}{1*1.25cm}{0}} \scalebox{0.4}[0.4]{\tri{1.7*1.25cm}{1*1.25cm}{180}} \node at (0,1.5) {$i$} node at (0,-1.5) {$j$} node at (-1.6,0) {$\mathcal{A}$} node at (-1,-0.8) {$\mathcal{A}$} node at (1,-0.8) {$\mathcal{B}$} node at (1.6,0) {$\mathcal{B}$} node at (-0.7,-0.2) {$x$} node at (0.7,-0.2) {$x$} node at (0,0.7) {$c$} node at (-5,0) {\Large{$\gamma_{xc_{\alpha_\mathcal{A},\beta_\mathcal{B}}}^{(\mathcal{A}|\mathcal{B})}=\mathcal{N}^x_{c_{\alpha_\mathcal{A},\beta_\mathcal{B}}}\sum_{i,j}$}}; \node at (-0.8,0.8) {$\alpha_\mathcal{A}$}; \node at (0.8,0.8) {$\beta_\mathcal{B}$}; \end{tikzpicture} \end{equation} \end{figure} Similarly to (\ref{eq:gamma_pic}), for the half-linking numbers entering the fusion of bi-modules we propose (\ref{eq:gamma_pic_2}), with normalization constant given by \begin{equation}\label{eq:NAB_2} \mathcal{N}^x_{c_{\alpha_\mathcal{A},\beta_\mathcal{B}}}=\frac{1}{\sqrt{2D_{bulk}d_c}}. \end{equation} \subsection{Note: the M3J and M6J symbols and VLCs} To assist our readers in the sea of literature, here we would like to comment on the relationship between the condensate algebra and some of the linear maps introduced elsewhere. {\bf \underline{M-symbols}} The notion of M-symbols was introduced in \cite{cong_topological_2016}. The idea is that the gapped boundary is an interface on which bulk anyons can end. As one considers multiple bulk anyons ending on the boundary, it is possible to change the order in which the bulk anyons fuse before they end on the boundary. These different processes should be related by linear maps, which are given by the M3J and M6J symbols. This is illustrated in (\ref{eq:M3JM6J}).
\begin{figure}[htbp] \centering \begin{equation}\label{eq:M3JM6J} \begin{tikzpicture} \draw [thick] (1.25,4) -- (1.25,7); \draw [thick] (2,4) -- (2,7); \draw [ultra thick] (0.75,5.5) -- (2.5,5.5); \node at (1.05,4) {$x$} node at (1.05,7) {$a$} node at (1.8,4) {$y$} node at (1.8,7) {$b$} node at (1.45,5.3) {$\nu$} node at (2.2,5.3) {$\lambda$}; \begin{scope}[xshift=1cm,yshift=-2cm]{ \draw [ultra thick] (5,7.5) -- (7,7.5); \draw [thick] (5.5,6.5) to [out=90, in=90] (6.5,6.5); \draw [thick] (5.5,6) -- (5.5,6.5); \draw [thick] (6.5,6) -- (6.5,6.5); \draw [thick] (6,6.8) -- (6,8.2); \draw [thick] (5.5,8.5) -- (5.5,9); \draw [thick] (6.5,8.5) -- (6.5,9); \draw [thick] (5.5,8.5) to [out=270, in=270] (6.5,8.5); \node at (6.2,7.3) {$\psi$} node at (5.3,6) {$x$} node at (6.3,6) {$y$} node at (5.3,9) {$a$} node at (6.3,9) {$b$} node at (5.8,7) {$z$} node at (5.8,7.9) {$c$} node at (3.3,7.5) {$=\sum_{c,z}[M^{ab;z}_{c;xy}]^{\mu\nu}_{\psi}$}; } \end{scope} \begin{scope}[xshift=0cm,yshift=2.25cm]{ \draw [thick] (1.25,5.5) -- (1.25,7); \draw [thick] (2,5.5) -- (2,7); \draw [ultra thick] (0.75,5.5) -- (2.5,5.5); \node at (1.05,7) {$a$} node at (1.8,7) {$b$} node at (1.25,5.3) {$\nu$} node at (2,5.3) {$\lambda$}; \begin{scope}[xshift=0cm,yshift=-2cm]{ \draw [ultra thick] (5,7.5) -- (7,7.5); \draw [thick] (6,7.5) -- (6,8.2); \draw [thick] (5.5,8.5) -- (5.5,9); \draw [thick] (6.5,8.5) -- (6.5,9); \draw [thick] (5.5,8.5) to [out=270, in=270] (6.5,8.5); \node at (6,7.3) {$\psi$} node at (5.3,9) {$a$} node at (6.3,9) {$b$} node at (5.8,7.9) {$c$} node at (3.8,8.25) {$=\sum_{c}[M^{ab}_{c}]^{\mu\nu}_{\psi}$}; } \end{scope} } \end{scope} \end{tikzpicture} \end{equation} \end{figure} As expected, the M-symbols are directly related to the defining data of the condensate algebra $\mathcal{A}$ and its modules (up to appropriate normalizations). This is illustrated in (\ref{eq:Msymbols}). 
\begin{figure}[htbp] \centering \begin{equation}\label{eq:Msymbols} \begin{tikzpicture}[scale=0.5] \draw [thick, green] (-0.8,5*0.5) -- (-0.8,2*0.5); \draw [thick, green] (-0.8,-2*0.5) -- (-0.8,-5*0.5); \draw [thick, magenta] (-0.8,-2*0.5) -- (-0.8,-0.1); \draw [thick, magenta] (-0.8,0.1) -- (-0.8,2*0.5); \draw [thick, green] (0.8,5*0.5) -- (0.8,2*0.5); \draw [thick, green] (0.8,-2*0.5) -- (0.8,-5*0.5); \draw [thick, magenta] (0.8,-2*0.5) -- (0.8,2*0.5); \draw [thick] (-0.8,1*0.5) to [out=225,in=180] (-0.8,0); \draw [thick] (-0.8,0) to [out=0,in=230] (0.8,1*0.5); \draw [thick] (-1.8,-1*0.5) -- (-0.8,0); \draw [fill] (-1.8,-1*0.5) circle [radius=0.08]; \node at (-0.5,0.4*0.5) {\tiny{$M_i$}} node at (1.1,0.4*0.5) {\tiny{$M_j$}} node at (-1.5,-0.2*0.5) {\tiny{$\mathcal{A}$}} node at (-0.5,2.2*0.5) {\tiny{$\bar\mu$}} node at (1.1,2.2*0.5) {\tiny{$\bar\nu$}} node at (-0.6,5*0.5) {\tiny{$a$}} node at (1,5*0.5) {\tiny{$b$}} node at (-0.5,-2.3*0.5) {\tiny{$\mu'$}} node at (1.1,-2.3*0.5) {\tiny{$\nu'$}} node at (-0.6,-5*0.5) {\tiny{$a'$}} node at (1,-5*0.5) {\tiny{$b'$}}; \scalebox{0.5}[0.5]{\tri{-1.6cm}{4cm*0.5}{90}} \scalebox{0.5}[0.5]{\tri{-1.6cm}{-4cm*0.5}{270}} \scalebox{0.5}[0.5]{\tri{1.6cm}{4cm*0.5}{90}} \scalebox{0.5}[0.5]{\tri{1.6cm}{-4cm*0.5}{270}} \begin{scope}[xshift=9.5cm,yshift=0cm]{ \draw [thick, green] (-0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (-0.8,-5*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0.8,-5*0.5); \draw [thick, magenta] (-0.8,-2*0.5) -- (-0.8,-0.1); \draw [thick, magenta] (-0.8,0.1) -- (-0.8,2*0.5); \draw [thick, green] (0,2.8*0.5) -- (0.8,2*0.5); \draw [thick, green] (0,2.8*0.5) -- (-0.8,2*0.5); \draw [thick, green] (0.8,-2*0.5) -- (0,-2.8*0.5); \draw [thick, green] (-0.8,-2*0.5) -- (0,-2.8*0.5); \draw [thick, green] (0,2.8*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0,-2.8*0.5); \draw [thick, magenta] (0.8,-2*0.5) -- (0.8,2*0.5); \draw [thick] (-0.8,1*0.5) to [out=225,in=180] (-0.8,0); \draw [thick] (-0.8,0) to [out=0,in=230] (0.8,1*0.5); \draw [thick] (-1.8,-1*0.5) -- (-0.8,0); \draw [fill] (-1.8,-1*0.5) circle [radius=0.08]; \node at (-4.8,0) {\tiny{$=\sum_{c,c'}[F^{ab}_{ab}]_{0c}[F^{a'b'}_{a'b'}]_{0c'}$}}; \node at (-0.5,0.4*0.5) {\tiny{$M_i$}} node at (1.1,0.4*0.5) {\tiny{$M_j$}} node at (-1.5,-0.2*0.5) {\tiny{$\mathcal{A}$}} node at (-1,1.7*0.5) {\tiny{$\bar\mu$}} node at (1.1,1.7*0.5) {\tiny{$\bar\nu$}} node at (-1,5*0.5) {\tiny{$a$}} node at (1,5*0.5) {\tiny{$b$}} node at (-0.4,2.8*0.5) {\tiny{$a$}} node at (0.4,2.8*0.5) {\tiny{$b$}} node at (0.2,3.5*0.5) {\tiny{$c$}} node at (-1,-1.7*0.5) {\tiny{$\mu'$}} node at (1.1,-1.7*0.5) {\tiny{$\nu'$}} node at (-1,-5*0.5) {\tiny{$a'$}} node at (1,-5*0.5) {\tiny{$b'$}} node at (-0.4,-2.9*0.5) {\tiny{$a'$}} node at (0.4,-2.9*0.5) {\tiny{$b'$}} node at (-0.3,-3.5*0.5) {\tiny{$c'$}}; \scalebox{0.5}[0.5]{\tri{-1.6cm+9.5cm}{4cm*0.5}{90-45-20}} \scalebox{0.5}[0.5]{\tri{-1.6cm+9.5cm}{-4cm*0.5}{270+45+20}} \scalebox{0.5}[0.5]{\tri{1.6cm+9.5cm}{4cm*0.5}{90+45+20}} \scalebox{0.5}[0.5]{\tri{1.6cm+9.5cm}{-4cm*0.5}{270-45-20}} } \end{scope} \begin{scope}[xshift=24cm,yshift=0cm]{ \draw [thick, green] (-0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (-0.8,-5*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0.8,-5*0.5); \draw [thick, green] (0,2*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0,-2*0.5); \draw [thick, magenta] (0,-2*0.5) -- (0,2*0.5); \draw [thick] (-1,-1*0.5) -- 
(0,0); \draw [fill] (-1,-1*0.5) circle [radius=0.08]; \node at (0.4,0) {\tiny{$M_k$}} node at (0.3,1.7*0.5) {\tiny{$\bar\alpha$}} node at (-1,5*0.5) {\tiny{$a$}} node at (1,5*0.5) {\tiny{$b$}} node at (0.2,3.5*0.5) {\tiny{$c$}} node at (0.3,-1.7*0.5) {\tiny{$\alpha'$}} node at (-1,-5*0.5) {\tiny{$a'$}} node at (1,-5*0.5) {\tiny{$b'$}} node at (0.3,-3.5*0.5) {\tiny{$c'$}} node at (-0.7,-0.2*0.5) {\tiny{$\mathcal{A}$}}; \node at (-6.5,0) {\tiny{$=\sum_{c,c'}[F^{ab}_{ab}]_{0c}[F^{a'b'}_{a'b'}]_{0c'}\lambda^{[M_i,M_j,M_k](c\alpha)(a\mu)(b\nu)}_{(c'\alpha')(a'\mu')(b'\nu')}$}}; \scalebox{0.5}[0.5]{\tri{24cm}{4cm*0.5}{90}} \scalebox{0.5}[0.5]{\tri{24cm}{-4cm*0.5}{270}} } \end{scope} \begin{scope}[xshift=0cm,yshift=-6cm]{ \draw [ultra thick] (-1.5,2*0.5) -- (1.5,2*0.5); \draw [ultra thick] (-1.5,-2*0.5) -- (1.5,-2*0.5); \draw [thick, green] (-0.8,5*0.5) -- (-0.8,2*0.5); \draw [thick, green] (-0.8,-2*0.5) -- (-0.8,-5*0.5); \draw [thick, magenta] (-0.8,-2*0.5) -- (-0.8,2*0.5); \draw [thick, green] (0.8,5*0.5) -- (0.8,2*0.5); \draw [thick, green] (0.8,-2*0.5) -- (0.8,-5*0.5); \draw [thick, magenta] (0.8,-2*0.5) -- (0.8,2*0.5); \node at (-0.5,0) {\tiny{$i$}} node at (1.1,0) {\tiny{$j$}} node at (-0.5,2.4*0.5) {\tiny{$\mu$}} node at (1.1,2.4*0.5) {\tiny{$\nu$}} node at (-0.6,5*0.5) {\tiny{$a$}} node at (1,5*0.5) {\tiny{$b$}} node at (-0.5,-2.5*0.5) {\tiny{$\mu'$}} node at (1.1,-2.5*0.5) {\tiny{$\nu'$}} node at (-0.6,-5*0.5) {\tiny{$a'$}} node at (1,-5*0.5) {\tiny{$b'$}}; \begin{scope}[xshift=11.5cm,yshift=0cm]{ \draw [ultra thick] (-1.5,2*0.5) -- (1.5,2*0.5); \draw [ultra thick] (-1.5,-2*0.5) -- (1.5,-2*0.5); \draw [thick, green] (-0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (-0.8,-5*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0.8,-5*0.5); \draw [thick, green] (0,2*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0,-2*0.5); \draw [thick, magenta] (0,2*0.5) -- (0,0.8*0.5); \draw [thick, magenta] (0,-2*0.5) -- (0,-0.8*0.5); \draw [thick, magenta] (-0.8,0) -- (0,0.8*0.5); \draw [thick, magenta] (0.8,0) -- (0,0.8*0.5); \draw [thick, magenta] (-0.8,0) -- (0,-0.8*0.5); \draw [thick, magenta] (0.8,0) -- (0,-0.8*0.5); \node at (-5.5,0) {\tiny{$=\sum_{c,c'}[M^{ab;k}_{c;ij}]^{\mu\nu}_{\alpha}\left\{[M^{a'b';k'}_{c';i'j'}]^{\mu'\nu'}_{\alpha'}\right\}^*$}}; \node at (0.4,-1.4*0.5) {\tiny{$k$}} node at (0.4,1.4*0.5) {\tiny{$k$}} node at (-0.5,0) {\tiny{$i$}} node at (1.1,0) {\tiny{$j$}} node at (0.3,2.5*0.5) {\tiny{$\alpha$}} node at (-1,5*0.5) {\tiny{$a$}} node at (1,5*0.5) {\tiny{$b$}} node at (0.2,3.5*0.5) {\tiny{$c$}} node at (0.3,-2.5*0.5) {\tiny{$\alpha'$}} node at (-1,-5*0.5) {\tiny{$a'$}} node at (1,-5*0.5) {\tiny{$b'$}} node at (0.3,-3.5*0.5) {\tiny{$c'$}}; } \end{scope} \begin{scope}[xshift=25cm,yshift=0cm]{ \draw [ultra thick] (-1.5,2*0.5) -- (1.5,2*0.5); \draw [ultra thick] (-1.5,-2*0.5) -- (1.5,-2*0.5); \draw [thick, green] (-0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0.8,5*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (-0.8,-5*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0.8,-5*0.5); \draw [thick, green] (0,2*0.5) -- (0,4.2*0.5); \draw [thick, green] (0,-4.2*0.5) -- (0,-2*0.5); \draw [thick, magenta] (0,-2*0.5) -- (0,2*0.5); \node at (0.4,0) {\tiny{$k$}} node at (0.3,1.5*0.5) {\tiny{$\alpha$}} node at (-1,5*0.5) {\tiny{$a$}} node at (1,5*0.5) {\tiny{$b$}} node at (0.2,3.5*0.5) {\tiny{$c$}} node at (0.3,-1.5*0.5) {\tiny{$\alpha'$}} node at (-1,-5*0.5) {\tiny{$a'$}} node at (1,-5*0.5) {\tiny{$b'$}} node at 
(0.3,-3.5*0.5) {\tiny{$c'$}}; \node at (-6,0) {\tiny{$=\sum_{c,c'}[M^{ab;k}_{c;ij}]^{\mu\nu}_{\alpha}\left\{[M^{a'b';k'}_{c';i'j'}]^{\mu'\nu'}_{\alpha'}\right\}^*\sqrt{\frac{d_id_j}{d_k}}$}}; } \end{scope} } \end{scope} \end{tikzpicture} \end{equation} \end{figure} It is not very convenient to solve for $M$ using the above relation. It is often easier to solve for $M$ directly based on its consistency conditions, which are the analogue of the pentagon equation. Therefore, we generalize the consistency condition for purely bosonic condensates in \cite{cong_topological_2016} to accommodate fermionic condensates, where the M6J symbols satisfy a twisted form of such a consistency identity. The major difference is to recognize that the junction at which a bulk anyon enters the gapped boundary is precisely described by the condensation maps that we have defined in (\ref{fig:condensation_map}) and (\ref{fig:condensation_mapA}). They come in fermion-parity even and odd versions in the presence of a fermionic condensate, and one has to keep track of the ordering of these junctions, similar to the derivation of the super-pentagon identity. \begin{figure}[htbp] \centering \begin{equation}\label{eq:M_super_pentagon} \begin{tikzpicture} \draw [thick] (2.2,0.5) to [out=90, in=90] (3.2,0.5); \draw [thick] (2.2,0) -- (2.2,0.5); \draw [thick] (3.2,0) -- (3.2,0.5); \draw [thick] (2.7,0.8) -- (2.7,2.2); \draw [thick] (3.7,0) -- (3.7,3); \draw [ultra thick] (1.7,1.5) -- (4.2,1.5); \draw [thick] (2.2,2.5) -- (2.2,3); \draw [thick] (3.2,2.5) -- (3.2,3); \draw [thick] (2.2,2.5) to [out=270, in=270] (3.2,2.5); \node at (2,3) {$a$} node at (3,3) {$b$} node at (3.5,3) {$c$} node at (2,0) {$x$} node at (3,0) {$y$} node at (3.5,0) {$z$} node at (2.5,1) {$w$} node at (2.5,2) {$e$} node at (2.9,1.3) {$\sigma$} node at (3.9,1.3) {$\lambda$} node at (2.7, 0.6) {$\omega$}; \draw [fill,red] (2.7,1.5) circle [radius=0.05]; \draw [fill,red] (3.7,1.5) circle [radius=0.05]; \draw [fill,red] (2.7,0.8) circle [radius=0.05]; \draw [thick] (7.8,0.3) to [out=90, in=90] (8.8,0.3); \draw [thick] (7.8,-0.2) -- (7.8,0.3); \draw [thick] (8.8,-0.2) -- (8.8,0.3); \draw [thick] (8.3,0.8) to [out=90, in=90] (9.3,0.8); \draw [thick] (9.3,-0.2) -- (9.3,0.8); \draw [thick] (8.3,0.6) -- (8.3,0.8); \draw [thick] (8.8,1.1) -- (8.8,1.9); \draw [ultra thick] (7.3,1.5) -- (9.8,1.5); \draw [thick] (8.3,2.2) to [out=270, in=270] (9.3,2.2); \draw [thick] (7.8,2.7) to [out=270, in=270] (8.8,2.7); \draw [thick] (7.8,2.7) -- (7.8,3.2); \draw [thick] (8.8,2.7) -- (8.8,3.2); \draw [thick] (9.3,2.2) -- (9.3,3.2); \draw [thick] (8.3,2.2) -- (8.3,2.4); \node at (7.6,-0.2) {$x$} node at (8.6,-0.2) {$y$} node at (9.1,-0.2) {$z$} node at (8.3,0.3) {$\omega$} node at (8.8,0.8) {$\zeta$} node at (8.1,0.8) {$w$} node at (8.6,1.2) {$v$} node at (9,1.4) {$\phi$} node at (8.6,1.8) {$d$} node at (8.1,2.2) {$e$} node at (7.6,3.2) {$a$} node at (8.6,3.2) {$b$} node at (9.1,3.2) {$c$}; \draw [fill,red] (8.8,1.5) circle [radius=0.05]; \draw [fill,red] (8.8,1.1) circle [radius=0.05]; \draw [fill,red] (8.3,0.6) circle [radius=0.05]; \draw [thick] (0.5,4) -- (0.5,7); \draw [thick] (1.25,4) -- (1.25,7); \draw [thick] (2,4) -- (2,7); \draw [ultra thick] (0,5.5) -- (2.5,5.5); \node at (0.3,4) {$x$} node at (0.3,7) {$a$} node at (1.05,4) {$y$} node at (1.05,7) {$b$} node at (1.8,4) {$z$} node at (1.8,7) {$c$} node at (0.7,5.3) {$\mu$} node at (1.45,5.3) {$\nu$} node at (2.2,5.3) {$\lambda$}; \draw [fill,red] (0.5,5.5) circle [radius=0.05]; \draw [fill,red] (1.25,5.5) circle
[radius=0.05]; \draw [fill,red] (2,5.5) circle [radius=0.05]; \draw [ultra thick] (4.5,7.5) -- (7,7.5); \draw [thick] (5,6) -- (5,9); \draw [thick] (5.5,6.5) to [out=90, in=90] (6.5,6.5); \draw [thick] (5.5,6) -- (5.5,6.5); \draw [thick] (6.5,6) -- (6.5,6.5); \draw [thick] (6,6.8) -- (6,8.2); \draw [thick] (5.5,8.5) -- (5.5,9); \draw [thick] (6.5,8.5) -- (6.5,9); \draw [thick] (5.5,8.5) to [out=270, in=270] (6.5,8.5); \node at (5.2,7.3) {$\mu$} node at (6.2,7.3) {$\psi$} node at (4.8,6) {$x$} node at (4.8,9) {$a$} node at (5.3,6) {$y$} node at (6.3,6) {$z$} node at (5.3,9) {$b$} node at (6.3,9) {$c$} node at (5.8,7) {$u$} node at (5.8,7.9) {$f$} node at (6,6.5) {$\kappa$}; \draw [fill,red] (5,7.5) circle [radius=0.05]; \draw [fill,red] (6,7.5) circle [radius=0.05]; \draw [fill,red] (6,6.8) circle [radius=0.05]; \draw [ultra thick] (9,5.5) -- (11.5,5.5); \draw [thick] (10,4.3) to [out=90, in=90] (11,4.3); \draw (10,3.8) -- (10,4.3); \draw (11,3.8) -- (11,4.3); \draw (10.5,4.6) -- (10.5,4.8); \draw [thick] (9.5,4.8) to [out=90, in=90] (10.5,4.8); \draw (9.5,3.8) -- (9.5,4.8); \draw (10,5.1) -- (10,5.9); \draw [thick] (9.5,6.2) to [out=270, in=270] (10.5,6.2); \draw (9.5,6.2) -- (9.5,7.2); \draw (10.5,6.2) -- (10.5,6.4); \draw [thick] (10,6.7) to [out=270, in=270] (11,6.7); \draw (10,6.7) -- (10,7.2); \draw (11,6.7) -- (11,7.2); \node at (10.2,5.4) {$\phi$} node at (9.3,3.8) {$x$} node at (9.8,3.8) {$y$} node at (10.8,3.8) {$z$} node at (9.3,7.2) {$a$} node at (9.8,7.2) {$b$} node at (10.8,7.2) {$c$} node at (9.8,5.2) {$v$} node at (9.8,5.8) {$d$} node at (10.3,4.7) {$u$} node at (10.3,6.2) {f} node at (10.5,4.4) {$\kappa$} node at (10,4.9) {$\rho$}; \draw [fill,red] (10,5.5) circle [radius=0.05]; \draw [fill,red] (10,5.1) circle [radius=0.05]; \draw [fill,red] (10.5,4.6) circle [radius=0.05]; \draw [thick,->] (4.6,1.5) -- (6.9,1.5); \draw [thick,->] (10.2,1.5) to [out=0,in=270] (11,3.5); \draw [thick,->] (0.5,3.5) to [out=270,in=180] (1.3,1.5); \draw [thick,->] (2.5,6.5) to [out=60,in=180] (4.2,7.5); \draw [thick,->] (7.3,7.5) to [out=0,in=120] (9,6.5); \node at (5.7,1.9) {$[M^{ec;v}_{d;wz}]^{\sigma\lambda}_{\phi\zeta}$} node at (0,1.3) {$[M^{ab;w}_{e;xy}]^{\mu\nu}_{\sigma\omega}$} node at (2.8,7.8) {$[M^{bc;u}_{f;yz}]^{\nu\lambda}_{\psi\kappa}$} node at (8.7,7.8) {$[M^{af;v}_{d;xu}]^{\mu\psi}_{\phi\rho}$} node at (12,1.3) {$F^{abc}_{d;ef}([F^{xyz}_{v;w u}]^{\omega\zeta}_{\kappa\rho})^\dagger$}; \end{tikzpicture} \end{equation} \end{figure} Similarly to bosonic condensation, the M6J symbol for fermion condensation carries several groups of indices. First, there are the labels of the bulk anyons $a,b,c$. Second, there are the boundary excitations $x,y,z$ to which they condense, and third, the condensation channel labels $\mu,\nu,\lambda$. In addition to those, it has an extra index to label the fusion channels of the boundary excitations. We then introduce $s^a_x(\mu),s^{xy}_w(\omega)$ to denote the parity of the condensation channel and fusion channel respectively, 0 for even and 1 for odd. To avoid dependence of fermionic wave functions on the ordering of odd channels, we introduce one ``Majorana number'' $\theta_{\underline{x}}$ on each condensation vertex $\underline{x}$ and also on each fusion vertex of the boundary excitations; these are denoted by red dots in the above diagram.
These Majorana numbers satisfy \begin{align*} &\theta^2_{\underline{x}}=1,\\ &\theta_{\underline{x}}\theta_{\underline{y}}=-\theta_{\underline{y}}\theta_{\underline{x}},\\ &\theta_{\underline{x}}^\dagger=\theta_{\underline{x}}. \end{align*} In the work of Gu, Wang and Wen (arXiv:1010.1517), 6j symbols were introduced that carry the Majorana numbers along. They are defined as \begin{equation} [\mathcal{F}^{xyz}_{v;wu}]^{\omega\zeta}_{\kappa\rho}=\theta^{s^{xy}_w(\omega)}_{\underline{\omega}}\theta^{s^{wz}_v(\zeta)}_{\underline{\zeta}}\theta^{s^{xu}_v(\rho)}_{\underline{\rho}}\theta^{s^{yz}_u(\kappa)}_{\underline{\kappa}}[F^{xyz}_{v;wu}]^{\omega\zeta}_{\kappa\rho}. \end{equation} Similarly, one can define a new M-tensor carrying the Majorana numbers: \begin{equation} [\mathcal{M}^{bc;u}_{f;yz}]^{\nu\lambda}_{\psi\kappa}=(\theta^{s^b_y(\nu)}_{\underline{\nu}}\theta^{s^c_z(\lambda)}_{\underline{\lambda}}\theta^{s^f_u(\psi)}_{\underline{\psi}}\theta^{s^{yz}_u(\kappa)}_{\underline{\kappa}})^\dagger[M^{bc;u}_{f;yz}]^{\nu\lambda}_{\psi\kappa}. \end{equation} Therefore the pentagon identity for M6J symbols with Majorana numbers is given by \begin{equation} \sum_{e,w,\sigma,\omega,\zeta}\mathcal{F}^{abc}_{d;ef}([\mathcal{F}^{xyz}_{v;wu}]^{\omega\zeta}_{\kappa\rho})^\dagger[\mathcal{M}^{ec;v}_{d;wz}]^{\sigma\lambda}_{\phi\zeta}[\mathcal{M}^{ab;w}_{e;xy}]^{\mu\nu}_{\sigma\omega}\simeq\sum_\psi[\mathcal{M}^{af;v}_{d;xu}]^{\mu\psi}_{\phi\rho}[\mathcal{M}^{bc;u}_{f;yz}]^{\nu\lambda}_{\psi\kappa}. \end{equation} The order of the M and F tensors now matters, because they carry with them Majorana numbers. Finally, by removing the Majorana numbers we arrive at the fermionic pentagon identity for M6J symbols: \begin{equation} \sum_{e,w,\sigma,\omega,\zeta}[M^{ab;w}_{e;xy}]^{\mu\nu}_{\sigma\omega}[M^{ec;v}_{d;wz}]^{\sigma\lambda}_{\phi\zeta}F^{abc}_{d;ef}([F^{xyz}_{v;wu}]^{\omega\zeta}_{\kappa\rho})^\dagger=(-1)^{s^a_x(\mu)s^{yz}_u(\kappa)}\sum_\psi[M^{bc;u}_{f;yz}]^{\nu\lambda}_{\psi\kappa}[M^{af;v}_{d;xu}]^{\mu\psi}_{\phi\rho}. \end{equation} \vspace{1cm} {\bf \underline{Vertex Lifting Coefficients (VLC)}} VLC's were introduced in \cite{Eliens:2013epa}. They are linear maps relating the fusion basis of the bulk theory to the boundary fusion basis. This is illustrated in (\ref{eq:defineVLC}). \begin{equation} \label{eq:defineVLC} \mathtikzS{0.4}{ \vertexILC{0}{0}{2cm}{$X$}{$Y$}{$Z$}{magenta}; } = \quad\sum_{i,j,k} \quad \begin{bmatrix} X & Y & Z \\ i & j & k \end{bmatrix}\ \mathtikzS{0.4}{ \vertexILC{0}{0}{2cm}{$i$}{$j$}{$k$}{green}; } \end{equation} Note that the notation introduced there is applicable when all $W_{i x} \in \{0,1\}$. These VLC's can be separated into three classes. First, there are the ``vacuum vertices'', where three vacuum lines of the boundary theory meet. These vertices precisely define the product and co-product of the condensate algebra $\mathcal{A}$. Then, there are vertices where the boundary vacuum line meets a boundary excitation, leaving it ``invariant''. These vertices precisely define the left (right) action of the algebra on the module corresponding to the boundary excitation. Finally, there are three boundary excitations meeting at a vertex, defining fusion maps in the condensed theory. Fusion of modules is defined in (\ref{eq:fuse_modules}); the result can be decomposed in terms of irreducible (simple) modules using (\ref{eq:extract_lambda}), as we have discussed.
These decomposition coefficients can be related to the VLC's defining the fusion map as illustrated in (\ref{eq:lambda_VLC}), by mapping them to the parent theory and comparing the fusion basis by basis. This connection with \cite{Eliens:2013epa} is valid when we restrict to the situation where $W_{i x} \in \{0,1\}$, and so the channel labels do not feature here. \footnote{We have not worked very hard to ensure that the normalisation implied in the relation shown is identical to the normalisation taken in \cite{Eliens:2013epa}. But to work it out is straightforward, and beside the point.} \begin{figure}[htbp] \centering \begin{equation} \label{eq:lambda_VLC} \begin{tikzpicture}[scale=0.7] \scalebox{0.7}[0.7]{ \draw [thick, magenta] (0,0.1) -- (0,2); \draw [thick, magenta] (0,-1) -- (0,-0.4); \draw [thick, magenta] (0,-0.2) -- (0,-0.1); \draw [thick, magenta] (2,-1) -- (2,2); \draw [thick, green] (1,3) -- (0,2); \draw [thick, green] (1,3) -- (2,2); \draw [thick, green] (0,-1) -- (1,-2); \draw [thick, green] (2,-1) -- (1,-2); \draw [thick, green] (1,4) -- (1,3); \draw [thick, green] (1,-3) -- (1,-2); \node at (0.4,1.5) {$M_1$} node at (2.4,1.5) {$M_2$} node at (1.2,4) {$i$} node at (1.2,-3) {$i'$} node at (0.5,2.8) {$j$} node at (1.5,2.8) {$k$} node at (0.5,-1.9) {$j'$} node at (1.5,-1.9) {$k'$}; \scalebox{0.5}[0.5]{\tri{0}{4cm}{45}} \scalebox{0.5}[0.5]{\tri{4cm}{4cm}{45+90}} \scalebox{0.5}[0.5]{\tri{4cm}{-2cm}{45-180}} \scalebox{0.5}[0.5]{\tri{0}{-2cm}{45-90}} \draw [thick] (0,1) to [out=225,in=180] (0,0); \draw [thick] (0,0) to [out=0,in=230] (2,1); \draw [thick] (0.3,0) -- (-0.7,-1); \draw [fill] (-0.7,-1) circle [radius=0.08]; \node at (-0.5,-0.3) {$\mathcal{A}$}; \begin{scope}[xshift=8cm,yshift=0cm]{ \node at (0.2,4) {$i$} node at (0.2,-3) {$i'$} node at (-3,0.5) {$\tiny{=\sum_{M_3}\lambda^{[M_1,M_2,M_3]ijk}_{i'j'k'}}$} node at (-0.5,-0.3) {$\mathcal{A}$} node at (0.4,0.5) {$M_3$}; \draw [thick,green] (0,4) -- (0,-3); \draw [thick] (-1,-0.5) -- (0,0.5); \draw [fill] (-1,-0.5) circle [radius=0.08]; \draw [thick, magenta] (0,-1) -- (0,2); \scalebox{0.5}[0.5]{\tri{8cm}{3.8cm}{90}} \scalebox{0.5}[0.5]{\tri{8cm}{-1.8cm}{270}} } \end{scope} \begin{scope}[xshift=21cm,yshift=0cm]{ \node at (-6,0.5) {$\tiny{=\sum_{M_3,a}\rho^{M_1j}_{aj'}\rho^{M_2k}_{ak'}\begin{bmatrix} M_1&M_2&M_3\\ j&k&i \end{bmatrix} \begin{bmatrix} M_1&M_2&M_3\\ j'&k'&i' \end{bmatrix}^*}$}; \draw [thick, green] (0,0.1) -- (0,1.5); \draw [thick, green] (0,-0.6) -- (0,-0.1); \draw [thick, green] (2,-0.6) -- (2,1.5); \draw [thick, green] (1,2.5) -- (0,1.5); \draw [thick, green] (1,2.5) -- (2,1.5); \draw [thick, green] (0,-0.6) -- (1,-1.6); \draw [thick, green] (2,-0.6) -- (1,-1.6); \draw [thick, green] (1,3.5) -- (1,2.5); \draw [thick, green] (1,-2.6) -- (1,-1.6); \draw [thick, green] (1,3.5) -- (1,4); \draw [thick, green] (1,-2.6) -- (1,-3); \node at (1.2,4) {$i$} node at (1.2,-3) {$i'$} node at (0,2) {$j$} node at (2,2) {$k$} node at (0,-1) {$j'$} node at (2,-1) {$k'$}; \draw [thick, green] (0,1) to [out=225,in=180] (0,0); \draw [thick, green] (0,0) to [out=0,in=230] (2,1); \node at (0.5,-0.2) {$a$}; } \end{scope} } \end{tikzpicture} \end{equation} \end{figure} Note that the rightmost diagram in (\ref{eq:lambda_VLC}) is nothing but the expansion of the following diagram (\ref{fig:bais_bubble}) in basis form.
\begin{figure}[htbp] \centering \begin{equation} \label{fig:bais_bubble} \begin{tikzpicture}[scale=0.6] \draw [thick, magenta] (0,0.1) -- (0,1.5); \draw [thick, magenta] (0,-0.6) -- (0,-0.1); \draw [thick, magenta] (2,-0.6) -- (2,1.5); \draw [thick, magenta] (1,2.5) -- (0,1.5); \draw [thick, magenta] (1,2.5) -- (2,1.5); \draw [thick, magenta] (0,-0.6) -- (1,-1.6); \draw [thick, magenta] (2,-0.6) -- (1,-1.6); \draw [thick, magenta] (1,3.5) -- (1,2.5); \draw [thick, magenta] (1,-2.6) -- (1,-1.6); \node at (0.4,0.5) {\tiny{$M_1$}} node at (2.4,0.5) {\tiny{$M_2$}} node at (1.4,3) {\tiny{$M_3$}} node at (1.4,-2) {\tiny{$M_3$}}; \draw [thick] (0,1) to [out=225,in=180] (0,0); \draw [thick] (0,0) to [out=0,in=230] (2,1); \node at (-0.5,-0.3) {\tiny{$\mathcal{A}$}}; \node at (-1.5,0.5) {$\tiny{\sum_{M_3}}$}; \end{tikzpicture} \end{equation} \end{figure} A lesson learned here is that the defining data of the condensed theory are basically the product of the algebra $\mathcal{A}$ and its left (right) modules, from which everything else derives. The precise mathematical formulation also allows extension to cases where $W_{ix}>1 $ rather seamlessly. \section{Illustrating with examples} Having developed the formal computational tools based on (super)-Frobenius algebras and their modules, here we would like to illustrate these tools in explicit examples and, in the process, understand interesting features of gapped boundaries. \subsection{Beginner's level -- the Toric code} \label{sec:toric} The toric code is the paradigmatic example of a bosonic topological order in 2+1 dimensions. We are going to see that most of the important physics of gapped boundaries and junctions can already be understood here. It has four excitations, $\{1,e,m,f\}$. As is well known, there are {\it three} kinds of gapped boundaries for the toric code topological order. Among these boundaries, two are conventional ones obtained from condensing bosons: one is termed the electric boundary, where the electric charges condense (i.e.\ $\mathcal{A}_e = 1 \oplus e$), and the other the magnetic boundary, where the magnetic charges condense (i.e.\ $\mathcal{A}_m = 1\oplus m$). Here we would like to discuss in detail the third type of gapped boundary, which follows from condensing the $e$-$m$ bound state, a fermion. This has been mentioned before in \cite{Bhardwaj:2016clt}. We will also study junctions between these boundaries. \subsubsection{The fermion condensate} \label{sec:toricf_bc} The fermionic Frobenius algebra is given by \begin{equation} \mathcal{A}_f = 1 \oplus f. \end{equation} The boundary excitations are characterized by \begin{equation} X_f = e\oplus m. \end{equation} We can summarize this data in terms of the $W$ and $\Omega$ matrices: \begin{eqnarray} W= \begin{blockarray}{ccccc} 1 & e & m & f\\ \begin{block}{(cccc)l} 1 & 0 & 0 & 1 & ~\mathcal{A}_f\\ 0 & 1 & 1 & 0 & ~X_f\\ \end{block} \end{blockarray} ~~~~\&~~~~\Omega= \begin{blockarray}{ccccc} 1 & e & m & f\\ \begin{block}{(cccc)l} 1 & 0 & 0 & -1 & ~\mathcal{A}_f\\ 0 & 1 & -1 & 0 & ~X_f\\ \end{block} \end{blockarray} \end{eqnarray} The fusion rules can be obtained using (\ref{eq:evenfuse}). \begin{longtable}{l|ll} $\otimes$ & $\mathcal{A}_f$ & $X_f$\\\hline $\mathcal{A}_f$ & $\mathcal{A}_f$ & $X_f$\\ $X_f$ & $X_f$ & $\mathcal{A}_f$\\ \end{longtable} One can also check that $X_f$ is a non-q-type object with trivial endomorphisms, and further that $X_f$ is responsible for generating the fermion parity.
The number of anyons it contains as a module of $\mathcal{A}_f$ equals 2. This is the same as the total number of non-q-type defects in the gapped boundary -- which is also 2 (where the ``trivial defect'' has to be included). This confirms the claim made after (\ref{eq:littlev}).\\ The 6j symbols of the condensed phase can be read off following the discussion in the previous section. They are given by $F^{\mathcal{A}_f\mathcal{A}_f\mathcal{A}_f}_{\mathcal{A}_f;\mathcal{A}_f\mathcal{A}_f}=F^{X_f\mathcal{A}_f\mathcal{A}_f}_{X_f;X_f\mathcal{A}_f}=F^{\mathcal{A}_fX_f\mathcal{A}_f}_{X_f;X_fX_f}=F^{\mathcal{A}_f\mathcal{A}_fX_f}_{X_f;\mathcal{A}_fX_f}=F^{X_fX_f\mathcal{A}_f}_{\mathcal{A}_f;\mathcal{A}_fX_f}=F^{X_f\mathcal{A}_fX_f}_{\mathcal{A}_f;X_fX_f}=F^{\mathcal{A}_fX_fX_f}_{\mathcal{A}_f;X_f\mathcal{A}_f}=F^{X_fX_fX_f}_{X_f;\mathcal{A}_f\mathcal{A}_f}=1$. \subsubsection{The Bosonic-Fermionic junctions} \label{sec:bfjunction} As alluded to in the previous subsection, the toric code model admits two bosonic gapped boundaries, corresponding to the electric $\mathcal{A}_e$ and magnetic $\mathcal{A}_m$ condensates, i.e. \begin{equation} \mathcal{A}_e = 1 \oplus e, \qquad \mathcal{A}_m = 1 \oplus m. \end{equation} For completeness, let us recall also that each of these bosonic boundaries hosts one non-trivial excitation. Let us denote the one in the electric boundary by $X_e$ and that in the magnetic boundary by $X_m$. They are given by \begin{equation} X_e = m\oplus f, \qquad X_m = e\oplus f. \end{equation} One can readily check using (\ref{eq:evenfuse}) that they satisfy $\mathbb{Z}_2$ fusion rules. We would like to consider junctions between these bosonic boundaries and the fermionic boundary introduced in the previous subsection. First, we consider the $e-f$ junction; results for the $m-f$ junction follow in a completely analogous manner. By considering the ``induced'' bimodule $\mathcal{A}_e \otimes c_i \otimes \mathcal{A}_f$, we find that there is only one excitation $X_{ef}$ localized at the $e-f$ junction: the four different anyons $c_i \in \{1,e,m,f\}$ of the toric code model generate exactly the same bimodule, namely \begin{equation} X_{ef} = 1\oplus e \oplus m \oplus f, \qquad \textrm{i.e. } W^{\mathcal{A}_e| \mathcal{A}_f}_{c_i X_{ef}} = 1, \forall i. \end{equation} We note that $W^{\mathcal{A}_e| \mathcal{A}_f}_{c_i X_{ef}} = W^{\mathcal{A}_f| \mathcal{A}_e}_{c_i X_{fe}}$. Since this is an Abelian model, one can readily work out the fusions using (\ref{eq:bimodule_fuse}). The fusion rules are given by \begin{equation} \begin{aligned} \label{eq:ef_fuse} &X_{ef}\otimes_{\mathcal{A}_f} X_{fe}= \mathcal{A}_e \oplus X_e\\ &X_{fe}\otimes_{\mathcal{A}_f} X_{ef}= \mathcal{A}_f \oplus X_f \end{aligned} \end{equation} where $X_e$ and $X_f$ are the non-trivial excitations of the $e$ and $f$ boundaries respectively. From these fusion rules we can conclude that the quantum dimension of this excitation is $\sqrt{2}$.\\ We note that (\ref{eq:ef_fuse}) takes the same form as the fusion of defects localized at the $e-m$ junction. There, one can also readily check that \begin{equation} \begin{aligned} \label{eq:em_fuse} &X_{em}\otimes X_{me}= \mathcal{A}_e \oplus X_e,\\ &X_{me}\otimes X_{em}= \mathcal{A}_m \oplus X_m, \qquad X_{em} = X_{me} = 1\oplus e \oplus m\oplus f. \end{aligned} \end{equation} Therefore it is known that $X_{em}$ also has quantum dimension $ \sqrt{2}$ \cite{Barkeshli:2013yta, barkeshli_classification_2013}.
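Since the bulk here is Abelian, the reduction in (\ref{eq:bimodule_fuse}) is simple enough to automate. The following minimal sketch (all function names are ours) fuses two induced junction excitations, strips the redundant copy of the middle condensate, and reads off module multiplicities from the rows of the $W$-matrix, assuming trivial endomorphisms and $W$ entries in $\{0,1\}$; the Majorana-mode subtlety of section \ref{sec:fusion_junc} is deliberately ignored at this stage and is taken up next.
\begin{verbatim}
# anyons as Z2 x Z2 charges, as in the earlier sketch
anyon = {'1': (0, 0), 'e': (1, 0), 'm': (0, 1), 'f': (1, 1)}

def fuse(*charges):
    return tuple(sum(c) % 2 for c in zip(*charges))

A_f = [anyon['1'], anyon['f']]                 # 1 + f
modules_e = {'A_e': [anyon['1'], anyon['e']],  # anyon content of the
             'X_e': [anyon['m'], anyon['f']]}  # simple left A_e-modules

def junction_fusion(B, ci, cj, modules):
    # X_{AB}(ci) x_B X_{BA}(cj) reduced to left A-modules:
    # n_k = #{ l in B : ci.l.cj = k }, then mult_x = sum_k n_k W_{kx}
    n = {}
    for l in B:
        k = fuse(ci, l, cj)
        n[k] = n.get(k, 0) + 1
    return {x: sum(n.get(k, 0) for k in content)
            for x, content in modules.items()}

# X_ef (x)_{A_f} X_fe, both induced from the trivial anyon:
print(junction_fusion(A_f, anyon['1'], anyon['1'], modules_e))
# -> {'A_e': 1, 'X_e': 1}, the first line of (eq:ef_fuse)
\end{verbatim}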
Recall from section \ref{sec:fusion_junc} that there is generically a subtlety regarding Majorana modes, and that in the computation above one should make the replacement \begin{equation} \label{eq:AftoAfp} \mathcal{A}_f \to \mathcal{A}_f' \equiv 1 \oplus f \otimes \psi_0. \end{equation} This does not affect the conclusion in (\ref{eq:ef_fuse}), or any of the fusion rules in a single gapped boundary -- all it does is to tag an odd fusion channel with an explicit factor of $\psi_0$. It does, however, make a crucial difference below, as we are going to see. Consider the fusion of excitations at different types of junctions, as illustrated in figure \ref{fig:ef-fm}. \begin{figure}[h] \centering \begin{tikzpicture} \draw [thick] (0,0) -- (6,0); \draw [thick] (1.9,-0.1) -- (2.1,0.1); \draw [thick] (1.9,0.1) -- (2.1,-0.1); \draw [thick] (3.9,-0.1) -- (4.1,0.1); \draw [thick] (3.9,0.1) -- (4.1,-0.1); \node at (1,0.3) {$\mathcal{A}_e$}; \node at (3,0.3) {$\mathcal{A}_f$}; \node at (5,0.3) {$\mathcal{A}_m$}; \draw [thick] (8,0) -- (12,0); \draw [thick] (9.9,-0.1) -- (10.1,0.1); \draw [thick] (9.9,0.1) -- (10.1,-0.1); \draw [thick,->] (6.5,0) -- (7.5,0); \node at (9,0.3) {$\mathcal{A}_e$}; \node at (11,0.3) {$\mathcal{A}_m$}; \end{tikzpicture} \caption{Fusion of $ef$ and $fm$ junctions. } \label{fig:ef-fm} \end{figure}\\ We might expect the following fusion rule \begin{equation} \label{eq:fuse_effm1} X_{ef}\otimes X_{fm}=\#\cdot X_{em} \end{equation} where $\#$ should be some positive integer. But one readily sees that this is not possible if quantum dimensions are conserved in the process of fusion -- which they should be, to ensure that the counting of ground state degeneracy remains a robust topological number. This is because, using the methods above, we found that all these defects have quantum dimension $\sqrt{2}$, so that $\#$ could not possibly be an integer. Now we reconsider (\ref{eq:fuse_effm1}) by introducing the free fermions $\psi_0$. The fusion described in (\ref{eq:fuse_effm1}) can now be computed as follows: \begin{align} \label{eq:fuse_effm2} &[(1\oplus e )\otimes (1\oplus f \otimes \psi_0)] \otimes_{1 \oplus f\psi_0} [(1 \oplus f \otimes \psi_0) \otimes (1\oplus m)] \nonumber \\ &= (1\oplus e \oplus m \oplus f) \otimes (1\oplus f\otimes \psi_0) \nonumber \\ &= (1\oplus e \oplus m \oplus f) \otimes (1\oplus \psi_0). \end{align} This shows that at the $e-m$ junction we obtain $X_{em}$ and also a Majorana mode $(1 \oplus \psi_0)$ -- it has to be interpreted as a Majorana mode here since it is localized at the zero-dimensional $e-m$ junction! This may appear somewhat mysterious. To elucidate the physics, we demonstrate it using two different methods. First, let us study the lattice model of the toric code and explicit constructions of its gapped boundaries. It is convenient to describe these boundaries using the Wen-Plaquette version of the toric code topological order \cite{Wen:2003yv}, as has been thoroughly discussed in \cite{Yu:2012eu}.
\begin{figure}[h] \centering \begin{tikzpicture} \draw [white, fill=yellow] (1,-1) -- (1,0) -- (2,0) -- (2,-1) -- (1,-1); \draw [white, fill=yellow] (0,2) -- (0,3) -- (1,3) -- (1,2) -- (0,2); \foreach \y in {0,2} \draw [white, fill=yellow] (2,\y) -- (2,\y+1) -- (3,\y+1) -- (3,\y) -- (2,\y); \foreach \x in {0,1} \foreach \y in {0} \draw [blue, thick, fill=yellow] (\x,\y+\x) -- (\x,1+\y+\x) -- (\x+1,1+\y+\x) -- (\x+1,\y+\x) -- (\x,\y+\x); \foreach \x in {0,1,2} \draw [blue, thick] (\x,-1) -- (\x,3); \foreach \y in {0,1,2} \draw [blue, thick] (0,\y) -- (3,\y); \foreach \y in {-0.5,1.5} \node at (0.5,\y) {x} node at (1.5,\y) {z} node at (2.5,\y) {x} node at (0.5,\y+1) {z} node at (1.5,\y+1) {x} node at (2.5,\y+1) {z}; \node at (0,2) {\scriptsize$\sigma_y$} node at (1,2) {\scriptsize$\sigma_y$} node at (2,2) {\scriptsize$\sigma_x$} node at (2,1) {\scriptsize$\sigma_x$} node at (1,1) {\scriptsize$\sigma_y$} node at (0,1) {\scriptsize$\sigma_y$}; \draw (-0.48,0.52) -- (2,3); \draw (0,-1) -- (3,2); \draw (2,-1) -- (3,0); \draw (1,3) -- (3,1); \draw (-0.48,2.48) -- (3,-1); \draw (-0.48,0.48) -- (1,-1); \draw (-0.48,2.52) -- (0,3); \draw [ultra thick, magenta, fill=magenta, opacity=0.5] (0.5,1.2) to [out=180, in=90] (0.25,2) to [out=90, in=180] (0.5,2.8) to [out=0, in=270] (0.75,2) to [out=270, in=0] (0.5,1.2); \node [magenta] at (-1,2.2) {f}; \draw [->, magenta] (-0.9,2.2) to [out=0,in=150] (0.15,2.2); \draw [dashed, magenta] (0.8,2.6) to [out=0, in= 90] (2.6,1.5); \draw [dashed, magenta] (0,0.4) -- (0.8,0.4) to [out=0, in= 270] (2.6,1.5); \draw [dotted, ultra thick] (1.5,3.2) -- (1.5,3.5); \draw [dotted, ultra thick] (1.5,-1.2) -- (1.5,-1.5); \draw [dotted, ultra thick] (3.2,1) -- (3.5,1); \end{tikzpicture} \caption{Both the Kitaev model \cite{kitaev_fault-tolerant_2003} and the Wen Plaquette model \cite{Wen:2003yv} realize the toric code topological order. They are illustrated in the same picture here. The black lines denote the lattice of Kitaev's toric code model and the blue lines denote the lattice of the Wen plaquette model \cite{Wen:2003yv}. Note that in the former, the spin degrees of freedom live on the links, whereas in the latter, they live on the vertices. Therefore, a spin-1/2 degree of freedom lives wherever the black lines intersect the blue lattice. The plaquettes are divided into two sets, the $Z$ and $X$ plaquettes. The Hamiltonian acts in a way depending on this division, as reviewed briefly in (\ref{eq:Wen_H}).} \label{fig:WenPlaq_Kitaev} \end{figure} For completeness, the Hamiltonian (viewed from the perspective of the Wen Plaquette model) is reproduced here \begin{equation} \begin{aligned} \label{eq:Wen_H} &H=-\sum_{i\in Z\ plaquette}\hat{Z_i}-\sum_{i\in X\ plaquette}\hat{X_i}\\ &\hat{X_i}=\prod_{e\in sites\ around\ plaquette\ X_i}\sigma_e^x\,, & \, \hat{Z_i}=\prod_{e \in sites\ around\ plaquette\ Z_i}\sigma_e^z \end{aligned} \end{equation} where the $\sigma$'s are Pauli matrices. Acting with a $\sigma^z$ operator on a vertex creates a pair of $e$'s in the two adjacent $X$ plaquettes. Similarly, acting with a $\sigma^x$ on a vertex creates a pair of $m$'s in the two adjacent $Z$ plaquettes.\\ The fermion gapped boundary appears as a ``smooth'' boundary of the blue lattice. A ``smooth'' fermionic boundary of the Wen plaquette model was discussed in \cite{Yu:2012eu}. To visualize the boundary modes, it is most convenient to fermionize the boundary spin degrees of freedom, turning them into a set of Majorana modes $\{c_i\}$, one at each boundary vertex $i$.
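Before turning to the boundary physics, one can quickly verify that all plaquette terms of (\ref{eq:Wen_H}) commute, so that the bulk Hamiltonian is exactly solvable. The sketch below is our own check (not part of the constructions cited above): it places the model on a small torus with periodic boundary conditions and encodes each plaquette operator as a pair of $\mathbb{F}_2$ bit-vectors, in which representation two Pauli strings commute iff their symplectic form vanishes.
\begin{verbatim}
import numpy as np

# a Pauli string on n sites as a pair of F2 bit-vectors (x, z);
# two strings commute iff x1.z2 + z1.x2 = 0 mod 2
def commute(p, q):
    (x1, z1), (x2, z2) = p, q
    return (x1 @ z2 + z1 @ x2) % 2 == 0

L = 4            # 4x4 torus of spins, one per vertex of the blue lattice
n = L * L
idx = lambda r, c: (r % L) * L + (c % L)

plaquettes = []
for r in range(L):
    for c in range(L):
        corners = [idx(r, c), idx(r, c + 1),
                   idx(r + 1, c), idx(r + 1, c + 1)]
        x, z = np.zeros(n, dtype=int), np.zeros(n, dtype=int)
        for s in corners:                    # checkerboard pattern of
            (x if (r + c) % 2 else z)[s] = 1 # X- and Z-plaquettes
        plaquettes.append((x, z))

assert all(commute(p, q) for p in plaquettes for q in plaquettes)
print("all plaquette terms commute")
\end{verbatim}
The check passes because edge-sharing plaquettes are of opposite type and overlap on exactly two sites, while corner-sharing plaquettes are of the same type; either way the symplectic form vanishes.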
As a check, a fermion string operator can be applied at the boundary as shown in the figure, showing that an individual $f$ anyon can be created or destroyed there -- justifying the claim that $f$ condenses at the boundary \cite{Yu:2012eu,Bhardwaj:2016clt}. The boundary is gapless if translation invariance is preserved \cite{Yu:2012eu}. There are multiple ways to gap it. One way, discussed in \cite{Bhardwaj:2016clt}, is to introduce an extra set of Majorana modes $\{\gamma_i\}$, one at each vertex at the boundary. Another possibility is simply to give up translation invariance, and introduce a boundary Hamiltonian that pairs neighbouring Majorana modes. For our purpose, this suffices to illustrate the fusion rules of junctions discussed above. From the perspective of the Kitaev lattice, the fermionic boundary looks like a zig-zag rugged edge. On the other hand, it is well known that the {\it rough} and {\it smooth} boundaries of the Kitaev lattice correspond to gapped boundaries characterized by the electric condensate and the magnetic condensate respectively \cite{kitaev_models_2012, beigi_quantum_2011}. They in turn show up as rugged edges in the Wen Plaquette lattice. The $e$ boundary consists only of $Z$ plaquettes. Therefore the boundary Hamiltonian only includes $\hat{Z}$ operators, which commute with $\sigma^z$, the creation operator of the electric charge. When an electric charge approaches the boundary it will disappear, while a magnetic vortex gets stuck at it and becomes an excitation on the boundary. Note that at the boundary there are only three sites around each plaquette. The $m$ boundary works similarly -- one simply replaces $\hat{Z}$ by $\hat{X}$, and $\sigma^z$ by $\sigma^x$. Now we are ready to study the $e-f$, $m-f$ and $e-m$ junctions in the lattice model. First, as a warm up, consider the most familiar situation of an $e-m$ junction, illustrated in figure \ref{fig:emlattice} below. One can see that an odd number of Majorana modes must be trapped between the boundaries (i.e.\ one extra Majorana mode is left at the junction in the figure).
This is the well-known conclusion that we re-derived based on bi-modules in (\ref{eq:em_fuse}).\\ \begin{figure}[h] \centering \begin{tikzpicture} \draw (0,0) -- (5,0); \draw (0,1) -- (5,1); \draw (0,2) -- (3,2); \draw (1,0) -- (1,2); \draw (2,0) -- (2,2); \draw (3,0) -- (3,2); \draw (4,0) -- (4,2); \draw [blue, thick, fill=yellow] (0.5,0) -- (1,0.5) -- (0.5,1) -- (0,0.5) -- (0.5,0); \draw [blue, thick, fill=yellow] (1.5,0) -- (2,0.5) -- (1.5,1) -- (1,0.5) -- (1.5,0); \draw [blue, thick, fill=yellow] (2.5,0) -- (3,0.5) -- (2.5,1) -- (2,0.5) -- (2.5,0); \draw [blue, thick, fill=yellow] (3.5,0) -- (4,0.5) -- (3.5,1) -- (3,0.5) -- (3.5,0); \draw [blue, thick, fill=yellow] (4.5,0) -- (5,0.5) -- (4.5,1) -- (4,0.5) -- (4.5,0); \draw [blue, thick, fill=yellow] (0.5,1) -- (1,1.5) -- (0.5,2) -- (0,1.5) -- (0.5,1); \draw [blue, thick, fill=yellow] (1.5,1) -- (2,1.5) -- (1.5,2) -- (1,1.5) -- (1.5,1); \draw [blue, thick, fill=yellow] (2.5,1) -- (3,1.5) -- (2.5,2) -- (2,1.5) -- (2.5,1); \draw [blue, thick, fill=yellow] (3.5,1) -- (4,1.5) -- (3.5,2) -- (3,1.5) -- (3.5,1); \draw [blue, thick, fill=yellow] (4.5,1) -- (5,1.5) -- (4.5,2) -- (4,1.5) -- (4.5,1); \draw [blue, thick] (3,1.5) -- (3.5,1) -- (4,1.5) -- (4.5,1) -- (5,1.5); \draw [blue, thick] (2,1.5) -- (2.5,2) -- (2,2.5) -- (1.5,2) -- (2,1.5); \draw [blue, thick] (1,1.5) -- (1.5,2) -- (1,2.5) -- (0.5,2) -- (1,1.5); \draw [blue, thick] (0,2.5) -- (0.5,2); \foreach \x in {0.5,1.5,2.5,3.5,4.5} \foreach \y in {0.5,1.5} \node at (\x,\y) {z}; \foreach \x in {0,1,2,3,4,5} \foreach \y in {0,1} \node at (\x,\y) {x}; \node at (0,2) {x}; \node at (1,2) {x}; \node at (2,2) {x}; \foreach \x in {0,1,2} \draw[white, fill=white] (\x,2.5) circle [radius=0.07]; \draw[white, fill=white] (4.5,2) circle [radius=0.07]; \draw[white, fill=white] (3.5,2) circle [radius=0.07]; \draw[fill] (2.5,2) circle [radius=0.07]; \node at (6,2.2) {\scriptsize trapped Majorana mode}; \draw [thick] (4.4,2.2) to [out=180,in=30] (2.6,2.1); \draw [thick] (2.8,2.3) -- (2.6,2.1) -- (2.86, 2.12); \node at (-0.7,3) {m condensate}; \draw [thick,dashed] (0.7,3) -- (2,3) --(2,2.6); \node at (6.1,3) {e condensate}; \draw [thick,dashed] (4.85,3) -- (4,3) -- (4,2.1); \draw [dotted, ultra thick] (-0.5,1) -- (-0.2,1); \draw [dotted, ultra thick] (2.5,-0.5) -- (2.5,-0.2); \draw [dotted, ultra thick] (5.2,1) -- (5.5,1); \end{tikzpicture} \caption{An illustration of the $e-m$ junction on the lattice. The black lines denote the lattice of Kitaev's toric code model and the blue lines denote the lattice of the Wen plaquette model.} \label{fig:emlattice} \end{figure}\\ Now we can look into the problem we met in figure \ref{fig:ef-fm} of the previous subsection. We notice that there is indeed an ambiguity in the result! As illustrated in figure \ref{fig:efmjunction}, whether an unpaired Majorana mode is trapped at the $e-f$ junction depends on how we choose to gap the fermion boundary by pairing up Majorana modes. There is thus an ambiguity of $\sqrt{2}$ in the quantum dimension of the defect at the $e-f$ or $m-f$ junction. Nonetheless, at the end of the day, there is an odd number of Majorana modes shared between the $e-f$ and $f-m$ junctions. If we put such $m-f-e$ boundaries on a circle, we will find a total of two Majorana modes, one shared between the $e-f$ and $f-m$ junctions, and another located at the $e-m$ junction.
\\ \begin{figure}[h] \centering \begin{tikzpicture} \draw [blue, thick, fill=yellow] (0,0) rectangle (0.5,0.5); \draw [blue, thick, fill=white] (0.5,0) rectangle (1,0.5); \draw [blue, thick, fill=yellow] (0.5,0.5) rectangle (1,1); \draw [blue, thick, fill=white] (1,0.5) rectangle (1.5,1); \draw [blue, thick, fill=yellow] (1,1) rectangle (1.5,1.5); \draw [blue, thick, fill=yellow] (1,0) rectangle (1.5,0.5); \draw [blue, thick, fill=white] (1.5,0) rectangle (2,0.5); \draw [blue, thick, fill=white] (2,0.5) rectangle (2.5,1); \draw [blue, thick, fill=white] (1.5,1) rectangle (2,1.5); \draw [blue, thick, fill=yellow] (2,0) rectangle (2.5,0.5); \draw [blue, thick, fill=yellow] (1.5,0.5) rectangle (2,1); \draw [blue, thick, fill=yellow] (2,1) rectangle (2.5,1.5); \draw [blue, thick, fill=white] (2.5,1) rectangle (3,1.5); \draw [blue, thick, fill=white] (2.5,0) rectangle (3,0.5); \draw [blue, thick, fill=yellow] (2.5,0.5) rectangle (3,1); \draw [blue, thick, fill=white] (3,0.5) rectangle (3.5,1); \draw [blue, thick, fill=yellow] (3,0) rectangle (3.5,0.5); \draw [blue, thick, fill=white] (3.5,0) rectangle (4,0.5); \draw [blue, thick, fill=yellow] (5,0) rectangle (5.5,0.5); \draw [blue, thick, fill=white] (5.5,0) rectangle (6,0.5); \draw [blue, thick, fill=yellow] (5.5,0.5) rectangle (6,1); \draw [blue, thick, fill=white] (6,0.5) rectangle (6.5,1); \draw [blue, thick, fill=yellow] (6,1) rectangle (6.5,1.5); \draw [blue, thick, fill=yellow] (6,0) rectangle (6.5,0.5); \draw [blue, thick, fill=white] (6.5,0) rectangle (7,0.5); \draw [blue, thick, fill=white] (7,0.5) rectangle (7.5,1); \draw [blue, thick, fill=white] (6.5,1) rectangle (7,1.5); \draw [blue, thick, fill=yellow] (7,0) rectangle (7.5,0.5); \draw [blue, thick, fill=yellow] (6.5,0.5) rectangle (7,1); \draw [blue, thick, fill=yellow] (7,1) rectangle (7.5,1.5); \draw [blue, thick, fill=white] (7.5,1) rectangle (8,1.5); \draw [blue, thick, fill=white] (7.5,0) rectangle (8,0.5); \draw [blue, thick, fill=yellow] (7.5,0.5) rectangle (8,1); \draw [blue, thick, fill=white] (8,0.5) rectangle (8.5,1); \draw [blue, thick, fill=yellow] (8,0) rectangle (8.5,0.5); \draw [blue, thick, fill=white] (8.5,0) rectangle (9,0.5); \draw [dotted, ultra thick] (2,-0.2) -- (2,-0.5); \draw [dotted, ultra thick] (7,-0.2) -- (7,-0.5); \draw[fill] (1.5,1.5) circle [radius=0.06]; \draw[fill] (7.5,1.5) circle [radius=0.06]; \draw[fill] (2.5,1.5) circle [radius=0.06]; \draw[fill] (6.5,1.5) circle [radius=0.06]; \draw[fill] (2,1.5) circle [radius=0.06]; \draw[fill] (7,1.5) circle [radius=0.06]; \draw (1.75,1.5) to [out=90, in=90] (2.75,1.5) to [out=270, in=270] (1.75,1.5); \draw (6.25,1.5) to [out=90, in=90] (7.25,1.5) to [out=270, in=270] (6.25,1.5); \node at (0.25,1) {\scriptsize m} node at (5.25,1) {\scriptsize m} node at (3.75,1) {\scriptsize e} node at (8.75,1) {\scriptsize e} node at (2,2) {\scriptsize f} node at (7,2) {\scriptsize f} node at (3.25,1.75) {\scriptsize unit cell} node at (5.75,1.75) {\scriptsize unit cell}; \end{tikzpicture} \caption{There is an odd number of Majorana modes shared between the $e-f$ and $f-m$ junctions.} \label{fig:efmjunction} \end{figure}\\ As already noted in section \ref{sec:fusion_junc}, in the presence of a fermionic condensate, the quantum dimension of junctions can acquire a $\sqrt{2}$ factor ambiguity, corresponding to adding or subtracting a Majorana mode.
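This $\sqrt{2}$ ambiguity can be mimicked by elementary bookkeeping. The sketch below (a toy count, assuming only that the fermionic segment hosts one Majorana mode per boundary site and that gapping pairs them into nearest-neighbour dimers) shows that shifting the pairing by one site moves an unpaired Majorana mode from one junction to the other, while the total parity of unpaired modes is independent of the choice:
\begin{verbatim}
# Toy count: n Majorana modes gamma_0 .. gamma_{n-1} along the fermionic
# segment, with the two junctions sitting at the two ends.
def unpaired(n, offset):
    """Pair (offset, offset+1), (offset+2, offset+3), ...; return leftovers."""
    paired = set()
    i = offset
    while i + 1 < n:
        paired.update({i, i + 1})
        i += 2
    return [g for g in range(n) if g not in paired]

for n in (6, 7):
    for offset in (0, 1):
        print(f"n={n}, offset={offset}: unpaired modes {unpaired(n, offset)}")
# For odd n exactly one mode is always left over, but which junction hosts it
# depends on the pairing; only the total parity len(unpaired) mod 2 is fixed.
\end{verbatim}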
Now the fusion rules worked out using the methods of bi-modules in (\ref{eq:fuse_effm2}) can be understood as a {\it canonical} choice, in which we beef up the junctions by inserting an extra Majorana mode at whichever of the two junctions $e-f$ and $f-m$ originally lacks one, so that the two junctions become symmetric, each carrying a quantum dimension of $\sqrt{2}$. Of course, the resultant fusion product then carries two Majorana modes, instead of the single one that is always expected to be trapped at the $e-m$ junction. It is not possible to add only a single Majorana mode in a physical state: on a disk, one would have to add a Majorana at one of the $e-f$ and $f-m$ junctions, and another at the $e-m$ junction. The same results can also be understood from the perspective of Abelian Chern-Simons theory; this is relegated to the appendix. \subsubsection{The bimodules and computing the half-linking number} In the previous sections, we have obtained some ``coarse-grained'' data regarding the bimodules. Here we would like to provide some of the ``fine-grained'' data of these boundaries -- namely the actual Frobenius algebras characterizing the boundaries, and the left/right actions of the bi-modules -- to illustrate the general principles laid out in earlier sections. For concreteness, let us focus on the $e-f$ junction. To begin with, we need to solve for the two Frobenius algebras $\mathcal{A}_e$ and $\mathcal{A}_f$. Using the conditions discussed in section \ref{sec:frobeniusalgebra} and also the 6j-symbols of the toric code topological order, we obtain (\ref{eq:algebra-Am}, \ref{eq:algebra-Af}). \begin{figure}[htbp] \begin{equation} \label{eq:algebra-Am} \centering \begin{tikzpicture}[scale=0.6] \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-1.2,0) {$\sqrt{2}$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; } \end{scope} \begin{scope}[xshift=4cm,yshift=0cm]{ \node at (-2,0) {$=$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+4cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+4cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+4cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$1$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$1$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$1$}}; } \end{scope} \begin{scope}[xshift=8cm,yshift=0cm]{ \node at (-2,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+8cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+8cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+8cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$1$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$e$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$e$}}; } \end{scope} \begin{scope}[xshift=12cm,yshift=0cm]{ \node at (-2,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \vertexIC{0}{0}{1cm}{green};
\scalebox{0.5}[0.5]{\tri{-2cm+12cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+12cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+12cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$m$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$e$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$1$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (-2,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_e$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+16cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+16cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+16cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$m$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$1$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$e$}}; } \end{scope} \end{tikzpicture} \end{equation} \end{figure} \begin{figure}[htbp] \begin{equation} \label{eq:algebra-Af} \centering \begin{tikzpicture}[scale=0.6] \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-1.2,0) {$\sqrt{2}$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; } \end{scope} \begin{scope}[xshift=4cm,yshift=0cm]{ \node at (-2,0) {$=$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+4cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+4cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+4cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$1$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$1$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$1$}}; } \end{scope} \begin{scope}[xshift=8cm,yshift=0cm]{ \node at (-2,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+8cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+8cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+8cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$1$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$f$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$f$}}; } \end{scope} \begin{scope}[xshift=12cm,yshift=0cm]{ \node at (-2,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+12cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+12cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+12cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$f$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$f$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$1$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (-2,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$\mathcal{A}_f$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+16cm}{-2cm}{45}}; 
\scalebox{0.5}[0.5]{\tri{0cm+16cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+16cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\scriptsize{$f$}}; \node[above] at (-0.5cm,-0.5cm) {\scriptsize{$1$}}; \node[above] at (0.5cm,-0.5cm) {\scriptsize{$f$}}; } \end{scope} \end{tikzpicture} \end{equation} \end{figure} Then we would like to obtain the unique simple bimodule $X_{ef}$ already discussed in section \ref{sec:bfjunction}. Note that here we have used the rescaling freedom discussed in (\ref{eq:hombasis_rescale}) and introduced $\zeta^{\mathcal{A}_e}_1$, $\zeta^{\mathcal{A}_f}_1$, $\zeta^{\mathcal{A}_e}_e$ and $\zeta^{\mathcal{A}_f}_f$ to set all the coefficients in (\ref{eq:algebra-Am}, \ref{eq:algebra-Af}) to 1. A bi-module is separately a left module of $\mathcal{A}_e$ and a right module of $\mathcal{A}_f$. Therefore the left and right actions must separately satisfy (\ref{eq:reps_eq}) and its right-action counterpart. But as a bi-module, it must also satisfy commutativity between the left and the right action, as illustrated in (\ref{eq:bimodule}). The resulting bimodule actions are illustrated in (\ref{eq:bimodule-M1-1}) and (\ref{eq:bimodule-M1-2}). \begin{figure}[htbp] \centering \begin{equation}\label{eq:bimodule-M1-1} \begin{tikzpicture}[scale=0.3] \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-7,0) {$\sqrt{2}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-7,-3) -- (-4,0); \node at (-7.8,-3) {\footnotesize{$\mathcal{A}_e$}} node at (-3.2,4) {\footnotesize{$X_{ef}$}} node at (-3.2,-4) {\footnotesize{$X_{ef}$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {\Large{$=$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$1$}} node at (8.6,0.7) {\footnotesize{$1$}} node at (8.6,-0.9) {\footnotesize{$1$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$1$}} node at (8.6,0.7) {\footnotesize{$e$}} node at (8.6,-0.9) {\footnotesize{$e$}}; } \end{scope} \begin{scope}[xshift=9cm,yshift=0cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$1$}} node at (8.6,0.7) {\footnotesize{$m$}} node at (8.6,-0.9) {\footnotesize{$m$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta]
(8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$1$}} node at (8.6,0.7) {\footnotesize{$f$}} node at (8.6,-0.9) {\footnotesize{$f$}}; } \end{scope} \begin{scope}[xshift=-8cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$m$}} node at (8.6,0.7) {\footnotesize{$1$}} node at (8.6,-0.9) {\footnotesize{$m$}}; } \end{scope} \begin{scope}[xshift=-1cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$m$}} node at (8.6,0.7) {\footnotesize{$m$}} node at (8.6,-0.9) {\footnotesize{$1$}}; } \end{scope} \begin{scope}[xshift=6cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$m$}} node at (8.6,0.7) {\footnotesize{$e$}} node at (8.6,-0.9) {\footnotesize{$f$}}; } \end{scope} \begin{scope}[xshift=13cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_e$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (7,-0.3) {\footnotesize{$m$}} node at (8.6,0.7) {\footnotesize{$f$}} node at (8.6,-0.9) {\footnotesize{$e$}}; } \end{scope} \end{tikzpicture} \end{equation} \end{figure} \begin{figure}[htbp] \centering \begin{equation}\label{eq:bimodule-M1-2} \begin{tikzpicture}[scale=0.3] \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-6,0) {$\sqrt{2}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-1,-3) -- (-4,0); \node at (-0.2,-3) {\footnotesize{$\mathcal{A}_f$}} node at (-3.2,4) {\footnotesize{$X_{ef}$}} node at (-3.2,-4) {\footnotesize{$X_{ef}$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {\Large{$=$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, 
thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$1$}} node at (7.4,0.7) {\footnotesize{$1$}} node at (7.4,-0.9) {\footnotesize{$1$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$1$}} node at (7.4,0.7) {\footnotesize{$e$}} node at (7.4,-0.9) {\footnotesize{$e$}}; } \end{scope} \begin{scope}[xshift=9cm,yshift=0cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$1$}} node at (7.4,0.7) {\footnotesize{$m$}} node at (7.4,-0.9) {\footnotesize{$m$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$1$}} node at (7.4,0.7) {\footnotesize{$f$}} node at (7.4,-0.9) {\footnotesize{$f$}}; } \end{scope} \begin{scope}[xshift=-8cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$f$}} node at (7.4,0.7) {\footnotesize{$f$}} node at (7.4,-0.9) {\footnotesize{$1$}}; } \end{scope} \begin{scope}[xshift=-1cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$f$}} node at (7.4,0.7) {\footnotesize{$1$}} node at (7.4,-0.9) {\footnotesize{$f$}}; } \end{scope} \begin{scope}[xshift=6cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] 
(11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$f$}} node at (7.4,0.7) {\footnotesize{$e$}} node at (7.4,-0.9) {\footnotesize{$m$}}; } \end{scope} \begin{scope}[xshift=13cm,yshift=-10cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (8.8,4) {\footnotesize{$X_{ef}$}} node at (8.8,-4) {\footnotesize{$X_{ef}$}} node at (9,-0.3) {\footnotesize{$f$}} node at (7.4,0.7) {\footnotesize{$m$}} node at (7.4,-0.9) {\footnotesize{$e$}}; } \end{scope} \end{tikzpicture} \end{equation} \end{figure} We still have enough phase rescaling freedom here ($\zeta^{X_{ef}}_e$, $\zeta^{X_{ef}}_m$ and $\zeta^{X_{ef}}_f$) to set all the coefficients to unity. Substituting the algebra and the bimodules into (\ref{eq:gamma_pic_2}), we obtain \begin{equation} \gamma_{X_{ef}\, 1}= \mathcal{N}_{ef} \sum_{i,j\in X_{ef}}\rho^{X_{ef} j}_{1i}\rho^{X_{ef}j}_{i1}(R^{1i}_jR^{i1}_j)^*\sqrt{d_id_jd_1}=\frac{\mathcal{N}_{ef}}{2}\sum_{i\in M_1}d_i=1. \end{equation} The normalization $\mathcal{N}_{ef}$ is given by \begin{equation} \label{eq:Nef} \mathcal{N}_{ef} = \frac{1}{\sqrt{2D_{Toric\ code}}}, \end{equation} which recovers the fusion rules (\ref{eq:ef_fuse}), confirming (\ref{eq:NAB_2}). \subsection{Intermediate level -- $D(S_3)$} The quantum double model $D(S_3)$ is a paradigmatic example of a non-Abelian topological order, illustrating non-trivial features that can arise. For completeness, we include the topological data of the bulk theory in the appendix, which sets the notation for the anyons that we use below. The bosonic gapped boundaries of $D(S_3)$ have been studied in many places \cite{cong_topological_2016, Cong_2017, Shen_2019}. These condensates and the junctions between them are also summarized in the appendix. In addition to the well-known bosonic gapped boundaries, there is also one fermionic gapped boundary, as already noted in \cite{Wan:2016php}. Let us study it in somewhat more detail below. \subsubsection{The fermionic boundary} The condensate is given by \begin{equation} \label{eq:finDs3} \mathcal{A}_f = A \oplus C \oplus E. \end{equation} This condensation is closely related to the fermionic boundary of the toric code. In this case, one can readily work out the $W$ matrix and the $\Omega$ matrix using the methods in section \ref{sec:b_fusion}. The fusion rules between defects are also readily obtainable. We will slightly delay the presentation of these results by taking a somewhat longer route. As discussed in \cite{Wan:2016php}, for a fermionic condensate that preserves fermion parity, one could consider splitting the condensation into two steps -- first condensing the bosons in $\mathcal{A}_f$, which should form a closed Frobenius sub-algebra, before condensing the fermion, which is reduced to an Abelian anyon in the intermediate condensed phase.
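A quick dimension count supports this two-step picture (a sketch; the quantum dimensions of the $D(S_3)$ anyons $A,\dots,H$ are taken from the appendix, and we use the standard rule that a bulk anyon of dimension $d$ descends to a sector of dimension $d$ divided by the dimension of the condensed algebra): after condensing the bosonic pair $A\oplus C$ (the sub-algebra $\alpha_{AC}$ introduced below), the fermion $E$ indeed descends to an Abelian anyon.
\begin{verbatim}
# Quantum dimensions of the D(S3) anyons (see the appendix).
d = {"A": 1, "B": 1, "C": 2, "D": 3, "E": 3, "F": 2, "G": 2, "H": 2}

dim_alpha = d["A"] + d["C"]       # dimension of the algebra alpha_AC = A + C
print(dim_alpha)                  # 3
print(d["E"] / dim_alpha)         # 1.0 -> E descends to an Abelian fermion
\end{verbatim}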
Applying this two-step logic here, one could first consider condensing \begin{equation} \alpha_{AC} \equiv A \oplus C \subset \mathcal{A}_f. \end{equation} For completeness, let us present the Frobenius algebra $\alpha_{AC}$ in (\ref{eq:algebra-condenseC}). Similarly to the case of the toric code, here we have chosen the phase ambiguities ($\zeta^1_A$ and $\zeta^1_C$) such that all the coefficients involving $A$ are equal to 1. The virtue of the sequential condensation is that it allows one to work out $\mathcal{A}_f$ in (\ref{eq:finDs3}) as a {\it Lagrangian algebra} of $D(S_3)$ by treating it as a condensation of simple modules of $\alpha_{AC}$. Un-packaging it into the fusion basis of $D(S_3)$ is simple. \begin{figure}[ht] \begin{equation} \label{eq:algebra-condenseC} \centering \begin{tikzpicture}[scale=0.5] \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-1.2,0) {$\sqrt{3}$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$1$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$1$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$1$}}; } \end{scope} \begin{scope}[xshift=4.5cm,yshift=0cm]{ \node at (-2.25,0) {$=$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$1$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$1$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$1$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+4.5cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+4.5cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+4.5cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\tiny{$A$}}; \node[above] at (-0.5cm,-0.5cm) {\tiny{$A$}}; \node[above] at (0.5cm,-0.5cm) {\tiny{$A$}}; } \end{scope} \begin{scope}[xshift=9cm,yshift=0cm]{ \node at (-2.25,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$1$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$1$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$1$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+9cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+9cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+9cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\tiny{$A$}}; \node[above] at (-0.5cm,-0.5cm) {\tiny{$C$}}; \node[above] at (0.5cm,-0.5cm) {\tiny{$C$}}; } \end{scope} \begin{scope}[xshift=13.5cm,yshift=0cm]{ \node at (-2.25,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$1$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$1$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$1$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+13.5cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+13.5cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+13.5cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\tiny{$C$}}; \node[above] at (-0.5cm,-0.5cm) {\tiny{$A$}}; \node[above] at (0.5cm,-0.5cm) {\tiny{$C$}}; } \end{scope} \begin{scope}[xshift=18cm,yshift=0cm]{ \node at (-2.25,0) {$+$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$1$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$1$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$1$}}; \vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+18cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+18cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+18cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\tiny{$C$}}; \node[above] at (-0.5cm,-0.5cm) {\tiny{$C$}}; \node[above] at (0.5cm,-0.5cm) {\tiny{$A$}}; } \end{scope} \begin{scope}[xshift=23cm,yshift=0cm]{ \node at (-2,0) {$+\ \phi$}; \vertexI{0}{0}{1.5cm}; \node[above] at (0,1.5cm) {\scriptsize{$1$}}; \node[below] at (-1.5cm,-1.5cm) {\scriptsize{$1$}}; \node[below] at (1.5cm,-1.5cm) {\scriptsize{$1$}};
\vertexIC{0}{0}{1cm}{green}; \scalebox{0.5}[0.5]{\tri{-2cm+23cm}{-2cm}{45}}; \scalebox{0.5}[0.5]{\tri{0cm+23cm}{2.2cm}{270}}; \scalebox{0.5}[0.5]{\tri{2cm+23cm}{-2cm}{135}}; \node[right] at (0,0.5cm) {\tiny{$C$}}; \node[above] at (-0.5cm,-0.5cm) {\tiny{$C$}}; \node[above] at (0.5cm,-0.5cm) {\tiny{$C$}}; \node at (4,0) {$\phi=2^{-\frac{1}{4}}$}; } \end{scope} \end{tikzpicture} \end{equation} \end{figure} One can work out the intermediate phase where $C$ is condensed. The methods discussed in section \ref{sec:b_fusion} continue to apply, even though $\alpha_{AC}$ is not a {\it Lagrangian} algebra that defines a gapped boundary. In this case, one finds that the condensed phase is described by a fusion category that contains the toric code category as a sub-category. It has been noted that the toric code order remains ``deconfined'', i.e.\ its braiding structure is preserved, in addition to sectors identified as ``confined defects'' that are non-local with respect to the condensate and whose braiding structure is thus lost \cite{kirillov}. Let us summarize the properties of the intermediate phase in the table below: \begin{equation} \label{tab:AC} \begin{tabular}{|c|c|c|c|c|c|c|} \hline sectors $x$ & 1 & $ e$ & $ m$ & $ f$ & $X$ & $Y$ \\ \hline $W^{\alpha_{AC}}_{ix}$ & $A\oplus C$ & $ B\oplus C$ & $D$ & $ E $ & $D\oplus E$ & $F\oplus G\oplus H$ \\ \hline \end{tabular} \end{equation} One can also work out the precise left actions of the algebra $\alpha_{AC}$ on these modules; they are presented in (\ref{eq:modules-condenseC}). One observes that there are multiple solutions for each given module in (\ref{eq:modules-condenseC}). In this case, however, they all correspond to a phase redundancy following from the choice of phase for the fusion basis discussed in (\ref{eq:hombasis_rescale}). In other words, they do not lead to independent modules. This should be contrasted with a q-type object, to be studied below, where a single module gives rise to two truly independent solutions. \begin{figure}[htbp!]
\centering \begin{equation} \label{eq:modules-condenseC} \begin{tikzpicture}[scale=0.3] \begin{scope}[xshift=-12cm,yshift=0cm]{ \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-7,0) {$\sqrt{3}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-7,-3) -- (-4,0); \node at (-7.8,-3) {\footnotesize{$1$}} node at (-3.2,4) {\footnotesize{$e$}} node at (-3.2,-4) {\footnotesize{$e$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {\Large{$=$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$e$}} node at (8.8,-4) {\footnotesize{$e$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$B$}} node at (8.6,-0.9) {\footnotesize{$B$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {\Large{$+$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$e$}} node at (8.8,-4) {\footnotesize{$e$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$C$}} node at (8.6,-0.9) {\footnotesize{$C$}}; } \end{scope} \begin{scope}[xshift=9cm,yshift=0cm]{ \node at (4.5,0) {$+\ \theta$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$e$}} node at (8.8,-4) {\footnotesize{$e$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$C$}} node at (8.6,-0.9) {\footnotesize{$B$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (4.5,0) {$-\ \theta$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$e$}} node at (8.8,-4) {\footnotesize{$e$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$C$}} node at (8.6,-0.9) {\footnotesize{$B$}}; } \end{scope} \begin{scope}[xshift=23cm,yshift=0cm]{ \node at (4.5,0) {$-\ \phi$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$e$}} node at (8.8,-4) {\footnotesize{$e$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$C$}} node at (8.6,-0.9) {\footnotesize{$C$}}; \node at (14,0) {$\theta=\pm i$}; } \end{scope} } \end{scope} \begin{scope}[xshift=0cm,yshift=-10cm]{ \begin{scope}[xshift=-13cm,yshift=0cm]{ \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-7,0) {$\sqrt{3}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-7,-3) -- (-4,0); 
\node at (-7.8,-3) {\footnotesize{$1$}} node at (-3.2,4) {\footnotesize{$m$}} node at (-3.2,-4) {\footnotesize{$m$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {\Large{$=$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$m$}} node at (8.8,-4) {\footnotesize{$m$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$D$}} node at (8.6,-0.9) {\footnotesize{$D$}}; } \end{scope} \begin{scope}[xshift=3cm,yshift=0cm]{ \node at (4.5,0) {$+\ \phi^{-1}$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$m$}} node at (8.8,-4) {\footnotesize{$m$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$D$}} node at (8.6,-0.9) {\footnotesize{$D$}}; } \end{scope} } \end{scope} \begin{scope}[xshift=13cm,yshift=0cm]{ \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-7,0) {$\sqrt{3}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-7,-3) -- (-4,0); \node at (-7.8,-3) {\footnotesize{$1$}} node at (-3.2,4) {\footnotesize{$f$}} node at (-3.2,-4) {\footnotesize{$f$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {\Large{$=$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$f$}} node at (8.8,-4) {\footnotesize{$f$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$E$}} node at (8.6,-0.9) {\footnotesize{$E$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {$-\ \phi^{-1}$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$f$}} node at (8.8,-4) {\footnotesize{$f$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$E$}} node at (8.6,-0.9) {\footnotesize{$E$}}; } \end{scope} } \end{scope} } \end{scope} \begin{scope}[xshift=-16cm,yshift=-25cm]{ \scalebox{0.8}[0.8]{ \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-7,0) {$\sqrt{3}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-7,-3) -- (-4,0); \node at (-7.8,-3) {\footnotesize{$1$}} node at (-3.2,4) {\footnotesize{$X$}} node at (-3.2,-4) {\footnotesize{$X$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {\Large{$=$}}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$X$}} node at 
(8.8,-4) {\footnotesize{$X$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$D$}} node at (8.6,-0.9) {\footnotesize{$D$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {$+$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$X$}} node at (8.8,-4) {\footnotesize{$X$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$E$}} node at (8.6,-0.9) {\footnotesize{$E$}}; } \end{scope} \begin{scope}[xshift=9cm,yshift=0cm]{ \node at (4.5,0) {$-\ \phi^3$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$X$}} node at (8.8,-4) {\footnotesize{$X$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$D$}} node at (8.6,-0.9) {\footnotesize{$D$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (4.5,0) {$+\ \phi^3$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$X$}} node at (8.8,-4) {\footnotesize{$X$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$E$}} node at (8.6,-0.9) {\footnotesize{$E$}}; } \end{scope} \begin{scope}[xshift=26cm,yshift=0cm]{ \node at (3,0) {$+\ \sqrt{3}\phi^3\alpha$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$X$}} node at (8.8,-4) {\footnotesize{$X$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$E$}} node at (8.6,-0.9) {\footnotesize{$D$}}; } \end{scope} \begin{scope}[xshift=36cm,yshift=0cm]{ \node at (3,0) {$+\ i\sqrt{3}\phi^3\alpha$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$X$}} node at (8.8,-4) {\footnotesize{$X$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$D$}} node at (8.6,-0.9) {\footnotesize{$E$}}; \node at (13,0) {$\alpha=\pm e^{i\frac{3\pi}{4}}$}; } \end{scope} } } \end{scope} \begin{scope}[xshift=-12cm,yshift=-30cm]{ \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-7,0) {$\sqrt{3}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-7,-3) -- (-4,0); \node at (-7.8,-3) {\footnotesize{$1$}} node at (-3.2,4) {\footnotesize{$Y$}} node at (-3.2,-4) {\footnotesize{$Y$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {$=$}; \draw [thick, magenta] (8,-4) -- (8,4); 
\draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$F$}} node at (8.6,-0.9) {\footnotesize{$F$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {$+$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$G$}} node at (8.6,-0.9) {\footnotesize{$G$}}; } \end{scope} \begin{scope}[xshift=9cm,yshift=0cm]{ \node at (4.5,0) {$+$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$A$}} node at (8.6,0.7) {\footnotesize{$H$}} node at (8.6,-0.9) {\footnotesize{$H$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (4.5,0) {$+\ \phi\beta$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$G$}} node at (8.6,-0.9) {\footnotesize{$F$}}; } \end{scope} \begin{scope}[xshift=23cm,yshift=0cm]{ \node at (4.5,0) {$+\ \phi\gamma$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$H$}} node at (8.6,-0.9) {\footnotesize{$F$}}; } \end{scope} \begin{scope}[xshift=-7cm,yshift=-10cm]{ \node at (3.5,0) {$+\ \phi\beta^{-1}$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$F$}} node at (8.6,-0.9) {\footnotesize{$G$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=-10cm]{ \node at (3.5,0) {$+\ \phi\gamma\beta^{-1}$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); 
\draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$H$}} node at (8.6,-0.9) {\footnotesize{$G$}}; } \end{scope} \begin{scope}[xshift=10.5cm,yshift=-10cm]{ \node at (3.5,0) {$+\ \phi\omega\gamma$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$F$}} node at (8.6,-0.9) {\footnotesize{$H$}}; } \end{scope} \begin{scope}[xshift=19cm,yshift=-10cm]{ \node at (3.5,0) {$+\ \phi\omega\beta\gamma$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$1$}} node at (8.8,4) {\footnotesize{$Y$}} node at (8.8,-4) {\footnotesize{$Y$}} node at (7,-0.3) {\footnotesize{$C$}} node at (8.6,0.7) {\footnotesize{$G$}} node at (8.6,-0.9) {\footnotesize{$H$}}; \node at (13,1.5) {$\beta=\pm \omega^2$} node at (12.6,0) {$\gamma=\pm \omega$} node at (13,-1.5) {$\omega=e^{i\frac{2\pi}{3}}$}; } \end{scope} } \end{scope} \end{tikzpicture} \end{equation} \end{figure} \newpage The fermionic gapped boundary is then generated by condensing $f$, exactly as in the toric code case. What is new here is that $X$ and $Y$ are non-Abelian defects, and they display interesting properties, particularly when we begin considering junctions between gapped boundaries. The fusion rules involving $X$ and $Y$ are summarized in the table below: \begin{equation} \begin{tabular}{|c|c|c|} \hline $\otimes_{\alpha_{AC}}$ & $X$ & $Y$ \\ \hline $e$ & $ X$ & $Y$\\ $m$ & $Y$ & $X$\\ $f$ & $Y$ & $X$\\ $X$ & $ 1\oplus e \oplus Y$ & $m \oplus f \oplus X$ \\ $Y$ & $m \oplus f \oplus X$ & $ 1\oplus e \oplus Y$ \\ \hline \end{tabular} \end{equation} Finally, as we condense $f$, it is not hard to see that $X$ and $Y$ together form a module, while the toric code sub-category behaves in exactly the same way as described in section \ref{sec:toricf_bc}. Let us summarize the overall $W$ and $\Omega$ matrices: \begin{equation} \label{tab:Af} \begin{tabular}{|c|c|c|c|} \hline modules $x$ & 1 & $ X_{f}$ & $Z$ \\ \hline $W^{\mathcal{A}_f}_{ix}$ & $A\oplus C\oplus E$ & $B\oplus C\oplus D$ & $D\oplus E \oplus F\oplus G\oplus H$ \\ \hline $\Omega^{\mathcal{A}_f}_{ix}$ & $A \oplus C \oplus -E$ & $B\oplus C \oplus - D$ & $-D \oplus -E \oplus F \oplus G \oplus H$\\ \hline \end{tabular} \end{equation} The fusion rules between these defects are given by \begin{equation} \label{eq:fuseAf} X_f \otimes_{\mathcal{A}_f} X_f = 1, \qquad X_f \otimes_{\mathcal{A}_f} Z = Z, \qquad Z \otimes_{\mathcal{A}_f} Z = 1 \oplus X_f \oplus Z. \end{equation} These fusion rules satisfy the (twisted) defect Verlinde formula, which confirms that all the defects have trivial endomorphism.
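As a sanity check (a sketch, assuming the standard rule that a boundary sector $x$ has quantum dimension $\sum_i W_{ix} d_i$ divided by the dimension of the condensate), the $W$-matrix in (\ref{tab:Af}) assigns quantum dimensions $1, 1, 2$ to $1, X_f, Z$, and the fusion rules (\ref{eq:fuseAf}) preserve them:
\begin{verbatim}
d = {"A": 1, "B": 1, "C": 2, "D": 3, "E": 3, "F": 2, "G": 2, "H": 2}
W = {"1": ["A", "C", "E"], "Xf": ["B", "C", "D"],
     "Z": ["D", "E", "F", "G", "H"]}

dim_Af = sum(d[a] for a in W["1"])                 # condensate dimension = 6
dq = {x: sum(d[a] for a in W[x]) / dim_Af for x in W}
print(dq)                                          # {'1': 1.0, 'Xf': 1.0, 'Z': 2.0}

assert dq["Xf"] * dq["Xf"] == dq["1"]                      # Xf x Xf = 1
assert dq["Xf"] * dq["Z"] == dq["Z"]                       # Xf x Z  = Z
assert dq["Z"] * dq["Z"] == dq["1"] + dq["Xf"] + dq["Z"]   # Z x Z = 1+Xf+Z
\end{verbatim}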
Here we again confirm the claim made after (\ref{eq:littlev}): the number of objects $N_f$ in the defect responsible for generating the fermion parity -- in this case $X_f = B\oplus C\oplus D$, containing three objects, i.e.\ $N_f = 3$ -- equals the total number of non-q-type defects in the gapped phase, i.e.\ $\{1, X_f, Z\}$! \subsubsection{A bosonic-fermionic junction -- Take 2} \label{sec:bfjunction_take2} It is particularly interesting to revisit the bosonic-fermionic junction, corresponding to juxtaposing the magnetic boundary and the fermionic boundary, in analogy with the toric code case. There is new physics precisely because of the presence of non-Abelian confined defects. The magnetic boundary, described in terms of a Frobenius algebra in $D(S_3)$, can be summarized by the following W-matrix: \begin{equation} \begin{tabular}{|c|c|c|c|} \hline modules $x$ & 1 & $ X_{m}$ & $Z_m$ \\ \hline $W^{\mathcal{A}_m}_{ix}$ & $A\oplus C\oplus D$ & $B\oplus C\oplus E$ & $D\oplus E \oplus F\oplus G\oplus H$ \\ \hline \end{tabular} \end{equation} Their fusion rules are identical to (\ref{eq:fuseAf}) upon replacing $X_f \to X_m$ and $Z \to Z_m$. One observes that the above table is equivalent to (\ref{tab:Af}) upon exchanging $D$ and $E$. There is the curious situation that $Z_m$, as a defect in the magnetic boundary, contains the same list of anyons as the $Z$ defect in the fermionic boundary. Now we would like to work out the junction defects. Following the playbook in section \ref{sec:junctions}, one can identify two different bimodules by inspecting all the induced modules one by one, i.e.\ we inspect $\mathcal{A}_m\otimes c_i \otimes \mathcal{A}_f, \,\, \forall c_i \in D(S_3)$. The two junction modules are summarized as follows: \begin{equation} X_{mf} = A\oplus C\oplus E \oplus B\oplus C \oplus D, \qquad Z_{mf} = D\oplus E\oplus F \oplus G\oplus H. \end{equation} One can see from table (\ref{tab:AC}) that $X_{mf}$ is the same $X_{mf}$ we discussed previously in (\ref{eq:ef_fuse}), i.e.\ the fusion of $X_{mf}$ is given by \begin{equation} \label{eq:xxmf} X_{mf}\otimes_{\mathcal{A}_f} X_{fm} = \mathcal{A}_m \oplus X_m, \end{equation} and thus it should carry a quantum dimension of $\sqrt{2}$ -- up to the ambiguity of the addition of extra Majorana modes. The fusion of $Z_{mf}$ is slightly trickier. To understand it, one must recognize that it is actually a q-type object with non-trivial endomorphism. This is where studying the intermediate phase, in which $A\oplus C$ has already been condensed, simplifies the problem significantly. We can make use of the identity (\ref{eq:double_reciprocity2}), applying it in the intermediate phase where $\alpha_{AC}$ has condensed. We notice that \begin{equation} \mathcal{A}_m \otimes X \otimes \mathcal{A}_f = 2 (X\oplus Y) = 2 Z_{mf}. \end{equation} The identity (\ref{eq:double_reciprocity2}) then implies \begin{equation} \textrm{End}_{\mathcal{A}_m|\mathcal{A}_f} (\textrm{Ind}_{\mathcal{A}_m|\mathcal{A}_f} (X)) \equiv \textrm{Hom}_{\mathcal{A}_m|\mathcal{A}_f} (\textrm{Ind}_{\mathcal{A}_m|\mathcal{A}_f} (X),\textrm{Ind}_{\mathcal{A}_m|\mathcal{A}_f} (X) ) = \textrm{Hom}(X, 2(X\oplus Y)). \end{equation} The space of maps from $X$ to $2(X\oplus Y)$ has to be two-dimensional, since $X$ and $Y$ are {\it simple} objects. Therefore the endomorphism space of $\textrm{Ind}_{\mathcal{A}_m|\mathcal{A}_f}(X) = Z_{mf}$ is also two-dimensional.
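The counting above is easily reproduced from the intermediate-phase fusion table (a sketch; only the rows of the table involving $X$ and $Y$ enter, and we use $\mathcal{A}_m = 1\oplus m$, $\mathcal{A}_f = 1\oplus f$ in intermediate-phase notation):
\begin{verbatim}
from collections import Counter

# Fusion of the Abelian sectors 1, e, m, f with X and Y, read off from the
# intermediate-phase fusion table above.
fuse = {("1", "X"): "X", ("e", "X"): "X", ("m", "X"): "Y", ("f", "X"): "Y",
        ("1", "Y"): "Y", ("e", "Y"): "Y", ("m", "Y"): "X", ("f", "Y"): "X"}

def act(algebra, obj):
    """Fuse each summand of `algebra` into each summand of `obj`."""
    out = Counter()
    for a in algebra:
        for x, n in obj.items():
            out[fuse[(a, x)]] += n
    return out

Am, Af = ["1", "m"], ["1", "f"]
print(act(Af, act(Am, Counter({"X": 1}))))   # Counter({'X': 2, 'Y': 2}) = 2(X+Y)
\end{verbatim}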
We note that one might entertain the possibility that $Z_{mf}$ is {\it not} a simple object (irreducible representation) -- a non-simple object could also support a non-trivial space of endomorphisms. However, the dimension should then take the form $\sum_x n_x^2 \langle \mathcal{M}_x, \mathcal{M}_x\rangle$, where $x$ runs through all the irreducible representations contained in $Z_{mf}$ and $n_x \in \mathbb{Z}_{>0}$ is the multiplicity with which $\mathcal{M}_x$ appears in $Z_{mf}$. Again, for simple objects $\langle \mathcal{M}_x, \mathcal{M}_x\rangle$ can only be 1 for a non-q-type object, or 2 for a q-type object. Clearly, $\langle Z_{mf} , Z_{mf} \rangle_{\mathcal{A}_m|\mathcal{A}_f} =2$ is only compatible with $Z_{mf}$ being a simple q-type object. There is another manifestation of the non-trivial endomorphism. Consider solving for the left-right action of $\mathcal{A}_m$ and $\mathcal{A}_f$ on $Z_{mf}$ using the methods described in (\ref{eq:reps_eq}) and (\ref{eq:bimodule}). One should be able to obtain two independent solutions, despite the fact that $Z_{mf}$ remains simple. These form a basis for the two generators of the endomorphism maps! For illustration purposes, we solve for them explicitly in the next subsection. Finally, we are ready to recover the fusion rules of the junctions. Again, to be careful with Majorana modes, we should upgrade the Frobenius algebra to include the free fermion explicitly, exactly as in (\ref{eq:Ap}, \ref{eq:AftoAfp}). Using the notation of the intermediate phase, we obtain the following fusion rules \begin{align} &X_{mf} \otimes_{\mathcal{A}_f'} Z_{fm} =_{\textrm{reduction to left $\mathcal{A}_m$ module}} (1\oplus m) \otimes 1 \otimes (1\oplus f\otimes \psi_0) X = (1\oplus \psi_0) \otimes Z_m , \label{eq:xzmf} \\ &Z_{mf} \otimes_{\mathcal{A}_f'} Z_{fm} =_{\textrm{reduction to left $\mathcal{A}_m$ module}} (1\oplus m) \otimes X \otimes (1\oplus f\otimes \psi_0) X \nonumber \\ &= (1\oplus \psi_0) \otimes (\mathcal{A}_m \oplus X_m \oplus Z_m). \label{eq:zzmf} \end{align} Note that we have made the reduction from an $\mathcal{A}_m|\mathcal{A}_m$ bimodule to a left (right) $\mathcal{A}_m$ module by implicitly modding out by a factor of $\mathcal{A}_m$. A warning has to be flagged here: the extra factor of $(1\oplus \psi_0)$ is {\bf not} a Majorana mode. We have in fact already confronted this situation in (\ref{eq:fusemajorana}). Recall that Majorana modes are localized at a point; here, the fermion is roaming free along the entire magnetic condensate boundary. The factor only makes explicit that there are two fusion channels, one with even and the other with odd fermion parity. Had we kept $\psi_0$ explicit in (\ref{eq:evenfuse}), the odd channels would be tagged by a copy of $\psi_0$ too. Now we see that $\psi_0$ is an important book-keeping device -- if localized at a junction, it accounts for a quantum dimension of $\sqrt{2}$; if, on the other hand, it roams free along the one-dimensional boundary or in the bulk, {\it it accounts for a factor of 2}. Its introduction allows one to keep track of the quantum dimensions of defects, which are naturally conserved under fusion. Recall that $Z_{mf}$ is a q-type object, and in such cases its fusion maps are expected to carry equal numbers of even and odd channels, since the two can be converted into each other by composing with an odd endomorphism. This has been briefly discussed in the introduction to super-categories in section \ref{sec:intro_cat}.
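As a check on these fusion rules, one can track quantum dimensions (a sketch, normalizing the boundary defects of the magnetic condensate to $d_{\mathcal{A}_m} = d_{X_m} = 1$ and $d_{Z_m} = 2$ as above, and counting the roaming $(1\oplus\psi_0)$ factor as 2); this reproduces the quantum dimensions $\sqrt{2}$ for $X_{mf}$ and $2\sqrt{2}$ for $Z_{mf}$ quoted next:
\begin{verbatim}
import math

# Boundary-defect quantum dimensions on the magnetic boundary.
d = {"Am": 1.0, "Xm": 1.0, "Zm": 2.0}

# X_mf x X_fm = Am + Xm                    =>  d(X_mf)^2 = 2
d_Xmf = math.sqrt(d["Am"] + d["Xm"])
# Z_mf x Z_fm = (1+psi_0)(Am + Xm + Zm), the free (1+psi_0)
# counting as a factor of 2                =>  d(Z_mf)^2 = 2 * 4 = 8
d_Zmf = math.sqrt(2 * (d["Am"] + d["Xm"] + d["Zm"]))
print(d_Xmf, d_Zmf)   # 1.414... = sqrt(2) and 2.828... = 2*sqrt(2)
\end{verbatim}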
The quantum dimension of $Z_{mf}$ is thus given by \begin{equation} d_{Z_{mf}} = 2 \sqrt{2}, \end{equation} again with the ambiguity of adding Majorana modes at the junction on top of this ``canonical'' basis. This is rather amusing, since $Z_{mf}$, $Z_{m}$ and $Z$ all contain exactly the same list of anyons $D\oplus E \oplus F\oplus G \oplus H$! The computation of the half-linking numbers and a check of the defect Verlinde formula are discussed below. \subsubsection{Half-linking numbers and the defect Verlinde formula -- Take 2} In the previous subsection we studied the bi-modules and argued that $Z_{mf}$ should carry a non-trivial endomorphism. One manifestation of the non-trivial endomorphism is the emergence of two independent solutions when one solves for the left-right action of the algebras on the module, after fixing all the phase ambiguities. To illustrate this point, we solve for the left-right actions explicitly below. We note that the Frobenius algebra $\mathcal{A}_m = 1\oplus m$ is identical to (\ref{eq:algebra-Am}), with $e$ replaced by $m$. \begin{figure}[h!] \centering \begin{equation} \begin{tikzpicture}[scale=0.3] \begin{scope}[xshift=4cm,yshift=0cm]{ \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-7,0) {$\sqrt{2}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-7,-3) -- (-4,0); \node at (-7.8,-3) {\footnotesize{$\mathcal{A}_m$}} node at (-3,4) {\footnotesize{$Z_{mf}$}} node at (-3,-4) {\footnotesize{$Z_{mf}$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {$=$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_m$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (7,-0.3) {\footnotesize{$1$}} node at (8.6,0.7) {\footnotesize{$X$}} node at (8.6,-0.9) {\footnotesize{$X$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {$+$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_m$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (7,-0.3) {\footnotesize{$1$}} node at (8.6,0.7) {\footnotesize{$Y$}} node at (8.6,-0.9) {\footnotesize{$Y$}}; } \end{scope} \begin{scope}[xshift=9cm,yshift=0cm]{ \node at (4.5,0) {$+$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_m$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (7,-0.3) {\footnotesize{$m$}} node at (8.6,0.7) {\footnotesize{$X$}} node at (8.6,-0.9) {\footnotesize{$Y$}}; } \end{scope} \begin{scope}[xshift=16cm,yshift=0cm]{ \node at (4.5,0) {$+$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (5,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (6.5,-1.5) -- (8,0); \tri{6.2cm}{-1.8cm}{45} \tri{8cm}{2cm}{270}
\tri{8cm}{-2.4cm}{90} \node at (4.2,-3) {\footnotesize{$\mathcal{A}_m$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (7,-0.3) {\footnotesize{$m$}} node at (8.6,0.7) {\footnotesize{$Y$}} node at (8.6,-0.9) {\footnotesize{$X$}}; } \end{scope} } \end{scope} \begin{scope}[xshift=0cm,yshift=-10cm]{ \begin{scope}[xshift=0cm,yshift=0cm]{ \node at (-6,0) {$\sqrt{2}$}; \draw [thick, magenta] (-4,-4) -- (-4,4); \draw [thick] (-1,-3) -- (-4,0); \node at (-0.2,-3) {\footnotesize{$\mathcal{A}_f$}} node at (-3,4) {\footnotesize{$Z_{mf}$}} node at (-3,-4) {\footnotesize{$Z_{mf}$}}; } \end{scope} \begin{scope}[xshift=-5cm,yshift=0cm]{ \node at (4.5,0) {$=$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (9,-0.3) {\footnotesize{$1$}} node at (7.4,0.7) {\footnotesize{$X$}} node at (7.4,-0.9) {\footnotesize{$X$}}; } \end{scope} \begin{scope}[xshift=2cm,yshift=0cm]{ \node at (4.5,0) {$+$}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (9,-0.3) {\footnotesize{$1$}} node at (7.4,0.7) {\footnotesize{$Y$}} node at (7.4,-0.9) {\footnotesize{$Y$}}; } \end{scope} \begin{scope}[xshift=11cm,yshift=0cm]{ \node at (4.5,0) {$+\ \psi\ $}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (9,-0.3) {\footnotesize{$f$}} node at (7.4,0.7) {\footnotesize{$X$}} node at (7.4,-0.9) {\footnotesize{$Y$}}; } \end{scope} \begin{scope}[xshift=21cm,yshift=0cm]{ \node at (4.5,0) {$+\ \psi^{-1}\ $}; \draw [thick, magenta] (8,-4) -- (8,4); \draw [thick] (11,-3) -- (8,0); \draw [green, thick] (8,-1.5) -- (8,-2.4); \draw [green, thick] (8,-2) -- (8,2); \draw [green, thick] (9.5,-1.5) -- (8,0); \tri{9.8cm}{-1.8cm}{45+90} \tri{8cm}{2cm}{270} \tri{8cm}{-2.4cm}{90} \node at (11.8,-3) {\footnotesize{$\mathcal{A}_f$}} node at (9,4) {\footnotesize{$Z_{mf}$}} node at (9,-4) {\footnotesize{$Z_{mf}$}} node at (9,-0.3) {\footnotesize{$f$}} node at (7.4,0.7) {\footnotesize{$Y$}} node at (7.4,-0.9) {\footnotesize{$X$}}; } \end{scope} \node at (37,0) {\large{$\psi=\pm i\omega$}}; } \end{scope} \end{tikzpicture} \end{equation} \end{figure} We can make use of the explicit form of the algebra and modules and compute the gamma matrix using (\ref{eq:gamma_pic_2}). 
This gives \begin{eqnarray} \label{eq:gammamf} \gamma^{(m|f)}= \begin{blockarray}{ccc} A & C \\ \begin{block}{(cc)c} 2 \mathcal{N}_A & 4 \mathcal{N}_C & ~X_{mf} \\ 4 \mathcal{N}_A & -4 \mathcal{N}_C & ~Z_{mf} \\ \end{block} \end{blockarray} \end{eqnarray} We find that \begin{equation} \label{eq:NAB2} \mathcal{N}_{c_i} = \frac{1 } {\sqrt{2D_{D(S_3) } d_{c_i} }}, \qquad c_i \in \mathcal{A}_m \cap \mathcal{A}_f. \end{equation} This expression again confirms (\ref{eq:NAB_2}). Substituting (\ref{eq:gammamf}, \ref{eq:NAB2}) into the defect Verlinde formula (\ref{eq:defect_verlinde_junction}), we recover the fusion rules (\ref{eq:xxmf}, \ref{eq:xzmf}, \ref{eq:zzmf}). (Recall that the untwisted Verlinde formula gives the total number of fusion channels -- adding even and odd channels. Therefore the $1\oplus \psi_0$ factors in (\ref{eq:xzmf}, \ref{eq:zzmf}) should be interpreted as factors of 2.)

\section{(Super)-modular invariants and twisted characters} \label{sec:CFT}

Bosonic gapped boundaries in a topological order are in 1-1 correspondence with modular invariants. For a topological order given by the representation category $C$ of the tensor product of a chiral algebra and an anti-chiral algebra, each of these bosonic gapped boundaries corresponds to a modular invariant CFT. In the case of fermionic gapped boundaries, each of them certainly defines a ``super''-modular invariant, i.e. a partition function invariant under the $S$ and $T^2$ transformations on a torus \cite{levin_protected_2013}. The converse is not true, however, as we will illustrate with interesting examples in the appendix. It is argued that invariance under $T^2$ and $S$ is the appropriate generalization of the concept of modular invariance for a spin CFT \cite{levin_protected_2013}.\footnote{These ``super'' modular invariants should not be confused with the modular invariants of supersymmetric CFT's discussed in the CFT literature. While the chiral symmetry algebra there contains a fermionic sector, the requirement of invariance under $T$ remains.}

Meanwhile, the (super) modular invariant essentially defines a Hilbert space $H^\mathcal{A}$, \begin{equation} H^{\mathcal{A}} = \oplus_i W^{\mathcal{A}}_{i 1} H_i, \end{equation} where \begin{equation} H_i = \mathcal{V}_{i} \otimes \overline{\mathcal{V}_{\bar{i}}}, \end{equation} and $\mathcal{V}_i$ are the representations of the chiral algebra that defines the topological order introduced at the beginning of the section. The excitations at the gapped boundary correspond to topological defects in the (super) modular invariant CFT. The defect operator takes the following form \begin{equation} \hat X =\sum_{i \in H^\mathcal{A}} \sum_{\alpha, \beta} \frac{\gamma_{x \,i_{\alpha,\beta}}}{\sqrt{S_{1i}}} | c_i, \alpha \rangle \langle c_i, \beta |, \end{equation} where $|c_i, \alpha\rangle$ is a shorthand for the primary together with its descendants in $H_i$. As a topological defect, the descendants are summed over in such a way that the levels in the bra and the ket match \cite{petkova_generalised_2001}. The indices $\alpha, \beta \in \{1, \cdots, W^{\mathcal{A}}_{i1}\}$. The coefficients $\gamma_{x \,i_{\alpha,\beta}}$ correspond precisely to the half-linking numbers between the condensed anyon $c_i$ and the boundary excitation $x$ in the topological theory.
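The half-linking numbers just mentioned are exactly the entries of (\ref{eq:gammamf}), and they can be checked numerically. The following minimal Python sketch (ours) evaluates (\ref{eq:gammamf}) via (\ref{eq:NAB2}), assuming the standard $D(S_3)$ data $D_{D(S_3)}=6$, $d_A=1$ and $d_C=2$; these input values are recalled from the structure of the Drinfeld double and are not fixed by the equations above.
\begin{verbatim}
import math

# Assumed D(S3) data: total quantum dimension D = 6, with d_A = 1
# (the vacuum) and d_C = 2 for the condensed anyons shared by A_m
# and A_f.
D = 6.0
d = {"A": 1.0, "C": 2.0}

# Normalizations N_c = 1/sqrt(2 * D * d_c), cf. (eq:NAB2)
N = {c: 1.0 / math.sqrt(2.0 * D * dc) for c, dc in d.items()}

# Half-linking matrix gamma^(m|f): rows (X_mf, Z_mf), columns (A, C),
# cf. (eq:gammamf)
gamma = {"X_mf": [2 * N["A"], 4 * N["C"]],
         "Z_mf": [4 * N["A"], -4 * N["C"]]}
for name, row in gamma.items():
    print(name, [round(x, 4) for x in row])
\end{verbatim}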
Taking the trace of $\hat X$ on a torus produces the twisted character $\chi_X(-1/\tau, - 1/\bar \tau)$, \begin{equation} \label{eq:chiX1} \chi_X(\tilde q, \bar{\tilde q}) \equiv \textrm{tr} (\tilde q^{L_0 - c/24} \bar {\tilde q}^{\bar L_0 - \bar c/24} \hat X) = \sum_{i, \alpha} \frac{\gamma_{x \,i_{\alpha,\alpha}}}{\sqrt{S_{1i}}} \chi_i(\tilde q, \bar {\tilde q} ), \end{equation} where $\chi_i$ is an abuse of notation for $\chi_i(\tau) \chi_{\bar i }(\bar\tau),$ which follows from tracing over the holomorphic and anti-holomorphic parts of $H_i$. It is customary to denote \begin{equation} \tilde q = e^{2 i \pi \tilde \tau}, \,\, \tilde \tau = -1/\tau, \,\, q = e^{2i\pi \tau}. \end{equation} The rhs of (\ref{eq:chiX1}) can be rewritten using its $S$ modular transformation property to yield \begin{equation} \label{eq:StransX} \chi_X(\tilde q, \bar {\tilde q} ) = \sum_{i \in H^\mathcal{A},\alpha,j} \gamma_{x \,i_{\alpha,\alpha}}S_{ij} \chi_j( q, \bar q), \end{equation} where the $S$ matrix here corresponds precisely to that of the bulk phase. When the condensation multiplicities (i.e. the elements of the $W$ matrix) are either 0 or 1, one can readily show, using the identities (\ref{eq:defect_verlinde_e}, \ref{eq:Vgamma}), that (\ref{eq:StransX}) reduces to the following: \begin{equation} \label{eq:X_chi_decomp} \chi_X(\tilde q, \bar{\tilde q}) = \sum_j W_{jx} \chi_j(q, \bar q). \end{equation} We note that here $j$ also runs over sectors outside of $\mathcal{A}$. While we do not have a direct proof of this result for general $W_{i1}>1$, physical considerations -- namely that the edge excitations admit a decomposition into bulk anyons -- indicate that (\ref{eq:X_chi_decomp}) should remain true. Parts of these results have been discussed in \cite{Lou:2019heg, Shen:2019rck}, where $\mathcal{A}$ defines a bosonic modular invariant. These results readily apply to super modular invariants, with the fusion algebra of the topological defects again given by the defect Verlinde formula (\ref{eq:defect_verlinde_e}, \ref{eq:Vgamma}). The novel structure that comes with a super modular invariant is the presence of fermion parity. In the following, we will also discuss twisted characters with R-type boundary conditions in the time direction.

\subsection{Topological defects in the CFT and the fermion parity defect}

It is well known that in a CFT carrying global symmetries, one can define characters twisted by a generator $g$ of the global symmetry group $G$ in the time direction, \begin{equation} \chi^g_X(\tilde q,\bar{\tilde q}) = \textrm{tr}(g \hat X\tilde q^{L_0 - c/24} \bar{\tilde q}^{\bar{L}_0 -\bar c/24} ). \end{equation} As a global symmetry, $g$ commutes with $L_0$ and $\bar{L}_0$. In the context of the fermion parity symmetry, $g = (-1)^F$, with $F$ the fermion number operator. We thus define \begin{equation} \chi^R_X(\tilde q,\bar{\tilde q}) \equiv \textrm{tr}( (-)^F \hat X \tilde q^{L_0 - c/24} \bar{\tilde q}^{\bar{L}_0 -\bar c/24} ). \end{equation} By considering the fermion parity of the different sectors appearing in (\ref{eq:X_chi_decomp}), one concludes that \begin{equation} \chi^R_X(\tilde q,\bar{\tilde q}) = \sum_i \Omega_{ix} \chi_i( q, \bar{q}). \end{equation} Moreover, in a spin CFT we should keep track of the spin structure in the spatial direction that follows from the insertion of $X$. For $X$ corresponding to an $NS\, (R)$-type object, it generates an $NS\, (R)$-type spin structure along the spatial cycle.
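For orientation, here is a minimal sketch (ours, anticipating the $C_2$ example of the next subsection) of how $\Omega$ is obtained from $W$: since $(-1)^F$ acts diagonally on the sectors appearing in (\ref{eq:X_chi_decomp}), each entry of $W$ is simply weighted by the fermion parity of the corresponding sector.
\begin{verbatim}
import numpy as np

# Hypothetical illustration for the Ising condensate A = 1 + psi,
# sectors ordered as (0, psi, sigma).  The trivial defect has
# W_{i,0} = (1, 1, 0), and psi carries odd fermion parity; the value
# assigned to sigma is irrelevant here since its W entry vanishes.
W_trivial = np.array([1, 1, 0])
parity = np.array([+1, -1, +1])      # eigenvalues of (-1)^F
Omega_trivial = parity * W_trivial
print(Omega_trivial)                 # -> [ 1 -1  0 ]
\end{verbatim}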
We note that since $\mathcal{A}$ is a Lagrangian algebra, among the $\mathcal{A}$-modules only $\mathcal{A}$ itself is of NS type. This is as expected: none of the topological line operators is local with respect to $\mathcal{A}$. An important fact is that under an $S$ transformation, the spin structures of the two cycles are expected to swap, i.e. \begin{equation} \label{eq:S_trans} \chi^s_{X(t)} (\tilde q, \bar{\tilde q}) = \sum_{Y(s)} S^{s, t}_{X(t) Y(s)} \chi^t_{Y(s)}(q, \bar q), \end{equation} where $s,t \in \{NS,R\}$ denote the spin structures along the time and spatial cycles, and $S^{s,t}_{X(t) Y(s)}$ denotes the $S$ transformation matrix that swaps the spin structures, so that we only sum over defects $Y$ with spin structure $s$. The ``defect S-matrix'' is related to the $W$ matrix and the bulk $S$ matrix. It is given by: \begin{equation} \sum_i W^t_{i X(s)} S_{ij} = \sum_{Y(t)} S^{s,t}_{X(s) Y(t)} W^s_{j Y(t)}, \end{equation} where we have introduced the shorthand $W^{R} \equiv \Omega$ and $W^{NS} \equiv W$.

Equation (\ref{eq:S_trans}) has a handy application. As discussed in Section \ref{sec:twistedDVF}, there is a special defect $x_f$ that generates an R-type spin structure. One can readily work out the components $W_{ix_f}$ from $\Omega_{i 1}$ by applying (\ref{eq:S_trans}), noting that the trivial defect is the only $NS$-sector defect here: \begin{equation} \chi^R_{0(NS)}( \tilde q, \bar{\tilde q}) = \sum_i \Omega_{i1} \chi_i (\tilde q, \bar{\tilde q}) = \chi^{NS}_{x_f(R)}(q, \bar{q}) = \sum_i W_{ix_f} \chi_i(q,\bar q). \end{equation} This finally gives \begin{equation} \label{eq:work_xf} \sum_i \Omega_{i1} S_{ij} = W_{jx_f}. \end{equation} Generically, given any other symmetry $g$ and corresponding defects generating $g$-twisted boundary conditions, the $W$ matrix of the latter can be extracted using analogues of (\ref{eq:work_xf}). For example, using this method we identify analogues of the RR, NSR and RNS defects in a condensed theory involving multiple non-Abelian fermion condensations. These examples, concerning $SU(2)_{10}$ and $D(D_4)$, are relegated to the appendix.

\subsection{Revisiting the (twisted) Verlinde formula}

The discussion above inspires a revisit of the Verlinde formula for a spin CFT. The decomposition of the $S$-matrix of characters in a spin CFT into different sectors, namely $\{S^{NS,NS}, S^{NS, R}, S^{R,NS}, S^{R,R}\}$, has long been discussed in the literature. The supersymmetric CFT literature contains a Verlinde formula (see for example \cite{Abdurrahman:1994ar,Eholzer_1994}), although it does not distinguish even and odd fusion channels. On the other hand, in a spin CFT that is graded by fermion parity, one should distinguish parity-even and parity-odd fusion channels. There is a separate identity isolating the difference between the even and odd channels, in the form of the twisted Verlinde formula. In \cite{Aasen:2017ubm} it was derived by considering the dimensional reduction of the 3d spin TQFT to a 2d spin TQFT. We supplied an alternative derivation in the context of (non-Abelian) fermion condensation in (\ref{eq:twisted_VLF}). Here, we will obtain a third derivation that depends solely on the properties of the decomposition of the $S$ matrix recalled above. An extra key observation is that two excitations whose worldlines cut across a common twist line fuse in a twisted way, i.e. the characters satisfy
\begin{equation} \label{eq:twistedfuse} \chi^{NS}_{x\otimes y} = \sum_z {n}_{xy}^z \chi^{NS}_{z}, \qquad \chi^R_{x\otimes y} = \sum_z \tilde{n}_{xy}^z \chi^R_{z}. \end{equation} Now we can also evaluate these characters via \begin{align}\label{eq:SSS} \chi^{X}_{x(Z)\otimes y(Y)} & =\sum_{w(X)} S^{X, Y}_{y(Y) w(X)} \hat{x}(Z) \chi^Y_{w(X)} = \sum_{w(X)}S^{X,Y}_{y(Y) w(X)} \frac{S^{Z,X}_{x(Z) w(X)} }{S^{NS,X}_{0\,w(X)}} \chi^Y_{w(X)} \nonumber \\ & = \sum_{w(X), u(Y.Z)}S^{X,Y}_{y(Y) w(X)} \frac{S^{Z,X}_{x(Z) w(X)} }{S^{NS,X}_{0\,w(X)}} (S^{-1})^{Y,Y.Z}_{w(X) u(Y.Z)} \chi^X_{u(Y.Z)}. \end{align} The expression $Y.Z$ denotes the aggregate spin structure after fusing two objects with spin structures $Y$ and $Z$ respectively. We note that the spin structures form a $\mathbb{Z}_2$ group, with $R$ playing the role of the $\mathbb{Z}_2$ generator, satisfying $R.R = NS$. The second equality above is obtained by considering (\ref{eq:loop_identity}), which is a ``spin-structure enriched'' version of a well-known identity. \begin{figure}[h] \centering \begin{equation} \label{eq:loop_identity} \begin{tikzpicture} \draw [thick] (0,1.5) -- (0,0.1); \draw [thick] (0,-1) -- (0,-0.1); \draw [thick] (-0.1,0.4) to [out=180,in=30] (-0.8,0.2) to [out=330,in=180] (0,0) to [out=0,in=210] (0.8,0.2) to [out=150,in=0] (0.1,0.4); \draw [thick] (2.5,1.5) -- (2.5,-1); \node at (-1.3,0.25) {$\hat x(Z)$} node at (0.7,1.5) {$\omega(X)$} node at (3.2,1.5) {$\omega(X)$} node at (1.7,0.25) {$=\lambda$} node at (4.5,0.25) {$,\quad\lambda=\frac{S^{Z,X}_{x\omega}}{S^{NS,X}_{0\omega}}$}; \end{tikzpicture} \end{equation} \end{figure} Combining (\ref{eq:twistedfuse}) and (\ref{eq:SSS}), we obtain \begin{equation} \label{eq:twistedverlinde} (n^X)_{x(Z) y(Y)}^{u(Y.Z)} = \sum_{w(X)}S^{X,Y}_{y(Y) w(X)} \frac{S^{Z,X}_{x(Z) w(X)} }{S^{NS,X}_{0\,w(X)}} (S^{-1})^{Y,Y.Z}_{w(X) u(Y.Z)}, \end{equation} where we have again introduced the short-hand notation \begin{equation} n^{NS} \equiv n, \qquad n^R \equiv \tilde n. \end{equation} Let us emphasize that the $S^{X,Y}$ matrix we are working with is not in a unitary basis. Let us take as an example the $C_2$ theory, which follows from the Ising theory (with three sectors $0, \psi, \sigma$) upon condensing the fermion $\psi$; this theory has only one NS sector and one R sector, i.e. \begin{equation} 0_{NS} = 0 \oplus \psi, \qquad \beta_R = \sigma. \end{equation} The corresponding characters in the $C_2$ theory can be written as \begin{equation} \chi^{NS}_0(q,\bar{q}) = \chi_0(q,\bar{q}) + \chi_\psi(q,\bar{q}) , \qquad \chi^{R}_{0}(q,\bar{q})= \chi_0 (q,\bar{q}) - \chi_\psi (q,\bar{q}) , \qquad \chi^{NS}_\beta (q,\bar{q}) = \chi_\sigma(q,\bar{q}) . \end{equation} In this basis, the $S^{X,Y}$-matrix is given by \begin{equation} S^{NS\, NS}_{00} = 1, \qquad S^{NS\, R}_{0\beta} = \sqrt{2}, \qquad S^{R\, NS}_{\beta 0} = \frac{1}{\sqrt{2}}. \end{equation} If instead we pick the normalization \begin{equation} \tilde \chi_\beta^{NS} = \sqrt{2} \chi_\sigma \end{equation} (corresponding to rescaling a $q$-type object by $\sqrt{2}$), then $\tilde{S}^{NS\, R}_{0 \beta} = \tilde{S}^{R\, NS}_{\beta 0} =1$, and the Verlinde formula would require an extra factor of $\sqrt{e_x e_y e_u}$ on the rhs of (\ref{eq:twistedverlinde}), where $e_i$ is the dimension of the endomorphism space of sector $i$. This is the version that appears in \cite{Aasen:2017ubm}. These are general considerations that a priori are not connected to anyon condensation.
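The three $S^{X,Y}$ entries above, as well as the formula (\ref{eq:work_xf}), can be verified in a few lines of Python. The sketch below (ours) assumes only the standard Ising modular $S$-matrix and transforms the $C_2$ characters in the basis $(\chi_0, \chi_\psi, \chi_\sigma)$; the $\sqrt{2}$ produced by (\ref{eq:work_xf}) is precisely the $q$-type normalization ambiguity just discussed.
\begin{verbatim}
import numpy as np

# Standard Ising modular S-matrix in the basis (0, psi, sigma) --
# an assumed external input, not derived in the text.
r = np.sqrt(2)
S = np.array([[1.0, 1.0,   r],
              [1.0, 1.0,  -r],
              [  r,  -r, 0.0]]) / 2

# C2 characters as vectors of Ising characters:
chi_NS_0 = np.array([1.0,  1.0, 0.0])   # chi_0 + chi_psi
chi_R_0  = np.array([1.0, -1.0, 0.0])   # chi_0 - chi_psi
chi_NS_b = np.array([0.0,  0.0, 1.0])   # chi_sigma

# chi(-1/tau) has coefficient vector S^T v for a character vector v:
print(S.T @ chi_NS_0)  # (1, 1, 0):         S^{NS,NS}_{00} = 1
print(S.T @ chi_R_0)   # (0, 0, sqrt(2)):   S^{NS,R}_{0b}  = sqrt(2)
print(S.T @ chi_NS_b)  # (1, -1, 0)/sqrt(2): S^{R,NS}_{b0} = 1/sqrt(2)

# Eq. (eq:work_xf): W_{j x_f} = sum_i Omega_{i1} S_{ij}, with
# Omega_{i1} = (1, -1, 0) for the condensate A = 1 + psi:
Omega = np.array([1.0, -1.0, 0.0])
print(Omega @ S)       # (0, 0, sqrt(2)): the R-defect sits on sigma
\end{verbatim}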
\section{Conclusion}

The main aim of the current paper is to study in detail gapped boundaries of 2+1d topological orders characterized by an anyon condensate that contains fermions. The physics of these gapped boundaries includes the different species of excitations, their fusion rules, and also the properties of junctions where two different gapped boundaries meet. In the case of bosonic condensates, these issues have been studied in detail by many authors, as mentioned in the introduction. It is realized that the underlying mathematical structure characterizing each boundary condensate is a commutative Frobenius algebra, and that the boundary excitations and junction excitations are modules and bi-modules of these Frobenius algebras, respectively.

In the current paper, we have generalized these considerations to cover gapped boundaries following from fermionic anyon condensation. The mathematical generalization is to replace ``commutativity'' by ``super-commutativity''. This has been discussed to some extent in \cite{Aasen:2017ubm} for simple current condensations; here we generalize it to accommodate arbitrary fermionic anyon condensation. Moreover, we have also extended the discussion to include junction excitations. In particular, we developed systematic ways to read off the endomorphisms of a (bi-)module -- which describe whether the corresponding defect can host a Majorana mode. Along the way, we clarified and generalized the defect Verlinde formula discussed in \cite{Shen_2019} to fermionic boundaries, and provided a systematic recipe to compute the half-linking numbers central to the formula. We also discussed the connection between these defects in a super-condensate and line operators in super modular invariant CFTs -- as in the bosonic case, each fermionic condensate defines a ``super'' modular invariant, and the gapped boundary excitations are topological line operators.

There are some miscellaneous facts that we have omitted in the main text but which may be of interest. Fermion condensation that preserves fermion parity can in the end be reduced to an Abelian fermion condensation \cite{Wan:2016php}. Consider a fermionic gapped boundary of a bosonic bulk topological order. If we adopt the strategy of a sequential condensation that condenses the bosons in the condensate first, the intermediate phase has only three possible choices: the toric code order ($c=0$), the Ising order ($c=1/2$), and the 3-fermion order $(D_4, 1)$ ($c=4$) \cite{rowell2007classification}. These are the only bosonic modular tensor categories that contain at least one fermionic simple current and have total quantum dimension 2 -- so that they become fully confined upon condensing just one more fermion. This simple fermion is responsible for carrying the odd fermion parity, and it is necessarily a simple current in the intermediate phase, as demonstrated in \cite{Wan:2016php}. This fact gives a simple check that decides whether a bulk order can have a fermionic gapped boundary, namely by inspecting the topological central charge, which is preserved under anyon condensation.

Another curious fact is the apparent scarcity of fermionic gapped boundaries, particularly those beyond simple fermion condensation. In the entire $SU(2)_k$ series of modular tensor categories, only $SU(2)_{10}$ contains a Lagrangian super-Frobenius algebra. We could not find any in $SU(3)_k$ by following the principle of sequential condensation. These conclusions are made by adopting the philosophy of ``sequential'' condensation.
That is, we first look up the modular invariants of these models (the $SU(2)_k$ series admits an ADE classification, and $SU(3)_k$ has been classified, for example, in \cite{gannon1994classification}), and among them look for candidates in which one can condense a further simple fermionic current.

To conclude, we note that there are various questions still pending. We looked into super modular invariants in $D(D_4)$, and there we found examples where the super-modular invariants do not appear to correspond to a super-commutative Frobenius algebra as we have defined it in the main text. They rather appear to correspond to non-commutative algebras in which anyons in the condensate can be non-local with respect to each other. This would also imply that the product of the tentative algebra must break fermion parity -- the product of two parity-even anyons gives a parity-odd anyon. One could perhaps discard them as simply being unphysical. However, the mutual non-locality among the condensed anyons is at worst a minus sign, so it is a curiosity whether there might after all be an interpretation of these super modular invariants.

The generalization to fermionic condensates suggests that it is possible to further extend the idea of anyon condensation to include anyons of arbitrary spin -- the new ingredient is to couple the condensing anyon to an appropriate gauge field, the counterpart of the spin structure, which would render the condensation consistent. It is believed that such condensates might be related to gapless boundary conditions. Finally, it has recently been realized that these gapped boundaries are examples of the spontaneous breaking of a categorical symmetry \cite{thorngren2019fusion,ji2019categorical,kong2020algebraic}. It is important to understand whether there are other implications of the categorical symmetry, how to systematically reverse the process of condensation (i.e. the analogue of gauging), and how these ideas could be generalized to higher dimensions.
\section{Introduction} The study of the distribution of the logarithm of the Riemann zeta-function was precipitated by the work of Bohr and Jessen~\cite{BJ} in the early 1930s. They showed that for a fixed $\frac12 < \sigma \leq 1$ and any rectangle $\mathcal{R}$ in the complex plane with sides parallel to the coordinate axes, the quantity \[ \frac{1}{T}\mu\Big\{T < t \leq 2T: \log\zeta(\sigma+it) \in \mathcal{R}\Big\} \] converges to a value $\mathbb{F}_\sigma(\mathcal{R})$ as $T\to \infty,$ where $\mu$ denotes the Lebesgue measure and $\mathbb{F}_\sigma$ denotes a probability distribution function on $\mathbb{C}.$ This result is one of the many lovely connections between probability theory and analytic number theory. Notice that this means, for example, that $ \log|\zeta(s)|$ and $ \arg\zeta(s)$ are usually bounded on the line $\Re s =\sigma$ when $\frac12 < \sigma \leq 1.$ In contrast to this, $\log|\zeta(\frac12+it)|$ and $\arg\zeta(\frac12+it)$ are typically much smaller or much larger, for the work of Selberg~\cite{Selberg1944, S1946Archiv} and Tsang~\cite{Tsang} shows that for large $T$ and fixed real numbers $a, b$ with $a< b$ we have \begin{equation}\label{CLT real} \begin{split} \frac 1T\text{meas}\Big\{T <t \leq 2T:\frac{\log|\zeta(\tfrac12+it)|}{\sqrt{\tfrac12\log\log T}}\in [a, b]\Big\} =\frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx}+O\bigg(\frac{(\log\log\log T)^2}{\sqrt{\log\log T}}\bigg) \end{split} \end{equation} and \begin{equation}\label{CLT im} \begin{split} \frac 1T\text{meas}\Big\{T <t \leq 2T:\frac{\arg\zeta(\tfrac12+it)}{\sqrt{\tfrac12\log\log T}}\in [a, b]\Big\} =\frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx}+O\bigg(\frac{\log\log\log T}{\sqrt{\log\log T}}\bigg). \end{split} \end{equation} Here, the function $\arg\zeta(\tfrac12+it)$ is defined as follows. If $t$ is not the ordinate of a zero, then starting with $\arg\zeta(2)=0,$ $\arg\zeta(\tfrac12+it)$ is defined by continuous variation over the line segment from $2$ to $2+it,$ and then from $2+it$ to $\tfrac12+it.$ If $t$ is the ordinate of a zero, then we define \begin{equation}\label{defn of arg} \arg\zeta(\tfrac12+it)=\lim_{\epsilon \to 0} \frac{\arg\zeta(\tfrac12+i(t+\epsilon))+\arg\zeta(\tfrac12+i(t-\epsilon))}{2}. \end{equation} Notice that the main term on the right-hand side of \eqref{CLT real} and \eqref{CLT im} is the distribution function of a random variable with the standard Gaussian distribution. Indeed, these results are famously known as Selberg's central limit theorem. Selberg later generalized his theorem to functions in the so-called Selberg class (see \cite{Selberg92}). He also gave a number of applications of \eqref{CLT real} and \eqref{CLT im} to such problems as determining the proportion of $a$-points of linear combinations of functions in the Selberg class in various regions of the critical strip and the proportion of zeros of such combinations on the critical line. Later in a related work, Hejhal~\cite{H} proved that, if the Riemann hypothesis (RH) is true, then the function $\displaystyle\log(|\zeta'(\tfrac12+it)|/\log t)$ has an approximate Gaussian distribution on the interval $[T, 2T]$ with mean $0$ and variance $\frac{1}{2}\log\log T.$ Indeed, this was later shown to hold unconditionally by Selberg in unpublished work \cite{SUnpublished}. In the same paper Hejhal further proved a discrete version of this result. To describe his work, we need to introduce some notation and a hypothesis. 
Let $N(T)$ denote the number of nontrivial zeros $\r=\b+i\gamma $ of $\zeta(s)$ with $0<\b < 1, 0<\gamma \leq T.$ By the Riemann-von Mangoldt formula, \[ N(T)=\frac{T}{2\pi}\log\frac{T}{2\pi}-\frac{T}{2\pi}+ \frac{1}{\pi}\arg\zeta (\tfrac12+iT)+\frac78+O\Big(\frac{1}{T}\Big). \] Following \eqref{defn of arg}, if $T$ is the ordinate of a zero, then we set $N(T)=\lim_{\epsilon \to 0}\frac{N(T+\epsilon)+N(T-\epsilon)}{2}.$ For $\a$ a positive real number consider the following zero-spacing hypothesis (which inherently assumes RH). \begin{hyp}\label{hypothesis} We have \[ \limsup_{T\to\infty}\frac{1}{N(T)}\#\Big\{0 < \gamma \leq T:0\leq \gamma ^+-\gamma \leq \frac{C}{\log T}\Big\} \ll C^\a \] uniformly for $0<C<1.$ Here $\tfrac12+i\gamma ^+$ is the immediate successor of $\tfrac12+i\gamma $ with the convention that $\gamma ^+=\gamma $ if and only if $ \tfrac12+i\gamma $ is a multiple zero. \end{hyp} Notice that if $C$ is any positive number, then the number of zeros being counted is $\ll \min\{C^\a, 1\}\, N(T).$ Hejhal~\cite{H} proved that if one assumes RH, Hypothesis $\mathscr H_\a$ for some fixed $\a\in(0,1],$ and that all the zeros of the zeta-function are simple, then as $T\to \infty,$ \begin{equation}\label{Hejhal discrete CLT} \begin{split} \frac{1}{N(2T)-N(T)} \# \bigg\{T < \gamma \leq 2T: \frac{\log(|\zeta'(\r)|/\log T )} {\sqrt{\tfrac12\log\log T}} \in [a, b]\bigg\} \sim \frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx}. \end{split} \end{equation} Here, Hypothesis $\mathscr{H}_\a$ ensures that there are not too many dense clusters of zeros of the zeta-function, which is in turn necessary for controlling some of the error terms arising in the proof of \eqref{Hejhal discrete CLT}. This and similar hypotheses have been used by a number of authors, for example, see~\cite{BH}, \cite{Kirila} and~\cite{L}. We also remark that $\mathscr{H}_1$ implies $\mathscr{H}_\a$ for every $\a\in(0,1],$ and $\mathscr{H}_1$ is believed to be true since it is implied by the following well-known conjecture of Montgomery~\cite{Montgomery73}. \begin{pcc}\label{pcc} Let $\a <\b$ be real numbers and define $\delta_0=1$ if $\a\leq 0 < \beta,$ and $\delta_0=0$ otherwise. Then we have \[ \frac{1}{N(T)} \sum_{\substack{0 < \gamma , \gamma ' \leq T, \\ \tfrac{2\pi\a}{\log T}\leq \gamma -\gamma ' \leq \tfrac{2\pi\b}{\log T}}} 1 \sim \int_\a^\b \left(1-\frac{\sin^2(\pi x)}{(\pi x)^2}+\delta_0\right) \mathop{dx} \] as $T\to\infty.$ \end{pcc} Our goal in this paper is to prove a suitable discrete analogue of Selberg's central limit theorem given in \eqref{CLT real} and \eqref{CLT im}. We also obtain a more precise version of Hejhal's result in \eqref{Hejhal discrete CLT}. \begin{thm}\label{distr of Re log zeta} Assume the Riemann hypothesis and Montgomery's Pair Correlation Conjecture. Let $z=u+iv$ be a complex number with $0 < u \ll \tfrac{1}{\log T}$ and $v=O\bigs( \tfrac{1}{\log X}\bigs),$ where $\displaystyle X=T^{\frac{1}{16\Psi(T)^6}}$ with $\Psi(T)=\sum_{p\leq T} p^{-1}$ and $T$ is sufficiently large. Then \begin{align*} \frac{1}{N(T)}\#\bigg\{0 < \gamma \leq T: \frac{\log|\zeta(\r+z)|-M_X(\r, z)}{\sqrt{\tfrac12\log\log T}} \in [a, b]&\bigg\} \\ =\frac{1}{\sqrt{2\pi}}&\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(\frac{(\log\log\log T)^2}{\sqrt{\log\log T}}\bigg), \end{align*} where \begin{equation}\label{mean} \begin{split} M_X(\r, z)= m(\r+iv)\Big(\log\Big(\sdfrac{eu\log X}{4}\Big)-\sdfrac{u\log X}{4}\Big).
\end{split} \end{equation} Here, $m(\r+iv)$ denotes the multiplicity of the zero at $\r+iv$ if it is a zero of $\zeta(s),$ otherwise $m(\r+iv)=0.$ \end{thm} We shall see later, in the proof of Proposition \ref{zero spacing eta v} (which is where Montgomery's Pair Correlation Conjecture is used), that when $v\neq 0,$ the point $\r+iv$ is usually not a zero of the zeta-function, and so $M_X(\r, z)=0.$ \begin{thm}\label{distr of Im log zeta} Assume the Riemann hypothesis. Let $z=u+iv$ be a complex number with $0 < u \leq \tfrac{1}{\log X}$ and $v=O\bigs(\tfrac{1}{\log X}\bigs),$ where $\displaystyle X=T^{\frac{1}{16\Psi(T)^6}}$ with $\Psi(T)=\sum_{p\leq T} p^{-1}$ and $T$ is sufficiently large. Then \[ \frac{1}{N(T)}\#\bigg\{0 < \gamma \leq T: \frac{\arg\zeta(\r+z)}{\sqrt{\tfrac12\log\log T}} \in [a, b]\bigg\} =\frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(\frac{\log{\log\log T}}{\sqrt{\log\log T}}\bigg). \] \end{thm} Note that unlike Theorem~\ref{distr of Re log zeta}, Theorem~\ref{distr of Im log zeta} does not require the assumption of Montgomery's Pair Correlation Conjecture. Also, Theorem~\ref{distr of Re log zeta} is uniform in $u.$ Letting $v=0$ and then $u\to 0^+$ in the statement of the theorem, we immediately deduce the following corollary. \begin{cor} \label{cor: log zeta'} Assume the Riemann hypothesis and Montgomery's Pair Correlation Conjecture, and assume in addition that all zeros of the zeta-function are simple. Then for sufficiently large $T,$ \[ \begin{split} \frac{1}{N(T)}\#\bigg\{0 < \gamma \leq T: \frac{\log(|\zeta'(\r)|/\log T)}{\sqrt{\tfrac12\log\log T}}\in [a, b]\bigg\}& \\ =\frac{1}{\sqrt{2\pi}}&\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(\frac{(\log\log\log T)^2}{\sqrt{\log\log T}}\bigg). \end{split} \] \end{cor} In fact, we can slightly weaken the hypotheses of the corollary. \begin{thm} \label{thm: log zeta'} Assume the Riemann hypothesis and Hypothesis $\mathscr H_\a$ for some $\a\in (0,1].$ If all zeros of the zeta-function are simple, then for sufficiently large $T$ \[ \begin{split} \frac{1}{N(T)}\#\bigg\{0 < \gamma \leq T: \frac{\log(|\zeta'(\r)|/\log T)}{\sqrt{\tfrac12\log\log T}}\in [a, b]\bigg\}& \\ =\frac{1}{\sqrt{2\pi}}&\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(\frac{(\log\log\log T)^2}{\sqrt{\log\log T}}\bigg). \end{split} \] \end{thm} Observe that this theorem improves Hejhal's result \eqref{Hejhal discrete CLT} by providing an error term. It does not seem that Hypothesis $\mathscr H_\a$ (together with RH) is sufficient to prove Theorem \ref{distr of Re log zeta}. However, though we shall not do so, we feel it is worth remarking that one can prove Theorem \ref{distr of Re log zeta} under the assumption of the following alternative zero-spacing hypothesis (and RH): There is an $\a \in (0, 1]$ such that for every real number $\t,$ we have \[ \limsup_{T\to\infty}\frac{1}{N(T)}\#\Big\{0 < \gamma \leq T:0\leq \gamma ^+_\t-(\gamma +\t) \leq \sdfrac{C}{\log T}\Big\} \ll C^\a \] uniformly for $0<C<1.$ Here $\tfrac12+i\gamma ^+_\t$ is the zero that immediately follows $\tfrac12+i(\gamma +\t).$ Moreover, one has $\gamma ^{+}_{\t} = \gamma +\t$ if and only if $ \tfrac12+i(\gamma +\t)$ is a multiple zero. A word is in order concerning our use of RH and Montgomery's Pair Correlation Conjecture or Hypothesis $\mathscr H_\a.$ The proofs of our theorems depend on the calculation of moments of the form \[ \sum_{0<\gamma \leq T} A(\r)^j \mkern4.5mu\overline{\mkern-4.5mu B(\r)}{}\,^k \] where $A(s)$ and $B(s)$ are Dirichlet polynomials and $\rho$ runs over the zeros of the zeta-function.
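Before describing how such moments are handled, we record a small numerical aside (ours; it is no part of any proof). It illustrates the mechanism by which Montgomery's Pair Correlation Conjecture implies Hypothesis $\mathscr H_1$: the conjectured density $1-\sin^2(\pi x)/(\pi x)^2$ vanishes quadratically at the origin, so the expected proportion of rescaled neighbor gaps of size at most $\delta$ scales like $\delta^3,$ which is certainly $\ll \delta.$
\begin{verbatim}
import numpy as np

def R(x):
    # Montgomery's conjectured pair-correlation density (for x != 0)
    s = np.sinc(x)                # numpy sinc(x) = sin(pi x)/(pi x)
    return 1.0 - s * s

# integral_0^delta R(x) dx ~ (pi^2/9) delta^3 as delta -> 0
for delta in (0.5, 0.1, 0.01):
    x = np.linspace(1e-9, delta, 200001)
    integral = R(x).mean() * delta        # fine-grid Riemann sum
    print(delta, integral / delta**3)     # -> pi^2/9 = 1.0966...
\end{verbatim}
We now return to the moments displayed above.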
A formula of Landau and Gonek~\cite{GonekLandaulemma1, GonekLandaulemma2} allows one to estimate sums of the type \[ \sum_{0<\gamma \leq T} A(\r)^j B(1-\r)^k, \] which are of the above type, provided that RH is true. In addition to RH, we need to assume Montgomery's Pair Correlation Conjecture in Theorem \ref{distr of Re log zeta}, or Hypothesis $\mathscr H_\a$ in Theorem~\ref{thm: log zeta'}, in order to control one of the error terms in our moment calculations. The remainder of this paper is organized into five sections. In Section~\ref{sec:approximate formula for zeta} we use Dirichlet polynomials over the primes to approximate the real and imaginary parts of $\log \zeta(\r+z).$ In Section~\ref{sec:lemmas} we present a number of technical lemmas. In Section~\ref{moments} we calculate some discrete moments related to the real part of $\log\zeta(\r+z).$ In Section~\ref{sec:proof} we complete the proof of Theorem~\ref{distr of Re log zeta}. The proof of Theorem~\ref{distr of Im log zeta} is very similar but easier, so we do not include it. Finally, we prove Theorem~\ref{thm: log zeta'} in Section~\ref{proof of thm Re log zeta'}. Throughout the paper, we assume RH and take $T$ to be a sufficiently large positive real number. The letters $c, A$ and $D$ always denote positive constants, which may differ at each occurrence; $c$ is absolute, while $A$ and $D$ may depend on certain parameters. The variables $p$ and $q,$ indexed or not, are reserved for prime numbers, and the variables $j, k, \ell, m$ and $n$ always denote nonnegative integers. \section*{Acknowledgements} The author gives sincere thanks to her doctoral advisor Steven M. Gonek for introducing the problem in this paper and also for providing guidance and support during the process of its study. Professor Gonek also read an earlier version of this paper and made many useful suggestions which significantly improved the exposition. In the early stages of this work, the author was partially supported by the NSF grant DMS-1200582 through her advisor. \section{Approximate Formulas} \label{sec:approximate formula for zeta} Our goal in this section is to prove approximate formulas for the real and imaginary parts of $\log\zeta(\r+z),$ where $\r=\tfrac12+i\gamma $ denotes a typical nontrivial zero of $\zeta(s)$ with multiplicity $m(\r),$ and $z=u+iv$ denotes a complex shift such that $0 < u=\Re z \leq \tfrac{1}{\log X}$ and $v=\Im z= O\Big(\tfrac{1}{\log X}\Big).$ Here, $4\leq X\leq t^2$ for a sufficiently large number $t.$ We also set $s=\sigma+it.$ The Riemann zeta-function has the two well-known expressions \[ \zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s} =\prod_{p}\Big(1-\frac{1}{p^s}\Big)^{-1} \qquad \text{for} \quad \sigma>1. \] From the Euler product one finds that \[ \log\zeta(s)=\sum_{n=1}^\infty \frac{\Lambda(n)}{n^s\log n} \qquad \text{for} \quad \sigma>1, \] where $\Lambda(n)$ is the von Mangoldt function. This last series is absolutely convergent in the half-plane $\sigma>1.$ It is not difficult to show that if we truncate it at $X^2$ and only include the primes, the resulting Dirichlet polynomial $\sum_{p\leq X^2} \frac{1}{p^s}$ provides a good approximation to $\log\zeta(s)$ in this region. In Lemma \ref{Re log zeta} and Lemma \ref{arg zeta}, we show that we may similarly use a Dirichlet polynomial to approximate $\log\zeta(\r+z).$ Before stating our lemmas, we require some additional notation.
Let \begin{equation}\label{P defn} \CMcal{P}_X(\gamma +v)=\sum_{p\leq X^2}\frac{1}{p^{1/2+i(\gamma +v)}}. \end{equation} Also let \begin{equation}\label{Lambda_X} \begin{split} \Lambda_X(n)=\Lambda(n)w_X(n), \end{split} \end{equation} where \begin{equation}\label{w_X} \begin{split} w_X(n)= \begin{cases} 1&\quad \text{if} \quad 1\leq n \leq X,\\ \tfrac{\log{(X^2/n)}}{\log{X}} &\quad \text{if} \quad X< n \leq X^2. \end{cases} \end{split} \end{equation} Finally, we set \[ \sigma_1=\frac12+\frac{4}{\log X}, \] and \begin{equation}\label{eta} \eta_{\gamma +v} =\min_{\substack{\gamma ' \neq \gamma +v}} |\gamma '-(\gamma +v)|, \end{equation} where $\gamma '$ runs over all ordinates of the nontrivial zeros of the zeta-function. Notice, in particular, that $\eta_\gamma $ is the distance from $\gamma $ to the nearest ordinate of a zero other than $\r.$ \begin{lem} \label{Re log zeta} Let $4\leq X\leq T^2,$ and $\displaystyle \sigma_1=\tfrac12+\tfrac{4}{\log X}.$ Let $z=u+iv$ denote a complex number with $0 < u \leq \tfrac{1}{\log X}$ and $\displaystyle v=O\bigs(\tfrac{1}{\log X}\bigs).$ Then \begin{equation}\label{eq:Re log zeta} \log|\zeta(\r+z)| \, = M_X(\r, z)+\Re\CMcal{P}_X(\gamma +v) +O\bigg(\sum_{i=1}^{4}r_i(X, \gamma +v)\bigg). \end{equation} Here $M_X(\r, z)$ is as defined in \eqref{mean}, and $\CMcal{P}_X(\gamma +v)$ is as defined in \eqref{P defn}. We also have \begin{align*} r_1(X, \gamma +v)=\bigg|\sum_{p\leq X^2}&\frac{1-w_X(p)}{p^{1/2+i(\gamma +v)}}\bigg| \, , \quad r_2(X, \gamma +v)=\bigg|\sum_{p\leq X}\frac{w_X(p^2)}{p^{1+2i(\gamma +v)}}\bigg|\, , \\ r_3(X, \gamma +v)=\, &\frac{1}{\log X} \int_{1/2}^\infty X^{\tfrac12-\sigma}\bigg|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\bigg|\mathop{d\sigma}, \\ \shortintertext{and}\hspace{1.8 cm} r_4(X, \gamma +v)&=\bigg(1+\log^{+}\Big(\sdfrac{1}{\eta_{\gamma +v}\log X }\Big)\bigg)\sdfrac{E(X,\gamma +v)}{\log X}\, , \end{align*} where \[ E(X, \gamma +v) =\bigg|\sum_{n\leq X^2} \sdfrac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)}}\bigg|+\log(\gamma +v). \] \end{lem} A similar result holds for $\arg\zeta(\r+z).$ \begin{lem} \label{arg zeta} With the same notation as in Lemma~\ref{Re log zeta}, we have \begin{equation}\label{Im part 1} \arg\zeta(\r+z) =\Im{\CMcal{P}_X(\gamma +v)} +O\bigg(\sum_{i=1}^{3}r_i(X, \gamma +v)\bigg) +O\bigg(\frac{E(X, \gamma +v) }{\log X}\bigg). \end{equation} \end{lem} The statement of Lemma \ref{Re log zeta} is uniform in $u.$ Subtracting $M_X(\r,z)$ from both sides of \eqref{eq:Re log zeta}, letting $v=0$ and then $u$ tend to $0$ from the right, we immediately obtain \begin{cor} \label{Re log zeta`} With the same notation as in Lemma \ref{Re log zeta}, \[ \log\bigg|\frac{\zeta^{(m(\r))}(\r)}{(m(\r))!}\bigg|-m(\r)\log\Big(\sdfrac{e\log X}{4}\Big) =\Re\CMcal{P}_X(\gamma ) +O\bigg(\sum_{i=1}^{4}r_i(X, \gamma )\bigg). \] \end{cor} We prove Lemma \ref{Re log zeta} first; it is more involved than Lemma \ref{arg zeta}, in whose proof some of the intermediate results below will be reused.
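Before doing so, we briefly indicate why the left-hand side of Corollary \ref{Re log zeta`} is the natural limiting object. If $\r$ is a zero of multiplicity $m(\r),$ then the Taylor expansion of $\zeta$ about $\r$ shows that, for fixed $\r$ and $u\to 0^+,$ \[ \log|\zeta(\r+u)| =\log\bigg|\frac{\zeta^{(m(\r))}(\r)}{(m(\r))!}\bigg| +m(\r)\log u+O(u), \] while, for $v=0,$ \eqref{mean} reads \[ M_X(\r, u) =m(\r)\Big(\log u+\log\Big(\sdfrac{e\log X}{4}\Big)-\sdfrac{u\log X}{4}\Big). \] The singular terms $m(\r)\log u$ cancel in the difference $\log|\zeta(\r+u)|-M_X(\r, u),$ which therefore converges to the left-hand side of the corollary as $u\to 0^+.$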
\begin{proof}[Proof of Lemma \ref{Re log zeta}] Let $ 4\leq X\leq t^2$ and $t\geq 2.$ We write \[ s_1=\sigma_1+it \quad \text{and} \quad s=\sigma+it, \] where $s$ is not a zero of $\zeta.$ By (14.21.4) in \cite{T}, \begin{equation}\label{Z'/Z} \frac{\zeta^{'}}{\zeta}(s) = - \sum_{n\leq X^2} \frac{\Lambda_X(n)}{n^s} +O\bigs(X^{\frac12-\sigma}E(X, t)\bigs) \quad \text{for} \quad \sigma\geq \sigma_1, \end{equation} where \begin{equation}\label{E} E(X, t) =\Big|\sum_{n\leq X^2} \frac{\Lambda_X(n)}{n^{\sigma_1+it}}\Big|+\log t. \end{equation} We also have for $s$ not equal to any zero $\r^\prime =\frac12+i\gamma ^\prime$ that \begin{equation}\label{Z'/Z 2} \frac{\zeta^{'}}{\zeta}(s) =\sum_{\r^\prime} \Big(\frac{1}{s-\r'} +\frac1{\r'}\Big) +O(\log t). \end{equation} Taking real parts of both sides and setting $s=s_1,$ we find that \[ \Re \frac{\zeta^{'}}{\zeta}(s_1) =\sum_{\r^\prime} \frac{\sigma_1-1/2}{(\sigma_1-1/2)^2 +(t-\gamma ^\prime)^2} +O(\log t). \] Using this and \eqref{Z'/Z} with $s=s_1,$ we see that \begin{equation}\label{Sum<E} \sum_{\r^\prime} \frac{\sigma_1-1/2}{(\sigma_1-1/2)^2 +(t-\gamma ^\prime)^2} \ll E(X, t). \end{equation} Now suppose that $\r=\frac 1 2+i\gamma $ is a fixed zero with $0<\gamma \leq T,$ and choose a number $z=u+iv$ with $0 <u \leq \sigma_1-\tfrac12$ and $ v=O\bigs(\tfrac{1}{\log X}\bigs).$ Then we have \begin{equation}\label{logzeta rho+z} \begin{split} \log\zeta(\r+z) &= -\int_{\frac12+u}^\infty \frac{\zeta^{'}}{\zeta}(\sigma+i(\gamma +v))\mathop{d\sigma} \\ &=-\int_{\sigma_1}^\infty \frac{\zeta^{'}}{\zeta}(\sigma+i(\gamma +v)) \mathop{d\sigma} -\Big(\sigma_1-\sdfrac12-u\Big) \frac{\zeta^{'}}{\zeta}(\sigma_1+i(\gamma +v)) \\ &\hskip.5in+\int_{1/2+u}^{\sigma_1} \Big(\frac{\zeta^{'}}{\zeta}(\sigma_1+i(\gamma +v)) -\frac{\zeta^{'}}{\zeta}(\sigma+i(\gamma +v))\Big)\mathop{d\sigma} \\[3pt] &= J_1 +J_2 + J_3. \end{split} \end{equation} By \eqref{Z'/Z} \begin{align}\label{J_1} J_1 =&\sum_{n\leq X^2} \frac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)} \log n} +O\bigg(\frac{E(X, \gamma +v) }{\log X}\bigg) . \end{align} Again by \eqref{Z'/Z} with $\sigma=\sigma_1,$ \begin{equation}\label{J_2} J_2 \ll \Big(\sigma_1-\sdfrac12-u\Big) E(X, \gamma +v) \ll \frac{E(X, \gamma +v)}{\log X} . \end{equation} By \eqref{logzeta rho+z}, the last two estimates imply \begin{equation}\label{Re part} \log|\zeta(\r+z)| = \Re\, \sum_{n\leq X^2} \frac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)} \log n} +\Re\, J_3 +O\bigg(\frac{E(X, \gamma +v)}{\log X}\bigg). \end{equation} As for $\Re J_3,$ we have by \eqref{Z'/Z 2} \begin{align*} \Re\bigg(\frac{\zeta^{'}}{\zeta}(\sigma_1&+i(\gamma +v)) -\frac{\zeta^{'}}{\zeta}(\sigma+i(\gamma +v)) \bigg) \\ =& \sum_{\gamma ^\prime} \bigg( \frac{\sigma_1-1/2}{ (\sigma_1-1/2)^2+ (\gamma +v-\gamma ^\prime)^2}-\frac{\sigma-1/2}{(\sigma-1/2)^2+ (\gamma +v-\gamma ^\prime)^2} \bigg) +O(\log \gamma ) \end{align*} for $\frac12\leq \sigma \leq \sigma_1$ and $s\neq \r.$ We separate out the terms $\gamma '$ corresponding to $\gamma +v,$ if any, from the sum.
There are $m(\r+iv)$ of them, so we find that \begin{align*} &\bigg| \Re \bigg(\frac{\zeta^{'}}{\zeta}(\sigma_1+ i(\gamma +v)) -\frac{\zeta^{'}}{\zeta}(\sigma+ i(\gamma +v)) \bigg) -m(\r+iv)\bigg(\frac{1}{ \sigma_1-1/2}-\frac{1}{\sigma-1/2}\bigg)\bigg| \\ \leq & \sum_{\gamma '\neq \gamma +v} \frac{ \big|(\sigma_1-1/2) \big((\sigma-1/2)^2+(\gamma +v-\gamma ')^2 \big)-(\sigma-1/2) \big((\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2 \big)\big|} {\big((\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2 \big) \big((\sigma-1/2)^2+(\gamma +v-\gamma ')^2 \big)} \\ &\quad +O(\log \gamma ) \\ = & \sum_{\gamma '\neq \gamma +v} \frac{(\sigma_1-\sigma)\bigs|-(\sigma_1-1/2)(\sigma-1/2)+(\gamma +v-\gamma ')^2\bigs|}{\big((\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2 \big) \big((\sigma-1/2)^2+(\gamma +v-\gamma ')^2 \big)}\\ &\quad +O(\log \gamma ). \end{align*} Integrating the first and last expressions in this chain of inequalities over $\sigma \in [\frac12+u, \sigma_1],$ we deduce by the triangle inequality that \begin{align}\label{ReJ3} &\qquad \bigg|\Re \,J_3 -m(\r+iv)\bigg(\frac{\sigma_1-1/2-u}{\sigma_1-1/2}- \frac12\log \frac{(\sigma_1-1/2)^2}{u^2}\bigg)\bigg| \notag \\ \leq &\sum_{\gamma '\neq \gamma +v} \int_{1/2+u}^{\sigma_1} \frac{(\sigma_1-\sigma)(\sigma_1-1/2)(\sigma-1/2)} {\big((\sigma-1/2)^2+(\gamma +v-\gamma ')^2 \big)\big((\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2 \big)} \mathop{d\sigma} \\ + \sum_{\gamma '\neq \gamma +v} &\frac{1}{(\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2}\int_{1/2+u}^{\sigma_1} \frac{(\sigma_1-\sigma)(\gamma +v-\gamma ')^2}{(\sigma-1/2)^2+(\gamma +v-\gamma ')^2} \mathop{d\sigma} +O \bigg( \frac{\log{\gamma }}{\log X} \bigg) \notag . \end{align} The term being subtracted on the left-hand side is the function $M_X(\r, z)$ in \eqref{mean}, namely (recalling that $\sigma_1-\tfrac12=\tfrac{4}{\log X}$), \[ M_X(\r, z)=m(\r+iv)\Big(\log\Big(\sdfrac{eu\log X}{4}\Big)-\sdfrac{u\log X}{4}\Big). \] We now study the second sum on the right-hand side of \eqref{ReJ3}. The integral is at most $(\sigma_1-1/2)^2.$ Thus the second sum is \begin{equation}\label{second sum J3} \ll \sum_{\gamma '\neq \gamma +v} \frac{(\sigma_1-1/2)^2 }{(\sigma_1-1/2)^2 +(\gamma +v-\gamma ')^2} . \end{equation} For the first sum, note that \[ (\sigma_1-\sigma)(\sigma_1-1/2)(\sigma-1/2) \ll (\sigma_1-1/2)^2(\sigma-1/2) \qquad \text{for} \quad \sigma\in[1/2+u,\sigma_1]. \] Thus, the first sum is \begin{equation}\label{first sum lemma} \ll \sum_{\gamma '\neq \gamma +v} \frac{(\sigma_1-1/2)^2} {(\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2} \int_{1/2+u}^{\sigma_1}\frac{\sigma-1/2} {(\sigma-1/2)^2+(\gamma +v-\gamma ')^2}\mathop{d\sigma}. \end{equation} Since $\sigma_1-\tfrac12= \tfrac{4}{\log X},$ the integral here is \[ \frac12\log{\Big(1+\sdfrac{16-u^2\log^2{X}}{u^2\log^2{X}+(\gamma +v-\gamma ')^2\log^2{X}}\Big)}. \] For $x>0,$ set $\log^+ x =\max\{\log x, 0\}.$ It is easy to check that $\log(1+x)\leq 1+\log^+x,$ and that $\log^+(x/y)\leq \log^+x+\log^+(1/y).$ Using these inequalities, we see that the above expression is \[ \leq 1+ \log^+(16-u^2\log^2{X})+\log^+\Big(\sdfrac{1}{u^2\log^2{X}+(\gamma +v-\gamma ')^2\log^2{X}}\Big). \] The first two terms are $O(1)$ since $\displaystyle 0<u\leq\tfrac{1}{\log X}.$ To estimate the third, observe that $\log^+\bigs(\tfrac{1}{x+y}\bigs)\leq \log^+\bigs(\tfrac{1}{y}\bigs)$ for $0< x\leq 1$ and $y>0.$ Then by the definition of $\eta_{\gamma +v}$ in \eqref{eta}, the third term is \[ \ll \log^+\Big(\sdfrac{1}{(\eta_{\gamma +v}\log X)^2}\Big) \ll \log^+\Big(\sdfrac{1}{\eta_{\gamma +v}\log X}\Big).
\] Combining this with \eqref{first sum lemma}, we obtain from \eqref{ReJ3} and \eqref{second sum J3} that \[ \begin{split} \big|\Re \,J_3 &-M_X(\r, z)\big| \\ &\ll \bigg(1+\log^{+}\Big(\frac{1}{\eta_{\gamma +v}\log X}\Big)\bigg) \sum_{\gamma ' \neq \gamma +v}\frac{(\sigma_1-1/2)^2}{(\sigma_1-1/2)^2 +(\gamma +v-\gamma ')^2 } +O\bigg( \frac{\log{\gamma }}{\log X}\bigg) . \end{split} \] Now, by \eqref{Sum<E} \[ \sum_{\gamma ' \neq \gamma +v} \frac{(\sigma_1-1/2)^2 }{(\sigma_1-1/2)^2 +(\gamma +v-\gamma ')^2 } \ll \Big(\sigma_1-\sdfrac12\Big) E(X, \gamma +v) \ll \frac{E(X, \gamma +v)}{\log X}, \] so, since $\log \gamma \ll E(X, \gamma +v),$ \[ \Re \,J_3 =M_X(\r, z) +O\bigg(\Big(1+\log^{+}\Big(\frac{1}{\eta_{\gamma +v}\log X}\Big)\Big)\frac{E(X, \gamma +v)}{\log X}\bigg). \] Going back to \eqref{Re part}, we have shown that \begin{equation} \label{Re part 2} \begin{split} \log&|\zeta(\r+z)| \\ &=M_X(\r, z) +\Re\sum_{n\leq X^2}\sdfrac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)} \log n} +O\bigg(\Big(1+\log^{+}\Big(\sdfrac{1}{\eta_{\gamma +v}\log X}\Big)\Big)\sdfrac{E(X, \gamma +v)}{\log X}\bigg). \end{split} \end{equation} Next, following \cite[p. 35]{S1946Archiv} (or \cite[p. 35]{L}) we write the sum \[ \Re{\sum_{n\leq X^2}\sdfrac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)}\log n}} \] in a more convenient form. By the notation in \eqref{Lambda_X}, we have $\Lambda_X(p^\ell)=w_X(p^\ell)\log p.$ Thus, the above is \[ \Re{\sum_{p\leq X^2} \frac{1}{p^{\sigma_1+i(\gamma +v)}}} +\Re {\sum_{p\leq X^2}\sdfrac{w_X(p)-1}{p^{\sigma_1+i(\gamma +v)}}} +\Re {\sum_{p^2\leq X^2}\sdfrac{w_X(p^2)}{2p^{2(\sigma_1+i(\gamma +v))}}} +\Re {\sum_{\substack{p^\ell \leq X^2,\\ \ell>2}}\sdfrac{w_X(p^\ell)}{\ell p^{\ell(\sigma_1+i(\gamma +v))}}}. \] This in turn equals \begin{align*} &\Re {\CMcal{P}_X(\gamma +v)} +O\bigg(\Big| \sum_{p\leq X^2}\sdfrac{1-w_X(p)}{p^{1/2+i(\gamma +v)}}\Big|\bigg) +O\bigg(\Big| \Re \sum_{p\leq X^2}\Big(\sdfrac{w_X(p)}{p^{\sigma_1+i(\gamma +v)}}-\sdfrac{w_X(p)}{p^{1/2+i(\gamma +v)}}\Big)\Big|\bigg) \\ &+O\bigg( \Big| \sum_{p\leq X}\sdfrac{w_X(p^2)}{p^{1+2i(\gamma +v)}}\Big|\bigg) +O\bigg(\Big| \sum_{p\leq X}\Big(\sdfrac{w_X(p^2)}{p^{2\sigma_1+2i(\gamma +v)}}-\sdfrac{w_X(p^2)}{p^{1+2i(\gamma +v)}}\Big)\Big|\bigg) +O\Big(\sum_{\substack{p^\ell\leq X^2, \\ \ell>2}}\sdfrac{1}{\ell p^{\ell/2}}\Big). \end{align*} We leave the first and the third error terms intact. In the second error term we apply the mean value theorem for integrals. This ensures that there exists a number $\sigma_{\ast}$ between $\frac 1 2$ and $\sigma_1$ such that this error term is \[ \int_{1/2}^{\sigma_1}\bigg( \Re \sum_{p\leq X^2}\sdfrac{w_X(p)\log{p}}{p^{\sigma+i(\gamma +v)}}\bigg)\mathop{d\sigma} =\Big(\sigma_1-\sdfrac12\Big)\, \Re\sum_{p\leq X^2}\sdfrac{\Lambda_X(p)} {p^{\sigma_{\ast}+i(\gamma +v)}}. \] Using the integral $\displaystyle \int_{\sigma_{\ast}}^\infty \sdfrac{\mathop{d\sigma}}{(Xp)^\sigma}= \sdfrac{1}{(Xp)^{\sigma_\ast} \log(Xp)},$ we rewrite this, and then estimate \[ \begin{split} \Big(\sigma_1-\sdfrac12\Big)X^{\sigma_{\ast}-\frac 1 2} \int_{\sigma_{\ast}}^\infty X^{\frac 1 2-\sigma}&\bigg(\Re \sum_{p\leq X^2}\sdfrac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\bigg)\mathop{d\sigma} \\ &\ll \frac{1}{\log X} \int_{1/2}^\infty X^{\frac 1 2-\sigma}\bigg|\sum_{p\leq X^2}\sdfrac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\bigg|\mathop{d\sigma}. \end{split} \] Next, since $|w_X(p^2) /p^{2i(\gamma +v)}| \leq 1,$ the fourth error term is \[ \bigg|\sum_{p\leq X}\sdfrac{w_X(p^2)}{p^{2i(\gamma +v)}}\Big(\sdfrac{1}{p}-\sdfrac{1}{p^{2\sigma_1}}\Big)\bigg| \ll \sum_{p\leq X}\Big(\sdfrac{1}{p}-\sdfrac{1}{p^{2\sigma_1}} \Big) \ll \Big(\sigma_1-\sdfrac12\Big)\sum_{p\leq X}\sdfrac{\log p}{p} \ll 1, \] where we used Mertens' theorem $\sum_{p\leq X}\frac{\log p}{p}=\log X+O(1).$ Clearly the fifth error term is just $O(1)$ as well. We combine our estimates to find that \begin{equation}\label{eq:prime sum for Re} \begin{split} \qquad \Re \sum_{n\leq X^2}\sdfrac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)}\log n} =\Re {\CMcal{P}_X(\gamma +v)} &+O\bigg(\Big| \sum_{p\leq X^2}\sdfrac{1-w_X(p)}{p^{1/2+i(\gamma +v)}}\Big|\bigg) +O\bigg( \Big| \sum_{p\leq X}\sdfrac{w_X(p^2)}{p^{1+2i(\gamma +v)}}\Big|\bigg)\\ \qquad &+O\bigg(\frac{1}{\log X} \int_{1/2}^\infty X^{\tfrac12-\sigma}\Big|\sum_{p\leq X^2}\sdfrac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|\mathop{d\sigma}\bigg). \end{split} \end{equation} By \eqref{Re part 2} and the above result, the proof of the lemma is complete. \end{proof} As we mentioned earlier, the proof of Lemma \ref{arg zeta} has some similarities to the proof of Lemma \ref{Re log zeta}. We will indicate those as needed. \begin{proof}[Proof of Lemma \ref{arg zeta}] By \eqref{J_1} and \eqref{J_2}, \eqref{logzeta rho+z} gives \begin{equation}\label{Im part} \arg\zeta(\r+z) = \Im\, J_3 + \Im\, \sum_{n\leq X^2} \frac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)} \log n} +O\bigg(\frac{ E(X, \gamma +v )}{\log X} \bigg), \end{equation} where \[ \Im {J_3} =\Im \int_{1/2+u}^{\sigma_1} \Big(\frac{\zeta^{'}}{\zeta}(\sigma_1+i(\gamma +v)) -\frac{\zeta^{'}}{\zeta}(\sigma+i(\gamma +v))\Big)\mathop{d\sigma} . \] An argument similar to the one in~\cite[pp. 8--9]{Selberg1944} can be applied to $\Im{J_3}.$ By \eqref{Z'/Z 2}, we have for $\frac12\leq \sigma \leq \sigma_1$ and $s\neq \r$ \begin{align*} \Im\bigg(\frac{\zeta^{'}}{\zeta}(\sigma_1&+ i(\gamma +v)) -\frac{\zeta^{'}}{\zeta}(\sigma+i(\gamma +v)) \bigg) \\ &=\sum_{\gamma ^\prime} \bigg( \frac{\gamma +v-\gamma ^\prime}{ (\sigma_1-1/2)^2+ (\gamma +v-\gamma ^\prime)^2} -\frac{\gamma +v-\gamma ^\prime}{(\sigma-1/2)^2+ (\gamma +v-\gamma ^\prime)^2} \bigg) +O(\log \gamma )\\ &=\sum_{\gamma ^\prime} \bigg( \frac{(\gamma +v-\gamma ^\prime)\left((\sigma-1/2)^2-(\sigma_1-1/2)^2\right)} {\left((\sigma_1-1/2)^2+ (\gamma +v-\gamma ^\prime)^2\right)\left((\sigma-1/2)^2+ (\gamma +v-\gamma ^\prime)^2\right)}\bigg) +O(\log \gamma ). \end{align*} By integrating the first and the last terms over $[\frac12+u,\sigma_1],$ we obtain the estimate \[ \begin{split} | \Im{J_3}|\ &\leq \, \sum_{\gamma '}\frac{(\sigma_1-1/2)^2}{ (\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2 } \int_{1/2}^\infty \frac{|\gamma +v-\gamma '|}{(\sigma-1/2)^2+(\gamma +v-\gamma ')^2}\mathop{d\sigma} +O\bigg(\frac{\log{\gamma }}{\log X} \bigg) \\ &\ll \sum_{\gamma '}\frac{(\sigma_1-1/2)^2}{(\sigma_1-1/2)^2+(\gamma +v-\gamma ')^2} +O\bigg(\frac{\log{\gamma }}{\log X} \bigg). \end{split} \] The last step followed from the convergence of the integral (its value is $\tfrac\pi2$). Then from \eqref{Sum<E}, we conclude that $\displaystyle |\Im{J_3}| \, \ll \tfrac{E(X, \gamma +v)}{\log X}$ since $\log \gamma \ll E(X, \gamma +v).$ Hence by \eqref{Im part} \begin{equation}\label{Im part 2} \arg\zeta(\r+z) = \Im\, \sum_{n\leq X^2} \frac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)}\log n} +O\bigg(\sdfrac{E(X, \gamma +v)}{\log X} \bigg).
\end{equation} The argument we used to prove \eqref{eq:prime sum for Re} similarly shows that \[ \begin{split} \Im \sum_{n\leq X^2}\frac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)}\log n} =\Im{\CMcal{P}_X(\gamma +v)} &+O\bigg(\Big|\sum_{p\leq X^2}\sdfrac{1-w_X(p)}{p^{1/2+i(\gamma +v)}}\Big|\bigg) +O\bigg( \Big|\sum_{p\leq X}\sdfrac{w_X(p^2)}{p^{1+2i(\gamma +v)}}\Big|\bigg)\\ &+O\bigg( \frac{1}{\log X} \int_{1/2}^\infty X^{\tfrac12-\sigma}\Big|\sum_{p\leq X^2}\sdfrac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|\mathop{d\sigma}\bigg). \end{split} \] Substituting this into \eqref{Im part 2} completes the proof. \end{proof} \section{Some Preliminary Lemmas}\label{sec:lemmas} The fundamental result that we will use throughout this section is the Landau-Gonek formula. \begin{LGF} Assume RH and let $x >1, T\geq 2.$ Then \[ \sum_{0<\gamma \leq T}x^{i\gamma } = -\frac{T}{2\pi} \frac{\Lambda(x)}{\sqrt x} +\mathcal{E}(x, T), \] where \begin{equation}\label{LandauLemmaError} \begin{split} \mathcal{E}(x, T) \ll \sqrt{x} \log( xT) \log\log (3x) + \frac{ \log x}{\sqrt{x}} \min\bigg(T, \frac{x }{\langle x \rangle} \bigg) + \frac{\log T}{\sqrt{x} } \min \bigg(T, \frac{ 1}{\log x} \bigg), \end{split} \end{equation} and $\langle x \rangle$ denotes the distance from $x$ to the closest prime power other than $x$ itself. \end{LGF} \begin{proof} Landau~\cite{Landau} proved a weaker version of this in the sense that the error term was not uniform in $x.$ The above version was later proven by Gonek~\cite{GonekLandaulemma1,GonekLandaulemma2}. \end{proof} The rest of the results in this section are obtained from the Landau-Gonek formula. The following one will be essential in computing discrete moments of Dirichlet polynomials. \begin{lem}\label{sumanbnlemma} Assume RH. Let $(a_n)$ and $(b_n)$ be sequences of complex numbers and suppose that $ M, N\leq T.$ Then \begin{equation}\label{sumanbn} \begin{split} \sum_{0 < \gamma \leq T}&\bigg(\sum_{n\leq N} a_n n^{-i(\gamma +v)}\bigg)\bigg( \mkern4.5mu\overline{\mkern-4.5mu \sum_{m\leq M}b_mm^{-i(\gamma +v)}}\bigg) \\ = & \, N(T)\sum_{n\leq \min\{M, N\}}a_n\overline{b_n} -\sdfrac{T}{2\pi}\sum_{m\leq M, n\leq N}a_n\overline{b_m}\Big(\frac{m}{n}\Big)^{iv}\bigg\{\sdfrac{\Lambda(m/n)}{\sqrt{m/n}}+\sdfrac{\Lambda(n/m)}{\sqrt{n/m}}\bigg\} \\ &+O\bigg(\max\{M, N\}\log^2{T}\Big(\sum_{n\leq N}|a_n|^2+\sum_{m\leq M}|b_m|^2 \Big)\bigg) \\ +O\bigg(&\max\{M, N\} \log T\log\log T\Big(\sum_{m\leq M}\sdfrac{|b_m|}{\sqrt m}\sum_{m<n\leq N}|a_n|\sqrt n+\sum_{n\leq N}\sdfrac{|a_n|}{\sqrt n}\sum_{n<m\leq M}|b_m|\sqrt m\,\Big)\bigg). \end{split} \end{equation} In particular, \[ \begin{split} \sum_{0 < \gamma \leq T} \Big| \sum_{n\leq N} & a_n n^{-i(\gamma +v)}\Big|^2 \\ =&\, N(T) \sum_{n\leq N} |a_n|^2 -\frac{T}{\pi}\Re\sum_{m, n\leq N} a_n \overline{a_m}\Big( \frac{m}{n}\Big)^{iv}\frac{\Lambda(m/n) }{\sqrt{m/n}} \\ &+O\bigg(N\log T\log \log T\sum_{n\leq N}\sum_{n<m\leq N}|a_na_m| \sdsqrt{\sdfrac mn}\, \bigg) +O\bigg(N\log^2{T}\sum_{n\leq N}|a_n|^2\bigg). 
\end{split} \] \end{lem} \begin{proof} The left-hand side of \eqref{sumanbn} is equal to \begin{equation}\label{proof of sumanbn} \begin{split} \sum_{m\leq M} \sum_{n\leq N} a_n \overline{b_m} \sum_{0 < \gamma \leq T}\Big(\frac m n\Big)^{i(\gamma +v)} = &\, N(T) \sum_{n\leq \min\{M, N\}} a_n\overline{b_n} \\ &+\sum_{\substack{n<m\\ m\leq M, n\leq N}} a_n \overline{b_m} \Big(\frac{m}{n} \Big)^{iv} \bigg\{-\sdfrac{T}{2\pi}\sdfrac{\Lambda(m/n)}{\sqrt{m/n}}+\mathcal{E}\Big(\sdfrac{m}{n}, T\Big)\bigg\} \\ &+\sum_{\substack{m<n\\m\leq M, n\leq N }} a_n \overline{b_m} \Big(\frac{m}{n}\Big)^{iv} \bigg\{-\sdfrac{T}{2\pi}\sdfrac{\Lambda(n/m)}{\sqrt{n/m}}+\mathcal{E}\Big(\sdfrac{n}{m}, T\Big)\bigg\}, \end{split} \end{equation} where we applied the Landau-Gonek formula. Then by \eqref{LandauLemmaError}, \begin{equation}\label{error from epsilon} \begin{split} \sum_{n\leq N}\sum_{n<m\leq M} a_n \overline{b_m} \Big(\frac{m}{n} \Big)^{iv}\mathcal{E}\Big(\sdfrac{m}{n}, T\Big) \ll &\sum_{n\leq N}\sum_{n<m\leq M} |a_n b_m| \bigg\{\sdsqrt{\frac{m}{n}} \log\Big(\sdfrac{mT}{n}\Big) \log\log\Big(\sdfrac{3m}{n}\Big)\bigg\} \\ &+ \sum_{n\leq N}\sum_{n<m\leq M} |a_n b_m| \frac{ \log (m/n)}{\sqrt{m/n}} \min\bigg(T, \frac{(m/n) }{\langle m/n \rangle}\bigg) \\ &+ \sum_{n\leq N}\sum_{n<m\leq M} |a_n b_m| \frac{\log T}{\sqrt{m/n}} \min \bigg(T, \frac{1}{\log (m/n)} \bigg) . \end{split} \end{equation} Since $M, N\leq T,$ the first term on the right-hand side is clearly \begin{equation}\label{eq:first term} \ll \log T \log \log T\sum_{n\leq N}\frac{|a_n|}{\sqrt{n}} \sum_{n<m \leq M}\sqrt m\,|b_m|. \end{equation} For the second term, note that since $\langle m/n \rangle\geq \frac 1n$ \[ \begin{split} \frac{ \log (m/n)}{\sqrt{m/n}} \min\bigg(T, \sdfrac{(m/n) }{\langle m/n \rangle} \bigg) \ll \sqrt{mn}\log M \ll N\sqrt{m/n}\log T. \end{split} \] Thus, the second error term is bounded by $N$ times our bound for the first. For the third term on the right-hand side of \eqref{error from epsilon}, we note that \[ \frac{\log T}{\sqrt{m/n} } \min \bigg(T, \frac{1}{\log (m/n)}\bigg) \ll \frac{\log T}{\sqrt{m/n} \log (m/n)} \] since $\log(m/n)\gg \frac{1}{T}$ for $M, N\leq T.$ Then the third term is \[ \begin{split} &\ll \log T \sum_{n\leq N} \; \sum_{n<m\leq M}\sqrt{\frac{n}{m}}\, \frac{|a_n|^2+ |b_m|^2}{ \log (m/n)} \\ &=\log T \sum_{n\leq N} \; \sum_{n<m\leq M}\sqrt{\frac{n}{m}}\frac{|a_n|^2}{ \log (m/n)} +\log T \sum_{n\leq N} \; \sum_{n<m\leq M}\sqrt{\frac{n}{m}}\frac{|b_m|^2}{\log (m/n)} \\[1ex] &:=S_1+S_2\, . \end{split} \] For $S_1,$ we separate the sum over $m$ as follows. \[ \bigg( \, \sum_{n<m \leq \min\{2n, M\}}+ \sum_{2n< m\leq M}\, \bigg)\frac{1}{\sqrt m \log (m/n)} . \] If $n<m \leq \min\{2n, M\},$ then $\log(m/n) \gg (m-n)/n.$ For the remaining $m$ which satisfy $2n< m \leq M,$ we simply have $\log(m/n) \gg 1 .$ Using these bounds, we see that \begin{align*} S_1 &\ll \log T \sum_{n\leq N} \sqrt n\, |a_n|^2 \,\bigg(\sum_{n< m \leq \min\{2n, M\}}\frac{n}{\sqrt{m}(m-n)}+ \sum_{2n< m \leq M}\frac{1}{\sqrt m}\bigg) \\ &\ll \log T \sum_{n\leq N}n\, |a_n|^2 \sum_{n< m \leq \min\{2n, M\}}\frac{1}{m-n} + \sqrt M\log T\sum_{n\leq N}\sqrt n\, |a_n|^2 \\[1.6ex] &\ll N \log N \log T \sum_{n\leq N} |a_n|^2 +\sqrt{NM}\log T\sum_{n\leq N} |a_n|^2 \ll N\log^2{T} \sum_{n\leq N} |a_n|^2 . \end{align*} The other term we need to consider is \[ S_2= \log T\sum_{n\leq N} \; \sum_{n<m\leq M}\sqrt{\frac{n}{m}}\frac{|b_m|^2}{\log(m/n)}.
\] If we change the order of summation and use $\sqrt n\leq \sqrt m,$ then this is at most \begin{equation*} \log T\sum_{m\leq M} |b_m| ^2 \sum_{\substack{n<m, \\n\leq N}}\sdfrac{1}{\log (m/n)}. \end{equation*} Further, since $\log(m/n) \geq (m-n)/m$ for $n<m,$ we have \begin{equation*} S_2 \ll \log T\sum_{m\leq M} m|b_m| ^2 \, \sum_{\substack{n<m,\\ n\leq N}} \frac{1}{m-n} \ll M\log M\log T\sum_{m\leq M} |b_m| ^2 \ll M\log^2{T} \sum_{m\leq M} |b_m| ^2. \end{equation*} When we combine our estimates for $S_1$ and $S_2$ with \eqref{eq:first term} in \eqref{error from epsilon}, we find \begin{align*} \sum_{n\leq N} & \sum_{n< m\leq M}a_n \overline{b_m} \Big(\frac{m}{n}\Big)^{iv} \mathcal{E}\Big(\sdfrac{m}{n}, T\Big) \\ \ll \, N\log T \log \log T \sum_{n \leq N}\frac{|a_n|}{\sqrt n} & \sum_{n <m\leq M }\sqrt m\, |b_m| \,+\, N\log^2{T}\sum_{n\leq N} |a_n| ^2 +M\log^2{T}\sum_{m\leq M} |b_m| ^2 . \end{align*} In a similar way, one shows that \begin{align*} \sum_{m\leq M}&\sum_{m<n\leq N} a_n \overline{b_m} \Big(\frac{n}{m}\Big)^{iv}\mathcal{E}\Big(\sdfrac{n}{m}, T\Big) \\ \ll M \log T \log \log T \sum_{m \leq M}\frac{|b_m|}{\sqrt m} &\sum_{m <n \leq N}\sqrt{n}\, |a_n| \,+\,\max\{M, N\}\log^2{T}\bigg(\sum_{n\leq N} |a_n|^2+\sum_{m \leq M} |b_m|^2 \bigg). \end{align*} The lemma now follows from \eqref{proof of sumanbn}. \end{proof} The following is an easy consequence of the previous lemma. We state it separately because the following form will be more convenient for our purposes. \begin{cor} \label{cor:moments} Assume RH. Let $(a_n)$ and $(b_n)$ be two sequences of complex numbers. Suppose that $ N^j, N^{k-j}\leq T,$ where $j$ and $k$ are nonnegative integers. Then \begin{equation} \notag \begin{split} &\sum_{0 < \gamma \leq T}\bigg(\, \sum_{n\leq N}a_nn^{-i(\gamma +v)}\bigg)^j \bigg(\, \mkern4.5mu\overline{\mkern-4.5mu \sum_{m\leq N}b_mm^{-i(\gamma +v)}}\,\bigg)^{k-j} \\ =&\, N(T)\sum_{n\leq \min\{N^j, N^{k-j}\}}A_n\overline{B_n} -\frac{T}{2\pi}\sum_{\substack{n\leq N^j, \\ m\leq N^{k-j}}}A_n\overline{B_m}\Big(\frac mn\Big)^{iv} \bigg\{\frac{\Lambda(m/n)}{\sqrt{m/n}}+\frac{\Lambda(n/m)}{\sqrt{n/m}}\bigg\} \\ &+O\bigg(N^k\log T \log\log T \Big(\sum_{n\leq N^j}\sdfrac{|A_n|}{\sqrt n}\sum_{n<m\leq N^{k-j}}\sqrt m\,|B_m| +\sum_{m\leq N^{k-j}}\sdfrac{|B_m|}{\sqrt m}\sum_{m<n\leq N^j}\sqrt n\, |A_n|\Big)\bigg)\\ &+O\bigg(N^k \log^2{T}\Big(\sum_{n\leq N^j}|A_n|^2\,+\sum_{m\leq N^{k-j}}|B_m|^2 \Big)\bigg), \end{split} \end{equation} where \[ A_n=\sum_{n=n_1\dots n_j} a_{n_1}\dots a_{n_j} \quad \text{and} \quad B_m=\sum_{m=m_1\dots m_{k-j}}b_{m_1}\dots b_{m_{k-j}}. \] \end{cor} \begin{proof} This is the result of Lemma \ref{sumanbnlemma} applied to the sequences $(A_n)$ and $(B_n).$ \end{proof} The following lemma can be viewed as the discrete version of Lemma 3 in \cite{Soundararajan09}. \begin{lem}\label{Soundmomentlemma3} Assume RH. Let $k$ be a positive integer and suppose $\displaystyle 1<Y\leq (T/\log{T})^{\frac{1}{3k}}.$ For any complex-valued sequence $(a_p)_p$ indexed by the primes, we have \[ \sum_{0 < \gamma \leq T}\bigg| \sum_{p\leq Y}\frac{a_p}{p^{1/2+i\gamma }}\bigg|^{2k} \ll k! N(T)\Big(\sum_{p\leq Y}\sdfrac{|a_p|^2}{p}\Big)^{k}. \] \end{lem} \begin{proof} We begin by using the multinomial theorem to write \[ \Big(\sum_{p\leq Y}\sdfrac{a_p}{p^{1/2+i\gamma }}\Big)^{k}= \sum_{n\leq Y^k}\sdfrac{A_n}{n^{1/2+i\gamma }}, \] where \[ A_n=\frac{k!}{{\a_1}!\dots {\a_r}!}a_{p_1}^{\a_1}\dots a_{p_r}^{\a_r} \qquad \text{for} \quad n=p_1^{\a_1}\dots p_r^{\a_r}. 
\] Here the $p_i$ are distinct primes, each of which is at most $Y,$ and the powers $\a_i\geq 1$ satisfy the condition $\a_1+\dots +\a_r=k.$ For $n$ that cannot be written as $p_1^{\a_1}\dots p_r^{\a_r}$ for such $p_i$'s and $\a_i$'s, we set $A_n=0.$ Now, by the second assertion of Lemma \ref{sumanbnlemma}, \begin{equation}\label{eq:proof of Soundmomentlemma3} \begin{split} \sum_{0 < \gamma \leq T}\bigg| \sum_{p\leq Y}\frac{a_p}{p^{1/2+i\gamma }}\bigg|^{2k} =&\, \sum_{0 < \gamma \leq T}\bigg(\sum_{n\leq Y^k}A_n n^{-1/2-i\gamma }\bigg) \bigg(\mkern4.5mu\overline{\mkern-4.5mu\, \sum_{m\leq Y^k} A_mm^{-1/2-i\gamma }}\bigg) \\ =&\, N(T)\sum_{n\leq Y^k}\sdfrac{|A_n|^2}{n} -\frac{T}{\pi}\Re \sum_{m\leq Y^k}\sum_{n\leq Y^k}\frac{A_n\overline{A_m}}{\sqrt{nm}}\frac{\Lambda(m/n)}{\sqrt{m/n}}\\ &+O\bigg(Y^k\log T\log\log T \sum_{n\leq Y^k}\sum_{n<m\leq Y^k}\sdfrac{|A_n A_m|}{n}\,\bigg) \\ &+O\bigg( Y^k\log^2{T}\sum_{n\leq Y^k} \sdfrac{|A_n|^2}{n}\bigg). \end{split} \end{equation} Note that by the definition of $A_n,$ \begin{equation}\label{proof of Sound lemma} \begin{split} \sum_{n\leq Y^k}\frac{|A_n|^2}{n} &=\sum_{\substack{\a_i\geq 1, \sum \a_i =k \\ p_i\leq Y}}\bigg(\frac{k!}{{\a_1}!\dots {\a_r}!}\bigg)^2 \frac{|a_{p_1}|^{2\a_1}\dots|a_{p_r}|^{2\a_r}}{p_1^{\a_1}\dots p_r^{\a_r}} \\ &\leq k!\sum_{\substack{\a_i\geq 1, \sum \a_i =k, \\ p_i\leq Y}}\frac{k!}{{\a_1}!\dots {\a_r}!} \frac{|a_{p_1}|^{2\a_1}\dots|a_{p_r}|^{2\a_r}}{p_1^{\a_1}\dots p_r^{\a_r}} =k!\Big(\sum_{p\leq Y}\frac{|a_p|^2}{p}\,\Big)^k. \end{split} \end{equation} Thus the first main term on the right-hand side of \eqref{eq:proof of Soundmomentlemma3} is \[ \leq k!N(T) \Big(\sum_{p\leq Y}\sdfrac{|a_p|^2}{p}\Big)^{k}. \] The second main term on the right-hand side of \eqref{eq:proof of Soundmomentlemma3} vanishes because the number of prime divisors (counted with multiplicity) of both $m$ and $n$ is $k,$ and so if $m\neq n,$ then their ratio cannot be a nontrivial prime power. Next, note that $Y^k\log^2{T} \ll N(T)$ by the choice of $Y.$ Then by \eqref{proof of Sound lemma}, the second error term is smaller than $ k! N(T) \Big(\sum_{p\leq Y}|a_p|^2 /p\Big)^{k}.$ Finally, we estimate the first error term on the right-hand side of \eqref{eq:proof of Soundmomentlemma3} using the arithmetic mean-geometric mean inequality. \begin{align*} Y^k\log T\log\log T\sum_{n\leq Y^k}\sum_{n<m\leq Y^k}\sdfrac{|A_n A_m|}{n} &\leq Y^k \log T\log\log T\sum_{m\leq Y^k}m \sum_{n\leq Y^k}\sdfrac{|A_n A_m|}{nm} \\ &\leq Y^{2k}\log T\log\log T\sum_{m\leq Y^k}\sum_{n\leq Y^k}\bigg(\sdfrac{|A_n|^2}{2n^2}+\sdfrac{|A_m|^2}{2m^2}\bigg)\\ &\leq Y^{3k}\log T\log\log T\sum_{n\leq Y^k}\sdfrac{|A_n|^2}{n^2} \\ &\leq k!Y^{3k}\log^2 T\Big(\sum_{p\leq Y}\sdfrac{|a_p|^2}{p^2}\Big)^k. \end{align*} Note that the last step follows by an argument similar to the one we used in \eqref{proof of Sound lemma}. The bound we obtained here is even smaller than our bound for the second error term. Hence, all four terms in \eqref{eq:proof of Soundmomentlemma3} are $\ll k! N(T)\Big(\sum_{p\leq Y}\sdfrac{|a_p|^2}{p}\Big)^{k},$ which proves the lemma. \end{proof} \section{Moment Calculations} \label{moments} In Lemma \ref{Re log zeta}, we saw that the sum \[ \Re\CMcal{P}_X(\gamma +v)=\Re \sum_{p\leq X^2} \frac{1}{p^{1/2+i(\gamma +v)}} \] can be used to approximate the function $\log{|\zeta(\r+z)|}-M_X(\r, z)$, where \[ M_X(\r, z)= m(\r+iv)\Big(\log\Big(\sdfrac{eu\log X}{4}\Big)-\sdfrac{u\log X}{4}\Big).
\] We will now calculate discrete moments of $ \Re\CMcal{P}_X(\gamma +v)$, which will then allow us to compute such moments of the function $\log{|\zeta(\r+z)|}$ $-M_X(\r, z)$ under the assumption of RH and Montgomery's Pair Correlation Conjecture. We remind the reader that $z=u+iv$ denotes a complex number with \[ 0 < u\leq \frac{1}{\log X} \quad \text{and} \quad v=O\Big(\frac{1}{\log X}\Big). \] We also take $\displaystyle X\leq T^{\tfrac{1}{8k}}$. It will be useful in this section to use the notation \begin{equation}\label{Psi} \Psi=\sum_{p\leq X^2} \frac 1p, \end{equation} and to express some of our terms in terms of $\Psi$. Note that by Mertens' theorem, we have \[ \Psi =\log\log X+O(1). \] Our main theorem is the following. \begin{thm}\label{moments of Re log zeta} Assume RH and Montgomery's Pair Correlation Conjecture. Suppose that $k$ is a positive integer with $k \ll \log\log\log T$, and let $T^{\tfrac{\d}{8k}} \leq X\leq T^{\tfrac{1}{8k}}$ for $0<\d \leq 1$ fixed. If $k$ is even, then \begin{align*} \sum_{0 < \gamma \leq T} \bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)\bigs)^k = \beta_kN(T)\Psi^{\tfrac k2} +O\Big(D^kk^{\tfrac{3k+2}{2}}\beta_k N(T)\Psi^{\tfrac{k-1}{2}}\Big). \end{align*} If $k$ is odd, then we have \begin{align*} \sum_{0 < \gamma \leq T} \bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)\bigs)^k =O\Big(D^k k^{\tfrac{3k+1}{2}}\beta_{k+1} N(T)\Psi^{\tfrac{k-1}{2}}\Big). \end{align*} Here, $D$ is a constant depending on $\d,$ and $\Psi$ is as defined in \eqref{Psi}. We also have \begin{equation}\label{beta} \beta_r=\frac{r!}{2^r (r/2)!} \quad \text{for an even positive integer } r. \end{equation} \end{thm} The coefficients $\beta_r$ are closely related to the Gaussian distribution. The moments of a random variable $Z$ that has a Gaussian distribution with mean $0$ and variance $V$ are given by \[ \begin{split} \mathbb{E}[Z^r]= \begin{cases} \beta_r (2V)^{r/2} \quad &\text{if } r \text{ is even,} \\ 0 \quad &\text{if } r \text{ is odd.} \end{cases} \end{split} \] It is well known that if a random variable $Z'$ has the same moments, then $Z'$ has the same distribution as $Z$ (see \cite[p. 413]{Billingsley}). Hence the above theorem says that for $z$ chosen as above, the values $\log{|\zeta(\r+z)|}-M_X(\r, z)$, as $\gamma $ ranges over $(0, T],$ have an approximately Gaussian distribution with mean $0$ and variance $\tfrac12\Psi.$ \subsection{Moments of $\Re{\CMcal{P}_X(\gamma +v)}$} We prove the following result for the moments of \[ \Re\CMcal{P}_X(\gamma +v)=\Re \sum_{p\leq X^2} \frac{1}{p^{1/2+i(\gamma +v)}}. \] This proposition is of independent interest because it gives an explicit main term for the odd moments of the real part of the polynomial; note that Theorem \ref{moments of Re log zeta} does not provide an explicit main term for odd $k$. \begin{prop}\label{moments of Re Dirichlet polyl v} Assume RH. Let $\CMcal{P}_X(\gamma +v)=\sum_{p\leq X^2} p^{-1/2-i(\gamma +v)}$ where $X\leq T^{\frac{1}{8k}}$ and $k \ll \sqrt[6]{\log\log T}.$ Then for even $k,$ \[ \sum_{0 < \gamma \leq T}\bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k = \beta_kN(T)\Psi^{\frac k2} +O\Big(k^2\beta_kN(T)\Psi^{\frac{k-4}{2}}\Big). \] If $k$ is odd, then \[ \begin{split} \sum_{0 < \gamma \leq T}\bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k =-\frac{\beta_{k+1}}{\pi}\frac{\sin(2v\log X)-\sin(v\log 2)}{v}T &\Psi^{\frac{k-1}{2}} \\ &+O\Big(k^2\beta_{k+1}T\log X\Psi^{\frac{k-3}{2}}\Big).
\end{split} \] \end{prop} \begin{proof} Expanding the $k$th moment of $\Re\CMcal{P}_X(\gamma +v)$ by means of the identity $\displaystyle \Re{z}=\frac{z+\overline{z}}{2}$ and the binomial theorem, we see that \[ \begin{split} \sum_{0 < \gamma \leq T}\bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^{k} &=\frac{1}{2^k}\sum_{j=0}^k\binom{k}{j}\sum_{0 < \gamma \leq T}\CMcal{P}_X(\gamma +v)^j \overline{\CMcal{P}_X(\gamma +v)}^{k-j}\\ &=\frac{1}{2^k}\sum_{j=0}^k\binom{k}{j} S_j(v). \end{split} \] Thus, it suffices to estimate the sums \[ S_j(v):= \sum_{0 < \gamma \leq T}\CMcal{P}_X(\gamma +v)^j \overline{\CMcal{P}_X(\gamma +v)}^{k-j} \] for $j=0,1,\dots, k$. We write \[ \CMcal{P}_X(\gamma +v)^j=\sum_{\substack{n=p_1\dots p_j,\\ p_i\leq X^2}}\frac{a_j(n)}{n^{1/2+i(\gamma +v)}} \quad \text{and} \quad \CMcal{P}_X(\gamma +v)^{k-j}=\sum_{\substack{m=q_1\dots q_{k-j},\\ q_i\leq X^2}}\frac{a_{k-j}(m)}{m^{1/2+i(\gamma +v)}}, \] where $a_{r}(p_1\dots p_{r})$ denotes the number of permutations of the primes $p_1,\dots, p_{r}$. It is clear that $a_r(p_1\dots p_{r}) \leq r!$, with equality if and only if the primes $p_1,\dots ,p_{r}$ are all distinct, in other words, if and only if the product $p_1\dots p_{r}$ is square-free. Further note that $a_0(n)$ equals $1$ if $n=1$, and equals $0$ otherwise. To avoid cumbersome notation, throughout the proof of this proposition we suppose that the number $n$ is a product of $j$ primes, each of which is at most $X^2$, and that $m$ is a product of $k-j$ primes, each of which is at most $X^2$. By Corollary \ref{cor:moments}, we have the following expression for $S_j(v)$: \begin{equation}\label{eq:S_j(v)} \begin{split} S_j(v) =N(T)&\sum_{n}\frac{a_{j}(n)a_{k-j}(n)}{n} -\frac{T}{2\pi}\sum_{m, n}\frac{a_j(n)a_{k-j}(m)}{\sqrt{mn}}\Big(\frac{m}{n}\Big)^{iv}\bigg\{\frac{\Lambda(m/n)}{\sqrt{m/n}}+\frac{\Lambda(n/m)}{\sqrt{n/m}}\bigg\} \\ &+O\bigg( X^{2k}\log T\log\log T\Big(\sum_n\frac{a_j(n)}{n}\sum_{m>n}a_{k-j}(m)+\sum_m\frac{a_{k-j}(m)}{m}\sum_{n>m}a_j(n)\Big)\bigg)\\ &+O\bigg(X^{2k}\log^2{T}\Big(\sum_m\frac{a_{k-j}(m)^2}{m}+\sum_n\frac{a_j(n)^2}{n}\Big)\bigg); \end{split} \end{equation} here we have suppressed the conditions of summation. It will be useful later on to note that \begin{equation}\label{a_j sum} \Psi^j =\sum_n \frac{a_j(n)}{n} =\sum_{n \text{ sq-free}} \frac{a_j(n)}{n} +\sum_{n \text{ not sq-free}} \frac{a_j(n)}{n}. \end{equation} The second sum on the right is zero if $j=0$ or $1$. If $j \geq 2$, then \[ \sum_{n \text{ not sq-free}} \frac{a_j(n)}{n} =\sum_{q\leq X^2} \frac{1}{q^2} \sum_{n_1=\frac{n}{q^2}} \frac{a_j(q^2 n_1)}{n_1} \leq \binom{j}{2} \sum_{n_1} \frac{a_{j-2}(n_1)}{n_1} \ll j^2 \Psi^{j-2}. \] Note the last estimate still holds when $j=0$ or $1$. Hence, \begin{equation}\label{not sq-free} \sum_{n \text{ not sq-free}} \frac{a_j(n)}{n} \ll j^2 \Psi^{j-2}. \end{equation} Combining this with \eqref{a_j sum}, we then see that \begin{equation}\label{sq-free} \sum_{n \text{ sq-free}} \frac{a_j(n)}{n} = \Psi^j+O(j^2 \Psi^{j-2}). \end{equation} Returning to \eqref{eq:S_j(v)}, we see that \begin{equation} \label{estimate1} \sum_n\frac{a_j(n)}{n}\sum_{m>n}a_{k-j}(m) \ll k! X^{2(k-j)}\sum_n\frac{a_j(n)}{n} \leq k!X^{2k}\Psi^k, \end{equation} and \begin{equation} \label{estimate2} \sum_n\frac{a_j(n)^2}{n} \ll k!\sum_n\frac{a_j(n)}{n} \leq k!\Psi^k.
\end{equation} Hence, both of the error terms in \eqref{eq:S_j(v)} are $\ll k!\Psi^k\sqrt T\log^2{T}$, and we can write \begin{equation}\label{eq:S_j(v) rewritten} \begin{split} S_j(v) =&\, N(T)\sum_{n}\frac{a_j(n)a_{k-j}(n)}{n} -\frac{T}{2\pi}\sum_{m, n}\frac{a_j(n)a_{k-j}(m)}{\sqrt{mn}}\Big(\frac{m}{n}\Big)^{iv} \bigg\{\frac{\Lambda(m/n)}{\sqrt{m/n}}+\frac{\Lambda(n/m)}{\sqrt{n/m}}\bigg\}\\ &+O\bigs(k!\Psi^k \sqrt T\log^2{T}\bigs) \\ =&\, S_{j,1}(v)+S_{j,2}(v)+O\bigs(k!\Psi^k \sqrt T\log^2{T}\bigs). \end{split} \end{equation} Next, we estimate the terms $S_{j,1}(v)$ and $S_{j,2}(v)$. First observe that $S_{j,1}(v)$ vanishes unless $k$ is even and $j=\frac{k}{2}$. In that case, using \eqref{not sq-free} and \eqref{sq-free}, we obtain \begin{equation}\label{eq:Skby2(v)} \begin{split} S_{\frac k2,1}(v) =& \, N(T)\sum_{n}\frac{a_{k/2}^2(n)}{n} \\ =& \, (k/2)!N(T)\sum_{n\text{ sq-free}}\frac{a_{k/2}(n)}{n} +O\bigg((k/2)!N(T)\sum_{n\text{ not sq-free}}\frac{a_{k/2}(n)}{n}\bigg) \\ =&\, (k/2)!N(T)\Psi^{k/2}+O\bigs(k^2(k/2)!N(T)\Psi^{k/2-2}\bigs). \end{split} \end{equation} Now consider the term $S_{j,2}(v)$ in \eqref{eq:S_j(v) rewritten}. In order for this term not to vanish, one of the ratios $m/n$ and $n/m$ must be a prime power. This condition puts a restriction on $j$: if $m/n=q^{\ell}$, then $j=\frac{k-\ell}{2}$, and if $n/m=q^{\ell}$, then $j=\frac{k+\ell}{2}$. Furthermore, the terms with $\ell\geq 2$ in $S_{j,2}(v)$ contribute \[ \ll T\sum_{\substack{m=nq^2,\\q\leq X^2}}\frac{a_j(n)a_{k-j}(nq^2)}{n}\frac{\log q}{q^2} +T\sum_{\substack{n=mq^2,\\q\leq X^2}}\frac{a_j(mq^2)a_{k-j}(m)}{m}\frac{\log q}{q^2}. \] In the first term note that $\sum_{q\leq X^2} \tfrac{\log q}{q^2} \ll 1$, and $\displaystyle a_{k-j}(nq^2)\leq (k-j)!$. Hence, this term is \[ \ll (k-j)! T\sum_{n} \frac{a_j(n)}{n} \leq k! T\Psi^k. \] Similarly, the second error term is also $\ll k! T \Psi^k$. Therefore \begin{equation}\label{eq: Landau term of S_j(v)} \begin{split} S_{j,2}(v) =& \,-\frac{T}{2\pi}\sum_{q\leq X^2}\sum_{n=mq}\frac{a_j(n)a_{k-j}(m)}{\sqrt{mn}}\frac{\log q}{q^{1/2+iv}} -\frac{T}{2\pi}\sum_{q\leq X^2}\sum_{m=nq}\frac{a_j(n)a_{k-j}(m)}{\sqrt{mn}}\frac{\log q}{q^{1/2-iv}} \\ &+O\bigs(k! T\Psi^k\bigs). \end{split} \end{equation} Both of the main terms here correspond to the case $\ell=1$, so we must have $j=\frac{k-1}{2}$ or $j=\frac{k+1}{2}$. In particular, $k$ must be odd. Note that when $j=\frac{k-1}{2}$, the first sum vanishes, and when $j=\frac{k+1}{2}$, the second sum vanishes. Furthermore, \begin{equation}\label{conjugates} S_{\frac{k-1}{2},2}(v)= \overline{S_{\frac{k+1}{2},2}(v)}. \end{equation} Thus, it suffices to estimate $S_{\frac{k-1}{2},2}(v)$. We have \[ \begin{split} S_{\frac{k-1}{2},2}(v) = -\frac{T}{2\pi}\sum_{q\leq X^2} \sum_{n}\frac{a_{(k-1)/2}(n)a_{(k+1)/2}(nq)}{n}\frac{\log q}{q^{1-iv}} +O\bigs(k! T\Psi^k\bigs). \end{split} \] We rewrite this as \begin{equation}\label{eq:Sk-1by2(v)} \begin{split} S_{\frac{k-1}{2},2}(v) =&\, -\frac{T}{2\pi}\sum_{q\leq X^2} \Big(\sum_{\substack{n\\ q \nmid n}}+\sum_{\substack{n \\ q \mid n}}\Big) \frac{a_{(k-1)/2}(n)a_{(k+1)/2}(nq)}{n}\frac{\log q}{q^{1-iv}} +O\bigs(k! T\Psi^k\bigs) \\ =&\quad T_1+T_2+O\bigs(k! T\Psi^k\bigs). \end{split} \end{equation} Note that if $q\nmid n$, then \[ a_{(k+1)/2}(nq)=\tfrac{k+1}{2}\,a_{(k-1)/2}(n). \] Thus \begin{equation}\label{eq:Sk-1by2(v) main term} T_1 =-\frac{(k+1)T}{4\pi}\sum_{q\leq X^2}\frac{\log q}{q^{1-iv}} \sum_{\substack{n\\ q\nmid n}} \frac{a_{(k-1)/2}^2(n)}{n}.
\end{equation} We separate the sum over $n$ according to whether or not $n$ satisfies $a_{(k-1)/2}(n)=\left((k-1)/2\right)!,$ that is, according to whether or not $n$ is square-free. We then find that \begin{equation}\label{q nmid n} \sum_{n: q\nmid n}\frac{a_{(k-1)/2}^2(n)}{n} =\left((k-1)/2\right)! \sum_{\substack{q\nmid n \\n \text{ sq-free}}}\frac{a_{(k-1)/2}(n)}{n} +O\bigg(\left((k-1)/2\right)! \sum_{\substack{q\nmid n \\ n \text{ not sq-free}}}\frac{a_{(k-1)/2}(n)}{n}\bigg). \end{equation} By \eqref{not sq-free}, the error term is $O\bigs(k^2\left((k-1)/2\right)! \Psi^{\frac{k-5}{2}}\bigs)$. To treat the main term, we set \[ \Psi'=\sum_{\substack{p\leq X^2\\ p\neq q}} \frac 1p, \] and note that the same analysis that led to \eqref{sq-free} shows that \begin{equation}\label{sq free q nmid n} \sum_{\substack{q\nmid n \\ n \text{ sq-free}}}\frac{a_j(n)}{n} =(\Psi ')^j+O\bigs(j^2(\Psi ')^{j-2}\bigs). \end{equation} Now, since $(1-x)^j=1+O(jx)$ for $0\leq x \leq 1$, we see that $(\Psi ')^j=(\Psi-1/q)^j=\Psi^j+O\big(j \Psi^{j-1}\big)$. Thus, \[ \sum_{\substack{q\nmid n \\ n \text{ sq-free}}}\frac{a_j(n)}{n} =\Psi^j+O\big(j^2 \Psi^{j-1}\big). \] Combining this and our estimate for the error term in \eqref{q nmid n}, we see that \begin{equation}\label{q nmid n second} \sum_{\substack{n \\ q\nmid n}}\frac{a_{(k-1)/2}^2(n)}{n} =\left((k-1)/2\right)!\Psi^{\frac{k-1}{2}} +O\bigs(k^2\left((k-1)/2\right)!\Psi^{\frac{k-3}{2}}\bigs). \end{equation} We will insert this into \eqref{eq:Sk-1by2(v) main term}. To handle the sum over $q,$ we use the prime number theorem, which, under RH, says that \[ \sum_{q\leq x} \log q =x+O\bigs(x^{\frac12}\log^2{x}\bigs). \] Partial summation then gives \[ \sum_{q\leq X^2} \frac{\log q}{q^{1-iv}}=\frac{X^{2iv}-2^{iv}}{iv}+O(1). \] Substituting this and \eqref{q nmid n second} into \eqref{eq:Sk-1by2(v) main term}, we find that \begin{equation}\label{T_1 final} T_1 =-\frac{T}{2\pi}((k+1)/2)! \frac{X^{2iv}-2^{iv}}{iv} \Psi^{\frac{k-1}{2}} +O\bigs(k^2((k+1)/2)!T\log X\Psi^{\frac{k-3}{2}}\bigs). \end{equation} We now proceed to the estimation of the sum $T_2$ in \eqref{eq:Sk-1by2(v)}. Writing $n=qn_1$, we have \[ T_2 = -\frac{T}{2\pi} \sum_{q\leq X^2} \frac{\log q}{q^{1-iv}} \sum_{n_1} \frac{a_{(k-1)/2}(n_1q)a_{(k+1)/2}(n_1q^2)}{n_1 q}, \] where $n_1$ runs over integers with exactly $(k-3)/2$ prime factors. Since $a_{(k+1)/2}(n_1q^2)\leq \left((k+1)/2\right)!$, it easily follows that \[ T_2 \ll T\left((k+1)/2\right)! \sum_{q\leq X^2}\frac{\log q}{q} \sum_{n_1} \frac{a_{(k-1)/2}(n_1q)}{n_1 q}. \] Now $a_{(k-1)/2}(n_1q) \leq \frac{k-1}{2} a_{(k-3)/2}(n_1)$. Hence, \[ T_2 \ll k \left((k+1)/2\right)!\, T \sum_{q\leq X^2}\frac{\log q}{q^2} \sum_{n_1} \frac{a_{(k-3)/2}(n_1)}{n_1} \ll k \left((k+1)/2\right)! T \Psi^{\frac{k-3}{2}}. \] Combining this estimate with \eqref{T_1 final} in \eqref{eq:Sk-1by2(v)}, we see that \begin{equation}\label{S_(k-1)/2} S_{\frac{k-1}{2},2}(v) =-\frac{T}{2\pi}((k+1)/2)! \frac{X^{2iv}-2^{iv}}{iv}\Psi^{\frac{k-1}{2}} +O\bigs(k^2((k+1)/2)!T\log X \Psi^{\frac{k-3}{2}}\bigs) +O\bigs(k!T\Psi^k\bigs). \end{equation} By \eqref{eq:S_j(v) rewritten}, the same estimate holds for $S_{\frac{k-1}{2}}(v)$. By \eqref{conjugates}, $ S_{\frac{k+1}{2}}(v)$ and $S_{\frac{k+1}{2},2}(v)$ are both equal to the conjugate of this. Moreover, whenever $j\neq \frac{k-1}{2}, \frac{k}{2}$ or $\frac{k+1}{2}$, we have \begin{equation}\label{Sj(v) for most j} S_j(v)=O\bigs(k! T \Psi^k\bigs) \end{equation} by \eqref{eq:S_j(v) rewritten} and \eqref{eq: Landau term of S_j(v)}. Now we can complete the proof of the proposition.
Recall that \begin{equation} \label{setup sum Sj(v)} \sum_{0 < \gamma \leq T} \bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k=\frac{1}{2^k}\sum_{j=0}^{k}{k\choose j}S_j(v). \end{equation} If $k$ is even, then by \eqref{eq:Skby2(v)} and \eqref{Sj(v) for most j} \begin{equation} \notag \begin{split} \sum_{0 < \gamma \leq T} \bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k =&\, \frac{1}{2^k}{k\choose k/2}\Big((k/2)!N(T) \Psi^{\frac k2} +O\bigs(k^2(k/2)!N(T)\Psi^{\frac{k-4}{2}} \, \bigs)\Big) \\ &+O\bigg(\sum_{\substack{0\leq j\leq k\\ j\neq \frac{k}{2}}}\frac{1}{2^k}{k\choose j}k! T\Psi^k\bigg) \\ =&\, \beta_kN(T)\Psi^{\frac k2} +O\bigs(k^2\beta_kN(T)\Psi^{\frac{k-4}{2}}\bigs) +O\bigs(k! T \Psi^k\bigs). \end{split} \end{equation} The last error term here is negligible: since $k\ll \sqrt[6]{\log\log T},$ we have $\bigs(k!/\beta_k\bigs)\Psi^{\frac{k+4}{2}} =e^{O(k\log k+k\log \Psi)} \ll \log T,$ and since $N(T)\gg T\log T,$ it follows that $k!\,T\Psi^k \ll k^2\beta_k N(T)\Psi^{\frac{k-4}{2}}.$ This gives the even case of the proposition. If $k$ is odd, then by \eqref{Sj(v) for most j} and \eqref{setup sum Sj(v)} \[ \begin{split} \sum_{0 < \gamma \leq T} \bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k =&\, \frac{1}{2^k} \binom{k}{\frac{k-1}{2}} \Big(S_{\frac{k-1}{2}}(v)+S_{\frac{k+1}{2}}(v)\Big) +O\bigg( \sum_{\substack{0\leq j \leq k, \\j\neq \frac{k-1}{2},\frac{k+1}{2}}} \frac{1}{2^k} \binom{k}{j} k! T\Psi^k\bigg) \\ =&\, \frac{1}{2^k} \binom{k}{\frac{k-1}{2}} \Big(S_{\frac{k-1}{2}}(v)+\overline{S_{\frac{k-1}{2}}(v)}\Big) +O\bigs(k!T\Psi^k\bigs). \end{split} \] Using \eqref{S_(k-1)/2} (and the remark immediately after), we obtain \[ \begin{split} \sum_{0 < \gamma \leq T} \bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k =& -\frac{1}{2^{k}}{k\choose {\frac{k-1}{2}}}((k+1)/2)! \frac{T}{\pi}\frac{\sin(2v\log X)-\sin(v\log 2)}{v}\Psi^{\frac{k-1}{2}} \\ &+O\Big(\frac{1}{2^{k}}{k\choose {\frac{k-1}{2}}}k^2((k+1)/2)!T\log X \Psi^{\frac{k-3}{2}}\Big) \\ =& -\beta_{k+1} \frac{T}{\pi} \Big(\sdfrac{\sin(2v\log X)-\sin(v\log 2)}{v}\Big)\Psi^{\frac{k-1}{2}} +O\bigs(k^2\beta_{k+1}T\log X\Psi^{\frac{k-3}{2}}\bigs) \\ &+O\bigs(k!T\Psi^k\bigs). \end{split} \] This completes the proof. \end{proof} \subsection{Other Moment Calculations} Recall from Lemma \ref{Re log zeta} that the sum $\Re\CMcal{P}_X(\gamma +v)$ can be used to approximate the function $\log{|\zeta(\r+z)|}-M_X(\r, z),$ where $M_X(\r, z)$ is given by \eqref{mean}. In the following result, we provide an estimate for the average difference between the function $\log{|\zeta(\r+z)|}-M_X(\r, z)$ and the sum $\Re{\CMcal{P}_X(\gamma +v)}.$ \begin{prop}\label{Re log zeta error} Assume RH and Montgomery's Pair Correlation Conjecture. Let $T^{\frac{\d}{8k}} \leq X\leq T^{\frac{1}{8k}}$ for $0<\d \leq 1.$ Then for a constant $D$ depending on $\d,$ we have \[ \sum_{0 < \gamma \leq T} \bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)-\Re{\CMcal{P}_X(\gamma +v)}\bigs)^{k} \ll (Dk)^{2k}N(T). \] \end{prop} \begin{proof}\let\qed\relax By \eqref{eq:Re log zeta} from Lemma \ref{Re log zeta}, \[ \log|\zeta(\r+z)|-M_X(\r, z) -\Re\CMcal{P}_X(\gamma +v) \ll \sum_{i=1}^{4}r_i(X, \gamma +v), \] where the terms $r_i(X, \gamma +v)$ are as given in the statement of the lemma. We take the $k$th power of each side of this inequality and sum over $0 < \gamma \leq T.$ Then the right-hand side is at most \begin{equation}\label{remainder to k} 4^k \sum_{i=1}^4 \bigg( \sum_{0 < \gamma \leq T}r_i(X, \gamma +v)^k\bigg). \end{equation} Here we have used the elementary inequality \begin{equation}\label{convexity} \Big(\frac 1n \sum_{i=1}^n x_i \Big)^k \leq \frac{1}{n} \sum_{i=1}^n x_i^k, \end{equation} which is valid for nonnegative numbers $x_1, x_2, \dots, x_n$ and $k\geq 1.$ The first two error terms in \eqref{remainder to k} can be estimated in a straightforward manner by applying Lemma \ref{Soundmomentlemma3}.
For the first one, we note that by the definition of $w_X(n)$ in \eqref{w_X}, \[ 1-w_X(p)= \begin{cases} 0 \quad &\text{if} \quad p\leq X,\\ \frac{\log(p/X)}{\log X} \quad &\text{if} \quad X< p\leq X^2. \end{cases} \] Thus \begin{equation} \notag \Big|\frac{1-w_X(p)}{p^{iv}}\Big|^2<\frac{\log^2 p}{\log^2 X}\quad \text{ for } \quad p\leq X^2. \end{equation} By the Cauchy-Schwarz inequality and then Lemma \ref{Soundmomentlemma3}, \begin{equation}\label{1-w} \begin{split} \sum_{0 < \gamma \leq T}\Big|\sum_{p\leq X^2} \sdfrac{1-w_X(p)}{p^{1/2+i(\gamma +v)}}\Big|^{k} &\leq \sqrt{N(T)}\bigg(\sum_{0 < \gamma \leq T}\Big|\sum_{p\leq X^2}\sdfrac{1-w_X(p)}{p^{1/2+i(\gamma +v)}}\Big|^{2k}\bigg)^{\frac12} \\ &\ll (k!)^{\frac12}N(T)\Big(\sum_{p\leq X^2}\sdfrac{\log^2{p}}{p\log^2{X}}\Big)^{k/2} \ll (ck)^kN(T). \end{split} \end{equation} In the last step, we have used the estimate $\sum_{p\leq X^2} \frac{\log^2{p}}{p}\ll \log^2{X}.$ To estimate the second error term, we need the following: \[ w_X(p^2)= \begin{cases} 0 \quad &\text{if} \quad p\leq \sqrt{X},\\ \frac{\log(p^2/X)}{\log X} \quad &\text{if} \quad \sqrt{X}< p\leq X. \end{cases} \] This implies \[ \Big|\frac{w_X(p^2)}{p^{1/2+2iv}}\Big|^2 \leq \dfrac{1}{p} \quad \text{ for } \quad p\leq X. \] Again, we apply the Cauchy-Schwarz inequality and then Lemma \ref{Soundmomentlemma3} to obtain \begin{equation}\label{w} \begin{split} \sum_{0 < \gamma \leq T}\Big|\sum_{p\leq X}\sdfrac{w_X(p^2)}{p^{1+2i(\gamma +v)}}\Big|^{k} &\leq \sqrt{N(T)}\bigg(\sum_{0 < \gamma \leq T}\Big|\sum_{p\leq X}\sdfrac{w_X(p^2)}{p^{1+2i(\gamma +v)}}\Big|^{2k}\bigg)^{\frac12} \\ &\ll (k!)^{\frac12}N(T)\Big(\sum_{p\leq X}\sdfrac{1}{p^{2}}\Big)^{k/2} \ll (ck)^kN(T). \end{split} \end{equation} The third error term is handled in Proposition \ref{error term integral}, and the fourth error term is estimated in Proposition \ref{zero spacing eta v} under the assumption of Montgomery's Pair Correlation Conjecture. \end{proof} \begin{prop}\label{error term integral} Assume RH and let $\displaystyle X\leq T^{\frac{1}{8k}}.$ Then \[ \sum_{0 < \gamma \leq T}\bigg(\frac{1}{\log X} \int_{1/2}^\infty X^{\tfrac12-\sigma} \bigg|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\bigg|\mathop{d\sigma}\bigg)^{k} \ll (ck)^{k/2}N(T). \] \end{prop} \begin{proof} By the Cauchy-Schwarz inequality, \begin{equation}\label{eq:integral CS} \begin{split} \sum_{0 < \gamma \leq T}\bigg(\frac{1}{\log X} \int_{1/2}^\infty X^{\tfrac12-\sigma}&\Big|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|\mathop{d\sigma}\bigg)^{k} \\ \leq \frac{\sqrt{N(T)}}{(\log X)^k}& \bigg(\sum_{0 < \gamma \leq T}\bigg(\int_{1/2}^\infty X^{\tfrac12-\sigma}\Big|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|\mathop{d\sigma}\bigg)^{2k}\bigg)^{1/2}. \end{split} \end{equation} By H\"{o}lder's inequality, \begin{equation} \begin{split}\label{eq:integral} \bigg(\int_{1/2}^\infty X^{\tfrac12-\sigma}&\Big|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|\mathop{d\sigma}\bigg)^{2k} \\ &\leq \Big( \int_{1/2}^\infty X^{\tfrac12-\sigma}\mathop{d\sigma}\Big)^{2k-1} \int_{1/2}^\infty X^{\tfrac12-\sigma}\Big|\sum_{p\leq X^2} \frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|^{2k}\mathop{d\sigma} \\ & = \sdfrac{1}{(\log X)^{2k-1}}\int_{1/2}^\infty X^{\tfrac12-\sigma}\Big|\sum_{p\leq X^2} \frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|^{2k}\mathop{d\sigma}. \end{split} \end{equation} Here by Lemma \ref{Soundmomentlemma3}, \begin{equation}\label{eq:sum prop} \sum_{0 < \gamma \leq T} \Big|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|^{2k} \ll k!N(T)\Big(\sum_{p\leq X^2}\frac{\Lambda_X^2(p)\log^2{(Xp)}}{p^{2\sigma}}\Big)^k. \end{equation} Since $\Lambda_X(p)\leq \log p $ and $p\leq X^2,$ we find that \[ \Lambda_X^2(p)\log^2(Xp) \leq \log^2{p}(\log X+\log p)^2 \leq 9\log^2{p} \log^2{X}. \] Thus \[ \Big(\sum_{p\leq X^2}\sdfrac{\Lambda_X^2(p)\log^2{(Xp)}}{p^{2\sigma}}\Big)^k \ll c^k (\log X)^{2k} \Big(\sum_{p\leq X^2} \frac{\log^2 p}{p^{2\sigma}}\Big)^k. \] By Mertens' theorem, $ \sum_{p\leq X^2} \frac{\log^2 p}{p^{2\sigma}} \ll \log^2{X} $ uniformly for $\sigma\geq \frac12.$ Thus, the right-hand side of \eqref{eq:sum prop} is $\ll (ck)^k N(T) (\log X)^{4k}.$ Inserting this bound into \eqref{eq:integral}, we find that \[ \begin{split} \sum_{0 < \gamma \leq T}\bigg(\int_{1/2}^\infty X^{\tfrac12-\sigma}&\Big|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i(\gamma +v)}}\Big|\mathop{d\sigma}\bigg)^{2k} \\ \ll &\, (ck)^k N(T) (\log X)^{2k+1} \int_{1/2}^\infty X^{\tfrac12-\sigma}\mathop{d\sigma} \ll (ck)^k N(T) (\log X)^{2k}. \end{split} \] The claim of the proposition follows from this and \eqref{eq:integral CS}. \end{proof} \begin{prop}\label{zero spacing eta v} Let $T^{\frac{\d}{8k}} \leq X\leq T^{\frac{1}{8k}}$ for $0<\d \leq 1.$ If Montgomery's Pair Correlation Conjecture is true, then for a constant $D$ depending on $\d$ \[ \sum_{0 < \gamma \leq T}\bigg(\Big( 1+\log^+ \sdfrac{1}{\eta_{\gamma +v}\log{X}}\Big)\frac{E(X,\gamma +v)}{\log{X}}\bigg)^k \ll (Dk)^{2k}N(T). \] Here $\eta_{\gamma +v}$ is as defined by \eqref{eta}, and $E(X, t)$ is as defined in \eqref{E}. \end{prop} \begin{proof} By the Cauchy-Schwarz inequality, \begin{equation} \label{eq:Cauchy Schwarz on zero spacing} \begin{split} \sum_{0 < \gamma \leq T}\bigg(\Big( 1+&\log^+ \sdfrac{1}{\eta_{\gamma +v}\log{X}}\Big) \sdfrac{E(X,\gamma +v)}{\log X}\bigg)^k \\ &\leq \frac{1}{(\log X)^{k}}\Big(\sum_{0 < \gamma \leq T}\Big( 1+\log^+ \sdfrac{1}{\eta_{\gamma +v}\log{X}} \Big)^{2k}\Big)^{1/2}\Big(\sum_{0 < \gamma \leq T} \big| E(X,\gamma +v)\big|^{2k}\Big)^{1/2}. \end{split} \end{equation} First consider the sum \[ \sum_{0 < \gamma \leq T} \big|E(X,\gamma +v)\big|^{2k} =\sum_{0 < \gamma \leq T}\Big| \sum_{n\leq X^2}\frac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)}}+\log{(\gamma +v)}\Big|^{2k}. \] We separate the sum over $n$ into primes and higher powers of primes. Then by \eqref{convexity}, \begin{equation} \label{eq:moment of E} \begin{split} &\sum_{0 < \gamma \leq T}\Big| \sum_{n\leq X^2}\frac{\Lambda_X(n)}{n^{\sigma_1+i(\gamma +v)}}+\log{(\gamma +v)}\Big|^{2k} \\ \leq 9^k &\sum_{0 < \gamma \leq T}\Big| \sum_{p\leq X^2}\frac{\Lambda_X(p)}{p^{\sigma_1+i(\gamma +v)}}\Big|^{2k} +9^k\sum_{0 < \gamma \leq T}\Big| \sum_{\substack{p^\ell \leq X^2, \\ \ell\geq 2}}\frac{\Lambda_X(p^\ell)}{p^{\ell\sigma_1+i \ell(\gamma +v)}}\Big|^{2k} +9^k\sum_{0 < \gamma \leq T} (\log{(\gamma +v)})^{2k}. \end{split} \end{equation} Since $\sigma_1=\frac12 +\frac4{\log X},$ it is easy to see that the second sum on the right is $O\bigs((c\log X)^{2k}N(T)\bigs).$ By Lemma \ref{Soundmomentlemma3}, \[ \sum_{0 < \gamma \leq T}\Big| \sum_{p\leq X^2}\frac{\Lambda_X(p)}{p^{\sigma_1+i(\gamma +v)}}\Big|^{2k} \ll k! N(T) \Big(\sum_{p\leq X^2} \frac{\Lambda_X(p)^2}{p^{2\sigma_1}}\Big)^k \ll (ck)^k(\log X)^{2k}N(T).
\] Furthermore, we trivially have \[ \sum_{0 < \gamma \leq T} (\log{(\gamma +v)})^{2k} \ll N(T)(\log T)^{2k}. \] From these estimates and \eqref{eq:moment of E}, we obtain \[ \sum_{0 < \gamma \leq T} \big|E(X,\gamma +v)\big|^{2k} \ll c^kN(T)\big(k^k(\log X)^{2k}+(\log T)^{2k}\big). \] Since $X\geq T^{\d/(8k)},$ this gives \begin{equation}\label{eq:moment of E 2} \sum_{0 < \gamma \leq T} \big|E(X,\gamma +v)\big|^{2k}\ll (Dk)^{2k} N(T) (\log X)^{2k} \end{equation} for a constant $D$ depending on $\d.$ Next, we treat the expression \begin{equation}\label{sum log eta} \sum_{0 < \gamma \leq T}\Big( 1+\log^+ \sdfrac{1}{\eta_{\gamma +v}\log{X}} \Big)^{2k} \end{equation} from \eqref{eq:Cauchy Schwarz on zero spacing}. Recall that \[ \eta_{\gamma +v}=\min_{\gamma '\neq \gamma +v}|\gamma '-(\gamma +v)|. \] Let $\gamma \in (0, T].$ If $\displaystyle \eta_{\gamma +v}> \sdfrac{1}{\log X},$ then the contribution of $\gamma $ to the sum is $1.$ If $\displaystyle \eta_{\gamma +v}\leq \sdfrac{1}{\log X},$ then there exists a nonnegative integer $j$ such that \begin{equation}\label{eta inequality} \frac{e^{-j-1}}{\log X} < \eta_{\gamma +v} \leq \frac{e^{-j}}{\log X}. \end{equation} By the inequality on the right-hand side, for some $\gamma '$ in $\Big(-|v|-\tfrac{1}{\log X}, T+|v|+\tfrac{1}{\log X}\Big]$ other than $\gamma +v$ \[ -v-\frac{e^{-j}}{\log X} \leq \gamma -\gamma ' \leq -v+\frac{e^{-j}}{\log X}. \] By Montgomery's Pair Correlation Conjecture, the number of such ordinates $\gamma $ and $\gamma '$ is \[ \ll N(T)\int_{(-v-e^{-j}/\log X)\log T/(2\pi)}^{(-v+e^{-j}/\log X)\log T/(2\pi)} \left(1-\frac{\sin^2(\pi x)}{(\pi x)^2}\right) \mathop{dx}. \] The integrand is nonnegative and at most $1,$ so the above is \[ \ll \frac{e^{-j}\log T}{\log X} N(T) \ll \frac{k}{\d}e^{-j} N(T) \] since $X \geq T^{\d/(8k)}.$ For those $\gamma $ we have \[ 1+\log^+\sdfrac{1}{\eta_{\gamma +v}\log{X}} < j+2. \] Thus, the sum in \eqref{sum log eta} is \begin{equation}\label{eq:proof eta} \ll \frac{k}{\d} N(T) \sum_{j=0}^\infty \frac{(j+2)^{2k}}{e^j}. \end{equation} Comparing the unimodal summand with the corresponding integral, the series satisfies \[ \sum_{j=0}^\infty \frac{(j+2)^{2k}}{e^j} = e^2\sum_{j=2}^\infty \frac{j^{2k}}{e^j} \ll \int_0^\infty x^{2k}e^{-x} \mathop{dx} = \Gamma(2k+1) = (2k)!. \] Hence, \[ \sum_{0 < \gamma \leq T}\Big( 1+\log^+ \sdfrac{1}{\eta_{\gamma +v}\log{X}} \Big)^{2k} \ll (Dk)^{2k} N(T) \] for a constant $D=D(\d).$ Combining this bound with \eqref{eq:moment of E 2} in \eqref{eq:Cauchy Schwarz on zero spacing}, we complete the proof. \end{proof} \subsection{Proof of Theorem \ref{moments of Re log zeta}} We can now complete the proof of Theorem \ref{moments of Re log zeta}. Recall that $\displaystyle T^{\tfrac{\d}{8k}}\leq X\leq T^{\tfrac{1}{8k}}$ for some $0<\d \leq 1$. We write \[ \log{|\zeta(\r+z)|}-M_X(\r, z) = \Re\CMcal{P}_X(\gamma +v)+ r(X,\gamma +v). \] Taking the $k$th moment of each side, we obtain \begin{equation}\label{log-M moments} \begin{split} \sum_{0<\gamma \leq T} \bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)\bigs)^k &=\sum_{0<\gamma \leq T} \bigs(\Re\CMcal{P}_X(\gamma +v)\bigs)^k +\sum_{0 < \gamma \leq T} \bigs(r(X,\gamma +v)\bigs)^k \\ &+O \bigg( \sum_{j=1}^{k-1}\binom{k}{j} \sum_{0 < \gamma \leq T}\bigs|\Re{\CMcal{P}_X(\gamma +v)}\bigs|^j \bigs|r(X,\gamma +v)\bigs|^{k-j}\bigg). \end{split} \end{equation} We write the right-hand side as \[ \sum_{0<\gamma \leq T} \bigs(\Re\CMcal{P}_X(\gamma +v)\bigs)^k +A_1+A_2.
\] By Proposition \ref{Re log zeta error}, \begin{equation}\label{A 1 bound} A_1 \ll (Dk)^{2k} N(T), \end{equation} where $D$ depends on $\d$. To estimate each term in $A_2$, we use the Cauchy-Schwarz inequality to find that \begin{align*} \sum_{0 < \gamma \leq T} \bigs|\Re\CMcal{P}_X&(\gamma +v)\bigs|^j \bigs|r(X,\gamma +v)\bigs|^{k-j} \\ &\leq \Big(\sum_{0 < \gamma \leq T} \bigs|\Re{\CMcal{P}_X(\gamma +v)}\bigs|^{2j}\Big)^{\frac{1}{2}} \Big(\sum_{0 < \gamma \leq T} \bigs|r(X,\gamma +v)\bigs|^{2k-2j}\Big)^{\frac{1}{2}}. \end{align*} By Propositions \ref{moments of Re Dirichlet polyl v} and \ref{Re log zeta error}, and the hypothesis that $k \ll \log\log\log T$, the right-hand side is \[ \ll \Big(\beta_{2j}N(T)\Psi^j \Big)^{\frac12}\Big(\bigs(D(k-j)\bigs)^{4k-4j}N(T)\Big)^{\frac12} \ll \beta_{2j}^{1/2}(Dk)^{2(k-j)}N(T)\Psi^{j/2}, \] where $\beta_{2j}$ is as defined in \eqref{beta} and $D=D(\d)$. Thus \[ A_2 \ll N(T) \sum_{j=1}^{k-1} \binom{k}{j} \beta_{2j}^{1/2} (Dk)^{2(k-j)} \Psi^{j/2}. \] It is easily seen from Stirling's approximation that \begin{equation}\label{beta asymptotic} \beta_{2j} \sim c\Big(\frac{j}{e}\Big)^j, \end{equation} so $\displaystyle \beta_{2j}^{1/2} \ll j^{j/2} < k^{j/2}$. Using this, we obtain \[ A_2 \ll (Dk)^{2k} N(T) \sum_{j=1}^{k-1} \binom{k}{j} \Psi^{j/2}. \] Here, the right-hand side equals \[ (Dk)^{2k} N(T) \Big\{ \Psi^{k/2}\bigs(1+1/\sqrt{\Psi}\,\bigs)^k-\Psi^{k/2}-1\Big\}. \] By the mean value theorem of differential calculus, $(1+x)^k=1+O(k2^kx) $ for $ 0\leq x\leq 1$. Hence for a constant $D=D(\d)$, \[ A_2 \ll k(Dk)^{2k} N(T) \Psi^{\tfrac{k-1}{2}}. \] Combining this with \eqref{log-M moments} and \eqref{A 1 bound}, we find that for $D=D(\d)$ \begin{align*} &\sum_{0<\gamma \leq T}\bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)\bigs)^k \\ =&\, \sum_{0<\gamma \leq T} \bigs(\Re\CMcal{P}_X(\gamma +v)\bigs)^k +O\bigs((Dk)^{2k}N(T)\bigs) +O\Big(k(Dk)^{2k}N(T)\Psi^{\tfrac{k-1}{2}}\Big) \\ =&\, \sum_{0<\gamma \leq T} \bigs(\Re\CMcal{P}_X(\gamma +v)\bigs)^k +O\Big(k(Dk)^{2k}N(T)\Psi^{\tfrac{k-1}{2}}\Big). \end{align*} Now suppose that $k$ is even. Then by Proposition \ref{moments of Re Dirichlet polyl v}, the right-hand side is \[ \beta_k N(T)\Psi^{\tfrac k2} +O\Big(k^2\beta_k N(T)\Psi^{\tfrac{k-4}{2}}\Big) +O\Big(k(Dk)^{2k}N(T)\Psi^{\tfrac{k-1}{2}}\Big). \] By \eqref{beta asymptotic}, the second $O$-term may be replaced by \[ O\Big(k\beta_k(Dk)^{\tfrac{3k}{2}}N(T)\Psi^{\tfrac{k-1}{2}}\Big), \] which is larger than the first $O$-term in the above sum. Thus, we obtain \[ \sum_{0<\gamma \leq T} \bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)\bigs)^k =\beta_k N(T)\Psi^{\tfrac k2} + O\Big(D^k k^{\tfrac{3k+2}{2}}\beta_k N(T)\Psi^{\tfrac{k-1}{2}}\Big) \] for even $k$. Arguing similarly when $k$ is odd, we find that \begin{align*} \sum_{0<\gamma \leq T} \bigs(\log|\zeta(\r+z)|-M_X(\r, z)\bigs)^k &= -\frac{\beta_{k+1}}{\pi}\frac{\sin(2v\log X)-\sin(v\log 2)}{v} T\Psi^{\tfrac{k-1}{2}} \\ &\quad+O\Big(D^k k^{\tfrac{3k+1}{2}}\beta_{k+1} N(T)\Psi^{\tfrac{k-1}{2}}\Big). \end{align*} Since \[ \frac{\sin(2v\log X)-\sin(v\log 2)}{v} \ll \log X, \] the above $k$th moment is \[ \ll D^k k^{\tfrac{3k+1}{2}}\beta_{k+1} N(T)\Psi^{\tfrac{k-1}{2}}. \] This completes the proof of Theorem \ref{moments of Re log zeta}.
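\begin{rem*} The quantity $\Psi=\sum_{p\leq X^2}1/p,$ which controls all of the moment estimates above, grows like $\log\log X$ by Mertens' theorem. As a purely illustrative aside (not used in any proof), the following minimal Python sketch compares partial sums of $\sum_{p\leq X}1/p$ with $\log\log X+M,$ where $M$ is the Meissel--Mertens constant; it assumes the \texttt{sympy} library is available, and the quoted value of $M$ is approximate.
\begin{verbatim}
from math import log
from sympy import primerange

# Approximate value of the Meissel-Mertens constant, for comparison only.
M = 0.2614972128
for X in (10**3, 10**4, 10**5, 10**6):
    Psi = sum(1.0 / p for p in primerange(2, X + 1))  # sum of 1/p over p <= X
    print(f"X = {X:>7}:  sum 1/p = {Psi:.5f},  loglog X + M = {log(log(X)) + M:.5f}")
\end{verbatim}
The slow $\log\log X$ growth of $\Psi$ is what produces the $\sqrt{\tfrac12\log\log T}$ normalization appearing in Theorem \ref{distr of Re log zeta} below. \end{rem*}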
\section{Proof of Theorem \ref{distr of Re log zeta}}\label{sec:proof} It follows from Proposition \ref{moments of Re Dirichlet polyl v} that \[ \frac{1}{N(T)}\sum_{0<\gamma \leq T}\mathbbm{1}_{[a,b]} \bigg(\frac{\Re\CMcal{P}_X(\gamma +v)}{\sqrt{\Psi/2}}\bigg) = \frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx}+o(1). \] We obtain a precise error term for this statement in Section \ref{subsection distr of P}. In order to do this, we introduce a random polynomial whose real part has the same Gaussian distribution as the asymptotic distribution of $\Re\CMcal{P}_X(\gamma +v).$ \subsection{A Random Model for $\Re\CMcal{P}_X(\gamma +v)$}\label{random model} Suppose that for each prime $p,$ $\theta_p$ denotes a random variable that is uniformly distributed over $[0, 1],$ and that $(\theta_p)_p$ is a sequence of independent and identically distributed random variables. We define \[ \CMcal{P}_v(\underline{\theta}):= \sum_{p\leq X^2}\frac{e^{2\pi i \theta_{p}}}{p^{1/2+iv}}, \] similarly to the polynomial $\CMcal{P}_X(\gamma +v).$ For now, we take $\displaystyle X\leq T^{\frac{1}{8k}}.$ In order to understand the distribution of $\Re\CMcal{P}_v(\underline{\theta}),$ we first observe that \[ \int_0^1 e^{2\pi i x\theta_p }\mathop{d\theta_p}= \begin{cases} 1 &\text{if} \quad x=0,\\ 0 &\text{if $x$ is a nonzero integer}. \end{cases} \] We also define $\theta_n$ for positive integers $n$ that are not primes. We set $\theta_1:=0,$ and for $n >1$ with prime factorization $\displaystyle n=p_1^{\a_1}\dots p_r^{\a_r}$ we set \[ \theta_n:=\a_1\theta_{p_1}+\dots+\a_r\theta_{p_r}. \] It then follows that \[ \theta_{mn}=\theta_m+\theta_n \quad \text{for any positive integers} \quad m, n \quad \text{and} \quad \theta_m=\theta_n \quad \text{if and only if} \quad m=n. \] Furthermore, the last assertion implies that \begin{equation}\label{orthogonality} \int_0^1 e^{2\pi i\theta_m} \overline{e^{2\pi i\theta_n}} \mathop{d\underline{\theta}} = \begin{cases} 1 &\text{if} \quad m=n, \\ 0 &\text{if} \quad m\neq n. \end{cases} \end{equation} Here and from now on, $\displaystyle \int_0^1(\dots)\mathop{d\underline{\theta}} $ represents the multidimensional integral $\int_0^1\dots \int_0^1(\dots) \prod_{p\leq X^2}\mathop{d\theta_p}.$ A consequence of this and the identity $\displaystyle \Re z=\tfrac{z+\overline{z}}{2}$ is \[ \int_0^1 \bigs(\Re\CMcal{P}_v(\theta)\bigs)^k\mathop{d\underline{\theta}}= \begin{cases} \beta_k \Psi^{\frac k2} &\text{if} \quad k \text{ is even,}\\ 0 &\text{if} \quad k \text{ is odd}.
\end{cases} \] The coefficients $\beta_k$ are as defined in \eqref{beta} and $\Psi=\sum_{p\leq X^2} 1/p.$ This means that the polynomial $\displaystyle \Re\CMcal{P}_v(\theta)$ has a Gaussian distribution with mean $0$ and variance $\tfrac{1}{2}\Psi.$ Our goal is to understand the distribution of $\displaystyle \Re\CMcal{P}_X(\gamma +v)$ in relation to the distribution of the random polynomial $\displaystyle \Re\CMcal{P}_v(\theta).$ In order to do this, we compare the Fourier transform of $\displaystyle \Re\CMcal{P}_X(\gamma +v)$ with that of $\displaystyle \Re\CMcal{P}_v(\theta).$ First, we need to express the moments of $\displaystyle \Re\CMcal{P}_X(\gamma +v)$ in terms of the moments of $\displaystyle \Re\CMcal{P}_v(\theta).$ \begin{lem}\label{lemma 3.4} Let $\displaystyle X\leq T^{\frac{1}{8k}}$ where $k\ll \log\log T.$ Then \begin{equation}\label{eq:lemma 3.4 real} \begin{split} &\sum_{0 < \gamma \leq T} \bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k \\ =& \,N(T)\int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k\mathop{d{\underline{\theta}}} -\frac{T}{\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq k}}\frac{\log q}{q^{\ell/2}}\int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \Re e^{2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}} +O\bigs((ck)^k\sqrt{T}\log^2{T}\bigs). \end{split} \end{equation} \end{lem} \begin{proof} By the identity $ \Re z=\sdfrac{z+\overline{z}}{2},$ \[ \sum_{0 < \gamma \leq T}\bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k =\frac{1}{2^k}\sum_{j=0}^k \binom{k}{j} \sum_{0 < \gamma \leq T}\CMcal{P}_X(\gamma +v)^j \overline{\CMcal{P}_X(\gamma +v)}^{k-j}. \] Thus it suffices to compute the sum over $\gamma $ in the above for each $j.$ Suppose that $j$ is fixed. By definition, \[ \begin{split} \sum_{0 < \gamma \leq T}&\CMcal{P}_X(\gamma +v)^j \overline{\CMcal{P}_X(\gamma +v)}^{k-j} \\ &=\sum_{0 < \gamma \leq T}\Big(\sum_{p\leq X^2}\sdfrac{1}{p^{1/2+i(\gamma +v)}}\Big)^j \Big(\overline{\sum_{p\leq X^2}\sdfrac{1}{p^{1/2+i(\gamma +v)}}}\Big)^{k-j}. \end{split} \] By Corollary \ref{cor:moments}, this is \begin{equation}\label{eq:proof of lemma 3.4} \begin{split} &N(T)\sum_{n} \frac{a_j(n)}{\sqrt n} \frac{a_{k-j}(n)}{\sqrt n} \\ &-\frac{T}{2\pi}\sum_{n}\sum_{m} \frac{a_j(n)}{\sqrt n} \frac{a_{k-j}(m)}{\sqrt m}\Big(\frac{m}{n}\Big)^{iv} \bigg\{\frac{\Lambda(m/n)}{\sqrt{m/n}}+\frac{\Lambda(n/m)}{\sqrt{n/m}}\bigg\} \\ &+O\bigg(\log T \log\log T\Big(\sum_n\sdfrac{a_j(n)}{n}\sum_{n<m}a_{k-j}(m) +\sum_m\sdfrac{a_{k-j}(m)}{m}\sum_{m<n} a_j(n)\Big)\bigg) \\ &+O\bigg(X^{2k}\log^2{T}\Big(\sum_n\sdfrac{a_j(n)^2}{n}+\sum_m\sdfrac{a_{k-j}(m)^2}{m}\Big)\bigg), \end{split} \end{equation} where $a_{r}(p_1\dots p_{r})$ denotes the number of permutations of the primes $p_1,\dots, p_{r}.$ Also, in the above equation and throughout this proof, we suppose that $n$ is always a product of $j$ primes while $m$ is always a product of $k-j$ primes, where all these primes are of size at most $X^2.$ By \eqref{orthogonality}, the first main term can be written as \[ N(T)\int_0^1\Big(\sum_n\frac{a_j(n)}{n^{1/2+iv}}e^{2\pi i\theta_n}\Big) \Big(\overline{\sum_{m}\frac{a_{k-j}(m)}{m^{1/2+iv}}e^{2\pi i\theta_m}}\Big)\mathop{d{\underline{\theta}}}. \] We can write this as \[ N(T)\int_0^1\CMcal{P}_v(\underline{\theta})^j \overline{\CMcal{P}_v(\underline{\theta})}^{k-j}\mathop{d{\underline{\theta}}}. \] Then we see that the first main term in \eqref{eq:lemma 3.4 real} follows from summing the above term over $0\leq j \leq k$ with the binomial coefficients.
Again by \eqref{orthogonality}, the second main term in \eqref{eq:proof of lemma 3.4} can be written as \[ \begin{split} &-\frac{T}{2\pi} \int_0^1\Big(\sum_n\sdfrac{a_j(n)}{n^{1/2+iv}}e^{2\pi i\theta_n}\Big) \Big(\overline{\sum_m\sdfrac{a_{k-j}(m)}{m^{1/2+iv}}e^{2\pi i\theta_m}}\Big) \Big(\sum_{\substack{q\leq X^2,\\1\leq\ell\leq k}}\sdfrac{\log q}{q^{\ell/2}}e^{2\pi i\ell\theta_{q}}\Big)\mathop{d{\underline{\theta}}} \\ &-\frac{T}{2\pi}\int_0^1\Big(\sum_n\sdfrac{a_j(n)}{n^{1/2+iv}}e^{2\pi i\theta_n}\Big) \Big(\overline{\sum_m\sdfrac{a_{k-j}(m)}{m^{1/2+iv}}e^{2\pi i\theta_m}} \Big) \Big(\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq k}} \sdfrac{\log q}{q^{\ell/2}} e^{-2\pi i\ell\theta_q} \Big)\mathop{d{\underline{\theta}}}. \end{split} \] Apart from the factor $-\frac{T}{2\pi},$ we write the first of these integrals as \[ {m_j}' := \sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq k}}\frac{\log q}{q^{\ell/2}} \int_{0}^{1} {\CMcal{P}_v(\underline{\theta})}^j \overline{\CMcal{P}_v(\underline{\theta})}^{k-j} e^{2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}}, \] and, similarly, the second as \[ {m_j}{''} :=\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq k}}\frac{\log q}{q^{\ell/2}} \int_{0}^{1} \CMcal{P}_v(\underline{\theta})^j \overline{\CMcal{P}_v(\underline{\theta})}^{k-j} e^{-2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}}. \] We sum these over $j$ with the binomial coefficients and obtain \[ \frac{1}{2^k}\sum_{j=0}^k \binom{k}{j}\bigs( {m_j}'+{m_j}{''} \bigs) =2\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq k}} \sdfrac{\log q}{q^{\ell/2}} \int_0^1 \bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \Re e^{2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}}. \] Thus the second main term of \eqref{eq:lemma 3.4 real} follows. It is left to bound the error terms in \eqref{eq:proof of lemma 3.4}. By \eqref{estimate1} and \eqref{estimate2}, both of the error terms are $\ll (ck)^k \Psi^k \sqrt[4]{T}\log^2{T}.$ Then the total contribution from $\displaystyle j=0, 1, \dots, k$ is \[ \ll \frac{1}{2^k}\sum_{j=0}^{k}{{k}\choose{j}} (ck)^k \Psi^k \sqrt[4]{T} \log^2{T} \ll (ck)^k\sqrt{T}\log^2{T}. \] This completes the proof. \end{proof} The next two lemmas will be needed in the next section. \begin{lem}\label{our lemma 3.16} For $\displaystyle X\leq T^{\frac{1}{8k}},$ \[ \sum_{0 < \gamma \leq T}\bigs|\Re{\CMcal{P}_X(\gamma +v)}\bigs|^k \ll (ck\Psi)^{k/2}N(T). \] \end{lem} \begin{proof} We have \[ \sum_{0 < \gamma \leq T}\bigs|\Re{\CMcal{P}_X(\gamma +v)}\bigs|^k \leq \sqrt{N(T)}\Big(\sum_{0 < \gamma \leq T}\bigs|\CMcal{P}_X(\gamma +v)\bigs|^{2k}\Big)^{1/2} \] by the Cauchy-Schwarz inequality. Then the result easily follows by Lemma \ref{Soundmomentlemma3}, which implies that \[ \sum_{0 < \gamma \leq T}\Big|\sum_{p\leq X^2}\frac{1}{p^{1/2+i(\gamma +v)}}\Big|^{2k} \ll k!N(T)\Big(\sum_{p\leq X^2}\sdfrac{1}{p}\Big)^k =k!\Psi^k N(T). \] \end{proof} \begin{lem}\label{Tsang lemma 3.4} Suppose $k$ is a nonnegative integer. For $\displaystyle 2\leq X \leq T^{\frac{1}{2k}},$ we have \[ \int_0^1\bigs|\Re{\CMcal{P}_v(\underline{\theta})}\bigs|^k \mathop{d{\underline{\theta}}} \ll (ck\Psi)^{k/2}. \] \end{lem} \begin{proof} For $v=0,$ this is (3.10) in \cite{Tsang}. For $v\neq 0,$ the result can be proven in a similar way by using \eqref{orthogonality}.
\end{proof} \subsection{The Fourier Transform of $\Re\CMcal{P}_X(\gamma +v)$} We use the polynomial approximation \begin{equation}\label{eq:exponential} e^{ix}=\sum_{0\leq k<K}\frac{(ix)^k}{k!}+O\bigg(\frac{|x|^K}{K!}\bigg) \quad \text{for} \quad x\in\mathbb{R}, \end{equation} to obtain an approximation to the Fourier transform of the polynomial $\Re\CMcal{P}_X(\gamma +v).$ We define the following parameters \begin{equation}\label{Omega K} \Omega=\Psi(T)^2, \quad K=2\lfloor \Psi(T)^6\rfloor, \quad X \leq T^{\tfrac{1}{16\Psi(T)^6}}, \end{equation} where \[ \Psi(T)=\sum_{p\leq T} \frac{1}{p}. \] The following lemma relates the Fourier transform of $\Re\CMcal{P}_X(\gamma +v)$ to the Fourier transform of $\Re\CMcal{P}_v(\theta).$ \begin{lem}\label{fourier} Let $\displaystyle X \leq T^{\tfrac{1}{16\Psi(T)^6}}.$ Then for $0\leq \omega \leq \Omega,$ \[ \begin{split} \sum_{0 < \gamma \leq T}&\exp\bigs(2\pi i \omega\Re{\CMcal{P}_X(\gamma +v)}\bigs) \\ &=\,N(T)\int_0^1 \exp \bigs(2\pi i \omega \Re{\CMcal{P}_v(\underline{\theta})}\bigs)\mathop{d\underline{\theta}} -\frac{T}{\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}} \frac{\log q}{q^{\ell/2}}\int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs)\Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}} \\ &\quad+O\bigg(N(T) \frac{\omega}{2^K}\bigg). \end{split} \] \end{lem} \begin{proof} By \eqref{eq:exponential}, \begin{equation}\label{eq:exponential expansion} \begin{split} \sum_{0 < \gamma \leq T} &\exp\bigs(2\pi i \omega\Re{\CMcal{P}_X(\gamma +v)}\bigs) \\ &=\sum_{0\leq k< K}\frac{(2\pi i \omega)^k}{k!} \sum_{0 < \gamma \leq T}\bigs(\Re{\CMcal{P}_X(\gamma +v)}\bigs)^k +O\bigg(\sdfrac{(2\pi \omega)^K}{K!}\sum_{0 < \gamma \leq T} \bigs|\Re{\CMcal{P}_X(\gamma +v)}\bigs|^K\bigg). \end{split} \end{equation} By Lemma \ref{our lemma 3.16}, the $O$-term is \[ \begin{split} &\ll N(T)\frac{(2\pi\omega)^K}{K!}(cK\Psi)^{K/2} \\ &\ll N(T) \omega \frac{(2\pi e)^K\omega^{K-1} }{K^K}(cK\Psi)^{K/2} \ll N(T) \omega \bigg(\sdfrac{c\, \Omega\sqrt{\Psi}}{\sqrt{K}}\bigg)^{K} \ll N(T)\frac{\omega}{2^{K}}, \end{split} \] where the final estimate follows from \eqref{Omega K}. Now we apply Lemma \ref{lemma 3.4} to the main term on the right-hand side of \eqref{eq:exponential expansion} and obtain \begin{equation}\label{eq:fourier} \begin{split} \sum_{0\leq k< K}\frac{(2\pi i \omega)^k}{k!}\sum_{0 < \gamma \leq T}\bigs(\Re\CMcal{P}_X(\gamma +v)\bigs)^k &=N(T)\sum_{0\leq k< K}\frac{(2\pi i \omega)^k}{k!}\int_0^1 \bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \mathop{d\underline{\theta}} \\ &-\frac{T}{\pi} \sum_{0\leq k< K}\frac{(2\pi i \omega)^k}{k!} \sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq k}} \frac{\log q}{q^{\ell/2}}\int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \Re e^{2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}} \\ &\quad +O\bigg(\sqrt T\log^2{T}\sum_{1\leq k< K}\frac{(2\pi \omega)^k}{k!}(ck)^k\bigg). \\ &= A_1+ A_2+A_3 . 
\end{split} \end{equation} Note that the sum in the error term starts from $k=1$ because we can assume that there is no error term in \eqref{eq:lemma 3.4 real} when $k=0.$ We start by treating $A_1.$ Via a reverse use of \eqref{eq:exponential}, we retrieve the Fourier transform of $\Re\CMcal{P}_v(\underline{\theta}).$ \[ \begin{split} A_1 = N(T)\sum_{0\leq k< K}&\frac{(2\pi i \omega)^k}{k!}\int_0^1 \bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \mathop{d\underline{\theta}} \\ &=N(T)\int_0^1 \exp\bigs(2\pi i \omega\Re{\CMcal{P}_v(\underline{\theta})\bigs)} \mathop{d\underline{\theta}} +O\bigg(N(T)\frac{(2\pi\omega)^K}{K!}\int_0^1\bigs|\Re{\CMcal{P}_v(\underline{\theta})}\bigs|^K \mathop{d\underline{\theta}} \bigg). \end{split} \] By Lemma \ref{Tsang lemma 3.4}, the error term here is \[ \ll N(T)\frac{(2\pi \omega)^K}{K!}(cK\Psi)^{K/2}, \] which is \[ \ll N(T) \frac{\omega(2\pi e)^K\omega^{K-1}}{K^K} (cK\Psi)^{K/2} \ll N(T) \omega \bigg(\frac{c\, \Omega\sqrt{\Psi}}{\sqrt{K}}\bigg)^K \ll N(T) \frac{\omega}{2^{K}} \] by \eqref{Omega K}. Hence, \begin{equation}\label{A1 fourier} A_1 =N(T)\int_0^1 \exp\bigs(2\pi i \omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \mathop{d\underline{\theta}} +O\bigg(N(T) \frac{\omega}{2^{K}}\bigg). \end{equation} We proceed to the estimation of $A_2$ in \eqref{eq:fourier}. First, observe that we may extend the inner sum in $A_2$ from $1\leq \ell \leq k$ to $1\leq \ell \leq K.$ To see this, note that by the binomial theorem \[ \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k e^{2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}} = \frac{1}{2^k}\int_0^1 \sum_{j=0}^k \binom{k}{j} \bigg(\sum_{p\leq X^2}\frac{e^{2\pi i\theta_p}}{p^{1/2+iv}}\bigg)^{j} \bigg(\overline{\sum_{p\leq X^2}\frac{e^{2\pi i\theta_p}}{p^{1/2+iv}}}\bigg)^{k-j}e^{2\pi i\ell\theta_q} \mathop{d{\underline{\theta}}}. \] By comparing the number of primes when this is multiplied out and using \eqref{orthogonality}, it is clear that unless $k-j=j+\ell$ for some $0\leq j\leq k,$ the right-hand side equals zero. This implies $1\leq \ell \leq k.$ The same is true for the integral \[ \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k e^{-2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}}. \] Thus, the integral in $A_2$ is $0$ for $\ell> k.$ We may therefore extend the sum over $\ell$ up to $K$ in $A_2,$ and write \begin{equation}\label{A2 fourier} \begin{split} A_2 =&-\frac{T}{\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}} \frac{\log q}{q^{\ell/2}} \sum_{0\leq k< K}\frac{(2\pi i \omega)^k}{k!} \int_0^1 \bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}} \\ =& -\frac{T}{\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}} \frac{\log q}{q^{\ell/2}} \int_0^1 \bigg( \sum_{0\leq k< K} \frac{\bigs(2\pi i \omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k}{k!} \bigg) \Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}} \\ =& -\frac{T}{\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}} \frac{\log q}{q^{\ell/2}}M_\ell (q), \end{split} \end{equation} say. Our goal now is to show that we may replace the sum over $k$ in $M_\ell(q)$ by $\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs)$ at the cost of a very small error term. 
We do this in different ways for $\ell=1$ and $\ell>1.$ Suppose first that $\ell>1.$ Then by \eqref{eq:exponential} \[ \begin{split} \sum_{0\leq k< K}\frac{(2\pi i \omega)^k}{k!} \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \Re e^{2\pi i\ell\theta_q}\mathop{d{\underline{\theta}}} =&\int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}} \\ & + O\bigg(\frac{(2\pi \omega)^K}{K!} \int_0^1\bigs|\Re{\CMcal{P}_v(\underline{\theta})}\bigs|^K \mathop{d{\underline{\theta}}}\bigg). \end{split} \] By Lemma \ref{Tsang lemma 3.4} and an estimate similar to some above, the error term is \[ \ll \frac{(2\pi\omega\sqrt{cK\Psi})^K}{K!} \ll \omega \bigg(\frac{c\Omega^2 \Psi}{K}\bigg)^K \ll \frac{\omega}{2^K}. \] Thus for $\ell>1,$ \begin{equation}\label{M_ell} M_\ell(q)= \int_0^1 \exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}} +O\bigg(\frac{\omega}{2^K}\bigg). \end{equation} Now assume that $\ell=1.$ We write \[ \begin{split} \int_0^1 \exp &\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i \theta_q}\mathop{d\underline{\theta}} \\ =&\sum_{0\leq k< K}\frac{(2\pi i \omega)^k}{k!} \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \Re e^{2\pi i \theta_q}\mathop{d{\underline{\theta}}} +\sum_{k\geq K}\frac{(2\pi i \omega)^k}{k!} \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k \Re e^{2\pi i \theta_q}\mathop{d{\underline{\theta}}} . \end{split} \] The first term is $M_1(q)$ so we may write this as \begin{equation}\label{M1+R1} \begin{split} \int_0^1 \exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i \theta_q}\mathop{d\underline{\theta}} = M_1(q) + R_1(q). \end{split} \end{equation} We now estimate $R_1(q)$ which we rewrite as \begin{equation}\label{R_1(q)} \begin{split} R_1(q)=& \, \frac12 \sum_{k\geq K}\frac{(2\pi i \omega)^k}{k!} \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k e^{2\pi i\theta_q}\mathop{d{\underline{\theta}}} + \frac12 \sum_{k\geq K}\frac{(2\pi i \omega)^k}{k!} \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k e^{-2\pi i\theta_q}\mathop{d{\underline{\theta}}}\\ =& \, R_1^{'}(q)+R_1{''}(q). \end{split} \end{equation} By the binomial theorem, the integral in $R_1^{'}(q)$ is \[ \frac{1}{2^k} \sum_{j=0}^k \binom{k}{j} \int_0^1 \bigg(\sum_{p\leq X^2}\frac{e^{2\pi i\theta_p}}{p^{1/2+iv}}\bigg)^{j} \bigg(\overline{\sum_{p\leq X^2}\frac{e^{2\pi i\theta_p}}{p^{1/2+iv}}}\bigg)^{k-j}e^{2\pi i\theta_q} \mathop{d{\underline{\theta}}}. \] By \eqref{orthogonality}, the integral on the right-hand side is $0$ unless $j+1=k-j,$ that is, $j=\frac{k-1}{2}.$ In that case, we have $q_1\dots q_{(k+1)/2}=p_1\dots p_{(k-1)/2}q$ for some primes $p_1,\dots,p_{(k-1)/2}, q_1,\dots, q_{(k+1)/2}.$ We then can see that the above is equal to \[ \frac{1}{q^{1/2-iv}} \frac{1}{2^k}\binom{k}{(k-1)/2} \sum_{p_1,\dots,p_{(k-1)/2} \leq X^2}\frac{a_{(k-1)/2}(p_1\dots p_{(k-1)/2}) a_{(k+1)/2}( p_1\dots p_{(k-1)/2}q)}{p_1\dots p_{(k-1)/2}}, \] where $a_r(p_1p_2\dots p_r)$ denotes the number of permutations of $p_1,p_2,\dots, p_r.$ Note that \[ a_{(k+1)/2}( p_1\dots p_{(k-1)/2}q) \leq \frac{(k+1)a_{(k-1)/2}( p_1\dots p_{(k-1)/2})}{2}, \] and also \[ \begin{split} \sum_{p_1,\dots,p_{(k-1)/2} \leq X^2}\frac{a^2_{(k-1)/2}(p_1\dots p_{(k-1)/2})}{p_1\dots p_{(k-1)/2}} \leq &\, \bigs((k-1)/2\bigs)! 
\sum_{p_1,\dots,p_{(k-1)/2} \leq X^2}\frac{a_{(k-1)/2}(p_1\dots p_{(k-1)/2})}{p_1\dots p_{(k-1)/2}} \\ =&\, \bigs((k-1)/2\bigs)! \Big(\sum_{p\leq X^2}\frac{1}{p}\Big)^{(k-1)/2} \leq (ck\Psi)^{\frac{k-1}{2}}. \end{split} \] This shows that \[ \int_0^1\bigs(\Re{\CMcal{P}_v(\underline{\theta})}\bigs)^k e^{2\pi i\theta_q}\mathop{d{\underline{\theta}}} \ll \frac{1}{\sqrt q} k(ck\Psi)^{\frac{k-1}{2}}. \] Thus, using \eqref{Omega K}, we see that \[ \begin{split} R_1^{'}(q) \ll &\, \frac1{\sqrt q} \sum_{k\geq K} k \frac{(2\pi \omega )^k}{k!}(ck\Psi)^{\frac{k-1}{2}} \ll \, \frac{ \omega }{\sqrt q} \sum_{k\geq K} \frac{(2\pi c\, \omega \sqrt{k\Psi})^{k-1}}{(k-1)!} \\ \ll & \frac{ \omega }{\sqrt q} \sum_{k\geq K} \Big(\frac{c\,\Omega\sqrt{ \Psi} }{\sqrt{k} }\Big)^{k-1} \ll \frac{\omega}{2^K \sqrt q}. \end{split} \] In a similar way it follows that $R_1^{''}(q) \ll \omega/(2^K\sqrt q).$ Hence by \eqref{R_1(q)}, $ R_1(q) \ll \omega/(2^K\sqrt q).$ By \eqref{M1+R1} we therefore have \[ M_1(q) =\int_0^1 \exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i \theta_q}\mathop{d\underline{\theta}} +O\bigg(\frac{\omega}{2^K \sqrt q}\bigg). \] Combining this with \eqref{M_ell} in \eqref{A2 fourier}, we find that \[ \begin{split} A_2 =&-\frac{T}{\pi} \sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}} \frac{\log q}{q^{\ell/2}} \; \int_0^1 \exp{ \bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs)}\Re e^{2\pi i \ell \theta_q}\mathop{d\underline{\theta}} \\ & + O\bigg(\frac{T\omega}{2^K} \sum_{\substack{q\leq X^2,\\ 2\leq\ell\leq K}} \frac{\log q}{q^{\ell/2}}\bigg) +O\bigg(\frac{T\omega}{2^K} \sum_{q\leq X^2} \frac{\log q}{q}\bigg) . \end{split} \] The sums in the error terms here are both $\ll \log X \ll \log T,$ so we see that \begin{equation}\label{A2 fourier final} \begin{split} A_2 =&-\frac{T}{\pi} \sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}} \frac{\log q}{q^{\ell/2}} \; \int_0^1 \exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i \ell \theta_q}\mathop{d\underline{\theta}} +O\bigg(\frac{N(T)\omega}{2^K}\bigg) . \end{split} \end{equation} Next, we estimate $A_3$ in \eqref{eq:fourier}. Clearly \[ \begin{split} A_3 \ll \sqrt{T}\log^2{T} \sum_{1\leq k< K}\frac{(2\pi\omega)^k}{k!}(ck)^k &\ll \omega \, \Omega^K \sqrt{T}\log^2{T} \, \sum_{1\leq k< K}\frac{(ck)^k}{k!} \\ &\ll \omega \, \Omega^K\sqrt{T}\log^2{T} \, e^{cK}. \end{split} \] By \eqref{Omega K}, we see that $K\log(2e^c\Omega)\leq \sdfrac 1{10} \log T,$ say, hence $(2e^c\Omega)^K\ll T^{\tfrac{1}{10}}.$ This gives \[ A_3 \ll \frac{N(T)\omega}{2^K}. \] Combining this with \eqref{A1 fourier} and \eqref{A2 fourier final} in \eqref{eq:fourier}, we obtain the assertion of the lemma. \end{proof} \subsection{Beurling-Selberg Functions} In this section, we introduce the Beurling-Selberg functions, which allow us to convert information about Fourier transforms into information about distribution functions. Let $a$ and $b$ be real numbers with $a< b.$ The indicator function $\mathbbm{1}_{[a,b]}$ is defined as \[ \mathbbm{1}_{[a,b]}(x)= \begin{cases} 1 &\text{if} \quad x\in [a, b], \\ 0 &\text{otherwise}. \end{cases} \] It can also be written as \begin{equation}\label{chi} \mathbbm{1}_{[a,b]}(x)=\frac12\operatorname{sgn}(x-a)-\frac12\operatorname{sgn}(x-b)+\frac{\delta_a(x)}{2}+\frac{\delta_b(x)}{2}.
\end{equation} Here, $\operatorname{sgn}$ denotes the signum function given by \[ \operatorname{sgn}(x)= \begin{cases} 1 &\text{if} \quad x>0, \\ -1 &\text{if} \quad x<0, \\ 0 &\text{if} \quad x=0, \end{cases} \] and $\delta_a(x)$ equals $1$ when $x=a$ and $0$ otherwise. For a parameter $\Omega > 0,$ a Beurling-Selberg function is given by \begin{equation}\label{eq:F} F_{\Omega}(x) =\Im \int_0^\Omega G\Big(\frac \omega \Omega\Big)\exp{(2\pi ix\omega)}\frac{\mathop{d\omega}}{\omega}, \end{equation} where \[ G(u)= \frac{2u}{\pi}+2u(1-u)\cot{(\pi u)} \quad \text{for} \quad u\in[0, 1]. \] It is easily seen that $G(u)$ is a positive-valued, differentiable function on $[0, 1].$ We also know that \begin{equation}\label{eq:sgn} \operatorname{sgn}{(x)} =F_{\Omega}(x) +O\bigg(\sdfrac{\sin^2(\pi\Omega x)}{(\pi\Omega x)^2}\bigg). \end{equation} A proof of this result can be found in \cite[pp. 26--29]{Tsang}. Using \eqref{chi} and \eqref{eq:sgn}, the indicator function can be approximated by a differentiable function: \begin{equation}\label{eq:chi F} \mathbbm{1}_{[a,b]}(x) =\frac12F_\Omega(x-a)-\frac12F_\Omega(x-b) +O\bigg(\sdfrac{\sin^2(\pi\Omega(x-a))}{(\pi\Omega(x-a))^2}\bigg) +O\bigg(\sdfrac{\sin^2(\pi\Omega(x-b))}{(\pi\Omega(x-b))^2}\bigg). \end{equation} (The terms $\delta_a(x)/2$ and $\delta_b(x)/2$ from \eqref{chi} are absorbed by the $O$-terms, since $\sin^2(\pi\Omega(x-a))/(\pi\Omega(x-a))^2$ equals $1$ at $x=a.$) \subsection{The Distribution of $\Re{\CMcal{P}_X(\gamma +v)}$}\label{subsection distr of P} The aim of this section is to prove the following result. \begin{prop}\label{distr of chi Re P v} Let $\Psi(T)=\sum_{p\leq T}\frac{1}{p}$ and $\displaystyle X = T^{\frac{1}{16\Psi(T)^6}}.$ Then \[ \frac{1}{N(T)}\sum_{0<\gamma \leq T}\mathbbm{1}_{[a,b]}\bigs(\Re\CMcal{P}_X(\gamma +v)\bigs) = \frac{1}{\sqrt{2\pi}}\int_{a/\sqrt{\tfrac12\log\log T}}^{b/\sqrt{\tfrac12\log\log T}} e^{-x^2/2}\mathop{dx} +O\bigg(\sdfrac{\log{\Psi(T)}}{\Psi(T)}\bigg). \] \end{prop} \begin{proof} We will prove that \begin{equation}\label{eq:sum F} \begin{split} \mathcal F :=\sum_{0 < \gamma \leq T}F_\Omega\bigs(\Re\CMcal{P}_X(\gamma +v)-a\bigs) =\frac{N(T)}{\sqrt{2\pi}}\int_{-\infty}^\infty\operatorname{sgn}{\bigg(x-\sdfrac{a}{\sqrt{\Psi/2}}\bigg)}e^{-x^2/2}\mathop{dx} +O\bigg(\frac{N(T)}{\Psi^2}\bigg), \end{split} \end{equation} and \begin{equation}\label{eq:sum sin} \mathcal S :=\sum_{0 < \gamma \leq T}\frac{\sin^2\big(\pi \Omega(\Re{\CMcal{P}_X(\gamma +v)}-a)\big)}{\big(\pi \Omega(\Re{\CMcal{P}_X(\gamma +v)}-a)\big)^2} =O\bigg( \frac{N(T)}{\Psi^2}\bigg), \end{equation} and that the same results hold when $a$ is replaced by $b.$ Then by \eqref{eq:chi F}, it will follow that \[ \sum_{0<\gamma \leq T}\mathbbm{1}_{[a,b]}\bigs(\Re\CMcal{P}_X(\gamma +v)\bigs) =\frac{N(T)}{\sqrt{2\pi}}\int_{a/\sqrt{\Psi/2}}^{b/\sqrt{\Psi/2}} e^{-x^2/2}\mathop{dx} +O\bigg(\frac{N(T)}{\Psi^2}\bigg). \] By our choice of $X,$ we have \[ \Psi=\log\log T+O\bigs(\log \Psi(T)\bigs). \] Thus the right-hand side is \[ \frac{N(T)}{\sqrt{2\pi}} \int_{a/\sqrt{\tfrac12\log\log T}}^{b/\sqrt{\tfrac12\log\log T}} e^{-x^2/2}\mathop{dx}+O\bigg(N(T)\sdfrac{\log{\Psi(T)}}{\Psi(T)}\bigg). \] Since $ \frac{N(T)}{\Psi^2} \ll N(T)\frac{\log{\Psi(T)}}{\Psi(T)}, $ the proof of the proposition will then be complete. We start with the proof of \eqref{eq:sum sin}. For this, we take advantage of the identity \[ \frac{\sin^2(\pi \Omega x)}{(\pi \Omega x)^2} =\frac{2}{\Omega^2}\int_0^{\Omega}(\Omega-\omega) \cos(2\pi x\omega)\mathop{d\omega}, \] and find that \[ \mathcal S =\frac{2}{\Omega^2}\int_0^{\Omega}(\Omega-\omega) \Re{\sum_{0 < \gamma \leq T}\exp\bigs(2\pi i\omega(\Re\CMcal{P}_X(\gamma +v)-a)\bigs)}\mathop{d\omega}.
\] By Lemma \ref{fourier}, we then see that \begin{align}\label{sin term} \mathcal S \ll & \, \frac{N(T)}{\Omega^2}\int_0^\Omega (\Omega-\omega)\bigg|\int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs)\mathop{d\underline{\theta}}\bigg| \mathop{d\omega} \notag \\ & +\frac{T}{\Omega^2}\sum_{\substack{q\leq X^2,\\ 1\leq\ell \leq K}}\frac{\log q}{q^{\ell/2}}\int_0^\Omega (\Omega-\omega)\bigg| \int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}} \bigg|\mathop{d\omega} \notag \\ &+ \, N(T)\frac{\Omega}{2^K} \notag \\ =& \, \mathcal S_1+\mathcal S_2+O\bigg(N(T)\frac{\Omega}{2^K}\bigg), \end{align} say. We first consider $\mathcal S_1.$ By the definition of $\Re\CMcal{P}_v(\underline{\theta})$ and the definition of the Bessel function of order $0,$ that is, \[ J_0(z)=\int_0^1\exp\bigs(iz\cos(2\pi\theta)\bigs)\mathop{d\theta}, \] it follows that \begin{equation}\label{exp int 1} \begin{split} \int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs)\mathop{d\underline{\theta}} &=\int_0^1\prod_{p\leq X^2}\exp\bigg(\frac{2\pi i\omega \cos(2\pi\theta_p-v\log p)}{\sqrt{p}}\bigg)\mathop{d\underline{\theta}} \\ &=\prod_{p\leq X^2}J_0\Big(\frac{2\pi \omega}{\sqrt p}\Big). \end{split} \end{equation} Then by Lemma 4.2 in \cite{Tsang}, we find that \begin{equation}\label{eq:Bessel} \int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs)\mathop{d\underline{\theta}} \ll e^{ -c\Psi\omega^2}. \end{equation} Thus \[ \mathcal S_1 \ll \frac{N(T)}{\Omega^2}\int_0^\Omega (\Omega-\omega)e^{-c\Psi\omega^2}\mathop{d\omega} \leq \frac{N(T)}{\Omega }\int_0^\infty e^{-c\Psi\omega^2}\mathop{d\omega} \ll \frac{N(T)}{\Omega \sqrt \Psi} . \] Now we estimate $\mathcal S_2.$ Let \[ \mathcal J = \mathcal J (q, \ell, \omega) =\int_0^1\exp\bigs(2\pi i\bigs(\omega\Re{\CMcal{P}_v(\underline{\theta})+\ell\theta_q}\bigs)\bigs) \mathop{d\underline{\theta}}. \] The integral with respect to $\underline{\theta}$ in $\mathcal S_2$ is \[ \frac{\mathcal J (q, \ell, \omega)+\mathcal J (q, -\ell, \omega)}{2}= \int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}}. \] We start with estimating $\mathcal{J}.$ Using the independence of the variables $\{\theta_p\},$ we may separate the term corresponding to the prime $q$ and write \begin{align*} \mathcal J =\int_0^1 &\exp\bigg(2\pi i\Big(\omega\frac{\cos(2\pi\theta_q-v\log q)}{\sqrt q}+\ell\theta_q\Big)\bigg)\mathop{d{\theta_q}} \\ &\cdot \prod_{\substack{p\leq X^2\\ p\neq q}}\int_0^1 \exp\Big(2\pi i\omega\frac{\cos(2\pi\theta_p-v\log p)}{\sqrt p}\Big)\mathop{d{\theta_p}}. \end{align*} Here, we recall the definition of the Bessel function of the first kind and of order $\ell$: \begin{equation}\label{Jell integral} J_\ell(z) =(-i)^\ell\int_0^1 \exp\bigg(2\pi i \Big(\sdfrac{z\cos(2\pi\theta)}{2\pi}+\ell\theta\Big)\bigg)\mathop{d\theta}. \end{equation} Then in the above expression for $\mathcal J,$ each factor corresponding to a prime $p\neq q$ is $\displaystyle J_0\Big(\sdfrac{2\pi \omega}{\sqrt p}\Big).$ For the remaining term corresponding to the prime $q,$ we apply the change of variable $\theta_q \to \theta_q+\sdfrac{v\log q}{2\pi},$ and find that it equals \[ \int_{\tfrac{v\log q}{2\pi}}^{1+\tfrac{v\log q}{2\pi}} \exp\bigg(2\pi i\Big(\omega\frac{\cos(2\pi\theta_q)}{\sqrt q}+\ell\Big(\theta_q+\frac{v\log q}{2\pi}\Big)\Big)\bigg) \mathop{d{\theta_q}}. 
\] The integrand has period $1,$ so we see from \eqref{Jell integral} that this equals \[ (iq^{iv})^\ell J_{\ell}\Big(\frac{2\pi\omega}{\sqrt q}\Big). \] Hence \begin{equation}\label{J} \mathcal J =\mathcal J (q,\ell, \omega) =(iq^{iv})^\ell J_{\ell}\Big(\frac{2\pi\omega}{\sqrt q}\Big)\prod_{\substack{p\leq X^2,\\p\neq q}}J_0\Big(\frac{2\pi\omega}{\sqrt p}\Big). \end{equation} One can similarly prove that \begin{equation}\label{J minus ell} \mathcal J (q,-\ell, \omega) =(iq^{-iv})^\ell J_{\ell}\Big(\frac{2\pi\omega}{\sqrt q}\Big)\prod_{\substack{p\leq X^2,\\p\neq q}}J_0\Big(\frac{2\pi\omega}{\sqrt p}\Big). \end{equation} We estimate $\mathcal J$ in two different ways, depending on the size of $\omega.$ First, we note that if $|z| \leq 1$ and $\ell$ is a nonnegative integer, then \begin{equation}\label{Bessel estimate} J_\ell(2z)=\frac{z^\ell}{\ell!}e^{\textstyle{-\frac{z^2}{\ell+1}}}\, \Big(1+O(z^4)\Big). \end{equation} This can be proven by using the series expansion (see \cite[p. 15]{Watson}) \begin{equation}\label{eq:series for Bessel} J_\ell(z)=\sum_{j=0}^\infty \frac{(-1)^j}{j!(\ell+j)!}\Big(\frac{z}{2}\Big)^{2j+\ell}. \end{equation} From the above series, we see that \eqref{Bessel estimate} holds for $z=0.$ For $z\neq 0,$ together with the series expansion of $e^{z^2/(\ell+1)},$ we have \begin{align*} J_\ell(2z)\frac{\ell!}{z^\ell}e^{z^2/(\ell+1)} &= \bigg(\sum_{j=0}^\infty \frac{(-1)^j \ell !}{j!(\ell+j)!}z^{2j}\bigg) \bigg( \sum_{k=0}^\infty \frac{1}{k!(\ell+1)^k}z^{2k} \bigg) \\ &= \sum_{j=0}^\infty \sum_{k=0}^\infty \frac{(-1)^j \ell !}{j!(\ell+j)! k!(\ell+1)^k} z^{2j+2k}. \end{align*} In the double series, we see that the term for $(j,k)=(0, 0)$ is $1,$ and that the term for $(j,k)=(0, 1)$ cancels the term for $(j,k)=(1, 0).$ Every remaining term has $2j+2k\geq 4,$ so for $|z| \leq 1$ the double series equals $1$ plus an error whose modulus is at most \[ z^4 \sum_{j=0}^\infty \frac{\ell !}{j!(\ell+j)!} \sum_{k=0}^\infty \frac{1}{k!(\ell+1)^k} \ll z^4. \] This proves \eqref{Bessel estimate}. Now, note that if $\omega \in[0,\sqrt 2/\pi],$ then $\frac{\pi\omega}{\sqrt{p}}\leq 1$ for all primes $p,$ and we can apply \eqref{Bessel estimate} to estimate the Bessel functions in \eqref{J}. This gives \begin{align*} \mathcal J &=(iq^{iv})^\ell \frac{(\pi\omega)^\ell}{\ell!q^{\ell/2}}e^{-\tfrac{\pi^2 \omega^2}{q(\ell+1)}}\Big(1+O\Big(\frac{\omega^4}{q^2}\Big)\Big) \prod_{\substack{p\leq X^2,\\p\neq q}} e^{-\tfrac{\pi^2 \omega^2}{p}}\Big(1+O\Big(\frac{\omega^4}{p^2}\Big)\Big)\\ &= \frac{(iq^{iv}\pi\omega)^\ell}{\ell!q^{\ell/2}}e^{-\pi^2\omega^2\left(\Psi-\tfrac{\ell}{(\ell+1)q}\right)}\Big(1+O\bigs(\omega^{4}\,\bigs)\Big). \end{align*} Introducing the notation $\Psi_q=\Psi-\sdfrac{\ell}{(\ell+1)q},$ we may write \begin{equation}\label{product Bessels 1} \mathcal J =\frac{(iq^{iv}\pi\omega)^\ell}{\ell!q^{\ell/2}}e^{-\pi^2\Psi_q\omega^2}\Big(1+O\bigs(\omega^{4}\,\bigs)\Big) \quad \text{for} \quad \omega \in[0,\sqrt{2}/\pi].
\end{equation} Now suppose that $\omega\in [\sqrt 2/\pi, \Omega].$ Then from the proof of Lemma 4.2 in \cite{Tsang}, it is not hard to see that \[ \prod_{\substack{p\leq X^2, \\p\neq q}} J_0\Big(\frac{2\pi\omega}{\sqrt p}\Big) \ll e^{-c\Psi\omega^2} \] for some positive constant $c.$ For the Bessel function corresponding to the prime $q$ in \eqref{J}, we use the series expansion in \eqref{eq:series for Bessel} to obtain \begin{align*} J_\ell \Big(\frac{2\pi\omega}{\sqrt q}\Big) &=\sum_{j=0}^\infty \frac{(-1)^j}{j!(j+\ell)!}\Big(\frac{\pi\omega}{\sqrt q}\Big)^{2j+\ell} \\ &\leq \Big(\frac{\pi\omega}{\sqrt q}\Big)^{\ell}\sum_{j=0}^\infty \frac{(\pi^2\omega^2)^j}{2^j j!(j+\ell)!} \leq \frac{(\pi\omega)^\ell}{\ell !q^{\ell/2}}\sum_{j=0}^\infty \frac{(\sqrt 2\pi\omega)^{2j}}{(2j)!} \leq \frac{(\pi\omega)^\ell}{\ell !q^{\ell/2}} e^{\sqrt 2\pi\omega}. \end{align*} In the next-to-last inequality we have used the bound \begin{equation}\label{binom ineq} 2^j j!(j+\ell)! \geq \frac{(2j)!\ell !}{2^j}, \end{equation} which holds for all integers $j, \ell \geq 0.$ To see this, first note that from $\binom{j+\ell}{j} \geq 1,$ we have $(j+\ell)!\geq j! \ell!.$ Secondly, $\binom{2j}{j} \leq 2^{2j},$ so $2^j j! \geq \sdfrac{(2j)!}{2^{j}j!}.$ Combining these inequalities, we obtain \eqref{binom ineq}. Then it follows that for $\omega\in [\sqrt{2}/\pi, \Omega]$ \[ \mathcal J \ll \frac{(\pi\omega)^\ell}{\ell !q^{\ell/2}} e^{-c\Psi\omega^2+\sqrt{2}\pi\omega}. \] Combining this with \eqref{product Bessels 1}, we obtain \begin{equation}\label{product Bessels 4} \mathcal J(q, \ell, \omega) \ll \frac{(\pi\omega)^\ell}{\ell !q^{\ell/2}} e^{-c\Psi\omega^2+\sqrt{2}\pi\omega} \, \quad \text{ for } \omega \in[0, \Omega]. \end{equation} By \eqref{J minus ell}, the same bound holds for $ \mathcal J(q, -\ell, \omega).$ We use this bound to estimate $\mathcal S_2$ in \eqref{sin term} as \begin{align*} \mathcal S_2 \ll &\, \frac{T}{\Omega^2}\sum_{1\leq\ell \leq K} \frac{\pi^\ell}{\ell!} \sum_{q\leq X^2} \frac{\log q}{q^{\ell}}\int_0^\Omega (\Omega-\omega)\omega^\ell e^{-c\Psi\omega^2+\sqrt 2\pi\omega}\mathop{d\omega} \\ \ll &\, \frac{T}{\Omega} \sum_{q\leq X^2} \frac{\log q}{q} \int_0^\Omega \bigg( \sum_{1\leq\ell \leq K} \frac{(\pi\omega)^\ell}{\ell!}\bigg) e^{-c\Psi\omega^2+\sqrt 2\pi\omega}\mathop{d\omega} \\ \ll &\, \frac{T}{\Omega}\sum_{q\leq X^2} \frac{\log q}{q} \int_0^\Omega e^{-c\Psi\omega^2+(\sqrt 2+1)\pi\omega}\mathop{d\omega} \ll \frac{T\log X}{\Omega \sqrt \Psi}\ll \frac{N(T)}{\Omega \sqrt \Psi}. \end{align*} Combining our estimates for $\mathcal S_1$ and $\mathcal S_2$ in \eqref{sin term}, we obtain \[ \mathcal S \ll \frac{N(T)}{\Omega \sqrt \Psi} + \frac{N(T) \Omega}{2^K}. \] By \eqref{Omega K}, this is $O\left(\sdfrac{N(T)}{\Psi^2}\right),$ so this proves \eqref{eq:sum sin}. We proceed to prove \eqref{eq:sum F}. Substituting $x=\Re{\CMcal{P}_X(\gamma +v)}-a$ in \eqref{eq:F} and summing both sides of the equation over $\gamma ,$ we obtain \[ \mathcal F =\Im \int_0^ \Omega G\Big(\frac{\omega}{\Omega}\Big)e^{-2\pi i a\omega}\sum_{0 < \gamma \leq T}\exp\bigs(2\pi i\omega\Re\CMcal{P}_X(\gamma +v)\bigs)\frac{\mathop{d\omega}}{\omega}. 
\] By Lemma \ref{fourier}, this becomes \begin{align*} \mathcal F &=N(T)\Im \int_0^\Omega G\Big(\frac{\omega}{\Omega}\Big)e^{-2\pi i a\omega}\int_0^1\exp\bigs(2\pi i\omega\Re \CMcal{P}_v(\underline{\theta})\bigs)\mathop{d\underline{\theta}} \frac{\mathop{d\omega}}{\omega} \\ &-\sdfrac{T}{\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell \leq K}}\sdfrac{\log q}{q^{\ell/2}} \Im \int_0^{\Omega}G\Big(\sdfrac{\omega}{\Omega}\Big)e^{-2\pi ia\omega} \int_0^1\exp\bigs(2\pi i\omega\Re{\CMcal{P}_v(\underline{\theta})}\bigs) \Re e^{2\pi i\ell\theta_q}\mathop{d\underline{\theta}}\frac{\mathop{d\omega}}{\omega} \\ &+O\bigg(\sdfrac{N(T)}{2^K}\int_0^\Omega \Big| G\Big(\frac{\omega}{\Omega}\Big)e^{-2\pi ia\omega}\Big| \mathop{d\omega} \bigg) =\, \mathcal F_1+\mathcal F_2+\mathcal F_3. \end{align*} \noindent We estimate these three terms in turn. By \eqref{exp int 1}, \[ \mathcal F_1 = N(T)\Im\int_0^{\Omega}G\Big(\frac{\omega}{\Omega}\Big)e^{-2\pi ia\omega}\prod_{p\leq X^2}J_0\Big(\frac{2\pi\omega}{\sqrt p}\Big)\frac{\mathop{d\omega}}{\omega}. \] The imaginary part of the integral here has been evaluated by Tsang (see~\cite[pp. 34--35]{Tsang}). Using his result, we obtain \[ \mathcal F_1 =\frac{N(T)}{\sqrt{2\pi}} \int_{-\infty}^\infty \operatorname{sgn}\bigg(x-\frac{a}{\sqrt{\Psi/2}}\bigg)e^{-x^2/2}\mathop{dx} +\, O\bigg(\frac{N(T)}{\Psi^2}\bigg). \] To treat $\mathcal F_2,$ recall the notation \[ \mathcal J(q, \ell, \omega)= \int_0^1\exp\bigs(2\pi i\bigs(\omega\Re{\CMcal{P}_v(\underline{\theta})+\ell\theta_q}\bigs)\bigs) \mathop{d\underline{\theta}}. \] Then we may write \begin{align*} \mathcal F_2 =&-\frac{T}{2\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell \leq K}}\frac{\log q}{q^{\ell/2}}\Im\int_0^\Omega G\Big(\frac{\omega}{\Omega}\Big)e^{-2\pi ia\omega} \mathcal J(q, \ell, \omega) \frac{\mathop{d\omega}}{\omega} \\ &-\frac{T}{2\pi}\sum_{\substack{q\leq X^2,\\ 1\leq\ell \leq K}}\frac{\log q}{q^{\ell/2}}\Im\int_0^\Omega G\Big(\frac{\omega}{\Omega}\Big)e^{-2\pi ia\omega} \mathcal J(q, -\ell, \omega)\frac{\mathop{d\omega}}{\omega} = \mathcal F'_2+\mathcal F^{''}_2, \end{align*} say. Now by \eqref{product Bessels 4}, \[ \begin{split} \mathcal F'_2 =&- \frac{T}{2\pi} \sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}}\frac{\log q}{q^{\ell/2}} \; \Im\int_0^\Omega G\Big(\frac{\omega}{\Omega}\Big)e^{-2\pi ia\omega}\, \mathcal J(q, \ell, \omega) \frac{\mathop{d\omega}}{\omega} \\ \ll& \, T \sum_{\substack{q\leq X^2,\\ 1\leq\ell\leq K}} \frac{\pi ^\ell \log q}{\ell! q^{\ell}} \int_0^\Omega G\Big(\frac{\omega}{\Omega}\Big) \omega^{\ell-1} e^{-c\Psi\omega^2+\sqrt 2\pi\omega} \mathop{d\omega}. \end{split} \] Since $G$ is bounded on $[0, 1],$ we see that \begin{align*} \mathcal F'_{2} & \ll \, T \sum_{q\leq X^2}\frac{\log q}{q} \sum_{1\leq\ell \leq K} \frac{\pi^\ell}{\ell!} \int_0^\Omega \omega^{\ell-1} e^{-c\Psi\omega^2+\sqrt 2\pi\omega}\mathop{d\omega} \\ &\ll T \sum_{q\leq X^2}\frac{\log q}{q} \int_0^\Omega \sum_{1 \leq \ell \leq K} \frac{(\pi \omega)^{\ell-1} }{(\ell-1)!} e^{-c\Psi\omega^2+\sqrt 2\pi\omega}\mathop{d\omega} \\ & \ll T \sum_{q\leq X^2}\frac{\log q}{q} \int_0^\Omega e^{-c\Psi\omega^2+(\sqrt 2\pi+\pi)\omega}\mathop{d\omega}. \end{align*} The integrand is bounded and the sum over $q$ is $\ll \log X,$ so by \eqref{Omega K}, \[ \mathcal F'_{2} \ll T\, \Omega \log X \ll \frac{N(T)}{\Psi(T)^4} . \] Similarly, $\mathcal F^{''}_2 \ll \frac{N(T)}{\Psi(T)^4}.$ Finally, since $G$ is a bounded function on $[0, 1],$ \[ \mathcal F_3 \ll \frac{N(T) \Omega}{2^K} \ll \frac{N(T)}{\Psi^2} \] by \eqref{Omega K}. 
Recall that we set $\mathcal F=\sum_{1\leq i \leq 3} \mathcal F_i.$ Combining our estimates for $\mathcal F_1, \mathcal F_2$ and $\mathcal F_3,$ we obtain \eqref{eq:sum F}. \end{proof} \subsection{Completing the Proof of Theorem \ref{distr of Re log zeta}} \label{proof of thm Re log zeta} Let $\displaystyle X=T^{\tfrac{1}{16\Psi(T)^6}}$ where $\Psi(T)=\sum_{p\leq T} p^{-1},$ and $0<u \ll \frac{1}{\log T}$ and $|v| = O\bigs(\tfrac{1}{\log X}\bigs).$ Define \[ r(X,\gamma +v)=\log{|\zeta(\r+z)|}-M_X(\r, z)-\Re\CMcal{P}_X(\gamma +v), \] where $M_X(\r, z)$ is as defined in \eqref{mean} and $\CMcal{P}_X(\gamma +v)$ is as given in \eqref{P defn}. We also set $\displaystyle Y= T^{\tfrac{1}{8k}}$ where we choose $k=\lfloor \log{\Psi(T)}\rfloor .$ We similarly define \[ r(Y,\gamma +v)=\log{|\zeta(\r+z)|}-M_Y(\r, z)-\Re\CMcal{P}_Y(\gamma +v), \] where \[ M_Y(\r, z)=m(\r+iv)\Big(\log\Big(\sdfrac{eu\log Y}{4}\Big)-\sdfrac{u\log Y}{4}\Big) \] and \[ \CMcal{P}_Y(\gamma +v)=\sum_{p\leq Y^2} \frac{1}{p^{1/2+i(\gamma +v)}}. \] Observe that \[ \mathbbm{1}_{[a,b]}\bigs(\Re\CMcal{P}_X(\gamma +v)\bigs) =\mathbbm{1}_{[a,b]}\bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)\bigs) \] unless \[ \bigs| r(X,\gamma +v)\bigs| > \bigs|\Re\CMcal{P}_X(\gamma +v)-a\bigs| \quad \text{or}\quad \bigs| r(X,\gamma +v)\bigs| > \bigs|\Re\CMcal{P}_X(\gamma +v)-b\bigs|. \] To estimate the number of such exceptional $\gamma ,$ we define the set \[ A_a=\Big\{0< \gamma \leq T : \bigs|r(X,\gamma +v)\bigs| \mkern9mu > \mkern9mu \bigs|\Re\CMcal{P}_X(\gamma +v)-a\bigs|\Big\}, \] and a similar set $A_b.$ Let $\gamma \in A_a.$ Trivially, for any positive constant $C$ we either have $\bigs| r(X,\gamma +v)\bigs| \leq 2Ck^2$ or $\bigs|r(X,\gamma +v)\bigs| > 2Ck^2.$ By the definition of $A_a,$ the first case implies that $\bigs|\Re\CMcal{P}_X(\gamma +v)-a\bigs| < 2Ck^2.$ Then by Proposition \ref{distr of chi Re P v} and our choice of $k,$ \[ \#\Big\{\gamma \in A_a : a-2Ck^2 < \Re\CMcal{P}_X(\gamma +v) < a+2Ck^2\Big\} \ll N(T)\sdfrac{\log^2 {\Psi(T)}}{\sqrt{\Psi(T)}}. \] On the other hand, by Chebyshev's inequality, \begin{equation}\label{Chebyshev} \#\Big\{\gamma \in A_a:\bigs| r(X,\gamma +v)\bigs| > 2Ck^2\Big\} \leq \frac{1}{(2Ck^2)^{2k}}\sum_{0<\gamma \leq T} \bigs| r(X,\gamma +v)\bigs|^{2k}. \end{equation} Hence, we need to estimate moments of $r(X,\gamma +v).$ Since $X < Y,$ \begin{align*} & r(X,\gamma +v)\\ = &\, r(Y,\gamma +v)+\bigs(M_Y(\r,z)-M_X(\r,z)\bigs)+\bigs(\Re\CMcal{P}_Y(\gamma +v)-\Re\CMcal{P}_X(\gamma +v)\bigs)\\ =&\, r(Y,\gamma +v) +m(\r+iv)\Big(\log\Big(\sdfrac{\log Y}{\log X}\Big)-u\sdfrac{\log(Y/X)}{4}\Big) +\Re\sum_{X^2 < p \leq Y^2}\frac{1}{p^{1/2+i(\gamma +v)}}\\ =&\, A_1+A_2+A_3 , \end{align*} say. Then by \eqref{convexity}, we have \begin{equation}\label{moment of r} \sum_{0<\gamma \leq T}\bigs|r(X,\gamma +v)\bigs|^{2k} \ll c^k \bigg( \sum_{0<\gamma \leq T}\bigs|A_1\bigs|^{2k} + \sum_{0<\gamma \leq T}\bigs|A_2\bigs|^{2k} + \sum_{0<\gamma \leq T}\bigs|A_3\bigs|^{2k}\bigg). \end{equation} Here, Proposition \ref{Re log zeta error} gives \begin{equation}\label{moment of A1 prop} \sum_{0<\gamma \leq T} \bigs|A_1\bigs|^{2k} \ll (ck)^{4k} N(T). 
\end{equation} For the $(2k)$th moment of $A_2,$ note that \begin{equation}\label{moment of m} \begin{split} A_2&= m(\r+iv)\Big(\log\Big(\sdfrac{16\Psi(T)^6}{8k}\Big)-u\sdfrac{\log(Y/X)}{4}\Big) \\ &\ll m(\r+iv)\bigs(\log k+\log\log\log T \bigs) \ll k\, m(\r+iv), \end{split} \end{equation} where we used our choice for $k,$ the bound $u\log(Y/X)\ll 1,$ and Mertens' theorem, $\Psi(T) = \log\log T+O(1).$ Thus, it suffices to study the $(2k)$th moment of $m(\r+iv).$ For each positive integer $j,$ let $N_j(T)$ denote the number of zeros $\r$ with its ordinate in $(0, T]$ and with multiplicity $j,$ that is, $m(\r)=j.$ By a result of Korolev (see Theorem A in~\cite{Korolev}), it is known that \[ N_j(T) \ll e^{-cj}N(T), \] where $c$ is an absolute constant. Clearly, $N(T+|v|)\ll N(T)$ since $|v| \ll \sdfrac{1}{\log X}.$ Thus, the number of zeros of the form $\r+iv$ with $\gamma \in(0, T]$ and $m(\r+iv)=j$ is \[ \ll e^{-cj}N(T). \] Then by \eqref{moment of m}, \[ \sum_{0<\gamma \leq T} \bigs|A_2\bigs|^{2k} \ll (ck)^{2k} \sum_{0<\gamma \leq T} \bigs| m(\r+iv)\bigs|^{2k} \ll (ck)^{2k}\sum_{j=1}^\infty \frac{j^{2k}}{e^{c j}}\, N(T). \] In a similar way to our estimation of the series in \eqref{eq:proof eta}, the above series can be seen to be $\ll (ck)^{2k}.$ Hence \begin{equation}\label{moment of m A2} \sum_{0<\gamma \leq T} \bigs|A_2\bigs|^{2k} \ll (ck)^{4k} N(T). \end{equation} Finally, we estimate moments of $A_3.$ We easily have \[ \sum_{0<\gamma \leq T} \bigs|A_3\bigs|^{2k} \ll \sum_{0<\gamma \leq T}\Big|\sum_{X^2<p\leq Y^2} \frac{1}{p^{1/2+i(\gamma +v)}}\Big|^{2k}. \] By Lemma \ref{Soundmomentlemma3}, the right-hand side is \[ \ll k! N(T)\Big( \sum_{X^2< p\leq Y^2} \frac{1}{p}\Big)^k. \] By Mertens' theorem and our choices for $X$ and $Y,$ this is \[ (ck)^k N(T) \bigs(\log(\log Y/\log X) \bigs)^k \ll (ck)^k N(T) (\log k+\log\log\log T)^k, \] and so \[ \sum_{0<\gamma \leq T} \bigs|A_3\bigs|^{2k} \ll (ck)^{2k} N(T). \] Together with \eqref{moment of A1 prop} and \eqref{moment of m A2}, we substitute this into \eqref{moment of r} and obtain \[ \sum_{0<\gamma \leq T}\bigs| r(X,\gamma +v)\bigs|^{2k} \ll (ck)^{4k} N(T). \] We now go back to \eqref{Chebyshev}, which, together with the above estimate, implies \[ \#\Big\{\gamma \in A_a:\bigs| r(X,\gamma +v)\bigs| > 2Ck^2\Big\} \ll \frac{(ck)^{4k}}{(2Ck^2)^{2k}}N(T) \] for some constant $c>0.$ If we choose $C$ such that $C\geq c^2,$ then \[ \#\Big\{\gamma \in A_a:\bigs|r(X,\gamma +v)\bigs| > 2Ck^2 \Big\} \leq \frac{N(T)}{4^k} \ll N(T)\frac{\log^2{\Psi(T)}}{\sqrt{\Psi(T)}}, \] where we used $k=\lfloor \log{\Psi(T)}\rfloor.$ Thus $ \# A_a=O\Big(N(T)\sdfrac{\log^2{\Psi(T)}}{\sqrt{\Psi(T)}}\Big).$ Similarly, $ \# A_b=O\Big(N(T)\sdfrac{\log^2{\Psi(T)}}{\sqrt{\Psi(T)}}\Big).$ This proves that \[ \sum_{0<\gamma \leq T}\mathbbm{1}_{[a,b]}\bigs(\Re\CMcal{P}_X(\gamma +v)\bigs) =\sum_{0<\gamma \leq T}\mathbbm{1}_{[a,b]}\bigs(\log{|\zeta(\r+z)|}-M_X(\r, z)\bigs) +O\bigg(N(T)\frac{\log^2 \Psi(T)}{\sqrt{\Psi(T)}}\bigg). \] With the result of Proposition \ref{distr of chi Re P v}, the left-hand side is \[ \frac{N(T)}{\sqrt{2\pi}}\int_{a/\sqrt{\tfrac12\log\log T}}^{b/\sqrt{\tfrac12\log\log T}} e^{-x^2/2}\mathop{dx} +O\bigg(N(T)\frac{\log\Psi(T)}{\Psi(T)}\bigg). \] By replacing $a$ with $a\sqrt{\tfrac12\log\log T}$ and $b$ with $b\sqrt{\tfrac12\log\log T},$ we obtain \[ \sum_{0<\gamma \leq T}\mathbbm{1}_{[a,b]}\bigg(\frac{\log{|\zeta(\r+z)|}-M_X(\r, z)}{\sqrt{\tfrac12\log\log T}}\bigg) =\frac{N(T)}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(N(T)\frac{\log^2 \Psi(T)}{\sqrt{\Psi(T)}}\bigg).
\] Noting that $\Psi(T)=\log\log T+O(1)$ by Mertens' theorem, we complete the proof of Theorem \ref{distr of Re log zeta}. \begin{rem*} One can see from the above discussion that for $u$ up to size $O\Big( \frac{\log\log\log T}{\log T}\Big),$ we can replace $M_X(\r,z)$ in the statement of Theorem \ref{distr of Re log zeta} by \[ M(\r,z)=m(\r+iv)\Big(\log\Big(\sdfrac{eu\log T}{4}\Big)-\sdfrac{u\log T}{4}\Big). \] \end{rem*} \section{Proof of Theorem~\ref{thm: log zeta'}} \label{proof of thm Re log zeta'} Our proof of Theorem~\ref{thm: log zeta'} is very similar to that of Theorem~\ref{distr of Re log zeta}. We are, however, required to assume that all the zeros of the Riemann zeta-function are simple. Another difference from Theorem~\ref{distr of Re log zeta} is that the error resulting from the spacing between consecutive (distinct) zeros will be estimated under the weaker assumption of Hypothesis $\mathscr{H}_\a$ instead of Montgomery's Pair Correlation Conjecture. We start with the formula in Corollary \ref{Re log zeta`}. This gives \begin{equation}\label{eq:difference} \bigg|\log{\frac{|\zeta^{(m(\r))}(\r)|}{\big((e\log{X})/4\big)^{m(\r)}}}- \Re\CMcal{P}_X(\gamma )\bigg| \leq \sum_{i=1}^{4} r_i(X, \gamma ), \end{equation} where the terms on the right-hand side are given as \begin{align*} r_1(X, \gamma )= \bigg|\sum_{p\leq X^2}\frac{1-w_X(p)}{p^{1/2+i\gamma }}\bigg| \, , & \quad \quad r_2(X, \gamma )=\bigg|\sum_{p\leq X}\frac{w_X(p^2)}{p^{1+2i\gamma }}\bigg|\, , \\[4pt] r_3(X, \gamma )= \frac{1}{\log X} \int_{1/2}^\infty X^{\frac 1 2-\sigma}&\bigg|\sum_{p\leq X^2}\frac{\Lambda_X(p)\log{(Xp)}}{p^{\sigma+i\gamma }}\bigg|\mathop{d\sigma}, \end{align*} and \[ r_4(X, \gamma ) =\bigg(1+\log^{+}\Big(\frac{1}{\eta_{\gamma }\log X }\Big)\bigg)\frac{E(X,\gamma )}{\log X}. \] In the following proposition, we estimate the moments of the difference on the left-hand side of \eqref{eq:difference}. \begin{prop}\label{Re log zeta' error} Assume RH and Hypothesis $\mathscr H_\a$ for some $\a\in(0,1].$ Let $\, T^{\tfrac{\d}{8k}} \leq X\leq T^{\tfrac{1}{8k}},$ where $k$ is a positive integer and $0<\d \leq 1$ is fixed. Then for a constant $A$ depending on $\a$ and $\d,$ we have \[ \sum_{0< \gamma \leq T} \bigg(\log{\frac{|\zeta^{(m(\r))}(\r)|}{\big((e\log{X})/4\big)^{m(\r)}}}-\Re\CMcal{P}_X(\gamma )\bigg)^k \ll (Ak)^{2k}N(T). \] \end{prop} \begin{proof} By \eqref{eq:difference} and then the inequality \eqref{convexity}, \[ \sum_{0< \gamma \leq T} \bigg(\log{\frac{|\zeta^{(m(\r))}(\r)|}{\big((e\log{X})/4\big)^{m(\r)}}}-\Re\CMcal{P}_X(\gamma )\bigg)^k = O\bigg(c^k\sum_{i=1}^{4} \sum_{0< \gamma \leq T} r_i(X, \gamma )^k\bigg). \] The error terms which correspond to $i=1$ and $i=2$ have already been shown to be $\ll (ck)^k N(T)$ in \eqref{1-w} and \eqref{w}, respectively. The third error term is absorbed in this bound by the result of Proposition \ref{error term integral}. It remains to show that \[ \sum_{0 < \gamma \leq T} \bigg(\Big(1+\log^+ \sdfrac{1}{\eta_{\gamma }\log{X}}\Big)\frac{E(X,\gamma )}{\log X}\bigg)^{k} \ll (Ak)^{2k}N(T). \] We will estimate this conditionally on Hypothesis $\mathscr{H}_\a.$ By \eqref{eq:moment of E 2} with $v=0,$ we have \[ \sum_{0 < \gamma \leq T} \big|E(X,\gamma )\big|^{2k}\ll (Dk)^{2k} N(T) (\log X)^{2k} \] for a constant $D$ depending on $\d.$ It suffices to show that for a constant $A$ depending on $\a$ and $\d,$ \begin{equation}\label{eq:moment of eta} \sum_{0 < \gamma \leq T}\Big( 1+\log^+ \sdfrac{1}{\eta_{\gamma }\log{X}} \Big)^{2k} \ll (Ak)^{2k} N(T).
\end{equation} Recall the definition \[ \eta_{\gamma }=\min_{\gamma ' \neq \gamma }|\gamma '-\gamma |. \] In the case when $\displaystyle \eta_{\gamma } >\frac{1}{\log X},$ the term on the left-hand side of \eqref{eq:moment of eta} will be just $1$ and such terms will not contribute more than $N(T)$ to the sum. To consider larger terms, we assume $\displaystyle \eta_{\gamma } \leq \frac{1}{\log X}.$ Then for a nonnegative integer $j$ \[ \frac{e^{-j-1}}{\log X} < \eta_{\gamma } \leq \frac{e^{-j}}{\log X}, \] and so \[ 1+\log^+\sdfrac{1}{\eta_\gamma \log{X}} < j+2 . \] By Hypothesis $\mathscr H_\a$ and the comment immediately following it, the number of such $\gamma $ is at most \[ \#\Big\{0 < \gamma \leq T: \eta_\gamma \leq \sdfrac{e^{-j}}{\log X} \Big\} \ll \min\Big\{ 1, \Big(\sdfrac{e^{-j}\log T}{\log X}\Big)^\a \Big\}N(T). \] Note that $\displaystyle\sdfrac{e^{-j}\log T}{\log X}\geq 1$ for $j\leq \log(\log T/\log X),$ so for $\displaystyle j\leq \log\Big(\sdfrac{8k}{\d}\Big).$ The contribution of $\gamma $ corresponding to such $j$ to the sum in \eqref{eq:moment of eta} is at most \[ N(T)\sum_{j \leq \log(8k/{\d})} (j+2)^{2k} \ll (D\log k)^{2k+1} N(T), \] where $D$ is a constant depending on $\d.$ Thus, the sum over $\gamma $ is \begin{align*} \sum_{0 < \gamma \leq T}\Big( 1+\log^+ \sdfrac{1}{\eta_\gamma \log{X}} \Big)^{2k} &\ll N(T) \Big(\sdfrac{\log T}{\log X}\Big)^{\a} \sum_{j=0}^\infty \frac{(j+2)^{2k}}{e^{j\a}} +(D\log k)^{2k+1} N(T) \\ &\ll \Big(\frac{k}{\d}\Big)^\a N(T)\sum_{j=0}^\infty \frac{(j+2)^{2k}}{e^{j\a}}+(D\log k)^{2k+1} N(T). \end{align*} By using a similar argument to the one we applied to the series in \eqref{eq:proof eta}, we see that the series on the last line is $\ll (Ak)^{2k}$ for a constant $A$ depending on $\a.$ Hence \[ \sum_{0 < \gamma \leq T}\Big( 1+\log^+ \sdfrac{1}{\eta_\gamma \log{X}} \Big)^{2k} \ll (Ak)^{2k} N(T), \] where $A$ denotes a constant depending on both $\a$ and $\d.$ \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: log zeta'}] Let $\displaystyle X = T^{\tfrac{1}{16\Psi(T)^6}}$ where $\Psi(T)=\sum_{p\leq T} p^{-1}.$ By Proposition \ref{distr of chi Re P v} for $v=0,$ we have \begin{align*} \frac{1}{N(T)}\#\bigg\{0 < \gamma \leq T: \frac{\Re\CMcal{P}_X(\gamma )}{\sqrt{\tfrac12\log\log T}}\in [a, b]\bigg\} = \frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(\frac{\log{\Psi(T)}}{\Psi(T)}\bigg). \end{align*} We assume that $m(\r)=1$ for all zeros $\r.$ We quickly see that one can prove the following result by using a simpler version of the discussion in Section \ref{proof of thm Re log zeta}. \begin{equation}\label{proof of thm log zeta'} \begin{split} \frac{1}{N(T)}\#\bigg\{0 < \gamma \leq T: & \frac{\log\sdfrac{|\zeta'(\r)|}{(e\log X)/4}}{\sqrt{\tfrac12\log\log T}}\in [a, b] \bigg\} \\ =&\frac{1}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(\frac{(\log\log\log T)^2}{\sqrt{\log\log T}}\bigg). \end{split} \end{equation} Now, note that there is an absolute constant $c$ for which \[ \bigg|\log\frac{|\zeta'(\r)|}{(e\log X)/4} -\log\frac{|\zeta'(\r)|}{\log T}\bigg| \leq c\log\log\log T \quad \text{ for all} \, \,\, \r. 
\] Then for a suitable constant $c,$ \begin{align*} \#\bigg\{0 < \gamma \leq T: \frac{\log\sdfrac{|\zeta'(\r)|}{(e\log X)/4}}{\sqrt{\tfrac12\log\log T}} \in \bigg[a+c&\sdfrac{\log\log\log T}{\sqrt{\log\log T}}, b-c\sdfrac{\log\log\log T}{\sqrt{\log\log T}}\bigg] \bigg\} \\ \leq \, &\#\bigg\{0 < \gamma \leq T: \frac{\log\sdfrac{|\zeta'(\r)|}{\log T}}{\sqrt{\tfrac12\log\log T}}\in [a,b] \bigg\}, \end{align*} and \begin{align*} \#\bigg\{0 < \gamma \leq& T: \frac{\log\sdfrac{|\zeta'(\r)|}{\log T}}{\sqrt{\tfrac12\log\log T}}\in [a, b] \bigg\} \\ \leq &\, \#\bigg\{0 < \gamma \leq T: \frac{\log\sdfrac{|\zeta'(\r)|}{(e\log X)/4}}{\sqrt{\tfrac12\log\log T}}\in \bigg[a-c\sdfrac{\log\log\log T}{\sqrt{\log\log T}}, b+c\sdfrac{\log\log\log T}{\sqrt{\log\log T}}\bigg] \bigg\}. \end{align*} By \eqref{proof of thm log zeta'}, the left-hand side of the first inequality and the right-hand side of the second inequality are both \[ \frac{N(T)}{\sqrt{2\pi}}\int_a^b e^{-x^2/2}\mathop{dx} +O\bigg(N(T)\frac{(\log\log\log T)^2}{\sqrt{\log\log T}}\bigg) +O\bigg(N(T)\frac{\log\log\log T}{\sqrt{\log\log T}}\bigg). \] Combining the inequalities, we get an estimate which is essentially the statement \eqref{proof of thm log zeta'}, but with $\displaystyle \log\Big(\sdfrac{|\zeta'(\r)|}{(e\log X)/4}\Big)$ replaced by $\displaystyle \log\Big(\sdfrac{|\zeta'(\r)|}{\log T}\Big).$ This is the claim of Theorem~\ref{thm: log zeta'}. \end{proof} \section*{Final Comments} Similarly to the proof of Selberg's central limit theorem in \cite{Tsang}, it would be possible to generalize the results in this paper to hold for $\gamma $ within an interval $[T, T+H],$ where $H$ satisfies $T^\theta < H \leq T$ with $\theta > 1/2.$ As another generalization of our work, one can consider the value distribution of the sequence $(\log{L(\r, \chi)})$ for a fixed primitive $L$-function $L(s, \chi).$ In fact, central limit theorems as in Theorem \ref{distr of Re log zeta} and Theorem \ref{distr of Im log zeta} hold in this case, and they are conditional on the Generalized Riemann hypothesis, a zero-spacing hypothesis between nontrivial zeros of $\zeta(s)$ and nontrivial zeros of $L(s,\chi),$ and the assumption that nontrivial zeros of $L(s, \chi)$ never coincide with any of the nontrivial zeros of $\zeta(s).$ This can be proven by suitable modifications of the results in this paper.
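As a purely heuristic illustration of the Gaussian behaviour in Theorem \ref{distr of Re log zeta} (a toy model, playing no role in any of the proofs above), one may replace the values $p^{-i\gamma }$ by independent phases distributed uniformly on the unit circle and compare the empirical variance of the resulting sums with $\tfrac12\sum_{p\leq X} p^{-1}\approx \tfrac12 \log\log X$. A minimal numerical sketch:
\begin{verbatim}
import numpy as np

# Toy model: replace p^{-i*gamma} by independent uniform phases and
# check that Re sum_{p <= X} p^{-1/2} e^{i*theta_p} has variance close
# to (1/2) * sum_{p <= X} 1/p  (~ (1/2) log log X by Mertens' theorem).
rng = np.random.default_rng(0)

def primes_up_to(x):
    sieve = np.ones(x + 1, dtype=bool)
    sieve[:2] = False
    for n in range(2, int(x**0.5) + 1):
        if sieve[n]:
            sieve[n*n::n] = False
    return np.flatnonzero(sieve)

p = primes_up_to(10**4)
theta = rng.uniform(0.0, 2.0*np.pi, size=(5000, p.size))
re_P = (np.cos(theta) / np.sqrt(p)).sum(axis=1)
print(re_P.var(), 0.5*np.sum(1.0/p))  # the two values should be close
\end{verbatim}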
\section*{Acknowledgement} SJ and SNK acknowledge IIT-Delhi post-doctoral fellowship. \section*{Disclosures} The authors declare no conflicts of interest.
\section{Introduction} Multi-body hadronic $D^{0(+)}$ decays provide an ideal laboratory to study strong and weak interactions. Amplitude analyses of these decays offer comprehensive information on quasi-two-body $D^{0(+)}$ decays, which are important to explore $D^0\bar D^0$ mixing, charge-parity ($CP$) violation and the quark SU(3)-flavor symmetry breaking phenomenon~\cite{ref5,theory_1,theory_2,chenghy1,yufs}. In particular, for the search of $CP$ violation, it is important to understand the intermediate structures for the singly Cabibbo-suppressed decays of $D^{0(+)}\to K\bar K\pi\pi$~\cite{xwkang,Charles:2009ig,yufs-cpv}. Current measurements of the $D^{0(+)}\to K\bar K\pi\pi$ decays containing $K^0_S$ or $\pi^0$ are limited~\cite{pdg2018}. The branching fractions (BFs) of $D^0\to K^0_SK^0_S\pi^+\pi^-$~\cite{FOCUS_kskspipi,ARGUS_kkpipi}, $D^+\to K^0_SK^-\pi^+\pi^+$~\cite{FOCUS_kskpipi}, $D^+\to K^0_SK^+\pi^+\pi^-$~\cite{FOCUS_kskpipi}, and $D^+\to K^+K^-\pi^+\pi^0$~\cite{ACCMOR_kkpipi0} were only determined relative to some well-known decays or via topological normalization, with poor precision. This paper presents the first direct measurements of the absolute BFs for the decays $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^+K^-\pi^+\pi^0$, $D^+\to K^0_SK^+\pi^0\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, $D^+\to K^0_SK^+\pi^+\pi^-$, and $D^+\to K^0_SK^0_S\pi^+\pi^0$. The $D^0\to K^0_SK^0_S\pi^0\pi^0$ decay is not included since it suffers from poor statistics and high background. Throughout this paper, charge conjugate processes are implied. An $e^+e^-$ collision data sample corresponding to an integrated luminosity of 2.93~fb$^{-1}$~\cite{lum_bes3} collected at a center-of-mass energy of $\sqrt s=$ 3.773~GeV with the BESIII detector is used to perform this analysis. \section{BESIII detector and Monte Carlo simulation} The BESIII detector is a magnetic spectrometer~\cite{BESIII} located at the Beijing Electron Positron Collider (BEPCII)~\cite{Yu:IPAC2016-TUYA01}. The cylindrical core of the BESIII detector consists of a helium-based multilayer drift chamber (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0~T magnetic field. The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identifier modules interleaved with steel. The acceptance of charged particles and photons is 93\% over $4\pi$ solid angle. The charged-particle momentum resolution at $1~{\rm GeV}/c$ is $0.5\%$, and the $dE/dx$ resolution is $6\%$ for the electrons from Bhabha scattering. The EMC measures photon energies with a resolution of $2.5\%$ ($5\%$) at $1$~GeV in the barrel (end cap) region. The time resolution of the TOF barrel part is 68~ps, while that of the end cap part is 110~ps. Simulated samples produced with the {\sc geant4}-based~\cite{geant4} Monte Carlo (MC) package, which includes the geometric description of the BESIII detector and the detector response, are used to determine the detection efficiency and to estimate the backgrounds. The simulation includes the beam-energy spread and initial-state radiation (ISR) in the $e^+e^-$ annihilations modeled with the generator {\sc kkmc}~\cite{kkmc}.
The inclusive MC samples consist of the production of $D\bar{D}$ pairs with consideration of quantum coherence for all neutral $D$ modes, the non-$D\bar{D}$ decays of the $\psi(3770)$, the ISR production of the $J/\psi$ and $\psi(3686)$ states, and the continuum processes. The known decay modes are modeled with {\sc evtgen}~\cite{evtgen} using the BFs taken from the Particle Data Group (PDG)~\cite{pdg2018}, and the remaining unknown decays from the charmonium states are modeled with {\sc lundcharm}~\cite{lundcharm}. Final-state radiation from charged final-state particles is incorporated with the {\sc photos} package~\cite{photos}. \section{Measurement Method} The $D^0\bar D^0$ or $D^+D^-$ pair is produced without an additional hadron in $e^+e^-$ annihilations at $\sqrt s=3.773$ GeV. This process offers a clean environment to measure the BFs of hadronic $D$ decays with the double-tag (DT) method. The single-tag (ST) candidate events are selected by reconstructing a $\bar D^0$ or $D^-$ in the following hadronic final states: $\bar D^0 \to K^+\pi^-$, $K^+\pi^-\pi^0$, and $K^+\pi^-\pi^-\pi^+$, and $D^- \to K^{+}\pi^{-}\pi^{-}$, $K^0_{S}\pi^{-}$, $K^{+}\pi^{-}\pi^{-}\pi^{0}$, $K^0_{S}\pi^{-}\pi^{0}$, $K^0_{S}\pi^{+}\pi^{-}\pi^{-}$, and $K^{+}K^{-}\pi^{-}$. An event in which a signal candidate is selected in the presence of an ST $\bar D$ meson is called a DT event. The BF of the signal decay is determined by \begin{equation} \label{eq:br} {\mathcal B}_{{\rm sig}} = N^{\rm net}_{\rm DT}/(N^{\rm tot}_{\rm ST}\cdot\epsilon_{{\rm sig}}), \end{equation} where $N^{\rm tot}_{\rm ST}=\sum_i N_{{\rm ST}}^i$ and $N^{\rm net}_{\rm DT}$ are the total yields of the ST and DT candidates in data, respectively. $N_{{\rm ST}}^i$ is the ST yield for the tag mode $i$. For the signal decays involving $K^0_S$ meson(s) in the final states, $N^{\rm net}_{\rm DT}$ is the net DT yield after removing the peaking background from the corresponding non-$K^0_S$ decays. For the other signal decays, it corresponds to the fitted DT yield as described later. Here, $\epsilon_{{\rm sig}}$ is the efficiency of detecting the signal $D$ decay, averaged over the tag modes $i$, which is given by: \begin{equation} \label{eq:eff} \epsilon_{{\rm sig}} = \sum_i (N^i_{{\rm ST}}\cdot\epsilon^i_{{\rm DT}}/\epsilon^i_{{\rm ST}})/N^{\rm tot}_{\rm ST}, \end{equation} where $\epsilon^i_{{\rm ST}}$ and $\epsilon^i_{{\rm DT}}$ are the efficiencies of detecting ST and DT candidates in the tag mode $i$, respectively. \section{Event selection} The selection criteria of $K^\pm$, $\pi^\pm$, $K^0_S$, and $\pi^0$ are the same as those used in the analyses presented in Refs.~\cite{epjc76,cpc40,bes3-pimuv,bes3-Dp-K1ev,bes3-etaetapi,bes3-omegamuv,bes3-etamuv,bes3-etaX}. All charged tracks, except those from $K^0_{S}$ decays, are required to have a polar angle $\theta$ with respect to the beam direction within the MDC acceptance $|\rm{cos\theta}|<0.93$, and a distance of closest approach to the interaction point (IP) within 10~cm along the beam direction and within 1~cm in the plane transverse to the beam direction. Particle identification (PID) for charged pions, kaons, and protons is performed by exploiting TOF information and the specific ionization energy loss $dE/dx$ measured by the MDC. The confidence levels for pion and kaon hypotheses ($CL_{\pi}$ and $CL_{K}$) are calculated. Kaon and pion candidates are required to satisfy $CL_{K}>CL_{\pi}$ and $CL_{\pi}>CL_{K}$, respectively.
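Before turning to the detailed selections, the bookkeeping of Eqs.~(\ref{eq:br}) and (\ref{eq:eff}) can be made concrete with a short numerical sketch; the yields and efficiencies below are hypothetical placeholders, not the measured values of this analysis:
\begin{verbatim}
import numpy as np

# Double-tag bookkeeping of Eqs. (1) and (2); all inputs are
# hypothetical placeholders, not measured values.
N_ST   = np.array([500000., 300000., 200000.])  # ST yields per tag mode i
eps_ST = np.array([0.65, 0.35, 0.40])           # ST efficiencies per tag mode
eps_DT = np.array([0.080, 0.045, 0.050])        # DT efficiencies per tag mode

N_ST_tot = N_ST.sum()
# Tag-averaged signal efficiency, Eq. (2):
eps_sig = np.sum(N_ST * eps_DT / eps_ST) / N_ST_tot
# Branching fraction, Eq. (1), for a hypothetical net DT yield:
N_DT_net = 450.0
B_sig = N_DT_net / (N_ST_tot * eps_sig)
print(eps_sig, B_sig)
\end{verbatim}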
The $K^0_S$ candidates are reconstructed from two oppositely charged tracks to which no PID criteria are applied and whose masses are assumed to be that of the pion. The charged tracks from the $K^0_S$ candidate must satisfy $|\rm{cos\theta}|<0.93$. In addition, due to the long lifetime of the $K^0_S$ meson, there is a less stringent criterion on the distance of closest approach to the IP in the beam direction of less than 20~cm and no requirement on the distance of closest approach in the plane transverse to the beam direction. Furthermore, the $\pi^+\pi^-$ pairs are constrained to originate from a common vertex and their invariant mass is required to be within $(0.486,0.510)~{\rm GeV}/c^2$, which corresponds to about three times the fitted resolution around the nominal $K^0_S$ mass. The decay length of the $K^0_S$ candidate is required to be larger than two standard deviations of the vertex resolution away from the IP. The $\pi^0$ candidate is reconstructed via its $\gamma\gamma$ decay. The photon candidates are selected using the information from the EMC shower. It is required that each EMC shower starts within 700~ns of the event start time and its energy is greater than 25 (50)~MeV in the barrel (end cap) region of the EMC~\cite{BESIII}. The energy deposited in the nearby TOF counters is included to improve the reconstruction efficiency and energy resolution. The opening angle between the candidate shower and the nearest charged track must be greater than $10^{\circ}$. The $\gamma\gamma$ pair is taken as a $\pi^0$ candidate if its invariant mass is within $(0.115,\,0.150)$\,GeV$/c^{2}$. To improve the resolution, a kinematic fit constraining the $\gamma\gamma$ invariant mass to the $\pi^{0}$ nominal mass~\cite{pdg2018} is imposed on the selected photon pair. \section{Yields of ST $\bar D$ mesons} To select $\bar D^0\to K^+\pi^-$ candidates, the backgrounds from cosmic rays and Bhabha events are rejected by using the same requirements described in Ref.~\cite{deltakpi}. In the selection of $\bar D^0\to K^+\pi^-\pi^-\pi^+$ candidates, the $\bar D^0\to K^0_SK^\pm\pi^\mp$ decays are suppressed by requiring the mass of all $\pi^+\pi^-$ pairs to be outside $(0.478,0.518)$~GeV/$c^2$. The tagged $\bar D$ mesons are identified using two variables, namely the energy difference \begin{equation} \Delta E_{\rm tag} \equiv E_{\rm tag} - E_{\rm b}, \label{eq:deltaE} \end{equation} and the beam-constrained mass \begin{equation} M_{\rm BC}^{\rm tag} \equiv \sqrt{E^{2}_{\rm b}-|\vec{p}_{\rm tag}|^{2}}. \label{eq:mBC} \end{equation} Here, $E_{\rm b}$ is the beam energy, and $\vec{p}_{\rm tag}$ and $E_{\rm tag}$ are the momentum and energy of the $\bar D$ candidate in the rest frame of the $e^+e^-$ system, respectively. For each tag mode, if there are multiple candidates in an event, only the one with the smallest $|\Delta E_{\rm tag}|$ is kept. The tagged $\bar D$ candidates are required to satisfy $\Delta E_{\rm tag}\in(-55,40)$\,MeV for the tag modes containing $\pi^0$ in the final states and $\Delta E_{\rm tag}\in(-25,25)$\,MeV for the other tag modes, thereby taking into account the different resolutions. To extract the yields of ST $\bar D$ mesons for individual tag modes, binned maximum-likelihood fits are performed on the corresponding $M_{\rm BC}^{\rm tag}$ distributions of the accepted ST candidates following Refs.~\cite{epjc76,cpc40,bes3-pimuv,bes3-Dp-K1ev,bes3-etaetapi,bes3-omegamuv,bes3-etamuv}.
In the fits, the $\bar D$ signal is modeled by an MC-simulated shape convolved with a double-Gaussian function describing the resolution difference between data and MC simulation. The combinatorial background shape is described by an ARGUS function~\cite{ARGUS} defined as $c_f(f;E_{\rm end},\xi_f)=A_f\cdot f\cdot \sqrt{1 - \frac {f^2}{E^2_{\rm end}}} \cdot \exp\left[\xi_f \left(1-\frac {f^2}{E^2_{\rm end}}\right)\right]$, where $f$ denotes $M^{\rm tag}_{\rm BC}$, $E_{\rm end}$ is an endpoint fixed at 1.8865 GeV, $A_f$ is a normalization factor, and $\xi_f$ is a free parameter. The resulting fits to the $M_{\rm BC}$ distributions for each mode are shown in Fig.~\ref{fig:datafit_MassBC}. The total yields of the ST $\bar D^0$ and $D^-$ mesons in data are $2327839\pm1860$ and $1558159\pm2113$, respectively, where the uncertainties are statistical only. \begin{figure}[htp] \centering \includegraphics[width=1.0\linewidth]{massbc.eps} \caption{\small Fits to the $M_{\rm BC}$ distributions of the ST $\bar D^0$ (left column) and $D^-$ (middle and right columns) candidates, where the points with error bars are data, the blue solid and red dashed curves are the fit results and the fitted backgrounds, respectively.} \label{fig:datafit_MassBC} \end{figure} \section{Yields of DT events} In the side recoiling against the tagged $\bar D$ candidate, the signal $D$ decays are selected by using the residual tracks that have not been used to reconstruct the tagged $\bar D$ candidate. To suppress the $K^0_S$ contribution in the individual mass spectra for the $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^0_S\pi^{+}\pi^{-}$, and $D^+\to K^0_SK^+\pi^+\pi^-$ decays, the $\pi^{+}\pi^{-}$ and $\pi^{0}\pi^{0}$ invariant masses are required to be outside $(0.468,0.528)$~GeV/$c^2$ and $(0.438,0.538)$~GeV/$c^2$, respectively. To suppress the background from $D^0\to K^-\pi^+\omega$ in the identification of the $D^0\to K^0_SK^-\pi^+\pi^0$ process, the $K^0_S\pi^0$ invariant mass is required to be outside $(0.742,0.822)$ GeV/$c^2$. These requirements correspond to at least five times the fitted mass resolution away from the fitted mean of the mass peak. The signal $D$ mesons are identified using the energy difference $\Delta E_{\rm sig}$ and the beam-constrained mass $M_{\rm BC}^{\rm sig}$, which are calculated with Eqs.~(\ref{eq:deltaE}) and (\ref{eq:mBC}) by substituting ``tag'' with ``sig''. For each signal mode, if there are multiple candidates in an event, only the one with the smallest $|\Delta E_{\rm sig}|$ is kept. The signal decays are required to satisfy the mode-dependent $\Delta E_{\rm sig}$ requirements, as shown in the second column of Table~\ref{tab:DT}. To suppress incorrectly identified $D\bar D$ candidates, the opening angle between the tagged $\bar D$ and the signal $D$ is required to be greater than $160^\circ$, resulting in a loss of (2-6)\% of the signal and suppressing (8-55)\% of the background. Figure~\ref{fig:mBC2D} shows the $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution of the accepted DT candidates in data. The signal events concentrate around $M_{\rm BC}^{\rm tag} = M_{\rm BC}^{\rm sig} = M_{D}$, where $M_{D}$ is the nominal $D$ mass~\cite{pdg2018}. The events with correctly reconstructed $D$ ($\bar D$) and incorrectly reconstructed $\bar D$ ($D$), named BKGI, are spread along the lines around $M_{\rm BC}^{\rm tag} = M_{D}$ or $M_{\rm BC}^{\rm sig} = M_{D}$. The events smeared along the diagonal, named BKGII, are mainly from the $e^+e^- \to q\bar q$ processes.
The events with uncorrelated and incorrectly reconstructed $D$ and $\bar D$, named BKGIII, disperse across the whole allowed kinematic region. For each signal $D$ decay mode, the yield of DT events ($N^{\rm fit}_{\rm DT}$) is obtained from a two-dimensional (2D) unbinned maximum-likelihood fit~\cite{cleo-2Dfit} on the $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution of the accepted candidates. In the fit, the probability density functions (PDFs) of signal, BKGI, BKGII, and BKGIII are constructed as \begin{itemize} \item signal: $a(x,y)$, \item BKGI: $b(x)\cdot c_y(y;E_{\rm b},\xi_{y}) + b(y)\cdot c_x(x;E_{\rm b},\xi_{x})$, \item BKGII: $c_z(z;\sqrt{2}E_{\rm b},\xi_{z}) \cdot g(k)$, and \item BKGIII: $c_x(x;E_{\rm b},\xi_{x}) \cdot c_y(y;E_{\rm b},\xi_{y})$, \end{itemize} respectively. Here, $x=M_{\rm BC}^{\rm sig}$, $y=M_{\rm BC}^{\rm tag}$, $z=(x+y)/\sqrt{2}$, and $k=(x-y)/\sqrt{2}$. The PDFs of signal $a(x,y)$, $b(x)$, and $b(y)$ are described by the corresponding MC-simulated shapes. $c_f(f;E_{\rm end},\xi_f)$ is an ARGUS function~\cite{ARGUS} defined above, where $f$ denotes $x$, $y$, or $z$; $E_{\rm b}$ is fixed at 1.8865 GeV. $g(k)$ is a Gaussian function with mean of zero and standard deviation parametrized by $\sigma_k=\sigma_0 \cdot(\sqrt{2}E_{\rm b}/c^2-z)^p$, where $\sigma_0$ and $p$ are fit parameters. \begin{figure}[htp] \centering \includegraphics[width=1.0\linewidth]{2Dfit_2018.eps} \caption{ The $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution of the accepted DT candidates of $D^+\to K^+K^-\pi^+\pi^0$ in data. Here, ISR denotes the signal spreading along the diagonal direction. } \label{fig:mBC2D} \end{figure} Combinatorial $\pi^+\pi^-$ pairs from the decays $D^0\to K^0_S2(\pi^+\pi^-)$ [and $D^0\to 3(\pi^+\pi^-)$], $D^0\to K^-\pi^+\pi^+\pi^-\pi^0$, $D^0\to K^+\pi^+\pi^-\pi^-\pi^0$, $D^+\to K^-\pi^+\pi^+\pi^+\pi^-$, $D^+\to K^+2(\pi^+\pi^-)$, $D^+\to K^+\pi^+\pi^-\pi^0\pi^0$, $D^+\to K^0_S\pi^+\pi^+\pi^-\pi^0$ [and $D^+\to 2(\pi^+\pi^-)\pi^+\pi^0$] may also satisfy the $K^0_S$ selection criteria and form peaking backgrounds around $M_D$ in the $M_{\rm BC}^{\rm sig}$ distributions for $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^0_SK^+\pi^0\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, $D^+\to K^0_SK^+\pi^+\pi^-$, and $D^+\to K^0_SK^0_S\pi^+\pi^0$, respectively. This kind of peaking background is estimated by selecting events in the $K^0_S$ sideband region of $(0.454,0.478)\cup(0.518,0.542)~{\rm GeV}/c^2$. For $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, $D^+\to K^0_SK^+\pi^+\pi^-$, and $D^+\to K^0_SK^+\pi^0\pi^0$ decays, one-dimensional (1D) signal and sideband regions are used. For $D^0\to K^0_SK^0_S\pi^+\pi^-$ and $D^+\to K^0_SK^0_S\pi^+\pi^0$ decays, 2D signal and sideband regions are used. The 2D $K^0_S$ signal region is defined as the square region with both $\pi^+\pi^-$ combinations lying in the $K^0_S$ signal regions. The 2D $K^0_S$ sideband 1~(2) regions are defined as the square regions with 1~(2) $\pi^+\pi^-$ combination(s) located in the 1D $K^0_S$ sideband regions and the rest in the 1D $K^0_S$ signal region. Figure~\ref{fig:mks} shows 1D and 2D $\pi^+\pi^-$ invariant-mass distributions as well as the $K^0_S$ signal and sideband regions.
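For concreteness, the ARGUS lineshape $c_f(f;E_{\rm end},\xi_f)$ entering the PDFs above can be coded directly. The following is a minimal sketch, in which the normalization factor $A_f$ is fixed numerically on the fit range and the value of $\xi_f$ is only illustrative:
\begin{verbatim}
import numpy as np

# ARGUS lineshape c_f(f; E_end, xi) as defined above; xi is a free
# parameter in the fits, and the value used here is illustrative.
def argus(f, E_end=1.8865, xi=-10.0):
    f = np.asarray(f, dtype=float)
    y = np.zeros_like(f)
    ok = (f > 0) & (f < E_end)
    r = 1.0 - (f[ok] / E_end) ** 2
    y[ok] = f[ok] * np.sqrt(r) * np.exp(xi * r)
    return y

grid = np.linspace(1.83, 1.8865, 2001)
norm = argus(grid).sum() * (grid[1] - grid[0])  # fixes A_f numerically
pdf = argus(grid) / norm
\end{verbatim}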
\begin{figure}[htp] \centering \includegraphics[width=1.0\linewidth]{D0_ks2.eps} \caption{\small (a)~The $\pi^+\pi^-$ invariant-mass distributions of the $D^+\to K^0_SK^-\pi^+\pi^+$ candidate events of data (points with error bars) and inclusive MC sample (histogram). Pairs of the red solid~(blue dashed) arrows denote the $K^0_S$ signal~(sideband) regions. (b)~Distribution of $M_{\pi^+\pi^-(1)}$ versus $M_{\pi^+\pi^-(2)}$ for the $D^0\to K^0_SK^0_S\pi^+\pi^-$ candidate events in data. Red solid box denotes the 2D signal region. Pink dot-dashed~(blue dashed) boxes indicate the 2D sideband 1~(2) regions. }\label{fig:mks} \end{figure} For the signal decays involving $K^0_S$ meson(s) in the final states, the net yields of DT events are calculated by subtracting the sideband contribution from the fitted DT yield as \begin{equation} \label{eq:1} N^{\rm net}_{\rm DT} = N^{\rm fit}_{\rm DT} + \sum_{i=1}^{N} \left [\left (-\frac{1}{2} \right )^i N^{\rm fit}_{{\rm sid}i} \right ]. \end{equation} Here, $N=1$ for the decays with one $K^0_S$ meson while $N=2$ for the decays with two $K^0_S$ mesons. The combinatorial $\pi^+\pi^-$ backgrounds are assumed to be uniformly distributed, and double-counting is avoided by subtracting the yields in the sideband 2 regions from those in the sideband 1 regions with the appropriate weights. $N^{\rm fit}_{\rm DT}$ and $N^{\rm fit}_{{\rm sid}i}$ are the fitted $D$ yields in the 1D or 2D signal region and sideband $i$ region, respectively. For the other signal decays, the net yields of DT events are $N^{\rm fit}_{\rm DT}$. Figure~\ref{fig:2Dfit} shows the $M^{\rm tag}_{\rm BC}$ and $M^{\rm sig}_{\rm BC}$ projections of the 2D fits to data. From these 2D fits, we obtain the DT yields for individual signal decays as shown in Table~\ref{tab:DT}. For each signal decay mode, the statistical significance is calculated according to $\sqrt{-2\ln({\mathcal L}_0/{\mathcal L}_{\rm max})}$, where ${\mathcal L}_{\rm max}$ and ${\mathcal L}_0$ are the maximum likelihoods of the fits with and without involving the signal component, respectively. The effect of combinatorial $\pi^+\pi^-$ backgrounds in the $K^0_S$-signal regions has been considered for the decays involving a $K^0_S$. The statistical significance for each signal decay is found to be greater than $8\sigma$. \section{Results} Each of the $D^0\to K^0_SK^-\pi^+\pi^0$, $D^+\to K^+K^-\pi^+\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, and $D^+\to K^0_SK^+\pi^+\pi^-$ decays is modeled by the corresponding mixed signal MC samples, in which the dominant decay modes containing resonances of $K^*(892)$, $\rho(770)$, and $\phi$ are mixed with the phase space (PHSP) signal MC samples. The mixing ratios are determined by examining the corresponding invariant mass and momentum spectra. The other decays, which are limited in statistics, are generated with the PHSP generator. The momentum and the polar angle distributions of the daughter particles and the invariant masses of each two- and three-body particle combinations of the data agree with those of the MC simulations. As an example, Fig.~\ref{add} shows the invariant mass distributions of two- or three-body particle combinations of $D^+\to K^+K^-\pi^+\pi^0$ candidate events for data and MC simulations. The measured values of $N^{\rm net}_{{\rm DT}}$, $\epsilon^{}_{{\rm sig}}$, and the obtained BFs are summarized in Table~\ref{tab:DT}. The current world-average values are also given for comparison.
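The sideband subtraction of Eq.~(\ref{eq:1}) above amounts to simple weighted sums; a short sketch with hypothetical fitted yields:
\begin{verbatim}
# Sideband subtraction N_net = N_fit + sum_i (-1/2)^i N_sid_i;
# the yields below are hypothetical placeholders.
def net_dt_yield(N_fit, N_sid):
    # N_sid[i-1] is the fitted yield in the sideband-i region, i = 1..N
    return N_fit + sum((-0.5) ** i * N_sid[i - 1]
                       for i in range(1, len(N_sid) + 1))

# One K0S (N = 1): subtract half of the 1D sideband yield.
print(net_dt_yield(480.0, [26.0]))      # 480 - 26/2 = 467.0
# Two K0S (N = 2): -1/2 of sideband-1 yield plus +1/4 of sideband-2.
print(net_dt_yield(66.0, [8.0, 2.0]))   # 66 - 4 + 0.5 = 62.5
\end{verbatim}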
The signal efficiencies have been corrected by the necessary data-MC differences in the selection efficiencies of $K^\pm$ and $\pi^\pm$ tracking and PID procedures and the $\pi^0$ reconstruction. These efficiencies also include the BFs of the $K^0_S$ and $\pi^0$ decays. The efficiency for $D^+\to K^0_SK^+\pi^+\pi^-$ ($D^0\to K^0_SK^-\pi^+\pi^0$) is lower than that of $D^+\to K^0_SK^-\pi^+\pi^+$ ($D^0\to K^0_SK^+\pi^-\pi^0$) due to the $K^0_S$ $(\omega)$ rejection in the $\pi^+\pi^-$ ($K^0_S\pi^0$) mass spectrum. \begin{figure*}[htbp] \centering \includegraphics[width=0.49\linewidth]{2Dfit_tag33.eps} \includegraphics[width=0.49\linewidth]{2Dfit_sig33.eps} \caption{\small Projections on the $M^{\rm tag}_{\rm BC}$ and $M^{\rm sig}_{\rm BC}$ distributions of the 2D fits to the DT candidate events with all $\bar D^0$ or $D^-$ tags. Data are shown as points with error bars. Blue solid, light blue dotted, blue dot-dashed, red dot-long-dashed, and pink long-dashed curves denote the overall fit results, signal, BKGI, BKGII, and BKGIII components (see text), respectively. } \label{fig:2Dfit} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=0.8\linewidth]{data_mc.eps} \caption{\small The invariant mass distributions of two- or three-body particle combinations of $D^+\to K^+K^-\pi^+\pi^0$ candidate events for data and MC simulations. Data are shown as points with error bars. Red solid histograms are mixed signal MC samples. Blue dashed histograms are PHSP signal MC samples. Yellow hatched histograms are the backgrounds estimated from the inclusive MC sample. } \label{add} \end{figure*} \section{Systematic uncertainties} The systematic uncertainties are estimated relative to the measured BFs and are discussed below. In BF determinations using Eq.~(\ref{eq:br}), all uncertainties associated with the selection of the tagged $\bar D$ cancel in the ratio. The systematic uncertainties in the total yields of ST $\bar D$ mesons related to the $M_{\rm BC}$ fits to the ST $\bar D$ candidates were previously estimated to be 0.5\% for both neutral and charged $\bar D$~\cite{epjc76,cpc40,bes3-pimuv}. The tracking and PID efficiencies for $K^\pm$ or $\pi^\pm$, $\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm data}]$ and $\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm MC}]$, are investigated using DT $D\bar D$ hadronic events. The averaged ratios between data and MC efficiencies ($f_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}=\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm data}]/\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm MC}]$) of tracking (PID) for $K^\pm$ or $\pi^\pm$ are weighted by the corresponding momentum spectra of signal MC events, giving $f_K^{\rm tracking}$ values of $1.022{\text -}1.031$ and $f_\pi^{\rm tracking}$ values close to unity. After correcting the MC efficiencies by $f_K^{\rm tracking}$, the residual uncertainties of $f_{K\,{\rm or}\,\pi}^{\rm tracking}$ are assigned as the systematic uncertainties of the tracking efficiencies, which are (0.4-0.7)\% per $K^\pm$ and (0.2-0.3)\% per $\pi^\pm$. $f_K^{\rm PID}$ and $f_\pi^{\rm PID}$ are all close to unity and their individual uncertainties, (0.2-0.3)\%, are taken as the associated systematic uncertainties per $K^\pm$ or $\pi^\pm$. The systematic error related to the uncertainty in the $K_{S}^{0}$ reconstruction efficiency is estimated from measurements of $J/\psi\to K^{*}(892)^{\mp}K^{\pm}$ and $J/\psi\to \phi K_S^{0}K^{\pm}\pi^{\mp}$ control samples~\cite{sysks} and found to be 1.6\% per $K^0_S$.
The systematic uncertainty of the $\pi^0$ reconstruction efficiency is assigned as (0.7-0.8)\% per $\pi^0$ from a study of DT $D\bar D$ hadronic decays of $\bar D^0\to K^+\pi^-\pi^0$ and $\bar D^0\to K^0_S\pi^0$ decays tagged by either $D^0\to K^-\pi^+$ or $D^0\to K^-\pi^+\pi^+\pi^-$~\cite{epjc76,cpc40}. The systematic uncertainty in the 2D fit to the $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution is examined via repeated measurements in which the signal shape and the endpoint of the ARGUS function ($\pm0.2$\,MeV/$c^2$) are varied. Quadratically summing the changes of the BFs for these two sources yields the corresponding systematic uncertainties. The systematic uncertainty due to the $\Delta E_{\rm sig}$ requirement is assigned to be 0.3\%, which corresponds to the largest efficiency difference with and without smearing the data-MC Gaussian resolution of $\Delta E_{\rm sig}$ for signal MC events. Here, the smeared Gaussian parameters are obtained by using the samples of DT events $D^0\to K^0_S\pi^0$, $D^0\to K^-\pi^+\pi^0$, $D^0\to K^-\pi^+\pi^0\pi^0$, and $D^+\to K^-\pi^+\pi^+\pi^0$ versus the same $\bar D$ tags as in our nominal analysis. The systematic uncertainties due to the $K^0_S$ sideband choice and the $K^0_S$ rejection mass window are assigned by examining the changes of the BFs when varying the nominal $K^0_S$ sideband and the corresponding rejection window by $\pm5$~MeV/$c^2$. For the decays whose efficiencies are estimated with mixed signal MC events, the systematic uncertainty in the MC modeling is determined by comparing the signal efficiency when changing the percentage of MC sample components. For the decays whose efficiencies are estimated with PHSP-distributed signal MC events, the uncertainties are assigned as the change of the signal efficiency after adding the possible decays containing $K^*(892)$ or $\rho(770)$. The imperfect simulations of the momentum and $\cos\theta$ distributions of charged particles are considered as a source of systematic uncertainty. The signal efficiencies are re-weighted by those distributions in data with background subtracted. The largest change of the re-weighted to nominal efficiencies, 0.9\%, is assigned as the corresponding systematic uncertainty. The measurements of the BFs of the neutral $D$ decays are affected by the quantum correlation effect. For each neutral $D$ decay, the $CP$-even component is estimated by the $CP$-even tag $D^0\to K^+K^-$ and the $CP$-odd tag $D^0\to K^0_S\pi^0$. Using the same method as described in Ref.~\cite{QC-factor} and the necessary parameters quoted from Refs.~\cite{R-ref1,R-ref2,R-ref3}, we find that the correction factors accounting for the quantum correlation effect on the measured BFs are $(98.3^{+1.6}_{-1.1{\,\rm stat}})\%$, $(98.1^{+2.8}_{-1.7{\,\rm stat}})\%$, $(95.9^{+3.4}_{-2.7{\,\rm stat}})\%$, and $(98.4^{+1.1}_{-1.0{\,\rm stat}})\%$ for $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^0\to K^0_SK^-\pi^+\pi^0$, and $D^0\to K^0_SK^+\pi^-\pi^0$, respectively. After correcting the signal efficiencies by the individual factors, the residual uncertainties are assigned as systematic uncertainties. The uncertainties due to the limited MC statistics for the various signal decays, (0.4-0.8)\%, are taken into account as a systematic uncertainty. The uncertainties of the quoted BFs of the $K^0_S\to \pi^+\pi^-$ and $\pi^0\to \gamma\gamma$ decays are 0.07\% and 0.03\%, respectively~\cite{pdg2018}.
The efficiency of the $D\bar D$ opening angle requirement is studied by using the DT events of $D^0\to K^-\pi^+\pi^+\pi^-$, $D^0\to K^-\pi^+\pi^0\pi^0$, and $D^+\to K^-\pi^+\pi^+\pi^0$ tagged by the same tag modes as in our nominal analysis. The difference of the accepted efficiencies between data and MC simulations, 0.4\% for the decays without $\pi^0$, 0.8\% for the decays involving one $\pi^0$ and 0.3\% for the decays involving two $\pi^0$s, is assigned as the associated systematic uncertainty. Table~\ref{tab:relsysuncertainties1} summarizes the systematic uncertainties in the BF measurements. For each signal decay, the total systematic uncertainty is obtained by adding the above effects in quadrature, yielding (2.6-6.0)\% for the various signal decay modes. \begin{table*}[htbp] \centering \caption{\small Requirements of $\Delta E_{\rm sig}$, net yields of DT candidates ($N^{\rm net}_{{\rm DT}}$), signal efficiencies ($\epsilon_{\rm sig}$), and the obtained BFs (${\mathcal B}_{\rm sig}$) for various signal decays as well as comparisons with the world-average BFs (${\mathcal B}_{\rm PDG}$). The first and second uncertainties for ${\mathcal B}_{\rm sig}$ are statistical and systematic, respectively, while the uncertainties for $N^{\rm net}_{\rm DT}$ and $\epsilon_{\rm sig}$ are statistical only. The world-average BF of $D^+\to K^+K^-\pi^+\pi^0$ is obtained by summing over the contributions of $D^+\to \phi(\to K^+K^-)\pi^+\pi^0$ and $D^+\to K^+K^-\pi^+\pi^0|_{{\rm non\text-}\phi}$. }\label{tab:DT} \begin{ruledtabular} \begin{tabular}{lccccc} \multicolumn{1}{c} {Signal mode}&$\Delta E_{\rm sig}$\,(MeV) &$N^{\rm net}_{\rm DT}$ & $\epsilon_{\rm sig}$\,(\%) & ${\mathcal B}_{\rm sig}$\,($\times10^{-3}$) & ${\mathcal B}_{\rm PDG}$\,($\times10^{-3}$) \\ \hline $D^0\to K^+K^-\pi^0\pi^0$ &$(-59,40)$&$ 132.1\pm13.9$&$ 8.20\pm0.07$&$0.69\pm0.07\pm0.04$&--\\ $D^0\to K^0_SK^0_S\pi^+\pi^-$&$(-22,22)$&$ 62.5\pm10.4$&$ 5.14\pm0.04$&$0.52\pm0.09\pm0.03$&$1.22\pm0.23$\\ $D^0\to K^0_SK^-\pi^+\pi^0$ &$(-43,32)$&$ 195.8\pm20.3$&$ 6.38\pm0.06$&$1.32\pm0.14\pm0.07$&--\\ $D^0\to K^0_SK^+\pi^-\pi^0$ &$(-44,33)$&$ 119.3\pm12.9$&$ 7.94\pm0.06$&$0.65\pm0.07\pm0.02$&--\\ $D^+\to K^+K^-\pi^+\pi^0$ &$(-39,30)$&$1311.7\pm40.4$&$12.72\pm0.08$&$6.62\pm0.20\pm0.25$&$26^{+9}_{-8}$\\ $D^+\to K^0_SK^+\pi^0\pi^0$ &$(-61,44)$&$ 34.7\pm 7.2$&$ 3.77\pm0.02$&$0.59\pm0.12\pm0.04$&--\\ $D^+\to K^0_SK^-\pi^+\pi^+$ &$(-22,21)$&$ 467.9\pm26.6$&$13.24\pm0.08$&$2.27\pm0.12\pm0.06$&$2.38\pm0.17$\\ $D^+\to K^0_SK^+\pi^+\pi^-$ &$(-21,20)$&$ 279.6\pm18.1$&$ 9.39\pm0.06$&$1.91\pm0.12\pm0.05$&$1.74\pm0.18$\\ $D^+\to K^0_SK^0_S\pi^+\pi^0$&$(-46,37)$&$ 80.4\pm12.0$&$ 3.84\pm0.03$&$1.34\pm0.20\pm0.06$&--\\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*}[htp] \centering \caption{ Systematic uncertainties (\%) in the measurements of the BFs of the signal decays (1) $D^0\to K^+K^-\pi^0\pi^0$, (2) $D^0\to K^0_SK^0_S\pi^+\pi^-$, (3) $D^0\to K^0_SK^-\pi^+\pi^0$, (4) $D^0\to K^0_SK^+\pi^-\pi^0$, (5) $D^+\to K^+K^-\pi^+\pi^0$, (6) $D^+\to K^0_SK^+\pi^0\pi^0$, (7) $D^+\to K^0_SK^-\pi^+\pi^+$, (8) $D^+\to K^0_SK^+\pi^+\pi^-$, and (9) $D^+\to K^0_SK^0_S\pi^+\pi^0$.} \label{tab:relsysuncertainties1} \centering \begin{ruledtabular} \begin{tabular}{cccccccccc} Source/Signal decay & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline $N^{\rm tot}_{\rm ST}$ &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 \\ $(K/\pi)^\pm$ tracking &1.0 &0.6 &0.9 &0.9 &1.6 &0.4 &1.1 &1.2 &0.3 \\ $(K/\pi)^\pm$ PID &0.4 &0.4 &0.6 &0.6 &1.0 &0.2 &0.6 &0.7 &0.2 \\ $K^0_S$ reconstruction &...
&3.2 &1.6 &1.6 &... &1.6 &1.6 &1.6 &3.2 \\ $\pi^0$ reconstruction &1.6 &... &0.7 &0.7 &0.8 &1.6 &... &... &0.7 \\ $\Delta E_{\rm sig}$ requirement &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 \\ $K_{S}^{0}$ rejection &4.2 &2.4 &... &... &... &4.2 &... &0.8 &... \\ $K_{S}^{0}$ sideband &... &0.2 &1.1 &0.2 &... &1.3 &0.1 &0.1 &0.2 \\ Quoted BFs &0.0 &0.1 &0.1 &0.1 &0.0 &0.1 &0.1 &0.1 &0.1 \\ MC statistics &0.8 &0.6 &0.7 &0.6 &0.5 &0.4 &0.4 &0.5 &0.6 \\ MC modeling &1.3 &1.0 &0.5 &0.7 &2.1 &1.4 &0.5 &0.7 &0.5 \\ Imperfect simulation &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 \\ $D\bar D$ opening angle &0.3 &0.4 &0.8 &0.8 &0.8 &0.3 &0.4 &0.4 &0.8 \\ 2D fit &1.3 &2.8 &3.1 &1.5 &1.9 &2.7 &0.5 &0.6 &3.0 \\ Quantum correlation effect &1.6 &2.8 &3.4 &1.1 &... &... &... &... &... \\ \hline Total &5.5 &5.9 &5.4 &3.3 &3.8 &6.0 &2.6 &2.8 &4.8 \\ \end{tabular} \end{ruledtabular} \end{table*} \section{Summary} In summary, by analyzing a data sample obtained in $e^+e^-$ collisions at $\sqrt{s}=3.773$~GeV with the BESIII detector and corresponding to an integrated luminosity of 2.93~fb$^{-1}$, we obtained the first direct measurements of the absolute BFs of nine $D^{0(+)}\to K\bar K\pi\pi$ decays containing $K^0_S$ or $\pi^0$ mesons. The $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^0_SK^+\pi^0\pi^0$, and $D^+\to K^0_SK^0_S\pi^+\pi^0$ decays are observed for the first time. Compared to the world-average values, the BFs of the $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^+\to K^+K^-\pi^+\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, and $D^+\to K^0_SK^+\pi^+\pi^-$ decays are measured with improved precision. Our BFs of $D^+\to K^0_SK^-\pi^+\pi^+$ and $D^+\to K^0_SK^+\pi^+\pi^-$ are in agreement with the individual world averages within $1\sigma$, while our BFs of $D^0\to K^0_SK^0_S\pi^+\pi^-$ and $D^+\to K^+K^-\pi^+\pi^0$ deviate from the individual world averages by $2.3\sigma$ and $2.8\sigma$, respectively. The precision of the BF of $D^+\to K^+K^-\pi^+\pi^0$ is improved by a factor of about seven. Future amplitude analyses of all these $D^{0(+)}\to K\bar K\pi\pi$ decays with larger data samples foreseen at BESIII~\cite{bes3-white-paper}, Belle~II~\cite{belle2-white-paper}, and LHCb~\cite{lhcb-white-paper} will supply rich information on the two-body decay modes containing scalar, vector, axial and tensor mesons, thereby benefiting the understanding of quark SU(3)-flavor symmetry. \section{Acknowledgement} The authors thank Prof. Fu-sheng Yu for valuable discussions. The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos.~11775230, 11475123, 11625523, 11635010, 11735014, 11822506, 11835012, 11935015, 11935016, 11935018, 11961141012; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos.~U1532101, U1932102, U1732263, U1832207; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; ERC under Contract No. 758462; German Research Foundation DFG under Contracts Nos.
Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; STFC (United Kingdom); The Knut and Alice Wallenberg Foundation (Sweden) under Contract No. 2016.0157; The Royal Society, UK under Contracts Nos. DH140054, DH160214; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0012069.
\section{Introduction} Ultrashort laser pulses are able to change the magnetic order of materials within pico- or femtoseconds, orders of magnitude faster than conventional spin-based devices operating on nanosecond timescales \cite{Kirilyuk2010,Nemec2018}. Usually, the electromagnetic field components of a laser pulse couple to electronic degrees of freedom of the magnetic ions, leading to the notion of ultrafast opto-magnetism \cite{Kampfrath2011,Kalashnikova2015,Tzschaschel2017,Kubacka2014,Schlauderer2019}. Recent studies have demonstrated that light can also couple to the spins indirectly by exciting coherent vibrations of the crystal lattice (phonons) that transfer angular momentum to the magnetic ions \cite{nova:2017,juraschek2:2017,Shin2018,Maehrlein2018,Juraschek2019,Juraschek2020_3,Juraschek2020_5} or modulate the crystal structure into a transient state of modified magnetic order \cite{juraschek:2017,Radaelli2018,Gu2018,Khalsa2018,Fechner2018,Afanasiev2021,Disa2020,Juraschek2020,Rodriguez-Vega2020,Stupakiewicz2021,Giorgianni2021}. These \textit{phono-magnetic} methods promise higher selectivity and lower dissipation than techniques based on opto-magnetic effects due to the lower energy of the excitation. A central challenge is to produce effective magnetic fields that are strong enough to induce qualitative changes in the magnetic order, and typical fields for optical and phononic driving have so far ranged from the order of millitesla to a few tesla \cite{kimel:2005,nova:2017,juraschek2:2017,Juraschek2020_3}. Here, we propose that circularly driven phonons in the rare-earth trihalides produce effective magnetic fields that exceed those previously seen by several orders of magnitude. We predict, using CeCl$_3$ as an example, that effective magnetic fields of over 100~tesla, which polarize the paramagnetically disordered spins, should be achievable for laser energies well within the damage threshold of the crystal. The mechanism allows for bidirectional control of the induced magnetization and possibly creates a way to control the magnetic and electrical order of ferroic materials through interfacial coupling with the phonon-induced magnetization in heterostructures. \section{Properties of cerium trichloride} \begin{figure}[b] \centering \includegraphics[scale=0.077]{figure_structuresplitting.pdf} \caption{ Structure and properties of CeCl$_3$. (a) Hexagonal $P6_3/m$ structure of paramagnetic CeCl$_3$. (b) Schematic splitting of a doubly degenerate phonon mode with frequency $\Omega_0$ into right- and left-handed circularly polarized components in an external magnetic field at liquid helium temperatures, saturating at frequencies $\Omega_+$ and $\Omega_-$. At higher temperatures, the phonon splitting saturates at higher magnetic fields. } \label{fig:CeCl3} \end{figure} Rare-earth trihalides are a class of $4f$ paramagnets with formula unit $RH_3$. CeCl$_3$ ($R=\mathrm{Ce}$, $H=\mathrm{Cl}$) is a representative of this class of materials that crystallizes in the hexagonal $P6_3/m$ structure with an electronic band gap of 4.2~eV \cite{Zachariasen1948,Park1993}. We chose CeCl$_3$ as our model system because the primitive unit cell consists of only 8 atoms (Fig.~\ref{fig:CeCl3}(a)), resulting in a small number of 21 optical phonon modes characterized by the irreducible representations $2A_g + 1A_u + 2B_g + 2B_u + 1E_{1g} + 3E_{2g} + 2E_{1u} + 1E_{2u}$ in its $6/m$ point group.
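This counting is consistent: the one-dimensional representations account for $2+1+2+2=7$ modes and the doubly degenerate $E$ representations for $2\times(1+3+2+1)=14$ modes, together giving the $3\times 8-3=21$ optical modes expected for the 8-atom primitive cell.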
Early Raman studies have shown that the polarization of the $4f$ electrons in an external magnetic field leads to a splitting of the doubly degenerate $E_{1g}$ and $E_{2g}$ phonon modes into left- and right-handed circularly polarized components \cite{schaack:1976,schaack:1977}, see Fig.~\ref{fig:CeCl3}(b). It has been suggested that the infrared-active $E_{1u}$ phonon modes also split in the same way \cite{Thalmeier1978}, yet no experimental infrared spectroscopy measurements had been performed at that time. The infrared-active $E_{1u}$ modes map into the same $E'$ representation at the local $\bar{6}$ symmetry of the cerium ions as the Raman-active $E_{2g}$ modes, for which phonon splittings have been measured, and should therefore have the same effect on the paramagnetic spins. Infrared-active phonon modes possess an electric dipole moment and can therefore be resonantly excited by the electric field component of a laser pulse to yield large vibrational amplitudes. We will explore in this work how circularly driven $E_{1u}$ phonons act on the spins through the inverse of the spin-phonon coupling. \begin{figure}[t] \centering \includegraphics[scale=0.115]{figure_crystalfield.pdf} \caption{ Spin-phonon coupling. (a) Cl ligands around the magnetic cerium ion. (b) Displacements of the Ce (left) and Cl ions (right) along the eigenvectors of the circularly polarized $E_{1u}$ modes in the $ab$ plane of the crystal. The equilibrium positions of the ions are set to the center of each plot, respectively. (c) Circularly polarized phonons induce transitions between the $m_J=5/2$ ground-state Kramers doublet and higher crystal electric field levels \cite{schaack:1977,Thalmeier1977}. } \label{fig:crystalfield} \end{figure} \section{Spin-phonon coupling and coherent phonon dynamics} We begin by reviewing the theory of spin-phonon coupling in $4f$ paramagnets. Motions of the ions along the eigenvectors of phonon modes modify the crystal electric field (CEF) around the paramagnetic ions and induce virtual transitions between the ground-state energy levels and higher-lying CEF states, see Fig.~\ref{fig:crystalfield}. The spin states of rare-earth ions in compounds are close to those of the free ions and the total angular momentum (isospin), $J$, is a good quantum number. In CeCl$_3$, the lowest energy level has $J=5/2$, which splits into three Kramers doublets, of which $m_J=\pm5/2$ is the ground state. The interaction of phonons with the isospin can be written as an effective ``spin-orbit'' type Hamiltonian \cite{Ray1967,Capellmann1991,Ioselevich1995,sheng:2006,Kagan2008,Wang2009,zhang:2014}, \begin{equation}\label{eq:spinphonon} H^{\mathrm{s-ph}} = \sum\limits_{\alpha n} k_{\alpha n} \mathbf{J}_\alpha \cdot \mathbf{L}_{\alpha n}, \end{equation} where $\mathbf{J}_\alpha$ is the total isospin of unit cell $\alpha$, $\mathbf{L}_{\alpha n}$ is the phonon angular momentum generated by mode $n$, and $k_{\alpha n}$ is the coupling coefficient. The index $\alpha$ runs over all unit cells of the crystal and $n$ over all phonon modes. For optical phonons at the Brillouin-zone center, the phonon angular momentum is homogeneous across unit cells and we can drop the index $\alpha$. It is given by $\mathbf{L}_n=\mathbf{Q}_n\times\dot{\mathbf{Q}}_n$, where $\mathbf{Q}_n=(Q_{na},Q_{nb},0)$ contains the normal mode coordinates of the two orthogonal components of a doubly degenerate phonon mode, $Q_{na}$ and $Q_{nb}$, in the $ab$ plane of the crystal.
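As a simple worked special case fixing the scaling of this quantity: for a circularly polarized mode with $Q_{na}=Q_0\cos(\Omega t)$ and $Q_{nb}=\pm Q_0\sin(\Omega t)$, the phonon angular momentum is constant in time,
\begin{equation*}
\mathbf{L}_n=\mathbf{Q}_n\times\dot{\mathbf{Q}}_n=\pm Q_0^2\,\Omega\,\mathbf{e}_z,
\end{equation*}
with $\mathbf{e}_z$ the unit vector along the $c$ axis. It therefore grows quadratically with the phonon amplitude and reverses sign with the handedness of the motion.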
We can further treat the isospin in a mean-field approximation and replace $\mathbf{J}_\alpha$ by the ensemble average, $\Braket{\mathbf{J}_\alpha}$. Taking into account only the isospin component perpendicular to the polarization plane of the doubly degenerate phonon modes, the ensemble average of the isospin is $\Braket{\mathbf{J}_\alpha}=2|\mathbf{J}|\mathbf{e}_z(\Braket{n_{-J}}-\Braket{n_J})$, where $\mathbf{e}_z$ is a unit vector along the $c$ axis of the crystal. $\mathbf{J}$ is the isospin of a single cerium ion, of which there are two per unit cell. $\Braket{n_{\pm J}}$ is the Fermi-Dirac distribution describing the occupation of the ground-state Kramers doublet. The magnetic moment per unit cell, $\mathbf{m}$, is then given by \begin{equation}\label{eq:magnetization} \mathbf{m} = g_J \mu_B \Braket{\mathbf{J}_\alpha}=2 g_J \mu_B \sqrt{J(J+1)} \mathbf{e}_z \left(\Braket{n_{-J}}-\Braket{n_J}\right), \end{equation} where $g_J$ is the Land\'{e} factor. The theoretical value of the prefactor in Eq.~(\ref{eq:magnetization}) for the $m_J=\pm5/2$ ground-state doublet is $g_{\pm5/2} = g_{J} \sqrt{J(J+1)} = 2.54$, which is reasonably close to the experimental value of 2.02 \cite{Thalmeier1977}, showing that there is some quenching of orbital angular momentum. We can now rewrite Eq.~(\ref{eq:spinphonon}) in terms of the magnetic moment, yielding \begin{equation}\label{eq:newspinphonon} H^{\mathrm{s-ph}} = \sum\limits_{n} K_{n} \mathbf{m} \cdot \mathbf{L}_n, \end{equation} where we redefined the coupling as $K_n \mathbf{m} = k_{\alpha n}\Braket{\mathbf{J}_\alpha}$. We now look at the influence of the interaction on the phonons one mode at a time and therefore drop the index $n$ in the following. The spin-phonon coupling modifies the off-diagonal terms of the dynamical matrix as $\mathbf{D}(m) = \mathbf{D}^{(0)} + i\mathbf{D}^{(1)}(m)$, where $m=|\mathbf{m}|$ \cite{anastassakis:1972,Holz1972,schaack:1976,schaack:1977,dzyaloshinskii:2009,riseborough:2010,zhang:2014,juraschek2:2017,Juraschek2020_3,Juraschek2020_5}. (For details, see the Appendix.) As a result, the frequencies of right- and left-handed circular polarizations, $\Omega_\pm$, of the doubly degenerate phonon mode split, \begin{equation}\label{eq:phononsplittinglinear} \Omega_\pm(m) = \Omega_0\sqrt{1\pm\frac{2 K m}{\Omega_0}} \approx \Omega_0 \pm K m, \end{equation} where $\Omega_0$ is the eigenfrequency of the doubly degenerate phonon mode. Without an external magnetic field, the energy levels of the ground-state Kramers doublet are degenerate, there is no net magnetic moment per unit cell, and the phonon frequencies in Eq.~(\ref{eq:phononsplittinglinear}) remain degenerate. Applying a magnetic field, $B\parallel c$, to the paramagnet splits the ground-state doublet, $\Delta E = E_{-5/2}-E_{5/2} = 2 g_{\pm5/2} \mu_B B$, and induces a magnetization of the form (see the Appendix for details) \begin{eqnarray}\label{eq:inducedmagnetization} m & = & 2g_{\pm5/2}\mu_B \tanh\left(\frac{g_{\pm5/2} \mu_B B}{2k_B T}\right). \end{eqnarray} Inserting Eq.~(\ref{eq:inducedmagnetization}) into (\ref{eq:phononsplittinglinear}), the frequency splitting yields \begin{equation}\label{eq:phononsplittingfull} \Delta\Omega(B) = 4 K g_{\pm5/2}\mu_B \tanh\left(\frac{g_{\pm5/2} \mu_B B}{2k_B T}\right).
\end{equation} The prefactor in Eq.~(\ref{eq:phononsplittingfull}) directly corresponds to the saturation splitting, $\Delta\Omega_s=4 K g_{\pm5/2}\mu_B$, and we can reciprocally extract the spin-phonon coupling from experimentally measured phonon splittings, $K=\Delta\Omega_s/(4 g_{\pm5/2}\mu_B)$. We now look at the inverse effect the interaction has on the magnetization, when the phonons are circularly driven with an ultrashort laser pulse. The phonon angular momentum acts as an effective magnetic field, $\mathbf{B}$, \begin{equation}\label{eq:magneticfield} \mathbf{B} = \partial H^\mathrm{s-ph} / (\partial \mathbf{m}) = K \mathbf{L}, \end{equation} where $V_c$ is the volume of the unit cell. Phenomenologically, this interaction is a phonon analog of the inverse Faraday effect in optics \cite{Juraschek2020_3,Juraschek2020_5}, which is known to induce magnetizations in paramagnets \cite{vanderziel:1965,Pershan1967,reid:2010,Mikhaylovskiy2012}. If the spins reacted instantaneously to the effective magnetic field, the magnetization could be described statically by Eq.~(\ref{eq:inducedmagnetization}). From experiments on the optical inverse Faraday effect, it is known that this static limit of spin response holds when the driving pulse is on the order of nanoseconds \cite{vanderziel:1965,Pershan1967}. For femtosecond pulse durations, additional diamagnetic effects from the unquenching of electronic orbital moments come into play \cite{reid:2010,popova:2011,Popova2012,Mikhaylovskiy2012}, which cannot be described in the thermodynamic limit. Coherently driven phonons evolve over several picoseconds, which is also the timescale that spins and phonons have been shown to equilibrate through effective magnetic fields \cite{Maehrlein2018}. We therefore apply a rate-equation model to describe the dynamics of the spin population of the $m_J=\pm5/2$ ground-state doublet, $n_{\pm J}$ \cite{Breuer2003, Blum:1433745}, \begin{eqnarray} \partial_t n_{J} & = & -\gamma_{J}(\Delta E) n_{J} + \gamma_{-J}(\Delta E) n_{-J}, \label{eq:rateequation1}\\ \partial_t n_{-J} & = & -\gamma_{-J}(\Delta E) n_{-J} + \gamma_{J}(\Delta E) n_{J}, \label{eq:rateequation2} \end{eqnarray} where the decay rates of spins in the respective states are described by $\gamma_{-J} = \eta_0\Delta E N(\Delta E)$ and $\gamma_{J} = \eta_0\Delta E (N(\Delta E)+1) $, where $N(\Delta E)$ is the Bose-Einstein distribution, $\eta_0=\gamma_0/(k_B T)$, and $\gamma_0$ is the decay rate for zero level splitting, $\gamma_{\pm J}(\Delta E \rightarrow 0)=\gamma_0$. For coherently excited phonons, $\mathbf{Q}$ can be treated as semi-classical field amplitude \cite{merlin:1997,Dekorsy2000,Forst2008,subedi:2014}, which we obtain by solving the equation of motion \begin{equation}\label{eq:phononeom} \ddot{\mathbf{Q}} + 2\kappa\dot{\mathbf{Q}} + \Omega^2_0\mathbf{Q} = Z \mathbf{E}(t). \end{equation} Here, $\kappa$ is the linewidth of the phonon mode, $\Omega_0$ is its eigenfrequency, and $Z=\sum_m Z^\ast_m \mathbf{q}_{m}/\sqrt{\mathcal{M}_m}$ is its mode effective charge, where $Z^\ast_m$ is the Born effective charge tensor, $\mathbf{q}_{m}$ is the eigenvector, and $\mathcal{M}_m$ is the atomic mass of ion $m$. The sum runs over all ions in the unit cell. 
We model the circularly polarized terahertz pulse as $\mathbf{E}(t)=(E(t),E(t-2\pi/(4\omega_0)),0)/\sqrt{2}$, where $E(t) = E_0 \exp(-t^2/(2(\tau/\sqrt{8\ln2})^2)) \cos(\omega_0 t)$, $E_0$ is the peak electric field, $\omega_0$ is the center frequency, and $\tau$ is the full width at half maximum duration of the pulse. Here, the two perpendicular components of the doubly degenerate phonon mode are excited with a quarter-period difference, resulting in circular polarization. As light couples to phonon modes close to the center of the Brillouin zone, we may neglect any wavevector dependence in Eq.~(\ref{eq:phononeom}). \begin{figure*}[t] \centering \includegraphics[scale=0.11]{figure_phonondynamics.pdf} \caption{ Coherent phonon dynamics and effective magnetic fields. (a) Time evolutions of the infrared-active 5.9 and 4.8~THz $E_{1u}$ phonon amplitudes, $Q_a$, in response to the excitation by a circularly polarized terahertz pulse with a full width at half maximum duration of $\tau=350$~fs and a fluence of 10~mJ/cm$^2$. The evolutions of the $Q_b$ components (not shown) are shifted by a quarter period, respectively. The carrier envelope of the terahertz pulse is shown schematically. (b) Time evolutions of the phonon-induced effective magnetic fields, $B$, acting on the paramagnetic spins. (c) Linear scaling of the effective magnetic fields with the fluence, $F$, of the terahertz pulse. The shaded area marks the range of magnetic fields that can be achieved through commonly found spin-phonon coupling strengths. } \label{fig:dynamics} \end{figure*} \section{Phonon-induced effective magnetic fields and magnetizations} We extract the magnitude of the spin-phonon coupling from experimental data of the phonon-frequency splitting according to Eq.~(\ref{eq:phononsplittingfull}), $K=\Delta\Omega_s/(4g_{\pm5/2}\mu_B)$. In the rare-earth trihalides, splittings of the Raman-active modes range between 0.3~THz and 0.75~THz \cite{schaack:1976,schaack:1977}. Because the infrared-active modes change the local symmetry of the magnetic cerium ion in the same way, we expect a similar strength of the spin-phonon coupling as for the Raman-active modes and use an average of the experimentally found values of $\Delta\Omega_s/(2\pi) = 0.5$~THz. Note that this splitting is several orders of magnitude larger than the one induced by the magnetic moments of phonons in the phonon Zeeman effect \cite{Rebane:1983,juraschek2:2017,Juraschek2019,Dunnett2019}, which we can therefore neglect here. In the following, we evaluate the effective magnetic fields produced by the two doubly degenerate infrared-active $E_{1u}$ modes in CeCl$_3$ with eigenfrequencies of 5.9 and 4.8~THz. We find the mode effective charges of these modes to be $0.24e$ and $0.66e$, respectively, where $e$ is the elementary charge. For details of the ab initio calculations of the phonon eigenfrequencies and eigenvectors, and the Born effective charges, see the Appendix and the corresponding references \cite{Gonze1997,Gonze1997_2,kresse:1996,kresse2:1996,phonopy,csonka:2009,Momma2011}. For the phonon linewidth, $\kappa$, we assume a phenomenological value of 5\%{} of the phonon frequency, which matches values typically found in rare-earth trihalides \cite{schaack:1976,schaack:1977}. Fig.~\ref{fig:dynamics} shows the coherent phonon dynamics following the excitation by a circularly polarized terahertz pulse with a duration of $\tau=350$~fs and a fluence of 10~mJ/cm$^2$, as described by Eq.~(\ref{eq:phononeom}).
The fluence $F$ is connected to the peak electric field and the duration of the pulse through $F=\sqrt{\pi/2}\,c_0\epsilon_0 E_0^2\,\tau/\sqrt{8\ln 2}$, where $c_0$ and $\epsilon_0$ are the speed of light and the vacuum permittivity. The center frequency, $\omega_0$, is chosen to be resonant with the eigenfrequencies of the respective phonon modes. In Fig.~\ref{fig:dynamics}(a), we show the evolution of the phonon amplitudes $Q_a$ according to Eq.~(\ref{eq:phononeom}). The phases of the $Q_b$ components are shifted by a quarter period, respectively. The maximum amplitude of the $E_{1u}(5.9)$ mode of $Q_a = 0.33$~\AA$\sqrt{\mathrm{amu}}$, where amu denotes the atomic mass unit, is roughly three times smaller than that of the $E_{1u}(4.8)$ mode of $Q_a = 1.1$~\AA$\sqrt{\mathrm{amu}}$ due to the smaller mode effective charge and higher phonon frequency. In Fig.~\ref{fig:dynamics}(b), we show the evolutions of the effective magnetic fields produced by the two phonon modes according to Eq.~(\ref{eq:magneticfield}). We obtain a maximum effective magnetic field of $B=2.9$~T for the $E_{1u}(5.9)$ mode and $27$~T for the $E_{1u}(4.8)$ mode. This order-of-magnitude difference comes from the quadratic scaling of the effective magnetic field with the phonon amplitudes. The direction of the effective magnetic field is determined by the handedness of the phonon circular polarization, which can straightforwardly be controlled by the polarization of the pulse. We now vary the strength of the excitation. We show the maximum amplitudes of the effective magnetic fields for a range of experimentally accessible fluences of the terahertz pulse \cite{Liu2017} in Fig.~\ref{fig:dynamics}(c), where we fix the pulse duration at $\tau = 350$~fs. The effective magnetic fields depend linearly on the fluence and reach 11.4~T for the $E_{1u}(5.9)$ mode and 107~T for the $E_{1u}(4.8)$ mode at a fluence of 40~mJ/cm$^2$. In order to ensure experimental feasibility, we evaluate the atomic displacements along the eigenvectors of the phonon modes. The Lindemann stability criterion predicts melting of the crystal lattice when the root mean square displacements reach between 10\%{} and 20\%{} of the interatomic distance \cite{Lindemann1910}. We extract the maximum root mean square displacements as $d=\mathrm{max}_{n} | \mathbf{d}_{n}/\sqrt{2} |$, where $\mathbf{d}_{n} = \mathbf{q}_{n} Q_a(t)/\sqrt{\mathcal{M}_n}$ is the displacement of ion $n$. Even at fluences of 40~mJ/cm$^2$, the largest root mean square displacements of the chloride ions reach only 1.3\%{} of the interatomic distance of $2.97$~\AA{} for the $E_{1u}(5.9)$ mode and 3.8\%{} for the $E_{1u}(4.8)$ mode, well below the vibrational damage threshold. Note that other effects that are not accounted for here, e.g., Zener tunneling, may occur. At these high fields, nonlinear couplings between coherently excited infrared-active modes and other vibrational degrees of freedom come into play \cite{forst:2011,subedi:2014}. These modes do not contribute directly to the spin-phonon coupling, however, and we therefore neglect the effect of nonlinear phonon-phonon coupling in this context. Furthermore, the centrosymmetry of CeCl$_3$ prevents nonlinear optical effects, such as second-harmonic generation, from occurring at high fluences. \begin{figure}[b] \centering \includegraphics[scale=0.0825]{figure_magnetization.pdf} \caption{ Magnetization, $M=m/V_c$, induced by the $E_\mathrm{1u}(4.8)$ mode when excited by a circularly polarized terahertz pulse with a duration of 350~fs.
(a) Time evolution of $M$, varying with the decay rate, $\gamma_0$, for a fluence of 10~mJ/cm$^2$ at 4~K. The dashed line marks the saturation magnetization. (b) Fluence dependence of $M$, varying with $\gamma_0$. (c) Time evolution of $M$, varying with temperature, for a fluence of 10~mJ/cm$^2$ and $\gamma_0=1$~THz. Shown are graphs for 2~K, the boiling temperatures of helium (4~K), hydrogen (20~K), and nitrogen (77~K), as well as for room temperature (295~K). (d) Fluence dependence of $M$, varying with temperature. } \label{fig:magnetizationdependence} \end{figure} Next, we look at the magnetization, $M=m/V_c$, that can be induced in CeCl$_3$ according to Eqs.~(\ref{eq:magnetization}), (\ref{eq:rateequation1}), and (\ref{eq:rateequation2}). In Fig.~\ref{fig:magnetizationdependence}, we show the evolution of the magnetization in response to the effective magnetic field generated by the $E_\mathrm{1u}(4.8)$ mode when excited with a resonant terahertz pulse with a duration of 350~fs and a fluence of 10~mJ/cm$^2$, as well as the dependence of the magnetization on the fluence of the laser pulse. In Figs.~\ref{fig:magnetizationdependence}(a) and \ref{fig:magnetizationdependence}(b), we vary the decay rate, $\gamma_0$, while keeping the temperature fixed at 4~K, and in Figs.~\ref{fig:magnetizationdependence}(c) and \ref{fig:magnetizationdependence}(d) we vary the temperature, while keeping $\gamma_0=0.1$~THz fixed. For fast decay rates and at low temperatures, even small fluences of $<$10~mJ/cm$^2$ are sufficient to fully polarize the spins of the material, yielding a transient saturation magnetization of $M=4~\mu_B/V_c$. The slower the decay rate and the higher the temperature, the higher the fluence of the laser pulse has to be in order to induce a significant magnetization. The influence of the decay rate on the achievable magnetization is much larger than that of the temperature. The slowest decay rate of $\gamma_0=0.1$~MHz that we look at here corresponds to the nanosecond timescale, on which the thermodynamic picture of spin polarization (Eq.~(\ref{eq:inducedmagnetization})) is known to hold \cite{vanderziel:1965,Pershan1967}. Therefore, the corresponding value of the induced magnetization of $M\approx 0.1~\mu_B/V_c$ from Fig.~\ref{fig:magnetizationdependence}(a) can be regarded as a lower bound. \section{Discussion} Our predictions can be experimentally realized in state-of-the-art tabletop setups that provide terahertz pulses in the required frequency range \cite{Liu2017}, where the phonon-induced magnetization of the material can be probed by Faraday rotation measurements. Tuning the frequency of the terahertz pulse in and out of resonance with the phonon modes can distinguish a possible contribution of the optical inverse Faraday effect to the magnetization (which should have negligible frequency dependence in this spectral range) from the phonon-induced mechanism. In the future, explicit calculations of spin-phonon decay rates \cite{Lunghi2019} will help to further quantify the timescale of the effect. While we have chosen CeCl$_3$ as our model system, the mechanism described here should be general to the entire class of rare-earth trihalides \cite{schaack:1976,schaack:1977} and possibly to $4f$ magnets in general, as similar magnitudes of the spin-phonon coupling have been found in ferromagnetic LiTbF$_4$ \cite{Dorfler1983} and paramagnetic Tb$_3$Ga$_5$O$_{12}$ \cite{sheng:2006,zhang:2014}.
A future question to answer is whether spin-phonon couplings in $3d$ magnets can reach similar magnitudes to those in $4f$ magnets. Potential giant phonon-induced effective magnetic fields in the paramagnetic phases of $3d$ magnets would directly impact a large variety of materials that are already being used in magnetoelectronic technologies \cite{Bader2010,Spaldin2019}. \begin{acknowledgments} We are grateful to C. Tzschaschel (Harvard University), J. Lehmann, S. Pal and N. Spaldin (ETH Zurich), and M. Fechner, A. Disa, A. von Hoegen and A. Cavalleri (MPI Hamburg) for useful discussions. This project was supported by the Swiss National Science Foundation (SNSF) under Project ID 184259 and the DARPA DRINQS Program under Award No. D18AC00014. P.N. is a Moore Inventor Fellow and gratefully acknowledges support from the Gordon and Betty Moore Foundation through Grant No. GBMF8048. Calculations were performed at the National Energy Research Scientific Computing Center (NERSC), supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. \end{acknowledgments} \section*{Appendix A: Phonon-frequency splitting and magnetization in $4f$ paramagnets} \noindent{}We start from the spin-phonon interaction of Eq.~(\ref{eq:newspinphonon}) derived in the main text, \begin{equation}\label{eq:App_newspinphonon} H^{\mathrm{s-ph}} = \sum\limits_{n} K_{n} \mathbf{m} \cdot \mathbf{L}_n, \end{equation} where $K_n$ is the spin-phonon coupling and $\mathbf{L}_n=\mathbf{Q}_n\times\dot{\mathbf{Q}}_n$ is the phonon angular momentum of mode $n$, with $\mathbf{Q}_n=(Q_{na},Q_{nb},0)$ containing the normal mode coordinates of the two orthogonal components of a doubly degenerate phonon mode, $Q_{na}$ and $Q_{nb}$, in the $ab$ plane of a crystal. The magnetic moment per unit cell, $\mathbf{m}$, derived in Eq.~(\ref{eq:magnetization}) of the main text, is given by \begin{equation}\label{eq:App_magnetization} \mathbf{m} = 2 g_J \mu_B \sqrt{J(J+1)} \mathbf{e}_z \left(\Braket{n_{-J}}-\Braket{n_J}\right), \end{equation} where $\Braket{n_{\pm J}}$ is the Fermi-Dirac distribution for the $m_J=\pm5/2$ ground-state Kramers doublet states, $g_J$ is the Land\'{e} factor and $\mathbf{e}_z$ is a unit vector along the $c$ axis of the crystal. The theoretical value of the prefactor in Eq.~(\ref{eq:App_magnetization}) for the $m_J=\pm5/2$ ground-state Kramers doublet is $g_{\pm5/2} = g_{J} \sqrt{J(J+1)} = 2.54$, which is reasonably close to the experimental value of 2.02 \cite{Thalmeier1977}. We now look at the influence of the coupling on the phonons one mode at a time and therefore drop the index $n$ in the following. The phonon Lagrangian, $\mathcal{L}$, including the interaction in Eq.~(\ref{eq:App_newspinphonon}), can be written as \begin{eqnarray}\label{eq:App_Lagrangian} \mathcal{L}(Q,\dot{Q}) & = & \frac{1}{2} \dot{Q}_a^2 + \frac{1}{2} \dot{Q}_b^2 - \frac{\Omega^2_0}{2} Q_a^2 - \frac{\Omega^2_0}{2} Q_b^2 \nonumber\\ & & - K m (Q_a \dot{Q}_b - Q_b \dot{Q}_a), \end{eqnarray} where $Q_a$ and $Q_b$ are the normal mode coordinates of the $ab$ component of the doubly degenerate phonon mode, $\Omega_a=\Omega_b\equiv\Omega_0$ is the eigenfrequency, and $m=|\mathbf{m}|$. In frequency space, the Lagrangian can be transformed to \begin{equation}\label{eq:App_frequencyspace} \mathcal{L}(Q_\Omega,Q_\Omega^*) = \mathbf{Q}_\Omega \mathbf{D} \mathbf{Q}_\Omega^*, \end{equation} where $\mathbf{Q}_\Omega=(Q_{\Omega,a},Q_{\Omega,b},0)$, and $\mathbf{D}$ is the dynamical matrix. 
The spin-phonon coupling modifies the dynamical matrix as $\mathbf{D}(m) = \mathbf{D}^{(0)} + i\mathbf{D}^{(1)}(m)$ \cite{anastassakis:1972,Holz1972,schaack:1976,schaack:1977,dzyaloshinskii:2009,riseborough:2010,zhang:2014,juraschek2:2017,Juraschek2020_3,Juraschek2020_5}. In order to evaluate the effect of the spin-phonon coupling on the phonon frequencies, we compute the determinant of the dynamical matrix, which contains the spin-phonon coupling in its off-diagonal components, \begin{equation}\label{eq:App_determinant} \mathrm{det}\,\mathbf{D} = \left| \begin{array}{cc} \Omega^2-\Omega^2_0 & -2i\Omega K m \\ 2i\Omega K m & \Omega^2-\Omega^2_0 \end{array} \right|. \end{equation} Setting the determinant to zero yields Eq.~(\ref{eq:phononsplittinglinear}) from the main text, describing a splitting of the frequencies of right- and left-handed circular polarizations, $\Omega_\pm$, of the doubly degenerate phonon mode, \begin{equation}\label{eq:App_phononsplittinglinear} \Omega_\pm(m) = \Omega_0\sqrt{1\pm\frac{2 K m}{\Omega_0}} \approx \Omega_0 \pm K m. \end{equation} Without an external magnetic field, the energy levels of the $m_J=\pm 5/2$ ground-state Kramers doublet are degenerate, there is no net magnetic moment per unit cell, and the phonon frequencies in Eq.~(\ref{eq:App_phononsplittinglinear}) remain degenerate. Applying a magnetic field, $B\parallel c$, to the paramagnet splits the ground-state doublet, $\Delta E = E_{-5/2}-E_{5/2} = 2 g_{\pm5/2} \mu_B B$, and induces a magnetization given by \begin{eqnarray}\label{eq:App_inducedmagnetization} m & = & 2g_{\pm5/2}\mu_B \left(\Braket{n_{-5/2}} - \Braket{n_{5/2}}\right) \nonumber\\ & = & 2g_{\pm5/2} \mu_B \Bigg(\frac{1}{\exp\left(-\frac{g_{\pm5/2} \mu_B B}{k_B T}\right)+1} \nonumber\\ & & - \frac{1}{\exp\left(\frac{g_{\pm5/2} \mu_B B}{k_B T}\right)+1}\Bigg) \nonumber\\ & = & 2g_{\pm5/2}\mu_B \tanh\left(\frac{g_{\pm5/2} \mu_B B}{2k_B T}\right), \end{eqnarray} which is Eq.~(\ref{eq:inducedmagnetization}) from the main text. In the above equation, we have used the relation \begin{equation} \frac{1}{\exp(-x)+1}-\frac{1}{\exp(x)+1}=\tanh\left(\frac{x}{2}\right). \end{equation} Inserting Eq.~(\ref{eq:App_inducedmagnetization}) into (\ref{eq:App_phononsplittinglinear}) yields Eq.~(\ref{eq:phononsplittingfull}) of the main text, \begin{equation}\label{eq:App_phononsplittingfull} \Delta\Omega(B) = 4 K g_{\pm5/2}\mu_B \tanh\left(\frac{g_{\pm5/2} \mu_B B}{2k_B T}\right), \end{equation} which allows us to extract the spin-phonon coupling from experimentally measured phonon splittings, $K=\Delta\Omega_s/(4 g_{\pm5/2}\mu_B)$, where $\Delta\Omega_s$ is the saturation phonon-frequency splitting. \section*{Appendix B: Computational details} We calculate the phonon eigenfrequencies and eigenvectors, and the Born effective charges from first principles, using the density functional perturbation theory formalism \cite{Gonze1997,Gonze1997_2} as implemented in the Vienna ab-initio simulation package (\textsc{vasp}) \cite{kresse:1996,kresse2:1996} and the frozen-phonon method as implemented in the \textsc{phonopy} package \cite{phonopy}. We use the \textsc{vasp} projector augmented wave (PAW) pseudopotentials with valence electron configurations Ce ($5s^2 5p^6 5d^1 6s^2 4f^1$) and Cl ($3s^2 3p^5$) and converge the Hellmann-Feynman forces to 25~$\mu$eV/\AA. For the 8-atom unit cell, we use a plane-wave energy cut-off of 600~eV and a 4$\times$4$\times$7 $\Gamma$-centered $k$-point mesh to sample the Brillouin zone.
For the exchange-correlation functional, we choose the Perdew-Burke-Ernzerhof revised for solids (PBEsol) form of the generalized gradient approximation (GGA) \cite{csonka:2009}. We perform nonmagnetic calculations to obtain the structural and dynamical properties of CeCl$_3$ in order to avoid paramagnetic supercell calculations with $4f$ electrons. Within this treatment, the lattice constants of our fully relaxed hexagonal structure (space group $P6_3/m$, point group $6/m$) of $a=4.21$~\AA{} and $c=7.38$~\AA{} with a unit-cell volume of $V=199$~\AA$^3$ agree reasonably well with experimental values \cite{Zachariasen1948}. Furthermore, our calculated phonon eigenfrequencies match the experimental values reasonably well \cite{schaack:1977,Thalmeier1978}, with a maximum deviation of $\sim$10\%{}. Crystal structures are visualized using \textsc{vesta} \cite{Momma2011}.
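As a quick numerical illustration of the formulas derived in Appendix A, the following Python sketch evaluates the induced magnetic moment of Eq.~(\ref{eq:App_inducedmagnetization}) and the resulting phonon-frequency splitting of Eq.~(\ref{eq:App_phononsplittingfull}), using the experimental value $g_{\pm5/2}=2.02$ and the average saturation splitting $\Delta\Omega_s/(2\pi)=0.5$~THz quoted in the main text; the choice of units is ours, and the splitting can be checked to saturate at $\Delta\Omega_s$ for large fields.

\begin{verbatim}
import numpy as np

mu_B = 5.7883818060e-5          # Bohr magneton [eV/T]
k_B  = 8.617333262e-5           # Boltzmann constant [eV/K]
g    = 2.02                     # experimental g_{+-5/2} for CeCl3 (Appendix A)
dOmega_s = 2 * np.pi * 0.5e12   # saturation splitting [rad/s]
K = dOmega_s / (4 * g * mu_B)   # spin-phonon coupling, K = dOmega_s/(4 g mu_B)

def m_induced(B, T):
    # Magnetic moment per unit cell [eV/T], Eq. (App_inducedmagnetization)
    return 2 * g * mu_B * np.tanh(g * mu_B * B / (2 * k_B * T))

def splitting(B, T):
    # Delta Omega(B) = 2 K m(B, T); saturates at dOmega_s for large B
    return 2 * K * m_induced(B, T)

for B in (1.0, 10.0, 100.0):    # applied field in tesla, at T = 4 K
    print(f"B = {B:5.1f} T:  m = {m_induced(B, 4.0)/mu_B:5.3f} mu_B, "
          f"splitting = {splitting(B, 4.0)/(2*np.pi*1e12):5.3f} THz")
\end{verbatim}

At $T=4$~K the moment saturates at $2g_{\pm5/2}\mu_B \approx 4\,\mu_B$ per unit cell, consistent with the transient saturation magnetization quoted in the main text.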
\section{Introduction} Recently there have been many efforts to imbue deep-learning models with the ability to perform causal inference. This has been motivated primarily by the inability of traditional correlative models to make predictions on interventional and counterfactual questions \cite{pcrbook, pearlbook}, as well as by the explainability of causal graphical models. These efforts have largely run in parallel to the developing trend of exploiting the non-local properties of graph neural networks \cite{DBLP:journals/corr/abs-1711-07971} to generate powerful and efficient representations of high-dimensional data. In this note we dichotomize the task of causal inference into a two-step process, illustrated in Figure \ref{fig:2step}. The first step involves inferring the graphical structure of a causal model associated with a given observational data set as a directed acyclic graph (DAG). Inferring the structure of causal DAG's from observational data has a long history, and there have been many proposed techniques, including constraint-based \cite{pcrbook, pearlbook, Zhang2008-ZHAOTC-3, 10.5555/2074158.2074204} and score-based methods \cite{10.1007/BFb0028180, Chickering2002OptimalSI, DBLP:journals/corr/abs-1302-3567, heckarticle}, recently developed masked-gradient methods \cite{zheng2018dags, zheng2019learning, DBLP:journals/corr/abs-1904-10098, ng2019graph, ng2019masked, fang2020low, ng2020role}, as well as hybrid methods \cite{DBLP:journals/corr/abs-1906-02226}. Notable novel alternatives also include methods based on reinforcement-learning \cite{DBLP:journals/corr/abs-1906-04477}, adversarial networks \cite{kalainathan2018structural} and restricted Boltzmann machines \cite{Sokolovska2020UsingUD}. Since the task of causal structural discovery is merely a means to an end for this work, we (rather arbitrarily) adopt the masked-gradient approach due to its parsimonious integration with the neural network based architectures for SEM-learning that are the subject of this note.\footnote{codebase: \url{http://github.com/q1park/spacetime}} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{twostep.PNG}\vspace{-0.75cm} \end{center} \caption{The causal inference steps in this note begin with existing DAG structure-learning algorithms to infer causal structures in latent representations of data. Using the learned DAG, neural networks are used to estimate the response of conditional probabilities under various graphical interventions.} \label{fig:2step} \end{figure} For the second step of causal inference, we develop a novel autoencoding architecture that applies generative moment-matching neural-networks \cite{DBLP:journals/corr/ZhaoSE17b, DBLP:journals/corr/RenLLZ16} to the edges of the learned causal graph, in order to estimate the functional dependence of the causally related observables as a structural equation model (SEM). Since their inception, generative moment-matching networks have been used for various tasks \cite{diane2017a, gaoproceed, briol2019statistical, lotfollahi2019conditional} related to the estimation of joint and conditional probability distributions, but to our knowledge this is their first application to an explicit causal graph structure. Our aim is to develop a fully unsupervised formalism that starts from purely observational tabular data, and ends with a robust automated sampling procedure that generates an accurate functional estimate of conditional probability distributions for the associated SEM.
Existing techniques for Bayesian sampling on the latent space of generative models are also numerous, including Monte Carlo and gradient-optimization based methods \cite{ahn2012bayesian, 2001SPIE.4322..456H, DBLP:journals/corr/abs-1812-03285}. Much of the present work has been inspired by several recent efforts to develop generative models that encode causal structure. For example, in \cite{DBLP:journals/corr/abs-1709-02023} the authors develop specific conditional adversarial loss functions for learning multi-step causal relations. Their goals are similar to those described in this note, with a focus on linear relations within high-dimensional image vectors. In \cite{yang2020causalvae} the authors use supervised learning to endow the latent space distributions of a variational autoencoder with a causal graphical structure, with the aim of intervening on this latent space to control specific properties of their feature maps. In this note we perform experiments on simple low-dimensional feature maps, and examine the performance of our autoencoder in generating accurate conditional probability distributions from complex non-linear multi-step causal structures. These causal structures are assumed to exist as relations among dimensions in the latent representation of the data. Thus in principle, the methods described here should also be applicable to more complex feature maps such as those generated by image and language data. However, experimentation on these high-dimensional data types is beyond the scope of this note. In Section \ref{sec:bkg} we give a brief review of causal graphs and describe a vectorized formulation for structural equation models that is suited for deep-learning applications. In Section \ref{sec:exp} we give the results of our experiments on causal structure learning using existing masked gradient methods. We then describe our algorithm for SEM-learning and provide results on its performance. In Section \ref{sec:disc} we conclude with a discussion on possible applications and future directions for this work. \raggedbottom \section{Background} \label{sec:bkg} \subsection{Causal Graphs} The identification of a causal effect between two variables is equivalent to measuring the response $\delta_0$ of some endogenous variable $X_0$ with respect to a controlled change $\delta_1$ in some exogenous variable $X_1$. If all of the variables are controlled, then the causal effect can be directly inferred via the conditional probability distribution $P(X_0 +\delta_0 | X_1+\delta_1)$. Inferring causal effects from uncontrolled observational data is challenging due to the existence of confounding variables $S_n$ which generate spurious correlations whose effects on the conditional probability $P(X_0 (S_n) | X_1 (S_n))$ may be statistically indistinguishable from true causal effects. This is illustrated diagrammatically in Figure \ref{fig:spurion}. Here we adopt the formalism of Pearl in which the effect of a controlled change in variable $X_1$ is represented on a causal graph by mutilating all of the arrows going into node $X_1$ as shown in Figure \ref{fig:intervention}.
The result is referred to as the {\it intervened}\footnote{For notational simplicity we use slashes to indicate graph-mutilated variables in conditional probabilities rather than Pearl's original notation of $P(X_0|{\rm do}(X_1))$.} conditional probability distribution $P(X_0|\slashed{X}_1) \sim P(X_0 +\delta_0 | X_1+\delta_1)$. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{spurion.PNG} \vspace{-0.75cm} \end{center} \caption{Integrating out a confounding common cause variable $S_n$ generates a spurious correlation via a correction to the conditional probability distribution $P(X_0 | X_1)$.} \label{fig:spurion} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{intervention.PNG} \vspace{-0.75cm} \end{center} \caption{Observing a controlled change to some variable $X_1$ requires removing the effects of any possible external influences. This is represented graphically by mutilating all in-going arrows into node $X_1$.} \label{fig:intervention} \end{figure} There exists a rich literature describing the necessary and sufficient conditions for statistical distinguishability between causal and correlative effects, as well as methods for estimating causal responses when these conditions are met \cite{pcrbook, pearlbook}. Although the necessary conditions are beyond the scope of this brief review, the sufficient conditions amount to a requirement that the subset of measured confounding variables must be {\it sufficiently complete} so as to provide adequate control over the causal effects. In particular, the requirement of {\it sufficient completeness} can be succinctly dichotomized into two cases known as the {\it back-door} and {\it front-door} criteria. The {\it back-door criterion} can be used to estimate the causal response on a pair of nodes $X_1 \rightarrow X_0$, given an observation of a set of confounding variables $S = \{ S_0, S_1 \}$ as shown in Figure \ref{fig:backfront}. The intervened conditional probability can then be computed via the back-door adjustment formula given in Equation \ref{eq:adjustback}. \begin{align} P(X_i | \slashed{X}_j = x) &= \displaystyle\int d s \, P(X_i | X_j = x, S=s) \, P(S=s) \label{eq:adjustback} \end{align} The {\it front-door criterion} can be used to estimate the causal response on a pair of nodes $X_2 \rightarrow X_0$ in situations where there exists a chain of causal influences $X_2 \rightarrow X_1 \rightarrow X_0$ as shown in Figure \ref{fig:backfront}. The intervened conditional probability can then be computed via the front-door adjustment formula given in Equation \ref{eq:adjustfront}. \begin{align} P(X_i | \slashed{X}_j = x) &= \displaystyle\int d s \, P(S=s | X_j = x) \displaystyle\int d x^\prime \, P(X_i | X_j = x^\prime, S=s) \, P(X_j = x^\prime) \label{eq:adjustfront} \end{align} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{backfrontdoor.PNG} \vspace{-0.75cm} \end{center} \caption{(Left) Given the sufficiently complete set of measured confounding variables $S = \{ S_0, S_1 \}$, the back-door adjustment formula estimates the causal effect of $X_1$ on $X_0$. A measurement of only the set $S = \{ S_0 \}$ would be insufficient due to the existence of an unblocked ``back-door'' path between the observables given by $X_1 \rightarrow S_1 \rightarrow S_0 \rightarrow S_2 \rightarrow X_0$.
(Right) If there exists a causal chain $X_2 \rightarrow X_1 \rightarrow X_0$, the front-door adjustment formula can be used to disentangle the causal effect of $X_2$ on $X_0$ from any measured or unmeasured confounding variables.} \label{fig:backfront} \end{figure} \subsection{Structural Equation Models} Structural equation models (SEM's) are a functional extension of causal graphical models in which the values of each node variable $X_{\mu}$ are determined as a function of its parent node variables $X_{{\rm pa}(\mu)}$ and noise $\xi_\mu$. Here we adopt a notation where each node in a causal graph with $V$ nodes is specified by a spacetime index $\mu = 1, ..., V$ and Einstein summation is assumed. The set of parent (child) nodes corresponding to $\mu$ is given by $X_{{\rm pa}(\mu)}$ ($X_{{\rm ch}(\mu)}$) as illustrated in Figure \ref{fig:pach}. The generic form for an SEM can then be expressed as shown in Equation \ref{eq:sem} \begin{equation} X_{\mu} = f \left( \xi_\mu, \, X_{ {\rm pa} (\mu) } \right) \label{eq:sem} \end{equation} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{pach.PNG} \vspace{-0.75cm} \end{center} \caption{Given some node in a causal graph $X_\mu$, we use $X_{{\rm pa}(\mu)}$ to refer to the set of all nodes that are parents of node $\mu$ and $X_{{\rm ch}(\mu)}$ to refer to the set of all nodes that are children of node $\mu$.} \label{fig:pach} \end{figure} If the contribution from noise is assumed to be additive, then each node variable $X_\mu$ can be expressed simply as a polynomial (or other) expansion in its parent nodes $X_{{\rm pa} (\mu)}$ as shown in Equation \ref{eq:polysem}. The leading order term in this expansion describes a linearized SEM, which is typically expressed in terms of a weighted graph adjacency matrix $W_{\mu \nu}$ in the form shown in Equation \ref{eq:linsem}. \begin{align} X_\mu &= -\xi_\mu + f \left( X_{ {\rm pa} (\mu) } \right) \nonumber \\ &\approx - \xi_\mu + \displaystyle\sum_{n=1}^\infty c_{n,{\rm pa} (\mu) } X_{ {\rm pa} (\mu) }^n \label{eq:polysem} \\ &\xrightarrow{\mathcal{O}(1)} - \xi_\mu + W_{\mu \nu} X_\nu \label{eq:linsem} \end{align} The linear SEM of Equation \ref{eq:linsem} has the unique property that its exact solution describes a generative model that predicts each variable from pure noise as shown in Equation \ref{eq:gensem}. The inverse operator can be expressed in closed-form as a degree-$d$ polynomial in terms of Cayley-Hamilton coefficients $c_n$, which describe the propagation of ancestral noise through the causal graph. Thus each node variable $X_\mu$ can be expressed as a linear combination of its noise $\xi_\mu$ and the noise of its $n^{\rm th}$ ancestors $\xi_{{\rm pa}_n (\mu)}$, as shown in Equation \ref{eq:noiseprop}. \begin{align} X_\mu &= \left( - \delta_{\mu \nu} + W_{\mu \nu} \right)^{-1} \xi_\nu \label{eq:gensem} \\ &= \left( - \delta_{\mu \nu} + \displaystyle\sum_{n=1}^d c_n W_{\mu \nu}^n \right) \xi_\nu \nonumber \\ &= - \xi_\mu + \displaystyle\sum_{n=1}^d c_n \, \xi_{{\rm pa}_n (\mu)} \label{eq:noiseprop} \end{align} The weighted adjacency matrix $W_{\mu \nu}$ serves the dual purpose of masking each node variable $X_\mu$ from its non-parent nodes through its zero-entries, while the non-zero entries define the strength of linear correlations between each pair of nodes in the causal graph. Unfortunately there is no standardized generalization to non-linear SEM's. 
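As a concrete illustration, the closed-form generative solution of Equation \ref{eq:gensem} can be sampled in a few lines of Python; the weighted adjacency matrix below is a hypothetical strictly upper-triangular (and hence acyclic) example rather than one of the graphs used in our experiments.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
V = 4

# Hypothetical strictly upper-triangular (hence acyclic) weighted adjacency
# matrix: W[mu, nu] weights the edge nu -> mu, so parents of mu have index > mu.
W = np.triu(rng.normal(0.0, 1.0, (V, V)), k=1)

# Closed-form generative solution of Eq. (eq:gensem): X = (-I + W)^{-1} xi,
# which propagates the ancestral noise through the causal graph.
xi = rng.normal(0.0, 1.0, (V, 10000))
X = np.linalg.solve(-np.eye(V) + W, xi)

# Consistency check against the defining linear SEM of Eq. (eq:linsem).
assert np.allclose(X, -xi + W @ X)
\end{verbatim}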
One natural possibility is to define a separate weighted adjacency matrix $W_{\mu \nu}^{(n)}$ for each order $n$ in a functional expansion like the polynomial example in Equation \ref{eq:polysem}. While this interpretation nicely generalizes the linear approximation, its computational complexity is unbounded, and there have been various other suggested interpretations for the adjacency matrix weights, related to the mutual information between parent-child node variables \cite{fang2020low}. In this note we develop an alternative formalism for describing non-linear SEM's that is agnostic to the interpretation of the weights in the adjacency matrix. We thus define a causal mask matrix $M_{\mu \nu}$ which is just the unweighted adjacency matrix as shown in Equation \ref{eq:maskmatrix}, where $\odot$ refers to an element-wise multiplication. \begin{align} M_{\mu \nu} \equiv | W_{\mu \nu} | \odot \frac{1}{|W_{\mu \nu}| + \epsilon} \label{eq:maskmatrix} \end{align} We then define a procedure for extracting the data for the parents of each node in the following way. We first lift each node variable into an auxiliary dimension $\dot{\mu} = 1, ..., V$. Index contraction of the spacetime index with the mask matrix $M_{\mu \nu}$ then produces a vector $X_{{\rm pa} (\mu)}^{\dot{\mu}}$ for each node $\mu$ whose index in the auxiliary dimension contains its parent-node data as shown in Equation \ref{eq:nodehot}. This vectorized parental masking procedure is suitable for expressing functions of sets of parent-nodes in a generalized SEM as $X_{\mu}^{\dot{\mu}} = f ( \xi_\mu, \, X_{ {\rm pa} (\mu) }^{\dot{\mu}} )$. \begin{align} X_\mu &~\longrightarrow~ X_\mu^{\dot{\mu}} \equiv X_\mu \otimes \delta_\mu^{\dot{\mu}} = \quad \text{\normalsize $\mu$}\mymatrix{ \begin{pmatrix} X_V & 0 & \cdots & 0 & 0 \\ 0 & X_{V-1} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & X_1 & 0 \\ 0 & 0 & \cdots & 0 & X_0 \end{pmatrix} } \nonumber \\ &~\longrightarrow~ M_{\mu \nu} X_\nu^{\dot{\mu}} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 0 \\ X_V & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ X_V & X_{V-1} & \cdots & 0 & 0 \\ X_V & X_{V-1} & \cdots & X_{1} & 0 \end{pmatrix} = \begin{pmatrix} X_{{\rm pa} (V)}^{\dot{\mu}} \\ ~~\ X_{{\rm pa} (V-1)}^{\dot{\mu}} \\ \vdots \\ X_{{\rm pa} (1)}^{\dot{\mu}} \\ X_{{\rm pa} (0)}^{\dot{\mu}} \end{pmatrix} = X_{{\rm pa} (\mu)}^{\dot{\mu}} \label{eq:nodehot} \end{align} \section{Experiments} \label{sec:exp} \subsection{Causal Structure Learning} The algorithms for SEM-learning described in this note rely on first inferring the correct causal graph structure for a given data set. Fortunately the last two years have seen exciting progress in applications of neural networks to the problem of causal graph structure-learning, particularly in the area of masked-gradient methods \cite{zheng2018dags, DBLP:journals/corr/abs-1904-10098, ng2019graph, fang2020low, ng2020role}. These methods center around an identity for acyclic weighted adjacency matrices, which was first derived in \cite{zheng2018dags} and is shown in Equation \ref{eq:acyclic}. This identity enables a re-formulation of acyclic graph-learning as a continuous optimization problem. Here again $\odot$ denotes element-wise multiplication. 
\begin{align} {\rm tr} \, e^{W \odot W} = {\rm tr} \, I \label{eq:acyclic} \end{align} The graph-learning network can then be constructed using an encoder/decoder framework with an objective function that attempts to minimize some reconstruction loss, subject to an acyclicity constraint $h=0$, where $h$ is a function of the weighted adjacency matrix given in Equation \ref{eq:acconstraint}. \begin{align} h(W) = - {\rm tr} \, I + {\rm tr} \, e^{W \odot W} = 0 \label{eq:acconstraint} \end{align} The original formulation for this continuous optimization, referred to as $\texttt{NO-TEARS}$ \cite{zheng2018dags}, uses a reconstruction loss inspired directly by the form of the linear SEM in Equation \ref{eq:linsem}. As illustrated in the first line of Table \ref{tab:structalgos}, the encoder $\mathcal{E}$ is just the identity function while the decoder $\mathcal{D}$ is an MLP that takes as input a weighted masked latent space vector $W \cdot Z$. \bgroup \def\arraystretch{1.5} \begin{table}[ht] \begin{tabular}{ccc} & Encoder & Decoder \\ \hline \texttt{NO-TEARS}: & \qquad $Z = X$ \qquad & \qquad $\widehat{X} = \mathcal{D}(W \cdot Z)$ \\ \texttt{GNN}: & \qquad $Z = (-I+W) \cdot \mathcal{E} (X)$ \qquad & \qquad $\widehat{X} = \mathcal{D}((-I+W)^{-1} \cdot Z)$ \\ \texttt{GAE}: & \qquad $Z = \mathcal{E}(X)$ \qquad & \qquad $\widehat{X} = \mathcal{D} ( W \cdot Z)$ \end{tabular} \caption{A comparison of functional structures for three well-known masked-gradient-based algorithms for causal structure learning.} \label{tab:structalgos} \end{table} \egroup In this note we focus our tests on two non-linear generalizations of the $\texttt{NO-TEARS}$ algorithm, referred to as $\texttt{GNN}$ and $\texttt{GAE}$. The encoder/decoder architectures are given in Table \ref{tab:structalgos}, where $\mathcal{E}$ and $\mathcal{D}$ refer to generic MLP based function-learners. Both the $\texttt{GNN}$ and $\texttt{GAE}$ frameworks generalize the well-known closed-form solution for linear SEM's. However, the salient difference between them is the presence of a residual connection in \texttt{GNN}, represented by the identity term in the second line of Table \ref{tab:structalgos}. The reconstruction loss function for $\texttt{GNN}$ is given by the usual evidence lower-bound (ELBO) for variational autoencoders while the reconstruction loss for $\texttt{GAE}$ is simply the mean-squared-error (MSE). The above optimization can be implemented using the method of Lagrange multipliers with the Lagrangian defined in Equation \ref{eq:lagrangian}. \begin{align} \mathcal{L}_\texttt{GNN/GAE} &= -\mathcal{L}_{\rm ELBO/MSE} + \lambda \, | h(W_{\mu \nu}) | + \frac{c}{2} \, | h(W_{\mu \nu}) |^2 \label{eq:lagrangian} \end{align} Following the work in \cite{DBLP:journals/corr/abs-1904-10098, ng2019graph} we perform tests on four different toy data sets generated by structural equation models of increasing non-linear complexity, as shown in Equations \ref{eq:egsem1}-\ref{eq:egsem4}. \begin{align} \text{linear:}& \quad X = -\xi + W \cdot X \label{eq:egsem1} \\ \text{non-linear 1:}& \quad X = -\xi + W \cdot \cos \ (X + 1) \label{eq:egsem2} \\ \text{non-linear 2:}& \quad X = -\xi + 2 \, \sin \left( W \cdot (X + 1/2) \right) + W \cdot (X + 1/2) \label{eq:egsem3} \\ \text{non-linear 3:}& \quad X = -\xi + 2 \, \sin \left( W \cdot ( \cos \ (X + 1) + 1/2) \right) + W \cdot ( \cos \ (X + 1) + 1/2) \label{eq:egsem4} \end{align} In the original papers, both $\texttt{GNN}$ and $\texttt{GAE}$ were tested using randomly generated Erd\H os-R\'enyi graphs.
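For concreteness, the sketch below shows one way to generate data for the four toy models of Equations \ref{eq:egsem1}-\ref{eq:egsem4} on an Erd\H os-R\'enyi style random DAG, assuming (as the acyclicity of $W$ permits) ancestral sampling in topological order; the edge probability and weight ranges are illustrative assumptions, not necessarily those of the cited benchmarks.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
V, N, p = 10, 8000, 0.3

# Erdos-Renyi style random DAG: keep each upper-triangular edge with
# probability p and draw its weight uniformly from [-2,-0.5] U [0.5, 2].
mask = np.triu(rng.random((V, V)) < p, k=1)
signs = rng.choice([-1.0, 1.0], (V, V))
W = mask * signs * rng.uniform(0.5, 2.0, (V, V))

def sample(sem):
    """Ancestral sampling of Eqs. (egsem1)-(egsem4); nodes are filled in
    reverse index order so each node only reads already-computed parents."""
    X, xi = np.zeros((V, N)), rng.normal(0.0, 1.0, (V, N))
    for mu in reversed(range(V)):
        w = W[mu]                                # parents of mu have index > mu
        if sem == "linear":
            X[mu] = -xi[mu] + w @ X
        elif sem == "nonlinear1":
            X[mu] = -xi[mu] + w @ np.cos(X + 1.0)
        elif sem == "nonlinear2":
            u = w @ (X + 0.5)
            X[mu] = -xi[mu] + 2.0 * np.sin(u) + u
        else:                                    # "nonlinear3"
            u = w @ (np.cos(X + 1.0) + 0.5)
            X[mu] = -xi[mu] + 2.0 * np.sin(u) + u
    return X

data = {s: sample(s)
        for s in ("linear", "nonlinear1", "nonlinear2", "nonlinear3")}
\end{verbatim}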
For graphs with $V$ nodes, the authors of $\texttt{GNN}$ reported structural Hamming distance (SHD) errors ranging from $0.2 \times V$ (for nonlinear 2) to $0.8 \times V$ (for nonlinear 1). Impressively, the performance of the $\texttt{GAE}$ algorithm exhibits a scaling that is roughly independent of the number of nodes in the graph for the Erd\H os-R\'enyi case, which we have verified in our own experiments. The primary reason for the difference in performance on large graphs is the presence of the residual connection in $\texttt{GNN}$, which enables an extremely accurate reconstruction of the data despite an incorrect causal graph structure. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{graphAB.PNG}\vspace{-0.5cm} \caption{Two graph structures used for the experiments in this note, which we refer to as Graph A (left) and Graph B (right). Causal estimation for Graph A requires mutilating two edges independent of the number of confounders, while causal estimation for Graph B requires mutilating a number of edges equal to the number of confounders.} \label{fig:graph} \end{center} \end{figure} In this note we perform tests on the $\texttt{GNN}$ and $\texttt{GAE}$ algorithms using the two graph structures shown in Figure \ref{fig:graph}, referred to as Graph A and Graph B. These two graph structures form the baseline cases for our structural equation model tests described in the next section, and represent different configurations of confounding variables increasing in number. The results of our structure-learning experiments, shown in Figure \ref{fig:shd}, indicate that the explicit presence of numerous confounding variables presents a significant obstacle to the recovery of correct causal structures relative to the Erd\H os-R\'enyi case, even for simple graphs with as few as $\mathcal{O}(10)$ nodes. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.52]{shdA.png} \includegraphics[scale=0.52]{shdB.png}\vspace{-0.5cm} \end{center} \caption{Structural Hamming distances (SHD) for \texttt{GNN} and \texttt{GAE} as a function of the total number of nodes. Results are shown for Graph A (top row) and Graph B (bottom row) as defined in Figure \ref{fig:graph}. For each number of nodes we generate two graphs with different weights from different random seeds and perform 3 runs for each graph. The error bars indicate variations between the 3 runs on each seed.} \label{fig:shd} \end{figure} \subsection{Structural Equation Modeling} The network architecture for SEM-learning proposed in this note is illustrated in Figure \ref{fig:archi}, and can be factorized into two components. The first component is just a generic variational autoencoder that encodes each node feature $X_\mu$ into its latent representation $Z_\mu$ before decoding it back to the target representation $\widehat{X}_\mu$. The second component introduces a ``causal block'' $\mathcal{C}$ that performs ancestral sampling on the latent representation $Z_\mu$ and produces a latent representation for each child-node $\widehat{Z}_{{\rm ch} (\mu)}$ that is a function of \textit{only its parent-nodes} $Z_\mu$. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.4]{archi.PNG}\vspace{-0.75cm} \end{center} \caption{The proposed network architecture is an extension of a generic variational autoencoder (blue).
The generator for the latent space $Z_\mu$ is augmented with an additional causal network block $\mathcal{C}$ (orange) that uses a causal mask $M_{\mu \nu}$ as defined in Equation \ref{eq:maskmatrix} to generate a latent space distribution for each child node $\widehat{Z}_{{\rm ch} (\mu)}$ that is a function of only its parent nodes $Z_\mu$. The $n^{\rm th}$ child node of a latent variable $Z_\mu$ can thus be generated by cycling the inputs $n$ times through $\mathcal{C}$.} \label{fig:archi} \end{figure} For SEM-learning on a graph with $V$ nodes, the causal block $\mathcal{C}$ is correspondingly composed of $V$ neural-networks as illustrated diagrammatically in Figure \ref{fig:sampling}. A restriction on the functional dependence of each node to only its parent nodes is crucial for the automated generation of intervened conditional probability distributions. This is achieved simply through the use of the causal mask $M_{\mu \nu}$ in the causal block $\mathcal{C}$, as well as the absence of any residual connection except for those nodes which have no parents. This set includes the nodes chosen for intervention, whose incoming edges are mutilated, as well as the root nodes of the graph, which can be viewed as being intervened on by the environment. Ancestral sampling of an intervened distribution can then be performed simply by generating data for the intervened node $Z_\mu$ from a random-normal distribution, and cycling the data through the causal block $n$ times in order to obtain the data for its $n^{\rm th}$ child node $Z_{{\rm ch_n} (\mu)}$, as illustrated in Figure \ref{fig:archi}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.4]{sampling.PNG}\vspace{-0.75cm} \end{center} \caption{The causal block $\mathcal{C}$ takes inputs from the latent node variables $Z_\mu$. A single neural network for each latent dimension generates means and variances for the child nodes $\widehat{Z}_\mu$. Nodes with no parents, including the intervened node $Z_2$, contain a residual connection, and all nodes with parents are functions of only their parents.} \label{fig:sampling} \end{figure} The causal block $\mathcal{C}$ can be expressed as a sum of three terms, as shown in Equation \ref{eq:causalsem}. The first term $\xi_\mu$ describes the contribution from noise and is computed via the usual reparameterization trick \cite{kingma2013autoencoding} from neural-network-generated variances. The second term provides a residual connection only for node variables that have no parents. We thus define a delta function $\delta_{{\rm pa} (\mu)}$ whose argument given a specified node $\mu$ is the number of parents belonging to that node, normalized as shown in Equation \ref{eq:parentres}. \begin{align} \mathcal{C}(Z_\mu) &= - \xi_\mu - \delta_{{\rm pa}(\mu)} Z_\mu + \left( 1-\delta_{{\rm pa}(\mu)} \right) {\rm NN}_\mu^{\dot{\mu}} ( Z_{{\rm pa} (\mu)}^{\dot{\mu}} ) \label{eq:causalsem} \\ &\longrightarrow \widehat{Z}_{{\rm ch} (\mu)} \nonumber \end{align} \begin{equation} \delta_{{\rm pa}(\mu)}=\left\{ \begin{array}{@{}ll@{}} 1 & ~~\text{if \# parents = 0 for node}\ \mu \\ 0 & ~~\text{otherwise} \end{array}\right. \label{eq:parentres} \end{equation} The third and final term is generated by the set of $V$ neural networks ${\rm NN}_\mu^{\dot{\mu}}$ whose input is the vector containing the latent representation of $\mu$'s parent node data $Z_{{\rm pa} (\mu)}^{\dot{\mu}}$, as constructed according to Equation \ref{eq:nodehot}.
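A minimal PyTorch-style sketch may help make Equations \ref{eq:nodehot} and \ref{eq:causalsem} concrete. The mask $M_{\mu\nu}$ and the 64-neuron hidden layer follow the text; everything else (the class and variable names, the two-dimensional output head for the mean and log-variance, and the handling of intervened nodes by zeroing the corresponding rows of $M$ so that they are treated as parentless) is our own illustrative choice rather than the exact implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    """Sketch of the causal block C of Eq. (causalsem): one small MLP per node,
    fed only the parent entries of Z selected by the mask M (Eq. (nodehot));
    parentless nodes keep a residual connection instead."""
    def __init__(self, M):
        super().__init__()
        self.register_buffer("M", M)                          # V x V causal mask
        self.register_buffer("no_parents", (M.sum(1) == 0).float())
        V = M.shape[0]
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(V, 64), nn.ReLU(), nn.Linear(64, 2))
            for _ in range(V)])                               # (mean, log-variance)

    def forward(self, Z):                                     # Z: batch x V
        Z_pa = Z.unsqueeze(1) * self.M                        # parent vectors, Eq. (nodehot)
        out = torch.stack([net(Z_pa[:, mu])
                           for mu, net in enumerate(self.nets)], dim=1)
        mean, logvar = out[..., 0], out[..., 1]
        xi = torch.randn_like(mean) * (0.5 * logvar).exp()    # reparameterization
        res = self.no_parents * Z                             # residual for parentless nodes
        return -xi - res + (1 - self.no_parents) * mean       # Eq. (causalsem)

# Example: a 3-node chain X2 -> X1 -> X0; row mu of M lists the parents of node mu.
M = torch.tensor([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
block = CausalBlock(M)
Z = torch.randn(128, 3)           # node 2 plays the role of an intervened root
Z_child = block(Z)                # one cycle: latents for each node's children
\end{verbatim}

Cycling $Z$ through the block $n$ times then realizes the ancestral sampling of $n^{\rm th}$ descendants described above.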
The loss function used is a combination of the joint \cite{DBLP:journals/corr/ZhaoSE17b} and conditional \cite{DBLP:journals/corr/RenLLZ16} maximum-mean-discrepancies (MMD and CMMD) as shown in Equation \ref{eq:caeloss}, with $\gamma \gg \beta$. The networks ${\rm NN}_\mu^{\dot{\mu}}$ thus together form a generative conditional moment-matching graph-neural-network. \begin{align} \mathcal{L} = &- \beta \, D_{\rm MMD} \big( Q(Z|X) || P(Z) \big) - \gamma \, D_{\rm CMMD} \big( Q(\widehat{Z}|Z_{\rm pa}) || P(Z | Z_{\rm pa}) \big) \nonumber \\ &+E_{Q(Z|X)} \big( \log P(\widehat{X} | Z ) \big) \label{eq:caeloss} \end{align} To measure the performance of interventional sampling we perform tests using an MLP-based encoder and decoder $\mathcal{E}$/$\mathcal{D}$, each consisting of a single hidden layer with 16 neurons. The causal block $\mathcal{C}$ is composed of $V$ neural networks, each with input dimension $V$ and output dimension $1$, and each consisting of a single hidden layer containing 64 neurons. For the loss function we choose (rather arbitrarily) $\beta=1$ and $\gamma=300$, and each trial is run on 8000 data points. The performance metric used is the relative entropy (KL divergence) between the conditional probability distributions generated by the ground truth intervened SEM and by the causal autoencoder, $D_{\rm KL} \left( P(X_i | \slashed{X}_j = x_j) || Q (X_i | \slashed{X}_j = x_j) \right)$. We then compare it against a baseline given by the relative entropy between the intervened ground truth SEM and the corresponding unintervened conditional distribution, $D_{\rm KL} \left( P(X_i | \slashed{X}_j = x_j) || Q(X_i | X_j = x_j) \right)$, at different standard deviations away from the distribution means, as illustrated in Figure \ref{fig:metric}. The autoencoder predictions for these results have been smoothed using a kernel density estimator with a normal reference bandwidth. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{wiggles_1.png} \includegraphics[scale=0.5]{wiggles_2a.png}\vspace{-0.75cm} \end{center} \caption{The performance metric adopted in this note is the relative entropy $D_{KL}$ between the conditional probability distribution for the predicted intervened SEM (top right) and the ground truth SEM (top middle). The $D_{KL}$ is computed along slices corresponding to points at various standard deviations away from the mean (bottom right). As a baseline we compare this against the $D_{KL}$ with respect to the unintervened conditional probability distribution (bottom left).} \label{fig:metric} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{dkl01A.png} \includegraphics[scale=0.5]{dkl02A.png}\vspace{-0.75cm} \end{center} \caption{Performance metrics for experiments on Graph A. $D_{KL}$'s are shown along contours of varying standard deviation $\sigma$ for the probability distributions $P(X_0 | X_1)$ (top row) and $P(X_0 | X_2)$ (bottom row). The solid and dashed lines represent averages for 4 randomly generated adjacency matrices.} \label{fig:resultsA} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{dkl01B.png} \includegraphics[scale=0.5]{dkl02B.png}\vspace{-0.75cm} \end{center} \caption{Performance metrics for Graph B along contours of varying standard deviation $\sigma$. Results are shown for the probability distributions $P(X_0 | X_1)$ (top row) and $P(X_0 | X_2)$ (bottom row).
The solid and dashed lines represent averages for 4 randomly generated adjacency matrices.} \label{fig:resultsB} \end{figure} \section{Discussion} \label{sec:disc} The results of our experiments indicate that the proposed framework for simulating structural equation models is capable of capturing complex non-linear relationships among variables in a way that is amenable to multi-step counterfactual interventions. Importantly, the generated probability distributions appear faithful to the ground truth intervened SEM's, even when the intervened variables are fixed to values that are outside the range of values contained in the training data distributions. This capability implies a predictive ability that is manifestly beyond what is possible through analytical calculations via the back-door and front-door adjustment formulas, which can only be applied to intervened variables that take on values for which observable data exists. With 8000 data points in each of the training sets, the maximum and minimum values for the node variable $X_2$ typically fall within the range of $3.5 \sigma$ from the distribution mean, never exceeding $4.0 \sigma$. From Figures \ref{fig:resultsA} and \ref{fig:resultsB}, we can observe that the linearly correlated data sets are faithful to the ground truth well beyond the $4.0 \sigma$ mark. On the other hand, those data sets with strong non-linear components vary in their predictive performance beyond $3 \sigma$, but are reliably closer to the ground truth than the un-intervened distributions. This is unsurprising upon closer inspection of the predicted conditional (intervened) probabilities, which demonstrate a clear tendency for our generative model to perform simple linear extrapolations of the distributions in regimes outside those contained in the training data. Although the experiments performed in this note were restricted to the case of scalar-valued node variables, we expect that a very simple extension of these methods could make them applicable to complex high dimensional image and language data. For example in CausalVAE \cite{yang2020causalvae}, the authors use supervised learning to encode specific image labels into a single dimension of the latent space $Z_\mu$. In one example, they use the CelebA data set of facial images to encode causal relationships between features like $Age \rightarrow Beard$, thus allowing them to intervene on the latent space to produce images of unnaturally young bearded faces. Augmenting this procedure with the causal block $\mathcal{C}$ described in this note would in principle enable synthetic generation of image populations with features that accurately represent conditional probabilities under multiple steps of causal influence. For example, one could generate an accurate distribution of hair colors if the graph structure contained $Age \rightarrow Beard \rightarrow Hair \ Color$. Unfortunately a detailed exploration of these high dimensional data types is beyond the scope of this note. Another potential application of these methods could be for use with model-based reinforcement learning. In \cite{DBLP:journals/corr/abs-1901-08162} the authors performed several experiments in a model-free RL framework in which they trained agents to make causal predictions in simple one-step-querying scenarios. In these experiments, the agents were directed to sample points from joint and conditional probability distributions of SEM-generated data, as well as the corresponding distributions from arbitrarily mutilated SEM graphs.
These experiments showed evidence that their agents learned to exploit interventional and counterfactual reasoning to accumulate significantly higher rewards compared to the relevant baselines. In \cite{nair2019causal} the authors expand on the previous work by successfully training RL agents to perform causal reasoning in a more complex multi-step relational scenario, with the ability to generalize to unseen causal structures that were held out during training. Their experiments involved two separate RL agents: one which used supervised learning to generate a causal graph model from ground truth graphs, and another which was directed to take ``goal-oriented'' actions based on the models learned by the first agent. The authors strongly hypothesized that the impressive level of generalizability displayed by their algorithm was a direct result of the explicit model-based approach. We find the possibility of performing such experiments using graphical models learned via the fully unsupervised approach described in this note to be both very intriguing and plausibly practical as a future area of exploration. \section{Acknowledgements} We thank Vincent Tang, Jiheum Park, Ignavier Ng, Jungwoo Lee, and Tim Lou for useful discussions. \bibliographystyle{unsrtnat}
\section{Conclusion} In this paper, we design a tracing and notification system based on blockchain and smart contracts, which provides three types of services: location-based contact tracing, Bluetooth-based contact tracing, and health tracing. Our system can trace a user's travel and contact history, and can remind the user of past contacts that may have caused infection. In addition, users can estimate their probability of being infected through the health tracing service. In order to protect users' privacy, they can anonymously send their visiting records and health status to the blockchain platform. At the same time, users can use the large number of random MAC addresses provided by Bluetooth technology as temporary identities to further protect their privacy. In addition, the smart contract group embedded in our system records the infection status of each location and performs the same sequence of check-in operations, ensuring that every user obtains consistent infection results for each location. We also simulate the interaction between users and our prototype system, and then evaluate its performance, including gas consumption, operating stability, and request processing speed. In a simulated environment, our system shows good scalability and stability. We expect to evaluate our system with real user contact records in the future. \section{Evaluation} We build a prototype system and evaluate its performance with the following experiments, in which we simulate users' daily contact and check-in activities using a Poisson distribution. In the experiments, we focus on the average cost of processing requests and the total cost of operating our prototype system. First, we introduce the environment for the experiments. Then, we evaluate and analyze the system performance in terms of stability and scalability. In the future, we may deploy our system and build benchmarks to evaluate it when real datasets are available. \subsection{System Implementation} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{graph/avg_gas_per_req.png} \caption{Average gas cost used for the system to process each request.}~\label{fig:figure1} \end{figure} We conduct experiments on a MacBook Pro running macOS version 10.14.5. This machine has a 2.3~GHz Intel i5 CPU and 8~GB of LPDDR3 memory. We use the Solidity programming language to develop and implement the smart contract group, which is deployed on a private Ethereum blockchain simulated by the Ganache software. We then use Python scripts for the data analysis. Our experiments focus on three basic variables affecting performance: 1) the number of users $\mathtt{|U|}$, increasing from 100 to 600 in even intervals of 100; 2) the users' contact and check-in frequency $\mathtt{Freq}$, following a Poisson distribution; and 3) the smart contract group size $\mathtt{SCG.size}$. \subsection{Measure Avg Request Cost} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{graph/std_avg_gas_per_req.png} \caption{Standard deviation of the average request cost over ten rounds of experiments.}~\label{fig:figure1} \end{figure} Based on three quantities of deployed smart contracts and six different numbers of requests increasing from 100 to 600, we measure the average gas cost of all requests, and the standard deviation of the average cost over ten rounds of experiments. Fig.
5 shows that as the quantity of contracts increases from 3 to 18, and the number of requests increases from 100 to 600, the average request gas consumption is reduced by a factor of 5, from 2,000,000 $\mathtt{wei}$ to 400,000 $\mathtt{wei}$. Fig. 6 shows that with the increase of requests, the deviation of the request cost is also reduced by a factor of 5. Although the gas amounts and deviations in Figs. 5 and 6 appear very large, the actual overhead is very small. Note that $\mathtt{wei}$ is the smallest currency unit in the Ethereum system and 1 $\mathtt{ether}$ is equal to $\mathtt{10^{18}}$ $\mathtt{wei}$. Then, assuming 1 $\mathtt{ether}$ is worth $\mathtt{\$250}$, a 400,000 $\mathtt{wei}$ request is worth $\mathtt{\$10^{-10}}$, and the deviation of 500 $\mathtt{wei}$ is negligible. Also, we find that, for different numbers of contracts, the average costs converge to a similar amount of around 400,000 $\mathtt{wei}$. \subsection{Evaluate System Overhead} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{graph/gas_cost.png} \caption{The total gas consumed by mobile services, smart contracts and users.}~\label{fig:figure1} \end{figure} System overhead refers to the interaction costs between mobile services and users, and the operating costs of smart contracts, both measured in Ethereum gas. Fig. 7 shows that when the number of requests increases from 50 to 250, and the number of contracts increases from 3 to 18, the overall gas consumption of the system increases linearly, both for a fixed number of requests with varying numbers of contracts and for a fixed number of contracts with varying numbers of requests. Based on these three measurements, we believe that this prototype system has good stability and scalability. With the increase in the number of requests and contracts, the request cost, which is the major overhead in the system operation, stably approaches a lower bound. This trend supports the stability of the system. Similarly, the system overhead grows linearly rather than exponentially, which is acceptable performance. \section{Introduction} To date, the coronavirus has infected more than 11.5 million people and caused nearly 530,000 deaths\cite{corvuis_worldwide}. In particular, the United States has become the country with the largest number of known infections\cite{corvuis_us}. Every country has its own privacy policy for sharing information about infected people, which makes it challenging to share information across the world with reliable privacy protection. Centralized information sharing, such as the tracking solutions announced by MIT, Apple, and Google that store users' personal data in the cloud \cite{rivest2020pact}\cite{Apple_Google_virus_tracking}, relies heavily on users' trust. Once personal data has been uploaded to the cloud, users generally cannot control its potential abuse. If a comprehensive data security solution is missing, the private user data in the cloud may be hacked for harmful purposes. As a global organization, the World Health Organization (WHO) collaborates with governments around the world to share information and enhance the management of epidemic prevention. However, WHO is losing the trust of some countries and cannot obtain sufficient support. Some other governments may conceal, falsely report, or delay reporting epidemic information.
These may create a gaping security hole for global epidemic prevention. As a result, there is no way for individuals to share their information and protect their data privacy simultaneously. Government departments and organizations may access everyone's health and medical data, going beyond the scope of their responsibility and duty. For example, some government health departments may identify infected patients and then confine them in centralized isolation shelters, resulting in secondary infections and restricting personal freedom. In fact, Apple and Google share infected individuals' information with health authorities\cite{Apple_Google_virus_tracking}, which means that people's data privacy and human rights may be violated without their knowledge. Currently, two separate types of tracing systems exist: location-based and individual-based contact tracing. Location-based contact tracing typically provides a centralized service and records whether there are infections at given locations, without knowledge of how infections move\cite{We-care}. Individual-based tracing systems focus only on person-to-person contact via Bluetooth\cite{rivest2020pact}\cite{Apple_Google_virus_tracking}, and they keep no record of where users were infected. WHO states that the virus can survive on material surfaces\cite{WHO_Lab_Testing_for_COVID_19}\cite{WHO_Protocol_for_COVID_Factors}, so the virus also affects the environments where people carry out their daily activities. However, individual-based systems cannot trace or estimate the effect of COVID-19 on a given location. Our proposed system combines the location-based and individual-based systems so that users can access public-area infection status and look up their personal contact tracing history at the same time. Our lab has already deployed a centralized location-based tracing system\cite{We-care}, and we may merge the two systems in future research. Blockchain natively provides a decentralized service for sharing information and protecting people's privacy\cite{Ethereum_Yellow_Paper}. User information can be packaged in transactions and stored in the blockchain on each computing node. Even if the data of one node is manipulated, this will not affect the data's consistency, because the manipulated data will not pass verification by the other nodes. A smart contract is a program that runs on the blockchain platform; it can execute instructions in a distributed manner and maintain output consistency. In our system, the smart contract allows the user to check in at each location and to query the blockchain database for a location's infection status. Even though MIT, Apple, and Google provide virus contact tracking solutions, such tracking services still face the following challenges. The infection transmission factors considered by existing virus tracking systems are too simplistic. The virus is not only carried by infected people but also remains in the environment, so users can still be infected by virus particles attached to object surfaces. Therefore, our proposed system not only records the user's contact history but also tracks the user's travel trajectory. Protecting the privacy of users' personal health information in a public network is also very challenging. Traditional methods of hiding users' personal information, such as removing names, addresses, and social security numbers, cannot fully protect user privacy.
We therefore embed Bluetooth address randomization in the system's contact tracing service. Moreover, after users upload health data and identity information, they usually lose ownership of and the ability to control the data. Apple and Google provide anonymous services to users, but infected patients are still reported to the government health department\cite{Apple_Google_virus_tracking}. We choose a blockchain database as the storage method for the system; combined with the Bluetooth address randomization function, only the users themselves can identify and verify their health data and personal information stored in the database. We design and propose a blockchain-based COVID-19 contact information sharing and risk notification system that can be used worldwide to take preventive measures against epidemics. Using smart contracts and embedded Bluetooth, the system implements the following main functions: (1) Users can record their visited location information and personal contact history in the blockchain database. (2) Users can update their infection status. (3) The system can update location infection status. (4) The system can notify users who have previously been exposed to infected people or infected locations. (5) The system can estimate the probability of a user being infected based on his location visiting history and personal contact records. For example, a shopping mall can use our system to detect whether the building is infected and to track its daily infection status. Customers can also query the database for the mall's infection record before shopping. The status of an individual is written to the blockchain and cannot be tampered with. Our system also protects privacy in several ways. Users can regularly change their cellphone Bluetooth mac addresses and encode these addresses as virtual IDs; it is then hard to trace an individual's identity from the public information written in the blockchain. Our system does not trace personal identity or report personal health information to authorities. We make the following contributions. \begin{itemize} \item We propose a hierarchical smart contract group to manage the infection status of each location and transportation. This design reduces the average operation cost and request processing time in the system. \item We build a merged location-based and individual-based virus contact tracing and notification system, which tracks virus infection activity in more dimensions. \item We embed a blockchain database in the system to ensure the safety of users' data; this design avoids the risk of health information being tampered with or stolen in a centralized database. \item Our system uses a weakly randomized Bluetooth address design to generate the user's identity information. This design not only protects the user's privacy but also reduces congestion of data transmission within the blockchain network. \item We propose a mathematical formula for the probability of user infection, which quantifies the user's contact history and traveling history in multiple dimensions and thus provides a basis for the system's notification service. \item We propose an optimization formulation for the operating costs of the system, simulate person-to-person contacts and user check-in activities in our system, and evaluate the system performance for different numbers of users and smart contracts.
\end{itemize} \textbf{Roadmap:} This paper is organized as follows: Section II presents the system overview and system components. Section III formulates the challenging problems for our system. Section IV describes the system design from the perspective of its four layers. Section V shows how we simulate and evaluate system performance. The last two sections, VI and VII, present related work on smart-contract-based tracing and conclude the paper. \section{System Overview} Our system can track the locations visited by users and users' personal contact history with others. When a user reports himself as infected, our system notifies other users who have been in direct or indirect contact with him and provides an assessment of their probability of infection. \subsection{Tracing User Visited Locations} In a typical scenario, a user may visit many public places, such as offices, restaurants, shopping centers, or gyms, in one day, so he can use the contact tracing service to upload his visiting records, including time and location, to our system. Our system then stores the visiting records in decentralized blockchain databases. Users can also check the infection status of a location before visiting it to ensure safety. When a user reports his infected status, a smart contract group embedded in our system updates the virus infection status of the locations the user visited and the transportation he took, based on his visiting records. Our system helps users track their history of indirect contact with others, because the virus can land on object surfaces and remain active\cite{COVID_on_different_surfaces}; even if a user has no face-to-face contact with an infected patient, there is still a possibility of infection after he touches the surface of a virus-contaminated object. \subsection{Tracing Person to Person Contact} Our system uses the Bluetooth functionality of users' mobile devices to automatically detect other users nearby and upload the resulting contact records to decentralized blockchain databases. The range of Bluetooth detection is relatively small, about 5 to 10 meters\cite{Bluetooth_range}, so when a user's mobile phone detects Bluetooth signals, this indicates that there are people nearby. When a user reports himself as infected, our system broadcasts his infection status and alerts other users to check whether they have had close contact with him. Tracing the direct contacts recorded by Bluetooth is important because viruses can attach to water vapor and spread through the air\cite{MIT_airborne}, and users may be infected by other people at close range\cite{MIT_Sneeze}. Therefore, uploading users' contact history records helps them track the virus's transmission path and assess their probability of being infected.
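To make this concrete, the following minimal Python sketch shows what a single Bluetooth contact record could look like before it is packed into a transaction. The field names, the record layout, and the address-generation routine are illustrative assumptions rather than the system's actual implementation; a real deployment would follow the Bluetooth address-randomization mechanisms discussed later in the paper.
\begin{verbatim}
import secrets
import time
from dataclasses import dataclass

def random_mac() -> str:
    """Generate a random, locally administered mac address."""
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE  # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)

@dataclass
class ContactRecord:
    own_mac: str    # sender's current randomized address
    peer_mac: str   # detected neighbour's randomized address
    rssi: int       # received signal strength (dBm), a distance proxy
    t_start: float  # first detection timestamp
    t_end: float    # last detection timestamp

# Example: a peer seen nearby for ten minutes at moderate signal strength.
now = time.time()
record = ContactRecord(random_mac(), random_mac(), rssi=-62,
                       t_start=now - 600, t_end=now)
\end{verbatim}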
\subsection{Health Tracing Service for Analysis and Notification} Our proposed system provides users with an integrated mobile service that contains the following functionalities: \begin{itemize} \item Location and transportation check-in \item Bluetooth signal detection and collection \item Self-infection status update \item Query of the infection status of visited places \item Notification of infection contact \item Estimation of infection probability \end{itemize} The previous sections outlined the first four functions, so here we focus on the health tracing service, showing how the infection probability is estimated and how the notification function works for users. \textbf{Probability of infection: }According to the manual for WHO medical staff\cite{WHO_Worker_protocol}\cite{WHO_Lab_Testing_for_COVID_19}, healthy people can be infected directly and indirectly: the direct factor is person-to-person contact at close distance, and the indirect factor is virus surviving on an object's surface, spread by a patient, infecting healthy people after they touch the surface. The system therefore evaluates the user's risk from these factors based on the feature data extracted from location tracing and person-to-person contact tracing, such as the length of contact time between users, the spatial distance between them, and the materials of objects in public places. \textbf{Notification of infection: }After the infection probability for a user has been estimated, the notification function sends him a warning alert, reminding him to prepare for a possible infection in advance or to seek medical help before his health condition worsens. In another scenario, when a user reports his infected status, our system broadcasts his virtual identity to other users. Each user who receives the notification then queries the local database to see whether he has had any direct or indirect contact with the infected patient and calculates his infection probability. \subsection{Security and Privacy} In our system, three main technologies guarantee data security and personal privacy: the decentralized blockchain database, the automatic execution of smart contracts, and the randomization of Bluetooth mac addresses. (a) \textbf{Data Security: } Our design guarantees that user data cannot be manipulated. The blockchain protocol stipulates that each block always uses the hash value generated from the previous block, so if an attacker manipulates a transaction in a block, he must tamper with the data of all subsequent blocks before the next new block is mined, verified, and accepted by other users. This computing workload is extremely large, so it is almost impossible for an attacker to forcibly manipulate the blockchain data. Moreover, users in the network can detect violations of smart contracts; such attempts are verified to be invalid, and the malicious transactions are not stored in the blockchain. Our system contains a smart contract group to handle all user check-in requests for public locations and transportation. Since smart contracts are unmodifiable, require no supervision, and execute automatically, their distributed nature ensures that user data cannot be tampered with and cannot produce malicious results. This design thus ensures the security of user data. Decentralized blockchain databases and smart contracts also enhance the usability of our system.
Since no individual or centralized entity can control all the data, no smart contract or system crash can be caused by malicious operations of individual users. (b) \textbf{Identity Privacy: } We use Bluetooth technology to protect user data privacy. The mac addresses uploaded by users are randomly generated by the Bluetooth protocol\cite{8016185}. Although these addresses are stored in the blockchain database, each user's mac address is random and is not fixedly bound to the mobile device. This guarantees that users cannot be tracked and located via the addresses published in the network. From another perspective, Bluetooth technology replaces the user's mac address frequently\cite{HandoffAllYourPrivacyAReviewofApplesBluetoothLowEnergyContinuityProtocol}, which means a user gradually uses multiple different mac addresses within a unit of time. Other people in the surroundings have no way to associate a mac address with this user through investigation. The frequently randomized mac address protects the user's privacy in real life. In order to reduce the operating overhead and network congestion at the system level, we choose weakly randomized generation of Bluetooth mac addresses. Users upload their random mac addresses to the system, and this cost is measured in Ethereum gas\cite{Ethereum_Yellow_Paper} as part of the system overhead. The larger the quantity of random mac addresses, the better the privacy protection users enjoy; however, a larger quantity also leads to higher operating costs and network congestion. Therefore, we need to find a balance between a sufficiently large quantity of random Bluetooth addresses for privacy protection and a relatively small quantity for a lower system operating cost. \section{Problem Formulation} We highlight four problems for our proposed COVID-19 information sharing and risk notification system: latency, throughput, operating cost, and probability estimation. These problems are described by high-level mathematical formulas. \subsection{Latency Minimization} In our system, latency is the time difference $\mathtt{\Delta t}$ describing how long it takes for a user's latest check-in request to be processed completely by the smart contracts and stored in the databases. We consider latency to be affected by the following five factors: 1. the number of users $\mathtt{|U|}$ in the system, 2. the frequency $\mathtt{Freq}$ at which users send requests, 3. the size of one block $\mathtt{|B|}$, 4. the height of the smart contract group $\mathtt{SCG.height}$, and 5. the length of the waiting queue $\mathtt{|Queue|}$ of a smart contract. $\mathtt{U}$ includes not only users already in the system but also new users who enter the system within a unit of time. The total number of users and the check-in frequency $\mathtt{Freq}$ determine the total number of user requests per unit time in the system. If the number of user check-in requests exceeds the processing capacity of the smart contract per unit time, the requests enter the smart contract's waiting queue, which increases the request processing time. We introduce the block size variable $\mathtt{|B|}$ because it is one of the bottlenecks in the blockchain's development towards high throughput and low latency\cite{10.1007/978-3-662-53357-4_8}. The height of the smart contract group $\mathtt{SCG.height}$ and the length of the queue $\mathtt{|Queue|}$ in each contract are also important factors that affect latency.
In our proposed hierarchical structure, smart contracts at the same level do not affect each other's request-processing efficiency. The requests received by the smart contracts at the current level all come from the higher-level smart contracts, so the hierarchical structure we propose is a tree, whose height scales as $\mathtt{lg(|SCG|)}$. As mentioned before, a smart contract puts requests that cannot be processed in time into its waiting queue. If the number of unprocessed requests exceeds the length of the queue, these requests are abandoned; the smart contract then needs to wait for other nodes to synchronize the transaction operations and data, which causes a longer latency. We thus establish the latency formula: \begin{eqnarray} \mathtt{Latency} = \{\mathtt{|U|}, \mathtt{Freq}, \mathtt{|B|}, \mathtt{SCG.height}, \mathtt{|Queue|},\nonumber\\ \mathtt{\delta^{U,Freq}}, \mathtt{\delta^{|B|}}, \mathtt{\delta^{SCG.height, |Queue|}}, \mathtt{\phi}\} \end{eqnarray} where a series of transition functions $\mathtt{\delta}$ determines the latency as $\mathtt{Latency} = \mathtt{\phi(\delta^{U,Freq}, \delta^{|B|}, \delta^{SCG.height, |Queue|})}$. We propose that the three polynomial $\mathtt{\delta}$s are combined by the function $\mathtt{\phi}$. \subsection{Throughput Maximization} In our system, throughput $\mathtt{TP}$ refers to the number of user requests that the system can handle completely in a unit of time. It directly reflects the system's ability to process user requests: the greater the throughput, the stronger the processing capacity of the system. Throughput is limited by the $\mathtt{latency}$, the packet loss rate $\mathtt{Rate^{PL}}$, and the bandwidth $\mathtt{BW}$. As described in the previous subsection, five factors in the system affect the latency, while $\mathtt{Rate^{PL}}$ and $\mathtt{BW}$ depend on the network conditions at the user end. Therefore, the throughput can be defined as follows: \begin{eqnarray} \mathtt{TP} = \mathtt{\{latency, Rate^{PL}, BW, \theta}\} \end{eqnarray} where $\mathtt{\theta}$ is the transition function with three arguments that determines $\mathtt{TP}$. \subsection{Minimum Operating Cost Optimization} Operating cost is another important problem we consider. Reasonable operating costs ensure that our system can serve users stably and support its operation over a long period. The operating cost of the system is measured in Ethereum gas\cite{Ethereum_Yellow_Paper} and includes the following five influencing factors: the location-based and Bluetooth-based contact tracing services, the health tracing service, and the setup and operation of smart contracts. We explain all these services and components in the next section. Because of the blockchain's decentralized structure, users can query their local blockchain database without consuming gas, so we do not include the cost of database queries in the operating cost calculation. We consider the numbers of users and requests to have a polynomial relationship with the cost of the three tracing services. The setup cost of a smart contract is fixed, but its operating costs increase with the number of users.
So we have the following cost formula: \begin{eqnarray} \mathtt{Cost = \{Loc, Bt, Heal, Setup, Op, \lambda\}} \end{eqnarray} where $\mathtt{Loc}$ represents the location-based contact tracing service, $\mathtt{Bt}$ the Bluetooth-based contact tracing service, $\mathtt{Heal}$ the health tracing service, $\mathtt{Setup}$ the deployment of smart contracts, and $\mathtt{Op}$ all the operations in the smart contracts, such as adding a user check-in and getting a location's infection status. We define $\mathtt{\lambda}$ as the transition function that combines these five factors into the cost. We then measure the average and variance of the operating cost, which represent the system's stability under optimal conditions. The following formulas give the arguments at minimal cost: \begin{eqnarray} \mathtt{args_{var} = \argmin var(\lambda(Loc, Bt, Heal, Setup, Op))}\\ \mathtt{args_{avg} = \argmin avg(\lambda(Loc, Bt, Heal, Setup, Op))} \end{eqnarray} Therefore, we can provide the optimal arguments with minimal system cost as: \begin{eqnarray} \mathtt{args_{optimal} = minCost(args_{var}, args_{avg}, \zeta)} \end{eqnarray} where we introduce a penalty function $\zeta$ to adjust the five arguments and obtain the globally optimal set for the cost calculation. \subsection{Maximum Likelihood Infection Estimation} In the previous section, we briefly described the functionality of estimating the infection probability in the health tracing service. Since the ground truth data contain only binary values representing infected or not, without a percentage value for the probability of infection, we introduce the statistical method of logistic regression with iteratively reweighted least squares $\mathtt{(IRLS)}$\cite{murphy2012machine} to perform maximum likelihood estimation and find the optimal arguments. We assume that the data set is relatively clean and contains no severe outliers. The ground truth dataset contains $\mathtt{n}$ user data points $\mathtt{(x_i, y_i)}$, $\mathtt{i = 1, ..., n}$, where $\mathtt{x_i}$ is the independent variable tuple $($$\mathtt{RSSI}$, $\mathtt{\Delta t^B}$, $\mathtt{\Delta t^C}$, $\mathtt{ms}$$)$ and $\mathtt{y_i}$ is the dependent variable with binary values $\mathtt{\{0, 1\}}$. The following four elements of the tuple affect the infection probability (a fifth parameter is the constant term): \begin{itemize} \item $\mathtt{RSSI}$ is the received (Bluetooth) signal strength index, which represents the distance between users\cite{Jung2013DistanceEO}. \item $\mathtt{\Delta t^B}$ is the Bluetooth contact time interval, which indicates how long users detect each other's Bluetooth signal at a short distance. \item $\mathtt{\Delta t^C}$ is an overlapping time calculated from two check-in time points $\mathtt{t_i}$ and $\mathtt{t_j}$, which indicate when the two users entered the location, with $\mathtt{\Delta t^C}$ = $\mathtt{|t_i}$ - $\mathtt{t_j|}$. \item $\mathtt{ms}$ is a discrete value in the set $\mathtt{MS}$ representing the period for which the virus remains active on a material surface\cite{COVID_on_different_surfaces}. \end{itemize} So we have: \begin{eqnarray} \mathtt{P(infection) = \{RSSI, \Delta t^B, \Delta t^C, ms, \beta\}} \end{eqnarray} where $\mathtt{\beta}$ is the set of parameters $\{\mathtt{\beta_0, \beta_1, ..., \beta_4}\}$ for these arguments, including the constant.
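For concreteness, the following minimal Python sketch shows how such a logistic model can be fitted by IRLS. The synthetic data and the coefficient values are placeholders standing in for the ground-truth records described above.
\begin{verbatim}
import numpy as np

def fit_logistic_irls(X, y, n_iter=25, tol=1e-8):
    # X: (n, p) design matrix with a leading column of ones;
    # y: (n,) array of binary outcomes in {0, 1}.
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))         # current probabilities
        w = np.maximum(mu * (1.0 - mu), 1e-12)  # IRLS weights
        z = eta + (y - mu) / w                  # working response
        beta_new = np.linalg.solve(X.T @ (X * w[:, None]),
                                   X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Synthetic stand-in for tuples (1, RSSI, dt_B, dt_C, ms):
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 4))])
true_beta = np.array([-1.0, 0.8, 0.5, 0.3, 0.2])
y = rng.random(500) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))
print(fit_logistic_irls(X, y.astype(float)))
\end{verbatim}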
Then, we can use the estimation method $\mathtt{IRLS}$ to obtain the fitted model and arguments: \begin{eqnarray} \mathtt{args_{optimal}[w] = IRLS(X, Y, w, \mu)} \end{eqnarray} where $\mathtt{w}$ is the argument tuple $\mathtt{(RSSI, \Delta t^B, \Delta t^C, ms)}$ and $\mathtt{\mu}$ is the predicted value from formula (7). \section{Related Work} In the field of contact tracing, MIT\cite{rivest2020pact} as well as Apple and Google\cite{Apple_Google_virus_tracking} have related products and projects. However, their solutions either build a centralized database into the system or provide users with incomplete privacy protection. Such designs cannot meet user privacy requirements. In terms of data security, the smart contract guarantees the consistency of operation execution and thereby produces a consistent output. One paper presents a computing resource trading platform based on a smart-contract-enabled edge computing network\cite{song2020smart}. That design uses a tree-structured smart contract group similar to ours, but its implementation focuses on matching users and completing resource trades, whereas the purpose of our smart contracts is to record the infection status of locations. Regarding privacy, some articles use differential privacy algorithms in IoT data processing\cite{lu2017lightweight}, but differential privacy methods return a relatively accurate output with noise added. This conflicts with a property of the blockchain: when storing data, the later block must verify the correctness of the data of the previous block and cannot tolerate deviations. It could nevertheless be interesting to see research at the intersection of these two fields. \section{System design} In this section, we define and explain our system components from the perspective of the four layers of the system, together with the interactions within the contact tracing mechanisms. \subsection{System Architecture} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{graph/layers.png} \caption{Blockchain enabled COVID-19 Trace and Notification Architecture.}~\label{fig:layers} \end{figure} Our trace and notification system contains four layers that allow users to trace person-to-person contact via Bluetooth, check in at locations, and look up infection status together with other users on the blockchain platform. Fig. 1 shows the four layers: the User Interaction Layer, the Mobile Service Layer, the Smart Contract Service Layer, and the Data Storage Layer. The system provides two primary services for trace and notification: a Bluetooth-based personal contact trace service and a location-based contact trace service. Both services are developed on the blockchain platform, and the data generated by them is stored in the distributed blockchain databases. The location-based contact trace is coordinated by the smart contracts in the third layer, and the Bluetooth-based contact trace is handled in the second layer. (a) \textbf{User Interaction Layer}\\ At the user interaction layer, we have two entities: user $\mathtt{U}$ and location $\mathtt{L}$. Users are people who hold Bluetooth-enabled mobile phones and have two health status types: healthy users $\mathtt{U^{normal}}$ and infected users $\mathtt{U^{infected}}$. Users access the Mobile Service in the second layer and update their health status in the system based on their medical reports. We assume that users always honestly upload their infection status to our system.
A location $\mathtt{L}$ is a public place or means of transportation that users often visit in their daily lives, such as an office, restaurant, stadium, bus, or even airplane. A location $\mathtt{L}$ also has two status types: uninfected location $\mathtt{L^{normal}}$ and infected location $\mathtt{L^{infected}}$. If an infected user $\mathtt{U^{infected}}$ visited this location, then this location $\mathtt{L}$ is marked as $\mathtt{L^{infected}}$ by the system. (b) \textbf{Mobile Service Layer}\\ The Mobile Service Layer is the core handler in our system, and it interacts directly with the other three layers. The Mobile Service $\mathtt{MS}$ is our proposed mobile phone application in this layer. Cooperating with the other layers, the Mobile Service Layer is an interface layer that provides users with services in the following two areas: \textbf{contact tracing services} based on Bluetooth or location, and a \textbf{health tracing service} supported by data from the blockchain database. \textbf{Contact tracing services} include location-based and Bluetooth-based tracing, focusing on indirect and direct contact between users, respectively. The Bluetooth-based service is an integral part of the client's mobile phone service, which embeds the Bluetooth function. It can sniff surrounding users' Bluetooth devices and broadcast its own random Bluetooth mac address $\mathtt{MacAddr}$ as a virtual identity. This Bluetooth-based service exchanges its user's randomized mac address with other users via Bluetooth and then packs the received mac addresses $\mathtt{MacAddr}$, the time interval $\mathtt{TimeInterval}$ of the interaction, and the received signal strength index $\mathtt{RSSI}$ into a transaction $\mathtt{Tx}$, which is broadcast to the blockchain network. In addition, when the user receives another user's infection notification from a broadcast transaction $\mathtt{Tx}$, our mobile APP automatically queries the local blockchain database for the infected user's Bluetooth address and check-in information, and checks whether the infected user has had direct or indirect contact with the current user. The second contact tracing service is location-based tracing. It accepts user check-in requests $\mathtt{Req^{checkin}}$ from the User Interaction Layer and routes these requests to different smart contracts in the third layer based on the sender's location. At the same time, the infection status of a location $\mathtt{L}$ is affected by the user's check-in request $\mathtt{Req^{checkin}}$. If the user $\mathtt{U}$ updates his health status to infected $\mathtt{U^{infected}}$ and broadcasts it to the smart contract, the location $\mathtt{L}$ will be marked as infected $\mathtt{L^{infected}}$ by the corresponding smart contract. These two services are explained step by step and described with mathematical formulas in the next two subsections. The \textbf{health tracing service} is the third sub-service in the Mobile Service Layer. This service has two functions: 1. Broadcast the user's infected status $\mathtt{U^{infected}}$ to alert other users and update the infection status of the locations $\mathtt{L}$ that the infected user visited. 2. Estimate the probability of users being infected, $\mathtt{Prob^{U}(Infection)}$.
Users $\mathtt{U}$ at the first layer can update their health status through this service at the second layer, pack their health status $\mathtt{\{U^{normal}, U^{infected}\}}$ into a transaction $\mathtt{Tx}$, send it to the smart contract responsible for infection status at the third layer, and broadcast it to other users on the network. Once the health status of an infected person has been updated to $\mathtt{U^{infected}}$, people who had contact with the infected person are prompted with a warning alert provided by this health tracing service. After a normal user $\mathtt{U^{normal}}$ receives the warning alert, the health tracing service estimates the probability of the user being infected, $\mathtt{Prob^{U}(Infection)}$, based on the data about contact with the infected person $\mathtt{U^{infected}}$ collected from the location-based and Bluetooth-based tracing services. We consider there to be a relationship between the received (Bluetooth) signal strength index $\mathtt{RSSI}$ and the probability $\mathtt{P(infection)}$ of infection with COVID-19. Similar to Apple and Google's design\cite{Apple_Google_virus_tracking}, we use Bluetooth sniffing to detect whether two users are within a close distance $\mathtt{D}$, which can be measured via the Received Signal Strength Indication (RSSI) of Bluetooth\cite{Jung2013DistanceEO}; the authors of that work use a low-pass filter $\mathtt{(LPF)}$ to reduce the errors in the measurement data. To avoid repeating the content of that article, we only adopt its method based on the Bluetooth signal strength index, and in the following sections the simplified mathematical formulas include the $\mathtt{LPF}$ by default to reduce errors in the measured data. Research and experiments at MIT show that COVID-19 can spread through the air\cite{MIT_airborne} and can reach 26 feet away via a sneeze\cite{MIT_Sneeze}. The closer a user is to an infected person, the more virus he is exposed to and the more easily he becomes infected. Therefore, we correlate the Bluetooth signal strength with the probability of being infected by the virus. (c) \textbf{Smart Contract Service Layer}\\ The Smart Contract Service Layer is the secondary core of our system. The check-in request $\mathtt{Req^{checkin}}$ generated by a user visiting a location $\mathtt{L}$ is processed by the Mobile Service Layer and forwarded to this Smart Contract Service Layer, where it is managed by the smart contract group. The smart contract group organizes the contracts according to the administrative hierarchy. The top level is the smart contract at the state level $\mathtt{Contract^{state}}$, followed by the county level $\mathtt{Contract^{county}}$, then the city level $\mathtt{Contract^{city}}$, and finally the smallest unit, the location $\mathtt{Contract^{location}}$. Location smart contracts $\mathtt{Contract^{location}}$ are managed by city-level contracts $\mathtt{Contract^{city}}$, city-level contracts by county-level contracts $\mathtt{Contract^{county}}$, and county-level contracts by state-level contracts $\mathtt{Contract^{state}}$. Each contract inherits from only one superior contract and never belongs to two different superior contracts at the same time. Each location must be in one of these three states: \{$\mathtt{Empty Status}$, $\mathtt{Infected Status}$, $\mathtt{Clean Status}$\}, and the corresponding smart contract $\mathtt{Contract^{location}}$ dynamically records the infection status of the location $\mathtt{L}$.
If an infected user $\mathtt{U^{infected}}$ visits this location $\mathtt{L}$, or a user who has visited this location reports that he is infected, then this location $\mathtt{L}$ is considered infected by this user $\mathtt{U^{infected}}$. Only after the location is cleaned, or 14 days after being infected, is the location $\mathtt{L}$ considered to be in $\mathtt{Clean Status}$. In order to save operating costs of the smart contract $\mathtt{Contract^{location}}$ while maintaining the accuracy of the status record of location $\mathtt{L}$, incoming requests $\mathtt{Req}$ trigger the smart contract $\mathtt{Contract^{location}}$ to check and update the infection status of the location $\mathtt{L}$; otherwise, the smart contract $\mathtt{Contract^{location}}$ does not actively check the infection status of the location $\mathtt{L}$. This design ensures that users get the latest location status for their requests while unnecessary smart contract operations are avoided. (d) \textbf{Data Storage Layer}\\ We deploy a distributed blockchain database $\mathtt{DB}$ in the Data Storage Layer; every user and computing node in our network can synchronize to obtain a complete database that is consistent with the data of the others. From the perspective of data storage, a traditional centralized database stores all data\cite{10.1093/nar/gkj092} in one data center; a traditional distributed database stores data in multiple data centers, but each center may not have the globally complete data\cite{10.1145/1773912.1773922}. From the data management perspective, traditional databases require a central node to process read and write requests, but a blockchain database does not\cite{blockchainIntro}\cite{blockchain}, because all users hold the same database locally and can query it directly for consistent results. In our system, the blockchain database stores all transactions in the network, including users' Bluetooth contact records, the check-in information of visited locations, and changes of users' public health status. Of the three tracing services mentioned above, only the location-based contact tracing service requires smart contracts to update and store infection status in the database; in the other two services, users can query the blockchain database directly for personal contacts and visiting records. Current Bitcoin and Ethereum blockchain database designs have problems such as high computational cost and slow data writing and querying\cite{blockchain}, but third-generation blockchain databases now exist whose performance, for example throughput, is comparable to that of the VISA credit card network\cite{mcconaghy2016bigchaindb}\cite{visa_throughput}. Since third-generation blockchain databases have not yet seen mature commercial development and lack a mature platform to support smart contracts, we still deploy our system on the Ethereum platform for simulation and evaluation.
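As an illustration of the hierarchical smart contract group described above, the following minimal Python sketch mimics the state-to-county-to-city-to-location routing off-chain, including the on-demand creation of location contracts. It is an expository stand-in only; the class and method names are assumptions, and the actual contracts run on the Ethereum virtual machine.
\begin{verbatim}
class LocationContract:
    # Off-chain stand-in for a location-level smart contract.
    def __init__(self, geo_pos):
        self.geo_pos = geo_pos
        self.status = "EmptyStatus"

class ContractGroup:
    # Tree routing: state -> county -> city -> location.
    def __init__(self):
        self.tree = {}

    def route(self, state, county, city, location):
        city_level = (self.tree.setdefault(state, {})
                               .setdefault(county, {})
                               .setdefault(city, {}))
        if location not in city_level:
            # The city-level contract creates the location
            # contract on demand, as described in the text.
            city_level[location] = LocationContract(
                (state, county, city, location))
        return city_level[location]

group = ContractGroup()
contract = group.route("CA", "Los Angeles County",
                       "Los Angeles", "Building 42")
\end{verbatim}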
\subsection{Location-based Contact Tracing} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{graph/location_based_state_machine.png} \caption{Location-based Contact Tracing State Machine.}~\label{fig:loc_state_machine} \end{figure} In this section, we describe the location-based contact tracing service in detail. First, we present the entities participating in the service, such as users, smart contracts, and locations; then, we introduce the interaction activities between entities, including a user checking in at a place during a visit and a user querying the infection status of a place before visiting. As defined in previous sections, we have locations $\mathtt{L}$, which represent public places or transportation; users $\mathtt{U}$ with $\mathtt{normal}$ or $\mathtt{infected}$ status; and smart contracts, which are organized in a hierarchical structure to process all users' check-in requests. In Fig. 2, we illustrate the location-based contact tracing service with a simplified state machine: \begin{eqnarray} \{\mathtt{Q}, \delta, \mathtt{InitialState}, \mathtt{CleanStatus}\} \end{eqnarray} where we have \begin{itemize} \item $\mathtt{Q}$ = \{$\mathtt{Empty Status}$, $\mathtt{Infected Status}$, $\mathtt{Clean Status}$\} \item $\mathtt{\delta}$ = \{$\mathtt{Generate Contract}$, $\mathtt{Normal User Checkin}$,\\ $\mathtt{Infected User Checkin}$, $\mathtt{Infected User Update}$,\\ $\mathtt{Location is Cleaned}$, $\mathtt{Wait 14 Days}$\} \item $\mathtt{InitialState}$ is the null state before the smart contract exists \item $\mathtt{CleanStatus}$ is the accepting state \end{itemize} First, when a user issues a check-in request $\mathtt{Req^{checkin}}$ through the location-based contact service at the second layer, the system checks whether a smart contract $\mathtt{Contract^{location}}$ corresponding to this location exists at the third layer. If the contract does not exist, the smart contract of the city $\mathtt{Contract^{city}}$ in which the location lies creates a smart contract $\mathtt{Contract^{location}}$ for the location. If the contract already exists, the system proceeds to process the user's check-in request. If the arriving user is infected, the location enters the state $\mathtt{Infected Status}$; otherwise, it enters $\mathtt{Clean Status}$. Then, when the location is in the state $\mathtt{Infected Status}$, either after 14 days or once it is cleaned and disinfected, the location transitions to $\mathtt{Clean Status}$. Finally, in the state $\mathtt{Clean Status}$, the location is affected by the request of the next user. If the next user is infected $\mathtt{U^{infected}}$, or a past visitor's status is updated to infected, then the location returns to the state $\mathtt{Infected Status}$; otherwise, it remains in $\mathtt{Clean Status}$. Fig. 3 shows an example of a user $\mathtt{U}$ checking in at a building via the location-based contact tracing service. The check-in information about this user's visit to the building, such as the timestamp $\mathtt{T}$, the geographic position $\mathtt{GeoPos}$, and the health status \{$\mathtt{U^{normal}, U^{infected}}$\}, is packaged into a transaction $\mathtt{Tx}$ with the help of our proposed mobile client app in the second layer. The app then sends the transaction $\mathtt{Tx}$ to the smart contract group in the third layer.
According to the address of the building $\mathtt{GeoPos}$, this transaction $\mathtt{Tx}$ is passed from the state level $\mathtt{Contract^{state}}$ to the county level $\mathtt{Contract^{county}}$, then to the city level $\mathtt{Contract^{city}}$, and finally to the smart contract $\mathtt{Contract^{location}}$ corresponding to this building. Then, based on the health status provided by the user ($\mathtt{U^{infected}}$, for example), the location smart contract changes the building's infection status to $\mathtt{L^{infected}}$. In this process, the transaction $\mathtt{Tx}$ that records the user's check-in information is saved in the blockchain database $\mathtt{DB}$. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{graph/MSC.png} \caption{Location-based Contact Tracing: User checks in the site and stores the record in the blockchain database.}~\label{fig:msc} \end{figure} The location-based contact tracing service can also help the user obtain the infection status record of a location $\mathtt{L}$. We distinguish two scenarios based on whether the user locally holds the $\mathtt{Contract^{location}}$ network address corresponding to, for example, a building. If the user has checked in at this location before or has previously queried the smart contract of this location, the corresponding $\mathtt{Contract^{location}}$ address already exists in his mobile APP. The user can therefore directly send a request to the $\mathtt{Contract^{location}}$ network address through the mobile APP to get the location's infection status. If the user is interacting with this location for the first time, he first needs to obtain the contract's network address. Given the geographic position $\mathtt{GeoPos}$ of this location, the user's mobile APP queries the state-level $\mathtt{Contract^{state}}$ in the smart contract group, which transfers the query request from the top level down to the city-level $\mathtt{Contract^{city}}$ based on the provided $\mathtt{GeoPos}$. If a corresponding $\mathtt{Contract^{location}}$ exists for this location, the network address of this contract is returned to the user. Otherwise, the city-level smart contract $\mathtt{Contract^{city}}$ creates a new $\mathtt{Contract^{location}}$, and its address is returned to the user. After receiving the location infection records from the Smart Contract Service Layer, the request sender can verify the response by querying his local blockchain database for the transaction record. If an infection exists at this location, $\mathtt{L^{infected}}$, our proposed mobile APP alerts the user. To encourage users to use the mobile services more frequently, we have developed a check-in and query incentive mechanism. Whenever a user checks in at a location or queries the corresponding location smart contract, this smart contract returns a slightly larger transaction fee to encourage the user to check in or query more often. The additional fee supports the user in using the mobile services to broadcast transactions containing Bluetooth contact data, check-in information, and health status updates.
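The location status logic of Eq. (8) and Fig. 2 can be summarized as a small transition table. The following Python sketch is a direct, illustrative encoding of the transitions described above, not the contract code itself.
\begin{verbatim}
# Transition table of the location status machine, Eq. (8).
TRANSITIONS = {
    ("InitialState",   "GenerateContract"):    "EmptyStatus",
    ("EmptyStatus",    "NormalUserCheckin"):   "CleanStatus",
    ("EmptyStatus",    "InfectedUserCheckin"): "InfectedStatus",
    ("CleanStatus",    "NormalUserCheckin"):   "CleanStatus",
    ("CleanStatus",    "InfectedUserCheckin"): "InfectedStatus",
    ("CleanStatus",    "InfectedUserUpdate"):  "InfectedStatus",
    ("InfectedStatus", "LocationIsCleaned"):   "CleanStatus",
    ("InfectedStatus", "Wait14Days"):          "CleanStatus",
}

def step(state: str, event: str) -> str:
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = step("InitialState", "GenerateContract")  # EmptyStatus
state = step(state, "InfectedUserCheckin")        # InfectedStatus
state = step(state, "Wait14Days")                 # CleanStatus
\end{verbatim}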
\subsection{Bluetooth-based Contact Tracing} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{graph/Bluetooth_P2P.png} \caption{Bluetooth-based Person-to-Person Contact Tracing.}~\label{fig:bt_p2p} \end{figure} Bluetooth-based contact tracing involves all entities except the smart contract group and all layers except the Smart Contract Service Layer. The mobile APP in the second layer packs the Bluetooth contact data into transactions, broadcasts them on the blockchain network, and saves all of them in the blockchain database. Similarly, the mobile APP processes received real-time transactions about senders' health status, matches them against the user's contact records, and reminds users of the danger of contact with infected persons. In each transaction we record four elements: the period $\mathtt{\Delta t^B}$ during which the users detected each other, the detected mac addresses $\mathtt{MacAddr}$, the mobile phone model ($\mathtt{Apple Inc}$, for example), and the received signal strength index $\mathtt{RSSI}$. Fig. 1 gives an overview of the workflow of Bluetooth-based contact tracing. It helps users at the first layer exchange randomized mac addresses with each other and then pack the data, together with a timestamp and the received (Bluetooth) signal strength index $\mathtt{RSSI}$, into a transaction, which is sent to the blockchain database in the Data Storage Layer. The signal strengths of different Bluetooth devices differ, so we assume that users use the same type of Apple mobile phone; this assumption allows the next section to calculate the distance between two users from the signal strength. In addition, this Bluetooth-based service alerts users when it receives transactions containing an infected health status from another user. Fig. 4 shows how the mobile app in layer 2 detects surrounding users through Bluetooth, recording the time interval $\mathtt{\Delta t^B}$ of the users' direct contact, the random mac address $\mathtt{MacAddr}$ of the other user, the type of mobile device $\mathtt{DeviceType}$, and the range of $\mathtt{RSSI}$. The mobile app packs these fields into a transaction, broadcasts it on the blockchain network, and stores it in the blockchain database. The mobile APP also stores all locally generated Bluetooth mac addresses for the health tracing service functionality, which is discussed in the next section. Another scenario of Bluetooth-based contact tracing is the following: if the mobile APP receives a transaction containing another user's infected health status and his mac addresses, the Bluetooth-based contact tracing service queries its local blockchain database for these mac addresses and checks whether the current user has a contact record with the infected user. If they have a contact history, the mobile APP alerts the user. The mobile APP then re-broadcasts this transaction on the network to alert other users who may have a contact history. We mentioned in the previous section that there are two types of challenges: 1. balancing the number of randomly generated Bluetooth mac addresses against users' privacy protection, and 2. balancing the number of random mac addresses against network congestion. For the first challenge, we adopt the method of changing the silent period\cite{1424677}.
The silent period is the gap between a Bluetooth device discarding its old mac address and adopting a new one\cite{1424677}; during this period, the Bluetooth device commits to neither the new nor the old address. The article points out that changing the length of the silent period markedly reduces the duration for which a Bluetooth device can be tracked, so the device cannot be located easily\cite{1424677}. This remains an open question and is worth investigating in future work. For the second challenge, we use weak randomization, which reduces the number of random addresses generated, so that fewer transactions are needed to pack these mac addresses. In the blockchain network, each computing node and user can then achieve data consistency without synchronizing a large number of communication requests. \subsection{Health Tracing} Health tracing is the third service in the proposed mobile APP, and it has two major functionalities: 1) broadcasting the user's infection status to update the contracts' infection status and alert other users, and 2) estimating the probability of being infected. Within the first functionality, users can update their infection status to $\mathtt{U^{infected}}$ or $\mathtt{U^{normal}}$. When the user's status becomes infected, the mobile APP automatically does two things: 1) it updates the infection status of all $\mathtt{Contract^{location}}$s that this user visited in the past 14 days, based on his visiting records, and 2) it broadcasts transactions containing his infected status and all the randomized Bluetooth mac addresses generated in the past 14 days, to alert others who have had close contact with him. Within the second functionality, users can estimate the probability of being infected based on an analysis of the four factors $\mathtt{RSSI, \Delta t^B, \Delta t^C, ms}$. Referring to formula (7) in Section III, we believe these parameters and $\mathtt{P(infection)}$ should be formalized in a logistic function: \begin{eqnarray} \mathtt{P(infection) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 RSSI \otimes \beta_2 \Delta t^B + \beta_3 \Delta t^C \otimes \beta_4(ms))}}} \end{eqnarray} where $\mathtt{\beta_4 (ms)}$ is formalized as \[ \mathtt{\beta_4 (ms) = \begin{cases} 3 & \text{if $ms=Aerosol$} \\ 4 & \text{if $ms=Copper$}\\ 24 & \text{if $ms=Cardboard$}\\ 30 & \text{if $ms=Other$}\\ 48 & \text{if $ms=Stainless Steel$}\\ 72 & \text{if $ms=Plastic$} \end{cases}} \] After obtaining real medical data and investigating public locations, we will fit the parameters of this proposed formula and validate it.
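As a concrete illustration, the following minimal Python sketch evaluates the proposed model, reading the $\otimes$ combinations in formula (9) as a simple additive linear predictor for exposition. The coefficient values are placeholders; the actual $\mathtt{\beta}$ values remain to be fitted by IRLS as described in Section III.
\begin{verbatim}
import math

# Residual virus activity (hours) per surface, as in beta_4(ms).
BETA4_MS = {"Aerosol": 3, "Copper": 4, "Cardboard": 24,
            "Other": 30, "StainlessSteel": 48, "Plastic": 72}

def p_infection(rssi, dt_b, dt_c, ms,
                beta=(-1.0, 0.05, 0.01, 0.005, 0.02)):
    # Logistic model of formula (9), with an additive reading of
    # the combinations; beta values are placeholders to be fitted.
    b0, b1, b2, b3, b4_scale = beta
    z = (b0 + b1 * rssi + b2 * dt_b + b3 * dt_c
         + b4_scale * BETA4_MS[ms])
    return 1.0 / (1.0 + math.exp(-z))

# Example: a long close contact in a plastic-rich indoor location.
print(p_infection(rssi=-55, dt_b=45, dt_c=120, ms="Plastic"))
\end{verbatim}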
\usepackage[dvipsnames,table,xcdraw]{xcolor} \usepackage{comment} \usepackage{pdfcomment} \usepackage{environ} \RenewEnviron{comment}{\pdfcomment{\BODY}} \usepackage{dcolumn} \usepackage{booktabs} \usepackage{tikz} \usetikzlibrary{positioning,shapes,arrows} \usetikzlibrary{decorations.pathreplacing} \usepackage{graphicx} \usepackage{subcaption} \usepackage{algorithm} \usepackage[noend]{algpseudocode} \usepackage{amsmath} \usepackage{amssymb} \algrenewcommand\algorithmicindent{1em} \usepackage{amsthm} \theoremstyle{definition} \newtheorem{definition}{Definition} \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem{theorem}{\normalfont{\textbf{Theorem}}} \newtheorem{example}[theorem]{\normalfont{\textbf{Example}}} \newtheorem{corollary}{Corollary}[theorem] \newtheorem{lemma}{\textbf{Lemma}} \newtheorem{claim}[theorem]{Claim} \newtheorem{prop}[theorem]{Proposition} \renewcommand{\qedsymbol}{\rule{0.7em}{0.7em}} \makeatletter \def\thm@space@setup{% \thm@preskip=0mm plus 0.5mm minus 0mm \thm@postskip=0mm plus 0.5mm minus 0mm } \makeatother \usepackage{enumerate} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \usepackage{wrapfig} \DeclareMathOperator*{\E}{\mathbb{E}} \title{Mathematical construction of a low-bias high-resolution deprivation index for the United States} \author{Amin Ghafourian$^{1,2,\ast}$, Noli Brazil$^3$, and Thilo Gross$^{1,4,5,6}$} \date{\today} \begin{document} \maketitle \vspace{-0.5cm} \begin{center} \begin{minipage}{.75\linewidth} \footnotesize $^1$University of California, Davis, Department of Computer Science, USA\\ $^2$University of California, Davis, Department of Mechanical and Aerospace Engineering, USA\\ $^3$University of California, Davis, Department of Human Ecology, USA\\ $^4$HIFMB, Helmholtz Institute for Functional Marine Biodiversity, Germany\\ $^5$Carl-Von-Ossietzky University, Germany\\ $^6$Alfred-Wegener Institute, Helmholtz Centre for Marine and Polar Research, Germany\\ $^\ast$corresponding author: aghafourian@ucdavis.edu \end{minipage} \end{center} \begin{abstract} The construction of deprivation indices is complicated by the inherent ambiguity in defining deprivation as well as by the potential for partisan manipulation. Nevertheless, deprivation indices provide an essential tool for mitigating the effects of deprivation and reducing it through policy interventions. Here we demonstrate the construction of a deprivation index using diffusion maps, a manifold learning technique capable of finding the variables that optimally describe the variations in a dataset in the sense of preserving pairwise relationships among the data points. The method is applied to the 2010 US decennial census. In contrast to other methods, the proposed procedure does not select particular columns from the census but rather constructs an indicator of deprivation from the complete dataset. Due to its construction, the proposed index does not introduce biases except those already present in the source data, does not require normative judgment regarding the desirability of certain lifestyles, and is highly resilient against attempts at partisan manipulation. We demonstrate that the new index aligns well with established income-based deprivation indices but deviates in aspects that are perceived as problematic in some of the existing indices. The proposed procedure provides an efficient way of constructing accurate, high-resolution indices.
These indices thus have the potential to become powerful tools for the academic study of social structure as well as for political decision making. \end{abstract} \section*{Introduction} Over the past decades, rising productivity and international cooperation have led to a rapid growth of wealth throughout the developed world. Nevertheless, the degree to which individuals profit from this growth is increasingly uneven, and thus a significant proportion of the population find themselves in worse economic circumstances, absolutely or relatively, than they would have been in previous decades \cite{piketty2015}. At the same time, the disparity between favored and disadvantaged neighborhoods is increasing \cite{florida2017}. Mounting evidence indicates that the social, physical and economic characteristics of residential environments impact social mobility \cite{chetty2014, sampson2012,galster2019}. Even in the United States, where social mobility is counted among the nation's foundational values, people from poor backgrounds find themselves locked in social environments that further disenfranchise them \cite{sharkey2012}. The rise in spatial inequality, that is, inequality across places, and the backlash it has engendered in the form of populism have convinced many policymakers of the need to embrace place-based policies to bolster the conditions of declining communities \cite{neumark2015place}. For this purpose, areas in need of social and economic investment are sometimes still identified based on a single statistical variable, such as income level \cite{us2019census,jargowsky1997poverty,berube2007geography}. However, deprivation is an inherently multidimensional construct, in which well-being is both impacted and reflected by a multitude of factors. The acknowledgment that deprivation is multidimensional has led to the adoption of deprivation indices that take multiple factors into account. For example, in the UK the so-called indices of multiple deprivation factor in several well-being domains, including crime and safety, housing, education, and employment \cite{payne2012uk}. Although the United States has yet to adopt a national deprivation index, local public agencies have constructed their own indices in order to efficiently allocate resources to the most deprived areas \cite{us2019census,united2019hhs,fox2018supplemental,bohn2013california,renwick2011geographic}. The construction of a national-level multi-factorial deprivation index is difficult due to issues related to the variable selection method and the statistical model used to construct the index. A complex index designed by experts is highly susceptible to accusations of partisan bias. Furthermore, current approaches make judgments regarding which contextual characteristics are relevant for adequate functioning and what constitutes a minimally acceptable standard for each of these characteristics. Moreover, the proxies used to assess deprivation remain varied and unstandardized. These approaches may also overlook the geographic heterogeneity of the US population or the ways in which uniquely characterized neighborhoods, such as retirement communities or college campuses, complicate a unidimensional notion of deprivation. This defines a need for a methodology that allows for the construction of a deprivation index that minimizes biases and is robust against intentional manipulation. An analysis of the UK census showed that economic deprivation is one of the two most important variables that shaped census responses \cite{barter2019manifold}.
Importantly, that study did not look for deprivation specifically but used a so-called diffusion map, a general mathematical methodology, to identify explanatory variables in large datasets. Economic deprivation emerged as an explanatory variable although the UK census does not contain income information; instead, the method detected a similarity in living conditions in a subset of the population that was reflected in several hundred different statistics reported in the census. In this paper we use diffusion maps to construct a social deprivation index for the United States. The diffusion map is a nonlinear method that is applicable to large and complex datasets. Nevertheless, it is mathematically simple and builds on a strong physical intuition \cite{coifman2005geometric,coifman2006diffusion,lafon2004diffusion,lafon2006data}. It contains only a few tunable parameters governed by strong rationales, and results are typically robust against parameter variation. Hence the diffusion map leaves almost no room for partisan manipulation and does not introduce biases beyond those that may already exist in the source data. Our analysis produces an index that aligns well with previous deprivation indices but offers a more comprehensive and detailed picture. This picture reveals large disadvantaged regions, but also highlights very high heterogeneity at the local scale. While diffusion maps of UK cities revealed the divide between the poor and the middle class, it is the very affluent areas that stand out most in the US. One interpretation of this result is that, unlike in the UK, where middle-class neighborhoods distance themselves from poor areas, in the United States it is the affluent neighborhoods that distance themselves from the middle class. \section*{Census Analysis with Diffusion Maps} We start our analysis not by trying to identify the deprived areas, but rather by asking which areas are the most similar, where the notions of similarity used are discovered mathematically from the dataset itself. The analysis considers census-tract-level data from the United States Census \cite{uscensustechnical,us2010census}. Census tracts are administrative units that have been used as proxies for neighborhoods in community- and neighborhood-level analyses and are designed to represent socially and economically homogeneous groups of approximately 1200 to 8000 persons. For our main analysis we removed information on race. Though not strictly necessary for the method to work, this was done as a precaution to avoid racial bias in the results. The result is a dataset containing $1385$ parameters for each of the $73057$ census tracts in the US. We then computed the similarity between all pairs of tracts using a common metric of similarity, the inverse of the Euclidean distance between the census data vectors. This metric of similarity is well suited for comparing very similar tracts, but performs poorly when considering dissimilar areas \cite{lafon2004diffusion,barter2019manifold}. To avoid the accumulation of error from such dissimilar comparisons, we threshold the initial similarity measurements. This is done by checking the similarity between each pair of nodes $a,b$ and setting the similarity to zero if it is not among the 9 greatest values of similarity that either $a$ or $b$ has with any other node. The matrix of pairwise similarities can be interpreted as a weighted adjacency matrix of a sparse network. In this network the nodes are census tracts and the links between tracts indicate similarity.
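A minimal Python sketch of this construction is shown below. The dense pairwise distance computation is for exposition only; at the scale of the full census one would use a nearest-neighbour library. The kernel and threshold follow the description above.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

def similarity_graph(X, k=9):
    # X: (n_tracts, n_vars) census data matrix.
    # Similarity = inverse Euclidean distance, kept only if the
    # pair is among the k strongest similarities of at least one
    # of its two endpoints.
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)  # no self-links
    sim = 1.0 / dist
    keep = np.zeros_like(sim, dtype=bool)
    for i in range(n):
        keep[i, np.argsort(sim[i])[-k:]] = True
    keep |= keep.T  # symmetric: top-k of either endpoint
    rows, cols = np.nonzero(keep)
    return csr_matrix((sim[rows, cols], (rows, cols)),
                      shape=(n, n))
\end{verbatim}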
We then constructed more refined notions of similarity by computing the normalized Laplacian matrix corresponding to the network and computing its eigenvectors (see Appendix). For a high-dimensional dataset there are many such eigenvectors, which represent different alternative notions of similarity. Every eigenvector assigns a score to each of the census tracts, which implies an ordering of the tracts. Thus the eigenvectors identify new emergent variables that describe the dataset. For example, below we argue that the values of one of the eigenvectors act as an indicator of deprivation by ordering the census tracts from the most deprived to the least deprived. Each eigenvector is associated with an eigenvalue. Eigenvalues indicate the relative importance of the differences picked up by the eigenvectors. In the implementation of the diffusion map described here, the most important notions of similarity are those that correspond to eigenvalues close to zero. The steps above describe the diffusion map \cite{lafon2004diffusion}, a straightforward mathematical procedure in which the only user-defined parameters are the kernel used for the construction of the initial similarities and the choice of threshold value. Both of these are guided by physical principles and cannot plausibly be used to manufacture ideologically desirable results (see Appendix). By contrast, the interpretation (e.g.~``deprivation'') that we attribute to an eigenvector requires human intuition, and hence must be verified using additional data or analysis. One problem that had to be overcome in our analysis is that for certain tracts in southern states only partial census statistics were reported. However, previous work \cite{barter2019manifold} has shown that such gaps can be closed by running the analysis without these incomplete tracts and then assigning them the vector elements of the most similar analyzed tract. For illustration we visualize the first four census eigenvectors in an area around Los Angeles (Fig.~1). The most important eigenvector reveals a sharply localized pattern, in which some tracts receive large-magnitude entries whereas the entries in most others are virtually zero. This shows that there is a set of tracts where census responses are similar, while being different from the responses in most other tracts. To interpret this result we manually identified the respective tracts and found that they coincided with universities. \begin{figure*}[tbhp] \centering \includegraphics[width=\textwidth]{Combined.png} \caption{Patterns of social similarity in Los Angeles (center) and the surrounding area (right). The first eigenvector is found to highlight colleges. The second eigenvector detects prisons and detention centers. Military bases and training facilities are highlighted in the third eigenvector. The fourth eigenvector provides an accurate high-resolution proxy for deprivation. The color corresponds directly to eigenvector entries (arbitrary units). Grey indicates uninhabited areas. Places of interest are marked with numbers. Some of these places were zoomed (circles) to make small features visible. } \end{figure*} Similarly, the second and third eigenvectors exhibit localized patterns corresponding to the locations of correctional facilities and military bases. Collectively, these first three eigenvectors highlight strong social differences that exist for clear extrinsic reasons, i.e.~enrollment in college, military service, or incarceration.
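The eigendecomposition step can be sketched as follows, under the assumption of the standard symmetric normalization; the precise normalization used in our analysis is given in the Appendix.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigsh

def diffusion_eigenvectors(A, n_vecs=4):
    # A: sparse symmetric similarity (weighted adjacency) matrix.
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = diags(1.0 / np.sqrt(deg))
    L_sym = identity(A.shape[0]) - d_inv_sqrt @ A @ d_inv_sqrt
    # Eigenvalues closest to zero carry the most important
    # notions of similarity in this formulation.
    vals, vecs = eigsh(L_sym, k=n_vecs + 1, which="SM")
    # Drop the trivial constant mode at eigenvalue zero and map
    # back to random-walk form; each column scores the tracts.
    return vals[1:], d_inv_sqrt @ vecs[:, 1:]
\end{verbatim}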
\begin{figure*}[tbhp] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.95\textwidth]{Eigenvector_4_US.png} \caption{} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[height=4.7cm]{Eigenvector_4_FL.png} \caption{} \end{subfigure}% \begin{subfigure}{.65\textwidth} \centering \includegraphics[height=4.7cm]{Eigenvector_4_South.png} \caption{} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[height=5.9cm]{Eigenvector_4_NYC.png} \caption{} \end{subfigure}% \begin{subfigure}{.53\textwidth} \centering \includegraphics[height=5.9cm]{Eigenvector_4_EastCoast.png} \caption{} \end{subfigure} \caption{Diffusion-map-based deprivation index in the US shown for the 50 states and Washington, D.C. (a), parts of Florida (b), the South (c), New York City (d), and the East and Northeast (e). A darker shade of blue indicates higher deprivation. Areas with zero population are shown in white. } \end{figure*} \section*{Deprivation in the USA} We hypothesize that the fourth eigenvector is an indicator of deprivation. By visual inspection one can confirm that tracts that are assigned negative numbers are affluent areas, retirement homes, and country clubs, whereas tracts assigned positive numbers coincide with deprived areas. To test the hypothesis further we correlated the results for Los Angeles County with an income-based deprivation index that exists for this area \cite{california2017calenviroscreen,calenviro2017OEHHA}. We find a rank correlation of $0.84$, which strongly supports the interpretation of the eigenvector as a deprivation indicator (a minimal sketch of this comparison is given below). We found that for this purpose the diffusion map performs better than comparable methods (e.g.~PCA yields a rank correlation of only $0.58$ with the LA index; see Appendix). We note that neither our eigenvector nor other indicators, e.g.~income-based ones, reflect a ground truth. In particular, we see the use of income-based indicators on a national level as problematic due to regional differences in rent and consumer price levels. By contrast, differences in living circumstances that are reflected in many census variables and picked up by the eigenvector may reveal a more precise picture. We show some of the attributes of the top and bottom 10 census tracts in the diffusion map index (Table 1) from an independent dataset \cite{united2013american, subject2012census}. The statistics on income, percentage of population below the poverty level, and housing costs, which are significantly affected by location, vary within each group and do not delineate a clear separation margin between the two extremes. On the other hand, statistics such as educational attainment and labor force participation, which are less dependent on circumstances such as location, remain consistent within each group and differ widely between the two extremes. Overall, among the tracts highlighted as deprived we see several different forms of deprivation, ranging from tracts with remarkably low housing costs to comparatively expensive but nonetheless poor inner-city neighborhoods. This diversity shows that the diffusion map has successfully identified a commonality that underlies the diverse forms of deprivation. \begin{table*} \footnotesize \centering \caption{Socioeconomic statistics for the 10 most and least deprived census tracts in the diffusion map deprivation index (respectively top 10 and bottom 10 entries) taken from the 2008--2012 American Community Survey 5-Year Estimates subject tables.
This information is not part of the census and thus constitutes an independent test.} \begin{tabular*}{\hsize}{l @{\extracolsep{\fill}}rrrrrr} \makecell[bl]{Census tract}& \makecell[br]{\% pop. 25+\\w. bachelor's\\or higher}& \makecell[br]{\% pop. below\\poverty level}& \makecell[br]{Median household\\income (USD)}& \makecell[br]{Median gross\\rent (USD)}& \makecell[br]{Median house\\value (USD)}& \makecell[br]{\% pop. 16+\\in labor force}\\ \midrule \makecell[l]{743, Orange\\County, CA}& 3.6& 27.7& 55720& 1195& 329100& 69.0 \\ \makecell[l]{1042.01, Los\\Angeles\\County, CA}& 1.5& 19.0& 45089& 972& 291000& 69.3 \\ \makecell[l]{741.09, Orange\\County, CA}& 10.0& 14.2& 69688& 1401& 338600& 68.6 \\ \makecell[l]{747.01, Orange\\County, CA}& 3.1& 14.4& 58447& 1355& 309900& 67.8 \\ \makecell[l]{747.02, Orange\\County, CA}& 5.9& 23.0& 53169& 1117& 328200& 71.5 \\ \makecell[l]{23.05, Santa\\Barbara\\County, CA}& 3.4& 22.3& 52000& 1200& 205700& 62.8 \\ \makecell[l]{47.17, Ventura\\County, CA}& 4.0& 15.2& 65063& 1610& 288200& 72.6 \\ \makecell[l]{85.02, Hamilton\\County, OH}& 11.7& 84.4& 8878& 580& 70300& 53.1 \\ \makecell[l]{301, Lake\\County, IN}& 0.0& 79.5& 9504& 248& 180600& 73.8 \\ \makecell[l]{1143, Cuyahoga\\County, OH}& 0.8& 87.1& 8810& 298& 31000& 51.5 \\ \\ \\ \makecell[l]{307.05, Broward\\County, FL}& 21.9& 11.9& 26943& 985& 72500& 17.5 \\ \makecell[l]{77.47, Palm Beach\\County, FL}& 34.7& 12.1& 25482& 785& 70900& 14.1 \\ \makecell[l]{1551.01, Queens\\County, NY}& 60.0& 2.2& 70036& 1645& 410200& 25.2 \\ \makecell[l]{3511.02, Contra\\Costa County, CA}& 50.2& 4.8& 42827& 946& 207200& 18.7 \\ \makecell[l]{995.09, Orange\\County, CA}& 29.6& 8.6& 30188& 536& 192600& 15.0 \\ \makecell[l]{3511.01, Contra\\Costa County, CA}& 55.8& 1.4& 50385& 1312& 332500& 24.9 \\ \makecell[l]{77.46, Palm Beach\\County, FL}& 24.1& 8.8& 22813& 707& 47000& 20.0 \\ \makecell[l]{995.10, Orange\\County, CA}& 22.7& 17.3& 27223& 463& 153200& 15.3 \\ \makecell[l]{3511.03, Contra\\Costa County, CA}& 61.9& 4.7& 72625& 1538& 529900& 20.9 \\ \makecell[l]{405.13, Maricopa\\County, AZ}& 38.6& 1.2& 56471& N/A& 211000& 7.1 \\ \bottomrule \end{tabular*} \end{table*} We now inspect the eigenvector result (Fig.~2) more closely. In contrast to previous results from British cities, we do not see a similarly clear separation between middle-class and deprived areas in the US. However, there is a very strong separation between middle-class and highly affluent tracts, which are almost exclusively country clubs and certain holiday and retirement properties. This can be interpreted as an indication that the most significant social divide in the US is the separation of the affluent from the rest of society. A notable difference between the eigenvector-based index and the traditional poverty index is that the eigenvector highlights many Native American reservations as deprived areas \cite{saipe2010census}. Examples include parts of the Wind River Reservation, the Warm Springs Reservation, the Hopi Reservation, and various pueblos in northern New Mexico. The high spatial resolution of the eigenvector index also makes it possible to study differences that exist on a small scale. We find that these differences are extremely pronounced in Florida. The state harbors some of the most and least \cite{florida2019toward,sommeiller2018income} deprived tracts in the nation, often in direct proximity. For comparison, we also show the differences that exist, for example, in New York between the very affluent area east of Central Park and borderline deprived areas in Harlem.
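Returning briefly to the validation of the deprivation interpretation referenced above: the rank-correlation test against the income-based Los Angeles index amounts to a single call, sketched here with synthetic placeholder arrays standing in for the aligned per-tract scores (the study's actual data are not reproduced here).
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

# Placeholder arrays standing in for aligned per-tract values in
# LA County: fourth-eigenvector entries and an income-based index.
rng = np.random.default_rng(0)
eigenvector_score = rng.normal(size=1000)
income_based_score = eigenvector_score + 0.5 * rng.normal(size=1000)

rho, _ = spearmanr(eigenvector_score, income_based_score)
print(f"Spearman rank correlation: {rho:.2f}")  # study reports 0.84
\end{verbatim}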
\FloatBarrier A further interesting feature can be seen along Route~1 between Washington and New York, where a thin stripe of increased deprivation follows the busy commuter route. Other notable areas of deprivation are seen along the lower Mississippi River, in California's Central Valley, and in South Texas. These are previously recognized areas of high deprivation \cite{wimberley2003us,calenviro2017OEHHA,bohn2011poverty,hotez2012texas}. \section*{Summary and Conclusions} In the present paper we have proposed manifold learning with diffusion maps as a method to generate a deprivation index from census data. One advantage of this method is that it can utilize the large amount of data contained in the census. The diffusion map uses these data to assign new variables to census tracts and thus to reveal similarities in living conditions. The observed patterns of similarity include some that are induced by specific external circumstances such as enrollment in the military, but they also reveal the effect of deprivation. Our results reveal well-known large-scale deprivation in certain areas, but also highlight strong heterogeneity at the local level. What makes diffusion maps particularly attractive for constructing deprivation indices is that they are strongly resistant to manipulation and do not introduce biases beyond those inherent in the source data. The only choices to make in this method are the construction of the similarity matrix (here the inverse of the Euclidean distance, an intuitive non-parametric choice) and the way thresholding is done (here we kept the nine strongest links per node, again an intuitive choice). We found the results to be very robust to sensible variations, and the method outperformed alternative approaches (PCA, k-means; see Appendix). Perhaps more importantly, even armed with a detailed understanding of the diffusion map, it is not possible to make choices in such a way as to create a specific result, which greatly limits the potential to manipulate the procedure for ideological reasons. Even so, the diffusion map does not remove human intuition entirely from the data analysis process, as human judgment is still needed to interpret the meaning of the eigenvectors. These interpretations should be regarded as hypotheses that are then confirmed or rejected through additional tests. Here we performed such a test by correlating the respective eigenvector with an existing deprivation index in the Los Angeles area, where a detailed index was available. Beyond the first four eigenvectors, the diffusion map reveals further eigenvectors that are of lesser importance overall but may still hold interesting information that yields deeper insights. We hope that the exploration of these eigenvectors will provide additional understanding of the social geography of the US in the future. \section*{Acknowledgements} TG thanks the Ministry of Science and Culture of Lower Saxony and the Volkswagen Foundation (grant ZN3285) for support. \section*{References} \bibliographystyle{unsrtnat} \renewcommand{\bibsection}{}
\section{Introduction}\label{sec.introduction} We consider the design of algorithms for solving smooth nonlinear optimization problems with equality constraints. Such problems arise in various important applications throughout science and engineering. Numerous algorithms have been proposed for solving \emph{deterministic} equality constrained optimization problems. Penalty methods \cite{Cour43,Flet87}, including augmented Lagrangian methods \cite{ConnGoulToin92,Hest69,Powe69}, attempt to solve such problems by penalizing constraint violation through an objective term---weighted by a penalty parameter---and employing unconstrained optimization techniques for solving (approximately) a corresponding sequence of penalty subproblems. Such algorithms can behave poorly due to ill-conditioning and/or nonsmoothness of the penalty subproblems, depending on the type of penalty function employed. Their performance also often suffers due to sensitivity to the scheme for updating the penalty parameter. Algorithms that consistently outperform penalty methods are those based on sequential quadratic optimization (commonly known as SQP), which in this setting of equality constrained optimization is intimately connected to the idea of applying Newton's method to the stationarity conditions of the problem \cite{Wils63}. In particular, it is commonly accepted that one of the state-of-the-art algorithms for solving equality constrained optimization problems is such an SQP method that chooses stepsizes based on a line search applied to an exact penalty function \cite{Han77,HanMang79,Powe78}. In such a method, the penalty function acts as a merit function only. It does not influence the computed search direction; it only influences the computed stepsize. Significantly fewer algorithms have been proposed for solving \emph{stochastic} equality constrained optimization problems. In particular, in this paper, we focus on such problems with constraint functions that are deterministic, but objective functions that are stochastic, in the sense that the objective is an expectation of a function defined with respect to a random variable with unknown distribution. (Various modeling paradigms have been proposed for solving problems involving stochastic constraints. These are out of our scope; we refer the reader to~\cite{ShapDentRusz09}.) We assume that it is intractable to compute objective function and gradient values, although one is able to compute (unbiased) stochastic gradient estimates. A few algorithms have been proposed that may be employed in this setting \cite{ChenTungVeduMori18,KumaSoumMhamHara18,NandPathAbhiSing19,RaviDinhLokhSing19}, although these are based on penalty methodologies, and so do not benefit from the advantages of SQP techniques. Let us also mention various proposed stochastic Frank-Wolfe algorithms \cite{goldfarb2017linear,hazan2016variance,locatello2019stochastic,lu2020generalized,negiar2020stochastic,reddi2016stochastic,zhang2020one} for (non)convex stochastic optimization with convex constraints, although these are not applicable in our setting with general nonlinear equality constraints. \subsection{Contributions} In this paper, we propose two algorithms modeled after the aforementioned line-search SQP methodology. Our primary focus is an algorithm for this setting of a problem with deterministic constraint functions, but a stochastic objective function.
However, as a first step for considering this setting, we begin by proposing an algorithm for the deterministic setting that employs an adaptive stepsize selection scheme that makes use of Lipschitz constants (or adaptively updated Lipschitz constant estimates) rather than a line search. Based on this algorithm for the deterministic setting, we propose our algorithm for the stochastic setting that also uses Lipschitz constants (or, in practice, estimates of them) for stepsize selection. We prove under common assumptions that our deterministic algorithm has convergence guarantees that match those of a state-of-the-art line-search SQP method. In addition, we prove under loose assumptions that our stochastic algorithm offers convergence guarantees that can match those of our deterministic algorithm \emph{in expectation}. In particular, the results that we prove for our stochastic algorithm are of the type offered by stochastic gradient schemes for unconstrained optimization \cite{BottCurtNoce18}. An additional challenge for constrained stochastic optimization is potentially poor behavior of an adaptive merit function parameter that balances emphasis between minimizing constraint violation and reducing the objective function. To address this, in addition to our aforementioned convergence analysis, which considers the behavior of the algorithm under good behavior of this adaptive parameter, we prove under pragmatic assumptions that certain poor behavior either cannot occur or only occurs in extreme circumstances, and other poor behavior occurs with probability zero. The results of numerical experiments show that our deterministic algorithm is as reliable as a state-of-the-art line-search SQP method, although, as should be expected, it is sometimes less efficient than such a method that performs line searches. Our experiments with our stochastic algorithm show that it consistently and significantly outperforms an approach that attempts to solve constrained problems by applying a stochastic (sub)gradient scheme to minimize an exact penalty function. \subsection{Notation} Let $\R{}$ denote the set of real numbers (i.e., scalars), let $\R{}_{\geq r}$ (resp.,~$\R{}_{>r}$) denote the set of real numbers greater than or equal to (resp.,~greater than) $r \in \R{}$, let $\R{n}$ denote the set of $n$-dimensional real vectors, let $\R{m \times n}$ denote the set of $m$-by-$n$-dimensional real matrices, and let $\mathbb{S}^n$ denote the set of $n$-by-$n$-dimensional symmetric matrices. The set of natural numbers is denoted as $\N{} := \{0,1,2,\dots\}$. For any $m \in \N{}$, let $[m]$ denote the set of integers $\{1,\dots,m\}$. Each of our algorithms is iterative, generating a sequence of iterates $\{x_k\}$ with $x_k \in \R{n}$ for all $k \in \N{}$. The iteration number is also appended as a subscript to other quantities corresponding to each iteration; e.g., $f_k := f(x_k)$ for all $k \in \N{}$. \subsection{Organization} Our algorithm for the deterministic setting is proposed and analyzed in \S\ref{sec.deterministic}. We present our analysis alongside that of a line-search SQP method for ease of comparison with this state-of-the-art strategy. Our algorithm for the stochastic setting is proposed and analyzed in \S\ref{sec.stochastic}. The results of numerical experiments are provided in \S\ref{sec.numerical} and concluding remarks are offered in \S\ref{sec.conclusion}. 
\section{Deterministic Setting}\label{sec.deterministic} Given an objective function $f : \R{n} \to \R{}$ and a constraint function $c : \R{n} \to \R{m}$, consider the optimization problem \bequation\label{prob.deterministic} \min_{x\in\R{n}}\ f(x)\ \st\ c(x) = 0. \eequation We make the following assumption about the optimization problem~\eqref{prob.deterministic} and the algorithms that we propose, each of which generates an iterate sequence $\{x_k\} \subset \R{n}$, search direction sequence $\{d_k\} \subset \R{n}$, and trial stepsize sequence $\{\alpha_{k,j}\} \subset \R{}_{>0}$. \bassumption\label{ass.deterministic} Let $\Xcal \subseteq \R{n}$ be an open convex set containing the iterates $\{x_k\}$ and trial points $\{x_k+\alpha_{k,j}d_k\}$. The objective function $f : \R{n} \to \R{}$ is continuously differentiable and bounded below over $\Xcal$, and its gradient $\nabla f : \R{n} \to \R{n}$ is Lipschitz continuous with constant $L$ and bounded over $\Xcal$. The constraint function $c : \R{n} \to \R{m}$ and its Jacobian $\nabla c^T : \R{n} \to \R{m \times n}$ are bounded over $\Xcal$, each gradient $\nabla c_i : \R{n} \to \R{n}$ is Lipschitz continuous with constant $\gamma_i$ over $\Xcal$ for all $i \in \{1,\dots,m\}$, and the singular values of $\nabla c^T$ are bounded away from zero over $\Xcal$. \eassumption Most of the statements in Assumption~\ref{ass.deterministic} are standard smoothness assumptions for the objective and constraint functions. We remark that we do not assume that the set $\Xcal$ is bounded. The assumption that the singular values of $\nabla c^T$ are bounded away from zero is equivalent to the linear independence constraint qualification (LICQ). The LICQ is a relatively strong assumption in the modern literature on algorithms for solving constrained optimization problems, but it is a reasonable one in our context due to the significant challenges that arise in the stochastic setting in \S\ref{sec.stochastic}. Defining the Lagrangian $\ell : \R{n} \times \R{m} \to \R{}$ corresponding to \eqref{prob.deterministic} by $\ell(x,y) = f(x) + c(x)^Ty$, first-order stationarity conditions for \eqref{prob.deterministic}---which are necessary due to the inclusion of the LICQ in Assumption~\ref{ass.deterministic}---are given by \bequation\label{eq.KKT} 0 = \bbmatrix \nabla_x \ell(x,y) \\ \nabla_y \ell(x,y) \ebmatrix = \bbmatrix \nabla f(x) + \nabla c(x)y \\ c(x) \ebmatrix. \eequation A consequence of Lipschitz continuity of the constraint functions is the following. Since this fact is well known and easily proved, we present it without proof. \blemma\label{lem.c_upper} Under Assumption~\ref{ass.deterministic}, it follows for any $x \in \R{n}$, $\alpha \in \R{}_{>0}$, and $d \in \R{n}$ such that $(x,x+\alpha d) \in \Xcal \times \Xcal$ that \bequationNN \baligned |c_i(x + \alpha d)| &\leq |c_i(x) + \alpha \nabla c_i(x)^T d| + \thalf \gamma_i \alpha^2 \|d\|^2_2\ \ \text{for all}\ \ i \in [m] \\ \text{and}\ \ \|c(x + \alpha d)\|_1 &\leq \|c(x) + \alpha \nabla c(x)^T d\|_1 + \thalf \Gamma \alpha^2 \|d\|^2_2\ \ \text{with}\ \ \Gamma := \sum_{i\in[m]} \gamma_i. \ealigned \eequationNN \elemma \subsection{Merit Function}\label{sec.Merit} As is common in SQP techniques, our algorithms use as a merit function the $\ell_1$-norm penalty function $\phi : \R{n} \times \R{}_{>0} \to \R{}$ defined by \bequation\label{eq.penalty_function} \phi(x,\tau) = \tau f(x) + \|c(x)\|_1. 
\eequation Here, $\tau \in \R{}_{>0}$ is a merit parameter, the value of which is chosen in the algorithm according to a positive sequence $\{\tau_k\}$ that is set adaptively. We make use of a local model of the merit function $q : \R{n} \times \R{}_{>0} \times \R{n} \times \mathbb{S}^n \times \R{n} \to \R{}$ defined by \bequationNN q(x,\tau,\nabla f(x),H,d) = \tau (f(x) + \nabla f(x)^Td + \thalf \max\{d^THd,0\}) + \|c(x) + \nabla c(x)^Td\|_1. \eequationNN A critical quantity in our algorithms is the reduction in this model for a given $d \in \R{n}$ with $c(x) + \nabla c(x)^Td = 0$, i.e., $\Delta q : \R{n} \times \R{}_{>0} \times \R{n} \times \mathbb{S}^n \times \R{n} \to \R{}$ defined by \bequation\label{def.merit_model_reduction} \baligned \Delta q(x,\tau,\nabla f(x),H,d) :=&\ q(x,\tau,\nabla f(x),H,0) - q(x,\tau,\nabla f(x),H,d) \\ =&\ -\tau(\nabla f(x)^Td + \thalf \max\{d^THd, 0\}) + \|c(x)\|_1. \ealigned \eequation The following lemma shows an important relationship between the directional derivative of the merit function and this model reduction function. \blemma\label{lem.directional_derivative} Given $(x,\tau,H,d) \in \R{n} \times \R{}_{>0} \times \mathbb{S}^n \times \R{n}$ with $c(x) + \nabla c(x)^Td = 0$, \bequation\label{eq.directional_derivative} \phi'(x,\tau,d) = \tau \nabla f(x)^Td - \|c(x)\|_1 \leq -\Delta q(x,\tau,\nabla f(x),H,d), \eequation where $\phi' : \R{n} \times \R{}_{>0} \times \R{n} \to \R{}$ is the directional derivative of $\phi$ at $(x,\tau)$ for $d$. \elemma \bproof The first equation in \eqref{eq.directional_derivative} is well known; see, e.g., \cite[Theorem~18.2]{NoceWrig06}. On the other hand, from the definition \eqref{def.merit_model_reduction} one finds that \bequationNN \Delta q(x,\tau,\nabla f(x),H,d) = -\phi'(x,\tau,d) - \thalf \tau \max\{d^THd,0\} \leq -\phi'(x,\tau,d), \eequationNN which shows the inequality in \eqref{eq.directional_derivative}. \eproof \subsection{Algorithm Preliminaries} The algorithms that we discuss for solving \eqref{prob.deterministic} are based on an SQP paradigm. Specifically, at $x_k$ for all $k \in \N{}$, a search direction $d_k \in \R{n}$ is computed by solving a quadratic optimization subproblem based on a local quadratic model of $f$ and a local affine model of $c$ about $x_k$. Letting $f_k := f(x_k)$, $g_k := \nabla f(x_k)$, $c_k := c(x_k)$, and $J_k := \nabla c(x_k)^T$ for all $k \in \N{}$ and given a sequence~$\{H_k\}$ satisfying Assumption~\ref{ass.H} below (a standard type of sufficiency condition for equality constrained optimization), this subproblem is given by \bequationNN \min_{d\in\R{n}}\ f_k + g_k^Td + \thalf d^TH_kd\ \ \st\ \ c_k + J_kd = 0. \eequationNN The optimal solution $d_k$ of this subproblem, and an associated Lagrange multiplier $y_k \in \R{m}$, can be obtained by solving the linear system of equations \bequation\label{eq.system_deterministic} \bbmatrix H_k & J_k^T \\ J_k & 0 \ebmatrix \bbmatrix d_k \\ y_k \ebmatrix = - \bbmatrix g_k \\ c_k \ebmatrix. \eequation \bassumption\label{ass.H} The sequence $\{H_k\}$ is bounded in norm by $\kappa_H \in \R{}_{>0}$. In addition, there exists a constant $\zeta \in \R{}_{>0}$ such that, for all $k \in \N{}$, the matrix $H_k$ has the property that $u^TH_ku \geq \zeta \|u\|_2^2$ for all $u \in \R{n}$ such that $J_k u = 0$.
\eassumption We stress that our algorithms and analysis do \emph{not} assume that $H_k$ is equal to the Hessian of the Lagrangian at $x_k$ for some multiplier $y_k$, although choosing $\{H_k\}$ in this manner would be appropriate in order to ensure fast local convergence guarantees. Since our focus is only on achieving convergence to stationarity from remote starting points, we merely assume that $\{H_k\}$ satisfies Assumption~\ref{ass.H}. Under Assumptions~\ref{ass.deterministic} and \ref{ass.H}, the following results are well known in the literature. \blemma\label{lem.nonsingular} For all $k \in \N{}$, the linear system~\eqref{eq.system_deterministic} has a unique solution. \elemma \blemma\label{lem.stationary} For any $k \in \N{}$, the solution $(d_k,y_k)$ obtained by solving~\eqref{eq.system_deterministic} has $d_k = 0$ if and only if the pair $(x_k,y_k)$ satisfies \eqref{eq.KKT}. \elemma \subsection{Algorithms}\label{sec.Algorithms} In this section, we present two algorithms for solving problem~\eqref{prob.deterministic}. The first algorithm chooses stepsizes based on a rule using Lipschitz constant estimates, which can be set adaptively. This algorithm is new to the literature and establishes a foundation upon which our method for the stochastic setting will be built. The second algorithm, by contrast, employs a standard type of backtracking line search. This algorithm is standard in the literature. We prove a convergence theory for it alongside that for our newly proposed algorithm for illustrative purposes. In both algorithms, after $d_k$ is computed, the merit parameter $\tau_k$ is set. This is done by first setting, for some $\sigma \in (0,1)$, a trial value $\tau_k^{trial} \in \R{}_{>0} \cup \{\infty\}$ by \bequation\label{eq.merit_parameter_trial} \tau_k^{trial} \gets \bcases \infty & \text{if $g_k^Td_k + \max\{d_k^TH_kd_k,0\} \leq 0$} \\ \tfrac{(1 - \sigma)\|c_k\|_1}{g_k^Td_k + \max\{d_k^TH_kd_k,0\}} & \text{otherwise.} \ecases \eequation (If $c_k = 0$, then it follows from \eqref{eq.system_deterministic} and Assumption~\ref{ass.H} that $d_k^TH_kd_k \geq 0$ and $g_k^Td_k + d_k^TH_kd_k = 0$, meaning $\tau_k^{trial} \gets \infty$. Hence, $\tau_k^{trial} < \infty$ requires $\|c_k\|_1 > 0$, in which case $\tau_k^{trial} > 0$.) Then, the merit parameter $\tau_k$ is set, for some $\epsilon \in (0,1)$, by \bequation\label{eq.merit_parameter_lower} \tau_k \gets \bcases \tau_{k-1} & \text{if $\tau_{k-1} \leq \tau_k^{trial}$} \\ (1-\epsilon) \tau_k^{trial} & \text{otherwise.} \ecases \eequation This ensures that $\tau_k \leq \tau_k^{trial}$. Regardless of the case in \eqref{eq.merit_parameter_lower}, it follows that \bequation\label{eq.merit_model_reduction_lower} \Delta q(x_k,\tau_k,g_k,H_k,d_k) \geq \thalf \tau_k \max\{d_k^TH_kd_k,0\} + \sigma \|c_k\|_1. \eequation This inequality will be central in our analysis of both algorithms. In particular, it will be useful when combined with the fact that each algorithm ensures that, for all $k \in \N{}$, the stepsize $\alpha_k \in \R{}_{>0}$ is selected such that for $\eta \in (0,1)$ one finds \bequation\label{eq.sufficient_decrease} \phi(x_k+\alpha_kd_k,\tau_k) \leq \phi(x_k,\tau_k) - \eta \alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k). \eequation \begin{remark} An alternative approach for setting the merit function parameter, which is commonly found in textbooks on nonlinear constrained optimization, is to set it based on the computed Lagrange multiplier estimate $y_k$. 
For example, in the context of our $\ell_1$-norm exact penalty function $\phi(x_k,\cdot)$, one can ensure that the computed search direction $d_k$ is a direction of descent for $\phi(\cdot,\tau_k)$ from $x_k$ if $\tau_k < \|y_k\|_{\infty}^{-1}$; see, e.g., \cite{NoceWrig06}. However, it has been recognized that it is often better in practice to set it based on ensuring sufficient reduction in a model of the merit function (see, e.g., \cite{ByrdGilbNoce00,ByrdHribNoce99}), which is our motivation for using the rule defined by \eqref{eq.merit_parameter_trial}--\eqref{eq.merit_parameter_lower}. \end{remark} Our first algorithm is stated as Algorithm~\ref{alg.sqp_adaptive}. A distinguishing feature is the manner in which it can adapt Lipschitz constant estimates, which are used in the stepsize selection scheme. For any $(k,j) \in \N{} \times \N{}$, if the estimates $L_{k,j}$ and $\{\gamma_{k,i,j}\}_{i=1}^m$ satisfy $L_{k,j} \geq L$ and $\gamma_{k,i,j} \geq \gamma_{i}$ for all $i \in [m]$, then it follows (see~\cite{Nest04} and Lemma~\ref{lem.c_upper}) that for $\alpha_{k,j} \in \R{}_{>0}$ yielding $x_k + \alpha_{k,j}d_k \in \Xcal$ (recall Assumption~\ref{ass.deterministic}) one has \bsubequations\label{eq.Lipschitz_bounds} \begin{align} f(x_k + \alpha_{k,j} d_k) &\leq f_k + \alpha_{k,j} g_k^Td_k + \thalf L_{k,j} \alpha_{k,j}^2 \|d_k\|_2^2 \label{eq.Lipschitz_bound_f} \\ \text{and}\ \ |c_i(x_k + \alpha_{k,j} d_k)| &\leq |c_i(x_k) + \alpha_{k,j} \nabla c_i(x_k)^T d_k| + \thalf \gamma_{k,i,j} \alpha_{k,j}^2 \|d_k\|_2^2 \label{eq.Lipschitz_bound_c} \end{align} \end{subequations} for all $i \in [m]$. If one knows Lipschitz constants for $\nabla f$ and $\{\nabla c_i\}_{i=1}^m$, then one could simply set $L_{k,0}$ and $\gamma_{k,i,0}$ for all $i \in [m]$ to these values for all $k \in \N{}$, in which case the inner \textbf{for} loop would terminate in iteration $j=0$ for all $k \in \N{}$. However, if such Lipschitz constants are unknown, as is often the case, then the adaptive procedure in Algorithm~\ref{alg.sqp_adaptive} ensures that convergence can be guaranteed, as shown in the next subsection. For now, we simply prove the following lemma showing that the inner loop of the algorithm is well-posed. (One could choose a different increase factor $\rho \in \R{}_{>1}$ for each Lipschitz constant estimate; we use a common value of $\rho$ for simplicity.) \blemma\label{lem.well_defined} Under Assumption~\ref{ass.deterministic}, the inner \textbf{for} loop in Algorithm \ref{alg.sqp_adaptive} is well-posed in that for any $k \in \N{}$, it terminates finitely. In addition, for all $k \in \N{}$, \bequation\label{eq.L_rho_bounds} \baligned L_k \leq L_{\max} &:= \max\left\{L_{-1},\rho L\right\} \\ \text{and}\ \ \gamma_{k,i} \leq \gamma_{\max,i} &:= \max\left\{\gamma_{-1,i},\rho \gamma_{i} \right\}\ \ \text{for all}\ \ i \in [m]. \ealigned \eequation \elemma \bproof To derive a contradiction, suppose that for some $k \in \N{}$ the inner \textbf{for} loop does not terminate. This means that in each iteration of the \textbf{for} loop at least one inequality in \eqref{eq.Lipschitz_bounds} does not hold, in which case the loop sets $L_{k,j+1}$ (resp.,~$\gamma_{k,i,j+1}$ for some $i \in [m]$) as $\rho > 1$ times $L_{k,j}$ (resp.,~$\gamma_{k,i,j}$ for some $i \in [m]$). However, an inequality in \eqref{eq.Lipschitz_bounds} can only fail to hold when the corresponding estimate is below the true Lipschitz constant, so each estimate can be increased only a finite number of times, after which one would have $L_{k,j} \geq L$ and $\gamma_{k,i,j} \geq \gamma_{i}$ for all $i \in [m]$, meaning that \eqref{eq.Lipschitz_bounds} holds, a contradiction.
Finally, \eqref{eq.L_rho_bounds} follows from the initialization of the Lipschitz constant estimates; the fact that if any of these values is ever increased in the \textbf{for} loop, then this occurs by the value being multiplied by $\rho > 1$; and the fact that for all $k \in \N{}$ the algorithm initializes $L_{k,0} \in (0,L_{k-1}]$ and $\gamma_{k,i,0} \in (0,\gamma_{k-1,i}]$ for all $i \in [m]$. \eproof \balgorithm[ht] \caption{SQP Algorithm with Adaptive Lipschitz Constant Estimates} \label{alg.sqp_adaptive} \balgorithmic[1] \Require $x_0 \in \R{n}$; $\tau_{-1} \in \R{}_{>0}$; $\epsilon \in (0,1)$; $\sigma \in (0,1)$; $\eta \in (0,1)$; $\rho \in \R{}_{>1}$; $L_{-1} \in \R{}_{>0}$; $\gamma_{-1,i} \in \R{}_{>0}$ for all $i \in [m]$ \For{\textbf{all} $k \in \N{}$} \State Compute $(d_k,y_k)$ as the solution of \eqref{eq.system_deterministic} \State \textbf{if} $(x_k,y_k)$ satisfies \eqref{eq.KKT} \textbf{then return} $(x_k,y_k)$ \label{step.termination_adaptive} \State Set $\tau_k^{trial}$ by \eqref{eq.merit_parameter_trial} and $\tau_k$ by \eqref{eq.merit_parameter_lower} \State Initialize $L_{k,0} \in (0,L_{k-1}]$ and $\gamma_{k,i,0} \in (0,\gamma_{k-1,i}]$ for all $i \in [m]$ \label{step.initialize} \For{\textbf{all} $j \in \N{}$} \label{step.loop} \State Set \bequationNN \baligned \widehat\alpha_{k,j} &\gets \tfrac{2(1-\eta) \Delta q(x_k,\tau_k,g_k,H_k,d_k)}{(\tau_k L_{k,j} + \sum_{i\in[m]} \gamma_{k,i,j})\|d_k\|_2^2}\ \ \text{and} \\ \widetilde\alpha_{k,j} &\gets \widehat\alpha_{k,j} - \tfrac{4 \|c_k\|_1}{(\tau_k L_{k,j} + \sum_{i\in[m]} \gamma_{k,i,j})\|d_k\|_2^2} \ealigned \eequationNN \State Set \bequationNN \alpha_{k,j} \gets \bcases \widehat\alpha_{k,j} & \text{if $\widehat\alpha_{k,j} < 1$} \\ 1 & \text{if $\widetilde\alpha_{k,j} \leq 1 \leq \widehat\alpha_{k,j}$} \\ \widetilde\alpha_{k,j} & \text{if $\widetilde\alpha_{k,j} > 1$} \ecases \eequationNN \State \textbf{if} \eqref{eq.sufficient_decrease} or \eqref{eq.Lipschitz_bounds} holds \textbf{then} \State \hspace{\algorithmicindent} Set $L_k \gets L_{k,j}$ and $\gamma_{k,i} \gets \gamma_{k,i,j}$ for all $i \in [m]$ \State \hspace{\algorithmicindent} Set $\alpha_k \gets \alpha_{k,j}$ and $x_{k+1} \gets x_k + \alpha_kd_k$ and \textbf{break} (loop over $j \in \N{}$) \label{step.term_loop} \State \textbf{else} \State \hspace{\algorithmicindent} \textbf{if} \eqref{eq.Lipschitz_bound_f} (resp.,~\eqref{eq.Lipschitz_bound_c} for some $i \in [m]$) is not satisfied \State \hspace{\algorithmicindent}\hspace{\algorithmicindent} Set $L_{k,j+1} \gets \rho L_{k,j}$ (resp.,~$\gamma_{k,i,j+1} \gets \rho \gamma_{k,i,j}$) \State \hspace{\algorithmicindent} \textbf{else} \State \hspace{\algorithmicindent}\hspace{\algorithmicindent} Set $L_{k,j+1} \gets L_{k,j}$ (resp.,~$\gamma_{k,i,j+1} \gets \gamma_{k,i,j}$) \EndFor \EndFor \ealgorithmic \ealgorithm The intuition behind the stepsize selection scheme in Algorithm~\ref{alg.sqp_adaptive} is that the stepsize is chosen to minimize an upper bound on the change in the merit function. This upper bounding function is revealed in Lemma~\ref{lem.phi_reduction_upper} later on. Due to the nonsmoothness of the merit function, which creates a \emph{kink} at a unit stepsize, there are three cases for the minimizer: It can occur \emph{before}, \emph{at}, or \emph{after} the kink. An illustration of these cases is shown in Figure~\ref{fig:stepsize_illustration}. Certain situations that lead to each of the three cases are as follows.
(There are additional situations that one may consider since the upper bounding function involves a combination of many terms, but the following are a few example situations to provide some intuition.) If the Lipschitz constant estimates are large enough, indicating high nonlinearity of the problem functions, then the minimizer may be at a stepsize less than 1. On the other hand, if the Lipschitz constant estimates are not too large and derivative information of the objective function suggests that the merit function improves beyond a unit stepsize, then the minimizer is at a stepsize greater than 1. Otherwise, the minimizer occurs at a unit stepsize since this at least corresponds to a step toward linearized feasibility. \bfigure[ht] \centering \includegraphics[width=3in,clip=true,trim=110 30 90 30]{stepsizes.png} \caption{Illustration of three cases for an upper bounding function of the merit function (see Lemma~\ref{lem.phi_reduction_upper}) motivating the three cases in the stepsize selection scheme in Algorithm~\ref{alg.sqp_adaptive}. Each graph shows the value of the upper bound on the change in the merit function as a function of $\alpha_k$.} \label{fig:stepsize_illustration} \efigure The second algorithm is stated as Algorithm~\ref{alg.sqp_line_search}. In each iteration, it employs a traditional backtracking line search scheme until the reduction in the merit function is sufficiently large compared to the reduction in the model of the merit function. This suffices to establish a convergence result, as shown in the next subsection. \balgorithm[ht] \caption{SQP Algorithm with Backtracking Line Search} \label{alg.sqp_line_search} \balgorithmic[1] \Require $x_0 \in \R{n}$; $\tau_{-1} \in \R{}_{>0}$; $\epsilon \in (0,1)$; $\sigma \in (0,1)$; $\eta \in (0,1)$; $\nu \in (0,1)$; $\alpha \in \R{}_{>0}$ \For{\textbf{all} $k \in \N{}$} \State Compute $(d_k,y_k)$ as the solution of \eqref{eq.system_deterministic} \State \textbf{if} $(x_k,y_k)$ satisfies \eqref{eq.KKT} \textbf{then return} $(x_k,y_k)$ \label{step.termination_line_search} \State Set $\tau_k^{trial}$ by \eqref{eq.merit_parameter_trial} and $\tau_k$ by \eqref{eq.merit_parameter_lower} \For{\textbf{all} $j \in \N{}$} \State Set $\alpha_{k,j} \gets \nu^j \alpha$ \If{\eqref{eq.sufficient_decrease} holds} \State Set $\alpha_k \gets \alpha_{k,j}$ and $x_{k+1} \gets x_k + \alpha_k d_k$ and \textbf{break} (loop over $j \in \N{}$) \EndIf \EndFor \EndFor \ealgorithmic \ealgorithm \subsection{Convergence Analysis} We prove in this section that, from any initial iterate, each of Algorithm~\ref{alg.sqp_adaptive} and Algorithm~\ref{alg.sqp_line_search} generates a sequence of iterates over which a first-order measure of primal-dual stationarity for \eqref{prob.deterministic} (recall \eqref{eq.KKT}) vanishes. We assume throughout this section that both Assumptions~\ref{ass.deterministic} and \ref{ass.H} hold; for brevity, we do not remind the reader of this fact within the statement of each result. We also remark that if an algorithm terminates finitely, then it does so with $(x_k,y_k)$ satisfying~\eqref{eq.KKT}, meaning primal-dual stationarity has been achieved. Hence, we may assume without loss of generality in this section that neither algorithm terminates finitely, meaning that $\{x_k\}$ is infinite and $d_k \neq 0$ for all $k \in \N{}$ (recall Lemma~\ref{lem.stationary}).
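Before proceeding, we give a minimal numerical sketch of a single iteration of Algorithm~\ref{alg.sqp_line_search}, intended purely as an illustration of the preceding developments rather than a careful implementation. The callables \texttt{f}, \texttt{c}, \texttt{grad\_f}, and \texttt{jac\_c} and the matrix \texttt{H} (assumed to satisfy Assumption~\ref{ass.H}) are supplied by the user; all names are placeholders.
\begin{verbatim}
import numpy as np

def sqp_line_search_step(x, tau_prev, f, c, grad_f, jac_c, H,
                         sigma=0.5, eps=0.5, eta=1e-4, nu=0.5,
                         alpha0=1.0):
    g, ck, J = grad_f(x), c(x), jac_c(x)
    n, m = x.size, ck.size

    # Solve the SQP system [[H, J^T], [J, 0]] [d; y] = -[g; c].
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, ck]))
    d, y = sol[:n], sol[n:]

    # Merit parameter: trial value, then conditional decrease.
    curv = g @ d + max(d @ H @ d, 0.0)
    tau_trial = np.inf if curv <= 0 else \
        (1 - sigma) * np.abs(ck).sum() / curv
    tau = tau_prev if tau_prev <= tau_trial else (1 - eps) * tau_trial

    # Model reduction and merit function.
    dq = -tau * (g @ d + 0.5 * max(d @ H @ d, 0.0)) + np.abs(ck).sum()
    phi = lambda z: tau * f(z) + np.abs(c(z)).sum()

    # Backtrack until the sufficient decrease condition holds.
    alpha = alpha0
    while phi(x + alpha * d) > phi(x) - eta * alpha * dq:
        alpha *= nu
    return x + alpha * d, y, tau
\end{verbatim}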
In all of the results of this section, the statements are proved to hold with respect to \emph{both} Algorithms~\ref{alg.sqp_adaptive} and~\ref{alg.sqp_line_search}. There are only a few differences in the results for the two algorithms; when a result differs, we say so explicitly. Much of our analysis, at least prior to Lemma~\ref{lem.adaptive_red}, follows standard analysis for line-search SQP methods; see, e.g., \cite{ByrdCurtNoce08,ByrdCurtNoce10a}. Nonetheless, we provide proofs of the results for completeness. Our analysis uses the orthogonal decomposition of the search directions given by \bequationNN d_k = u_k + v_k\ \ \text{where}\ \ u_k \in \Null(J_k)\ \ \text{and}\ \ v_k \in \Range(J_k^T)\ \ \text{for all}\ \ k \in \N{}. \eequationNN \emph{We emphasize that the components $u_k$ and $v_k$ do not need to be computed explicitly for any $k \in \N{}$.} They are merely tools for our analysis. As is common in the literature, we refer to $u_k$ as the tangential component and $v_k$ as the normal component of $d_k$. We first show an upper bound on the normal components of the search directions. \blemma\label{lem.bound_v} There exists $\kappa_v \in \R{}_{>0}$ such that, for all $k \in \N{}$, the normal component $v_k$ satisfies $\max\{\|v_k\|_2,\|v_k\|_2^2\} \leq \kappa_v \|c_k\|_2$. \elemma \bproof Let $k \in \N{}$ be arbitrary. From $J_kd_k = J_k(u_k+v_k) = -c_k$, $u_k \in \Null(J_k)$, and $v_k \in \Range(J_k^T)$, one has $v_k = -J_k^T(J_kJ_k^T)^{-1}c_k$; hence, by submultiplicativity of the matrix $2$-norm, \bequationNN \baligned \|v_k\|_2 &\leq \|J_k^T(J_kJ_k^T)^{-1}\|_2\|c_k\|_2 \\ \iff\ \ \|v_k\|_2^2 &\leq (\|J_k^T(J_kJ_k^T)^{-1}\|_2\|c_k\|_2)^2 = (\|J_k^T(J_kJ_k^T)^{-1}\|_2^2\|c_k\|_2)\|c_k\|_2. \ealigned \eequationNN Hence, the desired conclusion follows under Assumption~\ref{ass.deterministic}. \eproof Our next result reveals that there exists a critical threshold between the norms of the tangential and normal components of the search directions, and in any iteration $k \in \N{}$ in which the search direction $d_k$ is dominated by the tangential component $u_k$, the curvature of $H_k$ along $d_k$ has a useful lower bound defined with $u_k$. \blemma\label{lem.tangential_big} There exists $\kappa_{uv} \in \R{}_{>0}$ such that, for any $k \in \N{}$, if $\|u_k\|_2^2 \geq \kappa_{uv} \|v_k\|_2^2$, then $\thalf d_k^TH_kd_k \geq \tfrac14 \zeta \|u_k\|_2^2$, where $\zeta \in \R{}_{>0}$ is defined in Assumption~\ref{ass.H}. \elemma \bproof Under Assumption~\ref{ass.H}, for any $\kappa_{uv} \in \R{}_{>0}$, the inequality $\|u_k\|_2^2 \geq \kappa_{uv} \|v_k\|_2^2$ implies \bequationNN \baligned \thalf d_k ^T H_k d_k &= \thalf u_k^T H_k u_k + u_k^T H_k v_k + \thalf v_k^T H_k v_k \\ &\geq \thalf \zeta \|u_k\|_2^2 - \|u_k\|_2 \|H_k \|_2 \|v_k\|_2 - \thalf \|H_k\|_2 \|v_k\|_2^2 \\ &\geq \(\tfrac{\zeta}{2} - \tfrac{\kappa_H}{\sqrt{\kappa_{uv}}} - \tfrac{\kappa_H}{2\kappa_{uv}}\) \|u_k\|_2^2. \ealigned \eequationNN Thus, under Assumption~\ref{ass.H}, the result holds for $\kappa_{uv} \in \R{}_{>0}$ with $\tfrac{\kappa_H}{\sqrt{\kappa_{uv}}} + \tfrac{\kappa_H}{2\kappa_{uv}} \leq \tfrac{\zeta}{4}$.
\eproof For the constant $\kappa_{uv} \in \R{}_{>0}$ defined in Lemma~\ref{lem.tangential_big}, let us define \bequationNN \Psi_k := \bcases \|u_k\|_2^2 + \|c_k\|_2 & \text{if $\|u_k\|_2^2 \geq \kappa_{uv} \|v_k\|_2^2$} \\ \|c_k\|_2 & \text{otherwise,} \ecases \eequationNN along with the related index sets (that partition $\N{}$) \bequationNN \Kcal_u := \{k \in \N{} : \|u_k\|_2^2 \geq \kappa_{uv} \|v_k\|_2^2\}\ \ \text{and}\ \ \Kcal_v := \{ k \in \N{} : \|u_k\|_2^2 < \kappa_{uv} \|v_k\|_2^2\}. \eequationNN Our next result shows that the squared norms of the search directions and the constraint violations are bounded above by this critical quantity in all iterations. \blemma\label{lem.Psi_1} There exists a constant $\kappa_\Psi \in \R{}_{>0}$ such that, for all $k \in \N{}$, \bequationNN \|d_k\|_2^2 \leq \kappa_\Psi \Psi_k\ \ \text{and}\ \ \|d_k\|_2^2 + \|c_k\|_2 \leq (\kappa_\Psi + 1) \Psi_k. \eequationNN \elemma \bproof For all $k \in \Kcal_u$, it follows that \bequationNN \|d_k\|_2^2 = \|u_k\|_2^2 + \|v_k\|_2^2 \leq (1 + \kappa_{uv}^{-1}) \|u_k\|_2^2 \leq (1 + \kappa_{uv}^{-1}) (\|u_k\|_2^2 + \|c_k\|_2). \eequationNN For all $k \in \Kcal_v$, one finds from Lemma \ref{lem.bound_v} that \bequationNN \|d_k\|_2^2 = \|u_k\|_2^2 + \|v_k\|_2^2 < (\kappa_{uv} + 1) \|v_k\|_2^2 \leq (\kappa_{uv} + 1) \kappa_v \|c_k\|_2. \eequationNN Combining the results from the two cases implies the first desired result. To establish the second result, note that the definition of $\Psi_k$ yields $\|c_k\|_2 \leq \Psi_k$ for all $k \in \N{}$. \eproof As revealed by our next lemma, the reduction in the model of the merit function is bounded below with respect to the same critical quantity. \blemma\label{lem.Psi} There exists a constant $\kappa_q \in \R{}_{>0}$ such that, for all $k \in \N{}$, \bequationNN \Delta q(x_k,\tau_k,g_k,H_k,d_k) \geq \kappa_q \tau_k \Psi_k. \eequationNN \elemma \bproof Combining \eqref{eq.merit_model_reduction_lower} and Lemma~\ref{lem.tangential_big}, it follows that $\Delta q(x_k,\tau_k,g_k,H_k,d_k) \geq \tfrac14 \tau_k \zeta \|u_k\|_2^2 + \sigma \|c_k\|_1$ for $k \in \Kcal_u$. Similarly, \eqref{eq.merit_model_reduction_lower} implies that $\Delta q(x_k,\tau_k,g_k,H_k,d_k) \geq \sigma \|c_k\|_1$ for all $k \in \Kcal_v$. Combining the two cases, $\|\cdot\|_1 \geq \|\cdot\|_2$, and the fact that $\{\tau_k\}$ is monotonically nonincreasing, the result holds for $\kappa_q := \min \{ \tfrac14 \zeta, \sigma/\tau_{-1}\} \in \R{}_{>0}$. \eproof Our next lemma shows an upper bound on the change in the merit function when the inner \textbf{for} loop of Algorithm~\ref{alg.sqp_adaptive} terminates with large Lipschitz constant estimates. \blemma\label{lem.phi_reduction_upper} For all $k \in \N{}$, if the inner \textbf{for} loop of Algorithm~\ref{alg.sqp_adaptive} terminates since \eqref{eq.Lipschitz_bounds} holds, then with $\Gamma_k := \sum_{i\in[m]} \gamma_{k,i} \in \R{}_{>0}$ it follows that \bequationNN \phi(x_k + \alpha_k d_k,\tau_k) - \phi(x_k,\tau_k) \leq \alpha_k \tau_k g_k^Td_k + |1-\alpha_k|\|c_k\|_1 - \|c_k\|_1 + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2. 
\eequationNN \elemma \bproof For such $k \in \N{}$, it follows from \eqref{eq.Lipschitz_bounds} and Lemma~\ref{lem.c_upper} that \bequationNN \baligned &\ \phi(x_k + \alpha_k d_k,\tau_k) - \phi(x_k,\tau_k) \\ =&\ \tau_k f(x_k + \alpha_k d_k) - \tau_k f(x_k) + \|c(x_k + \alpha_k d_k)\|_1 - \|c_k\|_1 \\ \leq&\ \alpha_k \tau_k g_k^Td_k + \|c_k + \alpha_k J_k d_k\|_1 - \|c_k\|_1 + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2 \\ =&\ \alpha_k \tau_k g_k^Td_k + |1-\alpha_k|\|c_k\|_1 - \|c_k\|_1 + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2, \ealigned \eequationNN as desired. \eproof Next, we show lower bounds for the reduction in the merit function in each iteration of each algorithm. For concision, let us define for all $k \in \N{}$ the values \bequationNN \widehat\mu_k := \tfrac{2(1-\eta)\Delta q(x_k,\tau_k,g_k,H_k,d_k) }{(\tau_k L + \sum_{i\in[m]} \gamma_i)\|d_k\|_2^2}\ \ \text{and}\ \ \widetilde\mu_k := \widehat\mu_k - \tfrac{4 \|c_k\|_1}{(\tau_k L + \sum_{i\in[m]} \gamma_{i})\|d_k\|_2^2}. \eequationNN For a given $k \in \N{}$, one should notice the similarity between these values and the pair $(\widehat\alpha_{k,j},\widetilde\alpha_{k,j})$ defined for all $j \in \N{}$ in Algorithm~\ref{alg.sqp_adaptive}, except that the pair $(\widehat\mu_k,\widetilde\mu_k)$ are defined with respect to $L$ and $\gamma_{i}$ for all $i \in [m]$ defined in Assumption~\ref{ass.deterministic}. \blemma\label{lem.adaptive_red} For all $k \in \N{}$, the inequality \eqref{eq.sufficient_decrease} holds, where in the case of Algorithm~\ref{alg.sqp_line_search} this occurs with the stepsize satisfying $\alpha_k > \nu \min\{\widehat\mu_k,\max\{1,\widetilde\mu_k\}\} > 0$. \elemma \bproof Let $k \in \N{}$ be given. First, consider Algorithm~\ref{alg.sqp_adaptive}. If the inner \textbf{for} loop terminates since the stepsize yields \eqref{eq.sufficient_decrease}, then there is nothing left to prove. Hence, we may proceed by supposing that the loop terminates since \eqref{eq.Lipschitz_bounds} holds, which we shall now proceed to show means that \eqref{eq.sufficient_decrease} holds as well. Consider three cases, where as in Lemma~\ref{lem.phi_reduction_upper} let us define $\Gamma_k := \sum_{i\in[m]} \gamma_{k,i} \in \R{}_{>0}$. \textbf{Case 1:} Suppose that in the last iteration of the inner \textbf{for} loop one finds $\widehat\alpha_{k,j} < 1$, in which case the algorithm yields $\alpha_k = \tfrac{2(1-\eta)\Delta q(x_k,\tau_k,g_k,H_k,d_k)}{(\tau_k L_k + \Gamma_k) \|d_k\|_2^2} < 1$. Combining this fact with Lemmas~\ref{lem.directional_derivative} and \ref{lem.phi_reduction_upper}, it follows that \bequationNN \baligned &\ \phi(x_k + \alpha_k d_k,\tau_k) - \phi(x_k,\tau_k) \\ \leq&\ \alpha_k(\tau_k g_k^Td_k - \|c_k\|_1) + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2 \\ \leq&\ - \alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k) + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2 \\ =&\ -\alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k) \\ &\quad + \thalf \alpha_k (\tau_k L_k + \Gamma_k) \(\tfrac{2(1-\eta)\Delta q(x_k,\tau_k,g_k,H_k,d_k)}{(\tau_k L_k + \Gamma_k)\|d_k\|_2^2}\)\|d_k\|_2^2 \\ =&\ -\eta \alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k). \ealigned \eequationNN \textbf{Case 2:} Suppose that in the last iteration of the inner \textbf{for} loop one finds $\widehat\alpha_{k,j} \geq 1$ and $\widetilde\alpha_{k,j} \leq 1$, in which case the algorithm yields $\alpha_k = 1$. 
Combining this fact, the fact that $\widehat\alpha_{k,j} \geq 1$ in the last iteration of the loop, Lemma~\ref{lem.directional_derivative}, and Lemma~\ref{lem.phi_reduction_upper} yields the same string of relationships as in Case~1, except that since $\widehat\alpha_{k,j} \geq 1$ the first equation holds not as an equation, but as an ``$\leq$'' inequality. \textbf{Case 3:} Suppose that in the last iteration of the inner \textbf{for} loop one finds $\widetilde\alpha_{k,j} > 1$, in which case the algorithm yields $\alpha_k = \tfrac{2(1-\eta)\Delta q(x_k,\tau_k,g_k,H_k,d_k) - 4 \|c_k\|_1}{(\tau_k L_k + \Gamma_k) \|d_k\|_2^2} > 1$. Combining this fact with Lemmas~\ref{lem.directional_derivative} and \ref{lem.phi_reduction_upper}, it follows that \bequationNN \baligned &\ \phi(x_k + \alpha_k d_k,\tau_k) - \phi(x_k,\tau_k) \\ \leq&\ \alpha_k \tau_k g_k^Td_k + (\alpha_k-1)\|c_k\|_1 - \|c_k\|_1 + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2 \\ =&\ \alpha_k(\tau_k g_k^Td_k - \|c_k\|_1) + 2 (\alpha_k - 1) \|c_k\|_1 + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2 \\ \leq&\ -\alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k) + 2\alpha_k \|c_k\|_1 + \thalf (\tau_k L_k + \Gamma_k) \alpha_k^2 \|d_k\|_2^2 \\ =&\ -\alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k) + 2\alpha_k \|c_k\|_1 \\ &\quad + \thalf \alpha_k (\tau_k L_k + \Gamma_k) \(\tfrac{2(1-\eta)\Delta q(x_k,\tau_k,g_k,H_k,d_k) - 4\|c_k\|_1}{(\tau_k L_k + \Gamma_k) \|d_k\|_2^2}\) \|d_k\|_2^2 \\ =&\ -\eta \alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k). \ealigned \eequationNN Combining the three cases shows the desired result for Algorithm~\ref{alg.sqp_adaptive}. Now consider Algorithm~\ref{alg.sqp_line_search}. One finds that one of three cases occurs, which mimic those for Algorithm~\ref{alg.sqp_adaptive}. In particular, if $\widehat\mu_k < 1$, then an analysis similar to that for Case~1 above shows that for $j \in \N{}$ with $\alpha_{k,j}/\nu > \widehat\mu_k$ and $\alpha_{k,j} \leq \widehat\mu_k$, the backtracking line search will terminate by iteration $j \in \N{}$, from which it follows that $\alpha_k > \nu\widehat\mu_k$. If $\widehat\mu_k \geq 1$ and $\widetilde\mu_k \leq 1$, or if $\widetilde\mu_k > 1$, then a similar argument combined with Case~2 or Case~3, respectively, completes the proof. \eproof Next, we show that the tangential components of the directions are bounded. \blemma\label{lem.bound_u} The tangential component sequence $\{u_k\}$ is bounded. \elemma \bproof The first block of \eqref{eq.system_deterministic}, premultiplied by $u_k^T$, yields $u_k^TH_k(u_k + v_k) = -u_k^Tg_k$. Hence, under Assumption \ref{ass.H}, one finds that \bequationNN \zeta \|u_k\|_2^2 \leq u_k^TH_ku_k = - g_k^Tu_k - v_k^TH_ku_k \leq (\|g_k\|_2 + \kappa_H \|v_k\|_2)\|u_k\|_2. \eequationNN Therefore, the result follows from Assumption \ref{ass.deterministic} and Lemma \ref{lem.bound_v}. \eproof We now show that the merit parameter sequence is bounded, and that it remains fixed at a value for all sufficiently large $k \in \N{}$. \blemma\label{lem.tau_bound} There exists $k_\tau \in \N{}$ and $\tau_{\min} \in \R{}_{>0}$ such that $\tau_k = \tau_{\min}$ for $k \geq k_\tau$. \elemma \bproof Recall that $\tau_k < \tau_{k-1}$ if and only if both $g_k^Td_k + \max\{d_k^TH_kd_k,0\} > 0$ and \bequation\label{eq.tau_bound} \tau_{k-1} (g_k^Td_k + \max\{d_k^TH_kd_k,0\}) > (1 - \sigma) \|c_k\|_1. 
\eequation According to the first block equation of \eqref{eq.system_deterministic} (premultiplied by $u_k^T$), one has \bequationNN g_k^Td_k + \max\{d_k^TH_kd_k, 0\} = \bcases g_k^Tv_k + v_k^T H_k u_k + v_k^T H_k v_k & \text{if $d_k^TH_kd_k \geq 0$} \\ g_k^Tv_k - v_k^TH_ku_k - u_k^TH_ku_k & \text{otherwise.} \ecases \eequationNN The result follows from our ability to bound the left-hand side of this expression with respect to the constraint violation. We consider two cases. First, if $d_k^TH_kd_k \geq 0$, then under Assumptions \ref{ass.deterministic} and \ref{ass.H} it follows with Lemmas~\ref{lem.bound_v} and \ref{lem.bound_u} and $\|\cdot\|_1 \geq \|\cdot\|_2$ that there exists a constant $\kappa_{\tau,1} \in \R{}_{>0}$ such that \bequationNN g_k^Tv_k + v_k^T H_k u_k + v_k^T H_k v_k \leq (\|g_k\|_2 + \kappa_H \|u_k\|_2) \|v_k\|_2 + \kappa_H \|v_k\|_2^2 \leq \kappa_{\tau,1} \|c_k\|_1. \eequationNN Second, if $d_k^TH_kd_k < 0$, then under Assumptions \ref{ass.deterministic} and \ref{ass.H} it follows from Lemmas \ref{lem.bound_v} and \ref{lem.bound_u} and $\|\cdot\|_1 \geq \|\cdot\|_2$ that there exists a constant $\kappa_{\tau,2} \in \R{}_{>0}$ such that \bequationNN g_k^Tv_k - v_k^T H_k u_k - u_k^T H_k u_k \leq (\|g_k\|_2 + \kappa_H \|u_k\|_2) \|v_k\|_2 \leq \kappa_{\tau,2} \|c_k\|_1. \eequationNN Together, these imply $g_k^Td_k + \max\{d_k^TH_kd_k,0\} \leq \max\{\kappa_{\tau,1},\kappa_{\tau,2}\} \|c_k\|_1$, from which it follows that in order to have both $g_k^Td_k + \max\{d_k^TH_kd_k,0\} > 0$ and \eqref{eq.tau_bound}, one must have $\tau_{k-1} > (1-\sigma)/\max\{\kappa_{\tau,1},\kappa_{\tau,2}\}$. Therefore, if this inequality is not satisfied for $k = k_\tau$ for some $k_\tau \in \N{}$, then it remains unsatisfied for all $k \geq k_\tau$. This, along with the fact that whenever Algorithm \ref{alg.sqp_adaptive} or \ref{alg.sqp_line_search} decreases the merit parameter it does so by at least a constant factor, proves the result. \eproof We now prove that there is a positive lower bound for the stepsizes. \blemma\label{lem.alpha_lower} There exists $\alpha_{\min} \in \R{}_{>0}$ such that $\alpha_k \geq \alpha_{\min}$ for all $k \in \N{}$. \elemma \bproof Let $k \in \N{}$ be given. With respect to Algorithm~\ref{alg.sqp_adaptive}, one has that $\alpha_k \geq 1$ unless the inner \textbf{for} loop terminates in iteration $j \in \N{}$ with $\widehat\alpha_{k,j} < 1$. In such cases, it follows from monotonicity of $\{\tau_k\}$ and Lemmas~\ref{lem.well_defined}, \ref{lem.tau_bound}, \ref{lem.Psi_1}, and \ref{lem.Psi} that \bequationNN \alpha_k = \tfrac{2(1-\eta) \Delta q(x_k,\tau_k,g_k,H_k,d_k)}{(\tau_k L_{k,j} + \sum_{i\in[m]} \gamma_{k,i,j})\|d_k\|_2^2} \geq \tfrac{2(1-\eta)\kappa_q \tau_{\min}}{(\tau_{-1} L_{\max} + \sum_{i\in[m]} \gamma_{\max,i}) \kappa_\Psi} > 0. \eequationNN Similarly, for Algorithm~\ref{alg.sqp_line_search}, Lemma~\ref{lem.adaptive_red} implies $\alpha_k \geq 1$ unless $\widehat\mu_k < 1$. In such cases, it follows from monotonicity of $\{\tau_k\}$ and Lemmas~\ref{lem.well_defined}, \ref{lem.tau_bound}, \ref{lem.Psi_1}, and \ref{lem.Psi} that \bequationNN \alpha_k > \tfrac{2\nu (1-\eta) \Delta q(x_k,\tau_k,g_k,H_k,d_k)}{(\tau_k L + \sum_{i\in[m]} \gamma_i)\|d_k\|_2^2} \geq \tfrac{2 \nu (1-\eta)\kappa_q \tau_{\min}}{(\tau_{-1} L + \sum_{i\in[m]} \gamma_{i}) \kappa_\Psi} > 0. \eequationNN Overall, a positive lower bound has been proved for both algorithms.
\eproof We now present our main convergence theorem for Algorithms~\ref{alg.sqp_adaptive} and \ref{alg.sqp_line_search}. \btheorem\label{th.deterministic} Algorithms~\ref{alg.sqp_adaptive} and \ref{alg.sqp_line_search} yield \bequationNN \lim_{k\to\infty} \|d_k\|_2 = 0,\ \ \lim_{k\to\infty} \|c_k\|_2 = 0,\ \ \text{and}\ \ \lim_{k\to\infty} \|g_k + J_k^Ty_k\|_2 = 0. \eequationNN \etheorem \bproof For all $k \in \N{}$, it follows from Lemmas~\ref{lem.Psi}, \ref{lem.adaptive_red}, and \ref{lem.alpha_lower} that \bequationNN \phi(x_k,\tau_k) - \phi(x_{k+1},\tau_k) \geq \eta \alpha_k \Delta q(x_k,\tau_k,g_k,H_k,d_k) \geq \eta \alpha_{\min} \kappa_q \tau_{\min} \Psi_k. \eequationNN Combining this with Lemmas~\ref{lem.Psi_1} and \ref{lem.tau_bound} shows for $k \in \N{}$ with $k > k_\tau$ that \bequationNN \baligned &\ \phi(x_{k_\tau},\tau_{\min}) - \phi(x_k,\tau_{\min}) \\ =&\ \sum_{j = k_\tau}^{k-1} (\phi(x_j,\tau_{\min}) - \phi(x_{j+1},\tau_{\min})) \\ \geq&\ \eta \alpha_{\min} \kappa_q \tau_{\min} \sum_{j = k_\tau}^{k-1} \Psi_j \geq \tfrac{\eta \alpha_{\min} \kappa_q \tau_{\min}} {\kappa_\Psi + 1} \sum_{j = k_\tau}^{k-1} (\|d_j\|_2^2 + \|c_j\|_2). \ealigned \eequationNN Since, under Assumption~\ref{ass.deterministic}, $\phi(\cdot,\tau_{\min})$ is bounded below over the iterates, the above implies the first two desired limits. Note now that \eqref{eq.system_deterministic} implies \bequation\label{eq.gJy} \|g_k + J_k^Ty_k\|_2 = \|H_kd_k\|_2 \leq \|H_k\|_2\|d_k\|_2 \leq \kappa_H \|d_k\|_2. \eequation Hence, by Assumption~\ref{ass.H} and $\{d_k\} \to 0$, the result follows. \eproof \section{Stochastic Setting}\label{sec.stochastic} Now consider the optimization problem \bequation\label{prob.f_nonlinear_stochastic} \min_{x\in\R{n}}\ f(x)\ \st\ c(x) = 0,\ \ \text{with}\ \ f(x) = \E[F(x,\omega)], \eequation where $f : \R{n} \to \R{}$, $c : \R{n} \to \R{m}$, $\omega$ is a random variable with associated probability space $(\Omega,\Fcal,P)$, $F : \R{n} \times \Omega \to \R{}$, and $\E[\cdot]$ represents expectation taken with respect to~$P$. We presume that one has access to values of the constraint function and its derivatives, but that it is intractable to evaluate the objective and/or its derivatives. That said, we presume that at a given iterate $x_k$, one can evaluate a stochastic gradient estimate $\gbar_k \in \R{n}$ satisfying the following assumption. \bassumption\label{ass.g} For all $k \in \N{}$, the stochastic gradient estimate $\gbar_k \in \R{n}$ is an unbiased estimator of the gradient of $f$ at $x_k$, i.e., \bequationNN \E_k[\gbar_k] = g_k, \eequationNN where $\E_k[\cdot]$ denotes expectation taken with respect to the distribution of $\omega$ conditioned on the event that the algorithm has reached $x_k \in \R{n}$ in iteration $k \in \N{}$. In addition, there exists a constant $M \in \R{}_{>0}$ such that, for all $k \in \N{}$, one has \bequationNN \E_k[\|\gbar_k - g_k\|_2^2] \leq M. \eequationNN \eassumption \subsection{Algorithm}\label{sec.algorithms_stochastic} Similar to the deterministic setting, in order to solve \eqref{prob.f_nonlinear_stochastic}, we consider a stochastic algorithm that computes a search direction $\dbar_k \in \R{n}$ and Lagrange multiplier vector $\ybar_k \in \R{m}$ in iteration $k \in \N{}$ by solving the linear system \bequation\label{eq.system_stochastic} \bbmatrix H_k & J_k^T \\ J_k & 0 \ebmatrix \bbmatrix \dbar_k \\ \ybar_k \ebmatrix = - \bbmatrix \gbar_k \\ c_k \ebmatrix, \eequation where $\{H_k\}$ satisfies Assumption~\ref{ass.H}. 
Generally, we use a ``bar'' over a quantity whose value in iteration $k \in \N{}$ depends on $\gbar_k$. Hence, as they are independent of $\gbar_k$ conditioned on the event that the algorithm reaches $x_k$ as its $k$th iterate, we write the constraint value, constraint Jacobian, and $(1,1)$-block matrix as $c_k$, $J_k$, and $H_k$, respectively, but we write the solution of \eqref{eq.system_stochastic} as $(\dbar_k,\ybar_k)$ due to its dependence on~$\gbar_k$. The algorithm that we propose is stated as Algorithm~\ref{alg.sqp_stochastic}. Paralleling Algorithm~\ref{alg.sqp_adaptive}, the merit parameter is set based on the computation of a trial value \bequation\label{eq.merit_parameter_trial_stochastic} \bar\tau_k^{trial} \gets \bcases \infty & \text{if $\gbar_k^T\dbar_k + \max\{\dbar_k^TH_k\dbar_k,0\} \leq 0$} \\ \tfrac{(1 - \sigma)\|c_k\|_1}{\gbar_k^T\dbar_k + \max\{\dbar_k^TH_k\dbar_k,0\}} & \text{otherwise,} \ecases \eequation followed by the rule \bequation\label{eq.merit_parameter_lower_stochastic} \bar\tau_k \gets \bcases \bar\tau_{k-1} & \text{if $\bar\tau_{k-1} \leq \bar\tau_k^{trial}$} \\ (1-\epsilon) \bar\tau_k^{trial} & \text{otherwise,} \ecases \eequation which ensures $\bar\tau_k \leq \bar\tau_k^{trial}$ and, as for our deterministic algorithm (see \eqref{eq.merit_model_reduction_lower}), \bequation\label{eq.merit_model_reduction_lower_stochastic} \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) \geq \thalf \bar\tau_k \max\{\dbar_k^TH_k\dbar_k,0\} + \sigma \|c_k\|_1. \eequation A unique feature of our algorithm for this stochastic setting is that it adaptively estimates a lower bound for the ratio of the reduction in the model of the merit function to the product of the merit parameter and the squared norm of the search direction. This is used to determine an interval into which the stepsize will be projected; control of this parameter is paramount to ensure convergence in expectation. We set \bequation\label{eq.ratio_trial} \bar\xi_k^{trial} \gets \tfrac{\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{\bar\tau_k \|\dbar_k\|_2^2}, \eequation then apply the rule (which ensures $\bar\xi_k \leq \bar\xi_k^{trial}$ for all $k \in \N{}$) \bequation\label{eq.ratio} \bar\xi_k \gets \bcases \bar\xi_{k-1} & \text{if $\bar\xi_{k-1} \leq \bar\xi_k^{trial}$} \\ (1-\epsilon) \bar\xi_k^{trial} & \text{otherwise.} \ecases \eequation It will be shown in our analysis that $\{\bar\xi_k\}$ is bounded away from zero \emph{deterministically}. For generality, Algorithm~\ref{alg.sqp_stochastic} is stated with Lipschitz constant estimates $\{L_k\}$ and~$\{\Gamma_k\}$ given as inputs (with the idea that $\Gamma_k := \sum_{i\in[m]} \gamma_{k,i}$ for all $k \in \N{}$). Our analysis in the next subsection presumes that Lipschitz constants are known, although in practice these can be estimated using standard techniques (see, e.g., \cite{CurtRobi19}) in an attempt to ensure that the same convergence results hold as for the case when the constants are known. The sequence $\{\beta_k\}$ is introduced to control the stepsizes. As in standard analysis for stochastic (sub)gradient-type methods, our analysis in the next subsection considers both the case when $\{\beta_k\}$ is asymptotically constant and the case when it diminishes at an appropriate rate to ensure convergence in expectation.
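To make these update rules concrete, the following is a minimal sketch of \eqref{eq.merit_parameter_trial_stochastic}, \eqref{eq.merit_parameter_lower_stochastic}, \eqref{eq.ratio_trial}, and \eqref{eq.ratio}, where the model reduction is computed as $\Delta q(x,\tau,g,H,d) = -\tau(g^Td + \thalf\max\{d^THd,0\}) + \|c\|_1$ from \eqref{def.merit_model_reduction}; the function name and interface are ours, not part of the algorithm statement.
\begin{verbatim}
import numpy as np

def update_tau_xi(g_bar, d_bar, H, c, tau_prev, xi_prev, sigma, eps):
    """One iteration of the merit-parameter and ratio-parameter updates.

    Assumes d_bar != 0, since the algorithm skips the iteration otherwise.
    """
    c_norm1 = np.linalg.norm(c, 1)
    quad = max(d_bar @ H @ d_bar, 0.0)
    denom = g_bar @ d_bar + quad
    # Trial merit parameter, then the monotone update rule.
    tau_trial = np.inf if denom <= 0.0 else (1.0 - sigma) * c_norm1 / denom
    tau = tau_prev if tau_prev <= tau_trial else (1.0 - eps) * tau_trial
    # Model reduction for the new merit parameter, then the ratio update.
    dq = -tau * (g_bar @ d_bar + 0.5 * quad) + c_norm1
    xi_trial = dq / (tau * (d_bar @ d_bar))
    xi = xi_prev if xi_prev <= xi_trial else (1.0 - eps) * xi_trial
    return tau, xi, dq
\end{verbatim}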
\begin{algorithm}[ht] \caption{Stochastic SQP Algorithm} \label{alg.sqp_stochastic} \begin{algorithmic}[1] \Require $x_0 \in \R{n}$; $\bar\tau_{-1} \in \R{}_{>0}$; $\epsilon \in (0,1)$; $\sigma \in (0,1)$; $\bar\xi_{-1} \in \R{}_{>0}$; $\{\beta_k\} \subset (0,1]$; $\theta \in \R{}_{\geq0}$; $\{L_k\} \subset \R{}_{>0}$; $\{\Gamma_k\} \subset \R{}_{>0}$ \For{\textbf{all} $k \in \N{}$} \State Compute $(\dbar_k,\ybar_k)$ as the solution of \eqref{eq.system_stochastic} \State \textbf{if} $\dbar_k = 0$ \textbf{then continue} (to iteration $k+1$) \State Set $\bar\tau_k^{trial}$ by \eqref{eq.merit_parameter_trial_stochastic} and $\bar\tau_k$ by \eqref{eq.merit_parameter_lower_stochastic} \label{step.tau_stochastic} \State Set $\bar\xi_k^{trial}$ by \eqref{eq.ratio_trial} and $\bar\xi_k$ by \eqref{eq.ratio} \label{step.xi} \State Set \bequationNN \baligned \bar{\widehat\alpha}_{k,\text{init}} &\gets \tfrac{\beta_k\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{(\bar\tau_k L_k + \Gamma_k)\|\dbar_k\|_2^2}\ \ \text{and} \\ \bar{\widetilde\alpha}_{k,\text{init}} &\gets \bar{\widehat\alpha}_{k,\text{init}} - \tfrac{4\|c_k\|_1}{(\bar\tau_k L_k + \Gamma_k)\|\dbar_k\|_2^2} \ealigned \eequationNN \State Set $\bar{\widehat\alpha}_k \gets \proj_k(\bar{\widehat\alpha}_{k,\text{init}})$ and $\bar{\widetilde\alpha}_k \gets \proj_k(\bar{\widetilde\alpha}_{k,\text{init}})$ where \label{step.alpha_projection_stochastic} \bequationNN \proj_k(\cdot) \equiv \proj\( \cdot\ \bigg| \left[\tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L_k + \Gamma_k}, \tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L_k + \Gamma_k} + \theta\beta_k^2\right]\) \eequationNN \State Set \label{step.alpha_stochastic} \bequationNN \bar{\alpha}_k \gets \bcases \bar{\widehat\alpha}_k & \text{if $\bar{\widehat\alpha}_k < 1$} \\ 1 & \text{if $\bar{\widetilde\alpha}_k \leq 1 \leq \bar{\widehat\alpha}_k$} \\ \bar{\widetilde\alpha}_k & \text{if $\bar{\widetilde\alpha}_k > 1$} \ecases \eequationNN \State Set $x_{k+1} \gets x_k + \bar\alpha_k \dbar_k$ \EndFor \end{algorithmic} \end{algorithm} \subsection{Convergence Analysis}\label{sec.analysis_stochastic} In this section, we prove that Algorithm~\ref{alg.sqp_stochastic} has convergence properties that match those from the deterministic setting in expectation, with some caveats that we explain and justify. Our algorithm uses only the stochastic gradient estimates~$\{\gbar_k\}$, computes $\{(\dbar_k,\ybar_k)\}$ by \eqref{eq.system_stochastic}, sets merit parameter-related sequences $\{\bar\tau_k\}$ and~$\{\bar\tau_k^{trial}\}$, and also sets steplength-related sequences $\{\bar\xi_k\}$ and~$\{\bar\xi_k^{trial}\}$, but our analysis also references the gradients $\{g_k\}$ corresponding to $\{x_k\}$ as well as the corresponding sequence of solutions of \eqref{eq.system_deterministic}, namely, $\{(d_k,y_k)\}$, and trial merit parameter values $\{\tau_k^{trial}\}$. In other words, for all $k \in \N{}$, conditioned on the event that the algorithm reaches $x_k$, we define $(d_k,y_k)$ and $\tau_k^{trial}$ as they would be computed if the algorithm reached $x_k$ as the $k$th iterate in Algorithm~\ref{alg.sqp_adaptive}. Throughout this section, we assume that Assumptions~\ref{ass.deterministic}, \ref{ass.H}, and \ref{ass.g} hold---where $\{H_k\}$ is a deterministic sequence chosen independently from $\{\gbar_k\}$---and for the sake of brevity we do not state this fact within each result. 
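Before proceeding with the analysis, it may help to see the stepsize selection in Lines~\ref{step.alpha_projection_stochastic} and \ref{step.alpha_stochastic} of Algorithm~\ref{alg.sqp_stochastic} in code form. The following is a minimal sketch under our own naming, stated with the known constants $L$ and $\Gamma$ in place of the estimates $L_k$ and $\Gamma_k$; the projection interval is precisely the one whose length is computed in Lemma~\ref{lem.beta_control} below.
\begin{verbatim}
import numpy as np

def stochastic_stepsize(dq, d_bar, c, tau, xi, beta, theta, L, Gamma):
    """Stepsize selection; dq is Delta q(x, tau, g_bar, H, d_bar)."""
    denom = (tau * L + Gamma) * (d_bar @ d_bar)
    alpha_hat_init = beta * dq / denom
    alpha_tilde_init = alpha_hat_init - 4.0 * np.linalg.norm(c, 1) / denom
    # Project both trial values onto [lo, lo + theta * beta^2].
    lo = beta * xi * tau / (tau * L + Gamma)
    hi = lo + theta * beta**2
    alpha_hat = min(max(alpha_hat_init, lo), hi)
    alpha_tilde = min(max(alpha_tilde_init, lo), hi)
    if alpha_hat < 1.0:
        return alpha_hat
    elif alpha_tilde <= 1.0:  # alpha_tilde <= 1 <= alpha_hat
        return 1.0
    else:
        return alpha_tilde
\end{verbatim}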
In this section, we assume that Lipschitz constants for the objective and constraints, in particular, $L$ and $\Gamma := \sum_{i\in[m]} \gamma_{i}$, are known. \begin{remark} Our analysis makes Assumption~\ref{ass.deterministic}, which means that it assumes that the iterates remain in an open convex set over which the objective and constraint functions and their derivatives are bounded. This is admittedly not ideal in a stochastic setting. For example, in the case of applying a stochastic gradient method (SG) in an unconstrained stochastic setting, it is not ideal to assume that the gradients at the iterates remain bounded in norm, since---as SG is not a descent method---it is unreasonable to assume that the iterates remain in a sublevel set of the objective function. However, we believe this assumption is more reasonable in a constrained setting, since the iterates are being driven to the \emph{deterministic} feasible region. Further, we claim that Assumption~\ref{ass.deterministic} could be loosened if our algorithm were to choose a predetermined stepsize sequence, rather than one that mimics the stepsize scheme from Algorithm~\ref{alg.sqp_adaptive}. We discuss this issue further in \S\ref{sec.conclusion}. \end{remark} Since the algorithm simply skips to iteration $k+1$ whenever $\dbar_k = 0$, we may assume without loss of generality in our analysis that $\dbar_k \neq 0$ for all $k \in \N{}$. As in the deterministic setting, our analysis makes use of the orthogonal decomposition of the (stochastic) search directions given by \bequationNN \dbar_k = \bar{u}_k + v_k\ \ \text{where}\ \ \bar{u}_k \in \Null(J_k)\ \ \text{and}\ \ v_k \in \Range(J_k^T)\ \ \text{for all}\ \ k \in \N{}. \eequationNN Let us emphasize that, conditioned on the event that the algorithm reaches $x_k$ as its $k$th iterate, the normal component is \emph{deterministic}, depending only on the constraint value $c_k$ and Jacobian $J_k$; hence, we write $v_k$ rather than~$\vbar_k$ in the expression above. For all $k \in \N{}$, let $Z_k \in \R{n \times (n-m)}$ be a matrix whose columns form an orthonormal basis for the null space of~$J_k$; such a matrix exists under Assumption~\ref{ass.deterministic}. It follows that, for all $k \in \N{}$, \bequationNN \ubar_k = Z_k\wbar_k\ \ \text{and}\ \ u_k = Z_kw_k\ \ \text{for some}\ \ (\wbar_k,w_k) \in \R{n-m} \times \R{n-m}. \eequationNN Under Assumption~\ref{ass.H}, the reduced Hessian satisfies $Z_k^TH_kZ_k \succeq \zeta I$. For our first lemma, we carry over properties of algorithmic quantities that hold in the same manner as in the deterministic case, conditioned on the event that the algorithm has reached $x_k$ as the $k$th iterate. As in our analysis in the deterministic setting, for the constant $\kappa_{uv} \in \R{}_{>0}$ defined in the lemma, we define \bequationNN \overline\Psi_k := \bcases \|\ubar_k\|_2^2 + \|c_k\|_2 & \text{if $\|\ubar_k\|_2^2 \geq \kappa_{uv} \|v_k\|_2^2$} \\ \|c_k\|_2 & \text{otherwise.} \ecases \eequationNN \blemma\label{lem.deterministic_to_stochastic} For all $k \in \N{}$, \eqref{eq.system_stochastic} has a unique solution. In addition, for the same constants $(\kappa_v,\kappa_{uv},\kappa_\Psi,\kappa_q) \in \R{}_{>0} \times \R{}_{>0} \times \R{}_{>0} \times \R{}_{>0}$ that appear in Lemmas~\ref{lem.bound_v}, \ref{lem.tangential_big}, \ref{lem.Psi_1}, and \ref{lem.Psi}, the following statements hold true for all $k \in \N{}$. \benumerate \item[(a)] The normal component satisfies $\max\{\|v_k\|_2,\|v_k\|_2^2\} \leq \kappa_v\|c_k\|_2$.
\item[(b)] If $\|\ubar_k\|_2^2 \geq \kappa_{uv}\|v_k\|_2^2$, then $\thalf \dbar_k^TH_k\dbar_k \geq \tfrac14 \zeta \|\ubar_k\|_2^2$. \item[(c)] The search direction satisfies $\|\dbar_k\|_2^2 \leq \kappa_\Psi \overline\Psi_k$ and $\|\dbar_k\|_2^2 + \|c_k\|_2 \leq (\kappa_\Psi + 1) \overline\Psi_k$. \item[(d)] The model reduction satisfies $\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) \geq \kappa_q \bar\tau_k \overline\Psi_k$. \eenumerate Finally, for all $k \in \N{}$, it follows that \bequationNN \phi(x_k + \bar\alpha_k \dbar_k,\bar\tau_k) - \phi(x_k,\bar\tau_k) \leq \bar\alpha_k \bar\tau_k g_k^T \dbar_k + |1-\bar\alpha_k| \|c_k\|_1 - \|c_k\|_1 + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2. \eequationNN \elemma \bproof That \eqref{eq.system_stochastic} has a unique solution for all $k \in \N{}$ follows for the same reason that Lemma~\ref{lem.nonsingular} holds. The proofs of parts (a)--(d) follow in the same manner as the proofs of Lemmas~\ref{lem.bound_v}, \ref{lem.tangential_big}, \ref{lem.Psi_1}, and \ref{lem.Psi}, respectively, with the stochastic quantities $\{\gbar_k,\dbar_k,\ubar_k,\bar\tau_k\}$ in place of the deterministic quantities $\{g_k,d_k,u_k,\tau_k\}$, where it is important to recognize that the conclusions follow with the \emph{same constants}, namely, $(\kappa_v,\kappa_{uv},\kappa_\Psi,\kappa_q)$, as in the deterministic setting. The proof of the last conclusion follows in the same manner as that of Lemma~\ref{lem.phi_reduction_upper}. \eproof In the next lemma, we prove that the sequence $\{\bar\xi_k\}$ is bounded away from zero and eventually constant, deterministically. \blemma\label{lem.xi} In any run of the algorithm, there exist $\kbar_\xi \in \N{}$ and $\bar\xi_{\min} \in \R{}_{>0}$ such that $\bar\xi_k = \bar\xi_{\min}$ for all $k \geq \kbar_\xi$, where $\bar\xi_{\min} \in [\xi_{\min},\bar\xi_{-1}]$ with $\xi_{\min} := (1-\epsilon)\kappa_q/\kappa_\Psi$. \elemma \bproof If Line~\ref{step.xi} of the algorithm ever sets $\bar\xi_k < \bar\xi_{k-1}$, then it ensures that $\bar\xi_k \leq (1-\epsilon) \bar\xi_{k-1}$. This means that $\{\bar\xi_k\}$ is constant for sufficiently large $k$ or it vanishes. On the other hand, by Lemma~\ref{lem.deterministic_to_stochastic}(c) and (d), it follows that \bequationNN \tfrac{\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{\bar\tau_k \|\dbar_k\|_2^2} \geq \tfrac{\kappa_q \bar\tau_k \overline\Psi_k}{\kappa_\Psi \bar\tau_k \overline\Psi_k} = \tfrac{\kappa_q}{\kappa_\Psi}, \eequationNN meaning that Line~\ref{step.xi} will never set $\bar\xi_k$ less than $(1-\epsilon)\kappa_q/\kappa_\Psi$ for any $k \in \N{}$. Therefore, $\{\bar\xi_k\}$ is constant for sufficiently large $k$ in the manner stated. \eproof Next, we present the following obvious but important consequence of our stepsize selection scheme. In particular, the result shows that, even though the algorithm sets the stepsize adaptively, the difference between the largest and smallest possible stepsizes in a given iteration is $\Ocal(\beta_k^2)$, so this difference is controlled by the algorithm. \blemma\label{lem.beta_control} For any $k \in \N{}$, the stepsize satisfies \bequationNN \bar\alpha_k \in [\bar\alpha_{k,\min},\bar\alpha_{k,\max}] := \left[\tfrac{\beta_k \bar\xi_k \bar\tau_k }{\bar\tau_k L + \Gamma}, \tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L + \Gamma} + \theta\beta_k^2\right], \eequationNN which is an interval with length $\bar\alpha_{k,\max} - \bar\alpha_{k,\min} = \theta \beta_k^2$.
\elemma \bproof The proof follows directly from the projections of $\bar{\widehat\alpha}_k$ and $\bar{\widetilde\alpha}_k$ in Line~\ref{step.alpha_projection_stochastic} and the formula for the stepsize $\bar\alpha_k$ in Line~\ref{step.alpha_stochastic}. \eproof Our next result is a cornerstone of our analysis. It builds on the last conclusion in Lemma~\ref{lem.deterministic_to_stochastic} to specify a useful upper bound for the merit function value after a step. Central to the proof is our specific stepsize selection strategy. \blemma\label{lem.key_decrease} Suppose that $\{\beta_k\}$ is chosen such that $\beta_k \bar\xi_k \bar\tau_k/(\bar\tau_k L + \Gamma) \in (0,1]$ for all $k \in \N{}$. Then, for all $k \in \N{}$, it follows that \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf \bar\alpha_k \beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k). \ealigned \eequationNN \elemma \bproof Let $k \in \N{}$ be arbitrary. We consider three cases, with a few subcases, depending on how the stepsize is set in Lines~\ref{step.alpha_projection_stochastic} and~\ref{step.alpha_stochastic} of the algorithm. \textbf{Case 1:} Suppose in Line~\ref{step.alpha_stochastic} that $\bar{\widehat\alpha}_k < 1$, meaning that $\bar\alpha_k \gets \bar{\widehat\alpha}_k$. From Lemma~\ref{lem.deterministic_to_stochastic} and Lemma~\ref{lem.directional_derivative}, it follows that \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ \bar\alpha_k (\bar\tau_k g_k^T \dbar_k - \|c_k\|_1) + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 \\ =&\ \bar\alpha_k (\bar\tau_k g_k^T d_k - \|c_k\|_1) + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k). \ealigned \eequationNN Using this inequality, let us now consider two subcases. (For all $k \in \N{}$, since \eqref{eq.ratio} ensures $\bar\xi_k \leq \bar\xi_k^{trial} = \tfrac{\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{\bar\tau_k\|\dbar_k\|_2^2}$, it follows that $\tfrac{\beta_k\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{(\bar\tau_k L + \Gamma)\|\dbar_k\|_2^2} \geq \tfrac{\beta_k\bar\xi_k\bar\tau_k}{\bar\tau_k L + \Gamma}$.) \noindent \textbf{Case 1a:} If $\bar\alpha_k = \tfrac{\beta_k\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{(\bar\tau_k L + \Gamma)\|\dbar_k\|_2^2}$, then \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) \\ &\quad + \thalf \bar\alpha_k (\bar\tau_k L + \Gamma) \(\tfrac{\beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{(\bar\tau_k L + \Gamma) \|\dbar_k\|_2^2}\) \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ =&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf \bar\alpha_k \beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k).
\ealigned \eequationNN \noindent \textbf{Case 1b:} If $\bar\alpha_k = \tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L + \Gamma} + \theta\beta_k^2 \leq \tfrac{\beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{(\bar\tau_k L + \Gamma) \|\dbar_k\|_2^2}$, then \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) \\ &\quad + \thalf \bar\alpha_k (\bar\tau_k L + \Gamma) \(\tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L + \Gamma} + \theta\beta_k^2\) \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ \leq&\ - \bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf \bar\alpha_k \beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k). \ealigned \eequationNN \textbf{Case 2:} Suppose in Line~\ref{step.alpha_stochastic} that $\bar{\widetilde\alpha}_k \leq 1 \leq \bar{\widehat\alpha}_k$, meaning that $\bar\alpha_k \gets 1$. From Lemma~\ref{lem.deterministic_to_stochastic}, Lemma~\ref{lem.directional_derivative}, and since $\tfrac{\beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)}{(\bar\tau_k L + \Gamma) \|\dbar_k\|_2^2} \geq 1 = \bar\alpha_k$, it follows that \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ \bar\alpha_k (\bar\tau_k g_k^T \dbar_k - \|c_k\|_1) + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 \\ =&\ \bar\alpha_k (\bar\tau_k g_k^T d_k - \|c_k\|_1) + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf \bar\alpha_k \beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k). \ealigned \eequationNN \textbf{Case 3:} Suppose in Line~\ref{step.alpha_stochastic} that $\bar{\widetilde\alpha}_k > 1$, meaning that $\bar\alpha_k \gets \bar{\widetilde\alpha}_k$. From Lemma~\ref{lem.deterministic_to_stochastic} and Lemma~\ref{lem.directional_derivative}, it follows that \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ \bar\alpha_k \bar\tau_k g_k^T \dbar_k + (\bar\alpha_k - 1)\|c_k\|_1 - \|c_k\|_1 + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 \\ =&\ \bar\alpha_k (\bar\tau_k g_k^T \dbar_k - \|c_k\|_1) + 2(\bar\alpha_k - 1)\|c_k\|_1 + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 \\ \leq&\ \bar\alpha_k (\bar\tau_k g_k^T d_k - \|c_k\|_1) + 2 \bar\alpha_k \|c_k\|_1 + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + 2 \bar\alpha_k \|c_k\|_1 + \thalf (\bar\tau_k L + \Gamma) \bar\alpha_k^2 \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k). \ealigned \eequationNN Using this inequality, let us now consider two subcases. (Since the lemma requires $1 \geq \tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L + \Gamma}$ for all $k \in \N{}$, it is not possible that $\bar\alpha_k = \tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L + \Gamma}$ in this case.) 
\noindent \textbf{Case 3a:} If $\bar\alpha_k = \tfrac{\beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) - 4 \|c_k\|_1}{(\bar\tau_k L + \Gamma) \|\dbar_k\|_2^2}$, then \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + 2 \bar\alpha_k \|c_k\|_1 \\ &\quad + \thalf \bar\alpha_k (\bar\tau_k L + \Gamma) \(\tfrac{\beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) - 4 \|c_k\|_1}{(\bar\tau_k L + \Gamma) \|\dbar_k\|_2^2}\) \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ =&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf \bar\alpha_k \beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k). \ealigned \eequationNN \noindent \textbf{Case 3b:} If $\bar\alpha_k = \tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L + \Gamma} + \theta \beta_k^2 \leq \tfrac{\beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) - 4 \|c_k\|_1}{(\bar\tau_k L + \Gamma) \|\dbar_k\|_2^2}$, then \bequationNN \baligned &\ \phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k) - \phi(x_k, \bar\tau_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + 2\bar\alpha_k \|c_k\|_1 \\ &\quad + \thalf \bar\alpha_k (\bar\tau_k L + \Gamma) \(\tfrac{\beta_k \bar\xi_k \bar\tau_k}{\bar\tau_k L + \Gamma} + \theta\beta_k^2\) \|\dbar_k\|_2^2 + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + 2\bar\alpha_k \|c_k\|_1 \\ &\quad + \thalf \bar\alpha_k \beta_k (\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) - 4 \|c_k\|_1/\beta_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k) \\ \leq&\ -\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf \bar\alpha_k \beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k). \ealigned \eequationNN The result follows by combining the conclusions of all cases and subcases. \eproof Our next two lemmas provide useful relationships between deterministic (i.e., dependent on $g_k$) and stochastic (i.e., dependent on $\gbar_k$) quantities conditioned on the event that the algorithm has reached $x_k$ as the $k$th iterate. \blemma\label{lem.expectation} For all $k \in \N{}$, $\E_k[\dbar_k] = d_k$, $\E_k[\ubar_k] = u_k$, and $\E_k[\ybar_k] = y_k$. Moreover, there exists $\kappa_d \in \R{}_{>0}$, independent of $k$ and any run of the algorithm, with \bequationNN \E_k[\|\dbar_k - d_k\|_2] \leq \kappa_d \sqrt{M}. \eequationNN \elemma \bproof The first statement follows from the fact that, conditioned on the $k$th iterate being~$x_k$, the matrix on the left-hand side of \eqref{eq.system_stochastic} is deterministic and, under Assumptions~\ref{ass.deterministic} and \ref{ass.H}, it is invertible, along with the fact that expectation is a linear operator. For the second statement, notice that for any realization of $\gbar_k$, it follows that \bequationNN \bbmatrix \dbar_k - d_k \\ \ybar_k - y_k \ebmatrix = -\bbmatrix H_k & J_k^T \\ J_k & 0 \ebmatrix^{-1} \bbmatrix \gbar_k - g_k \\ 0 \ebmatrix \implies \|\dbar_k - d_k\|_2 \leq \kappa_d \|\gbar_k - g_k\|_2, \eequationNN where $\kappa_d \in \R{}_{>0}$ is an upper bound on the norm of the inverse matrix shown above, the existence of which, and independence from $k$, follows under Assumptions~\ref{ass.deterministic} and \ref{ass.H}. It also follows from Jensen's inequality, concavity of the square root, and Assumption~\ref{ass.g} that \bequationNN \E_k[\|\gbar_k - g_k\|_2] \leq \sqrt{\E_k[\|\gbar_k - g_k\|_2^2]} \leq \sqrt{M}.
\eequationNN Combined with the displayed inequality above, the desired conclusion follows. \eproof Relationships between inner products involving deterministic and stochastic quantities are the subject of the next lemma. \blemma\label{lem.product_bounds} For all $k \in \N{}$, it follows that \bequationNN g_k^Td_k \geq \E_k[\gbar_k^T\dbar_k] \geq g_k^Td_k - \zeta^{-1}M\ \ \text{and}\ \ d_k^TH_kd_k \leq \E_k[\dbar_k^TH_k\dbar_k]. \eequationNN \elemma \bproof From the first block equation in \eqref{eq.system_stochastic}, it follows that \bequationNN \baligned && H_k(Z_k\wbar_k + v_k) + J_k^T\ybar_k &= -\gbar_k \\ \implies && Z_k^TH_kZ_k\wbar_k &= -Z_k^T(\gbar_k + H_kv_k) \\ \iff && Z_k\wbar_k &= -Z_k(Z_k^TH_kZ_k)^{-1}Z_k^T(\gbar_k + H_kv_k), \ealigned \eequationNN from which it follows that \bequation\label{eq.bigD1} \gbar_k^T\ubar_k = \gbar_k^TZ_k\wbar_k = -\gbar_k^TZ_k(Z_k^TH_kZ_k)^{-1}Z_k^T(\gbar_k + H_kv_k). \eequation Following the same line of argument for \eqref{eq.system_deterministic}, it follows that \bequation\label{eq.bigD2} g_k^Tu_k = -g_k^TZ_k(Z_k^TH_kZ_k)^{-1}Z_k^T(g_k + H_kv_k). \eequation At the same time, under Assumptions~\ref{ass.H} and \ref{ass.g}, one finds that \bequation\label{eq.daniels_special_equation} \zeta^{-1}M \geq \E_k[\|Z_k^T(\gbar_k - g_k)\|_{(Z_k^TH_kZ_k)^{-1}}^2] \geq 0. \eequation One finds that the middle term in this expression can be written as \bequationNN \baligned &\ \E_k[\|Z_k^T(\gbar_k - g_k)\|_{(Z_k^TH_kZ_k)^{-1}}^2] \\ =&\ \E_k[\|Z_k^T\gbar_k\|_{(Z_k^TH_kZ_k)^{-1}}^2] - 2\E_k[\gbar_k^TZ_k(Z_k^TH_kZ_k)^{-1}Z_k^Tg_k] + \|Z_k^Tg_k\|_{(Z_k^TH_kZ_k)^{-1}}^2 \\ =&\ \E_k[\|Z_k^T\gbar_k\|_{(Z_k^TH_kZ_k)^{-1}}^2] - \|Z_k^Tg_k\|_{(Z_k^TH_kZ_k)^{-1}}^2. \ealigned \eequationNN Hence, combining \eqref{eq.bigD1}, \eqref{eq.bigD2}, \eqref{eq.daniels_special_equation}, and the fact that $\E_k[\gbar_k] = g_k$, one finds \bequationNN \baligned g_k^Tu_k - \E_k[\gbar_k^T\ubar_k] &= -g_k^TZ_k(Z_k^TH_kZ_k)^{-1}Z_k^T(g_k + H_kv_k) \\ &\qquad + \E_k[\gbar_k^TZ_k(Z_k^TH_kZ_k)^{-1}Z_k^T(\gbar_k + H_kv_k)] \\ &= -\|Z_k^Tg_k\|_{(Z_k^TH_kZ_k)^{-1}}^2 + \E_k[\|Z_k^T\gbar_k\|_{(Z_k^TH_kZ_k)^{-1}}^2] \in [0,\zeta^{-1}M]. \ealigned \eequationNN The first desired result follows from this fact, $\E_k[\gbar_k^Tv_k] = g_k^Tv_k$, and \bequationNN g_k^Td_k - \E_k[\gbar_k^T\dbar_k] = g_k^Tu_k + g_k^Tv_k - \E_k[\gbar_k^T\ubar_k + \gbar_k^Tv_k] = g_k^Tu_k - \E_k[\gbar_k^T\ubar_k]. \eequationNN Now let us prove the second desired conclusion. From \eqref{eq.system_stochastic}, it follows that \bequationNN \baligned && H_k(\ubar_k + v_k) &= -\gbar_k - J_k^T\ybar_k \\ \implies && (\ubar_k + v_k)^TH_k(\ubar_k + v_k) &= -\gbar_k^T(\ubar_k + v_k) - \ybar_k^TJ_k\dbar_k \\ && &= -\gbar_k^T(\ubar_k + v_k) + \ybar_k^Tc_k. \ealigned \eequationNN Following the same argument for \eqref{eq.system_deterministic}, it follows that \bequationNN (u_k + v_k)^TH_k(u_k + v_k) = -g_k^T(u_k + v_k) + y_k^Tc_k. \eequationNN Combining these facts, it follows that \bequationNN \baligned &\ \ubar_k^TH_k\ubar_k + 2\ubar_k^TH_kv_k - u_k^TH_ku_k - 2u_k^TH_kv_k \\ =&\ -\gbar_k^T(\ubar_k + v_k) + g_k^T(u_k + v_k) + (\ybar_k - y_k)^Tc_k, \ealigned \eequationNN which after taking conditional expectation and using Lemma~\ref{lem.expectation} yields \bequationNN \E_k[\ubar_k^TH_k\ubar_k] - u_k^TH_ku_k = -\E_k[\gbar_k^T\ubar_k] + g_k^Tu_k.
\eequationNN The desired conclusion now follows since \bequationNN \baligned \E_k[\dbar_k^TH_k\dbar_k] - d_k^TH_kd_k &= \E_k[(\ubar_k + v_k)^TH_k(\ubar_k + v_k)] - (u_k + v_k)^TH_k(u_k + v_k) \\ &= \E_k[\ubar_k^TH_k\ubar_k] - u_k^TH_ku_k, \ealigned \eequationNN where again we have used the result of Lemma~\ref{lem.expectation}. \eproof In the remainder of our convergence analysis, we consider three cases depending on the behavior of the sequence $\{\bar\tau_k\}$ in a run of the algorithm. In the deterministic setting, it was proved that the merit parameter sequence eventually remains constant at a value that is sufficiently small to ensure that a primal-dual stationarity measure vanishes (see Lemma~\ref{lem.tau_bound}). However, under only Assumption~\ref{ass.g}, it is not possible to prove that such behavior is guaranteed for any possible run of Algorithm~\ref{alg.sqp_stochastic}. Our analysis considers three mutually exclusive and exhaustive events: event $E_{\tau,small}$ that the merit parameter sequence eventually remains constant at a sufficiently small positive value; event $E_{\tau,0}$ that the merit parameter sequence vanishes; and event $E_{\tau,big}$ that the merit parameter sequence eventually remains constant, but at a value that is not sufficiently small. Under modest assumptions, we prove that $E_{\tau,big}$ occurs with probability zero, and under slightly stronger, but reasonably pragmatic assumptions, we prove that event $E_{\tau,0}$ either does not occur or only occurs in extreme circumstances (e.g., divergence in norm of a subsequence of the stochastic gradient estimates). This leaves event $E_{\tau,small}$, which we consider first; we show that, conditioned on this event, convergence comparable to that of the deterministic setting is achieved in expectation. \subsubsection{Constant, Sufficiently Small Merit Parameter}\label{sec.constant_tau_small} Let us first consider the behavior of the algorithm conditioned on the event that the merit parameter sequence eventually remains constant at a sufficiently small value. In particular, recalling Lemma~\ref{lem.xi}, let us now make the following assumption. \bassumption\label{ass.g_conditioned} Event $E_{\tau,small}$ occurs in the sense that there exist an iteration number $\kbar_{\tau,\xi} \in \N{}$ and a merit parameter value $\bar\tau_{\min} \in \R{}_{>0}$ such that \bequation\label{eq.tau_small} \bar\tau_k = \bar\tau_{\min} \leq \tau_k^{trial}\ \ \text{and}\ \ \bar\xi_k = \bar\xi_{\min}\ \ \text{for all}\ \ k \geq \kbar_{\tau,\xi}. \eequation In addition, the stochastic gradient sequence $\{\gbar_k\}_{k \geq \kbar_{\tau,\xi}}$ satisfies \bequationNN \E_{k,\tau,small}[\gbar_k] = g_k\ \ \text{and}\ \ \E_{k,\tau,small}[\|\gbar_k - g_k\|_2^2] \leq M, \eequationNN where $\E_{k,\tau,small}$ denotes expectation with respect to the distribution of $\omega$ conditioned on the event that $E_{\tau,small}$ occurs and the algorithm has reached $x_k$ in iteration $k \in \N{}$. \eassumption The inequality $\bar\tau_k \leq \tau_k^{trial}$ in \eqref{eq.tau_small} is critical since it ensures that the model reduction value $\Delta q(x_k,\bar\tau_{\min},g_k,H_k,d_k)$ satisfies the result of Lemma~\ref{lem.Psi} for all $k \geq \kbar_{\tau,\xi}$ with $\bar\tau_{\min}$ in place of $\tau_k$.
In other words, the merit parameter has become small enough that, if one were to compute the \emph{deterministic} search direction $d_k$ using the \emph{true} gradient $g_k$ at $x_k$, then one would find that it is a direction of sufficient descent for the merit function $\phi(\cdot,\bar\tau_{\min})$ at $x_k$. The importance of this becomes clear in our final results at the end of this part of our analysis. The latter part of the assumption reaffirms the properties of the stochastic gradient estimates stated in Assumption~\ref{ass.g}, now conditioned on the occurrence of $E_{\tau,small}$. With this assumption, the results of Lemmas~\ref{lem.expectation} and \ref{lem.product_bounds} continue to hold. For the sake of brevity, for the rest of this part of our analysis (\S\ref{sec.constant_tau_small}), let us redefine $\E_k[\ \cdot\ ] \equiv \E_{k,\tau,small}[\ \cdot\ ]$. To derive our main result for this case, our goal is to prove upper bounds in expectation for the positive terms on the right-hand side of the conclusion of Lemma~\ref{lem.key_decrease}. Let us first consider the last term, which is addressed in our next lemma. \blemma\label{lem.alphagd} Suppose that Assumption~\ref{ass.g_conditioned} holds. Let $\kappa_g \in \R{}_{>0}$ be an upper bound for $\{\|g_k\|_2\}$, the existence of which follows under Assumption~\ref{ass.deterministic}. It follows, with $\kappa_d \in \R{}_{>0}$ from Lemma~\ref{lem.expectation} and any $k \geq \kbar_{\tau,\xi}$, that \bequationNN \E_k[\bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k)] \leq \beta_k^2 \theta \bar\tau_{\min} \kappa_g \kappa_d \sqrt{M}. \eequationNN \elemma \bproof For all $k \geq \kbar_{\tau,\xi}$, let $E_k$ be the event that $g_k^T (\dbar_k - d_k) \geq 0$ and let $E_k^c$ be the event that $g_k^T (\dbar_k - d_k) < 0$. Let $\P_k[\cdot]$ denote probability conditioned on the event that $E_{\tau,small}$ occurs and the algorithm has reached $x_k$ in iteration $k$. By the Law of Total Expectation, \eqref{eq.tau_small}, and Lemma~\ref{lem.beta_control}, it follows for $k \geq \kbar_{\tau,\xi}$ that \bequationNN \baligned &\ \E_k[\bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k)] \\ =&\ \E_k[\bar\alpha_k \bar\tau_{\min} g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k] + \E_k[\bar\alpha_k \bar\tau_{\min} g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c] \\ \leq&\ \bar\alpha_{k,\max} \bar\tau_{\min} \E_k[g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k] + \bar\alpha_{k,\min} \bar\tau_{\min} \E_k[ g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c].
\ealigned \eequationNN By Lemma~\ref{lem.expectation}, this means on one hand that \bequationNN \baligned &\ \E_k[\bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k)] \\ \leq&\ \bar\alpha_{k,\min} \bar\tau_{\min} \E_k[g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k] + \bar\alpha_{k,\min} \bar\tau_{\min} \E_k[ g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c] \\ +&\ (\bar\alpha_{k,\max} - \bar\alpha_{k,\min}) \bar\tau_{\min} \E_k[g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k] \\ =&\ (\bar\alpha_{k,\max} - \bar\alpha_{k,\min}) \bar\tau_{\min} \E_k[g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k], \ealigned \eequationNN while on the other hand that \bequationNN \baligned &\ \E_k[\bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k)] \\ \leq&\ \bar\alpha_{k,\max} \bar\tau_{\min} \E_k[g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k] + \bar\alpha_{k,\max} \bar\tau_{\min} \E_k[ g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c] \\ +&\ (\bar\alpha_{k,\min} - \bar\alpha_{k,\max}) \bar\tau_{\min} \E_k[g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c] \\ =&\ (\bar\alpha_{k,\min} - \bar\alpha_{k,\max}) \bar\tau_{\min} \E_k[g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c]. \ealigned \eequationNN Combining these facts, it follows that \bequationNN \baligned &\ \E_k[\bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k)] \\ \leq&\ \thalf (\bar\alpha_{k,\max} - \bar\alpha_{k,\min}) \bar\tau_{\min} (\E_k[g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k] - \E_k[g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c]). \ealigned \eequationNN Observe that, by the Law of Total Expectation, it follows that \bequationNN \baligned \E_k[g_k^T (\dbar_k - d_k) | E_k] \P_k[E_k] \leq&\ \E_k[\|g_k\|_2 \|\dbar_k - d_k\|_2 | E_k] \P_k[E_k] \\ =&\ \E_k[\|g_k\|_2 \|\dbar_k - d_k\|_2] - \E_k[\|g_k\|_2 \|\dbar_k - d_k\|_2 | E_k^c] \P_k[E_k^c] \\ \leq&\ \|g_k\|_2 \E_k[\|\dbar_k - d_k\|_2], \ealigned \eequationNN and, in a similar manner, \bequationNN \baligned -\E_k[g_k^T (\dbar_k - d_k) | E_k^c] \P_k[E_k^c] \leq&\ \E_k[\|g_k\|_2 \|\dbar_k - d_k\|_2 | E_k^c] \P_k[E_k^c] \\ =&\ \E_k[\|g_k\|_2 \|\dbar_k - d_k\|_2] - \E_k[\|g_k\|_2 \|\dbar_k - d_k\|_2 | E_k] \P_k[E_k] \\ \leq&\ \|g_k\|_2 \E_k[\|\dbar_k - d_k\|_2]. \ealigned \eequationNN Combining these results with Lemmas~\ref{lem.beta_control} and \ref{lem.expectation} yields the result. \eproof Our next result addresses the middle term on the right-hand side of Lemma~\ref{lem.key_decrease}. \blemma\label{lem.Deltaq} Suppose Assumption~\ref{ass.g_conditioned} holds. Then, for all $k \geq \kbar_{\tau,\xi}$, it follows that \bequationNN \E_k[\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)] \leq \Delta q(x_k,\bar\tau_{\min},g_k,H_k,d_k) + \bar\tau_{\min} \zeta^{-1} M. \eequationNN \elemma \bproof Consider arbitrary $k \geq \kbar_{\tau,\xi}$. From \eqref{def.merit_model_reduction}, \eqref{eq.tau_small}, Lemma~\ref{lem.product_bounds}, Jensen's inequality, and convexity of $\max\{\cdot,0\}$, it follows that \bequationNN \baligned \E_k[\Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k)] &= \E_k[-\bar\tau_{\min}(\gbar_k^T\dbar_k + \thalf \max \{\dbar_k^TH_k\dbar_k,0\}) + \|c_k\|_1] \\ &\leq -\bar\tau_{\min}(g_k^Td_k + \thalf \max\{d_k^TH_kd_k,0\}) + \bar\tau_{\min} \zeta^{-1}M + \|c_k\|_1 \\ &= \Delta q(x_k,\bar\tau_{\min},g_k,H_k,d_k) + \bar\tau_{\min} \zeta^{-1} M, \ealigned \eequationNN as desired. \eproof We now prove our main theorem for this part of our analysis, where we define \bequationNN \E_{\tau,small} [\ \cdot\ ] = \E[\ \cdot\ |\ \text{Assumption~\ref{ass.g_conditioned} holds}\ ].
\eequationNN The theorem considers the behavior of a certain sequence of model reduction values, and the subsequent corollary translates the result of the theorem to the behavior of the sequences of constraint violations and stationarity measures. \btheorem\label{th.stochastic_tau_constant_small} Suppose that Assumption~\ref{ass.g_conditioned} holds and the sequence $\{\beta_k\}$ is chosen such that $\beta_k \bar\xi_k \bar\tau_k/(\bar\tau_k L + \Gamma) \in (0,1]$ for all $k \geq \kbar_{\tau,\xi}$. Define \bequationNN \Abar := \tfrac{\bar\xi_{\min} \bar\tau_{\min}}{\bar\tau_{\min} L + \Gamma}\ \ \text{and}\ \ \Mbar := \bar\tau_{\min} \big(\thalf (\Abar + \theta) \zeta^{-1} M + \theta \kappa_g \kappa_d \sqrt{M}\big). \eequationNN If $\beta_k = \beta \in (0,2\Abar/(\Abar + \theta))$ for all $k \geq \kbar_{\tau,\xi}$, then \bequation\label{eq.beta_fixed} \baligned &\ \E_{\tau,small}\left[\tfrac{1}{k+1} \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \Delta q(x_j,\bar\tau_{\min},g_j,H_j,d_j) \right] \\ \leq&\ \tfrac{\beta \Mbar}{\Abar - \thalf(\Abar + \theta)\beta} + \tfrac{\E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}},\bar\tau_{\min})] - \phi_{\min}}{(k+1) \beta(\Abar - \thalf(\Abar + \theta)\beta)} \xrightarrow{k\to\infty} \tfrac{\beta \Mbar}{\Abar - \thalf(\Abar + \theta)\beta}, \ealigned \eequation where $\phi_{\min} \in \R{}$ is a lower bound for $\phi(\cdot,\bar\tau_{\min})$ over $\Xcal$, the existence of which follows by Assumption~\ref{ass.deterministic}. On the other hand, if $\sum_{k=\kbar_{\tau,\xi}}^\infty \beta_k = \infty$ and $\sum_{k=\kbar_{\tau,\xi}}^\infty \beta_k^2 < \infty$, then \bequation\label{eq.beta_diminishing} \lim_{k \to \infty} \E_{\tau,small}\left[ \tfrac{1}{\(\sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j\)} \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j\Delta q(x_j,\bar\tau_{\min},g_j,H_j,d_j) \right] = 0. \eequation \etheorem \bproof Consider arbitrary $k \geq \kbar_{\tau,\xi}$. It follows from the definition of $\Abar$, Lemma~\ref{lem.beta_control}, and the fact that $\beta_k \in (0,1]$ that $\Abar \beta_k \leq \bar\alpha_k \leq (\Abar + \theta) \beta_k$. Hence, it follows from Lemmas~\ref{lem.deterministic_to_stochastic}(d), \ref{lem.key_decrease}, \ref{lem.alphagd}, and \ref{lem.Deltaq} that, under the conditions of the theorem, \bequationNN \baligned &\ \E_k[\phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_k)] - \E_k[\phi(x_k, \bar\tau_k)] \\ \leq&\ \E_k[-\bar\alpha_k \Delta q(x_k,\bar\tau_k,g_k,H_k,d_k) + \thalf \bar\alpha_k \beta_k \Delta q(x_k,\bar\tau_k,\gbar_k,H_k,\dbar_k) + \bar\alpha_k \bar\tau_k g_k^T (\dbar_k - d_k)] \\ \leq&\ -\beta_k \big(\Abar - \thalf (\Abar + \theta) \beta_k\big) \Delta q(x_k,\bar\tau_{\min},g_k,H_k,d_k) + \beta_k^2 \Mbar. \ealigned \eequationNN For the scenario of $\{\beta_k\}$ being a constant sequence for $k \geq \kbar_{\tau,\xi}$, one finds from above, taking total expectation conditioned on \eqref{eq.tau_small}, that, for all $k \geq \kbar_{\tau,\xi}$, \begin{multline*} \E_{\tau,small}[\phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_{\min})] - \E_{\tau,small}[\phi(x_k, \bar\tau_{\min})] \\ \leq - \beta(\Abar - \thalf(\Abar + \theta)\beta) \E_{\tau,small}[\Delta q(x_k,\bar\tau_{\min},g_k,H_k,d_k)] + \beta^2 \Mbar.
\end{multline*} Summing this inequality for $j \in \{\kbar_{\tau,\xi},\dots,\kbar_{\tau,\xi} + k\}$, one finds by Assumption~\ref{ass.deterministic} that \bequationNN \baligned &\ \phi_{\min} - \E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}},\bar\tau_{\min})] \\ \leq&\ \E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}+k+1},\bar\tau_{\min})] - \E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}},\bar\tau_{\min})] \\ \leq&\ -\beta(\Abar - \thalf(\Abar + \theta)\beta) \E_{\tau,small}\left[\sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \Delta q(x_j,\bar\tau_{\min},g_j,H_j,d_j) \right] + (k+1) \beta^2 \Mbar, \ealigned \eequationNN from which \eqref{eq.beta_fixed} follows. Now consider the scenario of $\{\beta_k\}$ diminishing as described. It follows that for sufficiently large $k \geq \kbar_{\tau,\xi}$ one finds $\beta_k \leq \Abar/(\Abar + \theta)$; hence, let us assume without loss of generality that, for all $k \geq \kbar_{\tau,\xi}$, one has $\beta_k \leq \Abar/(\Abar + \theta)$, which implies $\Abar - \thalf(\Abar + \theta)\beta_k \geq \thalf \Abar$. Similar to above, it follows for all $k \geq \kbar_{\tau,\xi}$ that \begin{multline*} \E_{\tau,small}[\phi(x_k + \bar\alpha_k \dbar_k, \bar\tau_{\min})] - \E_{\tau,small}[\phi(x_k, \bar\tau_{\min})] \\ \leq - \thalf \Abar \beta_k \E_{\tau,small}[\Delta q(x_k,\bar\tau_{\min},g_k,H_k,d_k)] + \beta_k^2 \Mbar. \end{multline*} Summing this inequality for $j \in \{\kbar_{\tau,\xi},\dots,\kbar_{\tau,\xi} + k\}$, one finds by Assumption~\ref{ass.deterministic} that \bequationNN \baligned &\ \phi_{\min} - \E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}},\bar\tau_{\min})] \\ \leq&\ \E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}+k+1},\bar\tau_{\min})] - \E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}},\bar\tau_{\min})] \\ \leq&\ -\thalf \Abar \E_{\tau,small}\left[ \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j \Delta q(x_j,\bar\tau_{\min},g_j,H_j,d_j)\right] + \Mbar \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j^2. \ealigned \eequationNN Rearranging this inequality yields \begin{multline*} \E_{\tau,small} \left[ \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j \Delta q(x_j,\bar\tau_{\min},g_j,H_j,d_j)\right] \\ \leq \tfrac{2(\E_{\tau,small}[\phi(x_{\kbar_{\tau,\xi}},\bar\tau_{\min})] - \phi_{\min})}{\Abar} + \tfrac{2\Mbar}{\Abar} \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j^2, \end{multline*} from which \eqref{eq.beta_diminishing} follows. \eproof \bcorollary\label{cor.stochastic_tau_finite_large} Under the conditions of Theorem~\ref{th.stochastic_tau_constant_small}, the following hold true. \benumerate \item[(a)] If $\beta_k = \beta \in (0,2\Abar/(\Abar + \theta))$ for all $k \geq \kbar_{\tau,\xi}$, then \bequationNN \baligned &\ \E_{\tau,small}\left[\tfrac{1}{k+1} \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \(\tfrac{\|g_j + J_j^Ty_j\|_2^2}{\kappa_H^2} + \|c_j\|_2\) \right] \xrightarrow{k\to\infty} \tfrac{2\kappa_\Psi \beta \Mbar}{\kappa_q \bar\tau_{\min} (\Abar - \thalf(\Abar + \theta)\beta)}. \ealigned \eequationNN \item[(b)] If $\sum_{k=\kbar_{\tau,\xi}}^\infty \beta_k = \infty$ and $\sum_{k=\kbar_{\tau,\xi}}^\infty \beta_k^2 < \infty$, then \bequationNN \lim_{k\to\infty}\E_{\tau,small}\left[ \tfrac{1}{\(\sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j\)} \sum_{j=\kbar_{\tau,\xi}}^{\kbar_{\tau,\xi}+k} \beta_j\(\tfrac{\|g_j + J_j^Ty_j\|_2^2}{\kappa_H^2} + \|c_j\|_2\) \right] = 0, \eequationNN from which it follows that \bequationNN \liminf_{k\to\infty}\ \E_{\tau,small} [\kappa_H^{-2} \|g_k + J_k^Ty_k\|_2^2 + \|c_k\|_2] = 0. 
\eequationNN \eenumerate In addition, in either case, there exists $\delta_x \in \R{}_{>0}$ such that if $\|x_k - x_*\|_2 \leq \delta_x$ for some stationary point $(x_*,y_*) \in \R{n} \times \R{m}$ for \eqref{prob.f_nonlinear_stochastic}, then for any $\delta_g \in \R{}_{>0}$ one finds \bequationNN \left\|\bbmatrix\gbar_k - \nabla f(x_*) \\ c_k \ebmatrix\right\|_2 \leq \delta_g\ \ \implies\ \ \|\ybar_k - y_*\|_2 \leq 2\delta_g. \eequationNN \ecorollary \bproof Parts (a) and (b) follow by combining the results of Lemmas~\ref{lem.Psi_1} and \ref{lem.Psi}, the relation \eqref{eq.gJy}, and Theorem~\ref{th.stochastic_tau_constant_small}. The remainder follows with Lemma~\ref{lem.stationary} since for $x_k$ sufficiently close to $x_*$, one obtains with $g_* := \nabla f(x_*)$ and $c_* := c(x_*) = 0$ that \bequationNN \|\ybar_k - y_*\|_2 \leq \left\|\bbmatrix H_k & J_k^T \\ J_k & 0 \ebmatrix^{-1} \bbmatrix \gbar_k \\ c_k \ebmatrix - \bbmatrix H_k & J_*^T \\ J_* & 0 \ebmatrix^{-1} \bbmatrix g_* \\ c_* \ebmatrix \right\|_2 \leq 2 \left\|\bbmatrix\gbar_k - g_* \\ c_k\ebmatrix\right\|_2, \eequationNN from which the desired conclusion follows. \eproof We close our analysis of this case with the following remark. \begin{remark} Consideration of the conclusion of Corollary~\ref{cor.stochastic_tau_finite_large}(a) reveals the close relationship between our result and a conclusion that one reaches for a stochastic (sub)gradient method in an unconstrained setting. Notice that \bequationNN \tfrac{2\kappa_\Psi \beta \Mbar}{\kappa_q \bar\tau_{\min} (\Abar - \thalf(\Abar + \theta)\beta)} = \tfrac{2\kappa_\Psi \beta \bar\tau_{\min} \(\thalf \(\tfrac{\bar\xi_{\min} \bar\tau_{\min}}{\bar\tau_{\min} L + \Gamma} + \theta\) \zeta^{-1} M + \theta \kappa_g \kappa_d \sqrt{M}\)}{\kappa_q \bar\tau_{\min} \(\tfrac{\bar\xi_{\min} \bar\tau_{\min}}{\bar\tau_{\min} L + \Gamma} - \thalf\(\tfrac{\bar\xi_{\min} \bar\tau_{\min}}{\bar\tau_{\min} L + \Gamma} + \theta\)\beta\)}. \eequationNN Our first observation is a common one for the unconstrained setting: The value above is increasing in~$\beta$. To reduce this value, one should choose smaller~$\beta$, but the downside of choosing smaller~$\beta$ is that the algorithm takes shorter steps, meaning that it takes longer for this limiting value to be approached $($recall~\eqref{eq.beta_fixed}$)$. On the other hand, while larger $\beta$ means that the algorithm takes larger steps, this comes at the cost of a larger limiting value. A second observation, unique to our algorithm, is the influence of $\theta$. The quantity above is increasing in $\theta$, meaning that the optimal choice in terms of reducing this value is $\theta=0$, in which case one obtains \bequationNN \tfrac{2\kappa_\Psi \beta \Mbar}{\kappa_q \bar\tau_{\min} (\Abar - \thalf(\Abar + \theta)\beta)} \xrightarrow{\theta \to 0} \tfrac{\kappa_\Psi \beta \zeta^{-1} M}{\kappa_q(1 - \thalf \beta)}. \eequationNN However, this results in a non-adaptive algorithm with $\bar\alpha_k = \beta_k\bar\xi_k\bar\tau_k/(\bar\tau_k L + \Gamma)$ for all $k \in \N{}$. This choice has some theoretical benefits $($see also our discussion in \S\ref{sec.conclusion}$)$, but we have found it to be detrimental in practice.
\end{remark} \subsubsection{Poor Merit Parameter Behavior} Theorem~\ref{th.stochastic_tau_constant_small} and Corollary~\ref{cor.stochastic_tau_finite_large} show desirable convergence properties in expectation of Algorithm~\ref{alg.sqp_stochastic} in the event that the merit parameter sequence eventually remains constant at a value that is sufficiently small. This captures behavior similar to that of Algorithm~\ref{alg.sqp_adaptive} in the deterministic setting, in which the merit parameter is \emph{guaranteed} to behave in this manner. However, for the stochastic Algorithm~\ref{alg.sqp_stochastic}, one of two other events is possible, which we now define mathematically as follows: \bitemize \item Event $E_{\tau,big}$: there exist an infinite index set $\overline\Kcal_\tau \subseteq \N{}$ and $\bar\tau_{big} \in \R{}_{>0}$ such that \bequationNN \bar\tau_k = \bar\tau_{big} > \tau_k^{trial}\ \ \text{and}\ \ \bar\xi_k = \bar\xi_{\min}\ \ \text{for all}\ \ k \in \overline\Kcal_\tau. \eequationNN Since $\bar\tau_k^{trial} \geq \bar\tau_k$ for all $k \in \N{}$, this means $\bar\tau_k^{trial} > \tau_k^{trial}$ for all $k \in \overline\Kcal_\tau$. \item Event $E_{\tau,0}$: $\{\bar\tau_k\} \searrow 0$. \eitemize Our goal in this part of our analysis is to argue that these events, exhibiting what we refer to as poor behavior of the merit parameter sequence, are either impossible or can only occur in extreme circumstances in practice. For these considerations, let us return to assuming that Assumption~\ref{ass.g} (not Assumption~\ref{ass.g_conditioned}) holds. Let us first consider event $E_{\tau,big}$. We show under a modest assumption that this event occurs with probability zero, which is to say that the merit parameter eventually becomes sufficiently small with probability one. As shown above in the definition of $E_{\tau,big}$, the merit parameter remaining too large requires that the stochastic trial value $\bar\tau_k^{trial}$ \emph{consistently} overestimates the deterministic trial value $\tau_k^{trial}$. The following proposition shows that under a modest assumption about the behavior of the stochastic gradients and corresponding search directions, this behavior occurs with probability zero. The subsequent example then shows that our modest assumption holds for an archetypal distribution of the stochastic gradients. \begin{proposition}\label{prop.p} If there exists $p \in (0,1]$ such that, for all $k \in \N{}$, \bequationNN \P_k[\gbar_k^T\dbar_k + \max\{\dbar_k^TH_k\dbar_k,0\} \geq g_k^Td_k + \max\{d_k^TH_kd_k,0\}] \geq p, \eequationNN then $E_{\tau,big}$ occurs with probability zero. \end{proposition} \bproof If, in any run of the algorithm, $g_k^Td_k + \max\{d_k^TH_kd_k,0\} \leq 0$ for all sufficiently large $k \in \N{}$, then $\tau_k^{trial} = \infty$ for all sufficiently large $k \in \N{}$ and event $E_{\tau,big}$ does not occur. Hence, let us define $\Kcal_{gd} \subseteq \N{}$ as the set of indices such that $k \in \Kcal_{gd}$ if and only if $g_k^Td_k + \max\{d_k^TH_kd_k,0\} > 0$, and let us restrict attention to runs in which $\Kcal_{gd}$ is infinite. For any $k \in \Kcal_{gd}$, it follows that the inequality $\gbar_k^T\dbar_k + \max\{\dbar_k^TH_k\dbar_k,0\} \geq g_k^Td_k + \max\{d_k^TH_kd_k,0\}$ holds if and only if \bequationNN \bar\tau_k^{trial} = \tfrac{(1-\sigma)\|c_k\|_1}{\gbar_k^T\dbar_k + \max\{\dbar_k^TH_k\dbar_k,0\}} \leq \tfrac{(1-\sigma)\|c_k\|_1}{g_k^Td_k + \max\{d_k^TH_kd_k,0\}} = \tau_k^{trial}.
\eequationNN Hence, it follows from the conditions of the proposition, the fact that $\bar\tau_k \leq \bar\tau_k^{trial}$ for all $k \in \N{}$, and the fact that $\Kcal_{gd}$ is infinite, that for any $k \in \N{}$ the probability is one that some subsequent iteration number $\khat \geq k$ satisfies $\bar\tau_{\khat} \leq \bar\tau_{\khat}^{trial} \leq \tau_{\khat}^{trial}$. This, the fact that Lemma~\ref{lem.tau_bound} implies that $\{\tau_k^{trial}\}$ is bounded away from zero, and the fact that whenever the merit parameter is decreased it is decreased by at least a constant factor, shows that one has $\bar\tau_k \leq \tau_k^{trial}$ for all sufficiently large $k \in \N{}$ with probability one. \eproof As a concrete example of a setting that provides the minimum probability required in Proposition~\ref{prop.p}, we offer the following. This is clearly only one of many example situations that one could consider to mimic real-world scenarios. \begin{example} If, for all $k \in \N{}$, one has $H_k \succ 0$ and $\gbar_k \sim \Ncal(g_k,\Sigma_k)$ for some $\Sigma_k \in \mathbb{S}^n$ with $\Sigma_k \succ 0$, then the condition in Proposition~\ref{prop.p} holds with $p = \thalf$. \end{example} \bproof Let $k \in \N{}$ be arbitrary. Since $H_k \succ 0$, one has $\max\{\dbar_k^TH_k\dbar_k,0\} = \dbar_k^TH_k\dbar_k$ and $\max\{d_k^TH_kd_k,0\} = d_k^TH_kd_k$, so it suffices to consider the quantity $\gbar_k^T\dbar_k + \dbar_k^TH_k\dbar_k$. The tangential component of the search direction is $\ubar_k = Z_k\wbar_k$, where, under Assumption~\ref{ass.H} and the stated conditions, $\wbar_k = -(Z_k^TH_kZ_k)^{-1}Z_k^T(\gbar_k + H_kv_k)$. Plugging in this solution and simplifying yields \bequationNN \gbar_k^T\dbar_k + \dbar_k^TH_k\dbar_k = v_k^TH_k^{1/2}(I - H_k^{1/2}Z_k(Z_k^TH_kZ_k)^{-1}Z_k^TH_k^{1/2})(H_k^{-1/2}\gbar_k + H_k^{1/2}v_k). \eequationNN Since $\gbar_k$ is normally distributed with mean $g_k$, it follows that this value is normally distributed with mean of the same form, but with $g_k$ in place of $\gbar_k$ (see, e.g., \cite{Tong12}). Since a normally distributed random variable takes values greater than or equal to its expected value with probability $\thalf$, the conclusion follows. \eproof Let us now consider the event $E_{\tau,0}$. One can learn from Lemmas~\ref{lem.bound_u} and \ref{lem.tau_bound} from the deterministic setting that the following holds true. \begin{proposition}\label{lem.tau_bound_stochastic} Consider an arbitrary constant $g_{\max} \in \R{}_{>0}$. If, for a run of Algorithm~\ref{alg.sqp_stochastic}, the stochastic gradient estimates satisfy $\|\gbar_k - g_k\|_2 \leq g_{\max}$ for all $k \in \N{}$, then the sequence of tangential step components $\{\ubar_k\}$ is bounded, and there exist $\kbar_\tau \in \N{}$ and $\bar\tau_{\min} \in \R{}_{>0}$ such that $\bar\tau_k = \bar\tau_{\min}$ for all $k \geq \kbar_\tau$. \end{proposition} \bproof Boundedness in norm of the tangential step components follows in the same manner as in Lemma~\ref{lem.bound_u} with $(\gbar_k,\ubar_k)$ in place of $(g_k,u_k)$. Further, the claimed behavior of the merit parameter sequence follows in the same manner as in the proof of Lemma~\ref{lem.tau_bound} using $(\gbar_k,\dbar_k,\ubar_k)$ in place of $(g_k,d_k,u_k)$, where in place of the constants $(\kappa_{\tau,1},\kappa_{\tau,2})$ one derives constants $(\bar\kappa_{\tau,1},\bar\kappa_{\tau,2})$ whose values depend on $g_{\max}$ as well as the upper bound on the sequence $\{\|g_k\|_2\}$ (under Assumption~\ref{ass.deterministic}).
\eproof By Proposition~\ref{lem.tau_bound_stochastic}, if the differences between the stochastic gradient estimates and true gradients are bounded in norm, then the merit parameter sequence will not vanish, i.e., event $E_{\tau,0}$ will not occur. This is guaranteed if the distributions defining the stochastic gradients $\{\gbar_k\}$ ensure uniform boundedness or, e.g., if \bequationNN f(x) = \tfrac{1}{N} \sum_{i=1}^N f_i(x)\ \ \text{and}\ \ \gbar_k := \nabla f_{i_k}(x_k)\ \ \text{for all}\ \ k \in \N{}, \eequationNN where the component functions $\{f_i\}$ have bounded derivatives over a set containing the iterates and in each iteration $i_k$ is randomly sampled uniformly from $\{1,\dots,N\}$. \section{Numerical Results}\label{sec.numerical} In this section, we demonstrate the empirical performance of our proposed Algorithm~\ref{alg.sqp_adaptive} (for the deterministic setting) and Algorithm~\ref{alg.sqp_stochastic} (for the stochastic setting) using Matlab implementations. We consider their performance on a subset of the equality constrained problems from the CUTE collection \cite{BongConnGoulToin95}. Specifically, of the 123 such problems in the set, we selected those for which $(i)$ $f$ is \emph{not} a constant function, $(ii)$~$n+m \leq 1000$, and $(iii)$ the LICQ held at all iterates in all runs of all algorithms that we ran. This selection resulted in a total of 49 problems. Each problem comes with an initial point, which we used in our experiments. \subsection{Deterministic Setting} Our goal in this setting is to demonstrate that, in practice, our proposed Algorithm~\ref{alg.sqp_adaptive} (``SQP Adaptive'') is as reliable a method as the state-of-the-art Algorithm~\ref{alg.sqp_line_search} (``SQP Backtracking''). We do not claim that ``SQP Adaptive'' is as efficient as ``SQP Backtracking'' since, as has been verified by others in the literature, the line search scheme is very effective across a broad range of problems. That said, since our algorithm for the stochastic setting is based on ``SQP Adaptive,'' it is at least of interest to demonstrate that this approach is as reliable as ``SQP Backtracking'' in practice. For these experiments, we chose each $H_k$ to be the Hessian of the Lagrangian at $(x_k,y_{k-1})$. For both algorithms, for any $k$ such that the inertia of the matrix in \eqref{eq.system_deterministic} is not correct with this choice, a multiple of the identity is added in an iterative manner until the correct inertia is attained. This is a common strategy in state-of-the-art constrained optimization software; see, e.g., \cite{WaecBieg06}. For our experiments, the parameters were set as: $\tau_{-1} = 1$, $\epsilon = 10^{-6}$, $\sigma = 1/2$, $\eta = 10^{-4}$, $\rho = 3$, $L_{-1} = 1$, $\gamma_{-1,i} = 1$, $\nu=1/2$, and $\alpha = 1$. In Line~\ref{step.initialize} of Algorithm \ref{alg.sqp_adaptive}, all Lipschitz constant estimates were set as $1/2$ times the estimates from the previous iteration. A run terminated with a message of success if iteration $k \leq 10^4$ yielded \bequationNN \|g_k + J_k^Ty_k\|_\infty \leq 10^{-6} \max\{1,\|g_0 + J_0^Ty_0\|_\infty\}\ \ \text{and}\ \ \|c_k\|_\infty \leq 10^{-6} \max\{1,\|c_0\|_\infty\}; \eequationNN otherwise, the run was considered a failure. Figure \ref{fig.perf_deterministic} provides Dolan-Mor\'e performance profiles \cite{DolaMore02} for iterations and function evaluations required by the two methods. (The profiles are capped at $t = 20$.) 
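For readers unfamiliar with such profiles, the following minimal sketch (our own naming, not the script used for our experiments) shows the standard construction from \cite{DolaMore02}: for each solver, one plots the fraction of problems on which that solver's cost is within a factor $t$ of the best cost attained by any solver.
\begin{verbatim}
import numpy as np

def performance_profile(T, ts):
    """Dolan-More performance profile.

    T  : (num_problems, num_solvers) array of costs (e.g., iterations or
         function evaluations), with np.inf marking a failed run.
    ts : 1-D array of thresholds t >= 1.
    Returns rho with rho[i, s] = fraction of problems that solver s
    solves within ts[i] times the best cost on each problem.
    """
    best = T.min(axis=1, keepdims=True)   # best cost per problem
    with np.errstate(invalid="ignore"):   # inf/inf -> nan if all runs fail
        ratios = T / best
    return np.array([(ratios <= t).mean(axis=0) for t in ts])

# Example: rho = performance_profile(T, np.linspace(1.0, 20.0, 200))
\end{verbatim}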
As expected, the performance of ``SQP Backtracking'' was typically better than that of ``SQP Adaptive.'' That said, ``SQP Adaptive'' was as reliable as this state-of-the-art approach. Over all iterations of all runs of ``SQP Adaptive,'' the stepsize $\alpha_k$ was chosen less than one $40.9\%$ of the time, equal to one $41.8\%$ of the time, and greater than one $17.3\%$ of the time. \begin{figure}[ht] \centering \includegraphics[width=0.425\textwidth,clip=true,trim=60 10 90 50]{perf_iters.png} \quad \includegraphics[width=0.425\textwidth,clip=true,trim=60 10 90 50]{perf_funcs.png} \caption{Performance profiles for ``SQP Adaptive'' and ``SQP Backtracking'' for problems from the CUTE test set in terms of iterations (left) and function evaluations (right).} \label{fig.perf_deterministic} \end{figure} \subsection{Stochastic Setting} Our goal in this setting is to compare the performance of our proposed Algorithm \ref{alg.sqp_stochastic} (``Stochastic SQP'') against that of a stochastic subgradient method (``Stochastic Subgradient'') applied to minimize the exact penalty function~\eqref{eq.penalty_function} (which represents the current state-of-the-art for constrained stochastic optimization). For these experiments, we used our test set of 49 CUTE problems, but considered multiple runs for different levels of noise. In particular, for a given run of an algorithm, we fixed $\epsilon_N \in \{10^{-8}, 10^{-4},10^{-2},10^{-1}\}$, then for each iteration drew the stochastic gradient estimate as $\gbar_k \sim \Ncal(g_k,\epsilon_N I)$. For each problem and noise level, we ran 10 instances. This led to a total of 490 problem instances for each algorithm and noise level. Each run of ``Stochastic SQP'' was given a budget of $1000$ iterations while each run of ``Stochastic Subgradient'' was given a budget of $10000$ iterations. We tuned the value of $\tau$ individually for each problem instance for ``Stochastic Subgradient.'' In particular, for each problem instance, we ran the algorithm for the $11$ values $\tau\in \{10^{-10},10^{-9},\dots,10^{-1},10^0\}$ and selected the value for that instance that led to the best results in terms of feasibility and optimality errors (see below). Overall, this means that for each problem, ``Stochastic Subgradient'' was given 110 times the number of iterations that were allowed for ``Stochastic SQP.'' (This broad range of $\tau$ was needed by ``Stochastic Subgradient'' to obtain its best results. The selected $\tau$ values were roughly evenly distributed over the set from $10^{-10}$ to $10^0$.) For both methods, the Lipschitz constants $L$ and $\Gamma = \sum_{i=1}^m\gamma_i$ were estimated using differences of gradients near the initial point and kept fixed for all subsequent iterations. (This process was done so that $L$ and $\Gamma$ were the same for both methods for each problem.) For ``Stochastic SQP,'' we set $H_k=I$ for all $k$ for fairness of comparison with the (first-order) subgradient method. The other inputs for ``Stochastic SQP'' were set as: $\bar{\tau}_{-1}=1$, $\epsilon = 10^{-6}$, $\sigma = 1/2$, $\bar{\xi}_{-1}=1$, $\theta = 10$, and $\beta_k = 1$ for all $k$. ``Stochastic Subgradient'' was run with a constant stepsize $\tfrac{\tau}{\tau L + \Gamma}$ for all $k$. For each algorithm and each problem instance, we computed a resulting feasibility error and optimality error as follows.
If a run produced an iterate that was sufficiently feasible in the sense that $\|c_k\|_{\infty} \leq 10^{-6} \max\{1,\|c_0\|_{\infty}\}$ for some $k$, then, with the largest $k$ corresponding to such a feasible iterate, the feasibility error was reported as $\|c_k\|_\infty$ and the optimality error was reported as $\|g_k + J_k^Ty_k\|_\infty$, where $y_k$ was computed as a least-squares multiplier using the true gradient $g_k$ and $J_k$. (In this manner, the optimality error is \emph{not} based on a stochastic gradient; rather, it is a true measure of optimality corresponding to the iterate $x_k$.) On the other hand, if a run produced no sufficiently feasible iterate, then the feasibility error and optimality error were computed in this manner at the \emph{least infeasible} iterate during the run. The results are reported in the form of box plots in Figure \ref{fig.perf_stochastic}. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth,clip=true,trim=20 5 90 50]{box_feasibility_1.png}\quad \includegraphics[width=0.45\textwidth,clip=true,trim=20 5 90 50]{box_optimality_1.png} \caption{Box plots for feasibility errors (left) and optimality errors (right).} \label{fig.perf_stochastic} \end{figure} Finally, let us comment on the occurrence of the event \eqref{eq.tau_small}. In all runs of ``Stochastic SQP,'' we found that $\bar{\tau}_k \leq \tau_k^{trial}$ held $100\%$ of the time in the last 100 iterations. In fact, for the noise levels $10^{-8}$, $10^{-4}$, $10^{-2}$, and $10^{-1}$, this inequality held in $99.92\%$, $99.10\%$, $99.22\%$, and $99.65\%$, respectively, of \emph{all} iterations. This provides evidence that the theory offered under the event \eqref{eq.tau_small} is relevant in practice. \section{Conclusion}\label{sec.conclusion} We have presented, analyzed, and tested sequential quadratic optimization algorithms for solving smooth nonlinear optimization problems with equality constraints. Our first algorithm is based on a state-of-the-art line-search SQP method, but employs a stepsize scheme based on (adaptively estimated) Lipschitz constants in place of the line search. We have shown that this method has convergence guarantees that match those of the state-of-the-art line-search SQP method, and our numerical experiments show that the algorithm is as reliable as this state-of-the-art approach. Based on this proposed algorithm, our second algorithm is designed to solve problems involving deterministic constraint functions, but a stochastic objective function. We have proved that under good behavior of the merit function parameter, the algorithm possesses convergence guarantees that match those of our deterministic algorithm in expectation. We have also argued that certain poor behavior of the merit function parameter will only occur in extreme circumstances, and other poor behavior only occurs with probability zero (and in any case can be safeguarded against). Our numerical experiments show that our algorithm for the stochastic setting consistently and significantly outperforms a (sub)gradient method employed to minimize a penalty function, which is an algorithm that represents the current state-of-the-art in the context of \emph{stochastic} constrained optimization. One assumption required for our analysis is that the iterates remain in an open convex set over which the objective and constraint functions and their derivatives remain bounded. 
This is not ideal in the context of a stochastic algorithm, although it is more forgivable in a constrained setting than in an unconstrained setting since the algorithm is designed to be driven to the deterministic feasible region. That being said, one could loosen this assumption if one were to apply our algorithm with $\theta = 0$. Indeed, notice that in our analysis in \S\ref{sec.constant_tau_small}, boundedness of $\{\|g_k\|_2\}$ is primarily required in Lemma~\ref{lem.alphagd}, but with $\theta=0$ one finds directly that, for $k \geq \kbar_{\tau,\xi}$, \bequationNN \E_k[\bar\alpha_k\bar\tau_kg_k^T(\dbar_k - d_k)] = \(\tfrac{\beta_k \bar\xi_{\min} \bar\tau_{\min}}{\bar\tau_{\min}L_k + \Gamma_k}\) \bar\tau_{\min} \E_k[g_k^T(\dbar_k - d_k)] = 0. \eequationNN Hence, our assumption about the boundedness of $\{\|g_k\|_2\}$ is only needed when $\theta > 0$. We have proposed our algorithm for this setting since it is the context of $\theta > 0$ that allows the stepsize scheme in our algorithm to be adaptive, which has a significant benefit in terms of practical performance of the method.
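For completeness, we note that the feasibility and optimality errors reported in \S\ref{sec.numerical} are straightforward to reproduce. A minimal Python sketch (again, our implementations are in Matlab, and the names here are illustrative) computes the least-squares multipliers from the true gradient and evaluates both error measures:
\begin{verbatim}
import numpy as np

def error_measures(g_k, J_k, c_k):
    # y_k solves min_y ||g_k + J_k^T y||_2, a least-squares problem in J_k^T
    y_k, *_ = np.linalg.lstsq(J_k.T, -g_k, rcond=None)
    feasibility_error = np.linalg.norm(c_k, np.inf)
    optimality_error = np.linalg.norm(g_k + J_k.T @ y_k, np.inf)
    return feasibility_error, optimality_error
\end{verbatim}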
\section*{Introduction} Two-dimensional (2D) materials are rapidly becoming a new platform for photon-based quantum information technologies \cite{Liu2019,XChen2019}. The discovery of single-photon emitters (SPEs) at point defects in hexagonal boron nitride (hBN) \cite{Tran2016} has spurred an intense search for optically active defects in 2D crystals \cite{Gottscholl2020}. Compared to defects in bulk crystals, such as diamond and silicon carbide \cite{Childress2006, Ladd2010, Hensen2015, Aharonovich2016}, defects embedded in 2D materials promise to be more easily addressed and controlled. In the case of hBN, defect emitters exhibit a range of desirable properties, including high emission rate, room temperature stability, strong zero-phonon line (ZPL) and easy integration with other optical components using 2D hBN crystals \cite{Liu2019,Caldwell2019,Jungwirth2016,Tran2017,Dietrich2018}. \\ \indent A pressing challenge for defect emitters in hBN is identifying their atomic structure. Various possible structures have been proposed on the basis of density functional theory (DFT) calculations and their comparison with experiments \cite{Tran2016,Tawfik2017,Weston2018,Sajid2018,Lopez-Morales2018,Noh2018,Abdi2018,Turiansky2019}. However, while DFT can provide valuable insight into the formation energy, symmetry and electronic structure of SPE defects, it cannot address key aspects of point-defect SPEs such as their excited states and the radiative processes responsible for light emission. To complicate the matter further, similar to other 2D materials and their defects \cite{Bernardi2013, Qiu2013, Refaely-Abramson2018}, optical transitions at defects in hBN are dominated by excitonic effects \cite{Attaccalite2011}, which require specialized first-principles calculations beyond the scope of DFT. Quantum chemistry approaches and the density matrix renormalization group have also been used to investigate specific defect structures \cite{Reimers2018,Ivady2020}, but wide comparisons among different structures are still missing. \\ \indent The Bethe-Salpeter equation (BSE) can accurately predict optical properties and excitons in materials \cite{Rohlfing}. It also enables, due to recent advances, precise calculations of radiative lifetimes in 2D and bulk crystals \cite{Palummo2015, Chen2018, Chen2019, Jhalani2019}. The radiative lifetime plays an important role in the study of SPEs as it determines the shortest decay time constant in the second-order photon correlation function \cite{Loudon, Kimble} and can also be measured directly from fluorescence intensity decay \cite{Tran2016, Jungwirth2016}. Applying the BSE and related methods to defect emitters in hBN would enable direct comparisons between theory and experiment of the emission energy and radiative lifetime, providing valuable information to identify defect SPEs. Yet, first-principles BSE calculations are computationally costly to carry out on defect structures, and the radiative lifetime calculations are only a recent development. \\ \indent In this work, we employ the BSE approach to compute from first principles the optical properties, transition dipoles, excitons and radiative lifetimes of atomic defects in hBN.
We examine a large pool of candidate SPE structures, spanning native defects and carbon or oxygen impurities, to correlate their atomic structures with their photophysics. We find that different quantum emitters exhibit radiative lifetimes spanning six orders of magnitude and emission energies from infrared to ultraviolet. Bayesian statistical analysis is employed to correlate our results with experiments and identify the most likely SPEs in hBN, among which we find the $\mathrm{V_NN_B}$ defect to have the highest likelihood. In-depth calculations on the $\mathrm{V_NN_B}$ defect highlight the strong dependence of its radiative properties on small perturbations to its atomic structure. The dependence of the defect radiative properties on dielectric screening is analyzed by comparing monolayer and bulk hBN results. Our systematic investigation addresses key challenges for characterizing the excited states and radiative properties of defect emitters in 2D materials. \section*{Results} Our candidate defect structures consist of charge-neutral native defects and carbon or oxygen impurities occupying one or two atomic sites, for a total of 8 different native defects and 7 structures for each of carbon and oxygen impurities. We compute the ground state defect properties using DFT, employing fully relaxed defect atomic structures in 5$\times$5 supercells of monolayer hBN (and for some defects, in bulk hBN). We then refine the electronic structure of selected defects using GW calculations \cite{Hybertsen1986}, followed by BSE calculations to obtain the exciton energies and wave functions and from them the optical absorption, transition dipoles and radiative lifetimes (see the Methods section). In the following, we denote the defects in hBN as $\mathrm{X_NY_B}$ if neighboring N and B atoms are replaced by species X and Y, respectively, where X and Y can be a vacancy or another element \cite{Tawfik2017,Tran2016}. We focus on emitters in the interior of the 2D crystal \cite{Exarhos2017,Hayee2020} and do not consider defects that would likely appear at the sample edges or corners \cite{Chejanovsky2016,Choi2016}. \\ \indent The electronic energies obtained using DFT, while in general not representative of electronic or optical transitions, can be used for guidance and for estimating qualitative trends. Figure \ref{fig:1} shows the lowest spin-conserving transition (HOMO-LUMO) energy of the candidate defects, obtained from DFT, together with the emission polarization inferred from structural symmetry. The defect structures considered here exhibit three different types of local symmetries, $\mathrm{D_{3h}}$, $\mathrm{C_{2v}}$, and $\mathrm{C_s}$. In the high symmetry $\mathrm{D_{3h}}$ structure, adopted by $\mathrm{N_B}$, $\mathrm{V_N}$, $\mathrm{B_N}$, $\mathrm{C_B}$, $\mathrm{C_N}$, $\mathrm{O_N}$, emitted light cannot be linearly polarized. Conversely, linearly polarized emitted light, as observed experimentally in hBN SPEs \cite{Tran2017}, is possible in the $\mathrm{C_{2v}}$ and $\mathrm{C_s}$ symmetries. In the $\mathrm{C_{2v}}$ configuration, which is the most common among the defects investigated here, the 3-fold rotational symmetry is broken but all the atoms remain in-plane, preserving the mirror symmetry with respect to the crystal plane. The $\mathrm{C_s}$ symmetry found in the $\mathrm{V_NN_B}$, $\mathrm{V_NC_B}$, $\mathrm{V_NO_B}$ and $\mathrm{O_B}$ defects is instead associated with an out-of-plane distortion that breaks the mirror symmetry about the plane. 
The DFT transition energies for the 22 defect structures range from 0 to 3.5 eV. In contrast, the ZPLs of the measured SPEs are in the 1.6$-$2.2 eV energy range \cite{Tran2017}, as shown by the shaded region in Fig.~\ref{fig:1}. While candidate structures with $\mathrm{D_{3h}}$ symmetry can be ruled out, $\mathrm{C_{2v}}$ and $\mathrm{C_s}$ structures with exceedingly small or large DFT transition energies also appear unlikely on the basis of the DFT results. \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth,clip]{fig_DFT.pdf} \caption{Figure 1. Distribution of the DFT transition energy, structural symmetry and emitted light polarization of 22 candidate defect structures. The shaded area shows the experimental range of values for SPEs in hBN.} \label{fig:1} \end{figure} Starting from the DFT ground state, for selected defects we compute the excited state properties with the GW-BSE method, obtaining the quasiparticle energies in the one-shot $\mathrm{G_0W_0}$ approximation and the exciton energies and wave functions with the BSE, which captures electron-hole interaction and excitonic effects. We combine the solutions of the BSE with an approach we recently developed \cite{Chen2019,Jhalani2019} to compute the radiative lifetime of an exciton state from Fermi's golden rule. Generalizing our previous formula for isolated (0D) emitters \cite{Chen2019} to include anisotropic dielectric screening in hBN, the radiative decay rate $\gamma_S$ (inverse of the radiative lifetime) of an exciton state $S$ is: \begin{equation} \gamma_S=\frac{\sqrt{\epsilon_{xy}(k_{xy})}e^2E_S}{3\pi\epsilon_0m^2c^3\hbar^2}\left[\left(\frac{3}{4}+\frac{\epsilon_z}{4\epsilon_{xy}(k_{xy})}\right)|p_{S,xy}|^2+|p_{S,z}|^2\right]\,, \label{eq:lifetime} \end{equation} where $\epsilon_{xy}(k_{xy})$ and $\epsilon_{z}$ are the in-plane and out-of-plane dielectric function of hBN, respectively, $k_{xy}$ is the in-plane photon wavevector, $E_S$ is the exciton energy and $p_{S,xy}$ and $p_{S,z}$ are the corresponding components of the exciton transition dipole. We use a constant in-plane dielectric function for bulk hBN ($\epsilon_{xy} = 5$) at optical wavelengths, and for monolayer hBN we take into account the dependence on wavevector $q$ as $\epsilon_{xy}(q)\approx 1+2\pi\alpha_{2D}\,q$ \cite{Cudazzo2011}, where $\alpha_{2D}$ is a constant equal to 0.4 nm \cite{Andersen2015}. In this approach, which is appropriate for 2D materials, the in-plane dielectric function of monolayer hBN reduces to a value of 1 when the wavevector $q$ equals the wavevector of a photon at optical frequencies. \\ \indent \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth,clip]{fig_BSE.pdf} \caption{Figure 2. Radiative lifetime and energy of the lowest bright exciton of candidate defect SPEs in hBN from GW-BSE calculations. The range of experimental values is shown as a shaded region. The blue line shows the radiative lifetime of an exciton with an assumed transition dipole moment of 6.9 Debye.} \label{fig:2} \end{figure} Figure \ref{fig:2} shows the computed radiative lifetimes and lowest bright exciton energies for nine selected defects, namely $\mathrm{V_N}$, $\mathrm{B_N}$, $\mathrm{V_NN_B}$, $\mathrm{C_N}$, $\mathrm{V_NC_B}$, $\mathrm{C_NV_B}$, $\mathrm{O_B}$, $\mathrm{O_NV_B}$ and $\mathrm{V_BO_N}$. We find that the exciton energy can differ significantly $-$ by as much as 1 eV $-$ from the corresponding DFT transition energy, which fails to account for screening and electron-hole interaction effects.
This result is a testament to the importance of excited state calculations for investigating SPEs.\\ \indent Surprisingly, we find that the computed radiative lifetimes for the selected structures span six orders of magnitude, from about 1 to 10$^6$ ns (1 ms), showing that the emission rate and brightness of quantum emitters in hBN can vary widely. The values typically found in experiments for the emission energy (1.6$-$2.2 eV) and radiative lifetime (1$-$10 ns) are also given for comparison in Fig.~\ref{fig:2}. Writing the exciton transition dipole in Eq.~(\ref{eq:lifetime}) as $\mathbf{p}_S = -(i m E_S / \hbar e) \times e\, \mathbf{r}_S$ to highlight its physical meaning of an atomic-scale dipole, and setting $\abs{\mathbf{r}_S} = \abs{\bra{0}\mathbf{r}\ket{S}}$ equal to the in-plane B-N bond length (this choice gives a dipole of 6.9 Debye), the resulting radiative lifetime as a function of exciton energy gives a lower bound to the calculated radiative lifetimes (see the blue line in Fig.~\ref{fig:2}). The physical insight of this analysis is that, due to incomplete overlap of the electron and hole wavefunctions, the effective transition dipole length $\abs{\mathbf{r}_S}$ for most defects is significantly smaller than the bond length, leading to longer radiative lifetimes than this theoretical bound. \\ \indent As the candidate defect structures have properties distributed across a wide range, it is challenging to pinpoint the correct defect structure from our results. As such, we pursue a quantitative analysis of the relative likelihood of the various structures using Bayesian inference, a statistical approach for dealing with uncertainties and combining information from different categories, in which the probability of a hypothesis is updated as more evidence or information becomes available \cite{VonToussaint2011}. The workflow of our \mbox{analysis is shown in Fig.~\ref{fig:3}.} We first generate candidate structures systematically by enumerating all defects that occupy no more than two adjacent atomic sites. A prior likelihood $p(h)$ is then assigned to each candidate structure $h$ as an initial guess of how likely the defect structure is to appear in the hBN crystal, without considering whether it could account for the properties of the SPEs. The prior likelihood can be tailored to describe various experimental scenarios. For example, if carbon impurities are added in the experiment, as shown in recent work \cite{Mendelson2020}, one could include the presence of a carbon impurity in the prior likelihood, therefore favoring the posterior likelihood of carbon-containing emitters. Here, without referring to specific experimental conditions, we choose the prior likelihood $p(h)$ on the basis of the structural complexity of the defect, using the single exponential form $p(h)\propto S_hA^{-C_h}$, where $C_h$ measures the complexity of the structure, defined as the number of vacancies and antisite defects plus twice the number of impurities, $A$ is a constant, and $S_h$ is a degeneracy factor due to symmetry-related configurations. We subsequently combine our computational results with experimental evidence to obtain a posterior likelihood $p(h|E)$ for each structure \cite{VonToussaint2011}: \begin{equation} p(h|E)\propto p(h)p(E|h), \label{eq:bayesian} \end{equation} where the likelihood function $p(E|h)$ is the probability that the structure $h$ is compatible with a specific piece of experimental evidence $E$ on the basis of our calculations.
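To make the update in Eq.~(\ref{eq:bayesian}) concrete, the following Python sketch applies it to a toy set of candidate structures; the numerical priors and likelihoods here are illustrative placeholders, not our computed values:
\begin{verbatim}
import numpy as np

structures = ["V_N N_B", "O_B", "V_N C_B"]
C = np.array([2.0, 2.0, 3.0])  # complexity: vacancies/antisites + 2 per impurity
S = np.array([3.0, 3.0, 3.0])  # symmetry degeneracy factors (assumed values)
A = 10.0                       # constant in the prior p(h) ~ S_h * A^(-C_h) (assumed)

prior = S * A ** (-C)
likelihood = np.array([0.8, 0.4, 0.1])  # p(E|h), placeholder values

posterior = prior * likelihood          # Eq. (2), up to normalization
posterior /= posterior.max()            # normalize to a maximum value of 1
print(dict(zip(structures, posterior.round(3))))
\end{verbatim}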
The resulting posterior likelihood, which is updated in multiple steps using several computed properties, quantifies which structures among those considered are more likely to be defect emitters in hBN and deserve further consideration. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth,trim={1cm 2.5cm 2.5cm 1.5cm},clip]{fig1.pdf} \caption{Figure 3. Bayesian inference workflow. Candidate structures are initially generated through combinatorial enumeration of point defects in hBN and attributed a prior likelihood based on structural complexity. The likelihood of each defect is then updated using results from our DFT calculations. For the most promising defect structures, the likelihood is refined using GW-BSE calculations.} \label{fig:3} \end{figure} The properties of the candidate SPE defects and the results of our Bayesian inference analysis are summarized in Table 1. Moving in the table from left to right, the prior probability $p(h)$ is updated to posterior probabilities $p(h|E)$ using calculations presented in subsequent columns. The first computed posterior probability, $p_1(h|E)$, uses the DFT results to infer whether the defect symmetry is compatible with linearly polarized emission and whether the DFT transition energy, within a standard deviation of $\sim$0.5 eV, falls in the 1.6$-$2.2 eV emission range found in experiments \cite{Jungwirth2016,Exarhos2017,Tran2017}. The final posterior likelihood $p_2(h|E)$ in the rightmost column of Table 1 gives the likelihood of the most likely defect structures when all factors have been taken into account, including comparison between the lowest bright exciton energy and the experimental emission energy and comparison with experiment of the radiative lifetime from our GW-BSE calculations. The detailed rationale of our Bayesian analysis is described in the Supplementary information. \\ \indent The main finding from the data in Table 1 is that the $\mathrm{V_NN_B}$ defect, which was originally proposed as an SPE in hBN on the basis of DFT calculations \cite{Tran2016,Tawfik2017,Abdi2018}, possesses optical and radiative properties that most accurately match the experimental results, even after taking into account excitonic effects and radiative lifetimes. The next most likely structure is the oxygen impurity defect, $\mathrm{O_B}$, with an emission energy of 1.52 eV just below the experimental range. On the other hand, we find that the $\mathrm{V_NC_B}$ defect, which is also considered a likely candidate in the literature based on DFT calculations \cite{Tawfik2017,Abdi2018}, has an emission energy of 2.6 eV that lies too far above the experimental energy range when excitonic effects are included, making it a less likely candidate. We note that although our analysis excludes the $\mathrm{V_NC_B}$ defect and other defects with higher emission energies as candidates for the 1.6$-$2.2 eV SPEs, they could still be good candidates for SPEs in the ultraviolet range \cite{bourrellier2016,tan2019}, which is not the focus of our discussion. As the most likely defect emitter in hBN is the $\mathrm{V_NN_B}$ structure according to our analysis, we investigate it in more detail to provide a microscopic understanding of its radiative properties. \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth,trim={0cm 3.0cm 0cm 2cm},clip]{table_bayesian.pdf} \caption[]{Table 1. Calculated properties of candidate defect emitters in hBN and their likelihood based on Bayesian inference analysis.
The likelihood is normalized to a maximum \mbox{value of 1.}} \label{table:1} \end{figure} Figure 4(a) shows that the relaxed atomic structure of the $\mathrm{V_NN_B}$ defect in monolayer hBN exhibits an out-of-plane displacement of the central nitrogen (N) atom by $z=0.66$~\AA. If the structure is made planar by constraining the N atom to the hBN plane, the total energy of the system is only moderately higher (by 0.125 eV) than the equilibrium structure. Therefore the total energy forms a shallow double-well potential as a function of the out-of-plane N atom displacement. Consistent with recent findings \cite{Li2020}, this small energy difference suggests that the structure is soft in the out-of-plane direction and could fluctuate due to external perturbations. \\ \indent To take this result into account, we compute the optical absorption spectrum of the $\mathrm{V_NN_B}$ defect for out-of-plane displacements $z$ of the N atom in the 0$-$0.83~\AA~range, interpolating between the in-plane and out-of-plane equilibrium structures to obtain the intermediate metastable structures. We find that the absorption spectrum changes drastically for even small changes of $z$, as shown in Fig.~4(c). The spectrum is dominated by two exciton absorption peaks at low energy. At the equilibrium position ($z=0.66$~\AA), the first peak at 1.92 eV is associated with a transition from the top valence to the bottom conduction spin-majority electronic states, whose wave functions are shown in Fig.~4(b). As the displacement $z$ of the N atom is decreased from the equilibrium value to zero, which corresponds to a planar structure, the energy of the first exciton peak decreases at first but then increases again at small $z$ values while gaining oscillator strength monotonically. The second peak at 2.7 eV is also due to a transition from the top valence to the bottom conduction spin-minority bands with relatively large oscillator strength at the out-of-plane equilibrium N atom position. Both the energy and the oscillator strength of this peak decrease monotonically as $z$ decreases [see Fig.~4(c)]. For the planar structure ($z$=0), the second peak becomes the lowest-energy transition but completely loses its oscillator strength and becomes a dark state. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth,trim={3cm 0cm 5cm 1cm},clip]{fig_VNNB.pdf} \caption{Figure 4. Properties of the $\mathrm{V_NN_B}$ defect. (a) Atomic structure in top and side views, with N atoms shown as blue and B atoms as green spheres. Also shown in the side view is the N atom out-of-plane displacement $z$. (b) GW electronic band structure computed using a $5\times5$ supercell. The wave function of the two majority-spin bands that mainly contribute to the lowest-energy exciton are plotted on the right. (c) Absorption spectrum and (d) radiative lifetime as a function of the out-of-plane displacement $z$ of the central N atom.} \label{fig:4} \end{figure} These noteworthy changes in the two lowest-energy exciton peaks are accompanied by substantial changes in the exciton radiative lifetimes. Although the lifetime of the lowest exciton is 334 ns for the DFT equilibrium out-of-plane distortion, it becomes only 19 ns if the $\mathrm{V_NN_B}$ structure is kept planar ($z=0$ case), as shown in Fig.~4(d). This value of the radiative lifetime for the planar structure is within a factor of 2$-$3 of experimental results \cite{Tran2017}. 
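To give a sense of the magnitudes involved, Eq.~(\ref{eq:lifetime}) can be evaluated directly. The Python sketch below assumes a purely in-plane transition dipole and unit dielectric constants (the monolayer limit at optical photon wavevectors), with the dipole parameterized through $\abs{\mathbf{r}_S}$ as discussed above:
\begin{verbatim}
import numpy as np
from scipy.constants import e, epsilon_0, m_e, c, hbar, eV

def radiative_lifetime_ns(E_S_eV, r_S_m, eps_xy=1.0, eps_z=1.0):
    # Eq. (1) with |p_z| = 0 and |p_xy| = m_e * E_S * r_S / hbar
    E_S = E_S_eV * eV
    p_xy = m_e * E_S * r_S_m / hbar
    prefactor = (np.sqrt(eps_xy) * e**2 * E_S
                 / (3 * np.pi * epsilon_0 * m_e**2 * c**3 * hbar**2))
    gamma_S = prefactor * (0.75 + eps_z / (4 * eps_xy)) * p_xy**2
    return 1e9 / gamma_S

# A 2 eV exciton with r_S equal to the B-N bond length (~1.45 Angstrom,
# i.e., a ~6.9 Debye dipole) gives roughly 16 ns, tracing the
# lower-bound line shown in Fig. 2.
print(radiative_lifetime_ns(2.0, 1.45e-10))
\end{verbatim}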
Because many experiments are carried out on thicker hBN samples, we also calculated the radiative lifetime of the $\mathrm{V_NN_B}$ defect embedded in bulk hBN (instead of monolayer), a setting that more closely resembles SPE experimental measurements in thicker hBN layer stacks (see the Methods section). Due to interlayer interactions, the out-of-plane displacement of the N atom is reduced to $0.45$~\AA~in bulk hBN. Going from monolayer to bulk reduces the radiative lifetime of the $\mathrm{V_NN_B}$ defect to 147 ns due to changes in the dielectric environment and transition dipole, both of which affect the radiative emission rate in Eq.~(1). Therefore we conclude that the radiative lifetime of the $\mathrm{V_NN_B}$ defect is highly sensitive to both out-of-plane structural distortions and the dielectric environment. \section*{Discussion} While our computed lifetimes are in the 20$-$350 ns range for different $\mathrm{V_NN_B}$ structures, these values pertain to an ideal defect with 100\% quantum yield, whereas defect emission measured so far in hBN is likely associated with a lower quantum yield. To explain the discrepancy between the experimental and calculated lifetimes, note that nonradiative processes, which are not included in our calculation of intrinsic radiative lifetimes, could significantly reduce the apparent lifetime measured in experiments. For example, a defect with a measured 10 ns lifetime but only a 10\% quantum yield would correspond to a 100 ns intrinsic radiative lifetime, in very good agreement with our calculations. Future SPE measurements in hBN taking into account the quantum yield may provide a better estimate of the intrinsic SPE radiative lifetime and enable a more accurate comparison with our calculations. \\ \indent In summary, we have investigated the excited state and radiative properties of many candidate defect SPEs in hBN. Our calculations address the photophysics of these defect structures, including their optical transitions and radiative lifetimes, accurately accounting for excitonic effects and anisotropic dielectric screening. Our Bayesian statistical analysis allows us to identify the native $\mathrm{V_NN_B}$ defect as the most likely candidate within a wide pool of over 20 charge-neutral native defects and carbon or oxygen impurities. This work explores the fertile ground at the intersection of excited-state first-principles calculations and Bayesian learning methods, and the application of this framework to quantum technologies. The search for novel single emitters will greatly benefit from such precise first-principles calculations of excited states in materials. \section*{Methods} We carry out DFT calculations in the generalized gradient approximation using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional \cite{Perdew1996}. Our spin-polarized DFT calculations employ the plane-wave pseudopotential method implemented in {\sc Quantum Espresso} \cite{QE}. The defects are placed in a 5$\times$5 supercell of hBN with the lattice constant kept fixed at 5$\times$2.504~\AA ~while the atoms are fully relaxed without symmetry constraints. For calculations on monolayer hBN, a vacuum of 15~\AA ~is used to avoid spurious inter-layer interactions with the periodic replicas. We use ONCV pseudopotentials \cite{Hamann2013,Schlipf2015} for all atoms along with a plane-wave kinetic energy cutoff of 80 Ry and a 3$\times$3 $\mathbf{k}$-point Brillouin zone grid.
The GW-BSE calculations are carried out with the Yambo code \cite{Marini2009,Sangalli2019} using a 2D slab cutoff of the Coulomb interaction. For calculations of defects in monolayer hBN, a 3$\times$3 $\mathbf{k}$-point grid is employed together with an energy cutoff of 10 Ry for the dielectric matrix. The number of empty bands included in the GW calculation (for the polarizability and self-energy summations) is 7 times the number of occupied bands. These parameters converge the exciton energy to within 0.1~eV in our test calculations, which are shown in the Supplementary information. For calculations on defects in bulk hBN, we use a 5$\times$5$\times$1 supercell containing two AA$'$-stacked layers. The bulk case uses a 2$\times$2$\times$3 $\mathbf{k}$-point grid; all other parameters are the same as in the monolayer case. \section*{Data Availability} All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary information. Additional data related to this paper may be requested from the authors.
\section{Introduction}\label{sec:introduction} From the perspective of most computational analysis, music can be defined as sound, its important features yielding to the decomposition of waveforms. However, for the vast majority of history, musical sound could not be separated from its source; to whatever degree it may have evolved biologically to serve various human functions, music must be regarded as an embodied and socially embedded phenomenon \cite{bispham2018human, cross2009evolutionary, richter2016don}. Research has shown intimate links between musical features and human movement, including the reflection of hierarchical rhythmic structures in embodied eigen movements \cite{toiviainen2010embodied}, the reflection of higher-level musical structures in group movement to Electronic Dance Music \cite{solberg2017pleasurable}, and the reflection of spectral and timbral features of music in dance \cite{burger2018embodiment}. Bodily movement is one of the most commonly reported responses to music \cite{lesaffre2008potential}, and movement to music is one of the very few universal features of music across cultures \cite{nettl2000ethnomusicologist}. This paper, as a step towards multimodal MIR, takes the multi-modality of music into consideration by addressing one of the primary aspects of musical engagement, i.e., movement. It is therefore insufficient to consider music only in terms of sound when trying to understand humans' digital use of and interaction with music. This may be especially true in terms of user experience and personalization; human movement in response to music reflects not only the music itself but characteristics of the individual, such as personality \cite{luck2010effects} and emotion \cite{luck2014emotion}. Indeed, research has shown that music-induced movement is so individual that its features can be used in person-identification with a high degree of accuracy \cite{carlson2020dance}. This is in line with previous research, such as that of Cutting et al. \cite{cutting1977recognizing}, demonstrating that friends can recognise each other from their walk with only point-light (stick figure) displays of movement, without the need for other distinguishing features. This paradoxical balance between universality and individuality in human motoric responsiveness to music poses a challenge for the creation of digital music interfaces which take music-induced movement into account in providing personalized music experiences. Although the concept of an interactive music system that allows music playback to be controlled and altered via human gestures has long been proposed \cite{subotnick2001interactive}, human-movement-based interaction techniques and devices are fast gaining importance in the field of HCI \cite{gillies2019understanding}. In this context, decoding aspects of a user or individual from their movements becomes a key and useful endeavor, one which would then aid in the design of more personalized experiences. \section{Related Work} The specific features used in previous work associating movement with individual differences are quite varied. Satchell et al. \cite{satchell2017evidence} examined speed and the relative and absolute rotation of the body, finding relationships between FFM personality traits and gait features such as the relative movement of the upper and lower body during walking, while Michalak et al. \cite{michalak2009embodiment} were able to associate low mood with lateral body sway and posture.
In dance, relevant features have included the amount of movement of the whole body relative to itself and to the environment, as well as responsiveness to musical features such as tempo \cite{carlson2016conscientiousness, toiviainen2010embodied}. Another area of exploration for individual differences in movement patterns has been the context of disorders that alter or impair movement \cite{de2012rehabilitation, anzulewicz2016toward, torres2013autism}. These links allow us to postulate that movement patterns should give us information related to individual traits and tendencies, which can then be linked to music preferences, mood, or emotion in relation to music experiences, and which could have implications for music therapy as well as for music information retrieval. To date, however, no studies have predicted personality and empathy as a function of movement patterns. The current study focuses on identifying FFM personality traits, as well as scores on the Empathy Quotient (EQ) and Systemizing Quotient (SQ), from participants' free dance movements to various genres of music. The EQ measures participants' tendency to empathize with others \cite{baron2004empathy}, while the SQ measures the tendency to think in terms of systems \cite{baron2003systemizing}. These two measures were originally developed to increase understanding of people with autism spectrum disorder (ASD), as in this population trait systemizing tends to be very high while empathy tends to be low. However, previous work has also used the EQ/SQ to determine how these traits are distributed in the general population. Although previous work has found relationships between empathy and responsiveness to changes in heard music or in a dance partner \cite{bamford2019trait, carlson2018dance}, and between EQ/SQ scores and music preferences \cite{carlson2017personality, greenberg2015musical}, general movement patterns associated with empathy have not, to the knowledge of the authors, been explored using dance movement, nor have patterns related to systemizing tendencies. \section{Method} \subsection{Participants} The data were acquired from a previous study \cite{carlson2019empathy} and comprise recordings of 73 university students (54 females, mean age = 25.74 years, std = 4.72 years). Thirty-three reported having received formal musical training; five reported one to three years, ten reported seven to ten years, while sixteen reported ten or more years of training. Seventeen participants reported having received formal dance training; ten reported one to three years, five reported four to six years, while two reported seven to ten years. Participants were of 24 different nationalities, with Finland, the United States, and Vietnam being the most frequently represented. For attending the experiment, participants received two movie ticket vouchers each. All participants spoke and received instructions in English. Fifteen participants were excluded from further analysis due to incomplete data. Participants were asked to listen to the music and to move as freely as they desired while staying within the marked capture space. The aim of these instructions was to create a naturalistic setting, such that participants would feel free to behave as they might in a real-world situation. \subsection{Apparatus, Stimuli, and Procedure} Participants' movements were recorded using a twelve-camera optical motion-capture system (Qualisys Oqus 5+), tracking at a frame rate of 120 Hz the three-dimensional position of 21 reflective markers attached to each participant.
Markers were located as follows (L=left, R=right, F=front, B=back): 1: LF head; 2: RF head; 3: B head; 4: L shoulder; 5: R shoulder; 6: sternum; 7: stomach; 8: LB hip; 9: RB hip; 10: L elbow; 11: R elbow; 12: L wrist; 13: R wrist; 14: L middle finger; 15: R middle finger; 16: L knee; 17: R knee; 18: L ankle; 19: R ankle; 20: L toe; 21: R toe. The stimuli comprised sixteen 35-second excerpts from eight genres, in randomized order: Blues, Country, Dance, Jazz, Metal, Pop, Rap, and Reggae. The stimuli for the experiment were selected using a computational process based on social-tagging and acoustic data. The selection pipeline was designed to select naturalistic stimuli that were uncontroversially representative of their respective genres and that would also be appropriate to use in a dance setting. Moreover, investigating movements to multiple genres of music further adds to the generalizability of our findings. \begin{figure}[h!] \centering \begin{subfigure}[b]{.5\linewidth} \centering \includegraphics[width=\linewidth]{figs/21_markers.png} \caption*{(A)} \label{fig:sub1} \end{subfigure}% \begin{subfigure}[b]{.5\linewidth} \centering \includegraphics[width=.9\linewidth]{figs/20_joints.png} \caption*{(B)} \label{fig:sub2} \end{subfigure} \caption{Marker and joint locations. (A) Anterior view of the marker locations on a stick figure illustration; (B) Anterior view of the locations of the secondary markers/joints used in animation and analysis of the data} \label{fig:markers} \end{figure} \subsubsection{Personality and Trait Empathy Measures} The Big Five Inventory (BFI) was used to capture the five predominant personality dimensions, namely, Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism \cite{john1999big}. Trait empathy was measured using the short-form versions of the EQ and SQ, developed and validated by Wakabayashi et al. \cite{wakabayashi2006development}, giving an EQ and an SQ score per participant. \begin{figure*}[t!] \begin{center} \includegraphics[height=6.5cm,width=\linewidth]{figs/Joint_Importance.pdf} \caption{Overview of our pipeline. Given the position of joints across time frames in 3D Euclidean space (a), we apply pairwise correntropy between time series $x_i$ and $x_j$ and calculate the K-matrix (b). Then, taking the lower triangular part of the symmetric covariance matrix, we get the feature vectors (c). After training the regression model on the feature vectors, we get the weight vector (d). Finally, weight values from the learned weight vector are mapped to the corresponding joints to get the per-joint importance (e).} \label{fig:JI} \end{center} \end{figure*} \subsection{Feature Extraction} The analysis and prediction pipeline is illustrated in \figref{fig:JI}. To facilitate extraction of kinematic features using the MATLAB Motion Capture (MoCap) Toolbox \cite{burger2013mocap}, a set of 20 secondary markers, subsequently referred to as joints, was derived from the original 21 markers. The locations of these 20 joints are depicted in \figref{fig:markers}. The locations of joints B, C, D, E, F, G, H, I, M, N, O, P, Q, R, S, and T are identical to the locations of one of the original markers, while the locations of the remaining joints were obtained by averaging the locations of two or more markers; Joint A: midpoint of the two back hip markers; J: midpoint of the shoulder and hip markers; K: midpoint of the shoulder markers; and L: midpoint of the three head markers.
The instantaneous velocity of each marker in each direction was estimated by time differentiation followed by the application of a 2nd-order Butterworth filter with a cutoff frequency of 24 Hz \cite{burger2013mocap}. The features used in our analysis are the covariances of position and of velocity, computed between all marker time series in each direction ($X$, $Y$ and $Z$) within each participant for each stimulus. We used a non-linear measure of covariance between the markers, the correntropy between time series $x_{i}$ and $x_{j}$ \cite{liu2007correntropy}, given by: \begin{equation} K(x_{i}, x_{j}) = e^{\frac{-||x_{i} - x_{j}||_{2}^{2}}{2\sigma^{2}T^2}} \end{equation} where $||x_{i} - x_{j}||_{2}$ is the L2-norm between $x_{i}$ and $x_{j}$, $\sigma$ is a constant (12.0 in our case), and $T$ is the length of the time series. The L2-norm is divided by $T$ to normalize for time-series length, since the time series have different lengths for different stimuli. Since the number of joints is 20 and each joint has three coordinates, the dimension of $K$ is 60$\times$60. The lower triangular part of the symmetric covariance matrix, excluding the diagonal elements, was vectorised to produce a feature vector of length 1770 for each participant and each stimulus. We also ran our experiments using normalized feature vectors calculated from position and from velocity, employing the standard Gaussian normalization technique: \begin{equation} \hat{X} = \frac{X - \mu(X)}{\sigma(X)} \end{equation} where $\hat{X}$ is the normalized feature vector, $\mu(X)$ is the mean and $\sigma(X)$ is the standard deviation. \subsection{Model Regression} The most common regression model used for value prediction tasks is Linear Regression, in which the goal is to find an optimal line that minimizes the total prediction error. However, this model treats its parameters as unknown constants whose values must be derived, and the weights can become unstable when the number of features is large relative to the number of samples. To prevent the model from overfitting, we therefore trained it on principal components of the features (in the Results section we consider 243 components for position data and 137 components for velocity data, which gave us the best results). In addition to Principal Component Regression (PCR), we also approached this problem using Bayesian Regression\footnote{Detailed analyses of Principal Component Regression (PCR) and Bayesian Regression are provided in the supplementary material.}. In Bayesian Regression the parameters are treated as random variables belonging to an underlying distribution; depending on the dataset, we can be more or less certain about the weights. Since the parameters of the model belong to a distribution, the predictions of the model also belong to a distribution, so we obtain confidence bounds on our predictions; Bayesian models are therefore better at representing the uncertainty of their predictions. \subsubsection{Personality and Trait Empathy Prediction} The extracted features are used to train five different Bayesian Regression models to predict each of the five personality traits - Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN) - and two further regression models to predict EQ and SQ, respectively. The models are trained and evaluated on the described dataset. \subsection{Visualizing the Weight Vector} To interpret the coefficients (also known as the weights or model parameters) of the regression models, we map the learned weight values back onto the corresponding joints.
In our algorithm, we first recover, for each weight, its index pair in the 60$\times$60 matrix and then add the absolute value of the weight to both corresponding joints. The sign of a coefficient indicates the direction of the relationship, while its magnitude carries the importance. After that, Min-Max normalization is applied to bring the values into the range (0, 1), allowing the same variable to be compared across similar tasks: \begin{equation}\label{eq:minmax} \overline{JI}[i] = \frac{JI[i] - \min(JI)}{\max(JI) - \min(JI)} \quad \forall\, i \end{equation} where $\overline{JI}$ represents the normalized Joint Importance vector. Algorithm \ref{alg:joint_importance} gives the pseudo-code for obtaining the importance of the joints from the weights of the trained regression model. \begin{algorithm}[H] \SetAlgoLined \KwResult{Calculate a vector $J$ of 20 dimensions representing importance of each joint.} \hspace{5mm} $W$ is the weight vector; $J$ is the importance vector initialised with $0$; $S$ contains lower triangular indices excluding diagonal indices; 0-indexing is followed;\\ \begin{algorithmic}[1] \STATE $S\gets LowerTriangularIndices(60 \times 60)$ \STATE $N\gets$ $S.length()$ \FOR{$k=0:N-1$} \STATE $(i,j) := S(k)$ \STATE $(\hat{i},\hat{j})\gets IndexToJoint(i,j)$ \STATE $J(\hat{i}):= J(\hat{i}) + |W(k)|$ \STATE $J(\hat{j}):= J(\hat{j}) + |W(k)|$ \ENDFOR \RETURN $J$ \end{algorithmic} \caption{Joint Importance} \label{alg:joint_importance} \end{algorithm} After obtaining a vector of 20 dimensions, we reduce it to 12 before visualizing joint importance. We do this by taking the average of joints which occur in pairs, e.g., (L shoulder, R shoulder), (L wrist, R wrist). \subsubsection{Evaluation Metrics} (a) Root Mean Square Error (RMSE): the square root of the expected squared (quadratic) error or loss. \\ (b) $R^{2}$ Score: the proportion of the variance (of $y$) that is explained by the independent variables in the model.\footnote{Detailed explanation of metrics is provided in the supplementary.} \\ As the square root of the mean squared error, RMSE can be interpreted as the standard deviation of the unexplained variance, and has the useful property of being in the same units as the response variable; at the same time, $R^2$ helps us evaluate the goodness of fit in capturing the variance of the training data. We calculate RMSE and $R^2$ on multiple splits so that we get an average estimate of the accuracy. \subsection{Results} \subsubsection{EQ and SQ} The results for EQ prediction are in \tabref{table:eq_results} and those for SQ prediction are in \tabref{table:sq_results}. The results are calculated using 5-fold cross validation. The range of EQ and SQ is 0-80. The boldface values represent the best score. The 'N' in the tables denotes that Gaussian normalization was applied to the features. We trained and evaluated two different models for each of the aforementioned tasks. We can see that using position data, instead of velocity data, to generate the feature vectors gave the best results, and that Bayesian Regression gave better results than Principal Component Regression on both metrics. From here on, we therefore use Bayesian Regression for the remaining prediction and analysis tasks. \\ \begin{table}[h!]
\centering \begin{tabular}{|c|c c|c c|} \hline Input & \multicolumn{2}{c|}{PCR} & \multicolumn{2}{c|}{Bayesian Ridge} \\ & RMSE & $R^{2}$ & RMSE & $R^{2}$ \\ \hline \textbf{Position} & 3.071 & 0.708 & \textbf{2.722} & \textbf{0.771} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Position(N)}\end{tabular} & 3.201 & 0.684 & 2.733 & 0.765 \\ \textbf{Velocity} & 4.938 & 0.249 & 4.343 & 0.423 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Velocity(N)}\end{tabular} & 4.583 & 0.353 & 4.015 & 0.503 \\ \hline \end{tabular} \caption{Prediction Results for Empathizing Quotient} \label{table:eq_results} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c c|c c|} \hline Input & \multicolumn{2}{c|}{PCR} & \multicolumn{2}{c|}{Bayesian Ridge} \\ & RMSE & $R^{2}$ & RMSE & $R^{2}$ \\ \hline \textbf{Position} & 2.398 & 0.781 & \textbf{2.161} & \textbf{0.867} \\ \begin{tabular}[c]{@{}c@{}}\textbf{Position(N)}\end{tabular} & 2.363 & 0.786 & 2.502 & 0.838 \\ \textbf{Velocity} & 4.448 & 0.252 & 3.832 & 0.469 \\ \begin{tabular}[c]{@{}c@{}}\textbf{Velocity(N)}\end{tabular} & 4.211 & 0.329 & 3.714 & 0.552 \\ \hline \end{tabular} \caption{Prediction Results for Systemizing Quotient} \label{table:sq_results} \end{table} \begin{table*}[t!] \centering \begin{tabular}{|c|c c|c c|c c|c c|c c|} \hline Input & \multicolumn{2}{c|}{Openness} & \multicolumn{2}{c|}{Conscientiousness} & \multicolumn{2}{c|}{Extraversion} & \multicolumn{2}{c|}{Agreeableness} & \multicolumn{2}{c|}{Neuroticism}\\ & RMSE & $R^{2}$ & RMSE & $R^{2}$ & RMSE & $R^{2}$ & RMSE & $R^{2}$ & RMSE & $R^{2}$ \\ \hline \textbf{Position} & \textbf{0.197} & \textbf{0.776} & \textbf{0.317} & \textbf{0.760} & \textbf{0.384} & 0.743 & \textbf{0.252} & \textbf{0.776} & \textbf{0.384} & \textbf{0.758} \\ \begin{tabular}[c]{@{}c@{}} \textbf{Position(N)}\end{tabular} & 0.227 & 0.740 & 0.332 & 0.690 & 0.414 & \textbf{0.756} & 0.273 & 0.716 & 0.390 & 0.739 \\ \textbf{Velocity} & 0.332 & 0.464 & 0.487 & 0.415 & 0.556 & 0.523 & 0.440 & 0.335 & 0.557 & 0.483 \\ \begin{tabular}[c]{@{}c@{}} \textbf{Velocity(N)}\end{tabular} & 0.304 & 0.527 & 0.426 & 0.543 & 0.501 & 0.623 & 0.408 & 0.442 & 0.461 & 0.654 \\ \hline \end{tabular} \caption{Prediction Results for Five Personality Traits using Bayesian Regression} \label{table:ocean_results_d2} \end{table*} \subsubsection{Personality Regression} The results for OCEAN value prediction on the dataset can be found in \tabref{table:ocean_results_d2}. The results are calculated using 5-fold cross validation. The range of the personality values is 1.0-5.0. We can see that using position data to extract features gave the best results for predicting all five personality traits on the dataset. We can conclude that using position data instead of velocity data in the kernelized space is better for these regression tasks. \subsubsection{Joints' Importance} For evaluating joint importance, we used the learned weights of the models trained on position data across the different tasks. For the purpose of analyzing the importance of the joints, we reduced them to 12 by taking the average for those joints which occur in pairs, e.g., (L shoulder, R shoulder); this was also done for the hips, knees, ankles, toes, elbows, wrists, and fingers. Altogether, the characterization of individual traits is dominated by the limbs rather than by the core of the body. From the relative joint importance depicted in \figref{fig:eq_sq_spyder}, we observe that 'Ankle', 'Elbow' and 'Shoulder' play an important role in determining the EQ and SQ of an individual, whereas 'Neck' and 'Torso' have a negligible contribution.
We also infer that 'Finger', 'Hip', and 'Knee' are more crucial joints for predicting EQ than for SQ, whereas 'Elbow' holds significantly higher importance for predicting SQ than for EQ. \figref{fig:ocean_spyder} displays the relative joint importance for the five personality traits, with the mean plotted in each sub-figure. The farther away from the mean the joint importance value of an individual joint is, the more important that joint is in characterizing the trait. Some similarities in the joint importance profiles across the personality traits can be attributed to the inherent correlation that exists among them\footnote{The table for Spearman Correlation between the personality traits is provided in the supplementary material.}. We observe that it is the 'Finger', 'Elbow', and 'Knees' that contribute most to feature importance, whereas 'Root', 'Neck' and 'Torso' have a negligible contribution. For characterizing Conscientiousness, 'Shoulders', 'Knees' and 'Neck' play a crucial role, while 'Head' and 'Toe' play an important role for Extraversion. For Agreeableness, 'Neck' and 'Wrists' have relatively less importance as compared to other joints, whereas 'Wrists' play an important role in Openness. Finally, there are no significant defining features for Neuroticism, which indicates that its expression in music-induced dance movement is very limited. \begin{figure}[b!] \includegraphics[width=\linewidth]{eq_sq_spyder.pdf} \caption{Relative importance of Joints in EQ and SQ Tasks using the Position Data. } \label{fig:eq_sq_spyder} \end{figure} \begin{figure*}[ht!] \begin{center} \includegraphics[width=\linewidth]{ocean_spyder1.pdf} \caption{Relative importance of Joints of the five personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) using the Position Data. The black line indicates the mean importance of the corresponding joint marker. The red dotted line in the top left sub-figure indicates the standard deviation about the mean. } \label{fig:ocean_spyder} \end{center} \end{figure*} \section{Discussion} Music experiences are highly embodied, making it necessary to consider individual embodied responses to music in developing more advanced personalized user experiences. The current study is, to the authors' knowledge, among the first to use participants' free dance movements to predict personality traits as well as the Empathizing and Systemizing Quotients (EQ/SQ). Co-variance between joint velocities has previously been used to identify an individual from their free dance movements with a high degree of accuracy \cite{carlson2020dance}. The results of the current analysis show co-variance to be a useful feature in predicting individual differences; however, we achieved considerably better prediction accuracy using position data than velocity data. Overall, the limbs of the body seemed to have more importance in predicting individual traits than the core body. This is in line with the fact that gesture plays an important role in communication \cite{goldin2006talking} and, as specifically regards the EQ/SQ, with the fact that these tests were originally developed in conjunction with studies of ASD, in which gesture and imitative movement appear to be compromised \cite{hamilton2007imitation}.
Although the sample used in the current study comprised typically functioning (non-ASD) participants, the accuracy of prediction of EQ/SQ scores in this analysis is worth highlighting in light of recent work suggesting the existence of motor signatures unique to ASD, detectable from whole body movements as well as data drawn from participants' interaction with tablets \cite{anzulewicz2016toward, wu2018biomarker}. The specific markers that were important in the prediction of individual traits in some cases corroborate previous work and in some cases contradict it. Luck et al. \cite{luck2010effects} found correlations between Extraversion and speed of head movement, which supports the current finding that the head is of particular importance in identifying Extraversion. Carlson et al. \cite{carlson2016conscientiousness} found that, compared to Conscientiousness, the core body was more important in responsiveness to musical tempo in relation to Extraversion, which is partly supported by the slightly greater importance of the finger and wrist markers to Extraversion in our study, but partly contradicted by the importance of the shoulder marker in Extraversion. The difference between findings may relate to the use in the current study of position rather than velocity or acceleration data; that is, core body posture while moving to music may be more indicative of Conscientiousness than core body movement. EQ scores were more related to head, finger, hip and lower limb joints than SQ scores, which may be partly attributed to gender-typical movement patterns, as females tend to score higher on the EQ than males \cite{baron2004empathy, troje2002decomposing}. Several limitations of the current study should be noted. First, the majority of participants were from European or North American countries, and all eight music stimuli were of Western origin, limiting the degree to which results can be generalized cross-culturally. Second, there may exist potential bias due to gender imbalance; future work could include separate analyses performed within gender categories. Lastly, participants' preferences for the heard stimuli were not included in our model. This would be an important feature to focus on in future work, as preference and enjoyment are highly relevant to personalized MIR. Further extension of this work could help to make music recommendation systems more robust. Previous work has considered the relationship between personality and music preference \cite{carlson2017personality, rentfrow2003re}, while Greenberg et al. \cite{greenberg2015musical} explored the relationship between music preference and empathizing-systemizing theory, even suggesting that music may play a role in increasing empathy in people with empathy-related disorders, such as ASD. However, the relationship between embodiment, personality and musical experiences requires further exploration. To conclude, this study represents an early step towards multimodal MIR. To make this approach applicable to personalized gesture-based retrieval systems, it can be extended to monocular video captured by accessible devices such as a mobile phone camera. This would be feasible due to recent progress in 3D human pose estimation, which allows body joint coordinates to be predicted from monocular video \cite{pavllo20193d, venkat2019humanmeshnet, cheng20203d}. This would then allow future recommendation systems to take embodied processes into account, resulting in better and more responsive personalized experiences.
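Finally, as a pointer for replication, the correntropy feature extraction of the Method section is only a few lines in any numerical environment. A minimal NumPy sketch follows (our own pipeline used the MATLAB MoCap Toolbox; the array shapes here are illustrative):
\begin{verbatim}
import numpy as np

def correntropy_features(X, sigma=12.0):
    # X: (T, 60) array -- 20 joints x 3 coordinates over T time frames
    T, D = X.shape
    K = np.empty((D, D))
    for i in range(D):
        for j in range(D):
            d2 = np.sum((X[:, i] - X[:, j]) ** 2)  # squared L2-norm
            K[i, j] = np.exp(-d2 / (2.0 * sigma**2 * T**2))
    # lower-triangular part, excluding the diagonal: 60*59/2 = 1770 features
    return K[np.tril_indices(D, k=-1)]
\end{verbatim}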
\clearpage \section*{Supplementary Material of "Towards Multimodal MIR: Predicting individual differences from music-induced movement"} \noindent In this supplementary material, we present more experimental results and details about the evaluation metrics and Bayesian Regression that could not be included in the main manuscript due to the lack of space.\\ \subsection*{Evaluation Metrics} (a) Root-Mean-Square Error (RMSE): If $\hat{y}_{i}$ is the predicted value of the $i^{th}$ sample and $y_i$ is the corresponding true value for a total of $n$ samples, then the RMSE is defined as: \begin{equation}\label{eq:rmse} RMSE(y, \hat{y}) = \sqrt{\frac{1}{n} \sum_{i=1}^{n}(y_i - \hat{y}_i)^2} \end{equation} RMSE allows a direct interpretation of the model's performance: if $X$ is the ground truth value and $Y$ is the RMSE score, the model's prediction can be expected to lie roughly within the range $X-Y$ to $X+Y$.\newline \\ (b) $R^{2}$ Score: If $\hat{y}_{i}$ is the predicted value of the $i^{th}$ sample, $y_i$ is the corresponding true value for a total of $n$ samples, and $\bar{y}$ is the mean of the ground truth data, the estimated $R^2$ is defined as: \begin{equation}\label{eq:r2} R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i -\bar{y})^2} \end{equation} \subsection*{Principal Component Regression} A Linear Regression model was used to predict the EQ and SQ values, but we found that the model overfit heavily. We therefore used the principal components of the features for this model. We calculated the RMSE and $R^2$ scores for both of the aforementioned tasks after taking the principal components, and repeated this experiment while varying the number of principal components. \subsubsection*{EQ} From Figs. \ref{fig:f1} and \ref{fig:f2} we can see that, using position data for feature extraction (Sec. 3.3), the model started to overfit beyond roughly 240 principal components, since the RMSE started increasing and the $R^2$ score started decreasing on the testing data. Similarly, from Figs. \ref{fig:f3} and \ref{fig:f4}, we can see that, using velocity data for feature extraction, the model started to overfit beyond roughly 140 principal components. \begin{figure}[h!] \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_pos_r2.pdf} \caption{$R^2$ score on Position Data} \label{fig:f1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_pos_rmse.pdf} \caption{RMSE on Position Data} \label{fig:f2} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_vel_r2.pdf} \caption{$R^2$ score on Velocity Data} \label{fig:f3} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_vel_rmse.pdf} \caption{RMSE on Velocity Data} \label{fig:f4} \end{subfigure} \caption{PCR Results on EQ} \end{figure}
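For concreteness, the two metrics and the PCR setup can be sketched in a few lines of Python. The snippet below is a minimal illustration using scikit-learn; the function and variable names are ours, and it does not reproduce the exact pipeline of the study.
\begin{verbatim}
# Minimal sketch of the evaluation metrics and the PCR setup
# (illustrative only; not the study's exact pipeline).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    # RMSE and R^2 as defined in the equations above.
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    return rmse, r2_score(y_true, y_pred)

def pcr_predict(X_train, y_train, X_test, n_components):
    # Principal Component Regression: project the features onto the
    # first n_components PCs, then fit ordinary least squares.
    pca = PCA(n_components=n_components).fit(X_train)
    reg = LinearRegression().fit(pca.transform(X_train), y_train)
    return reg.predict(pca.transform(X_test))
\end{verbatim}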
\begin{figure}[h!] \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_pos_r2.pdf} \caption{$R^2$ score on Position Data} \label{fig:sq_f1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_pos_rmse.pdf} \caption{RMSE on Position Data} \label{fig:sq_f2} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_vel_r2.pdf} \caption{$R^2$ score on Velocity Data} \label{fig:sq_f3} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_vel_rmse.pdf} \caption{RMSE on Velocity Data} \label{fig:sq_f4} \end{subfigure} \caption{PCR Results on SQ} \end{figure} \subsubsection*{SQ} From Figs. \ref{fig:sq_f1} and \ref{fig:sq_f2}, we can see that, using position data for feature extraction, the model started to overfit beyond 260 principal components. Similarly, from Figs. \ref{fig:sq_f3} and \ref{fig:sq_f4}, when we use velocity data for feature extraction, the model started to overfit beyond 170 principal components. \clearpage \subsection*{Bayesian Regression} The model for the Bayesian Regression, with the response sampled from a normal distribution, is \begin{equation} y \sim N(\beta^Tx, \sigma^2I) \end{equation} where $\beta$ is the weight vector, $x$ is the feature vector and $\sigma$ is the standard deviation. Here, $\beta$ and $\sigma$ are the model parameters. The goal is not to find the single best value of the model parameters but to determine their posterior distribution. The posterior probability of the model parameters is \begin{equation} p(\beta|D) \propto p(D|\beta)p(\beta) \end{equation} with the Gaussian prior \begin{equation} \beta \sim N(0, \sigma_{\beta}^2I_{d}) \end{equation} where $p(\beta)$ is the initial probability distribution, also known as the prior distribution, and $p(D|\beta)$ is the likelihood function. Using this approach, we attempted two tasks: (1) EQ and SQ prediction and (2) personality prediction. The model did not overfit on the dataset, as the $R^2$ score kept increasing with the number of principal components. The $R^2$ scores for the training and testing sets are 0.92 and 0.86 when all the features are considered. We also repeated this experiment by incrementally varying the number of principal components to test the model's robustness. \subsubsection*{EQ} From Figs. \ref{fig:br_f1} and \ref{fig:br_f2}, we can see that, using position data, the maximum $R^2$ score achieved is 0.76 and the minimum RMSE is 2.73. From Figs. \ref{fig:br_f3} and \ref{fig:br_f4}, we can see that, using velocity data, the maximum $R^2$ score is 0.42 and the minimum RMSE is 4.35. The RMSE and $R^2$ scores eventually saturate, but still improve marginally as the number of principal components increases. \subsubsection*{SQ} From Figs. \ref{fig:br_sq_f1} and \ref{fig:br_sq_f2}, we can see that, using position data, the maximum $R^2$ score is 0.82 and the minimum RMSE is 2.18. From Figs. \ref{fig:br_sq_f3} and \ref{fig:br_sq_f4}, using velocity data, the maximum $R^2$ score is 0.44 and the minimum RMSE is 3.82.
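The Bayesian regression step itself can be sketched as follows. We use scikit-learn's \texttt{BayesianRidge} as an illustrative stand-in with synthetic placeholder data, so the exact implementation and hyper-parameters may differ from those used in our experiments.
\begin{verbatim}
# Minimal sketch of the Bayesian linear regression step
# (BayesianRidge as an illustrative stand-in; the data here
# are synthetic placeholders, not the study's features).
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))        # placeholder feature matrix
y = X @ rng.normal(size=20) + rng.normal(scale=0.1, size=100)

model = BayesianRidge()               # Gaussian prior on the weights
model.fit(X[:80], y[:80])             # infers posterior over beta, sigma
y_pred, y_std = model.predict(X[80:], return_std=True)
\end{verbatim}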
In the paper, we reported the RMSE and $R^2$ scores of Bayesian Regression without taking any principal components. In EQ, using position data, the RMSE and $R^2$ are 2.72 and 0.77; using velocity data, the RMSE and $R^2$ are 4.43 and 0.42. In SQ, using position data, the RMSE and $R^2$ are 2.16 and 0.86; using velocity data, the RMSE and $R^2$ are 3.83 and 0.46. Since these results were close to, but better than, those obtained after performing PCA, we present all Bayesian Regression results in our paper without performing PCA. \begin{figure}[h!] \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_br_pos_r2.pdf} \caption{$R^2$ score on Position Data} \label{fig:br_f1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_br_pos_rmse.pdf} \caption{RMSE on Position Data} \label{fig:br_f2} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_br_vel_r2.pdf} \caption{$R^2$ score on Velocity Data} \label{fig:br_f3} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/EQ_br_vel_rmse.pdf} \caption{RMSE on Velocity Data} \label{fig:br_f4} \end{subfigure} \caption{Bayesian Regression Results on EQ} \end{figure} \begin{figure}[h!] \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_br_pos_r2.pdf} \caption{$R^2$ score on Position Data} \label{fig:br_sq_f1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_br_pos_rmse.pdf} \caption{RMSE on Position Data} \label{fig:br_sq_f2} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_br_vel_r2.pdf} \caption{$R^2$ score on Velocity Data} \label{fig:br_sq_f3} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{fig_supplementary/SQ_br_vel_rmse.pdf} \caption{RMSE on Velocity Data} \label{fig:br_sq_f4} \end{subfigure} \caption{Bayesian Regression Results on SQ} \end{figure} \subsection*{Correlation between BFI Personality Traits} \begin{table}[h!] \setlength{\tabcolsep}{12pt} \begin{tabular}{|c|cccc|} \hline & O & C & E & A \\ \hline C & -0.093 & - & - & - \\ E & -0.003 & 0.128 & - & - \\ A & 0.021 & \cellcolor[HTML]{C0C0C0}0.341 & \cellcolor[HTML]{C0C0C0}0.358 & - \\ N & 0.217 & \cellcolor[HTML]{EFEFEF}-0.289 & -0.225 & \cellcolor[HTML]{EFEFEF}-0.292 \\ \hline \end{tabular} \begin{tabular}{ >{\columncolor[HTML]{EFEFEF}}l >{\columncolor[HTML]{C0C0C0}}l } p\textless{}0.05 & p\textless{}0.01 \end{tabular} \caption{Results of the Spearman Correlation between the BFI personality traits.} \label{table:ocean_corr1} \end{table} \end{document}
\section{Introduction} Triangles are the basic substructure of networks and play critical roles in network analysis. Due to the importance of triangles, the triangle counting problem (TC), which counts the number of triangles in a given graph, is essential for analyzing networks and is generally considered the first fundamental step in calculating metrics such as the clustering coefficient and the transitivity ratio, as well as in other tasks such as community discovery, link prediction, and spam filtering \cite{tcReview}. The TC problem is not computationally hard, but it is memory-bandwidth intensive and thus time-consuming. As a result, researchers from both academia and industry have proposed many TC acceleration methods, ranging from sequential to parallel, single-machine to distributed, and exact to approximate. From the computing hardware perspective, these acceleration strategies are generally executed on CPUs, GPUs or FPGAs, and are based on the Von Neumann architecture \cite{tcReview,XiongTCCPGPU,XiongTCFPGA}. However, because most graph processing algorithms have low computation-to-memory ratios and highly random data access patterns, there are frequent data transfers between the computational unit and the memory components, which consume a large amount of time and energy. The in-memory computing paradigm performs computation where the data reside. It can save most of the off-chip data communication energy and latency by exploiting the large inherent internal memory bandwidth and inherent parallelism \cite{MutluDRAM,DBLP:conf/dac/LiXZZLX16}. As a result, in-memory computing has emerged as a viable way to carry out computationally expensive and memory-intensive tasks \cite{LiBingOverview,FanAligns}. This becomes even more promising when integrated with the emerging non-volatile STT-MRAM memory technologies. This integration, called Processing-In-MRAM (PIM), offers fast write speed, low write energy, and high write endurance, among many other benefits \cite{wang2018current,DBLP:journals/tvlsi/JainRRR18}. In the literature, there have been some explorations of in-memory graph algorithm accelerations \cite{ChenHPCA,FanGraphs,WangYuASPDAC,QianMicro}; however, existing TC algorithms, including the intersection-based and the matrix-multiplication-based ones, cannot be directly implemented in memory. For large sparse graphs, a highly efficient PIM architecture, efficient graph data compression and data mapping mechanisms are all critical for the efficiency of PIM accelerations. Although there are some compression methods for sparse graphs, such as compressed sparse column (CSC), compressed sparse row (CSR), and coordinate list (COO) \cite{ChenHPCA}, these representations cannot be directly applied to in-memory computation either. In this paper, we propose and design the first in-memory TC accelerator that overcomes the above barriers. Our main contributions can be summarized as follows: \begin{itemize} \item We propose a novel TC method that uses massive bitwise operations to enable in-memory implementations. \item We propose strategies for data reuse and exchange, and data slicing for efficient graph data compression and mapping onto in-memory computation architectures. \item We build a TC accelerator with the sparsity-aware processing-in-MRAM architecture. A device-to-architecture co-simulation demonstrates highly encouraging results. \end{itemize} The rest of the paper is organized as follows: Section~\ref{sec:preliminary} provides some preliminary knowledge of TC and in-memory computing.
Section~\ref{sec:tc} introduces the proposed TC method with bitwise operations, and Section~\ref{sec:pimArch} elaborates a sparsity-aware processing-in-MRAM architecture which enables highly efficient PIM accelerations. Section~\ref{sec:exper} demonstrates the experimental results and Section~\ref{sec:conclusion} concludes. \section{Preliminary}\label{sec:preliminary} \subsection{Triangle Counting} Given a graph, the triangle counting (TC) problem seeks to determine the number of triangles. The sequential algorithms for TC can be classified into two groups. In the {matrix multiplication based algorithms}, a triangle is a closed path of length three, namely a path over three vertices that begins and ends at the same vertex. If $A$ is the adjacency matrix of graph $G$, $A^3[i][i]$ represents the number of paths of length three beginning and ending with vertex $i$. Given that a triangle has three vertices and will be counted once for each of them, and that the graph is undirected (that is, a triangle $i-p-q-i$ will also be counted as $i-q-p-i$), the number of triangles in $G$ can be obtained as $trace(A^3)/6$, where $trace$ is the sum of the elements on the main diagonal of a matrix. In the {set intersection based algorithms}, the algorithm iterates over each edge and finds the common elements of the adjacency lists of the head and tail nodes. Many CPU-, GPU- and FPGA-based optimization techniques have been proposed \cite{tcReview,XiongTCCPGPU,XiongTCFPGA}. These works show promising results for accelerating TC; however, they all suffer from the performance and energy bottlenecks brought by the significant amount of data transfers in TC. \subsection{In-Memory Computing with STT-MRAM} STT-MRAM is a promising candidate for the next-generation main memory because of its properties such as near-zero leakage, non-volatility, high endurance, and compatibility with the CMOS manufacturing process \cite{wang2018current}. In particular, prototype STT-MRAM chip demonstrations and commercial MRAM products have been made available by companies such as Everspin and TSMC. STT-MRAM stores data as magnetic resistance states instead of using conventional charge-based storage and access. This enables MRAM to provide inherent computing capabilities for bitwise logic with minute changes to the peripheral circuitry \cite{DBLP:journals/tvlsi/JainRRR18}\cite{yang2018exploiting}. As the left part of Fig.~\ref{fig:cim} shows, a typical STT-MRAM bit-cell consists of an access transistor and a Magnetic Tunnel Junction (MTJ), which is controlled by the bit-line (BL), word-line (WL) and source-line (SL). The relative magnetic orientations of the pinned ferromagnetic layer (PL) and the free ferromagnetic layer (FL) can be stable in parallel (\texttt{P} state) or anti-parallel (\texttt{AP} state), corresponding to low resistance ($R_{\rm P}$) and high resistance ($R_{\rm AP}$, $R_{\rm AP}>R_{\rm P}$), respectively. The \texttt{READ} operation is done by enabling the WL signal, applying a voltage $V_{\rm read}$ across BL and SL, and sensing the current ($I_{\rm P}$ or $I_{\rm AP}$) that flows through the MTJ. By comparing the sense current with a reference current ($I_{\rm ref}$), the data stored in the MTJ cell (logic `0' or logic `1') can be read out. The \texttt{WRITE} operation can be performed by enabling WL, then applying an appropriate voltage ($V_{\rm write}$) across BL and SL to pass a current that is greater than the critical MTJ switching current.
To perform a bitwise logic operation, as demonstrated in the right part of Fig.~\ref{fig:cim}, by simultaneously enabling $WL_i$ and $WL_j$, then applying $V_{\rm read}$ across $BL_k$ and $SL_k$ ($k \in [0,n-1]$), the current that feeds into the $k$-th sense amplifier (SA) is the summation of the currents flowing through $MTJ_{i,k}$ and $MTJ_{j,k}$, namely $I_{i,k}+I_{j,k}$. With different reference sensing currents, various logic functions of the enabled word lines can be implemented. \begin{figure}[t] \centering \includegraphics[width = 0.9\linewidth]{cim.pdf} \caption{Typical STT-MRAM bit-cell and paradigm of computing in STT-MRAM array.} \label{fig:cim} \end{figure} \section{Triangle Counting with Bitwise Operations}\label{sec:tc} In this section, we seek to perform TC with massive bitwise operations, which is the enabling technology for the in-memory TC accelerator. Let $A$ be the adjacency matrix representation of an undirected graph $G(V,E)$, where $A[i][j]\in \{0,1\}$ indicates whether there is an edge between vertices $i$ and $j$. If we compute $A^2=A*A$, then the value of $A^2[i][j]$ represents the number of distinct paths of length two between vertices $i$ and $j$. If there is an edge between vertex $i$ and vertex $j$, and $i$ can also reach $j$ through a path of length two with intermediate vertex $k$, then vertices $i$, $j$, and $k$ form a triangle. As a result, the number of triangles in $G$ is equal to the number of non-zero elements ($nnz$) in $A \cap A^2$ (the symbol `$\cap$' denotes element-wise multiplication here), namely \begin{equation}\label{equ:eq1} TC(G)=nnz(A \cap A^2) \end{equation} Since $A[i][j]$ is either zero or one, we have \begin{equation}\label{equ:eq2} (A\cap A^2)[i][j]= \begin{cases} 0, & \text{if}\ A[i][j]=0;\\ A^2[i][j], & \text{if}\ A[i][j]=1. \end{cases} \end{equation} According to Equation~(\ref{equ:eq2}), \begin{equation}\label{equ:eq3} \begin{split} nnz(A \cap A^2)&=\sum\nolimits_{A[i][j]=1}A^2[i][j]\\ \end{split} \end{equation} Because each element in $A$ is either zero or one, the bitwise Boolean \texttt{AND} result is equal to that of the mathematical multiplication, thus \begin{equation}\label{equ:eq4} \begin{split} A^2[i][j]& =\sum_{k=0}^{n-1} A[i][k]*A[k][j]=\sum_{k=0}^{n-1} {AND}(A[i][k],A[k][j])\\ & ={BitCount}({AND}(A[i][*],A[*][j]^T)) \end{split} \end{equation} in which \texttt{BitCount} returns the number of `1's in a vector consisting of `0's and `1's; for example, $BitCount(0110)=2$. Combining Equations~(\ref{equ:eq1}), (\ref{equ:eq3}) and (\ref{equ:eq4}), we have \begin{equation} TC(G)=\sum\nolimits_{A[i][j]=1}{BitCount}({AND}(A[i][*],A[*][j]^T)) \end{equation} Therefore, TC can be completed with only \texttt{AND} and \texttt{BitCount} operations (massive in number for large graphs). Specifically, for each non-zero element $A[i][j]=1$, the $i$-th row ($R_i=A[i][*]$) and the $j$-th column ($C_j=A[*][j]^T$) are combined with an \texttt{AND} operation, and the \texttt{AND} result is sent to a bit counter module for accumulation. Once all the non-zero elements are processed as above, the accumulated \texttt{BitCount} value is exactly the number of triangles in the graph. \begin{figure}[htb] \centering \includegraphics[width = 0.85\linewidth]{TCProcedure1.pdf} \caption{Demonstrations of triangle counting with \texttt{AND} and \texttt{BitCount} bit-wise operations.} \label{fig:TCProcedure} \end{figure} Fig.~\ref{fig:TCProcedure} demonstrates an illustrative example of the proposed TC method.
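The same procedure can be emulated in software for verification. The following Python sketch is a functionally equivalent emulation of the \texttt{AND}/\texttt{BitCount} flow (in the accelerator, these operations are of course executed inside the MRAM array); the adjacency matrix reproduces the example graph of Fig.~\ref{fig:TCProcedure}.
\begin{verbatim}
# Minimal software emulation of the bitwise TC method
# (illustrative; the real computation is done in-memory).
import numpy as np

def tc_bitwise(A):
    """A: 0/1 adjacency matrix (here upper-triangular, as in Fig. 2)."""
    n = len(A)
    count = 0
    for i in range(n):
        for j in range(n):
            if A[i][j] == 1:
                # AND the i-th row with the j-th column, then BitCount.
                count += int(np.sum(A[i, :] & A[:, j]))
    return count

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(tc_bitwise(A))             # -> 2 (triangles 0-1-2 and 1-2-3)
print(np.trace(np.linalg.matrix_power(A + A.T, 3)) // 6)  # cross-check
\end{verbatim}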
As the left part of the figure shows, the graph has four vertices, five edges and two triangles ($0-1-2-0$ and $1-2-3-1$), and the adjacency matrix is given. The non-zero elements in $A$ are $A[0][1]$, $A[0][2]$, $A[1][2]$, $A[1][3]$, and $A[2][3]$. For $A[0][1]$, row $R_0$=`0110' and column $C_1$=`1000' are combined with the \texttt{AND} operation, then the \texttt{AND} result `0000' is sent to the bit counter and yields a result of zero. Similar operations are performed on the other four non-zero elements. After the execution of the last non-zero element $A[2][3]$ is finished, the accumulated \texttt{BitCount} result is two; thus the graph has two triangles. The proposed TC method has the following advantages. First, it avoids time-consuming multiplications. Since the operands are either zero or one, the multiplication can be implemented with \texttt{AND} logic. Second, the proposed method does not need to store the intermediate results that are larger than one (such as the elements in $A^2$), which are cumbersome to store and calculate. Third, it does not need complex control logic. Given the above three advantages, the proposed TC method is suitable for in-memory implementations. \section{Sparsity-Aware Processing-In-MRAM Architecture}\label{sec:pimArch} To alleviate the memory bottleneck caused by frequent data transfers in traditional TC algorithms, we implement an in-memory TC accelerator based on the novel TC method presented in the previous section. Next, we will discuss several dataflow mapping techniques to minimize space requirements, data transfers and computation in order to accelerate the in-memory TC computation. \subsection{Data Reuse and Exchange} Recall that the proposed TC method iterates over each non-zero element in the adjacency matrix, and loads the corresponding rows and columns into computational memory for the \texttt{AND} operation, followed by a \texttt{BitCount} process. Given a fixed size of the computational memory array, it is important to reduce unnecessary space usage and memory operations. We observe that for the \texttt{AND} computation, the non-zero elements in a row reuse the same row, and the non-zero elements in a column reuse the same column. The proposed data reuse mechanism is based on this observation. Assuming that the non-zero elements are iterated row by row, the currently processed row only needs to be loaded once, while the corresponding columns are loaded in sequence. Once all the non-zero elements in a row have been processed, this row will no longer be used in future computation; thus, we can overwrite it with the next row to be processed. However, the columns might be used again by the non-zero elements of other rows. Therefore, before loading a certain column into memory for computation, we first check whether this column has already been loaded; if not, the column is loaded into a spare memory space. In case the memory is full, we need to select one column to be replaced with the current column. We choose the least recently used (LRU) column for replacement; more optimized replacement strategies are possible. As demonstrated in Fig.~\ref{fig:TCProcedure}, in step $1$ and step $2$, the two non-zero elements $A[0][1]$ and $A[0][2]$ of row $R_0$ are processed respectively, and the corresponding columns $C_1$ and $C_2$ are loaded into memory. Next, while processing $A[1][2]$ and $A[1][3]$, $R_1$ will overwrite $R_0$ and reuse the existing $C_2$ in step $3$, and load $C_3$ in step $4$.
In step $5$, to process $A[2][3]$, $R_1$ will be overwritten by $R_2$, and $C_3$ is reused. Overwriting the rows and reusing the columns can effectively reduce unnecessary space utilization and memory \texttt{WRITE} operations. \subsection{Data Slicing} To utilize the sparsity of the graph to reduce the memory requirement and unnecessary computation, we propose a data slicing strategy for graph data compression. Assume $R_i$ is the $i$-th row, and $C_j$ is the $j$-th column of the adjacency matrix $A$ of graph $G(V,E)$. The slice size is $|S|$ (each slice contains $|S|$ bits), so each row and column has $\lceil\frac{|V|}{|S|}\rceil$ slices. The $k$-th slice in $R_i$, which is represented as $R_i S_k$, is the set $\{A[i][k*|S|],\cdots,A[i][(k+1)*|S|-1]\}$. We define that slice $R_i S_k$ is \textbf{\textit{valid}} if and only if $\exists A[i][t] \in R_i S_k,\ A[i][t]=1,\ t\in [k*|S|,(k+1)*|S|-1]$. Recall that in our proposed TC method, for each non-zero element in the adjacency matrix, we compute the \texttt{AND} result of the corresponding row and column. With row and column slicing, we perform the \texttt{AND} operation in units of slices. For each $A[i][j]=1$, we only process the valid slice pairs, namely only when both the row slice $R_i S_k$ and the column slice $C_j S_k$ are valid do we load the valid slice pair $(R_iS_k,C_jS_k)$ into the computational memory array and perform the \texttt{AND} operation. \begin{figure}[htbp] \centering \includegraphics[width = 0.82\linewidth]{rs1.pdf} \caption{Sparsity-aware data slicing and mapping.} \label{fig:rowslicing} \end{figure} Fig.~\ref{fig:rowslicing} demonstrates an example: after row and column slicing, only the slice pairs $(R_iS_3,C_jS_3)$ and $(R_iS_5,C_jS_5)$ are valid; therefore, we only load these slices for the \texttt{AND} computation. This scheme can reduce the needed computation significantly, especially for large sparse graphs. \textit{Memory requirement of the compressed graph data.} With the proposed row and column slicing strategy, we need to store the indices of the valid slices and the detailed data of these slices. Assuming that the number of valid slices is $N_{VS}$, the slice size is $|S|$, and we use an integer (four Bytes) to store each valid slice index, the space needed for all valid slice indices is $IndexLength = N_{VS} \times 4$ Bytes. The space needed to store the data of the valid slices is $DataLength = N_{VS} \times |S| / 8$ Bytes. Therefore, the overall space needed for graph $G$ is $N_{VS} \times (|S| / 8 + 4)$ Bytes, which is determined by the sparsity of $G$ and the slice size. In this paper, we set $|S|=64$ in the experimental result section. Given that most graphs are highly sparse, the space needed to store the graph can be trivial.
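To make the compression format concrete, the following Python sketch shows how one row (or column) bit vector could be sliced and stored as (index, data) pairs, and how a valid slice pair is then processed; the function names are ours, and the snippet illustrates the data format rather than the accelerator's actual data path.
\begin{verbatim}
# Minimal sketch of the slice-based compression: keep only valid
# |S|-bit slices of a 0/1 bit vector as (index, mask) pairs.
S = 64  # slice size in bits, as used in the experiments

def compress(bits):
    """bits: list of 0/1 values for one row or column."""
    valid = []
    for k in range(0, len(bits), S):
        chunk = bits[k:k + S]
        if any(chunk):                   # the slice is valid
            mask = 0
            for b in chunk:
                mask = (mask << 1) | b   # pack the slice into an int
            valid.append((k // S, mask))
    return valid

def and_bitcount(row_slices, col_slices):
    """AND matching valid slice pairs and accumulate the BitCount."""
    col = dict(col_slices)
    return sum(bin(m & col[k]).count("1")
               for k, m in row_slices if k in col)
\end{verbatim}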
\textbf{Moreover, the proposed format of compressed graph data is amenable to direct mapping onto the computational memory arrays to perform in-memory logic computation.} \begin{algorithm}[t] \footnotesize \KwIn{Graph $G(V,E)$.} \KwOut{The number of triangles in $G$.} $TC\_G$ = 0\; Represent $G$ with its adjacency matrix $A$\; \For {each edge $e\in E$ with $A[i][j]=1$}{ Partition $R_i$ into slices\; Partition $C_j$ into slices\; \For {each valid slice pair ($R_iS_k$,$C_jS_k$)}{ $TC\_G$ += \textbf{COMPUTE} ($R_iS_k$,$C_jS_k$)\; } } \textbf{return} $TC\_G$ as the number of triangles in $G$.\\ ----------------------------------------\\ \textbf{COMPUTE} ($Slice1$, $Slice2$) {\\ Load $Slice1$ into memory\; \If {$Slice2$ has not been loaded}{ \eIf {there is not enough space}{ Replace the least recently used slice with $Slice2$\; }{Load $Slice2$ into memory\;} } \textbf{return} $BitCount(AND(Slice1,Slice2))$. } \caption{TCIM: Triangle Counting with Processing-In-MRAM Architecture.} \label{alg:dataMapping} \end{algorithm} \subsection{Processing-In-MRAM Architecture} \begin{figure*}[t] \centering \includegraphics[width = 0.75\linewidth]{OverallArch.pdf} \caption{Overall processing-in-MRAM architecture.} \label{fig:overallArch} \end{figure*} Fig.~\ref{fig:overallArch} demonstrates the overall processing-in-MRAM architecture. The graph data are sliced and compressed, and represented by the valid slice indices and the corresponding slice data. According to the valid slice indices in the data buffer, we load the corresponding valid slice pairs into the computational STT-MRAM array for bitwise computation. The storage status of the STT-MRAM array (such as which slices have been loaded) is also recorded in the data buffer and utilized for data reuse and exchange. As for the computational memory array organization, each chip consists of multiple Banks and works as a computational array. Each Bank is comprised of multiple computational memory sub-arrays, which are connected to a global row decoder and a shared global row buffer. The read circuits and write drivers of the memory array are modified to process bitwise logic functions. Specifically, the operation data are all stored in different rows of the memory arrays. The rows associated with the operation data are activated simultaneously for computing. Sense amplifiers are enhanced with \texttt{AND} reference circuits to realize either \texttt{READ} or \texttt{AND} operations. By generating $R_\text{ref-AND}\in (R_\text{P-P},R_\text{P-AP})$, the output of the sense amplifier is the \texttt{AND} result of the data stored in the enabled WLs. \subsection{Pseudo-code for In-Memory TC Acceleration} Algorithm~\ref{alg:dataMapping} demonstrates the pseudo-code for TC acceleration with the proposed processing-in-MRAM architecture. It iterates over each edge of the graph, partitions the corresponding rows and columns into slices, then loads the valid slice pairs into computational memory for the \texttt{AND} and \texttt{BitCount} computation. If there is not enough memory space, it adopts the LRU strategy and replaces the least recently used slice. \section{Experimental Results}\label{sec:exper} \subsection{Experimental Setup} To validate the effectiveness of the proposed approaches, comprehensive device-to-architecture evaluations along with two in-house simulators are developed. At the device level, we jointly use the Brinkman model and the Landau-Lifshitz-Gilbert (LLG) equation to characterize the MTJ \cite{yang2015radiation}.
The key parameters for the MTJ simulation are demonstrated in Table~\ref{tab:parameter}. For the circuit-level simulation, we design a Verilog-A model for the 1T1R STT-MRAM device, and characterize the circuit with the $45$nm FreePDK CMOS library. We design a bit counter module based on Verilog HDL to obtain the number of non-zero elements in a vector. Specifically, we split the vector and feed each $8$-bit sub-vector into an $8$-$256$ look-up table to get its number of non-zero elements, then sum up the non-zero counts of all sub-vectors. We synthesize the module with Synopsys tools and conduct post-synthesis simulations based on the $45$nm FreePDK. After obtaining the device-level simulation results, we integrate the parameters into the open-source NVSim simulator \cite{NVSim} and obtain the memory array performance. In addition, we develop a simulator in Java for the processing-in-MRAM architecture, which simulates the proposed function mapping, data slicing and data mapping strategies. Finally, a behavioural-level simulator is developed in Java, which takes the architectural-level results and the memory array performance to calculate the latency and energy spent by the TC in-memory accelerator. To provide a solid comparison with other accelerators, we select real-world graphs from the SNAP dataset \cite{snapnets} (see TABLE~\ref{tab:graphpara}), and run the comparative baseline intersection-based algorithm on an Inspur blade system with the Spark GraphX framework on a single-core Intel E5430 CPU. Our TC in-memory acceleration algorithm also runs on a single-core CPU, and the STT-MRAM computational array is set to $16$ MB. \begin{table}[htbp] \setlength{\tabcolsep}{14pt} \footnotesize \caption{Key parameters for MTJ simulation.} \label{tab:parameter} \centering \begin{tabular}{l|l} \specialrule{0.8pt}{0pt}{0pt} Parameter & Value \\ \hline MTJ Surface Length & $40$ $nm$ \\ MTJ Surface Width & $40$ $nm$ \\ Spin Hall Angle & $0.3$ \\ Resistance-Area Product of MTJ & $10^{-12}$ $\Omega \cdot m^2$ \\ Oxide Barrier Thickness & $0.82$ $nm$ \\ TMR & $100\%$ \\ Saturation Field & $10^6$ $A/m$ \\ Gilbert Damping Constant & $0.03$ \\ Perpendicular Magnetic Anisotropy & $4.5 \times 10^5$ $A/m$ \\ Temperature & $300$ $K$ \\ \specialrule{0.8pt}{0pt}{0pt} \end{tabular} \end{table} \begin{table}[t] \setlength{\tabcolsep}{10pt} \footnotesize \caption{Selected graph dataset.} \label{tab:graphpara} \centering \begin{tabular}{l|rrr} \specialrule{0.8pt}{0pt}{0pt} Dataset & \# Vertices & \# Edges & \# Triangles \\ \hline ego-facebook & 4039 & 88234 & 1612010 \\ email-enron & 36692 & 183831 & 727044 \\ com-Amazon & 334863 & 925872 & 667129 \\ com-DBLP & 317080 & 1049866 & 2224385 \\ com-Youtube & 1134890 & 2987624 & 3056386 \\ roadNet-PA & 1088092 & 1541898 & 67150 \\ roadNet-TX & 1379917 & 1921660 & 82869 \\ roadNet-CA & 1965206 & 2766607 & 120676 \\ com-LiveJournal & 3997962 & 34681189 & 177820130 \\ \specialrule{0.8pt}{0pt}{0pt} \end{tabular} \end{table} \subsection{Benefits of Data Reuse and Exchange} TABLE~\ref{tab:sliceDataSize} shows the memory space required for the bitwise computation. For example, the largest graph {\it com-lj} needs $16.8$ MB to avoid incurring any data exchange. On average, only $18$ KB per $1000$ vertices is needed for the in-memory computation.
\begin{table}[t] \setlength{\tabcolsep}{6pt} \footnotesize \caption{Valid slice data size (MB).} \label{tab:sliceDataSize} \centering \begin{tabular}{lr|lr|lr} \specialrule{0.8pt}{0pt}{0pt} ego-facebook & 0.182 & com-DBLP & 7.6 & roadNet-TX & 12.38 \\ email-enron & 1.02 & com-Youtube & \bf{16.8} & roadNet-CA & \bf{16.78} \\ com-Amazon & 7.4 & roadNet-PA & 9.96 & com-lj & \bf{16.8} \\ \specialrule{0.8pt}{0pt}{0pt} \end{tabular} \end{table} When the STT-MRAM computational memory size is smaller than the sizes listed in TABLE~\ref{tab:sliceDataSize}, data exchange will happen. For example, with $16$ MB, the three largest graphs have to perform data exchange, as shown in Fig.~\ref{fig:datareuse}. In this figure, we also list the percentages of data hits (average $72\%$) and data misses (average $28\%$). Recall that the first time a data slice is loaded, it is always a miss, and a data hit implies that the slice data has already been loaded. This shows that the proposed data reuse strategy saves on average $72\%$ of the memory \texttt{WRITE} operations. \begin{figure}[htb] \centering \includegraphics[width = 0.9\linewidth]{dataReuse.pdf} \caption{Percentages of data hit/miss/exchange.} \label{fig:datareuse} \end{figure} \subsection{Benefits of Data Slicing} As shown in TABLE~\ref{tab:validSlice}, the average percentage of valid slices in the five largest graphs is only $0.01\%$. Therefore, the proposed data slicing strategy can significantly reduce the needed computation, by $99.99\%$. \begin{table}[htbp] \setlength{\tabcolsep}{4pt} \caption{Percentage of valid slices.} \label{tab:validSlice} \centering \begin{tabular}{lr||lr||lr} \specialrule{0.8pt}{0pt}{0pt} ego-facebook & 7.017\% & com-DBLP & 0.036\% & roadNet-TX & 0.010\% \\ email-enron & 1.607\% & com-Youtube & 0.013\% & roadNet-CA & 0.007\% \\ com-Amazon & 0.014\% & roadNet-PA & 0.013\% & com-lj & 0.006\% \\ \specialrule{0.8pt}{0pt}{0pt} \end{tabular} \end{table} \subsection{Performance and Energy Results} TABLE~\ref{tab:graphperf} compares the performance of our proposed in-memory TC accelerator against a CPU baseline implementation, and the existing GPU and FPGA accelerators. One can see a dramatic reduction of the execution time in the last two columns compared with the previous three columns. Indeed, without PIM, we achieved an average $53.7\times$ speedup against the baseline CPU implementation because of data slicing, reuse, and exchange. With PIM, another $25.5\times$ acceleration is obtained. Compared with the GPU and FPGA accelerators, the improvement is $9\times$ and $23.4\times$, respectively. It is important to mention that we achieve this with a single-core CPU and a $16$ MB STT-MRAM computational array.
\begin{table}[htbp] \setlength{\tabcolsep}{5pt} \caption{Runtime (in seconds) comparison among our proposed methods, CPU, GPU and FPGA implementations.} \label{tab:graphperf} \centering \begin{tabular}{l|r|r|r|r|r} \specialrule{0.8pt}{0pt}{0pt} \multirow{2}{*}{Dataset} & \multirow{2}{*}{CPU} & \multirow{2}{*}{GPU \cite{XiongTCFPGA}} & \multirow{2}{*}{FPGA \cite{XiongTCFPGA}} & \multicolumn{2}{c}{This Work}\\ \cline{5-6} & & & & w/o PIM & TCIM \\ \hline ego-facebook & 5.399 & 0.15 & 0.093 & 0.169 & 0.005 \\ email-enron & 9.545 & 0.146 & 0.22 & 0.8 & 0.021 \\ com-Amazon & 20.344 & N/A & N/A & 0.295 & 0.011 \\ com-DBLP & 20.803 & N/A & N/A & 0.413 & 0.027 \\ com-Youtube & 61.309 & N/A & N/A & 2.442 & 0.098 \\ roadNet-PA & 77.320 & 0.169 & 1.291 & 0.704 & 0.043 \\ roadNet-TX & 94.379 & 0.173 & 1.586 & 0.789 & 0.053 \\ roadNet-CA & 146.858 & 0.18 & 2.342 & 3.561 & 0.081 \\ com-LiveJournal & 820.616 & N/A & N/A & 33.034 & 2.006 \\ \specialrule{0.8pt}{0pt}{0pt} \end{tabular} \end{table} As for the energy savings, as shown in Fig.~\ref{fig:energy}, our approach consumes $20.6\times$ less energy than the energy-efficient FPGA implementation \cite{XiongTCFPGA}, benefiting from the non-volatile property of STT-MRAM and its in-situ computation capability. \begin{figure}[htbp] \centering \includegraphics[width = 1.0\linewidth]{energy.pdf} \caption{Normalized results of energy consumption for TCIM with respect to FPGA.} \label{fig:energy} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper, we propose a new triangle counting (TC) method, which uses massive bitwise logic computation, making it suitable for in-memory implementations. We further propose a sparsity-aware processing-in-MRAM architecture for efficient in-memory TC acceleration: with data slicing, the computation can be reduced by $99.99\%$; meanwhile, the compressed graph data can be directly mapped onto the STT-MRAM computational memory array for bitwise operations, and the proposed data reuse and exchange strategy reduces the memory \texttt{WRITE} operations by $72\%$. We use device-to-architecture co-simulation to demonstrate that the proposed TC in-memory accelerator outperforms the state-of-the-art GPU and FPGA accelerations by $9\times$ and $23.4\times$, respectively, and achieves a $20.6\times$ energy efficiency improvement over the FPGA accelerator. Besides, the proposed graph data compression and data mapping strategies are not restricted to STT-MRAM or the TC problem; they can also be applied to other in-memory accelerators based on other non-volatile memories. \bibliographystyle{unsrt}
\section{Introduction} Relativistic jets produce a vast range of flaring activity that can last from a few minutes, as in the case of gamma-ray bursts, up to months or years, as in the case of active galactic nuclei. Jets and their flares are most easily studied in blazars, where the jet of the active galaxy points towards Earth \citep{up95} and persists over long time scales. Blazars, like all relativistic jets, exhibit flares that differ substantially in duration and evolution \citep[e.g.,][and references therein]{z18}. A common scenario is a change in the plasma flow across a shock within the jet \citep{mg85}. However, the details of the variation of the plasma parameters that fit the lightcurves have to be set arbitrarily. Most simply, a variation in the plasma density can account for a flare, with the modulation of the injection shaping the specific lightcurve profile. Natural sources of density fluctuations are either a variable particle injection process at the base of the jet, possibly coupled to variations in the accretion disk, or the pick-up of material while the jet moves through the host galaxy. For the latter process, material can be supplied by the interstellar gas, stellar astrospheres, supernova remnants, etc. \cite{zea17,zea19} used the injection of pick-up material to explain the bright and long-lasting flare of the blazar CTA~102. In their model, a gas cloud approached the jet and was subsequently ablated. In their picture, the cloud slowly intruded into the jet, implying a smoothly varying injection of particles. The number of particles injected into the jet flow depended on the geometry and the density structure of the cloud, with few particles being injected at the beginning and at the end of the process, and most particles being injected when the center of the cloud intruded into the jet. The successful reproduction of the CTA~102 flare is reassuring. However, in order to understand the full potential of the model, it is necessary to study the influence of various parameters from both cloud and jet on the lightcurve. Therefore, in this paper we revisit the cloud-ablation model from a theoretical point of view. We will first explore the requirements of cloud and jet for the ablation to proceed, followed by deriving the time evolution of the particle injection. This is described in Sec.~\ref{sec:ablation} along with a general discussion of the model. We then study the lightcurves of both theoretical (Sec.~\ref{sec:param}) and a few exemplary real (Sec.~\ref{sec:clouds}) clouds. The results are discussed in Sec.~\ref{sec:sumcon}. \section{The cloud ablation model} \label{sec:ablation} Clouds -- like those of the broad-line region (BLR), but also stars and their astrospheres -- penetrating the relativistic jet of an AGN have been explored in various applications. Generally, the time-dependent intrusion of the cloud into the jet results in a similar time-dependency of the particle injection into the jet flow. The destruction of the cloud is a consequence of the relativistic jet flow and the associated ram pressure. The time scale of the ablation process is governed by the speed of the shock that is formed in the cloud as it starts penetrating the jet, i.e. the time the shock needs to cross the cloud. For a rough estimate, one can assume that the shock speed is roughly $c/4$,\footnote{In the downstream frame, the speed of a strong shock is $v_s=u_1/4$. The upstream speed is $u_1\sim c$, i.e. the speed of the jet.} where $c$ is the speed of light.
The time for the shock to pass the cloud is $t_s=8R/c$, with the cloud radius $R$. If the intrusion time of the cloud into the jet $t_{in}=2R/v$, where $v$ is the cloud speed, is shorter than $t_s$, the cloud may penetrate deep into the jet before the shock has crossed the object. In this case, the cloud material will be shocked in one instance, resulting in a violent burst of particles and radiation \citep{abr10}. This provides the ingredients for fast flares \citep[e.g.,][]{bea12,bba12}. \begin{figure} \centering \includegraphics[width=0.40\textwidth]{Ablation2_200610.pdf} \caption{Sketch of the ablation process (not to scale). (1) The cloud approaches the jet and (2) is ablated slice-by-slice while entering the jet. (3) The ablated cloud material is mixed into the jet flow resulting in a specific density enhancement. (4) At a downstream shock, the particles are accelerated to non-thermal energies and radiate. } \label{fig:model} \end{figure} On the other hand, if $t_s<t_{in}$, the shock mostly interacts with the subvolume of the cloud that has already penetrated the jet. In turn, the ablation process is gradual \citep{zea17}, as depicted in Fig.~\ref{fig:model}. Simulations by \cite{pbb17} involving a jet-star interaction suggest that the ablation of the astrosphere already begins in the transition layer at the edge of the jet and proceeds while the star continues to penetrate. The ablated material is mixed into the jet flow, where it is accelerated to the jet's bulk speed. The acceleration to non-thermal energies may take place at a downstream shock, such as a recollimation shock \citep{mea08}. Such shocks are ubiquitous in blazar jets \citep{jea17}. Radiative processes \citep{tb19} may occur close to the shock region. At this point we ignore the possibility that shock waves are formed in the jet through the cloud intrusion itself \citep{br12,pbb17}. The potential variations in shock strength and size throughout the jet-cloud interaction cannot be easily quantified, and require in-depth (magneto-)hydrodynamic (MHD) simulations, which are beyond the scope of this paper. Following the gradual ablation and injection of particles into the jet flow, the subsequent acceleration of the particles and emission of radiation proceed in a similarly gradual manner. A resulting flare evolves smoothly and is probably symmetric in time,\footnote{Note that this remark concerns the long-term behavior of the flare. On shorter time-scales, substructures in the cloud or instabilities in the jet due to the ablation process may still trigger shorter spikes in the lightcurve.} as the change in the particle injection dominates the lightcurve, while other jet parameters -- such as shock radius -- probably remain fixed. In this paper, we focus on the second scenario, where a cloud enters the jet and is slowly ablated, leading to a long-lasting flux enhancement. In order to calculate the amount of ablated material at a given instant of time, the cloud's geometry and density structure are required. While the geometry has a significant influence on the ablated volume, we assume only a spherical geometry for ease of computation. Furthermore, we only consider leptonic radiation processes. While the hadronic components of the cloud are carried along in the jet, we assume that they remain energetically cold and do not participate in the radiation processes, \change{which is a common -- though debated -- assumption in blazar modeling \citep[e.g.,][]{b07,gea11,bea13,zb15,cea15,cea17,Hea19}}.
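The dichotomy between the two regimes can be summarized in a few lines. The following Python sketch simply compares the two time scales $t_s$ and $t_{in}$ for assumed cloud parameters; the numbers are illustrative choices, not fits.
\begin{verbatim}
# Minimal sketch: shock crossing time vs. intrusion time, which
# decides between an impulsive burst and gradual ablation.
c = 2.998e10                      # speed of light [cm/s]

def ablation_regime(R, v):
    """R: cloud radius [cm]; v: cloud speed [cm/s]."""
    t_s  = 8 * R / c              # shock crossing time (shock ~ c/4)
    t_in = 2 * R / v              # intrusion time of the cloud
    return "impulsive burst" if t_in < t_s else "gradual ablation"

# A slow, BLR-like cloud (values assumed for illustration):
print(ablation_regime(R=6.0e13, v=2.0e7))   # -> gradual ablation
\end{verbatim}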
Below, we describe the cloud's density structure, the jet condition to ablate the cloud, and the resulting injection function in turn. \subsection{Cloud density} An isothermal cloud is bound by its own gravitational pull. If we ignore any external forces that may shape it into different structures, the cloud is spherically symmetric with radial coordinate $r$. As discussed in \cite{zea17}, the cloud's density structure $n(r)$ is defined by the equation of hydrostatic equilibrium \begin{align} k_B T \frac{\td{n(r)}}{\td{r}} = -4\pi \frac{Gm_p^2 n(r)}{r^2} \int\limits_0^{r} \td{\bar{r}} \bar{r}^2 n(\bar{r}) \label{eq:hydro} \end{align} with the temperature $T$, Boltzmann's constant $k_B$, proton mass $m_p$ and gravitational constant $G$. We have assumed that the cloud predominantly consists of hydrogen. With some manipulations of Eq.~(\ref{eq:hydro}), one reaches the Lane-Emden equation, which does not provide an analytical solution in this case. However, from the asymptotic solution $n\propto r^{-2}$, as well as the boundary conditions $n(0)<\infty$ and $\left.\td{n}/\td{r}\right|_{r=0}=0$, one can derive a reasonable approximation \citep{zea17,bbea18}: \begin{align} n(r) = \frac{n_0}{1+(r/r_0)^2} \label{eq:clouddens}, \end{align} with the central density $n_0$, and the scale height \begin{align} r_0 &= \sqrt{\frac{2k_B T}{4\pi G m_p^2 n_0}} = \sqrt{\tilde{c}\frac{T}{n_0}} \nonumber \\ &= 4\E{12} \est{T}{140\,\mbox{K}}{1/2} \est{n_0}{10^{15}\,\mbox{cm}^{-3}}{-1/2}\,\mbox{cm} \label{eq:scaleheight}, \end{align} with $\tilde{c} := 2k_B/(4\pi G m_p^2)=1.17\E{38}\,$cm$^{-1}$K$^{-1}$. Note that the scale height in \cite{zea17} contains a minor calculation error on the order of unity, which has been corrected here. As clouds cannot be infinitely large, we define an outer radius $R$ beyond which the density is set to zero. As the isothermal cloud contains predominantly hydrogen, the sound speed is simply $c_s=(5k_BT/3m_p)^{1/2}$. Using Eq.~(\ref{eq:scaleheight}), this becomes \begin{align} c_s &= \sqrt{\frac{5k_B n_0r_0^2}{3m_p\tilde{c}}} \nonumber \\ &= 1.4\E{5} \est{n_0}{10^{15}\,\mbox{cm}^{-3}}{1/2} \est{r_0}{4\E{12}\,\mbox{cm}}{}\, \mbox{cm}\,\mbox{s}^{-1} \label{eq:soundspeed}, \end{align} which shows that all relevant speeds are much larger than the cloud's sound speed. This verifies \textit{a posteriori} that large Mach numbers are achieved and a strong shock is formed in the cloud. \subsection{Necessary jet condition} The ablation process commences if the cloud's gravitational pull cannot withstand the jet's ram pressure. While the details of the process require (M)HD simulations, which are beyond the scope of this paper, we can provide a rough estimate of the necessary jet condition to ablate the cloud. In the frame of the host galaxy, the relativistic jet containing a fraction of $a$ cold protons and $(1-a)$ positrons per electron exerts the ram pressure \begin{align} P_R \approx \Gamma (\Gamma-1) a m_p c^2 n_j \label{eq:rampressure}, \end{align} with the bulk Lorentz factor $\Gamma$, the proton rest energy $m_pc^2$, and the jet's electron density $n_j$. We assumed that $am_p>\bar{\gamma}m_e$, where $\bar{\gamma}$ is the average electron Lorentz factor. This inequality implies that the mass of the protons is greater than the average relativistic mass of the electrons. In turn, the ram pressure is dominated by the protons. This approximation improves if the protons have non-negligible kinetic energy in the comoving frame.
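To make the orders of magnitude concrete, the fiducial numbers of Eqs.~(\ref{eq:scaleheight}), (\ref{eq:soundspeed}) and (\ref{eq:rampressure}) can be reproduced with a few lines of Python; this is a minimal sketch in cgs units, with the parameter values being the fiducial ones used in the estimates above, not fits.
\begin{verbatim}
# Minimal sketch: scale height, sound speed, and jet ram pressure
# for the fiducial parameters used in the text (cgs units).
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_p = 1.6726e-24     # proton mass [g]
G   = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c   = 2.998e10       # speed of light [cm/s]

n0, T = 1.0e15, 140.0            # cloud central density, temperature
r0 = np.sqrt(2 * k_B * T / (4 * np.pi * G * m_p**2 * n0))
c_s = np.sqrt(5 * k_B * T / (3 * m_p))
print(f"r0  = {r0:.1e} cm")      # ~4e12 cm
print(f"c_s = {c_s:.1e} cm/s")   # ~1.4e5 cm/s

Gamma, a, n_j = 10.0, 0.1, 1.0e3
P_R = Gamma * (Gamma - 1) * a * m_p * c**2 * n_j  # jet ram pressure
print(f"P_R = {P_R:.1e} erg/cm^3")
\end{verbatim}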
The ram pressure must overcome the cloud's pressure on its particles. Following the hydrostatic equilibrium condition, the cloud's pressure is \begin{align} P_c(r) = \frac{n_0 k_B T}{1+\left( r/r_0 \right)^2} \approx n_0k_B T \label{eq:gravpressureI}, \end{align} where the approximation holds near the cloud's center ($r\ll r_0$). If the ram pressure is larger than the cloud's central pressure, the cloud will be destroyed entirely. Setting $P_R>P_c(r\ll r_0)$ and solving for the bulk Lorentz factor results in \begin{align} \Gamma(\Gamma-1) &> \frac{n_0 k_B T}{am_p c^2 n_j} \nonumber \\ &= 128 \est{n_0}{10^{15}\,\mbox{cm}^{-3}}{} \est{T}{140\,\mbox{K}}{} \est{a}{0.1}{-1} \est{n_j}{10^3\,\mbox{cm}^{-3}}{-1} \label{eq:requiredbulk} . \end{align} Taking the square root provides us with the required value for the bulk Lorentz factor of $\Gamma\gtrsim 12$, which is achieved in many blazar jets \citep[e.g.,][]{jea17}. Hence, a cloud with the provided parameters is indeed destroyed. Parts of a denser cloud, or one with a higher temperature, may possibly survive the encounter. Note that the chances of the cloud's survival are much reduced if the jet protons are not cold. As a side note: while stars may be stripped of their astrospheres, the star itself should survive the encounter with a relativistic jet. Additionally, we can consider the time the cloud needs to cross the jet. As the central region of the cloud within the scale height is the densest part of the cloud, we consider this size in the following estimate. The ablation process is governed by the shock that forms during the interaction. Hence, the crossing time of the shock through the cloud is a good estimator of the ablation time, as it disrupts the internal structure of the cloud, adding (or enhancing) the turbulent motions in the cloud, which weakens the gravitational pull. We can derive the maximum shock speed $v_s$ for which the cloud, moving with speed $v$, crosses the jet of radius $R_j$ before the shock has crossed the cloud: \begin{align} t_{\rm cross} = \frac{2R_j}{v} &< t_{s} = \frac{2r_0}{v_s} \nonumber \\ \Leftrightarrow v_s &< \frac{r_0}{R_j} v \nonumber \\ &= 3.2\E{3} \est{r_0}{4\E{12}\,\mbox{cm}}{} \est{v}{2\E{7}\,\mbox{cm\,s}^{-1}}{} \nonumber \\ &\quad\times \est{R_j}{2.5\E{16}\,\mbox{cm}}{-1}\,\mbox{cm\,s}^{-1}. \label{eq:shockspeed1} \end{align} If we express the jet radius as a function of distance $z_j$ from the black hole using $R_j = z_j\tan{\Gamma^{-1}}\approx z_j/\Gamma$, Eq.~(\ref{eq:shockspeed1}) becomes \begin{align} v_s &< \frac{r_0\Gamma}{z_j}v \nonumber \\ &= 1.3\E{3} \est{r_0}{4\E{12}\,\mbox{cm}}{} \est{v}{2\E{7}\,\mbox{cm\,s}^{-1}}{} \nonumber \\ &\quad\times \est{\Gamma}{10}{} \est{z_j}{6.5\E{17}\,\mbox{cm}}{-1}\,\mbox{cm\,s}^{-1}. \label{eq:shockspeed2} \end{align} Comparing this to our earlier estimate that the shock speed may actually be on the order of $\sim c/4$, the cloud will not be able to cross the jet in time. This changes for clouds close to the base of the jet, where the speed of motion of the cloud is much higher, and the jet much narrower. The gradual ablation of the cloud results in the injection of the cloud particles into the jet, where they get mixed into the bulk flow. The injection function is derived in the next section. \subsection{Injection function into the jet emission region} The number of particles $\td{N}$ entering the jet in a given time step $\td{t}$ depends on the density of the cloud and the ablated volume $\td{V}$ of a slice of the cloud.
As in \cite{zea17}, we denote with $x=0$ the point of the cloud that first touches the jet, with $x=R$ the centre of the cloud, and with $x=2R$ the far side of the cloud. The ablated volume at position $x$ becomes \citep{zs13} \begin{align} \td{V}(x) = \td{x}\int^{x} \td{A}(\tilde{x}) = \pi (2Rx-x^2)\td{x} \label{eq:ablvol}, \end{align} with the width $\td{x}$ and the cross-section $A(x)$ of the ablated volume. The particle number in a slice is \begin{align} \td{N}(x) &= \td{x}\int^{x} n(r) \td{A}(\tilde{x}) \nonumber \\ &= \pi n_0 \td{x} r_0^2 \ln{\left( \frac{r_0^2 + R^2}{r_0^2 + (R-x)^2} \right)} \label{eq:particleabl} \end{align} If the cloud enters the jet with constant speed $v$, the length scales can be transformed to time scales. Then the number of particles entering the jet in a given time step $\td{t}=\td{x}/v$ is \begin{align} \frac{\td{N}(t)}{\td{t}} = \pi n_0 v r_0^2 \ln{\left( \frac{t_0^2 + t_R^2}{t_0^2 + (t_R-t)^2} \right)} \label{eq:particlerate} , \end{align} with $t_0 = r_0/v$, and $t_R=R/v$. In the simulations below, the radiation is calculated in the comoving frame of the jet. Hence, the particle rate, Eq.~(\ref{eq:particlerate}), must be transformed to the comoving frame of the jet.\footnote{Quantities in the comoving frame are denoted with primes.} The jet flows with bulk Lorentz factor $\Gamma$, and we assume that the cloud enters the jet at a right angle in the galactic frame. The Lorentz transformation of the time step is $\td{t}=\Gamma\td{t^{\prime}}$, while the transformation of the time coordinate is $t=t^{\prime}/\Gamma$ due to the right angle. Then, in the comoving frame, the particle rate becomes: \begin{align} \frac{\td{N}(t^{\prime})}{\td{t^{\prime}}} = \Gamma\pi n_0 v r_0^2 \ln{\left( \frac{(\Gamma t_0)^2 + (\Gamma t_R)^2}{(\Gamma t_0)^2 + (\Gamma t_R-t^{\prime})^2} \right)} \label{eq:pratecomov} . \end{align} These initially thermal particles get accelerated in the jet through a process which we do not specify here, and are subsequently injected into the emission region (for more in-depth models see, e.g., \citealt{cea12,ws15,bb19}). We assume that a fraction $\epsilon_c\sim 0.1$ \citep[e.g.,][]{ssa13} of the cloud electrons is accelerated, and that the resulting spectrum is a power law with index $s^{\prime}$ between a minimum and maximum Lorentz factor, $\gamma_{\rm min}^{\prime}$ and $\gamma_{\rm max}^{\prime}$, respectively. Note again that we assume that the hadronic cloud particles remain energetically cold. The injection luminosity of cloud electrons into the emission region of the jet becomes \begin{align} L^{\prime}_{\rm inj,c}(t) &= m_e c^2 \epsilon_c \frac{\td{N}(t^{\prime})}{\td{t^{\prime}}} \int\limits_{\gamma_{\rm min}^{\prime}}^{\gamma_{\rm max}^{\prime}} \gamma^{\prime 1-s^{\prime}}\td{\gamma^{\prime}} \nonumber \\ &= m_e c^2 \epsilon_c \frac{\td{N}(t^{\prime})}{\td{t^{\prime}}} \begin{cases} \frac{\ln{\left( \gamma_{\rm max}^{\prime}/\gamma_{\rm min}^{\prime} \right)}}{\gamma_{\rm min}^{\prime -1} - \gamma_{\rm max}^{\prime -1}} & s^{\prime}=2 \\ \frac{1}{2-s^{\prime}}\left( \gamma_{\rm max}^{\prime 2-s^{\prime}} - \gamma_{\rm min}^{\prime 2-s^{\prime}} \right) &\mbox{else} \end{cases} \label{eq:injlum}. \end{align} While the time dependency in Eq.~(\ref{eq:injlum}) is obviously the same as in \cite{zea17}, here we have also derived the full transformation and the correct normalization factor. These were only indirectly considered or treated as free parameters in \cite{zea17}.
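As a numerical cross-check of the normalization, Eq.~(\ref{eq:pratecomov}) is straightforward to evaluate; the following Python sketch uses the baseline cloud parameters of the next section together with an assumed bulk Lorentz factor of $\Gamma=10$ (the values are illustrative, not fits).
\begin{verbatim}
# Minimal sketch: comoving-frame injection rate dN/dt' for
# illustrative parameters (cgs units; Gamma = 10 is assumed).
import numpy as np

n0 = 1.0e15        # central cloud density [cm^-3]
r0 = 4.0e12        # cloud scale height [cm]
R  = 6.0e13        # cloud radius [cm]
v  = 2.0e7         # cloud speed [cm/s]
Gamma = 10.0       # jet bulk Lorentz factor

t0, tR = r0 / v, R / v    # characteristic times (galactic frame)

def dN_dtprime(tp):
    """Comoving-frame particle rate as derived above."""
    num = (Gamma * t0)**2 + (Gamma * tR)**2
    den = (Gamma * t0)**2 + (Gamma * tR - tp)**2
    return Gamma * np.pi * n0 * v * r0**2 * np.log(num / den)

tp = np.linspace(0.0, 2 * Gamma * tR, 5)  # intrusion lasts 2*Gamma*tR
print(dN_dtprime(tp))  # peaks when the cloud centre enters the jet
\end{verbatim}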
Therefore, Eq.~(\ref{eq:injlum}) provides -- within the given assumptions -- the correct particle injection function of a slowly ablated cloud into the emission region of a jet. \section{Parameter study} \label{sec:param} \begin{table*} \caption{Jet emission region parameter definition, symbol and value for the FSRQ and BL Lac object cases. } \begin{tabular}{lccc} Definition & Symbol & FSRQ & BL Lac object \\ \hline Distance to black hole & $z$ & $6.5\times 10^{17}\,$cm & $1.0\times 10^{19}\,$cm \\ Doppler factor & $\delta$ & $35\,$ & $35\,$ \\ Emission region radius & $R_j^{\prime}$ & $2.5\times 10^{16}\,$cm & $1.0\times 10^{17}\,$cm \\ Magnetic field strength & $B_j^{\prime}$ & $3.7\,$G & $1.0\,$G \\ e$^{-}$ injection luminosity & $L_{\rm inj}^{\prime}$ & $2.2\times 10^{43}\,$erg/s & $5.0\times 10^{42}\,$erg/s \\ Min. e$^{-}$ Lorentz factor & $\gamma_{\rm min}^{\prime}$ & $1.3\times 10^1\,$ & $1.6\times 10^2\,$ \\ Max. e$^{-}$ Lorentz factor & $\gamma_{\rm max}^{\prime}$ & $3.0\times 10^3\,$ & $3.0\times 10^6\,$ \\ e$^{-}$ spectral index & $s^{\prime}$ & $2.4\,$ & $2.2\,$ \\ Escape time scaling & $\eta_{\rm esc}^{\prime}$ & $10.0\,$ & $10.0\,$ \\ BLR Temperature & $T_{\rm BLR}$ & $5.0\times 10^4\,$K & -- \\ Cosmological redshift & $z_{\rm red}$ & $1.037$ & $1.037$ \\ \end{tabular} \label{tab:jetparam} \end{table*} \begin{table} \caption{Baseline cloud parameter definition, symbol and value for the artificial clouds. } \begin{tabular}{lcc} Definition & Symbol & Value \\ \hline Cloud radius & $R$ & $6.0\times 10^{13}\,$cm \\ Cloud scale height & $r_0$ & $4.0\times 10^{12}\,$cm \\ Cloud density & $n_0$ & $1.0\times 10^{15}\,$cm$^{-3}$ \\ Cloud speed & $v$ & $2.0\times 10^{7}\,$cm/s \\ Acceleration efficiency & $\epsilon_c$ & $0.1$ \\ \end{tabular} \label{tab:baseparam} \end{table} \begin{figure*} \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Cloud_Radius_Plot.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Cloud_Scale_Height_Plot.pdf}} \end{minipage} \newline \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Cloud_Density_Plot.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Cloud_Speed_Plot.pdf}} \end{minipage} \caption{Lightcurves in the observer's frame for different parameter values of theoretical clouds. In each panel, lightcurves in the \ensuremath{\gamma}-ray, X-ray and R band are shown for different varied parameters: (a) cloud radius $R$, (b) scale height $r_0$, (c) cloud density $n_0$, and (d) cloud speed $v$. The dashed black lightcurve employs the baseline parameters given in Tab.~\ref{tab:baseparam}. Note the logarithmic y-axes. } \label{fig:parastud1} \end{figure*} \begin{figure*} \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{B_Field_Hom_Reg_200617.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Blob_Radius_200616.pdf}} \end{minipage} \newline \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Theo_Cloud_s_200616.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.49\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Theo_Cloud_L_200616.pdf}} \end{minipage} \caption{Lightcurves in the observer's frame of theoretical clouds for different parameter values of the jet emission region. 
In each panel, lightcurves in the \ensuremath{\gamma}-ray, X-ray and R band are shown for different varied parameters: (a) magnetic field $B^{\prime}_j$, and (b) size $R^{\prime}_j$ of the emission region, (c) spectral index $s^{\prime}$ of the electron distribution, and (d) the injection luminosity $L^{\prime}_{\rm inj}$ of the quiescent state. The dashed black lightcurve employs the FSRQ parameters given in Tab.~\ref{tab:jetparam}. Note the logarithmic y-axes. } \label{fig:parastudEmReg} \end{figure*} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{SSC_Parametersatz_200617.pdf} \caption{Lightcurve in the observer's frame of a theoretical cloud employing the BL Lac object parameter set (turquoise) of the emission region compared to the FSRQ parameter set (dashed black). The parameter sets are given in Tab.~\ref{tab:jetparam}. Note the logarithmic y-axes. } \label{fig:parastudSSC} \end{figure} In this section we provide a thorough study of the resulting lightcurves following a scan over the parameters of the cloud and the emission region. Here and in the following, we use the time-dependent, leptonic one-zone code extensively described in \cite{db14} and \cite{zea17}. The emission region parameters are given and defined in Tab.~\ref{tab:jetparam}, providing cases for both flat spectrum radio quasars (FSRQs) and BL Lac objects. To study FSRQs, we use the same parameters as \cite{zea17}. Included emission processes are synchrotron, synchrotron-self Compton (SSC) and inverse-Compton scattering of photons from the accretion disc and the BLR. The parameters provided in Tab.~\ref{tab:jetparam} imply an initial density of non-thermal electrons of $8\E{3}\,$cm$^{-3}$ for FSRQs and $7\E{2}\,$cm$^{-3}$ for BL Lac objects, respectively.\footnote{Note that the seemingly high density in the FSRQ case follows from the modeling of CTA~102 by \cite{zea17}, while the BL Lac case is an adaptation from the FSRQ values. Both values are within bounds found in other studies \citep[e.g.,][]{bea13,zw16}.} Note that the redshift in Tab.~\ref{tab:jetparam} is required for the flux normalization. The baseline cloud parameters, from which we conduct the parameter study of this section, are given in Tab.~\ref{tab:baseparam}. Note that the injection luminosity provided by Eq.~(\ref{eq:injlum}) is added on top of the jet values listed in Tab.~\ref{tab:jetparam}. We assume that the electrons supplied from the ablation process are accelerated to the same spectral behavior -- namely the same minimum and maximum Lorentz factors and spectral index -- as the already existing jet electrons. This is reasonable if they are accelerated in the same environment at a shock downstream of the jet-cloud interaction site. For the parameter study, each cloud parameter is varied individually, resulting in the four plots shown in Fig.~\ref{fig:parastud1}, which display the lightcurves in the \ensuremath{\gamma}-ray, X-ray and R bands utilizing the FSRQ jet parameters. The results are as follows. \begin{enumerate} \item[(a)] {\it Variation of cloud radius $R$:} As expected, the flare takes longer to evolve with larger $R$, while the peak flux also increases mildly. A larger radius implies overall a larger number of cloud particles, explaining the mild increase in flux. % \item[(b)] {\it Variation of scale height $r_0$:} As $r_0$ governs the size of the region with constant, maximal density $n_0$, the total number of particles changes significantly with a variation of $r_0$.
For small $r_0$, the number of particles in the cloud becomes so low that a variation in flux is barely visible (magenta curve). On the other hand, a larger $r_0$ not only increases the flux, but also changes the curvature of the lightcurve owing to the combination of higher particle numbers in a larger cloud volume. In turn, the peak becomes less pronounced with increasing scale height. % \item[(c)] {\it Variation of density $n_0$:} In this case, the peak fluxes are linearly altered, as the synchrotron and external-Compton\footnote{SSC is negligible in this parameter set.} processes linearly depend on the particle density. % \item[(d)] {\it Variation of speed $v$:} The influence of $v$ on the lightcurves is more involved, as it changes both the normalization factor and the duration of the event. Therefore, slower speeds result in longer, but less pronounced flares, in line with the discussion in Sec.~\ref{sec:ablation}. \end{enumerate} Obviously, the cloud parameters have a strong influence on the lightcurves, resulting in a zoo of potential solutions, which could explain many symmetrical flares. However, the parameters of the emission region itself may also influence the lightcurve. We have tested this by varying individually the magnetic field $B^{\prime}_j$ and the size $R^{\prime}_j$ of the emission region, as well as the spectral index $s^{\prime}$ of the electron distribution and the jet injection luminosity $L^{\prime}_{\rm inj}$. The baseline cloud parameters are unchanged. The results are shown in Fig.~\ref{fig:parastudEmReg}, and the details are as follows. \begin{enumerate} \item[(a)] {\it Variation of the magnetic field $B_j^{\prime}$:} Obviously, the synchrotron component (optical band) reacts directly to changes in the magnetic field, while the $\gamma$-ray component (external Compton on the BLR in this case) remains at the same flux level for most cases and starts to decrease for high magnetic field strengths. This decrease is expected, as the synchrotron cooling begins to dominate the external-Compton cooling, resulting in a decreased efficiency of the external-Compton process. Similar statements can be made for the X-ray domain, which exhibits similar fluxes for most magnetic field values and only deviates for the highest magnetic field strengths. Here, the SSC process starts to dominate the external-Compton process in this energy range. % \item[(b)] {\it Variation of the size $R_j^{\prime}$:} For most values, there is no noteworthy change in the lightcurves. At the smallest size, the SSC process dominates in the X-ray domain due to the increased density of synchrotron photons. At the largest size, the dynamical and escape time scales become so long (compared to the chosen time step of $1\,$d in the observer's frame) that particles remain much longer in the emission region, and the fluxes decrease more slowly than they rise. % \item[(c)] {\it Variation of the electron spectral index $s^{\prime}$:} Following Eq.~(\ref{eq:injlum}), the shape of the accelerated (i.e. injected) particle distribution has a strong influence on the injection luminosity of the cloud particles. While the total number of particles does not change, their influence is shifted to higher energies for harder electron distributions. In turn, the variation is more pronounced. The opposite is true for softer electron distributions.
% \item[(d)] {\it Variation of the injection luminosity $L_{\rm inj}^{\prime}$:} This is the injection luminosity of the initial emission region, and its value plays a significant role in the observed variability. For larger values, the luminosity added by the cloud is relatively smaller and the variation does not emerge strongly from the quiescent state. On the other hand, for a small injection luminosity, the cloud injection becomes significant, displaying more pronounced variability. \end{enumerate} As a final test, we have considered a typical BL Lac object parameter set \citep[e.g.,][]{bea13} with parameters provided in Tab.~\ref{tab:jetparam}. The lightcurve is shown in Fig.~\ref{fig:parastudSSC}. The changes in the parameters of the emission region compared to the FSRQ case concern a larger distance from the black hole in order to avoid inverse-Compton scattering on external photon fields, which are located at smaller distances \citep{zea19}, a larger emission region, a smaller magnetic field, a smaller injection luminosity, and a harder and more energetic electron distribution. Following the discussion on individual changes of the emission region parameters, we can expect a significant change in the lightcurve behavior. Indeed, the lightcurves in the BL Lac object case differ considerably from the FSRQ case shown as the dashed black line in Fig.~\ref{fig:parastudSSC}, despite using the same cloud parameters. The variation in all energy bands exceeds one order of magnitude, and even two orders of magnitude in the \ensuremath{\gamma}-ray band. The latter can easily be understood, as the SSC process depends quadratically on the particle distribution. Hence, as the only change in the cloud injection is the particle number, the SSC flux must change quadratically compared to the synchrotron component. The reaction in the X-ray domain is also amplified compared to the FSRQ case, as this energy band is now produced by highly energetic electrons through synchrotron emission. This is much more variable than in the baseline case, where the X-ray band is dominated by low-energetic electrons producing inverse-Compton emission. \section{Interstellar objects} \label{sec:clouds} \begin{table*} \caption{Parameters of interstellar clouds. The scale height $r_0$ is calculated from the other parameters and is not a free parameter. } \begin{tabular}{cl|cccc|c|c} & Type & $R$ & $T$ & $n_0$ & $v$ & $r_0$ & Ablate? \\ & & [cm] & [K] & [cm$^{-3}$] & [cm\,s$^{-1}$] & [cm] & \\ \hline (a) & Giant molecular clouds & $7.7\E{19}$ & $15$ & $2.0\E{8}$ & $5.0\E{8}$ & $3.0\E{15}$ & yes \\ (b) & Dark clouds & $1.5\E{19}$ & $10$ & $5.0\E{8}$ & $5.0\E{8}$ & $1.5\E{15}$ & yes \\ (c) & Clumps & $1.0\E{19}$ & $10$ & $1.0\E{9}$ & $5.0\E{8}$ & $1.1\E{15}$ & yes \\ (d) & Bok globules & $1.2\E{18}$ & $10$ & $4.0\E{10}$ & $5.0\E{8}$ & $1.7\E{14}$ & yes \\ (e) & Dense cores & $1.5\E{17}$ & $10$ & $1.0\E{10}$ & $5.0\E{8}$ & $3.4\E{14}$ & yes \\ (f) & Hot cores & $1.0\E{17}$ & $200$ & $1.0\E{14}$ & $5.0\E{8}$ & $1.5\E{13}$ & yes \\ \end{tabular}\\ $R$ -- cloud radius; $T$ -- cloud temperature; $n_0$ -- cloud density; $v$ -- cloud speed; $r_0$ -- derived cloud scale height; Ablate?
-- If cloud and jet parameters (Tab.~\ref{tab:jetparam}) fulfill Eqs.~(\ref{eq:requiredbulk}) and (\ref{eq:shockspeed1}) \label{tab:realclouds} \end{table*} \begin{figure*} \begin{minipage}{0.32\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{GMC_a_200610.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.32\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Dark_b_200610.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.32\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Clumps_c_200610.pdf}} \end{minipage} \newline \begin{minipage}{0.32\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Bok_d_200610.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.32\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Dense_e_200610.pdf}} \end{minipage} \hspace{\fill} \begin{minipage}{0.32\linewidth} \centering \resizebox{\hsize}{!} {\includegraphics{Hot_f_200610.pdf}} \end{minipage} \caption{Lightcurves of the FSRQ (dashed) and BL Lac object (solid) cases in the observer's frame for three energy bands for examples of real clouds: (a) giant molecular cloud, (b) dark cloud, (c) clump, (d) Bok globule, (e) dense core, and (f) hot core. The jet parameters are given in Tab.~\ref{tab:jetparam}, and the cloud parameters are provided in Tab.~\ref{tab:realclouds}. Note the logarithmic y-axes. } \label{fig:realclouds} \end{figure*} Before we proceed, a note is required on the free parameters. As we have used the scale height as a free parameter in the previous section, we implicitly took the cloud temperature $T$ as a dependent variable. However, unlike the scale height, $T$ is a measurable quantity, and therefore the scale height becomes the dependent variable from now on. Interestingly, the injection rate, Eq.~(\ref{eq:pratecomov}), is proportional to $n_0r_0^2$. Inserting Eq.~(\ref{eq:scaleheight}), this becomes $n_0r_0^2=\tilde{c}T$, independent of the density. Hence, the influence of the density on the lightcurves is minor. The injection is therefore driven by the speed and the temperature of the cloud. With this in mind, we can discuss the lightcurves from interstellar clouds. Some cloud types and their typical parameters \citep{co07} are given in Tab.~\ref{tab:realclouds}. Clearly, this list is not exhaustive, and the entries should be considered as examples. All these clouds fulfill the ablation conditions in Eqs.~(\ref{eq:requiredbulk}) and (\ref{eq:shockspeed1}). As we have seen in the previous section, the speed of the cloud has an enormous influence on the resulting lightcurve. Hence, stronger flares may be expected for clouds relatively close to the black hole. Given that many AGN are located in elliptical hosts, the nuclear activity may be a result of the merger of galaxies, in which gas and dust (in the form of clouds) are pushed into the galactic center. In turn, many clouds will come close to the black hole and the jet while moving rapidly. This provides the necessary ingredients for our model. We assume that the clouds have reached a distance to the black hole within the radius of the BLR, and hence move with roughly the orbital speed of the BLR -- about $5000\,$km\,s$^{-1}$. This allows us to use the FSRQ model. Additionally, as the BL Lac object scenario amplifies the variability, we also provide the corresponding lightcurves. The resulting lightcurves are shown in Fig.~\ref{fig:realclouds} with dashed lines for the FSRQ parameters and solid lines for the BL Lac object model.
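Before turning to the individual cases, the weak density dependence claimed above can be checked numerically. The following minimal sketch imposes the scale-height relation $n_0r_0^2=\tilde{c}T$ of Eq.~(\ref{eq:scaleheight}), with the product $\tilde{c}T$ fixed to an arbitrary value (here chosen so that $n_0=10^{15}\,$cm$^{-3}$ reproduces the baseline $r_0$), and varies $n_0$ over four orders of magnitude; the peak injection rate then changes only logarithmically, through $t_0\propto r_0$. The bulk Lorentz factor $\Gamma=20$ is again an illustrative assumption.
\begin{verbatim}
import numpy as np

R, v, Gamma = 6.0e13, 2.0e7, 20.0   # cm, cm/s; Gamma is an assumption
ctT = 1.0e15 * (4.0e12)**2          # fixed n0*r0^2 = c~*T (arbitrary value)

for n0 in [1e13, 1e14, 1e15, 1e16, 1e17]:
    r0 = np.sqrt(ctT / n0)          # scale height implied by n0 at fixed T
    t0, tR = Gamma * r0 / v, Gamma * R / v
    # peak of Eq. (pratecomov), reached at t' = Gamma*t_R
    peak = Gamma * np.pi * n0 * v * r0**2 * np.log(1.0 + (tR / t0)**2)
    print(f"n0 = {n0:.0e} cm^-3 -> peak dN'/dt' = {peak:.3e} 1/s")
\end{verbatim}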
Obviously, all cases exhibit strong flares, varying in duration and peak flux. The evolution of the lightcurve also changes slightly in accordance with the discussion of the previous section. The duration is, of course, governed by the size of the objects (and they are ordered in decreasing size), so the duration drops from case to case. The peak flux in turn depends on the particle number, which depends on both the size and the density. For the FSRQ parameter set, the flux variation is on the order of a few in all three bands. In the BL Lac object case, the statements of the previous section hold: the variation in the \ensuremath{\gamma}-ray band is roughly the square of the variation in the X-ray and R bands. That is, if variations in the X-ray and R bands are on the order of 1 order of magnitude, the \ensuremath{\gamma}-ray band exhibits variations on the order of 2 orders of magnitude. The most notable variation takes place in ``hot cores'', which is the densest and hottest of the examples. In this case, the lightcurve varies by more than an order of magnitude even in the FSRQ case. Astrospheres of RGB stars are another common ``cloud'' type in elliptical galaxies. We discuss their case in the appendix, as they are not fully compatible with our assumptions on the derivation of the cloud's density structure. \section{Discussion} \label{sec:sumcon} Elliptical galaxies -- the hosts of blazars -- form through the collision of gas-rich spiral galaxies. Much of the free gas is funneled into the center of the galaxy, where an AGN is turned on. Clouds of gas will enter the galactic center from any direction, and may encounter the relativistic jet. We have considered such an encounter of a cloud with the jet as a particle injection process that will produce long-lasting flares. We first derived the analytical equations that describe the injection function of a spherical, isothermal gas cloud that is only shaped by the hydrostatic equilibrium between its self-gravity and its gas pressure. This expands the work of \cite{zea17} and provides the correct normalization of the injection process. We also derived the necessary jet conditions to fully ablate the cloud. It turns out that jets should be able to ablate most cloud types. We then proceeded to study the theoretical lightcurve shapes considering various parameter sets for both the cloud and the jet. The most important cloud parameter in this regard is the scale height, which depends on the temperature and the central density of the cloud. The scale height's value relative to the cloud radius determines the homogeneity of the cloud. Homogeneous clouds, i.e. those with a large scale height, produce round lightcurves with a steep rise/decay and a relatively flat maximum, while clouds with a small scale height produce a peaked lightcurve with a flat rise/decay and a pronounced central peak. As the density of the cloud can be determined from the peak flux of the lightcurve, the shape of the lightcurve gives a strong indication of the temperature of the cloud. While the jet parameters can be deduced from observations before the flare, they also have a significant influence on the lightcurve. Most notably, parameters describing BL Lac objects produce a significantly larger variability than FSRQ parameters. While this changes the lightcurve shape, it has no major influence on the peak structure, and therefore on the possibility to determine the scale height. This is important, as it does not influence the predictive power of the model.
Subsequently, we used examples of different cloud types that may be present in an active galaxy, and which may penetrate the jet. Each example results in variable fluxes with different magnitudes, different lightcurve shapes, and variations on different time scales. For the example clouds, the variations take thousands to millions of days, which is obviously too long for proper observations. However, there are much smaller clouds present in the Universe, such as the very dense structures around forming stars. Furthermore, the peaks in Fig.~\ref{fig:realclouds} are quite pronounced in terms of flux variation and duration, lasting only for 1 or 2\% of the entire high state duration. This increases the observational potential. One of our major assumptions is that all clouds abide by the same hydrostatic density structure. While this is a relatively simple structure, real clouds are much more complex. Especially in star-forming regions, the cloud structure will be chaotic \citep{kfb20,xl20}, shaped by gravitational encounters, stellar winds, magnetic fields, etc. It is quite likely that such non-spherically-symmetric structures produce flares that are asymmetric. Furthermore, one can also expect a more complicated density structure with several cores, and turbulent behavior. Additionally, we have treated each cloud as an individual entity. In fact, many of the considered cloud types are part of star-forming regions, and are intertwined. Such a multi-cloud model might produce bright flares on top of an extended high state. These are intriguing possibilities for further applications of the model. In any case, the lightcurve would become more complicated with several peaks. A similar result would be obtained if several individual clouds interacted with the jet at the same time \citep[e.g.,][]{pbr19}. While this increases the number of ablated particles and, thus, the flux variation, one would again observe several peaks. Disentangling the different clouds may be a complicated endeavor. In our simulations we have assumed that every particle of the cloud enters the jet. This is unlikely to happen. The interaction of the -- loosely bound -- cloud with the highly energetic jet will result in the ejection of cloud material, and only a fraction of the cloud will enter the jet. Quantifying how much material is lost in this way is difficult and would require dedicated (M)HD simulations, which are beyond the scope of this paper. Additionally, in the case of real clouds, it is unlikely that all particles enter the jet independently of the cloud size with respect to the jet size. If the cloud is much larger than the jet, parts of the cloud will miss the jet. However, if the central and densest part of the cloud enters the jet, the effect should be minor. In fact, in all our examples the scale height is smaller than the jet radius. Despite these issues, a significant number of particles enters the jet to produce an equally significant flux variation. As radiation processes we have considered leptonic synchrotron and inverse-Compton emission, as this is the standard blazar emission scenario. However, as the cloud naturally contains protons and heavier nuclei, hadronic radiation processes might be an interesting alternative to produce the emission. On the one hand, the cloud's nuclei may be accelerated to non-thermal energies and produce radiation on their own \citep[proton synchrotron, pion and muon synchrotron, etc; e.g.][]{bea13,zea19}.
On the other hand, the cloud's nuclei could also serve as targets for the jet's relativistic protons, resulting in proton-nucleus interactions and subsequent radiation production \citep{hea20}. This is an intriguing possibility to produce a flare without the need to accelerate the cloud particles. Here, as well, dedicated simulations may provide further insights. In summary, we have demonstrated that the cloud ablation process is a viable option to produce long-lasting high states in blazars. Further studies, especially (M)HD or particle-in-cell simulations of the entire process, are strongly encouraged. \section*{Acknowledgement} The authors wish to thank Markus B\"ottcher, Patrick Kilian, Klaus Scherer, Valent\'i Bosch-Ramon, Maxim Barkov, Frank Rieger, and Kerstin Wei\ss{} for stimulating discussions on model and manuscript details, as well as cloud parameters. We also thank the anonymous referee for valuable suggestions that helped to improve the manuscript. Funding by the German Ministry for Education and Research (BMBF) through grant 05A17PC3 is gratefully acknowledged.
\section{Introduction} \label{sec-introduction} Emulation of quantum condensed-matter systems using ultracold atoms is an active area in the studies of quantum simulation \cite{Lewenstein2007aip,Georgescu2014rmp,Safronova2018rmp,Schaetz2013njp}. This is because ultracold atoms provide a versatile platform with a series of advantages: (i) Nontrivial interactions or external fields can be designed through the setup of the laser-atom interaction, which can bring novel physics \cite{Dalibard2011rmp,Goldman2014rpp}. (ii) The high controllability of the synthetic interactions and fields has promising applications such as exploring intriguing phase transitions \cite{Greiner2002nat,Stoferle2004prl,Spielman2007prl} and critical phenomena \cite{Bloch2005nphys,Andersen2002prl}. (iii) The absence of disorder effects or impurities makes the well-isolated system ideally clean \cite{Bloch2012nphys}, and thereby facilitates the investigation of otherwise complex phenomena. Based on these features, a variety of emulations using ultracold atoms have been successfully proposed on a broad range of interesting topics, for instance ferromagnetism \cite{Parker2013nphys}, the quantum Hall effect \cite{Aidelsburger2013prl,Miyake2013prl,Barberan2015njp,Barbarino2016njp}, atomtronic circuits \cite{Pepino2009prl,Jendrzejewski2014prl} and their hysteresis \cite{Eckel2014nat}, atom transistors \cite{Zhang2015njp}, and optical solenoids associated with magnetic flux \cite{Wang2018prl}. In condensed-matter physics, the magneto-optic effect is a fundamental but broad concept in magnetic media. It is known that the transverse conductivity plays a crucial role in the magneto-optic effect \cite{Pershan2004jap}; it can be introduced by the interplay of the band exchange splitting and the spin-orbit coupling in the magnetic medium \cite{Ebert1996rpp}. When light transmits from vacuum into the medium, the presence of the transverse conductivity hybridizes the two polarized components of the photons and imposes a coherent phase on them during the light propagation. As a result, the polarized angle of the reflected and forward-scattered light fields deviates from that of the incident light, respectively known as the magneto-optic Kerr effect (MOKE) and Faraday effect (MOFE). In recent studies, the magneto-optic effect can also arise by virtue of the topological Hall effect, in which the rotated polarized angle is related to the topological invariant, known as the topological magneto-optic effect \cite{Tse2010prl,Tse2011prb,Feng-2020natcommun}. However, experimental advances on the magneto-optic effect have largely focused on MOKE rather than MOFE. This is because a distinct measurement of MOFE has so far been elusive in ordinary magnetic media. In MOKE, the rotation of the polarized angle, which is the prominent feature of the magneto-optic effect, is affected by the medium boundary condition of the light reflection. In stark contrast, in MOFE the rotation accumulates during the light propagation in the medium. Owing to photon absorption by the medium, MOFE is generally expected to be detected in ultra-thin films. The magnitude of the rotated polarized angle, however, drops dramatically in thinner films, and therefore a salient signal of MOFE is challenging to attain in real experiments. On the other hand, as mentioned, emulation using ultracold atoms has the advantage of realizing artificial physical systems with controllable manipulations.
This motivates us to search for a possible alternative route for studying MOFE via emulation in atomic gases, instead of the challenging detection in conventional solid-state systems. In this work, we present such a proposal for emulating MOFE using ultracold atoms. The mechanism of the synthetic MOFE relies on the light-atom interplay, which stands out from the conventional physics picture and provides full controllability as well as detectable signals with existing techniques. The paper is organized as follows. In section \ref{sec-model}, we present the detailed model of the emulation. For simplicity, we first consider the model at resonance to extract the physics picture, and in section \ref{sec-detuned}, without loss of generality, the detuned case is investigated. In section \ref{sec-discussion}, we address the relevant practical considerations and a possible implementation of the proposal. In section \ref{sec-conclusion}, we summarize the work. \section{Model} \label{sec-model} We consider bosons with two internal levels that are denoted as pseudo-spins $\uparrow$ and $\downarrow$. In ultracold Bose gases, the atomic cloud can be loaded into two reservoirs separated by a mesoscopic channel \cite{Brantut2012sci,Krinner2013prl,Krinner2014nat,Chien2015nphys}. By preparing the two reservoirs with a number imbalance, an atomic current can be observed through the channel, and the hydrodynamics of the atomic cloud density is semi-classically described by the equation $\partial_t n + {\bf v}\cdot\nabla n = 0$ \cite{Dalfovo1999rmp}. The linear dispersion shares similarity with light propagation. Furthermore, in Bose gases, specifically the Bose-Einstein condensate (BEC), the wave functions of different pseudo-spins are orthogonal, exhibiting the same property as the polarized components of light. Therefore, the atomic transport process inspires us to draw an analogy to the light field in terms of the bosonic cloud. Although the atomic ensemble is totally different from magnetic media, the phenomenal and intrinsic similarities can reveal the fundamental physics at the macroscopic level, which is the focus of quantum simulation. The experimental setup is illustrated in Figure \ref{fig-setup}(a). We suppose the atomic cloud is prepared in the BEC phase. In the channel between the reservoirs, the atomic cloud can be approximately regarded as being confined in the harmonic trap potential $V_{\rm trap}({\bf r})=\frac{1}{2}m\omega_{\rm trap}^2(x^2+y^2)$. In the section normal to the $z$ direction, the normalized wave function of the ground state is given by \begin{equation} \psi({\bf r}) = e^{-(x^2+y^2)/(2l_0^2)}/(\sqrt{\pi}\, l_0) \,, \label{eq-inplane-wave} \end{equation} where $l_0=\sqrt{\hbar/(m\omega_{\rm trap})}$ \cite{Pethick2008book}. Along the $z$ direction, the atomic cloud flows at a center-of-mass (COM) velocity $v_{\rm cm}$. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{setup.eps} \caption{(a) The experimental setup of the proposal: the light field is emulated via the atomic current between two reservoirs, in which the polarization of the light is characterized by the atomic spin imbalance. The laser-atom interacting region (yellow) plays the role of the medium in which the light propagates. (b) Illustration of the atomic $\Lambda$-type transition: the two pseudospin states of atoms are coupled via a third excited state $|e\rangle$ by means of two Raman lasers $\Omega_{1,2}$ (yellow arrows).
The detuning of each laser-atom interaction is denoted as $\Delta_{1,2}$.} \label{fig-setup} \end{figure} As the spin of atoms mimics the polarization of light, we implement counter-propagating lasers along the $x$ direction, which drive a Raman transition between the two spins via an auxiliary excited level. In this way, an artificial transverse conductivity can be equivalently generated by laser fields that couple different spins, and the laser-atom interacting region plays the role of the ``medium''. The transition is sketched in Figure \ref{fig-setup}(b). At low temperature, we assume the velocity fluctuation is much smaller than the laser field strength. In the COM reference frame, such a $\Lambda$ system is governed by the following Hamiltonian, \begin{equation} H = [ \hat{\Omega}_1({\bf r})e^{-i\omega_1t}|e\rangle\langle \uparrow| + \hat{\Omega}_2({\bf r})e^{-i\omega_2t}|e\rangle\langle \downarrow| + H.c. ] + \sum_{\lambda=\uparrow,\downarrow,e}\Gamma_{\lambda}|\lambda\rangle\langle \lambda| \,. \label{eq-h-3level} \end{equation} Here $|\lambda\rangle$ with $\lambda=\uparrow,\downarrow,e$ denote the spin-$\uparrow$, spin-$\downarrow$ and excited states, respectively, and $\Gamma_{\uparrow,\downarrow,e}$ are their corresponding level energies. $\hat{\Omega}_{\alpha=1,2}({\bf r})\equiv M_{\alpha}({\bf r})e^{ik_\alpha x}$, where $M_{\alpha}({\bf r})$ characterizes the laser field mode with frequency $\omega_{\alpha}$ and standing-wave vector $k_\alpha$. $H.c.$ stands for the Hermitian conjugate. We can assume the general form of the wave function of the three-level system as $|\psi\rangle = \sum_{\lambda}c_\lambda e^{-i\Gamma_\lambda t/\hbar}|\lambda\rangle$. According to the Schr\"{o}dinger equation $i\hbar\partial_t|\psi\rangle = H|\psi\rangle$, the coefficients $c_\lambda$ satisfy the following equations, \begin{equation} \cases{ & $i\hbar\partial_t c_\uparrow = \hat{\Omega}_1^*({\bf r}) e^{i\Delta_1 t/\hbar}c_e$ \\ & $i\hbar\partial_t c_\downarrow = \hat{\Omega}_2^*({\bf r}) e^{i\Delta_2 t/\hbar}c_e$ \\ & $i\hbar\partial_t c_e = \hat{\Omega}_1({\bf r}) e^{-i\Delta_1 t/\hbar}c_\uparrow + \hat{\Omega}_2({\bf r}) e^{-i\Delta_2 t/\hbar}c_\downarrow$ }\,. \label{eq-pd-c} \end{equation} Here we have denoted the detunings as $\Delta_{1} = \Gamma_e - \Gamma_{\uparrow}-\hbar\omega_1$ and $\Delta_{2} = \Gamma_e - \Gamma_{\downarrow}-\hbar\omega_2$. For simplicity and without loss of generality, we normalize the atomic densities by the total atom number of the condensate. Thus the coefficients $c_\lambda$ obey the constraint $\sum_{\lambda}|c_{\lambda}|^2=1$ due to number conservation. In order to give a simple physics picture of our proposal, we first consider the resonance condition $\Delta_1=\Delta_2=0$. The atoms are initially prepared to reside in the two spin states that host the lowest energies. Due to the spontaneous breaking of the $U(1)$ symmetry, the BEC hosts a definite phase for each spin \cite{Andrews1997sci}. Thereby, the initial state of the spinful system can be assumed to take the form $|\psi_0\rangle = \cos\theta\mbox{$|\uparrow\rangle$} + \sin\theta e^{i\varphi}\mbox{$|\downarrow\rangle$}$. Here $\theta$ characterizes the number imbalance of the spins, and $\varphi$ describes the relative phase between the two spins.
The solution to Eq.(\ref{eq-pd-c}) is then given as follows, \begin{equation} \cases{ & $c_\uparrow = \cos\theta + F({\bf r})\frac{\hat{\Omega}_1({\bf r})}{\hat{\Omega}_R({\bf r})}\{\cos[\hat{\Omega}_R({\bf r})t/\hbar]-1\}$ \\ & $c_\downarrow = \sin\theta e^{i\varphi} + F({\bf r})\frac{\hat{\Omega}_2({\bf r})}{\hat{\Omega}_R({\bf r})}\{\cos[\hat{\Omega}_R({\bf r})t/\hbar]-1\}$ \\ & $c_e = -i F({\bf r})\sin[\hat{\Omega}_R({\bf r})t/\hbar]$ }\,. \label{eq-wavefunc-evol} \end{equation} Here the dimensionless function is written as $F({\bf r})=[\hat{\Omega}_1({\bf r})\cos\theta + \hat{\Omega}_2({\bf r})e^{i\varphi}\sin\theta + c.c.]/[2\hat{\Omega}_R({\bf r})]$, and $\hat{\Omega}_R({\bf r})=\sqrt{|\hat{\Omega}_1({\bf r})|^2+|\hat{\Omega}_2({\bf r})|^2}$ is the Rabi frequency. For simplicity, we postulate the laser modes $M_{1,2}({\bf r})$ to be slowly varying along the $x$ direction within the atomic cloud. As the laser fields are applied along the $x$ direction, the atomic transition driven by them involves no momentum transfer in the $z$ direction, and hence does not affect the atomic transport. In the laboratory frame, the Rabi frequency can be approximately expanded as $\hat{\Omega}_R({\bf r}_{\rm cm}+{\bf r}') \approx \hat{\Omega}_R({\bf r}_{\rm cm})+{\bf r}'\cdot\nabla\hat{\Omega}_R({\bf r}_{\rm cm})$. Here the COM coordinate is ${\bf r}_{\rm cm}=v_{\rm cm}t\hat{\bf e}_z$, with $\hat{\bf e}_{x,y,z}$ being the unit vectors. Since the laser fields are spatially uniform along the trajectory direction, quantities that depend only on ${\bf r}_{\rm cm}$ can be regarded as constants hereafter. We denote the gradient of the laser field as $\nabla\hat{\Omega}_R({\bf r}_{\rm cm})\equiv A\hat{\bf e}_x$. We remark that the gradient potential $A$ is attainable in practice, for instance by using Gaussian beams whose center deviates from the atomic COM trajectory, or by the tilted potential that is widely applied in the technique of laser-assisted tunneling \cite{Aidelsburger2013prl,Miyake2013prl}. In a steady transport case, the atomic current is incompressible along the trajectory direction. The density per unit length along the $z$ direction can thus be obtained as $n_{\lambda} = \int |c_{\lambda}\psi({\bf r})|^2 d x d y$, where the spatial distribution $\psi({\bf r})$ has been given in Eq.(\ref{eq-inplane-wave}). In particular, for spin-$\uparrow$ atoms, it is expressed as \begin{eqnarray} n_\uparrow &= \int \Big\{ [F({\bf r})]^2\frac{|\hat{\Omega}_1|^2}{2\hat{\Omega}_R^2}[\cos(2\hat{\Omega}_Rt/\hbar)-4\cos(\hat{\Omega}_Rt/\hbar)+3] \nonumber\\ & + F({\bf r})\frac{\hat{\Omega}_1+\hat{\Omega}_1^*}{\hat{\Omega}_R}[\cos(\hat{\Omega}_Rt/\hbar)+1]\cos\theta + \cos^2\theta \Big\} |\psi({\bf r})|^2 d x d y \,. \label{eq-n-up} \end{eqnarray} We suppose the spatial scale of the atomic cloud is much larger than the laser wavelengths, i.e. $k_1l_0,k_2l_0\gg1$. By using the following mathematical relation, \begin{equation} \frac{1}{\sqrt{\pi c}}\int_{-\infty}^{+\infty} \cos(ax+b)e^{-x^2/c}d x = \cos(b)e^{-a^2c/4} \,, \end{equation} the rapidly modulated spatial terms such as $\cos(k_1x)$ and $\cos(k_2x+\varphi)$ in $F({\bf r})$ of Eq.(\ref{eq-n-up}) average out and are exponentially suppressed when integrating out the spatial coordinates.
Then we have \begin{equation} n_\uparrow = K_2(t)\Omega_1^2 + K_1(t)\Omega_1^2\cos^2\theta + \cos^2\theta \,,\label{eq-n-evol-up} \end{equation} where the time-dependent functions are defined as \begin{eqnarray} K_1(t) &= \frac{1}{\Omega_R^2}[ \cos(\Omega_Rt/\hbar)e^{-t^2/\tau_c^2} - 1] \,, \label{eq-k-func-1}\\ K_2(t) &= \frac{\mathcal{F}}{2\Omega_R^2}[ \cos(2\Omega_Rt/\hbar)e^{-4t^2/\tau_c^2} - 4\cos(\Omega_Rt/\hbar)e^{-t^2/\tau_c^2} + 3] \,, \label{eq-k-func-2} \end{eqnarray} and $\mathcal{F}=(\Omega_1^2\cos^2\theta + \Omega_2^2\sin^2\theta)/(2\Omega_R^2)$. We have denoted $\Omega_{1,2}=M_{1,2}({\bf r}_{\rm cm})$ and $\Omega_{R}=\hat{\Omega}_R({\bf r}_{\rm cm})$. The decay time $\tau_c$ is defined as \begin{equation} \tau_c = 2\hbar/(Al_0) \,. \end{equation} Likewise, the density evolutions of spin-$\downarrow$ and excited-state atoms are obtained as \begin{eqnarray} n_\downarrow &= K_2(t)\Omega_2^2 + K_1(t)\Omega_2^2\sin^2\theta + \sin^2\theta \,, \label{eq-n-evol-down}\\ n_e &= \frac{\mathcal{F}}{2} [ 1-\cos(2\Omega_Rt/\hbar)e^{-4t^2/\tau_c^2} ] \,. \label{eq-n-evol-e} \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{resonance.eps} \caption{(a)-(b) The evolutions of atomic densities at resonance condition. We set $\theta=\pi/5$ in (a) and $\pi/3$ in (b). Other parameters are $(\Omega_1,\Omega_2)=(3.0,4.0)\hbar\omega_{\rm trap}$, and $\tau_c^{-1}=0.3\omega_{\rm trap}$. The regions of the atomic cloud interacting with lasers are highlighted in gray. (c) The polarized angles of the emulated light as functions of $\theta$: $\phi_{\rm in}$ (blue-solid line), $\phi_{\rm sc}$ at $(\Omega_1,\Omega_2)=(1.0,4.0)\hbar\omega_{\rm trap}$ (red-dashed line), and $\phi_{\rm sc}$ at $(3.0,4.0)$ (green-dash-dotted line).} \label{fig-res1} \end{figure} From Eqs.(\ref{eq-n-evol-up}), (\ref{eq-n-evol-down}) and (\ref{eq-n-evol-e}), one can see that in the presence of the laser field gradient $A$, the Rabi oscillations are exponentially suppressed. Similar phenomena have been evidenced experimentally, albeit in a two-level system \cite{Daniel2013pra}. The atomic cloud will evolve to a steady state in which the densities of spin $\uparrow$ and $\downarrow$ saturate to \begin{equation} \cases{ & $n_\uparrow(t\rightarrow \infty) = \cos^2\theta - \frac{\Omega_1^2}{\Omega_R^2}\cos^2\theta + \frac{3\mathcal{F}}{2}\frac{|\Omega_1|^2}{\Omega_R^2}$ \\ & $n_\downarrow(t\rightarrow \infty) = \sin^2\theta - \frac{\Omega_2^2}{\Omega_R^2}\sin^2\theta + \frac{3\mathcal{F}}{2}\frac{|\Omega_2|^2}{\Omega_R^2}$ } \,.\label{eq-steay-density} \end{equation} The dynamic evolutions are shown in Figure \ref{fig-res1}(a) and (b) for different initial setups. For simplicity, we have assumed that the atomic cloud in motion enters the laser region at $t=0$, and leaves it after the cloud has fully evolved to the steady state. This can be guaranteed by preparing the width of the laser region $L> v_{\rm cm}\tau_c$. As we use the bosonic atoms to represent the light field, the polarized angle $\phi$ of the emulated light field is defined by the atomic densities, \begin{equation} \phi = \tan^{-1}[n_\downarrow(t)/n_\uparrow(t)] \,, \label{eq-phi} \end{equation} which is time dependent. In particular, the polarized angle of the incident light is expressed as $\phi_{\rm in}=\tan^{-1}(\tan^2\theta)$, while that of the scattered light is calculated as $\phi_{\rm sc}=\tan^{-1}[n_\downarrow(\infty)/n_\uparrow(\infty)]$ (c.f. Eq.(\ref{eq-steay-density})).
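The steady-state expressions are straightforward to evaluate numerically. As a minimal sketch (in Python, with all field strengths in units of $\hbar\omega_{\rm trap}$), the following code computes $\phi_{\rm in}$ and $\phi_{\rm sc}$ from Eqs.~(\ref{eq-steay-density}) and (\ref{eq-phi}) for the parameter sets of Figure \ref{fig-res1}(c):
\begin{verbatim}
import numpy as np

def phi_in(theta):
    """Polarized angle of the incident light, tan^-1(tan^2 theta)."""
    return np.arctan(np.tan(theta)**2)

def phi_sc(theta, O1, O2):
    """Polarized angle of the scattered light, Eq. (eq-steay-density)."""
    OR2 = O1**2 + O2**2                       # Rabi frequency squared
    F = (O1**2 * np.cos(theta)**2 + O2**2 * np.sin(theta)**2) / (2 * OR2)
    n_up = np.cos(theta)**2 * (1 - O1**2 / OR2) + 1.5 * F * O1**2 / OR2
    n_dn = np.sin(theta)**2 * (1 - O2**2 / OR2) + 1.5 * F * O2**2 / OR2
    return np.arctan2(n_dn, n_up)

theta = np.linspace(0.0, np.pi / 2, 91)
for O1, O2 in [(1.0, 4.0), (3.0, 4.0)]:       # in units of hbar*omega_trap
    rot = phi_sc(theta, O1, O2) - phi_in(theta)
    print(f"(O1,O2)=({O1},{O2}): max |rotation| = "
          f"{np.degrees(np.abs(rot).max()):.1f} deg")
\end{verbatim}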
In Figure \ref{fig-res1}(c), we can see that the polarized angle is changed after the light passes through the emulated medium, exhibiting the manifest feature of MOFE. The signal of the artificial MOFE (i.e. the rotated polarized angle) not only depends on the parameter $\theta$ of the initial setup, but is also controllable by the laser field strengths $\Omega_{1,2}$. \section{Detuned case} \label{sec-detuned} The resonance condition used in the above discussion introduces additional heating, which is detrimental to practical experiments, e.g. by suppressing the lifetime of ultracold atoms \cite{Ketterle1999colle}. However, we remark that the resonance condition in the $\Lambda$ system is not necessary; instead, the proposal still works when the laser-atom interaction is prepared with a detuning $\Delta_1=\Delta_2=\Delta\neq0$. This has the crucial advantage that, in the far-detuned regime (i.e. $\Delta\gg |\hat{\Omega}_{1,2}({\bf r})|$), the heating effect can be prominently suppressed, thereby facilitating the realization of the proposal. In the detuned case, the evolutions of the atomic densities for spin $\uparrow$ and $\downarrow$ share the same forms of Eqs.(\ref{eq-n-evol-up}) and (\ref{eq-n-evol-down}), but the time-dependent functions are instead rewritten as (see \ref{app-detuned-case}) \begin{eqnarray} K_1(t) &= \frac{1}{\Omega_R^++\Omega_R^-}\sum_{\alpha=\pm}\frac{\cos(\Omega_R^\alpha t/\hbar)e^{-t^2/\tau_{c\alpha}^2}-1}{\Omega_R^\alpha} \,, \\ K_2(t) &= \sum_{\alpha=\pm}\frac{2\mathcal{F}'}{|\Omega_R^\alpha|^2} +\frac{2\mathcal{F}'}{\Omega_R^+\Omega_R^-}\cos[(\Omega_R^++\Omega_R^-)t/\hbar]e^{-4t^2/\widetilde{\tau}_c^2} \nonumber\\ &-\sum_{\alpha,\alpha'}\frac{2\mathcal{F}'}{\Omega_R^{\alpha}\Omega_R^{\alpha'}} \cos(\Omega_R^\alpha t/\hbar)e^{-t^2/\tau_{c\alpha}^2} + \frac{2\mathcal{F}'}{\Omega_R^{+}\Omega_R^{-}} \,. \end{eqnarray} Here the Rabi oscillations are split into two branches whose frequencies are expressed as $\hat{\Omega}_R^{\pm}({\bf r})=\Delta/2 \pm \sqrt{|\hat{\Omega}_1({\bf r})|^2+|\hat{\Omega}_2({\bf r})|^2+\Delta^2/4}$. The dimensionless constant is $\mathcal{F}' = (\Omega_{1}^2\cos^2\theta + \Omega_{2}^2\sin^2\theta)/[2(\Omega_R^++\Omega_R^-)^2]$. We have denoted $\Omega_R^\pm = \pm\hat{\Omega}_R^\pm({\bf r}_{\rm cm})$, $A_\pm = \pm\nabla_x \hat{\Omega}_R^\pm({\bf r}_{\rm cm})$, $\tau_{c\pm} = 2\hbar/(A_\pm l_0)$, and $\widetilde{\tau}_c^{-1} = \tau_{c+}^{-1} + \tau_{c-}^{-1}$. It is easy to demonstrate that $K_{1,2}(t)$ reduce to the forms given in Eqs.(\ref{eq-k-func-1}) and (\ref{eq-k-func-2}) at resonance $\Delta=0$. The evolutions are plotted in Figure \ref{fig-res2}(a) and (b). As at resonance, the polarized angle of the incident light is shifted after passing through the medium, as shown in Figure \ref{fig-res2}(c). We comment that the decay times $\tau_{c+}$ and $\tau_{c-}$ are indeed identical. This is because the spatial dependence of $\hat{\Omega}_{R}^{\pm}({\bf r})$ originates from the laser field modes $M_{1,2}({\bf r})$, and hence their gradients $A_\pm$ are equal to each other. Comparing Figure \ref{fig-res1}(c) and Figure \ref{fig-res2}(c), we find that the rotated polarized angle is insensitive to the detuning $\Delta$. This is because in the steady state, $\Delta$ only affects the Rabi frequencies $\hat{\Omega}_{R}^{\pm}({\bf r})$, which are nearly canceled out in the calculation using Eq.(\ref{eq-phi}).
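This weak dependence can be checked directly from the detuned $K_{1,2}(t)$: in the long-time limit all exponentially damped terms drop out, leaving $K_1(\infty)=-1/(\Omega_R^+\Omega_R^-)$ and $K_2(\infty)=2\mathcal{F}'(1/\Omega_R^{+2}+1/\Omega_R^{-2})+2\mathcal{F}'/(\Omega_R^+\Omega_R^-)$. A minimal numerical sketch (field strengths in units of $\hbar\omega_{\rm trap}$):
\begin{verbatim}
import numpy as np

def phi_sc_detuned(theta, O1, O2, Delta):
    """Scattered polarized angle in the detuned case, long-time limit."""
    root = np.sqrt(O1**2 + O2**2 + Delta**2 / 4)
    Op, Om = Delta / 2 + root, -Delta / 2 + root   # Omega_R^+, Omega_R^-
    Fp = (O1**2 * np.cos(theta)**2 + O2**2 * np.sin(theta)**2) \
        / (2 * (Op + Om)**2)
    K1 = -1.0 / (Op * Om)                          # damped terms dropped
    K2 = 2 * Fp * (1 / Op**2 + 1 / Om**2) + 2 * Fp / (Op * Om)
    n_up = K2 * O1**2 + K1 * O1**2 * np.cos(theta)**2 + np.cos(theta)**2
    n_dn = K2 * O2**2 + K1 * O2**2 * np.sin(theta)**2 + np.sin(theta)**2
    return np.arctan2(n_dn, n_up)

theta, O1, O2 = np.pi / 5, 3.0, 4.0
for Delta in [0.0, 13.0, 50.0]:
    print(f"Delta = {Delta:5.1f}: phi_sc = "
          f"{np.degrees(phi_sc_detuned(theta, O1, O2, Delta)):.2f} deg")
\end{verbatim}
Note that $\Omega_R^+\Omega_R^-=\Omega_1^2+\Omega_2^2$ is independent of $\Delta$, so the residual $\Delta$ dependence of $\phi_{\rm sc}$ enters only through $K_2(\infty)$ and remains weak.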
\begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{detune.eps} \caption{(a)-(b) The evolutions of atomic densities with a detuning $\Delta=13.0\hbar\omega_{\rm trap}$. We set $\theta=\pi/5$ in (a) and $\pi/3$ in (b). Other parameters are $(\Omega_1,\Omega_2)=(3.0,4.0)\hbar\omega_{\rm trap}$, and $\tau_{c\pm}^{-1}=0.3\omega_{\rm trap}$. The regions of the atomic cloud interacting with lasers are highlighted in gray. (c) The polarized angles of the emulated light as functions of $\theta$: $\phi_{\rm in}$ (blue-solid line), $\phi_{\rm sc}$ at $(\Omega_1,\Omega_2)=(1.0,4.0)\hbar\omega_{\rm trap}$ (red-dashed line), and $\phi_{\rm sc}$ at $(3.0,4.0)$ (green-dash-dotted line).} \label{fig-res2} \end{figure} \section{Discussions} \label{sec-discussion} \subsection{Non-condensed components} In general, the Bose gas is composed of not only the BEC component but also a non-condensed one. This is usually evidenced by the contrast of condensate densities in spatial and momentum spaces \cite{Anderson1995sci,Davis1995prl,Bradley1995prl}. In the plane normal to the trajectory direction, the spatial profile of the non-condensed wave function can be given by $\psi_{\rm nc}({\bf r})=e^{-(x^2+y^2)/(2l_{\rm nc}^2)}/(\sqrt{\pi}\, l_{\rm nc})$ with $l_{\rm nc}=\sqrt{k_BT/(m\omega_{\rm trap}^2)}$ at temperature $T$. Similar to the BEC case, the laser field gradients also lead to an exponential damping in the evolution of the atomic densities. However, the decay times are approximately estimated as $\tau_{c\pm}^{\rm nc}=\tau_{c\pm}l_0/l_{\rm nc}$ instead. This implies that the densities of the non-condensed component evolve to the steady state faster at higher temperature. The presence of the non-condensed component does not alter the results or arguments obtained above. This can be explained as follows. Outside the laser-atom interacting region, there is no coupling between the two spins. The system of each spin reduces to an ideal Bose gas confined in a two-dimensional harmonic trap potential. It can be demonstrated (see \ref{app-tube-BEC}) that, in a bosonic system composed of $N_{\rm total}$ atoms, the atom number of the non-condensed component $N_{\rm nc}$ is determined by $N_{\rm nc}/N_{\rm total}=(T/T_c)^{5/2}$ below $T_c$. Here $T_c$ stands for the critical temperature at which the BEC vanishes. We find that $N_{\rm nc}$ depends only on the temperature $T$, and is proportional to the BEC number: $N_{0}=N_{\rm total}-N_{\rm nc}=[(T_c/T)^{5/2}-1]N_{\rm nc}$. Therefore, the density ratio between opposite spins in the non-condensed component is identical to the result obtained for the BEC. This reveals that the polarized angles of the two components evolve in the same way, yet are damped with different decay times. \subsection{Interaction effect} The theoretical results obtained in the above sections are based on single-particle properties. By choosing proper atom samples, the intrinsic inter-atomic interaction originating from the van der Waals potential can be weak and negligible in the transport process. For example, it is known that the scattering length of $^{88}$Sr atoms is approximately $-2a_0$, with $a_0$ being the Bohr radius \cite{deEscobar2008pra,Stellmer2009prl}. The use of $^{88}$Sr can decrease the interaction effect to nearly zero, so that the obtained results maintain their accuracy.
On the other hand, we remark that the principal idea of an MOFE emulator in this proposal works as long as the spin-imbalanced densities of the final state explicitly depend on the initial setup. Under a weak interaction like the contact one, the atom number and the spin imbalance are both conserved. For this reason, the emulation and observation of MOFE obtained from the atomic densities will still be valid when the single-particle properties dominate the physics of the system. \subsection{Spontaneous emission effect} The excited state $|e\rangle$ is occupied even after the atoms evolve to the steady state (c.f. Eq.(\ref{eq-wavefunc-evol})). As the atomic density of each spin characterizes the amplitude of the respective polarized light field, the residence number $n_e$ can be used to represent the absorbance ratio of the emulated medium. However, the ubiquitous spontaneous emission of the atoms will lead to decay from the excited state $|e\rangle$ to the two spin states that host the lower energies. In the $\Lambda$ system of the proposal, there are two dressed states that are mutually orthogonal: the bright one $|\psi_{B}\rangle = \Omega_1/\Omega_R\mbox{$|\uparrow\rangle$} + \Omega_2/\Omega_R\mbox{$|\downarrow\rangle$}$, which is coupled to $|e\rangle$ through the laser fields, and the dark one $|\psi_D\rangle = -\Omega_2/\Omega_R\mbox{$|\uparrow\rangle$} + \Omega_1/\Omega_R\mbox{$|\downarrow\rangle$}$, which is decoupled from $|e\rangle$ and $|\psi_B\rangle$. Owing to the spontaneous emission, the atoms eventually evolve to the dark state $|\psi_D\rangle$. This dynamic evolution is known as coherent population trapping \cite{Scully1997book} and is widely applied in laser cooling \cite{Aspect1988prl,Aspect1989josab}. The spin imbalance of the dark state, i.e. the polarized angle of the scattered light, is solely determined by the laser field strengths $\Omega_{1,2}$ and is independent of the polarized angle of the incident light. In that case, the laser-atom region plays the role of a polarizer: it filters the light with a specific polarized angle $\phi=\tan^{-1}(\Omega_1^2/\Omega_2^2)$. Therefore, in order to realize an emulator of MOFE, the spontaneous emission effect needs to be suppressed. In practice, this can be achieved by decreasing the population occupying $|e\rangle$. By comparing $n_e$ illustrated in Figures \ref{fig-res1} and \ref{fig-res2}, one can easily find that $n_e$ is nearly empty under the far-detuned condition, and thus a suppression of the spontaneous emission effect is anticipated in this case. We note that besides the $\Lambda$-type transition, the use of a single optical field that directly couples the pseudo-spins may also work for the MOFE emulator, but it provides less controllability. \subsection{Experimental implementation} The proposal is readily realizable with current techniques using ultracold atoms. Here we use $^{87}$Rb as an example to estimate the feasibility of the proposal. We choose two hyperfine states of the $5^2$S$_{1/2}$ ground manifold as pseudo-spins. The interaction effect can be suppressed by tuning Feshbach resonances. By setting $\omega_{\rm trap}\approx 2\pi \times 200$Hz, the characteristic lengths are evaluated as $l_0\approx 0.76\mbox{$\mu$m}$ and $l_{\rm nc}\approx 2.4\mbox{$\mu$m}$ at a temperature of 100nK. If we choose the laser field gradient as $A\approx 200$kHz/mm, the decay times are obtained as $\tau_c\approx 2.1$ms and $\tau_c^{\rm nc}\approx 0.65$ms.
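These numbers follow directly from the definitions of $l_0$, $l_{\rm nc}$ and $\tau_c$, as the following minimal cross-check shows (here the quoted gradient $A\approx 200\,$kHz/mm is interpreted as $\hbar\times 2\pi\times 200\,$kHz per mm, which is an assumption about the units):
\begin{verbatim}
import numpy as np

hbar, kB = 1.0546e-34, 1.3807e-23      # SI units
m = 87 * 1.6605e-27                    # mass of 87Rb [kg]
w = 2 * np.pi * 200.0                  # trap frequency [rad/s]
T = 100e-9                             # temperature [K]
A = hbar * 2 * np.pi * 200e3 / 1e-3    # field gradient, 200 kHz/mm [J/m]

l0 = np.sqrt(hbar / (m * w))           # condensate length
lnc = np.sqrt(kB * T / (m * w**2))     # non-condensed length
tau_c = 2 * hbar / (A * l0)            # decay time of the BEC component
tau_nc = tau_c * l0 / lnc              # decay time, non-condensed part

print(f"l0     = {l0 * 1e6:.2f} um")      # ~0.76 um
print(f"l_nc   = {lnc * 1e6:.2f} um")     # ~2.4 um
print(f"tau_c  = {tau_c * 1e3:.2f} ms")   # ~2.1 ms
print(f"tau_nc = {tau_nc * 1e3:.2f} ms")  # ~0.65 ms
\end{verbatim}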
Noticing that the Rabi oscillations decay exponentially as $e^{-t^2/\tau_c^2}$, the system will evolve to the steady state within a few milliseconds. The Faraday rotation illustrated in Figures \ref{fig-res1}(c) and \ref{fig-res2}(c) can be detected by preparing the laser fields $\Omega_{1,2}$ at the order of $\hbar\omega_{\rm trap}$. \section{Conclusions} \label{sec-conclusion} In summary, we propose a scheme for the quantum emulation of MOFE, which has so far been difficult to evidence in practical experiments. The core of the quantum simulation relies on the analogy between the classical light field and the bosonic atomic cloud. Our proposal extends the classical concept of MOFE to ultracold atomic physics, and provides an alternative perspective for understanding the laser-atom interaction in atomic ensembles at a macroscopic level. The feasible measurement, with its high controllability and distinguished signals, facilitates the observation of the artificial MOFE, and unambiguously paves the way for quantum emulation and exploration of MOFE. The present work has focused on the physics of the homogeneous laser-atom interaction. If we design a spatially-dependent or spin-dependent structure for such an interplay, it is known to support a series of effective external fields associated with rich physics \cite{Dalibard2011rmp,Goldman2014rpp}. For instance, based on existing techniques \cite{Lin2016jpb}, it is possible to associate the rotated polarized angle with a nonlinear dependence on external artificial fields, i.e. the emulation of the magneto-optic Voigt effect. Another potential application of the present work is to relate the artificial transverse conductivity to the intrinsic topological properties of the system, revealing the possibility of a quantized Faraday rotation. These represent an interesting line of future research. \section*{Acknowledgements} We acknowledge S. Z. Zhang for helpful discussions. This work was supported by the Key-Area Research and Development Program of GuangDong Province (Grant No. 2019B030330001), the National Key Research and Development Program of China (Grant No. 2016YFA0301800), the GRF (No.: HKU173057/17P) and CRF (No.: C6005-17G) of Hong Kong.
\titlespacing\section{0pt}{12pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsection{0pt}{10pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsubsection{0pt}{8pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \usepackage{graphics} \usepackage{multirow} \usepackage{hhline} \usepackage{subfigure} \title{Towards Visual Distortion in Black-Box Attacks} \usepackage{authblk} \renewcommand*{\Authfont}{\bfseries} \author{Nannan Li} \author{Zhenzhong Chen\thanks{\tt{zzchen@ieee.org}}} \affil{School of Remote Sensing and Information Engineering, Wuhan University} \begin{document} \twocolumn[ \begin{@twocolumnfalse} \maketitle \begin{abstract} Constructing adversarial examples in a black-box threat model injures the original images by introducing visual distortion. In this paper, we propose a novel black-box attack approach that can directly minimize the induced distortion by learning the noise distribution of the adversarial example, assuming only loss-oracle access to the black-box network. The quantified visual distortion, which measures the perceptual distance between the adversarial example and the original image, is introduced in our loss, whilst the gradient of the corresponding non-differentiable loss function is approximated by sampling noise from the learned noise distribution. We validate the effectiveness of our attack on ImageNet. Our attack results in much lower distortion when compared to the state-of-the-art black-box attacks and achieves $100\%$ success rate on InceptionV3, ResNet50 and VGG16bn. The code is available at \url{https://github.com/Alina-1997/visual-distortion-in-attack}. \end{abstract} \vspace{0.35cm} \end{@twocolumnfalse} ] \section{Introduction} Adversarial attacks are a well-recognized threat to existing Deep Neural Network (DNN) based applications. An attack injects a small amount of noise into a sample (e.g., image, speech, language) but degrades the model performance drastically \cite{audio20,KurakinGB17,practical17}. With the continuous improvement of DNNs, such attacks could cause serious consequences in practical conditions where DNNs are used. According to \cite{survey18,towards}, adversarial attacks have been a practical concern in real-world problems, ranging from cell-phone camera attacks to attacks on self-driving cars. According to the information that an adversary has of the target network, existing attacks roughly fall into two categories: white-box attacks, where the adversary knows all the parameters of the target network, and black-box attacks, which have only limited access to the target network. Each category can be further divided into several subcategories depending on the adversarial strength \cite{PapernotLimit16}. The attack proposed in this paper belongs to the class of loss-oracle-based black-box attacks, where the adversary can obtain the output loss for supplied inputs. In real-world scenarios, it is sometimes difficult or even impossible to have full access to certain networks, which makes the black-box attack practical and attracts more and more attention. A black-box attack has very limited or no information about the target network and thus is more challenging to perform. In the $l_p$-bounded setting, a black-box attack is usually evaluated on two aspects: number of queries and success rate. In addition, recent work \cite{jordan2019quantifying} shows that visual distortion in the adversarial examples is also an important criterion in practice.
Even under a small $l_\infty$ bound, perturbing pixels in the image without considering the visual impact could make the distorted image very annoying. As shown in Fig. \ref{fig:motiv}, an attack \cite{ilyas2018prior} under a small noise level ($l_\infty \leq 0.05$) causes relatively large visual distortion, and the perturbed image is easily distinguishable from the original one. Therefore, under the assumption that the visual distortion caused by the noise is related to the spatial distribution of the perturbed pixels, we take a different view from previous work and focus on \emph{explicitly} learning a noise distribution based on its corresponding visual distortion. In this paper, we propose a novel black-box attack that can directly minimize the induced visual distortion by learning the noise distribution of the adversarial example, assuming only loss-oracle access to the black-box network. The quantified visual distortion, which measures the perceptual distance between the adversarial example and the original image, is introduced in our loss, while the gradient of the corresponding non-differentiable loss function is approximated by sampling noise from the learned noise distribution. The proposed attack can achieve a trade-off between visual distortion and query efficiency by introducing the weighted perceptual distance metric in addition to the original loss. Theoretically, we prove the convergence of our model under a convex or non-convex loss function. The experiments demonstrate the effectiveness of our attack on ImageNet. Our attack results in much lower distortion than the other attacks and achieves $100\%$ success rate on InceptionV3, ResNet50 and VGG16bn. In addition, it is shown that our attack is valid even when it is only allowed to perturb pixels outside the target object in a given image. Our contributions are as follows: \begin{itemize} \item We are the first to introduce perceptual loss in a \emph{non-differentiable} way for the generation of less-distorted adversarial examples. The proposed method can also achieve a trade-off between visual distortion and query efficiency by using the weighted perceptual distance metric in addition to the original loss. \item Theoretically, we prove the convergence of our model. \item Through extensive experiments, we show that our attack results in much lower distortion than the other attacks. \end{itemize} \begin{figure*}[!t] \centering \includegraphics[width=0.75\textwidth]{motivb.pdf} \caption{Adversarial examples on ImageNet with bounded noise ${||\delta||}_\infty \leq 0.05$. The first image is the original unperturbed image. The following examples are from \cite{ilyas2018prior} and our method, respectively. Higher Structural SIMilarity (SSIM) and lower Learned Perceptual Image Patch Similarity (LPIPS) indicate less visual distortion.} \label{fig:motiv} \end{figure*} \section{Related Work} Recent research on adversarial attacks \cite{there20, pca20,pso20} has made advanced progress in developing strong and computationally efficient adversaries. In the following, we briefly introduce existing attack techniques in both the white-box and black-box settings. \subsection{White-box Attack} In a white-box attack, the adversary knows the details of a network, including the network structure and its parameter values. Goodfellow \emph{et al.} \cite{GoodfellowSS14} proposed the fast gradient sign method (FGSM) to generate adversarial examples. It is computationally efficient and serves as a baseline for attacks with additive noise.
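For reference, the fast gradient sign method admits a very compact implementation. A minimal PyTorch sketch (the classifier \texttt{model}, input batch \texttt{x} with pixels in $[0,1]$ and labels \texttt{y} are assumed to be given):
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """One-step FGSM: x_adv = x + eps * sign(grad_x L(f(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # l_inf-bounded additive noise
    return x_adv.clamp(0.0, 1.0).detach()  # keep a valid pixel range
\end{verbatim}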
In \cite{functional19}, a functional adversarial attack is introduced that applies functional noise instead of additive noise to the image. Recently, Jordan \emph{et al.} \cite{jordan2019quantifying} stressed quantifying the perceptual distortion of adversarial examples by leveraging perceptual metrics to define an adversary. Different from our method, which directly optimizes the metric, their model conducts a search over the parameters of several composed attacks. There are also attacks that sample noise from a noise distribution \cite{r12zheng2019distributionally,r13wang2020hamiltonian}, on the condition that gradients from the white-box network are accessible. Specifically, \cite{r12zheng2019distributionally} utilizes particle approximation to optimize a convex energy function. \cite{r13wang2020hamiltonian} formulates the attack problem as generating a sequence of adversarial examples in a Hamiltonian Monte Carlo framework. In summary, white-box attacks are hard to detect or defend against \cite{advCarlin}. At the same time, however, they suffer from the label-leaking and gradient-masking problems \cite{KurakinGB17}. The former causes adversarially trained models to perform better on adversarial examples than on original images, and the latter neutralizes the gradient that is useful to adversaries. The prerequisite of full access to a network in white-box attacks is also sometimes difficult to satisfy in real-world scenarios. \subsection{Black-box Attack} A black-box attack considers the target network as a black box and has limited access to the network. We discuss loss-oracle based attacks here, where the adversary assumes only loss-oracle access to the black-box network. \paragraph{Query Efficient Attacks.} Attacks of this kind roughly fall into three categories: 1) Methods that estimate the gradient of the black-box. Some methods estimate the gradient by sampling around a certain point, which formulates the task as a problem of continuous optimization. Tu \emph{et al.} \cite{TuTC0ZYHC19} searched for perturbations in the latent space of an auto-encoder. \cite{r1zhang2020dual} utilizes feedback knowledge to alter the search directions for efficient attacks. Ilyas \emph{et al.} \cite{ilyas2018prior} exploited prior information about the gradient. Al-Dujaili and O'Reilly \cite{there20} reduced query complexity by estimating just the sign of the gradient. In \cite{r2linonlinear,r3_5huang2019black}, the proposed methods perform the search in a constructed low-dimensional space. \cite{ld2019} shares similarity with our method as it also \emph{explicitly} defines a noise distribution. However, the distribution in \cite{ld2019} is assumed to be an isotropic normal distribution without considering visual distortion, whilst our method does not assume the distribution to be of a specific form. We compare with their method in detail in the experiments. Other approaches in this category develop a substitute model \cite{practical17,ChengDPSZ19,papernot2016transferability} to approximate the behavior of the black-box. By exploiting the transferability of adversarial attacks \cite{GoodfellowSS14}, the white-box attack technique applied to the substitute model can be transferred to the black-box. These approaches assume only label-oracle access to the target network, whereas training the substitute model requires either access to the training dataset of the black-box or the collection of a new dataset. 2) Methods based on discrete optimization.
In \cite{MoonAS19,there20}, an image is divided into regular grids and the attack is performed and refined on each grid. Meunier \emph{et al.} \cite{meunier2019yet} adopted the tiling trick by adding the same noise to small square tiles in the image. 3) Methods that leverage evolutionary strategies or random search \cite{meunier2019yet,andriushchenko2019square}. In \cite{andriushchenko2019square}, the noise value is updated by a square-shaped random search at each query. Meunier \emph{et al.} \cite{meunier2019yet} developed a set of attacks based on evolutionary algorithms, covering both continuous and discrete optimization. \paragraph{Attacks that Consider Visual Impact.} Query-efficient black-box attacks usually do not consider the visual impact of the induced noise, so the adversarial example can suffer from significant visual distortion. Similar to our work, there is a line of research that addresses the perceptual distance between the adversarial examples and the original image. \cite{r10xiao2018generating,r11zhang2019generating} introduce Generative Adversarial Network (GAN) based adversaries, where the gradient of the perceptual distance in the generator is computed through backpropagation. \cite{r6gragnaniello2019perceptual,r7rozsa2016adversarial} also require the adopted perceptual distance metric to be differentiable. Computing the gradients of a complex perceptual metric at each query might be computationally expensive \cite{r18gao2015learning}, and is not possible for some rank-based metrics \cite{r19ma2016no}. Different from these methods, our approach treats the perceptual distance metric as a black box, which saves the effort of computing its gradients, and minimizes the distance by sampling from a learned noise distribution. On the other hand, \cite{r8zhao2017generating,r9wang2020transferable} present semantic perturbations for adversarial attacks. The produced noise map is semantically meaningful to humans, whilst the image content of the adversarial example is distinct from that of the original image. Different from \cite{r8zhao2017generating,r9wang2020transferable}, which focus on \emph{semantic} distortion, our method addresses \emph{visual} distortion and aims to generate adversarial examples that are visually indistinguishable from the original image. \section{Method} \begin{algorithm}[t] \DontPrintSemicolon \LinesNumbered \caption{Our Algorithm} \KwIn{image $x$, maximum norm $\epsilon$, proportion $q$ of the resampled noise, learning rate $\eta$} \KwOut{adversarial example $x+\delta$} Initialize noise distribution $p_{\theta_0}=\text{softmax}({\theta}_0)$ and noise ${{\delta}_0}$ \; \For{$\mathrm{step}$ $t$ $\mathrm{in}$ $\{1,...,n\}$}{ $T^*= {\text{argmin}_{T = 0,1,...,t - 1}}L(x,x + {\delta _T})$ \; Compute baseline $b = {L}(x,x + {\delta}_{T^*} )$ \; Update $\theta$ using Eq. (\ref{eq:detaE}), ${{\theta}_t} \leftarrow {{\theta}_{t-1}}- \eta{\nabla}{F}(\theta _{t-1})$\; Sample ${\delta}_t$, ${\delta _{t}} \leftarrow {\text{resample}}{({\delta _{T^*}},q;{\delta _{t-1}})_{{\delta _{t-1}}\sim{p_{{\theta _{t-1}}}}}}$ \; \If{$successful{\text{\_}}attack(x,x+{\delta_t})$} {return $x+\delta_t$\;} } \SetKwProg{Def}{def}{:}{} \Def{$successful{\text{\_}}attack(x,x+{\delta_t})$}{ \eIf{${\text{argmax}_{k_1}}{f(x+{\delta _t})_{k_1}}{\neq}{\text{argmax}_{k_2}}{f(x)_{k_2}}$} {return True \;} {return False \;} } \end{algorithm} \subsection{Learning Noise Distribution Based on Visual Distortion} An attack model is an adversary that constructs adversarial examples against a given network.
Let $f:x \to f(x)$ be the target network that accepts an input ${x} \in {\mathbb{R}^n}$ and produces an output ${f(x)} \in {\mathbb{R}^m}$. $f(x)$ is a vector and ${f(x)}_k$ represents its $k_\text{th}$ entry, denoting the score of the $k_\text{th}$ class. $y={ \text{argmax}_k} {{f(x)}_k}$ is the predicted class. Given a valid input ${x}$ and the corresponding predicted class $y$, an adversarial example \cite{SzegedyZSBEGF13} $x^\prime$ is similar to $x$ yet results in an incorrect prediction ${ \text{argmax}_k} {{f(x^\prime)}_k}{\neq}y$. In an additive attack, an adversarial example $x^\prime$ is a perturbed input with additive noise $\delta$ such that $x^\prime=x+{\delta}$. The problem of generating an adversarial example is thus equivalent to producing a noise map $\delta$ that causes a wrong prediction for the perturbed input, i.e., finding ${\delta}$ such that ${\text{argmax}_k}{f(x+{\delta})_k}{\neq}y$. Since this constraint is highly non-linear, the loss function is usually rephrased in a different form \cite{towards}: \begin{equation} L(x,x + \delta )=\max\big(0,f{(x + \delta)_y} - \max{}_{k \ne y}f{(x + \delta )_k}\big) \end{equation} The attack is successful when $L=0$. Note that such a loss does not take the visual impact into consideration, so the adversarial example can suffer from significant visual distortion. In order to constrain the visual distortion caused by the difference between $x$ and $x+\delta$, we add a perceptual distance metric $d(x,x + \delta)$ to the loss function with a predefined hyperparameter $\lambda$: \begin{equation} \begin{aligned} L(x,x + \delta ) =&\max\big{(}0,f{(x + \delta)_y} - \max{}_{k \ne y}f{(x + \delta )_k}\big{)} \\ &+ {\lambda}d(x,x + \delta ) \end{aligned} \label{eq:L} \end{equation} \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{model.pdf} \caption{Framework of the proposed attack.} \label{fig:model} \end{figure*} where smaller $d(x,x + \delta )$ indicates less visual distortion. $d$ can be any metric that measures the perceptual distance between $x$ and $x+\delta$, such as the well-established $1-\text{SSIM}$ \cite{ssim} or LPIPS \cite{ZhangIESW18}. $\lambda$ manages the trade-off between a successful attack and the visual distortion caused by the attack. The effects of $\lambda$ will be further discussed in Section \ref{sec:abl}. Minimizing the above loss function faces the challenge that $L$ is not differentiable, since the black-box adversary does not have access to the gradients of $L$, and the metric $d(x,x+\delta)$ might be calculated in a non-differentiable way. To address this problem, we \emph{explicitly} assume a flexible noise distribution of $\delta$ in a discrete space, in the sense that the noise takes values in a discrete set, each with its own probability. The gradient of $L$ can then be estimated by sampling from this distribution. Suppose that $\delta$ follows a distribution $p_{\theta}$ parameterized by $\theta$, \emph{i.e.}, $\delta \sim {p_\theta}$. For the $j_\text{th}$ pixel in an image, we define its noise distribution as $p_{\theta^j}=\text{softmax}(\theta^j)$, where $\theta^j$ is a parameter vector whose softmax yields the probability of each discrete noise value.
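To make the construction concrete, the following is a minimal NumPy sketch (our illustration, not the authors' released code; \texttt{d} stands in for an arbitrary black-box perceptual metric, images are flattened to $WH$ pixels, and the same noise value is assumed for every channel of a pixel) of the per-pixel distribution $p_{\theta^j}=\text{softmax}(\theta^j)$ and of the loss in Eq.~(\ref{eq:L}):
\begin{verbatim}
import numpy as np

def softmax(theta):            # theta: (W*H, 2N+1) logits
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sample_noise(theta, values, rng):
    # draw one discrete noise value per pixel from p_theta
    p = softmax(theta)
    idx = np.array([rng.choice(values.size, p=row)
                    for row in p])
    return values[idx], idx

def attack_loss(f, x, delta, y, d, lam):
    # Eq. (3): margin term plus weighted perceptual
    # distance; f and d are both treated as black boxes
    s = f(x + delta)           # one query to the network
    margin = s[y] - np.max(np.delete(s, y))
    return max(0.0, margin) + lam * d(x, x + delta)
\end{verbatim}
Here \texttt{values} would be the $2N+1$ grid $\{\epsilon, \epsilon-\epsilon/N, \ldots, -\epsilon\}$, e.g. \texttt{np.linspace(eps, -eps, 2*N+1)}.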
By sampling noise from the distribution $p_{\theta}$, $\theta$ can be learned to minimize the expectation of the above loss such that the attack is successful (\emph{i.e.}, alters the predicted label) and the produced adversarial example is less distorted (\emph{i.e.}, has small $d$): \begin{equation} {\text{minimize}}{\text{ }}\mathbb{E}_{\delta \sim {p_{\theta}}}[L(x,x + \delta )] \end{equation} For the $j_\text{th}$ pixel, we define its noise's sample space to be a set of discrete values ranging from $-\epsilon$ to $\epsilon$: ${\delta^j} \in \{ \epsilon ,\epsilon - \frac{\epsilon }{N},\epsilon - \frac{2\epsilon }{N},\ldots,0,\ldots, - \epsilon \}$, where $N$ is the sampling frequency and $\frac{\epsilon }{N}$ is the sampling interval. The noise value ${\delta}^j$ of the $j_\text{th}$ pixel is sampled from this sample space by following $p_{\theta^j}$, with $p_{\theta^j} \in {\mathbb{R}}^{2N+1}$. Given $W$ and $H$, the width and height of an image, respectively, since each pixel has its own noise distribution $p_{\theta^j}$ of length $2N+1$, the number of parameters for the entire image is $(2N+1)WH$. Note that we do not distinguish between color channels, in order to reduce the size of the sample space; otherwise the number of parameters would be tripled. Thus, the same noise value is sampled for each RGB channel of a pixel. To estimate $\theta$, we adopt the policy gradient \cite{sutton1998reinforcement} to make the above expectation differentiable with respect to $\theta$. Using REINFORCE, we have the differentiable loss function $F(\theta)$: \begin{equation} \begin{aligned} F(\theta)&={\mathbb{E}_{\delta \sim {p_\theta }}}[L(x,x + \delta ) - b] \\ &= (L(x,x + \delta ) - b)\log ({p_\theta}(\delta)) \end{aligned} \label{eq:E} \end{equation} \begin{equation} \begin{aligned} {\nabla}{F}(\theta)&={\nabla _\theta }{\mathbb{E}_{\delta \sim {p_\theta }}}[L(x,x + \delta ) - b]\\ & = (L(x,x + \delta ) - b)(1 - {p_\theta}(\delta)) \end{aligned} \label{eq:detaE} \end{equation} where $b$ is introduced as a \emph{baseline} in the expectation, with the following interpretation: 1) when $L(x,x + \delta )<b$, the sampled noise map $\delta$ returns a low $L$, and its probability ${p_\theta}(\delta)$ increases through gradient descent; 2) when $L(x,x + \delta )=b$, ${\nabla}{F}(\theta)=0$ and ${p_\theta}(\delta)$ remains unchanged; 3) when $L(x,x + \delta)>b$, the sampled noise map $\delta$ returns a high $L$, and its probability ${p_\theta}(\delta)$ decreases through gradient descent. To sum up, $L(x,x + \delta)$ is forced to improve over $b$. At iteration $t$, we choose $b =\min_{T = 0,1,\ldots,t - 1} {L}(x,x + {\delta}_T )$ such that $L$ improves over the minimal loss obtained so far. The above expectation is estimated using a single Monte Carlo sample at each iteration, so the sampling of the noise map $\delta$ is critical. Simply resampling the entire image at iteration $t$ might cause a large variance in the norm of the noise update, \emph{i.e.}, $||{\delta}_t-{\delta}_{t-1}||_2$.
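For illustration, one update step implementing Eqs.~(\ref{eq:E})--(\ref{eq:detaE}) could be sketched as follows (again our schematic code, reusing the hypothetical helpers from the previous listing; for clarity we write the full softmax score function $\nabla_\theta \log p_\theta(\delta)$, of which Eq.~(\ref{eq:detaE}) keeps the component of the sampled entry):
\begin{verbatim}
def reinforce_step(theta, values, x, y, f, d,
                   lam, b, lr, rng):
    # single Monte Carlo sample of the noise map
    delta, idx = sample_noise(theta, values, rng)
    L = attack_loss(f, x, delta, y, d, lam)
    p = softmax(theta)
    # grad of log p_theta(delta) w.r.t. the logits:
    # 1{i = sampled index} - p_i, per pixel
    g = -p
    g[np.arange(idx.size), idx] += 1.0
    theta = theta - lr * (L - b) * g   # descent on F
    return theta, delta, L
\end{verbatim}
Note that one such step redraws the noise of every pixel at once, so consecutive noise maps can differ substantially in norm.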
Therefore, to ensure a small variance, with $T^*= \mathrm{argmin}_{T = 0,1,\ldots,t - 1} {L}(x,x + {\delta}_T )$, only $qWH$ pixels' noise values are resampled in $\delta _{T^*}$, while the remaining $(1-q)WH$ pixels' noise values are kept unchanged: \begin{equation} {\delta _{t+1}} \leftarrow {\text{resample}}{({\delta _{T^*}},q;{\delta _t})_{{\delta _t}\sim{p_{{\theta _t}}}}} \label{eq:resample} \end{equation} The above equation means replacing $qWH$ pixels' noise values in the noise map $\delta _{T^*}$ with those in ${\delta _t}$, which are sampled from the distribution $p_{\theta_t}$. In other words, if $q=0.01$, then only a random $1\%$ of $\delta_{T^*}$ is updated at each iteration. As shown in Fig. \ref{fig:model}, after sampling ${\delta}_t$, the feedback $L(x,x+{\delta}_t)$ from the black-box and the perceptual distance metric decide the update of the distribution $p_{\theta_t}$. The iteration stops when the attack is successful, \emph{i.e.}, $\max(0,f{(x + \delta_t)_y} - \max{}_{k \ne y}f{(x + \delta_t)_k})=0$. \subsection{Proof of Convergence} Ruan \emph{et al.} \cite{RuanHK18} show that feed-forward DNNs are Lipschitz continuous with a Lipschitz constant $K$, for which we have \begin{equation} \forall {t},\;||f(x + {\delta _t}) - f(x + {\delta _{{T^*}}})|{|}_2 \leq K||{\delta _t} - {\delta _{{T^*}}}|{|_2} \label{eq:ineq1} \end{equation} At iteration $t$, since only a small part of the noise map is updated, it can be assumed that \begin{equation} |\max{}_{k \ne y}f{(x + {\delta _t})_k} - \max{}_{k \ne y}f{(x + {\delta _{{T^*}}})_k}| \leq C \label{eq:maxineq} \end{equation} where $C$ is a constant. Suppose that the perceptual distance metric $d$ is normalized to $[0,1]$. Substituting the inequalities (\ref{eq:ineq1}) and (\ref{eq:maxineq}) into our definition of $L$ in Eq. (\ref{eq:L}) yields the following bound: \begin{equation} \begin{aligned} &|L(x,x + {\delta _t}) - L(x,x + {\delta _{{T^*}}})| \\ & \leq K||{\delta _t} - {\delta _{{T^*}}}|{|_2} + C + \lambda \\ & \leq 2KWH\epsilon cq +C+\lambda \end{aligned} \label{eq:12} \end{equation} where $c$ denotes the number of color channels of the image. Ideally, $L(x,x+\delta_t)-L(x,x+\delta_{T^*})$ accurately quantifies the difference of the perturbed image even when only one noise value for just a single pixel at iteration $t$ is different from that at $T^*$. Let $\delta^{ij}$ represent a special noise map whose $j_\text{th}$ pixel's noise value is the $i_\text{th}$ element in its sample space and whose other pixels' noise values are $0$. Note that the length of the sample space for each pixel is $2N+1$. Similarly, ${p_{\theta_t}}(\delta^{ij})$ denotes the probability of the $i_\text{th}$ element in the sample space of the $j_\text{th}$ pixel. By sampling every element in the sample space of the $j_\text{th}$ pixel, we define $l_t^j$ and $p_{\theta _t^j}$ to be vectors: \begin{gather} \forall {j \in \{1,2,...,WH\}}, \; l _t^j = \mathrm{vector}[L(x,x + \delta^{ij}) - L(x,x + {\delta _{{T^*}}})], \notag \\ i = 1,2,...,2N + 1 \end{gather} \begin{gather} \forall {j \in \{1,2,...,WH\}}, \; {p_{\theta _t^j} } = \mathrm{vector}[{p_{\theta_t}}({\delta}^{ij})], \notag \\ i = 1,2,...,2N + 1 \end{gather} Although the above equations are only meaningful in the ideal situation where $L$ can quantify the difference of just one perturbed pixel, we use them for a theoretical proof of convergence.
In the ideal situation, the gradient of the $j_\text{th}$ pixel's parameters can be calculated exactly as \begin{equation} \nabla F(\theta _t^j) = {l_t^j} \cdot ({\mathbf{1}} - p_{\theta _t^j}) \end{equation} According to Eq. (\ref{eq:12}), when the number of resampled pixels is $qWH=1$, we have \begin{equation} |L(x,x + \delta ^{ij}) - L(x,x + {\delta _{{T^*}}})| \leq 2K{\epsilon}c+C+\lambda \label{eq:16} \end{equation} Note that for any ${t_1},{t_2}$ that share the same $T^*$, $l_{t_1}^j$ is equal to $l_{t_2}^j$. Thus, using Eq. (\ref{eq:16}), we have \begin{equation} \begin{aligned} &||\nabla F({\theta _{{t_1}}^j}) - \nabla F({\theta _{{t_2}}^j})|{|_2}\\ & {\leq} (2N + 1) (2K{\epsilon}c+C+\lambda)||{\text{softmax}}({\theta _{{t_1}}^j}) - {\text{softmax}}({\theta _{{t_2}}^j})|{|_2} \label{eq:gradlip} \end{aligned} \end{equation} In practice, we adopt a single Monte Carlo sample instead of sampling every noise value for every pixel, for which $2N+1$ should be replaced by $1$ in the above inequality. The inequality (\ref{eq:gradlip}) thus becomes: \begin{equation} \begin{aligned} & {||}\nabla F({\theta _{{t_1}}^j}) - \nabla F({\theta _{{t_2}}^j})|{|_2}\\ & \leq (2K{\epsilon}c + C + \lambda )||{\text{softmax}}({\theta _{{t_1}}^j}) - {\text{softmax}}({\theta _{{t_2}}^j})|{|_2} \\ &\leq (2K{\epsilon}c + C + \lambda )||{\theta _{{t_1}}^j} - {\theta _{{t_2}}^j}|{|_2} \end{aligned} \end{equation} The softmax function drops out in the last step because it is Lipschitz continuous with Lipschitz constant $1$ \cite{gao2017properties}. Finally, we have the inequality for ${||}\nabla F({\theta _{{t_1}}}) - \nabla F({\theta _{{t_2}}})|{|_2}$: \begin{equation} {||}\nabla F({\theta _{{t_1}}}) - \nabla F({\theta _{{t_2}}})|{|_2} \leq (2K{\epsilon}c + C + \lambda )||{\theta _{{t_1}}} - {\theta _{{t_2}}}|{|_2} \label{eq:detaE-lip} \end{equation} The above inequality proves that $F(\theta)$ is $L$-smooth with Lipschitz constant $2K\epsilon c+ C + \lambda$. If $F(\theta)$ is convex, the exact number of steps that Stochastic Gradient Descent (SGD) takes to converge is $\frac{{(2K\epsilon c+ C + \lambda)\cdot||{\theta _0} - {\theta ^*}||_2^2}}{\xi}$, where $\xi$ is an arbitrarily small tolerable error ($\xi>0$). However, since the deep-network loss $L$ is usually highly non-convex, we need to consider the situation where $F(\theta)$ is non-convex. Let the SGD update be \begin{equation} \theta_{t+1}={\theta_t}-{\eta_t}{g({\theta_t})} \end{equation} where ${\eta_t}$ is the learning rate and ${g({\theta_t})}$ is the stochastic gradient. We assume that the variance of the stochastic gradient is upper bounded by $\sigma^2$: \begin{equation} \mathbb{E}[||{\nabla}{F(\theta)}-g(\theta)|{|_2^2}] \leq \sigma^2 < \infty \end{equation} \begin{table*}[!t] \centering \caption{Ablation results of the perceptual distance metric, {\upshape $\lambda$} and sampling frequency {\upshape $N$}. Smaller {\upshape $1-\text{SSIM}$}, LPIPS and CIEDE2000 indicate less visual distortion.} \begin{tabular}{cccccccc}\toprule Sampling &Perceptual &\multirow{2}{*}{$\lambda$} &Success &\multirow{2}{*}{$1-\text{SSIM}$} &\multirow{2}{*}{LPIPS} &\multirow{2}{*}{CIEDE2000} &Avg. \cr Frequency &Metric &&Rate &&&&Queries \cr\midrule \multirow{9}{*}{$N=1$} &- &0 &\textbf{100\%} &0.091 &0.099 &0.941 &\textbf{356} \cr \cmidrule(r){2-8} &\multirow{4}{*}{$1-\text{SSIM}$} &10 &\textbf{100\%} &0.076 &0.081 &0.741 &401 \cr &&100 &97.4\% &0.036 &0.051 &0.703 &1395\cr &&200 &92.2\% &0.025 &0.040 &0.622 &2534\cr &&dynamic &\textbf{100\%} &\textbf{0.009} &0.009 &\textbf{0.204} &7678\cr \cmidrule(r){2-8} &\multirow{4}{*}{LPIPS} &10 &\textbf{100\%} &0.080 &0.078 &0.762 &450 \cr &&100 &98.1\% &0.049 &0.052 &0.711 &1174\cr &&200 &95.1\% &0.038 &0.045 &0.635 &1928 \cr &&dynamic &\textbf{100\%} &0.015 &\textbf{0.005} &0.277 &6694 \cr\midrule None &$1-\text{SSIM}$ &10 &\textbf{100\%} &0.118 &0.142 &5.936 &426\cr \midrule $N=2$ &$1-\text{SSIM}$ &10 &99.7\% &0.071 &0.074 & 0.846 &520 \cr \midrule $N=5$ &$1-\text{SSIM}$ &10 &99.5\% &0.069 &0.070 &0.877 &665\cr \midrule $N=10$ &$1-\text{SSIM}$ &10 &98.7\% &0.062 &0.075 &0.879 &669 \cr \midrule $N=12$ &$1-\text{SSIM}$ &10 &98.7\% &0.071 &0.075 &0.882 &673\cr \bottomrule \end{tabular} \label{tab:abl} \end{table*} We select ${\eta_t}$ to satisfy \begin{equation} {\sum\limits_{t = 1}^\infty {{\eta _t}} = \infty} \text{ and }{\sum\limits_{t = 1}^\infty {{\eta _t}^2} < \infty } \label{eq:condeta} \end{equation} Condition (\ref{eq:condeta}) can be easily satisfied with a decaying learning rate, e.g., ${\eta _t} = \frac{1}{{\sqrt {t}\,\ln (t + 1) }}$, for which $\sum_t \eta_t$ diverges while $\sum_t \eta_t^2 = \sum_t \frac{1}{t\ln^2(t+1)}$ converges. According to Lemma 1 and Theorem 2 in \cite{r21non-convWeb}, using the $L$-smooth property of $F(\theta)$, $||{\nabla}{F}(\theta_t)||$ goes to $0$ with probability $1$. This means that, with probability $1$, for any $\xi>0$ there exists $N_{\xi}$ such that $||{\nabla}{F}(\theta_t)||{\leq}{\xi}$ for $t{\geq}{N_{\xi}}$. Unfortunately, unlike in the convex case, we do not know the exact number of steps that SGD takes to converge. The above proof simply aims to show theoretically that the proposed method converges in a finite number of steps, although possibly at a rather slow speed. From the ``Avg. Queries'' in the following experiments, we can see that the actual computational cost is affordable and comparable to some of the query-efficient attacks. \begin{figure*}[!t] \centering \includegraphics[width=0.8\textwidth]{sample.pdf} \caption{Adversarial examples under different sampling frequencies. From left to right: the original image and the adversarial examples for $N=1,2,5,10,12$, respectively.} \label{fig:vis_samp} \end{figure*} \section{Experiments} \label{sec:experiments} Following previous work \cite{meunier2019yet,ilyas2018prior}, we validate the effectiveness of our model on the large-scale ImageNet \cite{ILSVRC15} dataset. We use three pretrained classification networks from PyTorch as the black-box networks: InceptionV3 \cite{incep}, ResNet50 \cite{He2015Deep} and VGG16bn \cite{SimonyanZ14a}. The attack is performed on images that were correctly classified by the pretrained network. We randomly select $1000$ images from the validation set for testing, and all images are normalized to $[0,1]$. We quantify our success in terms of the perceptual distance ($1-\text{SSIM}$, LPIPS and CIEDE2000), as we address the visual distortion caused by the attack. Among these metrics, $1-\text{SSIM}$ \cite{ssim} measures the degradation of structural information in the adversarial examples. LPIPS \cite{ZhangIESW18} evaluates the perceptual similarity of two images via the normalized distance between their deep features.
CIEDE2000 \cite{r20zhao2020towards} measures perceptual color distance and was developed by the CIE (International Commission on Illumination). Smaller values of these metrics denote less visual distortion. In addition to $1-\text{SSIM}$, LPIPS and CIEDE2000, the success rate and the average number of queries are also reported, as in most previous work. The average number of queries refers to the average number of requests to the output of the black-box network. We initialize the noise distribution $p_\theta$ to be a uniform distribution and the noise $\delta _0$ to be $0$. The learning rate is $0.01$ and $q$ is set to be $0.01$. In addition, we specify the shape of the resampled noise at each iteration to be a square \cite{meunier2019yet,MoonAS19, andriushchenko2019square}, and adopt the tiling trick \cite{ilyas2018prior,meunier2019yet} with tile size $=2$. The upper bound $\epsilon$ of our attack is set to be $0.05$, as in previous work. \subsection{Ablation Studies} \label{sec:abl} \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{lambda.pdf} \caption{Visualized examples of the proposed attack. From left to right: the original image and the adversarial examples for $\lambda=0, \lambda=10, \lambda=100,\lambda=200$, and dynamic $\lambda$, respectively.} \label{fig:vis_exp} \end{figure*} In the ablation studies, the maximum number of queries is set to $10,000$. The results are averaged over $1000$ test images. In the following, we discuss the trade-off between visual distortion and query efficiency, the effects of using different perceptual distance metrics in the loss function, the results for different sampling frequencies, and the influence of predefining a specific form of the noise distribution. \paragraph{Trade-off between visual distortion and query efficiency.} Under the same $l_\infty$ ball, a query-efficient way to produce an adversarial example is to perturb most pixels with the maximum noise values $ \pm\epsilon$ \cite{MoonAS19,andriushchenko2019square}. However, such attacks introduce large visual distortion, which could make the distorted image very annoying. To constrain the visual distortion, the perturbed pixels should be those that cause a smaller visual difference while still performing a valid attack, which takes extra queries to find. This brings the trade-off between visual distortion and query efficiency, which can be controlled by $\lambda$ in our loss function. As shown in Table \ref{tab:abl}, when $N=1$ and $\lambda=0$, the adversary does not consider visual distortion at all and perturbs each pixel that is helpful for misclassification until the attack is successful. Thus, it causes the largest perceptual distance ($0.091$, $0.099$ and $0.941$) with the fewest queries ($356$). As $\lambda$ increases to $200$, all the perceptual metrics decrease at the cost of more queries and a lower success rate. The maximum $\lambda$ in Table \ref{tab:abl} is $200$, since further increasing it causes the success rate to drop below $90\%$. In addition, as in \cite{TuTC0ZYHC19}, we perform a dynamic line search on the choice of $\lambda$ to see the best perceptual scores the adversary can achieve, where ${\lambda} \in [0,1000]$. Compared with fixed $\lambda$ values, using dynamic values of $\lambda$ greatly boosts the performance on the perceptual metrics with a $100\%$ attack success rate, at the cost of dozens of times the number of queries. Fig. \ref{fig:vis_exp} gives several visualized examples for different $\lambda$, where adversarial examples with larger $\lambda$ suffer from less visual distortion. \paragraph{Ablation studies on the perceptual distance metric.} The perceptual distance metric $d$ in the loss function is predefined to measure the visual distortion between the adversarial example and the original image. We adopt $1-\text{SSIM}$ and LPIPS as the perceptual distance metric to optimize, respectively, and report their results in Table \ref{tab:abl}. When $\lambda=10$, optimizing $1-\text{SSIM}$ shows better scores on $1-\text{SSIM}$ ($0.076$ v.s. $0.080$) and CIEDE2000 ($0.741$ v.s. $0.762$), whilst optimizing LPIPS has better performance on LPIPS ($0.078$ v.s. $0.081$). However, when $\lambda$ increases to $100$ and $200$, optimizing $1-\text{SSIM}$ gives better scores on both $1-\text{SSIM}$ and LPIPS. Therefore, we set the perceptual distance metric to be $1-\text{SSIM}$ in the following experiments. \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{oattack.pdf} \caption{\small{Visualized adversarial examples in the out-of-object attack. The red bounding box locates the target object in the original image. In the \emph{out-of-object attack}, the adversary is only allowed to perturb pixels outside the object bounding box. In the \emph{image attack}, the adversary can perturb any pixel in the image.}} \label{fig:oattack} \end{figure*} \begin{table*}[!t] \centering \caption{Results of the out-of-object attack on ImageNet when {\upshape $\lambda=10, N=1$} and the perceptual distance metric being {\upshape $1-\text{SSIM}$}. I, R and V represent InceptionV3, ResNet50 and VGG16bn, respectively.} \scalebox{.8}[.8]{ \begin{tabular}{cccccccccccccccc}\toprule Attacked &\multicolumn{3}{c}{Success} &\multicolumn{3}{c}{\multirow{2}{*}{$1-\text{SSIM}$}} &\multicolumn{3}{c}{\multirow{2}{*}{LPIPS}} &\multicolumn{3}{c}{\multirow{2}{*}{CIEDE2000}} &\multicolumn{3}{c}{Avg.} \cr Range &\multicolumn{3}{c}{Rate} &&&&&&&&&&\multicolumn{3}{c}{Queries}\cr \cmidrule{2-4} \cmidrule{5-7} \cmidrule{8-10} \cmidrule{11-13} \cmidrule{14-16} &I &R &V &I &R &V &I &R &V &I &R &V &I &R &V \cr\midrule Image &\textbf{100\%} &\textbf{100\%} &\textbf{100\%} &0.078 &0.076 &\textbf{0.072} &0.096 &0.081 &0.079 &0.692 &\textbf{0.741} &0.699 &\textbf{845} &\textbf{401} &\textbf{251}\cr Out-of-object &90.1\% &93.8\% &94.7\% &\textbf{0.071} &\textbf{0.069} &0.074 &\textbf{0.081} &\textbf{0.065} &\textbf{0.070} &\textbf{0.678} &0.805 &\textbf{0.687} &4275 &3775 &3104\cr \bottomrule \end{tabular} } \label{tab:range} \end{table*} \paragraph{Sampling frequency.} The sampling frequency decides the size of the sample space of $\delta$. Setting a higher frequency means there are more noise values to explore through sampling. In Table \ref{tab:abl}, increasing the sampling frequency from $N=1$ to $N=2$ reduces the perceptual distance to some extent at the cost of a lower success rate. On the other hand, further increasing $N$ to $12$ does not substantially reduce the distortion yet lowers the success rate. We set the sampling frequency to $N=1$ in the following experiments. Note that the maximum sampling frequency is $N=12$, because the sampling interval in RGB color space (\emph{i.e.}, $255 \times 0.05/N$) would be less than $1$ if $N>12$. See Fig. \ref{fig:vis_samp} for a few adversarial examples from different sampling frequencies.
\paragraph{Noise Distribution.} In the proposed algorithm, we adopt a flexible noise distribution instead of predefining it to have a specific form. Therefore, we conducted an ablation study in which the distribution is assumed to have a regular form, as in NAttack \cite{ld2019}. Specifically, we let the noise distribution be an isotropic normal distribution with $\lambda=10$ in the loss function, and perform attacks by estimating the mean and variance as in Eq. (10) of \cite{ld2019}. As reported in the tenth row of Table \ref{tab:abl}, under the same experimental setting, it is clear that fixing the noise distribution to be a specific isotropic normal distribution degrades the overall performance. We think this is because the distribution that minimizes the perceptual distance is unknown and might not follow a Gaussian distribution or any other regular form. To approximate an unknown distribution, it is better to allow the noise distribution to take a free form, as in the proposed approach, and let it be learned by minimizing the perceptual distance. \subsection{Out-of-Object Attack} Most existing classification networks \cite{He2015Deep,hu2018senet} are based on Convolutional Neural Networks (CNNs), which gradually aggregate contextual information in deeper layers. Therefore, it is possible to fool the classifier by just attacking the ``context'', \emph{i.e.}, the background outside the target object. Attacking just the out-of-object pixels constrains the number and the positions of the pixels that can be perturbed, which might further reduce the visual distortion caused by the noise. To locate the object in a given image, we exploited the object bounding boxes provided by ImageNet. An out-of-object mask is then created according to the bounding box such that the model is only allowed to attack pixels outside the object, as shown in Fig. \ref{fig:oattack}. In Table \ref{tab:range}, we report the results for InceptionV3, ResNet50 and VGG16bn with the maximum number of queries set to $40,000$. The attack is performed on images whose masks cover at least $10\%$ of the image area. The results show that attacking just the out-of-object pixels can also cause misclassification of the object with over $90\%$ success rate. Compared with the image attack, the out-of-object attack is more difficult for the adversary in that it requires more queries ($4275/3775/3104$) yet has a lower success rate ($90.1\%/93.8\%/94.7\%$). On the other hand, the out-of-object attack indeed reduces the visual distortion of the adversarial examples on the three networks. \begin{table}[!h] \centering \caption{Comparison of the undefended ({\upshape v$3$}) and defended ({\upshape v$3_{\text{adv-ens}4}$}) InceptionV3. The defended InceptionV3 adopts ensemble adversarial training.} \scalebox{.7}[.7]{ \begin{tabular}{ccccccc}\toprule Network &Clean Accuracy &After Attack &$1-\text{SSIM}$ &LPIPS &CIEDE2000 &Avg. Queries \cr\midrule v$3$ &75.8\% &0.8\% &0.096 &0.149 &0.862 &531 \cr v$3_{\text{adv-ens}4}$ &73.4\% &1.8\% &0.103 &0.154 &0.979 &777 \cr \bottomrule \end{tabular} } \label{tab:defended} \end{table} \subsection{Attack Effectiveness on Defended Network} \begin{figure*}[!t] \centering \includegraphics[width=0.85\textwidth]{compare1.pdf} \caption{Adversarial examples from different attacks with perceptual distance scores.} \label{fig:compare} \end{figure*} \begin{table*}[!t] \centering \caption{Results of different attacks on ImageNet. I, R and V represent InceptionV3, ResNet50 and VGG16bn, respectively.} \scalebox{.8}[.8]{ \begin{tabular}{cccccccccccccccc}\toprule \multirow{3}{*}{Attack} &\multicolumn{3}{c}{Success} &\multicolumn{3}{c}{\multirow{2}{*}{$1-\text{SSIM}$}} &\multicolumn{3}{c}{\multirow{2}{*}{LPIPS}} &\multicolumn{3}{c}{\multirow{2}{*}{CIEDE2000}} &\multicolumn{3}{c}{Avg.} \cr &\multicolumn{3}{c}{Rate} &&&&&&&&&&\multicolumn{3}{c}{Queries}\cr \cmidrule{2-4} \cmidrule{5-7} \cmidrule{8-10} \cmidrule{11-13} \cmidrule{14-16} &I &R &V &I &R &V &I &R &V &I &R &V &I &R &V \cr\midrule SignHunter \cite{there20} &98.4\% &- &- &0.157 &- &- &0.117 &- &- &3.837 &- &- &450 &- &-\cr NAttack \cite{ld2019} &99.5\% &- &- &0.133 &- &- &0.212 &- &- &5.478 &- &- &524 &- &-\cr AutoZOOM \cite{TuTC0ZYHC19} &100\% &- &- &0.038 &- &- &0.059 &- &- &3.33 &- &- &1010 &- &-\cr Bandits \cite{ilyas2018prior} &96.5\% &98.8\% &98.2\% &0.343 &0.307 &0.282 &0.201 &0.157 &0.140 &8.383 &8.552 &8.194 &935 &705 &388\cr Square Attack \cite{andriushchenko2019square} &99.7\% &\textbf{100\%} &\textbf{100\%} &0.280 &0.279 &0.299 &0.265 &0.243 &0.247 &9.329 &9.425 &9.429 &\textbf{237} &\textbf{62} &\textbf{30} \cr TREMBA \cite{r3_5huang2019black} &99.0\% &\textbf{100\%} &99.8\% &0.161 &0.161 &0.160 &0.188 &0.189 &0.187 &4.413 &4.400 &4.421 &- &- &-\cr \midrule SignHunter-SSIM &97.6\% &- &- &0.220 &- &- &0.157 &- &- &3.832 &- &- &642 &- &-\cr NAttack-SSIM &97.3\% &- &- &0.128 &- &- &0.210 &- &- &5.021 &- &- &666 &- &- \cr AutoZOOM-SSIM &\textbf{100\%} &- & - &0.028 & - &- &0.048 & - &- & 2.98 &- &- &2245 &- &-\cr Bandits-SSIM &80.0\% &89.3\% &89.7\% &0.333 &0.303 &0.275 &0.200 &0.163 &0.135 &8.838 &8.666 &8.194 &1318 &1020 &793\cr Square Attack-SSIM &99.2\% &100\% &100\% &0.260 &0.268 &0.292 &0.256 &0.238 &0.245 &9.301 &9.462 &9.451 &278 &65 &\textbf{30} \cr TREMBA-SSIM &98.5\% &100\% &99.8\% &0.160 &0.160 &0.159 &0.185 &0.186 &0.183 &4.410 &4.396 &4.421 &- &- &-\cr Ours &98.7\% &\textbf{100\%} &\textbf{100\%} &0.075 &0.076 &0.072 &0.094 &0.081 &0.079 &0.692 &0.741 &0.699 &731 &401 &251\cr Ours($\lambda_{dynamic}$) &\textbf{100\%} &\textbf{100\%} &\textbf{100\%} &\textbf{0.016} &\textbf{0.009} &\textbf{0.006} &\textbf{0.023} &\textbf{0.009} &\textbf{0.005} &\textbf{0.215} &\textbf{0.204} &\textbf{0.155} &7311 &7678 &7620\cr \bottomrule \end{tabular} } \label{tab:compare} \end{table*} In the above experiments, we show that our black-box model can attack the \emph{undefended} networks with a high success rate. To evaluate the strength of the proposed attack in the \emph{defended} situation, we further attack an InceptionV3 network that adopts ensemble adversarial training (\emph{i.e.}, v$3_{\text{adv-ens}4}$). Following \cite{TramerKPGBM18}, we set $\epsilon=0.0625$ and randomly select $10,000$ images from the ImageNet validation set for testing. The maximum number of queries is $10,000$. The performance of the attacked network is reported in Table \ref{tab:defended}, where the clean accuracy is the classification accuracy before the attack. Note that v$3$ is slightly different from the InceptionV3 in Table \ref{tab:abl} in that the pretrained model of v$3$ comes from Tensorflow, the same platform as the pretrained model of v$3_{\text{adv-ens}4}$. Compared with the undefended network, attacking the defended one causes larger visual distortion. However, the proposed attack can still reduce the classification accuracy from $73.4\%$ to $1.8\%$, which demonstrates its effectiveness against defended networks.
\begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{compare.pdf} \caption{More visualized adversarial examples from different attacks.} \label{fig:more} \end{figure*} \subsection{Comparison with Other Attacks} Since our approach focuses on improving the visual similarity between the adversarial example and the original image, it might cost more queries to construct a less distorted adversarial example. To show that such costs are affordable, we compare our attack to recently proposed black-box attacks: SignHunter \cite{there20}, NAttack \cite{ld2019}, AutoZOOM \cite{TuTC0ZYHC19}, Bandits \cite{ilyas2018prior}, Square Attack \cite{andriushchenko2019square} and TREMBA \cite{r3_5huang2019black}. For a fair comparison, in Table \ref{tab:compare}, the methods marked with -SSIM and \textbf{Ours} introduce $\lambda \cdot (1-\text{SSIM})$ into the loss function with $\lambda=10$. Note that AutoZOOM performs a line search on the choice of $\lambda$, for which we adopt the same strategy and denote this variant of our method as \textbf{Ours($\lambda_{dynamic}$)}. The results of the above methods are reproduced using the official code provided by the authors. We use the default parameter settings of the corresponding attacks and set the maximum number of queries to $10,000$. See Table \ref{tab:settings} for the experimental settings of the different methods. In Table \ref{tab:compare}, comparing approaches that use a fixed $\lambda$ value (i.e., Signhunter-SSIM, NAttack-SSIM, Bandits-SSIM, Square Attack-SSIM, TREMBA-SSIM, AdvGAN-SSIM and Ours), we can see that the proposed method outperforms the other attacks on reducing the perceptual distance, while its average number of queries is comparable to Bandits. On the other hand, Ours($\lambda_{dynamic}$) achieves state-of-the-art performance on 1-SSIM, LPIPS and CIEDE2000 when compared with the methods that perform a line search over $\lambda$ (i.e., AutoZOOM and AutoZOOM-SSIM). In general, except for Signhunter, introducing the perceptual distance metric in the objective function helps reduce the visual distortion of the other attacks. The visualized adversarial examples from the different attacks are given in Fig. \ref{fig:compare}, which shows that our model produces less distorted adversarial examples. More examples can be found in Fig. \ref{fig:more}. \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{sub.pdf} \caption{An example of the pictures that we show to the evaluator. One of (a)(b) is produced by our model and the other is from the attacks (excluding ours) in Table \ref{tab:compare}.} \label{fig:sub} \end{figure} We noticed that adversarial examples from SignHunter have horizontally striped noise and that Square Attack generates adversarial examples with vertically striped noise. Striped noise is helpful in improving query efficiency, since the classification network is quite sensitive to such noise \cite{andriushchenko2019square}. However, from the perspective of visual distortion, such noise greatly degrades the image quality. The adversarial examples of Bandits are relatively perceptually friendly, but the perturbation affects most pixels in the image, which causes visually ``noisy'' effects, especially on a monocolor background. The noise maps from NAttack and AutoZOOM appear as regular color patches all over the image due to the large tiling size used in these methods. \begin{table}[!t] \centering \caption{Experimental settings.} \scalebox{.9}[.9]{ \begin{tabular}{ccc}\toprule Method &$\lambda$ &Max. Iterations\cr\midrule Signhunter-SSIM &10 &10,000 \cr NAttack-SSIM &10 &10,000 \cr AutoZOOM-SSIM &dynamic, $\lambda \in [0,1000]$ &10,000 \cr Bandits-SSIM &10 &10,000 \cr Square Attack-SSIM &10 &10,000 \cr TREMBA-SSIM &10 &- \cr Ours &10 &10,000 \cr Ours($\lambda_{dynamic}$) &dynamic, $\lambda \in [0,1000]$ &10,000 \cr \bottomrule \end{tabular} } \label{tab:settings} \end{table} We also conducted subjective studies for further validation. Specifically, we randomly chose two adversarial examples, where one is generated by our approach (Ours($\lambda_{dynamic}$)) and the other is given by the attacks (excluding ours) in Table \ref{tab:compare}. We show each human evaluator the two adversarial examples and ask him/her which one is less distorted compared with the original image. Figure \ref{fig:sub} gives an example of the pictures that we show to the evaluators. Note that the order of the two adversarial examples in the triplet is randomly permuted. We asked 10 human evaluators in total, each making judgements over $100$ triplets of images. As a result, adversarial examples generated by our method are judged to have less noticeable noise $82.1\%$ of the time, while $10.0\%$ of the time the evaluators think both examples are distorted at the same level. Therefore, the subjective results further confirm that the proposed method effectively reduces visual distortion in adversarial examples. \begin{table*}[!t] \centering \caption{Results of other {\upshape $l_p$} attacks on ResNet50 when {\upshape $\lambda=10$}. The raw {\upshape $l_0$} and {\upshape $l_1$} scores have a much higher order of magnitude compared with the other metrics, and thus the normalized scores of the {\upshape $l_0$} and {\upshape $l_1$} distances are reported. } \scalebox{.8}[.8]{ \begin{tabular}{cccccccccc}\toprule Distance Metric &Sampling Frequency &Success Rate &$1-\text{SSIM}$ &LPIPS &CIEDE2000 &$l_0$ &$l_1$ &$l_2$ &Avg. Queries \cr \midrule \multirow{3}{*}{$l_0$} &1 &\textbf{99.5\%} &0.077 &0.083 &0.795 &\textbf{0.133} &0.130 &6.75 &536\cr &2 &99.2\% &0.065 &0.069 &\textbf{0.768} &0.159 &0.118 &5.88 &679 \cr &5 &97.9\% &\textbf{0.058} &\textbf{0.065} &0.789 &0.177 &\textbf{0.118} &\textbf{5.19} &960 \cr\midrule \multirow{3}{*}{$l_1$} &1 &\textbf{99.5\%} &0.077 &0.083 &0.795 &\textbf{0.133} &0.130 &6.75 &536\cr &2 &\textbf{99.5\%} &0.070 &0.076 &0.773 &0.176 &0.130 &6.14 &658\cr &5 &99.2\% &0.066 &0.070 &0.768 &0.218 &0.129 &5.74 &800 \cr \midrule \multirow{3}{*}{$l_2$} &1 &\textbf{99.5\%} &0.110 &0.112 &0.829 &0.215 &0.211 &8.21&\textbf{392}\cr &2 &\textbf{99.5\%} &0.092 &0.100 &0.803 &0.259 &0.191 &7.44 &431 \cr &5 &\textbf{99.5\%} &0.087 &0.094 &0.792 &0.312&0.185 &6.89 &579\cr \bottomrule \end{tabular} } \label{tab:lp} \end{table*} \subsection{Other $l_p$ Attacks} \label{sec:lp} Although our method in this paper is based on the $l_\infty$ attack, the perceptual distance metric $d$ in the loss function can be replaced by another $l_p$ ($p=0,1,2$) distance. We did not discuss this in the above experiments because these $l_p$ distance metrics are less accurate in measuring the \emph{perceptual} distance between images than the specifically designed metrics, such as the well-established $1-\text{SSIM}$ and LPIPS. Nevertheless, we still present the results of the other $l_p$ ($p=0,1,2$) attacks in Table \ref{tab:lp}, where the $l_p$ distance is normalized to $[0,1]$ in the loss function.
Specifically, $d(x,x+\delta)=\frac{{{l_p}(x,x + \delta )}}{{{{\max }_\delta }({l_p}(x,x + \delta ))}}$, where ${l_p}(x,x + \delta)$ is the $l_p$ distance between the original image $x$ and the perturbed image $x+\delta$. As in the main experiments, we set $\lambda=10$, $\epsilon=0.05$ and the maximum number of queries to $10,000$. We find that the raw $l_0$ and $l_1$ scores are of a much higher order of magnitude than the other metrics, and thus the normalized scores of the $l_0$ and $l_1$ distances are reported in Table \ref{tab:lp}. Note that when the sampling frequency is $N=1$, the $l_0$ distance is equivalent to the $l_1$ distance in that \begin{equation} \begin{aligned} \frac{{{l_1}(x,x + \delta )}}{{{{\max }_\delta }({l_1}(x,x + \delta ))}}& = \frac{{mc \cdot \epsilon }}{{WHc \cdot \epsilon }} \\ &= \frac{m}{{WH}} \\ &= \frac{{{l_0}(x,x + \delta )}}{{{{\max }_\delta }({l_0}(x,x + \delta ))}} \end{aligned} \end{equation} where $m$ is the number of perturbed pixels, and $W$, $H$ and $c$ are the width, height and number of channels of a given image, respectively. Table \ref{tab:lp} shows that optimizing the $l_0$ distance gives better performance on both the perceptual distance metrics and the $l_p$ distance metrics. \section{Conclusion} We introduce a novel black-box attack based on the induced visual distortion in the adversarial example. The quantified visual distortion, which measures the perceptual distance between the adversarial example and the original image, is introduced in our loss, where the gradient of the corresponding non-differentiable loss function is approximated by sampling from a learned noise distribution. The proposed attack can achieve a trade-off between visual distortion and query efficiency by introducing the weighted perceptual distance metric in addition to the original loss. The experiments demonstrate the effectiveness of our attack on ImageNet, as our model achieves much lower distortion when compared to existing attacks. In addition, it is shown that our attack is valid even when it is only allowed to perturb pixels outside the target object in a given image.
\section{Introduction} For many decades, quantum phase transitions (QPT) have been the subject of intense studies in several areas of physics~\cite{Sachdev2011}. In a closed system with unitary dynamics, the hallmark of an equilibrium QPT is the non-analytic behavior of an observable upon changing a physical parameter~\cite{Vojta2000,Greiner2002,Brown2017}. In recent years, a new frontier has emerged in many-body physics, investigating non-equilibrium phase transitions. In that regard, and as a suitable testbed, driven-dissipative quantum systems and their phase transitions have been the subject of many studies. Observation of exciton-polariton BEC in semiconductors~\cite{Deng2010,Carusotto2013} and cuprites~\cite{Bao2019} and their superfluidity~\cite{Amo2009,Lerario2017}, probing of first-order phase transitions, dynamical hysteresis, and the Kibble-Zurek quench mechanism in microcavities~\cite{Rodriguez2017,Fink2018}, and demonstration of dynamical bifurcation and optical bistability in circuit QED~\cite{Siddiqi2005,Yin2012,Liu2017,Fitzpatrick2017,Elliott2018, Andersen2020} are a few examples of the rapidly growing body of experimental explorations of such physics on different platforms. In parallel, some general aspects of non-equilibrium QPTs have been investigated theoretically~\cite{Diehl2010,Torre2013}, in particular, e.g., in coupled spins~\cite{Kessler2012}, interacting bosonic systems~\cite{Casteels2016,Boite2017,Casteels2017, Verstraelen2020}, and semiconductor microcavities~\cite{Carusotto2005}. Due to their coupling to a bath, driven-dissipative dynamics are not given by a Hermitian Hamiltonian but by a superoperator, the closing of whose spectral gap signifies a QPT~\cite{Drummond1980,Drummond1981,Carmichael2015}. In spite of all this progress, and due to a notably larger parameter space compared to closed systems, dissipative phase transitions (DPT) necessitate further investigation. A natural question, for example, concerns the crossover between the DPT and the phase transition in the thermodynamic limit (TD). Although, due to their constant interaction with the environment, open systems are inherently far from thermodynamic equilibrium, there could still be parameter ranges where the system asymptotically approaches the mean-field (MF) limit, in which quantum correlations and fluctuations can be ignored. To be more specific, in this paper we focus our studies on a driven-dissipative three-mode bosonic system subject to Kerr-type intra- and intermodal interactions. To keep our results and discussions general, we do not specify the nature of the bosonic system. But let us remark that such a setup could be realized on various platforms, including cavity Rydberg polaritons~\cite{Jia2018,Schine2019,Clark2020}, excitons in 2D materials and semiconductors~\cite{Togan2018,Tan2020}, microwave photons in superconducting circuits~\cite{Materise2018}, or interacting photons in optical cavities~\cite{Klaers2010}. Starting from the MF description, we first explore the phase transitions of the system as a function of its parameters, i.e., pump, detuning, interaction strength, and bare-cavity mode spacing. We show that, depending on the bare cavity features, the phase transition can be either continuous ($2^{nd}$-order phase transition) or abrupt ($1^{st}$-order phase transition), the latter corresponding to an optical multi-stability, as studied for planar microcavities~\cite{Wouters2007B}.
In both cases, the phase transition manifests itself through a non-zero amplitude of the unpumped modes and is related to the closure of the dissipative gap of the Liouvillian. We show that within this range, and up to the MF level, there is an unconditionally squeezed mode at the output, attributed to the spontaneous breaking (SSB) of the local U(1) and time-translational symmetry (TTS). While in the TD limit the diverging quadrature of this state is related to the well-known, freely propagating Goldstone mode~\cite{Wouters2007,Leonard2017,Guo2019}, employing the Wigner phase-space representation we show that in the quantum limit this mode becomes susceptible to fluctuations and short-lived. Since employing the Wigner formalism allows us to properly include the quantum noise, we have been able to explore the phase diagram more accurately and beyond the MF description. This also helps to delineate the range of validity of MF when it comes to the study of QPTs. In spite of its simplicity, the investigated system reveals important dynamics of driven-dissipative bosonic gases and could be a quintessential model for further exploration of SSB in open many-body systems. The paper is organized as follows. In Section~\ref{sec:problem} we present the general problem, its MF description in the form of a generalized Gross-Pitaevskii equation (GPE), and the low-energy excitation spectrum determined via a Bogoliubov treatment. We also summarize the stochastic formulation of the problem based on the truncated Wigner phase-space method. In Section~\ref{sec:results} we present the numerical results for the three-mode cavity, where various phase transitions are investigated and discussed. Finally, the last section summarizes the main results of the paper and sets the stage for future directions that can be explored in such systems. \section{Problem Formulation}\label{sec:problem} Consider a three-mode cavity with the following Hamiltonian describing the interaction dynamics between the modes ($\hat{a}_{1,2,3}$) \begin{align}~\label{eq:3-mode Hamiltonian} \hat{H} &= \sum_{m=1}^3 \left(\omega_m \hat{a}_m^\dagger \hat{a}_m + \frac{V_0}{2} \sum_{n=1}^3 \hat{a}_m^\dagger \hat{a}_n^\dagger \hat{a}_m \hat{a}_n\right)\\ &+ V_0 (\hat{a}_2^{\dagger ^2} \hat{a}_1 \hat{a}_3 + \hat{a}_1^\dagger \hat{a}_3^\dagger \hat{a}_2^2), \end{align} where $\omega_m$ is the frequency of the $m^{th}$ mode of the bare cavity and $V_0$ is the interaction strength. A coherent drive at frequency $\omega_L$ excites the $p^{th}$ mode of the cavity at the rate $\Omega_0$ as \begin{equation}~\label{eq:coherent drive} \hat{H}_D = \Omega_0(\hat{a}_p e^{+i\omega_L t} + \hat{a}_p^\dagger e^{-i\omega_L t}). \end{equation} Assuming a Markovian single-photon loss for the mode-bath coupling, the following Lindblad master equation describes the evolution of the reduced cavity density matrix $\hat{\rho}$ as \begin{equation}~\label{eq:master equation} \frac{d\hat{\rho}}{dt} = -i\left[\hat{H} , \hat{\rho}\right] + \sum_m \gamma_m \left(2\hat{a}_m\hat{\rho}\hat{a}_m^\dagger - \{\hat{a}_m^\dagger \hat{a}_m , \hat{\rho}\}\right), \end{equation} where $\hat{H} = \hat{H}_{ph} + \hat{H}_D$ on the RHS, with $\hat{H}_{ph}$ given in Eq.~(\ref{eq:3-mode Hamiltonian}), describes the unitary dynamics of the system, and the second term captures the quantum jumps and losses of the $m^{th}$ cavity field at rate $\gamma_m$.
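For small Fock-space truncations, the master equation~(\ref{eq:master equation}) can be solved numerically, e.g. with QuTiP. The following is a minimal sketch in the frame rotating at $\omega_L$; the parameter values are purely illustrative, not those used in the figures below:
\begin{verbatim}
import numpy as np
from qutip import (destroy, qeye, tensor,
                   steadystate, expect)

Nf = 6                             # Fock truncation per mode
ids = [qeye(Nf)] * 3
a = [tensor(*(ids[:m] + [destroy(Nf)] + ids[m+1:]))
     for m in range(3)]

Delta = [-3.0, -3.0, -3.0]         # laser-frame detunings
V0, Om, g0, p = 1.0, 2.0, 1.0, 1   # drive on mode 2 (index 1)

H = sum(Delta[m] * a[m].dag() * a[m] for m in range(3))
H += 0.5 * V0 * sum(a[m].dag() * a[n].dag() * a[m] * a[n]
                    for m in range(3) for n in range(3))
H += V0 * (a[1].dag()**2 * a[0] * a[2]
           + a[0].dag() * a[2].dag() * a[1]**2)
H += Om * (a[p] + a[p].dag())      # drive in the laser frame

# sqrt(2*gamma_m) matches the factor-of-two convention
# of the loss term in the master equation above
c_ops = [np.sqrt(2 * g0) * a[m] for m in range(3)]
rho = steadystate(H, c_ops)
print([expect(a[m].dag() * a[m], rho) for m in range(3)])
\end{verbatim}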
Equivalently, we can derive the equations of motion for the $\hat{a}_m$ operators and describe the dynamics via Heisenberg-Langevin equations as~\cite{Gardiner2004} \begin{multline}~\label{eq:Heisenberg-Langevin} \dot{\hat{a}}_m = -i\left(\Delta_m -i\gamma_m \right)\hat{a}_m - iV_0 \sum_{nkl}\eta^{mn}_{kl} \hat{a}_n^\dagger \hat{a}_k \hat{a}_l - i\Omega_0 \delta_{mp} + \\ \sqrt{2\gamma_m} \hat{\xi}_m(t), \end{multline} where in the above equation $\Delta_m = \omega_m - \omega_L$ is the frequency of the $m^{th}$ mode in the laser frame, $\eta_{kl}^{mn}$ are the mode-specific prefactors arising from the different commutation relations, and $\{\hat{\xi}_m(t)\}$ describe stationary Wiener stochastic processes with zero means and correlations \begin{align}~\label{eq:white-noise correlation} \braket{\hat{\xi}_m^\dagger(t+\tau) \hat{\xi}_n(t)} = n_{th} \delta(\tau) ~\delta_{mn}, \\ \nonumber \braket{\hat{\xi}_m(t+\tau) \hat{\xi}_n^\dagger (t)} = (1 + n_{th})\delta(\tau) ~\delta_{mn}, \end{align} where $n_{th}$ is the number of thermal photons at temperature $T$. For numerical calculations, the dimension of the relevant (few-photon) Hilbert space grows rapidly with an increasing number of modes and particles. Hence, the direct solution for the density matrix in Eq.~(\ref{eq:master equation}) is only possible for a small number of modes and at a low pumping rate $\Omega_0$. For the quantum Langevin equations in Eq.~(\ref{eq:Heisenberg-Langevin}), the two-body interaction generates an infinite hierarchy of operator moments, making them intractable as well. The most straightforward approach is a classical MF treatment where the correlations are approximated by products of expectation values, i.e., $\braket{\hat{a}_m \hat{a}_n} \approx \alpha_m \alpha_n$ with $\alpha_m = \braket{\hat{a}_m}$. These substitutions simplify the equations of motion for the operators' MFs in Eq.~(\ref{eq:Heisenberg-Langevin}) to a set of coupled non-linear equations as \begin{multline}~\label{eq:mean-field equations} i\dot{\alpha}_m = \left(\Delta_m -i\gamma_m \right)\alpha_m + V_0 \sum_{nkl}\eta^{mn}_{kl} \alpha_n^* \alpha_k \alpha_l + \Omega_0 \delta_{mp}. \end{multline} In the steady state, the mean values are determined by setting $\dot{\alpha}_m=0$, which is an exact description of the operators' $1^{st}$ moments. In this work, we used the Jacobian matrix to check the dynamical stability of all steady states. Equation~(\ref{eq:mean-field equations}) is a Gross-Pitaevskii-type equation with added drive and dissipation terms. Although the MF provides a good starting point, information about quantum correlations is lost. To improve on this, we substitute $\hat{a}_m = \alpha_m + \hat{b}_m$ and linearize Eq.~(\ref{eq:Heisenberg-Langevin}) around the MF determined from the steady state of Eq.~(\ref{eq:mean-field equations}). Defining $\hat{B} = \left[\hat{b}_m, \hat{b}_m^\dagger\right]$ as the fluctuation-operator vector (with $2N$ components), its time evolution is determined as \begin{equation}~\label{eq:linearized fluctuations EoM} \frac{d\hat{B}}{dt} = M\hat{B} + D^{1/2} \hat{\Xi} , \end{equation} where $M$ is the Bogoliubov matrix evaluated at the MF $\alpha_m$, $D=\mathrm{diag}(2\gamma_m)$, and $\hat{\Xi}$ is the vector of the noise operators of the Wiener processes in Eq.~(\ref{eq:Heisenberg-Langevin}).
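As an illustration of this procedure, the sketch below (our reading of the cubic terms of Eq.~(\ref{eq:3-mode Hamiltonian}), in which the double sum produces a collective Kerr shift $V_0\sum_n|\alpha_n|^2\,\alpha_m$; the parameters are again placeholders) integrates Eq.~(\ref{eq:mean-field equations}) from random initial conditions and classifies a fixed point via the eigenvalues of a finite-difference Jacobian:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Delta = np.array([-3.0, -3.0, -3.0])
V0, Om, g0 = 1.0, 2.0, 1.0         # drive on mode 2

def rhs(t, u):
    al = u[:3] + 1j * u[3:]        # alpha_1..alpha_3
    pair = V0 * np.array(          # pair-scattering terms
        [np.conj(al[2]) * al[1]**2,
         2 * np.conj(al[1]) * al[0] * al[2],
         np.conj(al[0]) * al[1]**2])
    dal = -1j * ((Delta - 1j * g0) * al
                 + V0 * np.sum(np.abs(al)**2) * al
                 + pair + Om * np.array([0, 1, 0]))
    return np.concatenate([dal.real, dal.imag])

def fixed_point(rng):
    u0 = rng.normal(size=6)        # random initialization
    return solve_ivp(rhs, (0, 200 / g0), u0,
                     rtol=1e-10, atol=1e-12).y[:, -1]

def is_stable(u, h=1e-6):
    f0 = rhs(0, u)                 # numerical Jacobian
    J = np.array([(rhs(0, u + h * e) - f0) / h
                  for e in np.eye(6)]).T
    return np.all(np.linalg.eigvals(J).real < 0)
\end{verbatim}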
As shown in Appendix~\ref{app:Covariance MF-Bog}, from $\hat{B}$ one can directly determine the covariance matrix $\mathrm{C}_B(\omega)$, whose entries are the stationary two-time correlations of the (zero-mean) operators $\hat{B}_i,\hat{B}_j$ \begin{equation}~\label{eq:spectral response} \Gamma_{ij}(\omega) = \mathcal{F} \braket{\lim_{t\to\infty}\hat{B}_i(t+\tau) \hat{B}_j(t)} = \braket{\Tilde{\hat{B}}_i(\omega) \Tilde{\hat{B}}_j(-\omega)}, \end{equation} where $\mathcal{F}$ represents the Fourier transform of the correlation w.r.t. the delay $\tau$ and $\Tilde{\hat{B}}_i(\omega)$ is the Fourier transform of $\hat{B}_i(t)$. Within the Born-Markov approximation, if the $2^{nd}$-order dynamics is contractive and, in the vicinity of the steady state, dominates over the higher-order terms, then most of the important correlations can be obtained from the linearized Bogoliubov treatment of Eq.~(\ref{eq:linearized fluctuations EoM}). This is a self-consistency criterion, with $M$ being a negative-definite matrix, and is typically satisfied at large particle numbers and weak interactions, as in the TD limit, where the MF treatment is well justified. To examine the validity of the MF and of the linearization in the quantum limit of a small number of particles, we further employ the Wigner function (WF) representation to express the system dynamics in terms of the analytic quasi-probability distribution $W(\vec{\alpha};t)$~\cite{Wiseman2011,Gardiner2004,Berg2009}. Using It\^{o} calculus, the truncated dynamics of $W$ can be further mapped to a set of stochastic differential equations (SDEs) for $\alpha_m$ with the following general form (more details can be found in Appendix~\ref{app:Wigner func.}) \begin{equation}~\label{eq:SDE} d\alpha_m = A_m dt + \sum_{m'} D_{m,m'} ~ dN_{m'}, \end{equation} where the $dN_{m'}$ are complex Wiener processes describing Gaussian white noise. For any operator $\hat{\mathcal{O}}$, the expectation value of its symmetrically-ordered form, i.e., the equally weighted average over all possible orderings of the creation and annihilation operators in $\hat{\mathcal{O}}$, can be obtained as \begin{equation}\label{expectationvalue-SDE} \braket{\hat{\mathcal{O}}}_{sym} = \braket{\braket{\mathcal{O}}}, \end{equation} where $\braket{\braket{.}}$ stands for the ensemble average over stochastic trajectories. Before leaving this section, we would like to emphasize that the beyond-MF corrections to the GPE in Eq.~(\ref{eq:mean-field equations}) require the effects of the $2^{nd}$- and $3^{rd}$-order normally- and anomalously-ordered correlations. These terms contribute to the MF as \emph{state-dependent} noises. In the truncated Wigner method, there are additional drift terms as well as Langevin forces that partially capture these quantum-field corrections. While the full dynamics of $W(\vec{\alpha};t)$ in Eq.~(\ref{eq:Fokker-Planck}) is equivalent to the master equation in Eq.~(\ref{eq:master equation}), the truncated Wigner (TW) method is an approximation which can only be applied to an initially positive WF, whose positivity it preserves. It can be interpreted as the semi-classical version of the Langevin equations of Eq.~(\ref{eq:Heisenberg-Langevin}). Thus, the TW method and its equivalent SDEs in Eq.~(\ref{eq:SDE}) might not be able to fully reproduce the quantum dynamics. However, it goes beyond the MF-Bogoliubov treatment and can describe the generation of non-Gaussian and non-classical states~\cite{Corney2015}.
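Schematically, Eq.~(\ref{eq:SDE}) can be integrated with an Euler--Maruyama scheme. The sketch below assumes $n_{th}=0$, samples the vacuum Wigner distribution (half a photon of width per mode) as the initial condition, and, as a simplification, uses the MF drift of Eq.~(\ref{eq:mean-field equations}) in place of the full TW drift $A_m$, i.e. it omits the symmetric-ordering drift corrections discussed above:
\begin{verbatim}
def tw_ensemble(drift, g0, ntraj=2000, T=50.0,
                dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    # vacuum Wigner samples: <|alpha|^2> = 1/2 per mode
    al = (rng.normal(size=(ntraj, 3))
          + 1j * rng.normal(size=(ntraj, 3))) / 2
    for _ in range(int(T / dt)):
        dW = (rng.normal(size=al.shape)
              + 1j * rng.normal(size=al.shape))
        al = al + drift(al) * dt \
             + np.sqrt(g0 * dt / 2) * dW
    return al

# symmetrically ordered moments are plain trajectory
# means, e.g. <a^dag a> = mean(|al[:, m]|^2) - 1/2
\end{verbatim}
Here \texttt{drift} would be a vectorized complex version of the \texttt{rhs} defined in the previous listing, and the noise amplitude is fixed so that an empty lossy mode relaxes to the vacuum value $\braket{|\alpha|^2}=1/2$.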
\section{Results and Discussion}\label{sec:results} Throughout this section we assume identical field decay rates for all cavity modes, i.e., $\gamma_m = \gamma_0$, and express all other rates normalized to this value. Similarly, time is expressed in units of $\gamma_0^{-1}$. A coherent drive as in Eq.~(\ref{eq:coherent drive}) excites the second mode, i.e., $\hat{a}_2$; hence, the $1^{st}$ and $3^{rd}$ modes are populated equally (more discussion can be found in Appendix~\ref{app:Covariance MF-Bog}). Thermal fluctuations due to the bath are assumed to be zero, i.e., $n_{th}=0$. Part of the full quantum-mechanical calculations is done with the QuTiP open-source software~\cite{Johansson2012,Johansson2013}. The numerical convergence in each case has been tested by increasing the number of random initializations (MF), random trajectories (SDE), and the Fock-state truncation number (DM) to reach a relative error of $\approx \mathcal{O}(10^{-5})$ in the particle number. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{PD-Harmonic.eps} \caption{~\label{fig:phase transition} MF dissipative phase diagram of a three-mode harmonic cavity, i.e., $2\omega_2 = \omega_1 + \omega_3$, as a function of (a) the interaction strength $V_0$ and (b) the laser detuning $\Delta_0$. In each panel the yellow (A), orange (B), and red (C) regions correspond to one, two (bi-stability), and three (tri-stability) stable fixed points for the pumped mode, respectively. In (a) the detuning is fixed at $\Delta_0 = -3$ and in (b) the interaction strength has the constant value $V_0 = 1$. The dotted vertical lines [labelled (I) and (II)] at $V_0=0.1$ and $\Delta_0=-3$ indicate the cuts through the phase diagram studied in subsequent figures.} \end{figure} In a driven-dissipative system, the interplay between the coherent excitation rate and its detuning, incoherent loss, and interaction leads to notable changes in system properties, typically known as a dissipative phase transition (DPT). In a multi-mode case as here, we have an additional parameter $\delta_D = 2\omega_2 - (\omega_1 + \omega_3)$, which is the anharmonicity of the bare cavity. To distinguish between these two cases, we call the cavity \emph{harmonic} if $\delta_D = 0$ and \emph{anharmonic} otherwise. As will be discussed, $\delta_D$ is also an important parameter governing the DPT. Similar phase diagrams and multi-stability phenomena have been studied for exciton-polaritons in planar cavities where $\delta_D$ vanishes~\cite{Wouters2007B}. Moreover, in this case the frequencies of the generated pairs are set self-consistently by the bare cavity modes and the interaction. Figure~\ref{fig:phase transition}(a),(b) shows the phase diagram of a harmonic cavity as a function of the interaction strength $V_0$ and the laser detuning $\Delta_0$, respectively. The phase diagram closely resembles the DPT of a single-mode cavity depicted in Fig.~\ref{fig:pump-only PT}(a),(b) in Appendix~\ref{app:single-mode}. Within the (A)-phase, i.e., the yellow region, the pumped mode has one stable fixed point. In the (B)-phase, i.e., the orange region, there are two distinct stable values for the pumped mode. Finally, in the (C)-phase, i.e., the red region, which only appears in the multi-mode case, the system is tri-stable and the pumped mode has three stable MF fixed points.
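The region boundaries of Fig.~\ref{fig:phase transition} follow directly from counting the distinct stable MF branches as the parameters are varied. A rough sketch of such a scan along one cut, building on the fixed-point search given earlier (and therefore on our assumed $\eta$), reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# Count distinct stable MF branches along a cut of the phase diagram:
# 1 branch -> (A), 2 -> (B) bi-stability, 3 -> (C) tri-stability.
def count_branches(V0_val, Omega0_val, n_guess=100):
    global V0, Omega0              # rebind the module-level parameters
    V0, Omega0 = V0_val, Omega0_val
    rng = np.random.default_rng(0)
    branches = set()
    for _ in range(n_guess):
        x = fsolve(rhs_real, rng.normal(scale=2.0, size=2 * N))
        if np.linalg.norm(rhs_real(x)) < 1e-8 and is_stable(x):
            # distinguish branches by the pumped-mode population only
            branches.add(round(abs(x[p] + 1j * x[N + p]) ** 2, 3))
    return len(branches)

for Om in np.linspace(0.5, 4.0, 8):   # e.g. a cut at fixed V_0 = 1
    print(f"Omega_0 = {Om:.2f}: {count_branches(1.0, Om)} branch(es)")
\end{verbatim}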
\begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{n-Harmonic.eps} \caption{~\label{fig:3-mode population-harmonic} Population of the $1^{st}$ ($3^{rd}$) and the $2^{nd}$ mode in a harmonic cavity, i.e., $\delta_D = 0$, as a function of the pumping rate ($\Omega_0$) calculated from MF (black dots) and SDE (purple diamonds). The solid red lines in panels (c),(d) show the DM solutions for comparison. $V_0 = 0.1$ in panels (a),(b) and $V_0 = 1$ in (c),(d). In both cases $\Delta_0 = -3$.} \end{figure} In Fig.~\ref{fig:3-mode population-harmonic} we plot $\braket{\hat{n}_{1,2}}$ for $V_0 = 0.1,~1$ at $\Delta_0 = -3$ as a function of the pumping rate varied along the dotted lines (I),(II) in Fig.~\ref{fig:phase transition}(a),(b), respectively. There, the black dots show the MF solutions determined from integrating Eq.~(\ref{eq:mean-field equations}) for many different random initial conditions and for a time long compared to all transient time scales. The purple line with diamonds shows the data calculated using the SDE method averaged over 2000 random trajectories, and the solid red line in panels (c),(d) depicts the results of the full density matrix calculations (DM) as a benchmark. It can be seen that the phase transitions are discontinuous, i.e., \emph{first-order} PTs. Moreover, for all modes the difference between stable MF branches decreases upon increasing the interaction from $V_0 = 0.1$ to $V_0 = 1$ in Fig.~\ref{fig:3-mode population-harmonic}(a,b) and (c,d), respectively. Aside from the finite region around the multi-stability, it can be seen that the results of MF, SDE, and DM agree quite well (note a similar tendency for the single-mode case in Fig.~\ref{fig:pump-only} of Appendix~\ref{app:single-mode}). For the 1$^{st}$ and 3$^{rd}$ modes, on the other hand, both Fig.~\ref{fig:3-mode population-harmonic}(a) and (c) indicate that the finite MF tri-stable region (C in the DPT) is the only parameter range where these modes acquire non-zero population. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{PT-anharmonic.eps} \caption{~\label{fig:3-mode DPTT anharmonic} Number of photons in the $1^{st},3^{rd}$-modes of a three-mode anharmonic cavity ($2\omega_2 \ne \omega_1 + \omega_3$), as a function of the (a) interaction strength $V_0$ and (b) anharmonicity $\delta_D$, determined from MF. In (a) $\delta_D = 5$; in (b) $V_0 = 1$, and the laser always resonantly pumps the $2^{nd}$-mode ($\Delta_0 = 0$). (A),(C) indicate two different phases of zero and non-zero population of the first mode. The dotted vertical lines [labelled (I) and (II)] at $V_0=0.1$ and $\delta_D=5$ indicate the cuts through the phase diagram studied in subsequent figures.} \end{figure} The situation is completely different in an \emph{anharmonic} cavity where $\delta_D \ne 0$. Figure~\ref{fig:3-mode DPTT anharmonic}(a),(b) shows the average number of photons in the unpumped modes, $\braket{n_{1,3}}$, as a function of the interaction strength $V_0$, the pumping rate $\Omega_0$, and the anharmonicity parameter $\delta_D$. For better illustration, in Fig.~\ref{fig:3-mode population-anharmonic}(a,b) and (c,d) we plot the average number of photons in all cavity modes as a function of the pump rate at weak ($V_0 = 0.1$) and strong ($V_0 = 1$) interaction, respectively, as the pumping rate is continuously increased along the dotted lines (I) and (II) in Fig.~\ref{fig:3-mode DPTT anharmonic}(a),(b).
Unlike the harmonic cavity case, here we only have two phases, (A),(C), where the transition occurs continuously (but non-analytically), i.e., a \emph{second-order} PT, with a unique-valued order parameter in each phase. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig7.eps} \caption{~\label{fig:3-mode population-anharmonic} Population of the $1^{st}$ ($3^{rd}$) and the $2^{nd}$ mode, in an anharmonic cavity with $\delta_D = 5$, as a function of the pumping rate ($\Omega_0$) calculated from MF (black dots) and SDE (purple diamonds). The solid red lines in panels (c),(d) show the DM solutions for comparison. $V_0 = 0.1$ in panels (a),(b) and $V_0 = 1$ in (c),(d). In both cases $\Delta_0 = 0$.} \end{figure} As elaborated in Appendix~\ref{app:Covariance MF-Bog} for the single-mode cavity, the interaction of the pumped mode ($2^{nd}$ mode here) with itself creates energetically symmetric sidebands. In a multi-mode case, the interplay between the intra- and inter-mode interactions leads to the excitation of other modes in both harmonic and anharmonic cavities. Similarly for both, MF predicts a threshold and a finite parameter range for non-zero occupations of the $1^{st},~3^{rd}$-modes. While the lower threshold is set solely by the pumped mode when $V_0 n_2 \ge \gamma_2$, the upper threshold depends on the population of the other two modes as well as their relative energies. (The lowest and highest pumping rates are set by the constraints on $\Phi_0 , \Phi_p$, respectively, as detailed in Appendix~\ref{app:Covariance MF-Bog}.) When quantum fluctuations are included, however, either via SDE or full density matrix calculations (DM), unique, continuous, and non-zero solutions for all three modes are predicted at all pumping rates. In both cavities and for the pumped mode, the MF, SDE, and DM results agree quite well in the (A)-phase. For the parametrically populated modes, however, the SDE and DM results are in good agreement over the whole range but are remarkably different from MF. However, the rising slope of the former analyses always coincides with the transition to the MF (C)-phase. \subsection{Spontaneous Symmetry Breaking and Goldstone mode}~\label{sec:SSB} In the absence of the coherent pump, the Liouvillian super-operator $\mathcal{L}$ of Eq.~(\ref{eq:master equation}) has a continuous global U(1)-symmetry, which is broken by the coherent drive of Eq.~(\ref{eq:coherent drive}). However, with the Hamiltonian of Eq.~(\ref{eq:3-mode Hamiltonian}) for the three-mode cavity, $\mathcal{L}$ still has a local U(1)-symmetry, as it remains unchanged under the following transformations for any arbitrary phase $\Theta_0$~\cite{Wouters2007} \begin{equation}~\label{eq:local U(1)-sym} \hat{a}_1 \rightarrow \hat{a}_1 e^{+i\Theta_0} ~ , ~ \hat{a}_3 \rightarrow \hat{a}_3 e^{-i\Theta_0}. \end{equation} If the MF amplitudes $\alpha_{1,3} = 0$, then the steady state respects the Liouvillian's symmetry. However, for $\alpha_{1,3} \ne 0$, as occurs within the (C)-phase, the MF solutions are no longer U(1) symmetric. Hence, there is a \emph{spontaneous symmetry breaking} (SSB) accompanied by a DPT. However, it is evident that the set of all solutions is invariant under the aforementioned rotations. In other words, within the (C)-phase there is a continuum of MF fixed points. Figure~\ref{fig:LC}(a),(b) shows the temporal behavior of the order parameters $\alpha_m$ within the MF (C)-phase of the harmonic and anharmonic cavities, respectively.
As can be seen, while the pumped mode $m_2$ is time-invariant (green line), the parametrically populated modes $m_{1,3}$ (blue and red lines) show self-sustained oscillations with a random relative phase, reflecting the value the U(1) phase acquires in the SSB. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig8.eps} \caption{~\label{fig:LC} Temporal behavior of the mean fields $\alpha_j(t)$ within the MF (C)-phase in a three-mode (a) harmonic cavity at $\Omega_0 = 1.85, \Delta_0 = -3$ and (b) anharmonic cavity at $\Omega_0 = 2, \delta_D = 5$ and $V_0 = 1$. In both panels the blue, green, and red lines correspond to the $1^{st},~2^{nd}$, and the $3^{rd}$-mode, respectively. The time is in units of $\gamma_0^{-1}$ and $T_{LC}$ indicates the limit-cycle period.} \end{figure} In the laser-rotated frame, the Liouvillian $\mathcal{L}$ is time-translation symmetric (TTS), which is indeed the symmetry of the solutions within the (A)- and (B)-phases. Within the (C)-phase, however, the order parameter becomes time-periodic and thus breaks the time-translational symmetry. Therefore, in both the harmonic and anharmonic cavities, the MF (C)-phase is accompanied by SSB of the local U(1) symmetry and of the TTS. This oscillatory behavior, known as the \emph{limit-cycle} (LC) phase, is a clear distinction of the DPT from its equilibrium counterparts~\cite{Qian2012,Chan2015}. From Fig.~\ref{fig:LC} the LC-period can be determined as $T_{LC} \approx 6.28$ and $T_{LC} \approx 0.83$, corresponding to $\omega_{LC} = 1 , ~ 7.5$ for the harmonic and anharmonic cavities, respectively. Note that these frequencies agree with the theoretical predictions of $\Tilde{\Delta}_{1,3}$ in Appendix~\ref{app:Covariance MF-Bog}. The consequence of the SSB of this continuous symmetry can be interpreted in terms of the gapless \emph{Goldstone} mode. The eigenvalues $\{\lambda\}$ of the Bogoliubov matrix $M$ in Eq.~(\ref{eq:linearized fluctuations EoM}) directly determine the excitation energies around a MF fixed point, with Re($\lambda$) being the excitation linewidth and Im($\lambda$) its frequency. It is straightforward to check that, due to the relative-phase freedom of the unpumped modes, $M$ has a kernel along the following direction~\cite{Wouters2007} (more information in Appendix~\ref{app:Covariance MF-Bog}) \begin{equation}~\label{eq:Goldstone mode} \ket{G} = [\alpha_1, 0, -\alpha_3, -\alpha_1^*, 0, \alpha_3^*]^T, \end{equation} where $T$ denotes the transpose. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{XPspec_MF.eps} \caption{~\label{fig:3-mode spectra} Output $X,P$ spectra of the modes in the (a),(b) harmonic cavity at $\Delta_0 = -3 , \Omega_0 = 1.85$, and (c),(d) anharmonic cavity at $\Delta_0 = 0 , \Omega_0 = 2$, calculated from the MF-Bogoliubov treatment. In each panel the solid blue, red, and green lines correspond to the spectrum of the pumped ($\ket{m_2}$), symmetric ($\ket{m_+}$), and antisymmetric ($\ket{m_-}$) modes, respectively. Due to its divergence, the momentum of the antisymmetric mode is scaled down in panels (b),(d).} \end{figure} $\lambda_G = 0$ implies that, in the local-oscillator frame, $\ket{G}$ is a mode at $\omega=0$ with zero linewidth, i.e., an undamped excitation. To investigate the implications of this mode for quantum correlations, we employ Eq.~(\ref{eq:linearized fluctuations EoM}) to calculate the $XP$-quadrature spectra of the cavity output.
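Before turning to the spectra, we note that the kernel property of Eq.~(\ref{eq:Goldstone mode}) is easy to verify numerically: transforming the real Jacobian of the earlier sketch into the $(\hat{b},\hat{b}^\dagger)$ basis yields the Bogoliubov matrix $M$, which, applied to $\ket{G}$ built from a (C)-phase fixed point, should give (numerically) zero. In the sketch below, \texttt{x\_star} denotes a stable fixed point with $\alpha_{1,3}\neq 0$ found as before.
\begin{verbatim}
import numpy as np

# Bogoliubov matrix M from the real Jacobian J via the basis change
# (Re b, Im b) -> (b, b^*); then check the Goldstone kernel M|G> = 0.
def bogoliubov_matrix(x):
    I = np.eye(N)
    T = np.block([[I, 1j * I], [I, -1j * I]])   # (x, y) -> (b, b^*)
    return T @ jacobian(x) @ np.linalg.inv(T)

# x_star: (C)-phase fixed point with alpha_{1,3} != 0, found as above
a = x_star[:N] + 1j * x_star[N:]
G = np.array([a[0], 0.0, -a[2], -a[0].conj(), 0.0, a[2].conj()])
M = bogoliubov_matrix(x_star)
lam = np.linalg.eigvals(M)
print("|| M |G> || =", np.linalg.norm(M @ G))     # ~ 0 (gapless mode)
print("eigenvalue closest to zero:", lam[np.argmin(np.abs(lam))])
\end{verbatim}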
Figure~\ref{fig:3-mode spectra} shows the quadrature correlations of the output $2^{nd}$-mode and $\ket{m_\pm} = m_1 \pm m_3$, i.e., the symmetric and antisymmetric superpositions of the two unpumped modes. Panels (a),(b) show the spectra of the harmonic cavity at $\Omega_0 = 1.85$, and panels (c),(d) show the same quantities for an anharmonic cavity at $\Omega_0 = 2$; both correspond to point B within the MF LC-phase, on the rising slope of the SDE/DM results in Fig.~\ref{fig:3-mode population-harmonic}(c) and Fig.~\ref{fig:3-mode population-anharmonic}(c). Although the spectral features of the pumped and the symmetric mode depend on the detailed cavity features, the antisymmetric mode quadratures in the harmonic and anharmonic cavities look alike (solid green lines in Fig.~\ref{fig:3-mode spectra}(c),(d)). While $S_{X_-}$ is unconditionally fully squeezed at the origin, the spectrum of its conjugate variable, $S_{P_-}$, diverges. From Eq.~(\ref{eq:Goldstone mode}) it is clear that $S_{P_-}$ is indeed the spectrum of the gapless Goldstone mode. Since, in the MF picture, this mode encounters no restoring force, its fluctuations diverge. (The analytic form of the spectra and further details can be found in Appendix~\ref{app:Covariance MF-Bog}.) \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig10.eps} \caption{~\label{fig:Wigner 3-mode Harmonic} Histograms of the number state occupation probability $p_n$ and colormaps of the Wigner function of the (a)-(d) $1^{st}, 3^{rd}$-modes and (e)-(h) $2^{nd}$-mode, in a three-mode harmonic cavity when $\Delta_0 = -3 , V_0 = 1$, for different pumping rates $\Omega_0$ highlighted as (A,B,C,D) in Fig.~\ref{fig:3-mode population-harmonic}(c),(d). In each phase-space map the white dashed lines show the axes ($X=0,P=0$) in the $XP$-plane and the black stars or circles correspond to the predicted MF.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig11.eps} \caption{~\label{fig:Wigner 3-mode anHarmonic} Histograms of the number state occupation probability $p_n$ and colormaps of the Wigner function of the (a)-(d) $1^{st}, 3^{rd}$-modes and (e)-(h) $2^{nd}$-mode, in a three-mode anharmonic cavity when $\Delta_0 = 0, \delta_D = 5$ and at $V_0 = 1$, for different pumping rates $\Omega_0$ highlighted as (A,B,C,D) in Fig.~\ref{fig:3-mode population-anharmonic}(c),(d). In each phase-space map the white dashed lines show the axes ($X=0,P=0$) in the $XP$-plane and the black stars or circles correspond to the predicted MF.} \end{figure} To examine the robustness of the Goldstone mode and the consequent unconditional squeezing, we employ the SDE to study the beyond-MF behavior of the cavity state. Figure~\ref{fig:Wigner 3-mode Harmonic} shows the number state occupation probability ($p_n$) and the Wigner function distribution of the harmonic cavity at four different pumping rates $\Omega_0 = 1,~ 1.85,~ 3.5,~ 10$, corresponding to the (A,B,C,D) points in Fig.~\ref{fig:3-mode population-harmonic} at $V_0 = 1$, respectively. Panels (a)-(d) show these quantities for the $1^{st}$-mode and panels (e)-(h) show the ones for the $2^{nd}$-mode. As can be seen in all panels (a)-(d), the distributions of the $1^{st},~3^{rd}$-modes are azimuthally symmetric, independent of the pumping rate, which is consistent with the local U(1) symmetry of these two modes and their phase freedom, i.e., $\braket{\hat{a}_{1,3}} = 0$.
Within the (A)-phase, at low pumping rate and before the parametric threshold, MF predicts zero amplitude for the $1^{st},3^{rd}$ modes, while the $2^{nd}$ mode looks like a coherent state (Fig.~\ref{fig:Wigner 3-mode Harmonic}(a),(e)). As the pumping rate increases (point B in Fig.~\ref{fig:3-mode population-harmonic} (c),(d)), the system enters the LC-phase in which mode 2 has three stable fixed points, as shown with three stars in Fig.~\ref{fig:Wigner 3-mode Harmonic}(f), and the two unpumped modes acquire a finite population. The black circle in Fig.~\ref{fig:Wigner 3-mode Harmonic}(b) shows the locus of MF fixed points. For larger values of the pump, close to the upper threshold of the multi-stability region (point C in Fig.~\ref{fig:3-mode population-harmonic}(c),(d)), the system transitions to the uniform (A)-phase again, where the $2^{nd}$-mode attains a unique fixed point and the $1^{st},3^{rd}$-modes have zero MF. However, as can be seen in Fig.~\ref{fig:Wigner 3-mode Harmonic}(g), the cavity state is far from coherent due to the larger interaction at this photon number. At an even larger pumping rate, shown in Fig.~\ref{fig:Wigner 3-mode Harmonic}(d),(h) and corresponding to point D in Fig.~\ref{fig:3-mode population-harmonic}(c),(d) (far within the (A)-phase), the $2^{nd}$ mode is in a non-classical state whose phase-space distribution is reminiscent of the single-mode cavity in this regime (Fig.~\ref{fig:pump-only} of Appendix~\ref{app:single-mode}). It is also worth mentioning that, in spite of the similar symmetric distributions of the $1^{st},3^{rd}$ modes and their vanishing means, their variances clearly change as the system traverses the different phases. For completeness, in Fig.~\ref{fig:Wigner 3-mode anHarmonic} we detail the state of the anharmonic cavity through its different phases at four pumping rates of $\Omega_0 =1,~ 2, ~ 3.5, ~ 10$ corresponding to the (A,B,C,D) points in Fig.~\ref{fig:3-mode population-anharmonic}(c),(d). As can be seen, the overall behavior of the cavity modes looks like that of the harmonic case, with the main distinction of always having one unique MF fixed point. To study the robustness of the Goldstone mode in the presence of quantum fluctuations, from the SDE analysis we calculate the correlation and spectrum of $\hat{P}_-$ as \begin{align}\label{eq:minus-mode g1} g^{(1)}(\tau) &= \braket{\lim_{t\to\infty}\hat{P}_-(t+\tau) \hat{P}_-(t)}, \\ S_{P_-}(\omega) &= \mathcal{F}\left(g^{(1)}(\tau) \right), \end{align} where $\hat{P}_- = i(\hat{a}_- - \hat{a}^\dagger_-)/\sqrt{2}$ is the momentum of the $\ket{m_-}$-mode. The results are shown in Fig.~\ref{fig:goldstone mode spec} when the interaction $V_0$ is increased from 0.1 to 1 (brown to yellow lines). Panels (a),(b) are the spectra and correlations in the (A)-phase, while (c),(d) are within the (C)-phase where the LC is predicted by MF. For direct comparison with the LC oscillations of Fig.~\ref{fig:LC}(b) and for highlighting $\omega_{LC}$, the spectral densities in (a),(c) are shown in the laser frame ($\omega_L$) rather than the local-oscillator frame ($\omega_{LO}$). Defining a dimensionless parameter $N$, where $V_0/N \rightarrow 0^+$ is the TD limit, the pumping rate is scaled by $\sqrt{N}$ so that $V_0 \Omega_0^2$ is kept fixed. As can be seen in Fig.~\ref{fig:goldstone mode spec}(a),(b), the observables are almost unchanged when the system is in the (A)-phase, where MF predicts zero photon number in $\ket{m_-}$.
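In practice we estimate Eq.~(\ref{eq:minus-mode g1}) from time series of $P_-$ recorded along the stochastic trajectories after the transient. A sketch of the estimator, based on the Wiener-Khinchin theorem, is given below; the circular FFT estimate without windowing is an implementation shortcut of ours.
\begin{verbatim}
import numpy as np

# g^(1)(tau) and S_{P_-}(omega) from trajectory records of P_-(t).
# p_minus: (n_traj, n_t) samples taken every dt_s after the transient;
# Wigner symbol: P_- = -Im(alpha_1 - alpha_3) (sign irrelevant for g1).
def g1_and_spectrum(p_minus, dt_s):
    n_t = p_minus.shape[1]
    x = p_minus - p_minus.mean(axis=1, keepdims=True)   # zero-mean
    ft = np.fft.fft(x, axis=1)
    # circular autocorrelation averaged over trajectories
    g1 = np.fft.ifft(np.abs(ft) ** 2, axis=1).real.mean(axis=0) / n_t
    S = np.fft.fftshift(np.fft.fft(g1)).real * dt_s     # spectral density
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_t, d=dt_s))
    tau = np.arange(n_t) * dt_s
    return tau, g1, omega, S
\end{verbatim}
The Lorentzian and exponential fits discussed next are then performed directly on the returned spectrum and correlation function.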
From Fig.~\ref{fig:goldstone mode spec}(a) we can see that the linewidth of this mode is large and the spectral density is very small (note that the lines for $V_0 = 0.5, ~ 0.1$ are shifted upwards for clarity). Similarly, the temporal behavior in panel (b) shows a short correlation time. On the contrary, when the system transitions to the MF LC-phase by virtue of increasing the pumping rate, the spectral densities shown in Fig.~\ref{fig:goldstone mode spec}(c) increase, and an apparent resonance feature appears that becomes more prominent at weaker interaction, i.e., closer to the TD limit and hence to the validity range of MF. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{minus-mode.eps} \caption{~\label{fig:goldstone mode spec} SDE calculations of the (a),(c) spectral density in the laser frame and (b),(d) delayed temporal correlation of the $P$-quadrature of the $\ket{m_-}$ mode in an anharmonic cavity. The upper row shows the behavior in the (A)-phase and the lower row within the MF LC-phase. The interaction is changed from $V_0 = 0.1$ to $V_0 = 1.0$, yellow to red to brown, respectively. The dashed lines show the Lorentzian fits in (a),(c) and the exponential fits in (b),(d).} \end{figure} Similarly, the temporal correlations in panel (d) show prolonged coherence times that increase at weaker interaction. To quantify these features better, we fit a Lorentzian lineshape of the following form to $S_{P_-}(\omega)$: \begin{equation}~\label{eq:lor. fit} L(\omega) = \frac{a}{(\omega - \omega_{peak})^2 + \Gamma^2} + c. \end{equation} The fits are shown with dashed lines in Fig.~\ref{fig:goldstone mode spec}(a),(c), and the center and linewidth fit parameters are presented in Table~\ref{tab: Goldstone mode Lorentzian}. Within the (A)-phase, $\omega_{peak}$ and $\Gamma$ change only slightly with the interaction $V_0$. Throughout the LC-phase, on the other hand, $\omega_{peak} \approx 7.5$, i.e., the LC oscillation frequency $\omega_{LC}$ in Fig.~\ref{fig:LC}(b). Moreover, starting from a narrow resonance ($\Gamma \approx 0.4$) at weak interaction (large $N$), the linewidth clearly increases ($\Gamma \approx 3.2$) upon increasing the interaction (small $N$). Similar values were obtained independently by fitting exponential functions to the correlation functions, i.e., the dashed lines in Fig.~\ref{fig:goldstone mode spec}(b),(d). \begin{table}~\label{tab: Goldstone mode Lorentzian} \begin{tabular}{ |c|c|c|c| } \hline & $V_0 = 0.1$ & $V_0 = 0.5$ & $V_0 = 1.0$ \\ \hline $\omega_{peak}$ (A) & 8.7 & 8.5 & 9 \\ \hline $\omega_{peak}$ (LC) & 7.5 & 7.7 & 7.9 \\ \hline $\Gamma$ (A) & 5.6 & 5.4 & 6.5 \\ \hline $\Gamma$ (LC) & 0.4 & 1.7 & 3.2 \\ \hline \end{tabular} \caption{The Lorentzian fit parameters for the spectral density of the $P_-$ quadrature within the MF (A)- and LC-phases, as in Fig.~\ref{fig:goldstone mode spec}(a),(c).} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig16.eps} \caption{~\label{fig:goldstone mode decay} The linewidth of the $P_-$ quadrature, i.e., the Goldstone mode, within the MF LC-phase as a function of the dimensionless parameter $N$. The red squares are the SDE calculation results, while the solid blue line is a power-law fit to the data. The solid red line shows the number of particles in this mode (right axis).
The inset colormaps show the TW distribution of the $\ket{m_-}$-mode at a couple of interaction strengths.} \end{figure} As a final remark, we study the behavior of the $P_-$-quadrature linewidth over the whole range from the quantum to the TD limit, corresponding to small and large $N$, respectively. The results are depicted in Fig.~\ref{fig:goldstone mode decay} with red squares. The solid red line is the number of particles in this mode (right y-axis), and the solid blue line is a power-law fit to the data, indicating that the linewidth narrowing scales as $N^{-0.9}$. In other words, while the gapless Goldstone-mode picture in the TD limit (the kernel of the MF-Bogoliubov matrix) agrees well with the small linewidth $\Gamma \approx 0$ of the $P_-$ quadrature, approaching the quantum limit the decay rate notably increases due to phase diffusion. It is worth comparing this tendency with the $N^{-1}$ behavior, i.e., the Schawlow-Townes laser linewidth scaling~\cite{Haken1984}. To investigate the state of the $\ket{m_-}$-mode as well, we show the Wigner function distribution of this mode at a few different interaction strengths. As can be seen, at larger $N$ (point C), hence weaker interaction, the phase-space distribution resembles that of a number-squeezed state. However, upon increasing the interaction (points A,B) the squeezing decreases. This clearly confirms the effect of phase diffusion in reducing the coherence time of the generated pairs. Besides, this effect becomes more dominant deep in the quantum regime, where the fluctuations should not be ignored. \section{Conclusion} Exploring dissipative phase transitions is one of the important topics in open quantum systems. There, the interplay between dissipation, drive, and interaction provides a rich testbed to investigate the dynamics of many-body systems far from equilibrium. In this article, we theoretically investigated the first- and second-order quantum dissipative phase transitions of a three-mode cavity with intra- and inter-modal two-body interactions as a prototypical model. We showed the emergence of a MF limit-cycle phase where the local U(1) symmetry and the TTS of the Liouvillian are spontaneously broken. We explained the connection between this phase and the Goldstone mode well studied in the TD limit. By employing the Wigner function formalism, hence properly including the quantum noise, we showed the breakdown of the MF predictions within the quantum regime. Within this range, fluctuations notably limit the coherence time of the Goldstone mode due to phase diffusion. Concerning experimental realizations, the model and the results are applicable to a wide variety of driven-dissipative interacting bosonic platforms, including circuit-QED, semiconductor excitons, and multi-mode cavities with cold atoms~\cite{Jia2018, Vaidya2018}, where the figure of merit $V_0/\gamma$ can be tuned properly. It is also interesting to explore the feasibility of using such platforms for creating non-Gaussian states as an instrumental ingredient for quantum information protocols based on continuous-variable entanglement and photonic quantum logic gates~\cite{Braunstein2005,Santori2014,Liu2017,Zhang2017}. \section*{Acknowledgments} The authors thank Wolfgang Schleich, Hans-Peter B\"uchler, Jan Kumlin, and Jens Hertkorn for insightful discussions. The invaluable IT support from Daniel Weller is greatly acknowledged. H. A. acknowledges the financial support from the IQST Young Researchers grant and the Eliteprogram award of Baden-W\"urttemberg Stiftung. I. C.
acknowledges financial support from the European Union FET-Open grant ``MIR-BOSE'' (n.~737017), from the H2020-FETFLAG-2018-2020 project ``PhoQuS'' (n.~820392), and from the Provincia Autonoma di Trento.
\section{Introduction} Speech is a short-term stationary signal~\citep{quatieri2006discrete} which contains information related to the spoken content, speaker's identity, speaker's emotion, spoken language, etc. Speaker recognition technology recognizes persons from their speech~\citep{kinnunen2010overview1}. \emph{Automatic speaker verification} (ASV) is one of the important tasks in speaker recognition, where two voice signals are compared by a machine to decide whether they are produced by the same speaker or not. The ASV technology finds its application in voice biometrics for authentication tasks in both logical and physical access scenarios~\citep{sahidullah2019introduction, markowitz2000voice} and also helps the judicial system compare an unknown speaker's voice with a known suspect's voice~\citep{dumas1990voice, campbell2009forensic}. The performance of the ASV system is reliable in \emph{controlled conditions}; however, in real-world situations, the performance is considerably degraded due to the variations in \emph{intrinsic factors} (speaker's emotion, health, age, etc.) and \emph{extrinsic factors} (background noise, channel, room impulse response, etc.)~\citep{nagrani2017voxceleb1}. To achieve good performance in practical applications, the ASV system should be robust against these unwanted variations. \par A typical ASV system consists of three main modules: \emph{frame-level feature extractor}, \emph{segment-level feature (embedding) extractor}, and \emph{classifier}. The frame-level feature extraction unit converts the raw speech waveform into a sequence of acoustic feature vectors~\citep{kinnunen2010overview1, campbell1997speaker}. Most of the ASV studies use short-term spectral features which are based on the knowledge of speech production and perception models. Some studies use high-level features as complementary information which represent other speaking characteristics such as speaking rate and pronunciation style~\citep{ayoub2016gammatone}. The classifier module further parameterizes the features into statistical models~\citep{kinnunen2010overview1}. For efficient use of ASV systems in different real-world applications, we need a feature extraction method which is robust to unwanted variations in the speech signal and computationally inexpensive~\citep{kinnunen2010overview1}. Improving the robustness of the acoustic features usually reduces the effort required from the classifier to improve the ASV system performance. The scope of this work is limited to the development of a new robust feature extraction algorithm for real-world applications. Among all the existing cepstral features, \emph{mel-frequency cepstral coefficients} (MFCCs) are the most popular and widely used for ASV as well as other speech processing tasks such as automatic speech recognition~\citep{benzeghiba2007automatic}, speaker diarization~\citep{anguera2012speaker}, spoofing countermeasures~\citep{wu2015spoofing}, etc. The recently introduced \emph{x-vector} based ASV system, which drew attention in previous NIST speaker recognition evaluations~\citep{NIST,lee2019i4u,NISTSRE2019}, also uses MFCCs as acoustic features. The MFCC computation process involves mel scale integration followed by logarithmic compression and \emph{discrete cosine transform} (DCT). The MFCCs are very popular for the following reasons. First, the computation process~\textcolor{black}{utilizes} mel filterbank analysis, which is partially inspired by the processing of the audio signal by the human auditory system.
Second, the computation process involves the \emph{fast Fourier transform} (FFT) and matrix multiplication, which makes it computationally more efficient than other methods such as \emph{linear prediction cepstral coefficients} (LPCCs) or \emph{line spectral frequencies} (LSFs)~\cite{RabinerFundamentals1993}. Third, MFCCs are also suitable for different feature-level compensation methods such as \emph{relative spectral} (RASTA) processing~\cite{hermansky1994rasta}, \emph{cepstral mean and variance normalization} (CMVN), and \emph{feature warping}~\cite{Pelecanos2001}. Though the MFCCs are relatively more robust than other cepstral features such as \emph{linear frequency cepstral coefficients} (LFCCs) or LPCCs, the ASV performance with MFCCs is severely degraded in real-world conditions due to the mismatch of acoustic conditions in the enrollment (or speaker registration) and verification (or speaker authentication) phases~\citep{sahidullah2012design, paliwal2009speech}. To overcome some of the shortcomings of MFCCs, various acoustic features like \emph{frequency domain linear prediction} (FDLP)~\citep{ganapathy2012signal}, \emph{cochlear frequency cepstral coefficients} (CFCCs)~\citep{li2011auditory}, \emph{power-\textcolor{black}{normalized} cepstral coefficients} (PNCCs)~\citep{kim2016power}, \emph{mean Hilbert envelope coefficients} (MHECs)~\citep{SADJADI2015138}, \emph{Gammatone frequency cepstral coefficients} (GFCCs)~\citep{zhao2012casa}, \emph{constant-Q cepstral coefficients} (CQCCs)~\citep{todisco2016articulation}, \emph{time-varying linear prediction} (TVLP)~\citep{vestman2018speaker}, and \emph{locally-\textcolor{black}{normalized} cepstral coefficients} (LNCCs)~\citep{poblete2015perceptually} were proposed. Even though all these features achieve better performance in noisy conditions, they require a large number of user-defined parameters. These parameters further need to be manually tuned for different environmental conditions. The overall process seems to be difficult for a system developer. Also, improving feature robustness beyond a certain level is extremely difficult, especially for a wide range of degradations~\citep{ganapathy2012signal,SADJADI2015138}. Besides, most of those features are also computationally more expensive than MFCCs. The MFCCs, on the other hand, have a smaller number of free parameters. This study develops a data-driven feature extraction method which follows the same principle as MFCC but derives the parameters from the speech data itself. Unlike the feature extraction methods discussed before, which require ``hand-crafted'' parameters, a feature extraction method with parameters computed in a \emph{data-driven} procedure reduces the effort needed for manual fine-tuning. The data-driven methods also show improved robustness when large corpora are used in training strong discriminative models~\citep{hansen2015speaker}. \textcolor{black}{The data-driven acoustic feature extraction methods use speech data to compute the parameters of the feature extraction algorithm. We classify those methods into two broad categories. One of them uses discriminative approaches such as the \emph{artificial neural network} (ANN) or \emph{linear discriminant analysis} (LDA). These methods require labeled speech data. The other type does not apply the discriminative approach but utilizes some speech science knowledge during parameter estimation.
In other words, they learn the feature extraction parameters directly from the speech data without using any class label information. Some of the popular data-driven speech feature extraction methods are discussed in Table~\ref{Literature}. Most of the methods are discriminative in nature, and they are generally investigated for automatic speech recognition (ASR) tasks. In ASV research, data-driven feature extraction methods have drawn relatively less attention~\citep{ravanelli2018speaker}.} \begin{table}[t!] \renewcommand{\arraystretch}{1.2} \caption{Selected works on data-driven feature extraction methods for various speech applications (ASR: Automatic speech recognition, ASV: Automatic speaker verification, SAD: Speech activity detection).} \centering \begin{footnotesize} \begin{tabular}{|c|>{\arraybackslash}p{10cm}|c|} \hline \textbf{Work} & \textbf{Methodology} & \textbf{Task}\\ \hline \multirow{2}{*}{\citep{hermansky1999temporal}}& A neural network is trained with speech features of larger temporal context and used to create data-driven features called TempoRAl Patterns (TRAPs).&\multirow{2}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{malayath2000data}}& This work investigates a data-driven temporal filter with oriented principal component analysis (OPCA) that reduces channel variability.&\multirow{2}{*}{ASV}\\ \hline \multirow{2}{*}{\citep{burget2001data}}& The filterbank is derived from phonetically labeled speech data using LDA.&\multirow{2}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{malayath2003data}}& Data-driven LDA is applied to the logarithmic critical-band power spectrum of speech.&\multirow{2}{*}{ASR}\\ \hline \multirow{5}{*}{\citep{hermansky2003trap}}& This method uses TRAP followed by TANDEM. The TRAP estimator provides multiple pieces of evidence in terms of posterior probabilities from frequency-localized overlapping time-frequency regions of the speech signal, computed with the help of a data-driven transformation of contextual information. Next, TANDEM converts the frequency-\textcolor{black}{localized} evidence to features.&\multirow{5}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{hung2006optimization}}& Data-driven temporal filters are designed using PCA, LDA, and the minimum classification error (MCE) framework.&\multirow{2}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{el2006using}}& Speech segments are created using data-driven and automatic language-independent speech processing (ALISP).&\multirow{2}{*}{ASV}\\ \hline \multirow{2}{*}{\citep{lu2008investigation}}& This work uses the F-ratio to adjust the center and edge frequencies of the filterbank; the F-ratio is computed for speaker separability.&\multirow{2}{*}{ASV}\\ \hline \multirow{2}{*}{\citep{paliwal2009speech}}& Data-driven frequency warping is obtained by dividing the long-term average spectrum (LTAS) into subbands of equal energies.&\multirow{2}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{thomas2012acoustic}}& A multi-layer perceptron (MLP) is trained to classify speech and non-speech frames.
The outputs of the MLP are used as posterior features.&\multirow{2}{*}{SAD}\\ \hline \multirow{3}{*}{\citep{sainath2015learning}}& A combination of a convolutional neural network (CNN) and long short-term memory (LSTM) is used to learn the neural network parameters to classify the context-dependent state labels.&\multirow{3}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{hoshen2015speech}}& A CNN is used to learn time-domain filter parameters, and the network is trained to classify the context-dependent state labels.&\multirow{2}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{7563327}}& The filterbank is learned in an unsupervised manner using a convolutional restricted Boltzmann machine (ConvRBM) with clean and noisy audio data.&\multirow{2}{*}{ASR}\\ \hline \multirow{2}{*}{\citep{seki2017deep}}& The triangular mel filter is approximated using a Gaussian function, and the parameters of this pseudo filter are learned using a DNN. &\multirow{2}{*}{ASR}\\ \hline \multirow{3}{*}{\citep{zeghidour2018learning}}& Computational steps of mel-frequency spectral coefficients (MFSCs) are implemented with a neural network where the parameters are learned using convolution layers with the goal of maximizing phone recognition accuracy.&\multirow{3}{*}{ASR}\\ \hline \multirow{4}{*}{\citep{ravanelli2018speaker}}& A CNN-based architecture, SincNet, is introduced, which learns the lower and upper cut-off frequencies of the subband filters. Each filter is approximated with the help of a pair of sinc functions in the time domain, and its parameters are tuned by maximizing speaker classification accuracy.&\multirow{4}{*}{ASV}\\ \hline \end{tabular} \end{footnotesize} \label{Literature} \end{table} \textcolor{black}{In this work, we perform a detailed analysis of a data-driven feature extraction method for ASV which utilizes only audio data for computing the desired parameters, in contrast to most of the data-driven techniques that require additional metadata such as speech (\emph{e.g.}, phoneme) or speaker information. We select the \emph{speech-signal-based frequency cepstral coefficients} (SFCCs), as this feature has demonstrated promising performance in speech and speaker recognition applications~\citep{paliwal2009speech, sarangi2012novel}. The method is also very similar to MFCC; however, in contrast to MFCC, which applies the handcrafted mel scale, SFCC utilizes a \emph{frequency warping} scale that is computed by a data-driven approach. Since the filterbank parameters are computed prior to the feature extraction step, its effective computational time is the same as that of MFCCs, and thus considerably lower than that of other recently proposed features such as FDLP, MHEC, or CQCC. The current study extends our preliminary study~\citep{sarangi2012novel}, which introduced the basic data-driven frequency warping~\citep{paliwal2009speech} in speaker recognition. In this work, we further improve this method by optimizing the scale and by computing the other parameters in a data-driven manner. By performing separability analysis with the F-ratio, we have demonstrated that the proposed features are more speaker discriminative than standard MFCCs. Our ASV experiments conducted with different ASV systems agree with this analysis. The major contributions of this work are summarized below.} \begin{itemize} \item \textcolor{black}{We improve the basic data-driven scale with frame selection.
With comprehensive analysis and experimental results, we demonstrate that the selective use of speech frames helps to create a more reliable frequency warping scale.} \item \textcolor{black}{We introduce a data-driven way of computing filter responses as an alternative to the auditory-motivated triangular filters. Our proposed method computes the filterbank response in an unsupervised way with a smaller amount of speech data, in contrast to the discriminative approaches that require class labels and a larger amount of speech data.} \item \textcolor{black}{We evaluate the proposed features with a state-of-the-art x-vector based ASV system, which currently utilizes either MFCCs or log-mel energy features.} \end{itemize} The rest of the paper is organized as follows. Section~\ref{Section:Cepstral} explains the baseline cepstral feature extraction methods for both the mel and the data-driven scale. The next section presents the proposed method for improving the data-driven scale. We propose the data-driven approach of computing filter responses in Section~\ref{Section:Optimization of filter shape}. We discuss the experimental setup in Section~\ref{Section:Experimental setup}, and we show the results in Section~\ref{Section:Results and discussion}. Finally, we conclude in Section~\ref{Section:Conclusion} with a discussion on the limitations of this study and possible future directions. \section{Cepstral features based on filterbank}\label{Section:Cepstral} A general block diagram of cepstral feature extraction methods using a filterbank is shown in Fig.~\ref{fig:1}. After pre-processing steps such as framing and windowing, the short-term power spectrum of the speech frames is multiplied with the filterbank frequency response. Then, cepstral features are created by performing DCT on the log-energies of the filterbank output. In MFCC computation, we place the triangular-shaped filters on the mel scale. However, for SFCCs, the triangular filters are placed on the data-driven speech-signal-based scale. In the following sub-sections, we briefly describe these two feature extraction methods. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{drawing_revision2.png}\\ \vspace*{-16.5cm} \caption{Block diagram of a typical cepstral feature extraction method.}\label{fig:1} \centering \end{figure} \subsection{MFCCs: fixed mel scale} The MFCC feature extraction scheme introduced in~\citep{davis1980comparison} provides a straightforward way to compute cepstral features. Since then, it has been the state of the art in different speech-based applications, including speaker recognition~\citep{8461375}. It uses a mel scale~\citep{Stevens} based triangular filterbank for the creation of cepstral features. There are several alternatives to the mel scale representation~\citep{ganchev2005comparative}. The most commonly used equation to convert linear frequency $\mathit{f}$ to mel frequency $\mathit{f}_{mel}$ is \begin{equation}\label{eqpia1} f_{mel}=2595\log_{10}\left(1+\frac{f}{700}\right). \end{equation} In the mel scale domain, the frequency axis is divided into equidistant points. By considering those points as filter edge frequencies, the filters are placed by keeping $50\%$ overlap with the adjacent ones~\citep{Sandipan1}. In the MFCC computation step, the pre-emphasized speech signal is first segmented into frames of $20$-$30$~ms, typically with an overlap of $50\%$. After that, we perform the short-time Fourier transform (STFT) of the speech frames. Then, we compute the filterbank energies by using the mel filterbank.
Finally, DCT is performed over the logarithm of the filterbank energies to get the cepstral features. The detailed procedure to compute MFCCs can be found in~\citep{sahidullah2012design, Ganchev}. \subsection{SFCCs: data-driven scale} The SFCC is a data-driven cepstral feature extraction method which computes the non-linear scale from the training speech. This scale was initially proposed for speech recognition~\citep{paliwal2009speech} and has later been successfully applied in speaker recognition~\citep{sarangi2012novel}. The SFCC extraction method replaces the mel scale with a data-driven scale, and the rest of the process is the same as the MFCC computation process. The following paragraph describes the steps required to obtain the data-driven scale. The scale computation involves the computation of the long-term average power spectrum (LTAS) of speech data. The LTAS per speech utterance is computed first by averaging the short-term power spectrum over all the frames in the utterance. \textcolor{black}{Then, the average LTAS is computed over all the speech utterances present in a corpus for computing the scale}. In the next step, the logarithm of the LTAS is divided into equal-area intervals to compute the filter edge frequencies. The derivation of the speech-based data-driven scale is described in the following steps. \textbf{1. Computation of LTAS}: Let $\mathit{u}$ be a speech utterance of $\mathit{N}_l$ frames. Its LTAS is expressed as, \begin{equation}\label{eqpia2} P[k]=\frac{1}{N_l}\sum_{i=1}^{N_l}X_i[k], \end{equation} where $X_i[k]$ is the energy spectrum of the $i$-th frame and $k$ is the index of the frequency bin. \textbf{2. Computation of average LTAS}: The average LTAS is computed as the ensemble average of the LTAS of all speech utterances in a corpus, and it is defined as, \begin{equation}\label{eqpia3} \bar{P}[k]=\frac{1}{N_s}\sum_{i=1}^{N_s}P_i[k], \end{equation} where $P_i[k]$ is the LTAS of the $i$-th utterance and $N_s$ is the total number of speech utterances in the corpus. \textbf{3. Computation of cumulative log power spectrum}: Now, if we want to place $\mathit{Q}$ filters, we divide $\log \bar{P}[k]$ into frequency subbands of $\mathit{Q}$ equal areas. We compute the area of the $j$-th band as, \begin{equation}\label{eqpia4} A_j=\sum_{k=k_l^j}^{k_h^j}\log\bar{P}[k], \end{equation} where $j=1,2,3,...,\mathit{Q}$. Here $k_l^j$ and $k_h^j$ are the lower and upper band edges for the $j$-th filter, and they are selected in such a manner that $A_1=A_2=A_3=\ldots=A_Q$. We also consider the lower edge frequency of the first filter as $0$~Hz and the higher edge frequency of the last filter as the Nyquist frequency. In practice, it is not possible to get the $A_i$s exactly equal in numerical values, and they are made approximately equal. \textbf{4. Computation of warping scale}: Finally, the scale is computed by interpolating the filterbank center frequencies to their mapped values, which are obtained with the help of the following equation~\citep{paliwal2009speech}, \begin{equation}\label{eqpia5} W\left[\frac{k_l^j+k_h^{j}}{2}\right]=\frac{j}{Q}, \end{equation} where $j=1,2,3,..., \mathit{Q}$. Eq.~(\ref{eqpia5}) gives the required frequency points to design the filters in the filterbank structure. The cepstral features computed with this scale are referred to as SFCCs. The scale used in SFCC computation is shown in Fig.~\ref{fig:3} along with the standard mel, ERB, and Bark scales, as well as the scale proposed in Section~\ref{Section3}. \textcolor{black}{To compute this scale, we do not require speaker labels for the corpus, unlike most of the methods listed in Table~\ref{Literature}}.
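For concreteness, a minimal Python sketch of steps 1-4 is given below; the framing parameters and the positivity shift applied before the equal-area split are implementation choices of ours rather than specifications of the original method. Here \texttt{utterances} is assumed to be a list of one-dimensional speech signals sampled at rate \texttt{fs}.
\begin{verbatim}
import numpy as np
from scipy.signal import stft

def sfcc_scale(utterances, fs, Q=20, nfft=512):
    """Data-driven scale: steps 1-4 of the SFCC derivation."""
    ltas = []
    for u in utterances:
        _, _, X = stft(u, fs=fs, nperseg=int(0.02 * fs),
                       noverlap=int(0.01 * fs), nfft=nfft)
        ltas.append((np.abs(X) ** 2).mean(axis=1))  # step 1: per-utterance LTAS
    log_p = np.log(np.mean(ltas, axis=0) + 1e-12)   # step 2: log average LTAS
    log_p = log_p - log_p.min() + 1e-3              # shift positive (ours)
    cum = np.cumsum(log_p)
    targets = cum[-1] * np.arange(1, Q + 1) / Q     # step 3: Q equal areas
    edges = np.concatenate(([0], np.searchsorted(cum, targets)))
    centres_hz = (edges[:-1] + edges[1:]) / 2 * fs / nfft
    warp = np.arange(1, Q + 1) / Q                  # step 4: W(f_c) = j/Q
    return centres_hz, warp, edges * fs / nfft      # filter edges in Hz
\end{verbatim}
Interpolating \texttt{warp} against \texttt{centres\_hz} then reproduces the warping curves shown in Fig.~\ref{fig:3}.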
During this scale computation, all the speech frames selected by a speech activity detection (SAD) method are used. This includes all types of speech frames showing different spectral characteristics; however, we do not necessarily need the entire speech corpus, as the LTAS can be obtained with a small subset of the available data. In the next section, we consider a frame selection technique to select useful frames for better ASV performance. \section{Data-driven frequency warping using selected frames}\label{Section3} The frame selection strategy has been used in speaker recognition for fast implementation in real-time applications~\citep{kinnunen2006real, sarkar2010real}. In this work, we select a subset of speech frames for developing the warping scale. The conventional mel scale is a psychophysical scale for pitch perception. This experimental scale was first formulated with the help of a perceptual test by playing tones of fixed frequency to the listeners~\citep{Stevens}. The participants were asked to tune the frequency of another adjustable tone according to the perceived half-value of the fixed tone. All the tones were played at a constant loudness of $60$~dB. The scale was formulated by fitting a curve that maps the numerical value of linear frequency to the perceived value. We note that the mel scale development was originally a subjective method which might be biased toward the selected listeners~\citep{Greenwood}. Therefore, instead of a subjective criterion, the data-driven method replaces the human listener with the objective criterion of equal energy of the voice signal. During this process, we consider all types of speech data irrespective of the voice production mechanism. This crude selection of speech frames includes unvoiced speech frames created by random noise passing through a narrow constriction of the vocal tract. These unvoiced frames have no harmonic structure and closely resemble the uniform distribution of noise spectra~\citep{quatieri2006discrete}. Therefore, we propose to select only the voiced frames having pitch information for our data-driven scale formulation process. \begin{figure}[t!] \centering \includegraphics[width=1\textwidth]{final_today_spectrogram_pitch_contour_plot.jpg}\\ \caption{\textcolor{black}{Illustration of (a)~spectrogram and (b)~pitch contour of a speech signal taken from the NIST SRE $2001$ speech corpus. We compute the pitch values using \emph{the pitch estimation filter robust to high levels of noise} (PEFAC) method as studied in~\citep{gonzalez2014pefac}.}}\label{fig:2} \centering \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.98\textwidth]{comparision_all_scale_pitch_today.png} \caption{The frequency warping function for the Mel, ERB, Bark, SFCC, and proposed scale.}\label{fig:3} \centering \end{figure} Fig.~\ref{fig:2} shows the spectrogram and pitch contour of a speech signal taken from the NIST SRE $2001$ corpus. Fig.~\ref{fig:3} shows the~\textcolor{black}{normalized} plot of both the auditory and data-driven scales. \textcolor{black}{We observe that the data-driven scales have lower resolution at both ends of the frequency band, and higher resolution everywhere else. This is expected, as the speech files for NIST SRE $2001$ are collected over a telephone channel with a frequency range of $300-3700$~Hz.
Therefore, we hypothesize that the filterbank placed according to the newly derived scale will help to capture more relevant information than the mel filterbank or the standard data-driven filterbank.} \section{Computation of data-driven filter shape using PCA}\label{Section:Optimization of filter shape} \par In traditional MFCCs and SFCCs, we use filters with triangular-shaped frequency responses which closely approximate the auditory filters in the cochlea. Other shapes like Gaussian~\citep{chakroborty2010feature} and trapezoidal~\citep{hermansky1990perceptual} are also used in the speech feature extraction process. The shape of the filters in the filterbank assigns non-uniform weights to the subband frequencies. In this work, the idea is to design the subband filter response so that the output of this filter, computed as the energy, will represent the subband frequency components in the most effective way. In other words, we need to reduce the dimension of the subband frequency components to a single data point. We employ \emph{principal component analysis} (PCA), which is appropriate for finding the most ``expressive'' representation of the data~\citep{bishop2006pattern}. Previously, PCA was applied to design a data-driven filterbank with the mel scale for robust speech recognition~\cite{hung2006optimization, lee2001improved}. We propose to apply PCA on the log-power spectrum of each frequency band separately for constructing the filters. The PCA basis with the highest eigenvalue, known as the ``first basis'', is used to create the filter frequency response. Since the speech signal is a highly correlated process~\cite{0888Jayant1984}, the subband covariance matrix will have positive entries. Hence, all the elements of its eigenvector with the highest eigenvalue, i.e., the first basis of PCA, will be non-negative according to the \emph{Perron-Frobenius} theorem~\citep{johnson1985matrix}. The steps to find the filter shape are~\textcolor{black}{summarized} below: \begin{figure}[t!] \centering \includegraphics[width=1\textwidth]{newpca_v1.eps} \caption{Figure showing the proposed data-driven method for computing the frequency response of a filter. Here the matrix $\mathbf{P}^{r}$ contains the log power spectra of all the frames corresponding to the $r$-th subband.}\label{fig:4} \centering \end{figure} \textbf{1. Computation of subband covariance matrix}: Let $P_{i}^{r}[k]$ be the log power spectrum of the $k$-th frequency component for the $r$-th subband and the $i$-th speech frame. Then the subband covariance matrix corresponding to the $r$-th subband is given as: \begin{equation}\label{eqpia6} \mathbf{S}^{r}=\frac{1}{N_{f}-1}\sum_{i=1}^{N_{f}}(P_{i}^{r}[k]-\bar{m}[k])(P_{i}^{r}[k]-\bar{m}[k])^{\top}, \end{equation} where $N_{f}$ is the number of frames, $k_{r}$ is the number of frequency bins in the $r$-th subband, and $\bar{m}[k]$ is the mean subband power spectrum given by, \begin{equation}\label{eqpia6b} \bar{m}[k]=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}P_{i}^{r}[k]. \end{equation} \textbf{2. Computation of first PCA basis}: We apply \emph{singular value decomposition} (SVD)~\citep{golub2012matrix} to compute the PCA basis of the subband covariance matrix. Using SVD, we can write, \begin{equation}\label{eqpia7} \mathbf{S}^{r}=\mathbf{U}\mathbf{V}\mathbf{U}^{\top}, \end{equation} where $\mathbf{U}$ is the $k_r \times k_r$ orthogonal matrix containing the eigenvectors of $\mathbf{S}^{r}$ in its columns, and the diagonal elements of the $k_r \times k_r$ matrix $\mathbf{V}$ contain the singular values.
The first column of $\mathbf{U}$, i.e., the first principal component, is used to create the $r$-th filter. We apply zero-padding to get the filter frequency response for the entire band. The computation of the PCA-based filter response is illustrated in Fig.~\ref{fig:4}. The filter shape computed in the above process treats all the frequency components within a subband in an identical manner. However, considering that the subbands overlap with the adjacent bands, we apply a tapering function to the power spectrum that assigns higher weights to the frequency components near the center frequencies and lower weights to the components near the edge frequencies. We use a \emph{Hamming window} on the power spectrum data before performing PCA. We also~\textcolor{black}{normalize} the frequency response to make the highest magnitude unity, similar to the mel filters. Fig.~\ref{fig:5} illustrates the filters for the different frequency warping scales. In order to analyze the separability of different features, we compute the F-ratio~\citep{nicholson1997}. For this analysis, we used $131$ speakers from the POLYCOST corpus~\citep{hennebert2000polycost}. In Table~\ref{F-ratio}, we show the F-ratios of the log-energies of different feature extraction methods with $20$ filters. \textcolor{black}{This demonstrates that the proposed methods have more filters with higher discriminative characteristics. We also show the average F-ratio, which indicates that the proposed features are better than the MFCCs in most cases}. \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{final_filterbank_plot_today.eps} \caption{Data-driven filterbank frequency responses using PCA. The filters are shown for different scales: (a) mel, (b) speech-based, and (c) speech-based with pitch. The next three (d, e, and f) show the filter shapes for the three scales when a Hamming window is applied to the log-power spectrum. The last three (g, h, and i) are for normalized frequency responses. In all cases, the filters are derived from the development set of the NIST SRE $2001$ corpus.}\label{fig:5} \centering \end{figure} \begin{table}[h] \renewcommand{\arraystretch}{1.2} \caption{\textcolor{black}{F-ratios of log-energies for MFCC features and for three kinds of SFCC features denoted by M1, M2, and M3}. M1 indicates the baseline SFCC feature where the scale is computed with all the speech frames. M2 indicates the SFCC features where the scale is computed taking only speech frames having pitch, selected using a pitch estimation algorithm. Finally, M3 indicates the SFCC features where the scale is the same as for M2 but the triangular filters are replaced with window-based PCA filters.
The last row shows the average F-ratio over all the filters.} \centering \begin{footnotesize} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Filter No.} & \multirow{2}{*}{MFCC} & \multicolumn{3}{|c|}{SFCC}\\ \cline{3-5} & & M1 & M2 & M3 \\ \hline 1 & $\textbf{0.5677}$ & $0.4107$ & $0.4103$ & $0.4149$ \\ \hline 2 & $\textbf{0.4241}$ & $0.3270$ & $0.3262$ & $0.3281$ \\ \hline 3 & $\textbf{0.3417}$ & $0.2864$ & $0.2862$ & $0.2864$ \\ \hline 4 & $0.2403$ & $0.2855$ & $0.2856$ & $\textbf{0.2860}$ \\ \hline 5 & $0.2015$ & $0.2965$ & $\textbf{0.2968}$ & $0.2961$ \\ \hline 6 & $0.2135$ & $0.3018$ & $\textbf{0.3022}$ & $0.3012$ \\ \hline 7 & $0.2521$ & $0.3209$ & $\textbf{0.3217}$ & $0.3209$ \\ \hline 8 & $0.2607$ & $0.3405$ & $\textbf{0.3412}$ & $0.3410$ \\ \hline 9 & $0.2870$ & $0.3564$ & $\textbf{0.3571}$ & $0.3565$ \\ \hline 10 & $0.3088$ & $0.3773$ & $\textbf{0.3780}$ & $0.3774$ \\ \hline 11 & $0.3252$ & $\textbf{0.3758}$ & $0.3753$ & $0.3749$ \\ \hline 12 & $0.3407$ & $0.3775$ & $\textbf{0.3785}$ & $0.3781$ \\ \hline 13 & $0.3281$ & $0.4093$ & $\textbf{0.4110}$ & $0.4110$ \\ \hline 14 & $0.3511$ & $0.4396$ & $\textbf{0.4408}$ & $0.4404$ \\ \hline 15 & $0.3966$ & $0.4596$ & $\textbf{0.4598}$ & $0.4593$ \\ \hline 16 & $0.4280$ & $\textbf{0.4469}$ & $0.4460$ & $0.4454$ \\ \hline 17 & $0.4160$ & $0.4477$ & $\textbf{0.4487}$ & $0.4484$ \\ \hline 18 & $0.4557$ & $0.4842$ & $0.4853$ & $\textbf{0.4861}$ \\ \hline 19 & $0.4990$ & $0.5164$ & $\textbf{0.5176}$ & $0.5173$ \\ \hline 20 & $0.5696$ & $0.5746$ & $0.5757$ & $\textbf{0.5845}$ \\ \hline Avg. & $0.3604$ & $0.3917$ & $0.3922$ & $\textbf{0.3927}$ \\ \hline \end{tabular} \end{footnotesize} \label{F-ratio} \end{table} \section{Experimental setup}\label{Section:Experimental setup} \begin{table*}[h] \renewcommand{\arraystretch}{1.2} \caption{Summary of the corpora for speaker verification experiments.} \centering \begin{footnotesize} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Corpus}}&\textbf{No. of}&\textbf{Target}&\textbf{Test}&\textbf{Total}&\textbf{True}&\textbf{Impostor}\\ &\textbf{speakers}&\textbf{models}&\textbf{segments}&\textbf{trials}&\textbf{trials}&\textbf{trials}\\ \hline NIST SRE $2001$&$174$&$174$&$2038$&$22418$&$2038$&$20380$ \\ \hline NIST SRE $2002$&$330$&$330$&$3570$&$39270$&$2983$&$36287$\\ \hline VoxCeleb1 &$40$&$4715$&$4713$&$37720$&$18860$&$18860$\\ \hline \end{tabular} \end{footnotesize} \label{Database} \end{table*} \subsection{Corpora for experiments} \par We evaluate our proposed method on the NIST (SRE $2001$ and SRE $2002$) and VoxCeleb (VoxCeleb1) speech corpora~\citep{NIST1, NIST2, nagrani2017voxceleb1}. In addition, we evaluate the performance in noisy conditions. Initially, we conduct experiments on the NIST SRE $2001$ corpus to~\textcolor{black}{optimize} different parameters. Then, we use those parameters to evaluate the ASV system on the NIST SRE $2002$ corpus for both clean and noisy conditions. We use the VoxCeleb1 corpus, which consists of a large number of speakers, to assess real-world conditions~\citep{nagrani2017voxceleb1}. This corpus consists of the voices of $1251$ celebrities collected from YouTube videos. Out of them, $40$ speakers are used for evaluation purposes. The sampling rate of each utterance is $16$ kHz, and the average utterance length is $8$ seconds. The corpora used in our experiments are summarized in Table~\ref{Database}. We use the development data for scale computation.
\textcolor{black}{The same data are used to train the model parameters and hyper-parameters, i.e., for computing the parameters of the UBM, PLDA, and T-matrix when required}. \textcolor{black}{The enrollment and test sentences are corrupted with additive noise, where the SNR ranges from $0$ to $40$ dB and the noise type is randomly chosen from five noises (white, pink, babble, volvo, and factory). We take the noise files from the NOISEX-$92$ corpus.} \subsection{Feature extraction} \par We extract the acoustic features from speech frames of $20$~ms with $10$~ms overlap. For the experiments with the GMM-UBM and i-vector systems, we use $20$ filters. We extract $19$ coefficients after discarding the first coefficient. Finally, a $57$-dimensional feature vector~\citep{sahidullah2016local} is formed by appending delta and double-delta coefficients. The MFCCs are filtered with RASTA processing~\citep{hermansky1994rasta} to remove the slowly varying channel effect. Finally, we perform cepstral mean and variance normalization (CMVN) after applying speech activity detection (SAD) based on bi-Gaussian modelling~\citep{sahidullah2012design}. We use identical pre-processing and post-processing steps for all the features. \subsection{Classifier details} We use three different ASV systems: GMM-UBM, i-vector, and DNN-based x-vector. \textcolor{black}{First, we use the simple GMM-UBM classifier for conducting experiments with the NIST SRE $2001$ and NIST SRE $2002$ corpora. Then, we evaluate our proposed features on the VoxCeleb1 corpus using the i-vector and x-vector systems. In order to make the work self-contained, we briefly describe all the classifiers as follows.} \subsubsection{GMM-UBM system} \textcolor{black}{In the GMM-UBM system, the feature distributions of the target speakers and the cohort model are represented with a mixture of Gaussian densities~\citep{reynolds2000speaker}. The cohort model, also known as the universal background model (UBM) in this context, is trained with several hours of speech data}. The UBM is represented as $\mathbf{\lambda}_{\mathrm{ubm}}=\{ w_{i},\mathbf{\mu}_{i},\mathbf{\Sigma}_{i} \}_{i=1}^{C}$, where $C$ is the number of Gaussian mixture components, and $w_{i}$ is the weight, $\mathbf{\mu}_{i}$ is the mean, and $\mathbf{\Sigma}_{i}$ is the covariance matrix of the $i$-th component. The weights $w_{i}$ satisfy the constraint $\textstyle \sum_{i=1}^{C}w_{i}=1$. The~\textcolor{black}{enrollment} speech model ($\lambda_{\mathrm{enroll}}$) is derived from the UBM using \emph{maximum-a-posteriori} (MAP) adaptation with the target speaker's features. During testing, we calculate the ASV score of the test utterance, $\mathbf{X}_{\mathrm{test}}= \{\mathbf{x}_{1}, \mathbf{x}_{2}, \ldots, \mathbf{x}_{T} \} $, as the log-likelihood ratio (LLR), given by the following equation: \begin{equation}\label{eqpia10} \Lambda_\mathrm{GMM-UBM}({\mathbf{X}_{\mathrm{test}},{\lambda_{\mathrm{enroll}}}})=\log P({\mathbf{X}_{\mathrm{test}}|\lambda_{\mathrm{enroll}}})-\log P({\mathbf{X}_{\mathrm{test}}|\lambda_{\mathrm{ubm}}}). \end{equation} Finally, if the ASV score is greater than or equal to a decision threshold, $\theta$, the test speech is considered as spoken by the correct speaker, otherwise by an impostor. In our experiments, we use the development section of the NIST SRE $2001$ corpus, which consists of six hours of speech data, to train a gender-independent UBM with $512$ mixture components. We use $10$ iterations of the \emph{expectation-maximization} (EM) algorithm to estimate the UBM parameters. The target speaker models are created by adapting only the mean vectors of the UBM with relevance factor $14$.
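As a rough illustration of the adaptation and scoring steps, the sketch below uses scikit-learn's \texttt{GaussianMixture} for the UBM. It is not our exact implementation: the function names are ours, and the LLR is computed as an average per-frame ratio, which differs from Eq.~(\ref{eqpia10}) only by the constant factor $1/T$.
\begin{verbatim}
import numpy as np
from copy import deepcopy
from sklearn.mixture import GaussianMixture

# UBM training sketch, e.g.:
# ubm = GaussianMixture(n_components=512, covariance_type='diag',
#                       max_iter=10).fit(dev_features)

def map_adapt_means(ubm, X_enroll, relevance=14.0):
    """MAP adaptation of the UBM mean vectors only."""
    gamma = ubm.predict_proba(X_enroll)          # responsibilities (T, C)
    n = gamma.sum(axis=0)                        # zero-order statistics
    f = gamma.T @ X_enroll                       # first-order statistics
    alpha = (n / (n + relevance))[:, None]       # adaptation coefficients
    model = deepcopy(ubm)
    model.means_ = alpha * (f / np.maximum(n, 1e-10)[:, None]) \
        + (1.0 - alpha) * ubm.means_
    return model

def llr_score(speaker_model, ubm, X_test):
    """Average per-frame log-likelihood ratio, cf. the LLR above."""
    return speaker_model.score(X_test) - ubm.score(X_test)
\end{verbatim}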
\subsubsection{i-vector system} In the i-vector method, the concatenated means of the adapted GMM, known as the GMM-\emph{supervector}, are projected onto a low-dimensional space called the total variability (TV) space~\citep{dehak2010front} as, \begin{equation}\label{eqpia11} \mathbf{M}=\mathbf{m}+\mathbf{T}\mathbf{w}, \end{equation} \textcolor{black}{where $\mathbf{T}$, $\mathbf{m}$, and $\mathbf{M}$ are the low-rank total variability matrix, the speaker- and channel-independent supervector (taken from the UBM supervector), and the GMM supervector representation of the speech utterance, respectively}. Here, $\mathbf{w}$ is called the i-vector. In order to compute the i-vector representation of a speech utterance, $\mathbf{X}_{\mathrm{utt}}$, we estimate the posterior mean of the i-vector given the centered first-order Baum--Welch statistics as, \begin{equation}\label{eqpia12} \mathbf{w}_{\mathrm{utt}}=(\mathbf{I}+ \mathbf{T}^{\top} \mathbf{\Sigma}^{-1}\mathbf{N}\mathbf{T})^{-1}\mathbf{T}^{\top}\mathbf{\Sigma}^{-1}\mathbf{F}, \end{equation} \textcolor{black}{where $\mathbf{N}$ is a matrix with the zero-order Baum--Welch statistics on its diagonal; $\mathbf{F}$ is a vector whose elements are the centered first-order Baum--Welch statistics;} and $\mathbf{\Sigma}$ is the residual variability matrix, commonly constructed from the UBM covariances. The extracted i-vectors contain channel information. \textcolor{black}{In order to compensate for the channel effect, \emph{probabilistic linear discriminant analysis} (PLDA) is used to compute the similarity between the enrollment and test i-vectors~\citep{rajan2014single}.} We use Gaussian PLDA (GPLDA) in our experiments, which models the within-class covariance by a full-rank matrix. The ASV score using PLDA is computed as the likelihood ratio given as, \begin{equation}\label{eqpia13} \Lambda_\mathrm{PLDA}(\mathbf{w}_{\mathrm{enroll}},{\mathbf{w}_{\mathrm{test}}})=\log\frac{p(\mathbf{w}_{\mathrm{enroll}},\mathbf{w}_{\mathrm{test}}|H_s)}{p(\mathbf{w}_{\mathrm{enroll}}|H_d)p(\mathbf{w}_{\mathrm{test}}|H_d)}, \end{equation} where $\mathbf{w}_{\mathrm{enroll}}$ and $\mathbf{w}_{\mathrm{test}}$ are the i-vectors of the~\textcolor{black}{enrollment} and test sentences, respectively. Here, $H_s$ and $H_d$ represent the hypotheses that the two i-vectors are from the same speaker ($H_s$) or from different speakers ($H_d$). In our experiments with the i-vector system, we randomly select $20,000$ and $50,000$ speech files from the VoxCeleb1 dev set for training the UBM and the T-matrix, respectively. The PLDA is trained with the entire dev set consisting of $148,642$ files from $1211$ speakers. We also apply linear discriminant analysis (LDA) to improve the speaker discriminativeness of the i-vectors, using the same data as in PLDA training. We fix the number of mixture components to $512$ and the i-vector dimension to $400$. The i-vectors are projected to $250$ dimensions using LDA. We perform \emph{whitening} and \emph{length normalization} on the i-vectors before training the GPLDA with a $200$-dimensional speaker subspace. \subsubsection{x-vector system} ~\textcolor{black}{The x-vector system uses a deep neural network to learn the speech representation in a supervised manner, unlike the unsupervised linear method used in the i-vector approach~\citep{8461375}.} \textcolor{black}{The neural network consists of \emph{time-delay neural network} (TDNN) layers along with statistics pooling, followed by fully connected layers}.
This architecture captures information from a large temporal context of the frame-level speech feature sequences~\citep{tdnn21701}. The TDNN is a \emph{convolutional neural network} (CNN) that shares weights along the temporal dimension; it can be regarded as a 1D convolution (Conv1D) or temporal convolution~\citep{lecun1998gradient}. The x-vector system is trained for the speaker classification task at the segment level. Finally, the x-vectors are computed from the output of the first fully connected layer. In our x-vector~\textcolor{black}{system} implementation, we use five TDNN layers and three fully connected layers as used in~\citep{8461375}. The details of the neural network configuration are shown in Table~\ref{xvector_description}. \begin{table*}[!t] \renewcommand{\arraystretch}{1.2} \caption{Description of the layers in the x-vector architecture.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|} \hline \textbf{Layer} & \textbf{Details} \\ \hline TDNN-1 & Conv1D (\#filter = 256, kernel size = 5, dilation rate =1)\\ \hline TDNN-2 & Conv1D (\#filter = 256, kernel size = 3, dilation rate =2)\\ \hline TDNN-3 & Conv1D (\#filter = 256, kernel size = 3, dilation rate =3)\\ \hline TDNN-4 & Conv1D (\#filter = 256, kernel size = 1, dilation rate =1)\\ \hline TDNN-5 & Conv1D (\#filter = 1024, kernel size = 1, dilation rate =1)\\ \hline Statistics pooling & Computation of mean and standard deviation\\ \hline FC1 & Fully connected layer (256 nodes) \\ \hline FC2 & Fully connected layer (256 nodes) \\ \hline Softmax & Softmax layer with 1211 outputs\\ \hline \end{tabular} \end{footnotesize} \label{xvector_description} \end{table*} We implemented the x-vector system with the Python library Keras~\citep{chollet2015keras} using TensorFlow~\citep{tensorflow2015-whitepaper} as the backend. We use the \emph{rectified linear unit} (ReLU)~\citep{nair2010rectified} and \emph{batch normalization}~\citep{ioffe2015batch} for all five TDNN layers and the two fully connected layers. We apply dropout with probability $0.05$ only on the two fully connected layers. The parameters of the neural network are initialized with the Xavier normal method~\citep{glorot2010understanding}. The neural network is trained using the Adam optimizer~\citep{kingma:adam} with learning rate $0.001$, $\beta_1=0.9$, $\beta_2=0.999$, and \textcolor{black}{without weight decay}. We train the neural network using speech segments of $1$~second. We use $20$-dimensional MFCCs computed with $20$ filters. After dropping non-speech frames with SAD, the MFCCs are processed with utterance-dependent cepstral mean~\textcolor{black}{normalization} (CMN). The x-vector systems are trained with a batch size of $100$. We use the minimum validation loss as the stopping criterion. We use the entire VoxCeleb1 dev set consisting of $1211$ speakers (the same data as for i-vector extractor training). We use data augmentation as in the standard x-vector~\textcolor{black}{system}~\citep{8461375}. We extract $256$-dimensional embeddings from the fully connected layers (after batch~\textcolor{black}{normalization} but before applying ReLU).
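A compact functional-API sketch of the network in Table~\ref{xvector_description} is given below. It is a simplified reconstruction rather than our exact training code: the \texttt{padding='same'} choice, the numerically stabilized standard deviation in the statistics pooling, and the helper name are our own assumptions.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def build_xvector(n_feats=20, n_speakers=1211):
    inp = layers.Input(shape=(None, n_feats))      # (frames, MFCC dim)
    x = inp
    # TDNN-1 ... TDNN-5: (#filters, kernel size, dilation rate)
    for filt, ks, dil in [(256, 5, 1), (256, 3, 2), (256, 3, 3),
                          (256, 1, 1), (1024, 1, 1)]:
        x = layers.Conv1D(filt, ks, dilation_rate=dil, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    # statistics pooling: mean and standard deviation over time
    mean = layers.GlobalAveragePooling1D()(x)
    mean_sq = layers.GlobalAveragePooling1D()(layers.Lambda(tf.square)(x))
    std = layers.Lambda(lambda t: tf.sqrt(
        tf.maximum(t[0] - tf.square(t[1]), 1e-8)))([mean_sq, mean])
    x = layers.Concatenate()([mean, std])
    for _ in range(2):                             # FC1 and FC2
        x = layers.Dense(256)(x)
        x = layers.BatchNormalization()(x)         # embeddings read here
        x = layers.ReLU()(x)
        x = layers.Dropout(0.05)(x)
    out = layers.Dense(n_speakers, activation='softmax')(x)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss='sparse_categorical_crossentropy')
    return model
\end{verbatim}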
\begin{table*}[h] \renewcommand{\arraystretch}{1.2} \caption{Parameters of the cost function for the NIST SRE and VoxCeleb1 corpora.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|c|} \hline Corpus & $C_{miss}$ &$C_{fa}$ & $P_{tar}$ \\ \hline NIST SRE $2001$ and $2002$ & $10$ & $1$ & $0.01$ \\ \hline VoxCeleb1 & $1$ & $1$ & $0.01$ \\ \hline \end{tabular} \end{footnotesize} \label{costvalues} \end{table*} \subsection{Performance evaluation} \textcolor{black}{We evaluate ASV system performance with commonly used evaluation metrics: the \emph{equal error rate} (EER) and the \emph{minimum detection cost function} (minDCF), both computed from the detection error trade-off (DET) curve~\citep{kinnunen2010overview1, martin1997det}.} The EER is the point on the DET curve where the false alarm rate (FAR) and the false rejection rate (FRR) are equal. The minDCF, on the other hand, is computed by assigning costs to the error rates, formulating a weighted cost function, and minimizing it over the decision threshold. The cost function is defined as, \begin{equation}\label{eq_cost} C_{det}= C_{miss} \times P_{miss} (\theta) \times P_{tar} + C_{fa} \times P_{fa} (\theta) \times (1-P_{tar}), \end{equation} where $C_{miss}$ and $C_{fa}$ are the costs of a miss and a false acceptance, respectively, $P_{miss}(\theta)$ and $P_{fa}(\theta)$ are the probabilities of miss and false acceptance at decision threshold $\theta$, and $P_{tar}$ is the prior target probability. The values of $C_{miss}$, $C_{fa}$, and $P_{tar}$ are chosen according to the evaluation plan of the respective corpus~\citep{nagrani2017voxceleb1, NIST1, NIST2}, and they are shown in Table~\ref{costvalues}. \section{Results and discussion}\label{Section:Results and discussion} \subsection{Experiments on NIST SREs with GMM-UBM system} We evaluate the ASV performances on the NIST SREs using the GMM-UBM classifier. \textcolor{black}{First, we assess the performance with MFCC and baseline SFCC features for subsequent comparison with the proposed features on the NIST SRE $2001$ corpus.} For the SFCC methods, we compute the scale using the development section of the NIST SRE $2001$ corpus. Table~\ref{Scores nist $2001$} compares the baseline MFCC, the baseline SFCC, and the proposed features (for each of the pitch estimation methods mentioned earlier), and indicates that the proposed features perform better than MFCC and SFCC in terms of both evaluation metrics. \begin{table*}[h] \renewcommand{\arraystretch}{1.2} \caption{Comparison of ASV system performances in terms of EER (in \%) and minDCF$\times100$ for MFCC, SFCC, and the proposed features on the NIST SRE $2001$ corpus using the GMM-UBM backend.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Feature}} & \textbf{EER(in \%)} & \textbf{minDCF$\times100$} \\ \hline \multicolumn{2}{|c|}{MFCC} & $7.70$ & $3.39$ \\ \hline \multicolumn{2}{|c|}{SFCC (Baseline)} & $7.51$ & $3.28$ \\ \hline \multirow{5}{*}{SFCC (Scale with pitch)}& ~\citep{drugman2011joint} & $7.61$ & $3.27$ \\ \cline{2-4} &~\citep{sun2002pitch} & $7.45$ & $3.40$ \\ \cline{2-4} &~\citep{de2002yin} & $7.31$ & $\textbf{3.23}$ \\ \cline{2-4} &~\citep{wise1976maximum} & $7.22$ & $3.26$ \\ \cline{2-4} &~\citep{gonzalez2014pefac} & $\textbf{7.21}$ & $3.24$ \\ \hline \end{tabular} \end{footnotesize} \label{Scores nist $2001$} \end{table*}
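For reproducibility, both metrics in this and the following tables can be obtained from the raw true-trial and impostor-trial scores by sweeping the decision threshold, following Eq.~(\ref{eq_cost}). A minimal sketch (with our own function name, and the NIST cost parameters of Table~\ref{costvalues} as defaults) is:
\begin{verbatim}
import numpy as np

def eer_mindcf(true_scores, imp_scores, c_miss=10.0, c_fa=1.0, p_tar=0.01):
    """EER (in %) and minDCF from trial scores, cf. Eq. (cost)."""
    thr = np.sort(np.concatenate([true_scores, imp_scores]))
    p_miss = np.array([(true_scores < t).mean() for t in thr])
    p_fa = np.array([(imp_scores >= t).mean() for t in thr])
    i = np.argmin(np.abs(p_miss - p_fa))        # FAR = FRR operating point
    eer = 50.0 * (p_miss[i] + p_fa[i])          # average, scaled to percent
    dcf = c_miss * p_miss * p_tar + c_fa * p_fa * (1.0 - p_tar)
    return eer, dcf.min()
\end{verbatim}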
We also perform experiments with data-driven filter shapes created with the PCA-based method. In Table~\ref{data_driven_filter_PCA_only}, we show the ASV performance for different scales where the filters are computed with PCA on the development data. We observe that the ASV performance is relatively poor compared to the results with the fixed triangular filters. Interestingly, the proposed scale-based features are better than the mel-scale-based features. The pitch-based ASV system yields the lowest EER amongst the three systems. \begin{table*}[t!] \renewcommand{\arraystretch}{1.2} \caption{Comparison of ASV performances with PCA-based data-driven filters using different scales. Results are shown in terms of EER (in \%) and minDCF$\times100$ on the NIST SRE $2001$ corpus using the GMM-UBM backend.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|} \hline \textbf{Scale} & \textbf{EER(in \%)} & \textbf{minDCF$\times100$} \\ \hline Mel &$8.69$&$3.86$ \\ \hline Speech-based & $8.39$&$\textbf{3.61}$ \\ \hline Speech-based with pitch &$\textbf{8.38}$&$3.66$ \\ \hline \end{tabular} \label{data_driven_filter_PCA_only} \end{footnotesize} \end{table*} We further apply a tapering window (here, Hamming) on the subband spectrum before performing PCA. The results are reported in Table~\ref{data_driven_filter_PCA_symmetric_window}. We observe a noticeable improvement compared to the results of the untapered case in Table~\ref{data_driven_filter_PCA_only}. Interestingly, the performance obtained with the data-driven filter shapes is sometimes better than the performance with triangular filters. For instance, in the case of MFCCs, the minDCFs ($\times$ 100) of the triangular and data-driven filters are $3.39$ (Table~\ref{Scores nist $2001$}) and $3.35$ (Table~\ref{data_driven_filter_PCA_symmetric_window}), respectively. Similarly, the EER for the pitch-based system reduces from $7.21\%$ to $7.11\%$ when the windowed, PCA-based data-driven filters are used instead of triangular filters. However, we do not observe an improvement in EER after~\textcolor{black}{normalizing} the filter response magnitudes, though we observe a reduction in the cost function value. \begin{table*}[t!] \renewcommand{\arraystretch}{1.2} \caption{Same as Table~\ref{data_driven_filter_PCA_only} but with a tapering window applied on the subbands before applying PCA. We also report the performance with magnitude-\textcolor{black}{normalized} filters in the last row.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|} \hline \textbf{Scale} & \textbf{EER(in \%)} & \textbf{minDCF$\times100$} \\ \hline Mel &$7.70$&$3.35$ \\ \hline Speech-based &$7.25$&$3.27$ \\ \hline Speech-based with pitch &$\textbf{7.11}$&$3.23$\\ \hline Speech-based with pitch (Normalized) &$7.41$&$\textbf{3.22}$ \\ \hline \end{tabular} \label{data_driven_filter_PCA_symmetric_window} \end{footnotesize} \end{table*} \begin{table*}[t!] \renewcommand{\arraystretch}{1.2} \caption{Comparison of ASV performances with fixed (i.e., mel scale with triangular filters) and various data-driven features on the NIST SRE $2002$ corpus. Performances are shown in terms of EER (in \%) and minDCF $\times100$ using the GMM-UBM backend.
\textcolor{black}{Here, the scale is computed using the development set of the NIST SRE 2001 corpus}.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|c|} \hline \textbf{Filter Shape} & \textbf{Scale} & \textbf{EER (in \%)} & \textbf{minDCF $\times100$} \\ \hline \multirow{3}{*}{Triangular} & Mel &$8.76$&$4.07$ \\ \cline{2-4} & Speech-based &$9.15$&$4.45$ \\ \cline{2-4} & Speech-based with pitch &$9.12$ &$4.28$ \\ \hline \multirow{3}{*}{PCA} & Mel &$9.65$&$4.33$ \\ \cline{2-4} & Speech-based &$9.92$&$4.55$ \\ \cline{2-4} & Speech-based with pitch &$9.96$&$4.57$ \\ \hline \multirow{3}{*}{Window+PCA} & Mel &$\textbf{8.42}$&$4.04$ \\ \cline{2-4} & Speech-based &$9.15$&$4.34$ \\ \cline{2-4} & Speech-based with pitch &$8.75$&$4.25$ \\ \hline \multirow{3}{*}{Window+PCA+Norm.} & Mel &$8.48$&$\textbf{4.03}$ \\ \cline{2-4} & Speech-based&$9.29$&$4.29$ \\ \cline{2-4} & Speech-based with pitch &$8.91$&$4.33$ \\ \hline \end{tabular} \label{results NIST $2002$ database} \end{footnotesize} \end{table*} \begin{figure}[t!] \begin{center} \includegraphics[width=1\textwidth,trim={9cm 0 8cm 0cm},clip]{det_2001_today_new.eps} \end{center} \vspace{-.5cm} \caption{The DET curves of the ASV systems using different feature extraction methods on the NIST SRE $2001$ corpus with GMM-UBM as the backend.}\label{fig:6} \centering \vspace{-0.25cm} \end{figure} \begin{table*}[t!] \renewcommand{\arraystretch}{1.2} \caption{Comparison of ASV system performances in noisy conditions. Results are shown in terms of EER (in \%) and minDCF$\times100$ on the additive-noise-corrupted NIST SRE $2002$ corpus with GMM-UBM as the backend.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|} \hline \textbf{Methods} & \textbf{EER(in \%)} & \textbf{minDCF$\times100$} \\ \hline MFCC (Baseline) &$18.27$&$8.04$ \\ \hline Speech-based with triangular filter &$16.63$&$7.64$ \\ \hline Speech-based (pitch) with window \& PCA-based filter &$\textbf{16.02}$&$\textbf{7.56}$ \\ \hline \end{tabular} \label{noisy_2002} \end{footnotesize} \end{table*} We also conduct experiments on the NIST SRE $2002$ corpus to evaluate the~\textcolor{black}{generalization} ability of the proposed data-driven approach. In this case, the same development data from the subset of the NIST SRE $2001$ corpus is used for computing the parameters of the data-driven feature extractor. The results are summarized in Table~\ref{results NIST $2002$ database}. \textcolor{black}{We observe that, with triangular filters, the mel-scaled filterbank always obtains lower EER and minDCF than the data-driven scale-based methods. The reason is domain mismatch, as the scale is computed on speech files from a different corpus, i.e., NIST SRE 2001.} However, we notice that the warping scale based on the pitch-selected frames shows an improvement over the condition where the scale is computed on all the speech frames (\emph{i.e.}, baseline SFCCs). The DET curves of the ASV results for selected features are illustrated in Figs.~\ref{fig:6} and~\ref{fig:7}. \begin{figure}[t!] \begin{center} \includegraphics[width=1\textwidth,trim={10cm 0 8cm 0cm},clip]{det_2002_today_new.eps} \end{center} \vspace{-.5cm} \caption{Same as Fig.~\ref{fig:6} but for the NIST SRE $2002$ corpus.}\label{fig:7} \centering \vspace{-.5cm} \end{figure} \begin{figure}[t!]
\centering \includegraphics[width=1\textwidth,trim={10cm 0 8cm 0cm},clip]{det_2002_noise_today_new.eps} \caption{The DET curves of ASV systems based on different feature extraction methods on the noise-corrupted version of the NIST SRE $2002$ corpus using GMM-UBM as the backend. }\label{fig:8} \vspace{-0.5cm} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1\textwidth,trim={10cm 0 8cm 0cm},clip]{voxceleb1_final_plot_today_new.eps} \caption{The DET curves of ASV systems based on different data-driven feature extraction methods on the VoxCeleb1 corpus with i-vector and PLDA-based scoring as the backend.} \label{fig:9} \vspace{-0.25cm} \end{figure} \begin{table*}[t!] \renewcommand{\arraystretch}{1.2} \caption{Comparison of ASV performances on the VoxCeleb1 corpus with the i-vector system. Results are shown in terms of EER (in \%) and minDCF$\times100$ for features based on different scales. \textcolor{black}{The scale is computed on the development set of the VoxCeleb1 corpus.}} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|} \hline \textbf{Scale} & \textbf{EER(in \%)} & \textbf{minDCF$\times100$} \\ \hline Mel &$9.95$&$0.747$ \\ \hline Speech-based &$9.71$&$0.786$ \\ \hline Speech-based with pitch &$9.52$&$\textbf{0.721}$ \\ \hline Speech-based (pitch) with window \& PCA-based filter &$\textbf{8.98}$&$0.744$ \\ \hline \end{tabular} \label{voxceleb1_results} \end{footnotesize} \end{table*} \begin{figure}[t!] \centering \includegraphics[width=.9\textwidth,trim={.75cm 0 .5cm 0cm},clip]{final_now.eps} \caption{Error bar plot showing ASV performance on VoxCeleb1 where 0.1\% of the total speech data are randomly selected for computing the filterbank parameters. The results are shown on the VoxCeleb1 corpus with the i-vector backend. The dotted horizontal line indicates the performance with the baseline MFCCs, and the continuous horizontal line denotes the performance of the proposed method where 100\% of the speech data are used for computing the filterbank parameters.} \label{fig:10} \end{figure} \begin{table*}[t!] \renewcommand{\arraystretch}{1.2} \caption{Comparison of ASV performances when in-domain and out-of-domain data (Librispeech and TIMIT) are used for computing the scale of the data-driven filter. Results are shown in terms of EER (in \%) and minDCF$\times100$ on the VoxCeleb1 test set using the i-vector and PLDA backend.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|} \hline \textbf{Corpus for scale computation} & \textbf{EER(in \%)} & \textbf{minDCF$\times100$} \\ \hline VoxCeleb1 (in-domain) & $\textbf{9.52}$ & $\textbf{0.721}$ \\ \hline Librispeech & $10.40$ & $0.812$ \\ \hline TIMIT & $9.99$ & $0.730$ \\ \hline \end{tabular} \end{footnotesize} \label{domain_scale_voxceleb1_results} \end{table*} Even though we do not observe an improvement with the data-driven scales, the performance of the mel-scale-based features is improved with the windowed, PCA-based data-driven filters. We conclude that the scale is more sensitive to corpus selection, whereas filter responses computed from one dataset generalize well to other datasets. Finally, the results in noisy conditions are shown in Table~\ref{noisy_2002} and the corresponding DET curves in Fig.~\ref{fig:8}. Here, we find that the proposed data-driven features are more robust than the baseline MFCCs. The best performance in terms of EER is obtained with the data-driven features where the scale is computed from the pitch-selected frames and the filter shape is computed with the windowed spectrum and PCA.
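The noisy-condition protocol of Section~\ref{Section:Experimental setup} (additive noise at SNRs between $0$ and $40$~dB) can be reproduced with a simple energy-based mixing rule. The sketch below, with a hypothetical function name, assumes time-aligned single-channel signals:
\begin{verbatim}
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add (looped) noise to speech at the target SNR in dB."""
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

# e.g. snr_db drawn uniformly from [0, 40] and the noise file chosen at
# random among the five NOISEX-92 types (white, pink, babble, volvo,
# factory)
\end{verbatim}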
\subsection{Experiments on VoxCeleb1} \subsubsection{Performance evaluation with i-vector system} In our experiments with the i-vector system on VoxCeleb1, we first compute the scale on the entire development set consisting of $1211$ speakers and report the results for the different scales in Table~\ref{voxceleb1_results}. \textcolor{black}{We observe that the features using the frame-selection-based scale yield better performance in terms of both EER and minDCF.} We obtain relative reductions of more than $4.30\%$ in EER and $3.48\%$ in minDCF. Fig.~\ref{fig:9} shows the DET curves of the ASV systems on the VoxCeleb1 corpus.~\textcolor{black}{From these curves, we find that the proposed features perform better than the other features in the ASV task}. \begin{table*}[t!] \renewcommand{\arraystretch}{1.2} \caption{Comparison of ASV performances with x-vector representations. Results are shown in terms of EER (in \%) and minDCF$\times100$ on the VoxCeleb1 test set.} \centering \vspace{0.1cm} \begin{footnotesize} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Methods}} & \multicolumn{2}{|c|}{\textbf{Embeddings from FC1}} & \multicolumn{2}{|c|}{\textbf{Embeddings from FC2}} \\ \cline{2-5} & \textbf{EER (in \%)} & \textbf{minDCF$\times100$} & \textbf{EER (in \%)} & \textbf{minDCF$\times100$} \\ \hline MFCC & $5.13$ & $0.468$ & $5.19$ & $0.480$ \\ \hline Proposed SFCC & $5.03$ & $0.468$ & $4.96$ & $0.502$ \\ \hline Score Fusion & $\textbf{4.45}$ & $\textbf{0.421}$ & $\textbf{4.56}$ & $\textbf{0.446}$ \\ \hline \end{tabular} \end{footnotesize} \label{x_vector_voxceleb1_results} \end{table*} \begin{figure}[t!] \centering \includegraphics[width=1\textwidth,trim={.5cm 0 .5cm 0cm},clip]{x_vector_both_plot.eps} \vspace{-3cm} \caption{The DET curves of the ASV systems with MFCC, the proposed feature extraction method, and score-level fusion on the VoxCeleb1 corpus using the x-vector system with embeddings from (a) FC1 and (b) FC2. } \label{fig:11} \end{figure} In Table~\ref{voxceleb1_results}, the scales are computed with the entire development data, which is computationally expensive, especially for the PCA-based filter shape computation. In the next experiment, we examine the effect of the amount of data used for scale computation on ASV performance by choosing a small subset of speech utterances from the entire set of $148,642$ files. We conduct the ASV experiments $10$ times, each time randomly selecting $0.1\%$ of the speech data. The ASV performances for these randomly chosen small subsets are shown in Fig.~\ref{fig:10}. The figure also shows the performance with the baseline MFCCs and with the proposed method using the full speech data. We observe that filterbank parameters computed with $0.1\%$ of the data yield a lower EER than the baseline MFCCs. However, we do not observe improvements in minDCF. Interestingly, the ASV performance with $100\%$ of the data for scale computation only gives about a $1\%$ relative improvement (in terms of EER) over $0.1\%$ of the data. To compare the performance with out-of-domain data, we also conduct experiments where the data for scale computation is taken from corpora other than the in-domain VoxCeleb1 data. We take speech data from Librispeech~\citep{panayotov2015librispeech} and TIMIT~\citep{zue1990speech} for this purpose. We use the same VoxCeleb1 data for computing the parameters of the UBM, T-matrix, LDA, and PLDA. The results reported in Table~\ref{domain_scale_voxceleb1_results} indicate that out-of-domain data considerably degrades the ASV performance.
We conclude that the proposed method is applicable whenever even a limited amount of in-domain data is available. \subsubsection{Performance evaluation with x-vector system} For the experiments with the x-vector system, we choose the baseline MFCCs and the proposed data-driven feature extraction, in which the warping scale and the filter parameters are computed with development data from the VoxCeleb1 corpus. The results of the x-vector~\textcolor{black}{system} with PLDA scoring are~\textcolor{black}{summarized} in Table~\ref{x_vector_voxceleb1_results}. In the state-of-the-art x-vector system as well, the proposed features are better than the conventionally used MFCCs in terms of EER. We show the results when the embeddings are computed from the outputs of FC1 and FC2; the improvement is observed in both cases. We do not find an improvement in terms of minDCF; however, the proposed features are better than MFCCs at most of the operating points, as shown in the DET curves in Fig.~\ref{fig:11}. Finally, we perform experiments with a fused system where the scores of the MFCC and proposed SFCC systems are combined with equal weights~\citep{6494266}. The performance is substantially improved with fusion. In terms of EER, we obtain relative improvements of $13.26\%$ and $12.14\%$ over the baseline MFCC for x-vector embeddings computed from FC1 and FC2, respectively. This confirms the complementarity of the proposed data-driven filterbank with the mel filterbank. \section{Conclusion}\label{Section:Conclusion} \textcolor{black}{The filterbanks in most acoustic feature extraction modules are either handcrafted with some auditory knowledge or learned from a large dataset with a specific objective. In this work, we proposed to compute the MFCC filterbank in a data-driven way. We improved the data-driven frequency warping scale by considering voiced frames having pitch information. We demonstrated the superiority of the newly designed warping scale for ASV tasks. We also computed the frequency responses of the filters in a data-driven manner from the subband power spectrum using PCA. We showed that both these schemes reduce the speaker recognition error rates. We observed improvements in both matched and mismatched conditions. The proposed feature extraction method is compatible with state-of-the-art x-vector systems and shows improvement over MFCC-based ASV systems. The proposed method is computationally less expensive than DNN-based data-driven methods. Also, it computes the filterbank parameters (\emph{i.e.}, filter edge frequencies \& frequency responses) from a small amount of speech data without additional metadata, as opposed to supervised methods, which require a large amount of labeled data. We further improved the ASV performance by simple score fusion with an MFCC-based system.} \textcolor{black}{Even though the acoustic features computed with the proposed data-driven filters show improvement over MFCCs, the performance of the proposed features substantially degrades if in-domain audio data are not available. However, domain mismatch remains an open challenge for other data-driven feature extractors, too. In the future, we plan to explore data-augmentation methods for addressing the domain-mismatch issue. One could compute the filterbank from augmented speech data and assess its robustness. The objective of this work was not to optimize the number of filters and the amount of overlap between adjacent filters. The present work can also be extended in that direction.
In this work, we developed the filterbank in a task-independent manner, but its application was limited to ASV in the current study. We also plan to adopt the proposed data-driven filterbank for other potential speech processing tasks, such as language and emotion recognition.} \section*{Acknowledgments} The authors would like to express their sincere thanks to the anonymous reviewers and the editors for their valuable comments and suggestions, which greatly improved the quality and content of the work. The work of Md Sahidullah is supported by Region Grand Est, France. Experiments presented in this paper were partially carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see \url{https://www.grid5000.fr}).
\section{Introduction} Correlation measurements are a standard tool to investigate quantum systems. To determine whether a quantum system is separable or entangled, correlation measurements have been actively performed alongside various theoretical investigations (e.g.,\cite{bell,clauser,aspect,leggett,vedral,guhne03,altepeter,sakai}). For example, a recent work by Fujikawa et al.\cite{fujikawa2} analyzed experimental results with the view of associating separability with zero--correlations, one of many active endeavors for obtaining separability conditions for density matrices (e.g.,\cite{werner,peres,horodecki}). Here, we try to be as simple as possible and consider the approach of utilizing correlation functions that appear in traditional theories and measurements to detect entanglement. In particular, we base our analysis on achieving zero--correlations among observables measured on a composite quantum system. In other words, we assume that we need only measure the existence of correlations (either zero or non-zero), which we term ``Binary Correlation Measurements''. We do not require information on the exact values of non-zero correlation functions. It should be noted that this constraint simplifies the analysis of measurements. This approach is different from other, more elaborate mathematical methods for entanglement detection or separability conditions, where more detailed information on correlation functions and/or the density matrix is necessary (see \cite{horodecki2,guhne} for overviews). The main question we propose to ask with the above setup is the following: what is the minimum number of binary correlation measurements we need to perform to detect entanglement with certainty? To gain insight toward this question, we start with the simplest example of the two-particle (bipartite) $2\times 2$ system. First, we examine a pure quantum state. The known minimal number of general correlation measurements needed to distinguish an entangled state from a separable one with certainty is three\cite{guhne03}. We show that, even with the limited information obtained from the binary correlation measurements, three measurements involving four observables are still sufficient. For the mixed case, however, the answer to the above question remains open. We illustrate the difficulty by considering the Werner density matrix, which has a parameter whose value decides whether it is separable or entangled. We show that the correlation structure is the same regardless of whether the state is entangled or separable. In particular, for this matrix, zero correlations appear in the entangled case in exactly the same way as in the separable case. This implies that non-zero correlations do not certify the existence of entanglement, nor does zero correlation entail separability. We end the paper with discussions on extensions of this line of approach to higher dimensions, as well as general questions on the adequacy of the concepts of correlations and entanglement for capturing quantum nature, through a brief mention of the quantum pigeonhole effect. \section{Main Question} We consider a typical situation of correlation measurements on pure or mixed quantum systems. Let us start with the simplest case of a bipartite system consisting of two quantum particles $A$ and $B$. We can measure observable operators $\mathcal{X}$ and $\mathcal{Y}$ defined as $\mathcal{X} = {\mathcal{Q}_A}\otimes{\bf{1}}_B$ and $\mathcal{Y} = {\bf{1}}_A\otimes\mathcal{R}_B$.
They can be measured independently, i.e., by measuring only particle $A$ or only particle $B$. We also want to measure $\mathcal{X}\mathcal{Y}$, with both measurements necessarily taking place at the same time. By repeating these measurements to gather statistics, we obtain three expectation values, $\bra{\psi}\mathcal{X}\ket{\psi}, \bra{\psi}\mathcal{Y}\ket{\psi}$, and $\bra{\psi}\mathcal{X}\mathcal{Y}\ket{\psi}$, from which we can compute the correlation (covariance) as \begin{equation} c(\mathcal{X}, \mathcal{Y}) = \bra{\psi}\mathcal{X}\mathcal{Y}\ket{\psi} - \bra{\psi}\mathcal{X}\ket{\psi}\bra{\psi}\mathcal{Y}\ket{\psi}. \end{equation} As is well known, if the system is described by a density matrix $\rho_{AB}$, we can generalize the above using its trace and its partial traces $\rho_A$ and $\rho_B$ to describe the quantum state and compute the correlation function in the following manner. \begin{equation} c(\mathcal{X}, \mathcal{Y}) = Tr(\rho_{AB}\mathcal{X}\mathcal{Y}) - Tr(\rho_{A}\mathcal{X})Tr(\rho_{B}\mathcal{Y}). \end{equation} We also mention that a density matrix describing a pure quantum system is separable if it can be written as \begin{equation} \rho_{AB}=\rho_A\otimes\rho_B, \end{equation} otherwise it is entangled. For a density matrix describing a mixed state, separability is defined as \begin{equation} \rho_{AB}=\sum_i p_i\rho^i_{A}\otimes\rho^i_{B} \label{mseparable} \end{equation} with non-negative real $\{ p_i \}$ such that $\sum_i p_i = 1$. If such a decomposition is not possible, the state is entangled. \vspace{1em} With the setup above, our main question can be phrased as \vspace{1em} \noindent {\bf{Main Question}} What is the minimum number of binary correlation measurements we need to perform on a given quantum state to determine with certainty whether it is entangled or separable? \vspace{1em} We assume that we can design experiments to measure the observables $\mathcal{X}$ and $\mathcal{Y}$ to obtain the above-mentioned expectation values. The question is to find the minimum number of correlation function measurements needed so that we can determine with certainty whether the given system is in a separable or an entangled state. We also assume that the correlation measurement results are binary: either zero or non-zero. That is to say, we do not need to know the value of any non-zero correlation function. We will see that this leads to a relatively simple measurement analysis. It should be noted that, with this restriction, our approach is different from the much-investigated separability criteria and entanglement detection methods, where more information from the quantum system is required\cite{horodecki2,guhne}. In the following, we illustrate how much insight into quantum systems we can gain even with these restricted correlation measurements. \section{Analysis with the $2\times 2$ system} To gain insight, we proceed to consider the simplest bipartite system of two 2-state particles (a $2\times 2$ system), such as two spin-1/2 particles or two-qubit systems (e.g.,\cite{wootters,kummer,abouraddy,jchen}). Our analysis will employ the density matrix, whether the system is in a pure or a mixed state. It is known that any density matrix for $2\times2$ systems can be written using the Pauli matrices as follows.
\begin{equation} \rho_{AB}={1\over 4}({\bf{1}_A}\otimes {\bf{1}_B}+{\vec{a}}\cdot{\vec{\mathcal{\sigma}}} \otimes {\bf{1}_B} + {\bf{1}_A}\otimes {\vec{b}}\cdot{\vec{\mathcal{\sigma}}} +\sum_{ij} F_{ij}{\mathcal{\sigma}}_i\otimes {\mathcal{\sigma}}_j) \label{density} \end{equation} where ${\bf{1}}$ is the $2\times2$ identity matrix, ${\vec{a}},{\vec{b}}$ are vectors consisting of 3 real numbers ($\cdot$ is the inner product), $F_{ij}$ are the real elements of a $3\times3$ matrix ${\mathcal{F}}$, and ${\vec{\mathcal{\sigma}}} = ({\mathcal{\sigma}_x}, {\mathcal{\sigma}_y},{\mathcal{\sigma}_z})$ is the vector of Pauli matrices. \begin{equation} {\mathcal{\sigma}_x}= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \quad {\mathcal{\sigma}_y}= \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \quad {\mathcal{\sigma}_z}= \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.\nonumber \end{equation} Now, the two quantum observable operators ${\mathcal{Q}_A}, {\mathcal{R}_B}$ can also be expressed using the Pauli matrices, up to a scale factor, as \begin{equation} {\mathcal{Q}_A} = {1\over 2}({\bf{1}_A} + {\vec{x}}\cdot{\vec{\mathcal{\sigma}}}), \quad {\mathcal{R}_B} = {1\over 2}({\bf{1}_B} + {\vec{y}}\cdot{\vec{\mathcal{\sigma}}}) \label{obs} \end{equation} where ${\vec{x}},{\vec{y}}$ are three-dimensional real vectors. Then, for the operators $\mathcal{X} = {\mathcal{Q}_A}\otimes{\bf{1}}_B$ and $\mathcal{Y} = {\bf{1}}_A\otimes\mathcal{R}_B$, we can calculate the expectation values as follows; \begin{equation} \langle \mathcal{X}\mathcal{Y} \rangle = {\mathrm{Tr_{AB}}}[\rho_{AB}\mathcal{X}\mathcal{Y}]={1\over 4}(1+ {\vec{a}}\cdot{\vec{x}} + {\vec{b}}\cdot{\vec{y}} + {\vec{x}}\cdot{\mathcal{F}}\cdot{\vec{y}}),\nonumber \end{equation} and \begin{equation} \langle \mathcal{X} \rangle ={1\over 2}(1 + {\vec{a}}\cdot{\vec{x}}),\quad \langle \mathcal{Y} \rangle ={1\over 2}(1 + {\vec{b}}\cdot{\vec{y}}). \nonumber \end{equation} This leads to \begin{equation} \langle \mathcal{X}\mathcal{Y} \rangle - \langle \mathcal{X} \rangle\langle \mathcal{Y} \rangle = {1\over 4}({\vec{x}}\cdot{\mathcal{F}}\cdot{\vec{y}} - ({\vec{a}}\cdot{\vec{x}})({\vec{b}}\cdot{\vec{y}})) = {1\over 4}{\vec{x}}\cdot({\mathcal{F}} - {\vec{a}}\cdot{{\vec{b}^{\hspace{0.05cm} \mathsf{T}}}})\cdot{\vec{y}}, \end{equation} where ${\vec{a}}\cdot{{\vec{b}^{\hspace{0.05cm} \mathsf{T}}}}$ is the outer product of ${\vec{a}}$ and ${{\vec{b}}}$. By defining a real $3\times3$ matrix $\mathcal{C}$ as \begin{equation} \mathcal{C} = {\mathcal{F}} - {\vec{a}}\cdot{{\vec{b}^{\hspace{0.05cm} \mathsf{T}}}}, \label{corr} \end{equation} \begin{equation} c(\mathcal{X}, \mathcal{Y}) = {1\over 4}{\vec{x}}\cdot \mathcal{C} \cdot{\vec{y}}. \label{corrv} \end{equation} Thus, the nature of the correlation function depends on that of the matrix $\mathcal{C}$. In particular, we note the following\cite{cheng}, which generalizes our previous analogous results\cite{ohira1,ohira2} for any pure bipartite $2\times2$ state vectors. \vspace{1em} \noindent {\bf{Theorem 1}}: For any bipartite $2\times2$ system (regardless of separable or entangled, or pure or mixed), there always exists a pair of observables $\mathcal{X}, \mathcal{Y}$ that gives a zero--correlation, $c(\mathcal{X}, \mathcal{Y}) = 0$. \vspace{1em} \noindent {\bf{Proof}}: From the setup above, zero--correlation is achieved when ${\vec{x}}\cdot \mathcal{C} \cdot{\vec{y}} = 0$. This is simply an orthogonality relation between two real vectors, ${\vec{x}}$ and $\mathcal{C} \cdot{\vec{y}}$, in three-dimensional space.
Such a pair of ${\vec{x}}, {\vec{y}}$ can always be found for any real $3\times3$ matrix $\mathcal{C}$. (Q.E.D) \vspace{1em} This proof shows that a zero--correlation observable pair can not only be found for any $\rho_{AB}$, but also that there are uncountably many such pairs. It also turns out that, for pure states, the special nature of the matrix $\mathcal{C}$ allows us to answer our main question of distinguishing entangled states. \subsection{Pure States} For the case of pure states, the density matrix has special properties, which in turn constrain the nature of the matrix $\mathcal{C}$. \vspace{1em} \noindent {\bf{Theorem 2}}: For a pure state, the rank of the $3\times3$ matrix $\mathcal{C}$ is given as follows. (a) For a separable state, $rank(\mathcal{C}) = 0$, that is, $\mathcal{C}$ is the zero matrix. (b) For an entangled state, $rank(\mathcal{C}) = 3$, that is, $\mathcal{C}$ is a regular matrix. \vspace{1em} \noindent {\bf{Proof}}: First, we list some known properties\cite{jchen} that hold when the density matrix in (\ref{density}) describes a pure state. (i) ${\mathcal{F}}\cdot \vec{b} = \vec{a}, \quad {\mathcal{F}}^{\hspace{0.05cm} \mathsf{T}}\cdot \vec{a} = \vec{b}$. (ii) $0 \leq \begin{Vmatrix} \vec{a} \end{Vmatrix} = \begin{Vmatrix} \vec{b} \end{Vmatrix} \leq 1$ ($\vec{a}$ and $\vec{b}$ have the same length). (iii) $Det({\mathcal{F}}) = \begin{Vmatrix} \vec{a} \end{Vmatrix}^2 - 1 = \begin{Vmatrix} \vec{b} \end{Vmatrix}^2 -1$. (iv) The density matrix describes a pure and separable state if and only if $\begin{Vmatrix} \vec{a} \end{Vmatrix} = \begin{Vmatrix}\vec{b} \end{Vmatrix} = 1$. \vspace{1em} \noindent (a) We note that any density matrix describing a pure separable state can be written as \begin{equation} \rho^{sep}_{AB} = {1\over 2}({\bf{1}_A} + {\vec{a}}\cdot{\vec{\mathcal{\sigma}}})\otimes {1\over 2}({\bf{1}_B} + {\vec{b}}\cdot{\vec{\mathcal{\sigma}}}) \label{sep} \end{equation} By expanding this, we immediately see with (\ref{density}) that \begin{equation} {\mathcal{F}} = {\vec{a}}\cdot{{\vec{b}^{\hspace{0.05cm} \mathsf{T}}}}. \end{equation} Hence, by the definition of the correlation matrix (\ref{corr}), \begin{equation} \mathcal{C} = {\mathcal{F}} - {\vec{a}}\cdot{{\vec{b}^{\hspace{0.05cm} \mathsf{T}}}} = \mathcal{O}. \end{equation} \vspace{1em} \noindent (b) We consider two possible cases. \vspace{1em} \noindent For ${\vec{a}} = {\vec{b}} = {\vec{0}}$: By the definition (\ref{corr}) of the correlation matrix, $\mathcal{C} = \mathcal{F}$, and therefore, by property (iii), we have \begin{equation} Det({\mathcal{C}}) = Det({\mathcal{F}}) = - 1 \end{equation} This shows that the correlation matrix is regular and thus has $rank(\mathcal{C}) = 3$. \vspace{1em} \noindent For ${\vec{a}} \neq {\vec{0}}$: By property (ii), we also have ${\vec{b}} \neq {\vec{0}}$. Without loss of generality, we can choose a coordinate system such that \begin{equation} {\vec{b}} = \begin{Vmatrix}\vec{b} \end{Vmatrix}\begin{bmatrix} 1 \\ 0\\ 0 \end{bmatrix} \end{equation} Then, by property (i), \begin{equation} \vec{a} = {\mathcal{F}}\cdot \vec{b} = \begin{Vmatrix}\vec{b} \end{Vmatrix} \begin{bmatrix} F_{11} \\ F_{21}\\ F_{31} \end{bmatrix}.
\end{equation} We can now derive the following by (\ref{corr}) and the properties of the determinant, \begin{equation} Det(\mathcal{C}) = Det( \begin{bmatrix} (1-\begin{Vmatrix}\vec{b} \end{Vmatrix}^2)F_{11},& F_{12},& F_{13} \\ (1-\begin{Vmatrix}\vec{b} \end{Vmatrix}^2)F_{21},& F_{22},& F_{23} \\ (1-\begin{Vmatrix}\vec{b} \end{Vmatrix}^2)F_{31},& F_{32},& F_{33} \end{bmatrix}) = (1-\begin{Vmatrix}\vec{b} \end{Vmatrix}^2)Det({\mathcal{F}}) = - (\begin{Vmatrix} \vec{b} \end{Vmatrix}^2 -1)^2. \end{equation} The right-hand side of this equation is non-zero by property (iv) for the entangled case, which, in turn, shows that the correlation matrix is regular and thus has $rank(\mathcal{C}) = 3$. (This expression also validates and includes the case of ${\vec{a}} = {\vec{b}} = {\vec{0}}$.) (Q.E.D) \vspace{1em} This theorem provides an answer to our main question of finding the minimum number of binary correlation measurements for entanglement detection. \vspace{1em} \noindent {\bf{Theorem 3}}: For pure bipartite $2\times2$ quantum systems, the minimum number of binary correlation measurements for entanglement detection with certainty is three. \vspace{1em} \noindent {\bf{Proof}}: From Eq. (\ref{corrv}) and Theorem 2(a), the correlation function $c(\mathcal{X}, \mathcal{Y})$ is $0$ in the separable case for any pair of $\mathcal{X}, \mathcal{Y}$. Now, for the entangled state, let us choose $\mathcal{Y}$ to have any value other than the identity (no measurement). This is equivalent to fixing a real three-dimensional vector $\vec{y}(\neq 0)$. Then, the correlation function is $0$ if and only if the vector $\vec{x}$ defining $\mathcal{X}$ lies in the plane perpendicular to ${\vec{y'}}\equiv \mathcal{C} \cdot{\vec{y}}$, which is a non-zero vector by Theorem 2(b) for the entangled case. Thus, if we prepare three observables $\mathcal{X}_a, \mathcal{X}_b, \mathcal{X}_c$ defined by linearly independent\footnote{They do not have to be orthogonal.} $\vec{x_a}, \vec{x_b}, \vec{x_c}$, at least one of them gives a non-zero correlation with respect to the fixed $\mathcal{Y}$. (The perpendicular plane can only accommodate up to two linearly independent real three-dimensional vectors.) This identifies the entangled states, distinguishing them from separable ones. (Q.E.D) \vspace{1em} \noindent {\bf{Examples}}: To illustrate the above theorem, we discuss here two examples of pure entangled states. \vspace{1em} \noindent (i) The first example is the singlet state, which is also one of the Bell states. \begin{equation} \ket{\Psi^{-}}= {1\over\sqrt{2}}(\ket{a_1}\otimes\ket{b_2} - \ket{a_2}\otimes\ket{b_1}) \equiv {1\over\sqrt{2}}(\begin{bmatrix} 1 \\ 0 \end{bmatrix}_A \otimes \begin{bmatrix} 0 \\ 1 \end{bmatrix}_B - \begin{bmatrix} 0 \\ 1 \end{bmatrix}_A \otimes \begin{bmatrix} 1 \\ 0 \end{bmatrix}_B).
\label{bell2} \end{equation} The corresponding density matrix is given as follows in the notation above: \begin{equation} \rho_{AB} ={1\over 4}({\bf{1}_A}\otimes {\bf{1}_B} + \sum_{i} (-1){\mathcal{\sigma}}_i\otimes {\mathcal{\sigma}}_i) = {1\over 2}\begin{bmatrix} 0,& 0,& 0,& 0 \\ 0,& 1,& -1,& 0 \\ 0,& -1,& 1,& 0 \\ 0,& 0,& 0,& 0 \end{bmatrix} \label{density-singlet} \end{equation} Thus, in this case, the correlation matrix is particularly simple: \begin{equation} \mathcal{C} = \mathcal{F} = \begin{bmatrix} -1,& 0,& 0 \\ 0,& -1, & 0 \\ 0,& 0, & -1 \end{bmatrix} \label{cmatrix-singlet} \end{equation} Thus, the correlation function (\ref{corrv}) is $-{1\over 4}\vec{x}\cdot\vec{y}$, which is zero for any pair of observables $\mathcal{X}, \mathcal{Y}$ defined by a pair of orthogonal vectors $\vec{x}, \vec{y}$. Therefore, for a given $\mathcal{Y}$, it suffices to prepare and measure three observable operators $\mathcal{X}_a, \mathcal{X}_b, \mathcal{X}_c$ defined by three linearly independent vectors $\{\vec{x_a}, \vec{x_b}, \vec{x_c}\}$: at least one of them will give a non-zero value, showing that this is an entangled state. \vspace{1em} \noindent (ii) The second example is from \cite{jchen}. The entangled state vector is given as \begin{eqnarray} \ket{\Phi}&=&{1\over\sqrt{3}}(\ket{a_1}\otimes\ket{b_1} +\ket{a_1}\otimes\ket{b_2}+ \ket{a_2}\otimes\ket{b_2})\nonumber\\ &\equiv& {1\over\sqrt{3}}(\begin{bmatrix} 1 \\ 0 \end{bmatrix}_A \otimes \begin{bmatrix} 1 \\ 0 \end{bmatrix}_B + \begin{bmatrix} 1 \\ 0 \end{bmatrix}_A \otimes \begin{bmatrix} 0 \\ 1 \end{bmatrix}_B + \begin{bmatrix} 0 \\ 1 \end{bmatrix}_A \otimes \begin{bmatrix} 0 \\ 1 \end{bmatrix}_B). \label{chen} \end{eqnarray} The corresponding density matrix is given as \begin{equation} \rho_{AB}=\ket{\Phi}\bra{\Phi} = {1\over 3}\begin{bmatrix} 1,& 1,& 0,& 1 \\ 1,& 1,& 0,& 1 \\ 0,& 0,& 0,& 0 \\ 1,& 1,& 0,& 1 \end{bmatrix} \label{density-then} \end{equation} This density matrix can be written in the form of (\ref{density}) with \begin{equation} {\vec{a}}={1\over 3} \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix} , \quad {\vec{b}}={1\over 3} \begin{bmatrix} 2 \\ 0 \\ -1 \end{bmatrix}, \quad {\mathcal{F}} = {1\over 3}\begin{bmatrix} 2 & 0 & -2\\ 0 & -2 & 0 \\ 2 & 0 & 1 \end{bmatrix}.\nonumber \end{equation} This yields the correlation matrix as \begin{equation} \mathcal{C} = {\mathcal{F}} - {\vec{a}}\cdot{{\vec{b}^{\hspace{0.05cm} \mathsf{T}}}} = {2\over 9} \begin{bmatrix} 1 & 0 & -2\\ 0 & -3 & 0 \\ 2 & 0 & 2 \end{bmatrix}. \end{equation} If we set \begin{equation} {\vec{y}}= \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},\quad {\vec{y'}} = \mathcal{C} \cdot{\vec{y}}={2\over 9} \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}. \end{equation} Thus, the correlation function is zero for any operator $\mathcal{X}$ defined by a vector ${\vec{x}}$ that resides on the plane spanned by \begin{equation} \{ \begin{bmatrix} 2 \\ 0 \\ -1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \} \end{equation} Thus, again, if we prepare three observable operators defined by three linearly independent vectors $\{\vec{x_a}, \vec{x_b}, \vec{x_c}\}$, we can be sure that at least one of them will yield a non-zero correlation.
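The rank criterion of Theorem 2 is straightforward to check numerically. The following sketch (our own illustration, not part of the original derivation) extracts $\vec{a}$, $\vec{b}$, and $\mathcal{F}$ from a $4\times4$ density matrix by taking traces against the Pauli matrices, builds $\mathcal{C}$ of Eq.~(\ref{corr}), and confirms $rank(\mathcal{C})=0$ for a product state and $rank(\mathcal{C})=3$ for the two entangled examples above.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
       np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
       np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

def correlation_matrix(rho):
    """C = F - a b^T of Eq. (corr), from a 4x4 density matrix rho."""
    a = np.array([np.trace(rho @ np.kron(s, I2)).real for s in sig])
    b = np.array([np.trace(rho @ np.kron(I2, s)).real for s in sig])
    F = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in sig]
                  for si in sig])
    return F - np.outer(a, b)

e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
states = {
    'product |a1 b1>': np.kron(e0, e0),
    'singlet': (np.kron(e0, e1) - np.kron(e1, e0)) / np.sqrt(2),
    'Phi of Eq. (chen)': (np.kron(e0, e0) + np.kron(e0, e1)
                          + np.kron(e1, e1)) / np.sqrt(3),
}
for name, psi in states.items():
    C = correlation_matrix(np.outer(psi, psi.conj()))
    print(name, np.linalg.matrix_rank(C))   # prints 0, 3, 3
\end{verbatim}
A zero-correlation pair as in Theorem 1 can likewise be constructed for any $\mathcal{C}$ by picking an arbitrary $\vec{y}$ and any $\vec{x}$ orthogonal to $\mathcal{C}\cdot\vec{y}$.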
\subsection{Mixed Density Matrix} As has been known, if the system is described by a mixed density matrix, the situation is more complex. This is also the case for our approach. We illustrate this by considering a concrete example, the Werner density matrix\cite{werner}. The Werner density matrix for bipartite $2\times2$ quantum systems is given as follows, \begin{equation} \rho^{W}_{AB}={{1-\xi}\over 4}{\bf{1}_A}\otimes {\bf{1}_B} + \xi\ket{\Psi^{-}}\bra{\Psi^{-}} = {1\over 4}\begin{bmatrix} 1-\xi,& 0,& 0,& 0 \\ 0,& 1+\xi,& -2\xi,& 0 \\ 0,& -2\xi,& 1+\xi,& 0 \\ 0,& 0,& 0,& 1-\xi \end{bmatrix}, \end{equation} where $\ket{\Psi^{-}}$ is the singlet state (\ref{bell2}), to which it reduces when $\xi=1$. Thus, it can be viewed as a generalization of the singlet state. It can be rewritten as follows \begin{equation} \rho^{W}_{AB}={{1}\over 4}({\bf{1}_A}\otimes {\bf{1}_B} +(-\xi) \sum_{i}{\mathcal{\sigma}}_i\otimes {\mathcal{\sigma}}_i), \label{mixdensity} \end{equation} which is of the form (\ref{density}), with $\vec{a}=\vec{b}=\vec{0}$, $F_{ij}= -\delta_{ij}\xi$. This density matrix has been studied extensively (e.g., \cite{peres,hiroshima,azuma}) and is known to be separable, i.e., rewritable in the form of (\ref{mseparable}), for $0 \leq \xi \leq {1\over 3}$, and entangled for ${1\over 3} < \xi \leq 1$. Now, if we apply our procedure, the matrix $\mathcal{C}$ is a simple diagonal matrix, which again reduces to that of the singlet state when $\xi=1$. \begin{equation} \mathcal{C} = \mathcal{F} = (-\xi)\begin{bmatrix} 1,& 0,& 0 \\ 0,& 1,& 0 \\ 0,& 0,& 1 \end{bmatrix} \label{cmatrix-werner} \end{equation} Hence, the correlation function is also simply given as \begin{equation} c(\mathcal{X}, \mathcal{Y}) = -{\xi\over 4}{\vec{x}}\cdot {\vec{y}}. \end{equation} This shows that $c(\mathcal{X}, \mathcal{Y})=0$ if and only if the non-zero real three-dimensional vectors $\vec{x}$ and $\vec{y}$ are orthogonal, regardless of the value of $\xi$. Thus, for the Werner density matrix, the zero-correlation condition is the same in both the separable and entangled regimes. Zero correlations do not entail separability, nor do non-zero correlations imply entanglement. This is in contrast to the pure-state case, where zero-correlations can distinguish between separable and entangled states. In general, for mixed density matrices, it appears that we need to analyze the values of the correlation function further with various measurements and apply other criteria to detect entanglement. \section{Discussion} \noindent (i) Generalization For pure $2 \times 2$ states, we expect the binary correlation measurements proposed here to give the ``minimal bound'' for entanglement detection. For $n \times n$ states, a naive extended conjecture is the following. \vspace{1em} \noindent {\bf{Conjecture}}: For a bipartite $n \times n$ pure system, the following holds. (1) The analogously defined correlation matrix $C$ has a rank of either $0$ for separable states, or $n^2 -1$ for entangled states. (2) The minimum number of correlation function measurements for entanglement detection with certainty is $n^2 -1$. \vspace{1em} It turns out that this conjecture is not quite correct. However, it paved the way for the development of the first algorithm for computing the Schmidt rank of any unknown pure quantum state using only zero-correlation tests\cite{tanasescu}. Also, for the $n \times n$ system, a minimum bound of $n+1$ for entanglement detection using general correlation measurements is known\cite{guhne03}. It remains to be explored whether our restricted binary correlation measurements can reach or approximate this bound. For a multi-party system involving more than two parties, we cannot anticipate any analogous theorem. It is not even clear whether there exist entangled states that never admit zero correlations.
\vspace{1em}
\noindent (ii) Limitations

As we have noted, this approach of correlation measurements, particularly one based on zero correlations, has not been found to be a useful way to distinguish separability from entanglement for mixed states in general. Even the existence of uncountably many zero correlations does not certify separability, nor do non-zero correlations entail entanglement. It appears that we need to gain more detailed information on the values of the correlation functions, or on each element of the density matrix in question, to which more elaborate mathematical tools can be applied for entanglement determination.

\vspace{1em}
\noindent (iii) Prospects

Thus, the concept of correlations as we have set it up above is not enough to capture the nature of quantum entanglement in general. Further, we would like to point out that the concept of entanglement itself may not be enough to describe quantum correlations in general. This is indicated by the recently proposed ``quantum pigeonhole effect'' by Aharonov et al.~\cite{aharonov}. Classically, the pigeonhole principle states that if more pigeons are placed in fewer holes, at least one hole must contain multiple pigeons. The analogous system is considered in quantum mechanics with three two-state quantum particles (pigeons), each in a superposition of two (hole) states (a quantum $3\times2$ system). Computing correlations with cleverly chosen, different pre- and post-selected product (non-entangled) states surprisingly shows that no pair of particles can be in the same quantum (hole) state. This means that the pigeonhole principle breaks down in some cases in quantum mechanics. It also shows that there are new aspects of quantum entanglement which are not apparent in product states. The experimental work with three single photons transmitted through two polarization channels indicates that this quantum pigeonhole effect is real~\cite{chen}. Concepts that generalize entanglement to capture this type of quantum correlation are yet to be explored.

\section*{Acknowledgment}
The author would like to thank Philip M. Pearle, Professor Emeritus of Hamilton College, for his comments and encouragement. Also, comments by Dr. Shuming Cheng on Theorem 1, and by Profs. Chigaku Itoi and Shinichi Deguchi of Nihon University, are acknowledged as very constructive for this work. This work was supported by funding from Ohagi Hospital, Hashimoto, Wakayama, Japan, and by Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science No.~19H01201.
\section{Introduction}
Jammed out-of-equilibrium structures forming from attractive particles are ubiquitous in nature and many consumer products. Being metastable solids, they are mechanically rigid structures that typically form at low particle density due to a space-spanning cluster of aggregated particles. An important example is the aggregation of attractive colloidal particles into system-spanning networks, a process known as gelation~\cite{Trappe01}. Extensive work has gone into mapping the phase boundary of this transition, unravelling jamming diagrams across a range of volume fractions and particle interaction strengths~\cite{Trappe01,Bergenholtz03,Puertas03,Zukoski03,Blijdenstein04,Odriozola2007,Tuinier99,Zaccarelli05,Lu08,Eberle11,Wang19}. While much studied for strong attraction, where particles stick irreversibly and open structures form~\cite{Weitz84,Meakin84,Weitz85,Ball87,Lin89}, and at rather low attraction, typically induced by colloidal depletion interactions, where phase separation occurs by spinodal decomposition into depletant-rich and depletant-poor phases with subsequent arrest~\cite{Lu08}, the situation is much less clear for intermediate particle attraction, where the structure forms in a highly out-of-equilibrium process, most relevant to structure formation in biology. At effective interparticle attractions of many $k_\textrm{B}T$, the thermal energy, detailed balance in the particle attachment and detachment process is broken, and the system falls out of equilibrium; in this case, a description based on an underlying equilibrium phase diagram may no longer apply. This is demonstrated by a recent experimental study on the intermittent dynamics of colloidal gels~\cite{vanDorn17} revealing a marked asymmetry in the cooperative bonding and de-bonding processes. This regime, which is most relevant for biological network formation, is often modelled by cluster kinetic equations, but there is no theory that describes the formation of these out-of-equilibrium structures and is able to explain or predict, from first principles, the fractal dimension of the growing clusters and its relation to the cluster mass distribution. Furthermore, most experimental studies in the weakly attractive regime are based on depletion interactions that naturally cause phase separation into depletant-rich and depletant-poor phases, which may yield specific routes to gelation, distinct from those of attractive spheres~\cite{Eberle11}. Besides the arrested spinodal decomposition scenario~\cite{Lu08}, a mechanism based on a double-glass transition, or jamming transition of clusters, has also been proposed~\cite{DelGado2009}, which has recently received experimental verification in terms of the resulting predictions for the gel elasticity and mesoscale structure~\cite{Furst19}. In fact, recent two-dimensional simulations suggest that the jamming of attractive spheres falls into a distinct universality class~\cite{Tighe2018}: the continuous growth of clusters is reminiscent of a continuous phase transition with a diverging length scale, different from the more familiar repulsive jamming. Such a framework of critical phenomena has been sought as a possible connection between physical, colloidal and chemical gelation, as it would offer a unifying description~\cite{DelGado1,DelGado2}.
Yet, while equilibrium percolation transitions have been discussed for fluid-fluid and fluid-solid transitions~\cite{Chiew1983,Marr1993}, used to interpret experimental data~\cite{Dhont95,Russel93,Eberle11}, and in theoretical models~\cite{Broderix}, their validity for systems out of equilibrium remains unclear. Here we combine experiments on tunable attractive colloids with simulations and analytic kinetic modelling to show that the observed gelation of short-range attractive particles into space-spanning structures shows all hallmarks of a nonequilibrium continuous phase transition. We study cluster growth of particles interacting with an effective critical Casimir attraction, as well as via simulations and analytic solutions of the master kinetic equation; the latter encodes the relevant physics in terms of aggregation and spontaneous breakage of the growing clusters. All approaches uniquely converge to show that the observed short-range attractive colloidal gelation is related to a nonequilibrium second-order phase transition, with critical exponents of cluster growth in agreement with percolation theory. Analytically, this is supported by solving the master kinetic equation in the limit of single-particle detachment, predicting the existence of a critical point and power-law cluster-mass distributions. Both predictions are indeed confirmed in the experiments and simulations over a range of attractive strengths. These results open up a new nonequilibrium view on gelation and attractive jammed structures in general, relevant for many natural aggregation processes. Our findings identify this structural arrest as an analogue, mirror-image process of yielding, and suggest a unification of yielding and arrest in a single framework.
\section{Results}
\subsection{Cluster growth and critical scaling}
We use colloidal particles suspended in a sucrose binary mixture of lutidine and water, in which attractive critical Casimir forces arise close to the solvent critical point $T_\textrm{c} = 31.0^\circ C$. The particles have a radius $r = 1 \mu m$ with a polydispersity of 5$\%$ and are suspended at a volume fraction of $\phi \sim 0.12$ in the sucrose binary solvent, which matches their density and refractive index, allowing for observation of assembly deep in the bulk with minimal disturbance by gravity (see Methods). Close to $T_\textrm{c}$, attractive critical Casimir forces cause particle aggregation with an effective attractive potential set by the temperature difference $\Delta T = T_\textrm{c} - T$~\cite{Hertlein08,Gambassi09,Stuij17,Shelke13}. Previous studies have revealed equilibrium phase transitions from gas to liquid and liquid to solid at low attraction~\cite{Guo2008,Nguyen2013,Dang2013,Nguyen2018}, as well as colloidal aggregation at higher attraction, which was investigated in microgravity~\cite{Veen12,Potenza2014,Potenza2018}. To study gelation, we induce sufficiently strong attraction between the particles by jumping from room temperature to $\Delta T = 1.2 , 1.0, 0.7$, and $0.5^\circ$C, corresponding to an attraction increasing from $\sim 3$ to $10 k_\textrm{B}T$. For each attraction, we follow the particle-scale aggregation process in a $108 \mu m$ by $108 \mu m$ by $40 \mu m$ volume using confocal microscopy. The experiments are complemented with molecular dynamics simulations of an equal mixture of particles with radii $r_\textrm{a}/r_\textrm{b} = 1:1.1$ at volume fraction $\phi=0.12$.
Particles follow the overdamped Langevin equation, interacting through a Mie potential with parameters chosen to match the rather short attractive range of $\approx0.08r_\textrm{a}$ of the experiments (see Methods). The attractive strength is given by $\epsilon/k_\textrm{B}T$, where $\epsilon$ is the prefactor of the potential and $k_\textrm{B}T$ is the thermal energy. To test the generality of the computational results, we also perform simulations with a square-well potential, on particles with the same size ratio and effective attraction, as defined from the corresponding second virial coefficients. To compare with the phase behavior of adhesive hard spheres, we compute the Baxter parameter, $\tau$, and find that the onset of gelation we observe at $\tau \sim 0.1$ is in very good agreement with the gelation transition of adhesive hard spheres~\cite{Eberle11}, see Supplementary Note 1 and Supplementary Figures 1 and 2. Experiments on the aggregating colloidal particles reveal growing clusters, the largest of which eventually spans the field of view (Fig. \ref{fig1}a-c). By plotting the size evolution of the largest and second-largest cluster in Fig. \ref{fig1}d, we identify the onset of space-spanning structures by the sudden increase of the largest, and concomitant decrease of the second-largest cluster, which becomes part of the largest cluster at gelation. We investigate the onset of gelation by looking at the evolution of the average coordination number $z$, i.e. the average number of bonded neighbors of a particle. This number increases as clusters grow and saturates at a value $z_\textrm{max}$, see Fig.~\ref{fig1}(e). We take $z$ as the order parameter of the gelation transition and plot the fraction of particles in the largest cluster, $f_\textrm{z}$, as a function of $z$ in Fig.~\ref{fig2}a. It increases sharply upon approaching the transition, indicating that the largest cluster abruptly absorbs a large number of particles. We find a divergence $f_\textrm{z} \sim (z_\textrm{c}-z)^{-\sigma}$ upon approaching the critical value $z_\textrm{c} = 5.5$, with exponent $\sigma \approx 1.6$. Concomitantly, the length scale of connected particle clusters diverges. We determine the correlation length of connected particles using $\xi^2 = 2 \sum_i R_{\textrm{g}i}^2 N_i^2 /\sum_i N_i^2 $, where $R_{\textrm{g}i}$ is the radius of gyration of a cluster of size $N_i$ \cite{Stauffer}. This correlation length also grows sharply upon approaching the critical coordination number $z_\textrm{c}$, as shown in Fig.~\ref{fig2}b, diverging as $\xi \sim (z_\textrm{c} - z)^{-\nu}$ with $\nu = 0.8$ (inset). This exponent is consistent with three-dimensional percolation results~\cite{Stauffer}. Similar behavior is observed for all other attractive strengths. The same divergence is also observed in the simulations, see Figs.~\ref{fig2}c and d, where we compile data for all investigated attractive strengths. All data collapse onto single curves, indicating that the same mechanism applies irrespective of the attractive strengths. We observe divergence of $f_\textrm{z}$ and $\xi$ upon approaching the critical coordination number, again with exponents of $-1.6$ and $-0.8$, respectively, for $f_\textrm{z}$ and $\xi$. The same scaling is observed for simulations based on a square-well potential of similar short range, see Fig.~\ref{fig2}e.
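To make the cluster-level observables concrete, the following Python sketch (a minimal illustration under simplifying assumptions: free boundaries instead of our periodic simulation box, and \texttt{scipy}'s \texttt{cKDTree} for neighbor finding; it is not the analysis code used for the figures) computes the mean coordination number $z$, the largest-cluster fraction $f_\textrm{z}$ and the correlation length $\xi$ from a set of particle coordinates.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def cluster_observables(positions, bond_cutoff):
    """Mean coordination number z, largest-cluster fraction f_z and
    correlation length xi from an (N x 3) array of coordinates."""
    n = len(positions)
    pairs = cKDTree(positions).query_pairs(bond_cutoff)  # bonded pairs
    z = 2 * len(pairs) / n

    # union-find: group bonded particles into connected clusters
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(n)])

    sizes, rg2 = [], []
    for lab in np.unique(labels):
        members = positions[labels == lab]
        sizes.append(len(members))
        rg2.append(((members - members.mean(0)) ** 2).sum(1).mean())
    sizes, rg2 = np.array(sizes), np.array(rg2)

    f_z = sizes.max() / n
    # xi^2 = 2 sum_i Rg_i^2 N_i^2 / sum_i N_i^2, as defined in the text
    xi = np.sqrt(2 * (rg2 * sizes**2).sum() / (sizes**2).sum())
    return z, f_z, xi

# example: a dilute random (non-gelled) configuration
pos = np.random.rand(2000, 3) * 20.0
print(cluster_observables(pos, bond_cutoff=1.0))
\end{verbatim}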
Furthermore, as shown in the Supplementary Notes 2 and 3, identical scaling is observed over a range of particle volume fractions and attractive strengths, and for a very different short-range attractive experimental system of protein microparticles (Supplementary Figure 3), indicating that the observed divergence is robust and a general property of the gelation. Percolation occurs only for sufficiently strong attraction; for attractive strengths smaller than $\epsilon_\textrm{c}/k_\textrm{B}T = 2.5$, the critical coordination number $z_\textrm{c}$ is no longer reached (green dots for $\epsilon/k_\textrm{B}T = 2$), and the clusters do not span space, consistent with the previously observed cluster phase in depletion systems~\cite{Lu08}. We find that upon approaching the critical attraction $\epsilon_\textrm{c}$ from below, the critical coordination number $z_\textrm{c}$ is approached in a power-law fashion (Fig.~\ref{fig2}f), giving independent evidence of an underlying critical point. We thus observe all hallmarks of percolation, while at the same time detailed balance is broken, as shown in Fig.~\ref{fig3}. Here we plot association and dissociation rates, measured directly from subsequent simulation snapshots (see Methods and Supplementary Movie 1), as a function of cluster size for different attractive strengths. Association rates are clearly larger than dissociation rates, and show a different cluster-size dependence: while the former are roughly independent of cluster size, the latter decrease rapidly with it, being largest for single-particle break-off.
\subsection{Analytic Model}
To interpret the experimental and simulation results within the framework of nonequilibrium statistical mechanics, we study a kinetic master equation for partially reversible aggregation used before in colloidal and protein aggregation~\cite{Odriozola2002,Odriozola2004,Odriozola2007} (see Methods). The equation describes changes in the cluster populations $c_k$ for all sizes $k=1..N$ due to dissociation into clusters of $i$ and $j$ particles, occurring with rate constant $K_{ij}^{-}$, and merging of clusters with $i$ and $j$ particles, occurring with rate constant $K_{ij}^{+}$. It is analytically solvable for the physically meaningful case that the dissociation rate constant is non-zero only for single-particle dissociation, and the association rate has the same value for all aggregate sizes. Both assumptions are reasonably well supported by Fig.~\ref{fig3}, which shows that the attachment rate is fairly independent of the cluster size, while the detachment rate decreases rapidly with cluster size. Physically, the idea is that multiply connected particles belonging to inner cluster shells sit in much deeper energy minima, while particles at the surface sit in shallower potential wells, breaking off much more easily, as supported by recent simulations~\cite{Russel14}. Under these assumptions, the master kinetic equation simplifies to (see Methods)
\begin{equation} \frac{dC}{dt}=C^{2}+2\lambda\frac{1-z}{z}C+2\lambda\frac{(1-z)^{2}}{z} N(t) \end{equation}
where $C(z,t)=\sum_{j\geq1}(z^{j}-1)c_{j}(t)$, with $z$ a dummy variable as usually defined in generating functions (not to be confused with the coordination number), $N(t)=\sum_{j\geq1}c_{j}(t)$, and we took $K_{ij}^{+}=2$ for ease of notation and without loss of generality~\cite{majumdar1,majumdar2}. Here, the parameter $\lambda$ measures the extent of single-particle breakup, and is proportional to $\exp(-V/k_\textrm{B}T)$ with $V$ the depth of the pair attraction well.
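As a toy illustration of this limit of the model, the sketch below integrates a truncated version of the master kinetic equation numerically, with constant aggregation kernel $K^{+}=2$ and single-particle detachment at rate $\lambda$. The size cutoff and the exact factor conventions are our own simplifications, so the sketch should be read as qualitative; in the sol regime it develops a steady-state cluster-mass distribution close to the predicted $k^{-3/2}$ power law with an exponential cutoff.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

KMAX = 300   # truncation of the cluster-size ladder (toy value)
lam = 1.2    # breakup rate lambda; lam > 1 lies in the sol phase

def rhs(t, c):
    """c[k-1] = concentration of k-clusters, constant kernel K+ = 2,
    single-particle detachment at rate lam for every k >= 2."""
    N = c.sum()
    dc = np.zeros_like(c)
    conv = np.convolve(c, c)[:KMAX]  # conv[k-2] = sum_{i+j=k} c_i c_j
    dc[1:] += conv[:-1]              # aggregation gain
    dc -= 2 * c * N                  # aggregation loss
    dc[1:] -= lam * c[1:]            # (k) -> (k-1) + (1), loss
    dc[:-1] += lam * c[1:]           # gain of the (k-1)-cluster
    dc[0] += lam * c[1:].sum()       # gain of the detached monomer
    return dc

c0 = np.zeros(KMAX)
c0[0] = 1.0                          # monomers only at t = 0
sol = solve_ivp(rhs, (0, 500), c0, method="LSODA", rtol=1e-8)
ck, k = sol.y[:, -1], np.arange(1, KMAX + 1)

sel = (k >= 3) & (k <= 15) & (ck > 0)
slope = np.polyfit(np.log(k[sel]), np.log(ck[sel]), 1)[0]
print(f"low-k slope ~ {slope:.2f} (model: -3/2 times an exp. cutoff)")
\end{verbatim}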
Clearly, the last condition breaks detailed balance: there is no linear dependence between aggregation and fragmentation rates for all processes involving $i$ and $j$ both larger than unity, meaning these aggregation processes are \textit{de facto} irreversible. It follows that any stationary state in which the cluster mass distribution becomes time-independent is a nonequilibrium stationary state. At steady state, defined by $dC/dt=0$ for $t\rightarrow\infty$, the resulting quadratic equation is solvable, and differentiating $C$ with respect to $z$ and setting $z=1$ gives $N$ as a function of $\lambda$. A continuous phase transition at the critical point $\lambda_\textrm{c}=1$ is found, which separates the sol state ($\lambda\geq1$) with $N=1-(2\lambda)^{-1}$ from the gel state (spanning network, $\lambda\leq1$) with $N=\lambda/2$. Hence, this model predicts gelation as a continuous (second-order) phase transition, with a cluster-mass distribution that exhibits two distinct power-law exponents, namely $\tau=-3/2$, with an exponential tail, in the sol phase, and $\tau=-5/2$, without the exponential tail, in the gel phase (see Methods).
\subsection{Verification of model predictions}
To test these predictions, we monitor cluster sizes over time, and determine their distributions just before and just after gelation, as shown in Fig.~\ref{fig4}a and b. The initial exponential cutoff grows to larger sizes until a power-law distribution emerges near gelation (Fig.~\ref{fig4}a). The data is indeed in good agreement with the predicted exponent $-3/2$ before percolation (see also Supplementary Figure 4). After percolation, a large space-spanning cluster coexists with a dilute population of clusters whose size distribution approaches a power law with slope close to $-5/2$, as shown in Fig.~\ref{fig4}b, where we have taken out the largest cluster and show the distribution of the remaining cluster population. A full reconstruction of the space-spanning cluster coexisting with the smaller clusters is shown in Fig.~\ref{fig4}c. The power-law slope of $-5/2$ is again consistent with the analytic prediction. Further confirmation comes from the simulations, which show very similar distributions (crosses in Fig.~\ref{fig4}b). Furthermore, the cluster mass distributions are fairly robust upon variation of the attractive strength, as shown by the experimental data in Fig.~\ref{fig4}d and e. For all systems reaching percolation, cluster mass distributions closely follow the predicted power laws with exponents $-3/2$ and $-5/2$ before and after percolation, respectively. Some deviation is expected as the assumption of single-particle break-up is an approximation. Finally, we can compare the fractal dimension determined experimentally with the value estimated from the hyperscaling relation of standard percolation, $\tau = (d/d_\textrm{f})+1$, where $\tau$ is the power-law exponent of the cluster-mass distribution at percolation, assuming that this relation holds also for the nonequilibrium case studied here (recent experimental evidence supporting the validity of hyperscaling relations in colloidal gelation has been reported in~\cite{Joshi}). Using $\tau = 5/2$ as predicted by the model and confirmed in both experiments and simulations, we obtain the prediction $d_\textrm{f} = 2$. This is indeed in very good agreement with the experimental data, as shown by plotting the cluster size as a function of the radius of gyration in Fig.~\ref{fig4}f.
Here, we plot data at different stages before and after percolation, and find a robust power-law slope indicating a constant fractal dimension, consistent with $d_\textrm{f} \sim 2$ (solid line); yet, the limited dynamic range does not exclude other possible scenarios (such as $d_\textrm{f}=1.8$ for DLCA). We also note that the fractal dimension is expected to increase due to ageing, leading to more compact structures~\cite{Shelke13,Veen12}; yet, such ageing is not observed on the time scale of our experiment, as shown by the constant slope in Fig.~\ref{fig4}f. We note that, while the cluster distributions are thus accurately predicted by the model, properties that require spatial information may not be. As an example, we estimate the exponent $\sigma$ from the divergence of the largest (cut-off) size using eq.~\ref{eq5} in the Methods section. A Taylor expansion of this expression yields a leading term that diverges with a power law of exponent $-2$, different from the exponent $-1.6$ determined in the experiments and simulations. This deviation between our simple mean-field model prediction and the experimental and simulation results is reminiscent of the deviations of mean-field predictions for critical exponents in equilibrium critical phenomena.
\section{Discussion}
Our critical Casimir colloidal experiments, simulations, and analytically solvable master-kinetic-equation description all converge unambiguously to show that the observed gelation of short-range attractive colloids at intermediate densities manifests as a nonequilibrium continuous phase transition with exponents reminiscent of standard percolation in $3d$. The cluster-mass distributions, predicted by kinetic theory with the assumption of single-particle thermal detachment from clusters, are quantitatively confirmed in both experiments and simulations for the investigated attraction range. Furthermore, application of the hyperscaling relation of equilibrium percolation leads to an accurate prediction of the fractal dimension. These results inspire a more general understanding of the fluid-to-solid transition in disordered systems. The yielding of amorphous solids has likewise been identified as a nonequilibrium percolation transition~\cite{Shrivastav16,Ghosh17}. Because this yielding process, which fluidizes an initially solid material, can be regarded as a process opposite to gelation, which solidifies an initially fluid-like sample, it appears that the observed onset of rigidity from a fluid state, and the onset of flow from a solid state, are two almost mirror-image manifestations of the same nonequilibrium continuous phenomenon. This general framework views the onset and loss of rigidity as two related nonequilibrium critical phase transitions evolving in opposite directions. Indeed, the observed robustness of the scaling relations suggests some universality, meaning that the gelation mechanism, at least in this range of attraction and volume fractions, is independent of the precise form of the potential. This mechanism can hence be used for the tailored self-assembly of a variety of different systems with greatly varying pair interactions, particles and solvents. It appears that the classically discussed equilibrium percolation~\cite{Chiew1983,Marr1993} extends towards a nonequilibrium continuation, governed by very different underlying kinetics (broken detailed balance), which is of crucial importance in, e.g., biological structure formation.
Indeed, recent studies on the gelation of random-patchy particles mimicking proteins highlight the direct analogy to adhesive hard spheres and our system~\cite{Wang19}. \\ \section{Methods} \subsection{Colloidal suspension ---} The colloids are fluorescently labeled copolymer particles made of 2,2,2-trifluoroethyl methacrylate~\cite{Kodger15} with radius $r_0 = 1 \mu m$ and a polydispersity of 5$\%$. The particles are suspended at a volume fraction $\phi \sim 0.12$ in a binary mixture of lutidine and water, with weight fraction of lutidine $c_L = 0.25$. Sugar was added to match the solvent refractive index and density with those of the particles, while only slightly affecting the binary solvent phase diagram. We also added salt (5 mM KCl) to screen the electrostatic repulsion of the charge-stabilized particles, as in previous studies~\cite{Veen12,Stuij17}. Phase separation of this solvent occurs at $T_\textrm{c} = 31.0^\circ$C, with a critical composition of $c_\textrm{c} = 0.26$ as determined by systematic investigation of the solvent phase diagram over a range of compositions. \subsection{Experiments ---} We use a fast confocal microscope (Zeiss 5 Live) equipped with a 63x lens with a numerical aperture of 1.4 to image individual colloidal particles in a $108 \mu$m by $108 \mu$m by $60 \mu$m volume. Three-dimensional image stacks with a distance of $0.2 \mu m$ between images are acquired every 60 seconds over a time interval of at least 60 minutes to follow the gelation process in 3 dimensions from the initial cluster formation to gelation and beyond. During this process, the temperature is kept strictly constant by using a specially designed water heating setup that controls the temperature of both the sample and the coupled oil-immersion objective with a stability of $\sim 0.01^\circ$C. Particle positions are determined from the three-dimensional image stacks with an iterative tracking algorithm to optimize feature finding and particle locating accuracy~\cite{trackpy}. The resulting particle positions have an accuracy of $\sim$ 20nm in the horizontal and $\sim$ 50nm in the vertical direction. To show this, we used several layers of particles stuck to a cover slip, which we imaged and located repeatedly to determine histograms of particle positions. From this, we determine the positional variances $\sigma_x =$ 15nm, $\sigma_y =$ 20nm and $\sigma_z =$ 40nm. From the determined three-dimensional particle positions, bonded particles are identified as those separated by less than $d_0 = 2.6r$, corresponding to the first minimum of the pair correlation function. We subsequently group bonded particles into connected clusters using a clustering algorithm based on a threshold distance of $d_\textrm{c} = 3.5r$. \subsection{Simulations ---} Molecular dynamics simulations are used to model the trajectories of particles with radii $r_\textrm{a}/r_\textrm{b} = 1:1.1$ mixed equally at volume fraction $\phi=0.12$, interacting through a Mie potential, with attractive range matching that of the experiments and attractive strength given by prefactor $\epsilon$. The potential acts between all particle pairs within a cut-off range $r_\textrm{c}=1.5 r_\textrm{a}$. The kinetic state of our system, liquid or gel, is determined by the dimensionless control parameter $\epsilon/k_\textrm{B}T$, where $k_\textrm{B}T$ is the thermal energy. The time unit is $t_\textrm{s}=\sqrt{mr_\textrm{a}^2/\epsilon}$, with $m$ the mass of a particle with radius $r_\textrm{a}$.
Particle trajectories follow the Langevin equation~\cite{plimpton1995fast}, with coefficient of friction $1/\zeta$ (we set $\zeta = t_\textrm{s}$) and random forces ${f}_\textrm{B}(t)$ satisfying $\langle f_\textrm{B}(t)f_\textrm{B}(t')\rangle = 2mk_\textrm{B}T\delta(t-t')/\zeta$. Lennard-Jones units are used throughout to maintain generality, and we use $dt = 0.0025t_\textrm{s}$ as the numerical time step. Simulations are performed in a cubic box with periodic boundary conditions containing $N=32,768$ particles. Systems are equilibrated in the liquid state with $\epsilon/k_\textrm{B}T=1$ before switching to larger values and following the subsequent cluster formation. Contacting particles are those within the inflection point of the potential, which in this case occurs at $\left(31/14\right)^{0.1}r_\textrm{a}$. This allows us to identify clusters, and to follow the evolution of their size distributions across the gelation transition for a range of attractive strengths. We also perform simulations with an approximate square-well potential. We adopt the `continuous square-well' model described in~\cite{Zeron18}, writing the potential as \begin{equation} U_\text{csw}(r) = \frac{1}{2}~\epsilon \left(\left(\frac{1}{r} \right)^n + \frac{1 - e^{-m(r-1)(r-r_\textrm{sw})}}{1 + e^{-m(r-1)(r-r_\textrm{sw})}} -1 \right) \text{,} \end{equation} using a binary form for the width of the well $r_\textrm{sw}$ (potential range) to match our particle size ratio for the Mie potential. The dimensionless well steepnesses $m$ and $n$ are set to 7000 and 700, respectively, leading to a second virial coefficient (defined following ref.~\cite{Vliegenthart00}) that matches that of the Mie potential at $\epsilon/k_\textrm{B}T = 3$. \subsection{Calculation of association and dissociation rates ---} To calculate association and dissociation rates, we first define directly contacting particles as those whose centres lie within the inflection point of the potential (where $\frac{\partial^2U}{\partial r^2} = 0$). Based on this criterion, we define a particle as belonging to a cluster if there exists a continuous series of direct contacts between that particle and all other particles in the cluster. Outputting the particle coordinates with very fine time resolution then allows us to monitor the temporal evolution of cluster sizes throughout the system as successive dissociation and association events occur, and thus to compute the rate constants $K_{ij}^{+/-}$ in the kinetic master equation, see below. Here, $K_{ij}^{+}$ denotes the association rate of clusters that have, respectively, $i$ and $j$ particles, while $K_{ij}^{-}$ indicates the split-up or dissociation rate of a larger cluster into clusters of $i$ and $j$ particles. We determine the rate of dissociation events involving clusters of size 4, for example, by averaging the dissociation rate $K_{4j}^{-}$ over $j$. As a result, we find that the rate of detachment of whole clusters is considerably smaller than that of events where a single particle detaches from a cluster, while the rates of the corresponding association events are comparable. \subsection{Cluster growth model ---} We start with the master kinetic equation for the time evolution of the cluster population $c_{k}$, i.e. the number of clusters with $k$ particles per unit volume: \begin{widetext} \begin{equation} \frac{d c_{k}}{dt}=\frac{1}{2}\sum_{i+j=k}K_{ij}^{+}c_{i}c_{j} - c_{k}\sum_{j\geq1}K_{kj}^{+}c_{j}+\sum_{j\geq1}K_{kj}^{-}c_{j+k}-c_{k}\sum_{i+j=k}K_{ij}^{-}.
\label{eq:kinetic} \end{equation} \end{widetext} In this master equation, the first term on the right-hand side represents the creation of clusters with $k$ units due to the aggregation of one cluster with $i$ units with another with $j$ units (where $i+j=k$); the second term represents the ``annihilation'' of clusters with $k$ units due to the aggregation of a cluster with $k$ units with a cluster of any other size in the system; the third term represents the creation of a cluster with $k$ units due to the breakage of a larger cluster which splits into a cluster with $k$ units and another of $j$ units, where $j$ can take any value; the fourth term represents the annihilation of a cluster with $k$ units due to fragmentation into two fragments $i$ and $j$, subject to mass balance. In general, this equation can only be solved numerically. However, in the case of single-particle detachment only, and cluster-size-independent attachment, an analytic solution is available. Upon introducing the generating function (a procedure similar to a discrete Laplace transformation) $C(z,t)=\sum_{j\geq1}(z^{j}-1)c_{j}(t)$, where $z$ is a dummy variable as usually defined in generating functions, the system of ordinary differential equations is reduced to the Riccati equation (eq. (1) in the manuscript), yielding a phase transition at $\lambda_\textrm{c}=1$. By expanding $C(z)$ in powers of $z$ one obtains the cluster mass distribution in the sol and the gel phase. In the pre-critical sol phase, the power law is accompanied by an exponential cutoff~\cite{majumdar1,majumdar2}, \begin{equation} c_{k}\left(t\rightarrow\infty\right)\sim k^{-3/2}e^{-k/k_\textrm{c}}. \end{equation} The presence of the exponential cutoff implies that all clusters are finite in size. However, the cutoff size $k_\textrm{c}$ diverges as $\lambda\rightarrow1^{+}$, according to~\cite{majumdar1,majumdar2} \begin{equation} \label{eq5} k_\textrm{c}=\left\{2\log\left(\lambda/\lambda_\textrm{c}\right)-\log\left[2\left(\lambda/\lambda_\textrm{c}\right)-1\right]\right\}^{-1}. \end{equation} In the gel phase, $\lambda \leq 1$, the steady-state cluster mass distribution is \begin{equation} c_{k}\left(t\rightarrow\infty\right)\sim k^{-5/2}, \end{equation} now without an exponential tail, which signals the existence of a giant system-spanning cluster via the divergence of the first moment of the distribution. Hence, this model predicts gelation as a continuous (second-order) phase transition, with a cluster-mass distribution that exhibits two distinct power-law exponents, namely $\tau=-3/2$, with an exponential tail, in the sol phase, and $\tau=-5/2$, without the exponential tail, in the gel phase.\\ \section{Acknowledgements} The authors are grateful to Francesco Sciortino for useful comments, and to Carlijn van Balen and Erik van der Linden for preparing the protein microparticle system. This work is funded by an industrial partnership program, subsidized by the Netherlands Organization for Scientific Research (NWO). P. S. acknowledges support by a Vici Fellowship from NWO. C. N. acknowledges financial support from the Maudslay-Butler Research Fellowship at Pembroke College, Cambridge, and latterly from the Royal Academy of Engineering under the Research Fellowship scheme.\\ \section{Author contribution} J.C.R., A.Z. and P.S. conceived the study. J.C.R. performed the experiments and analyzed the data. C.N. performed the simulations. J.C.R. and P.S. wrote the paper except the modelling part, written by A.Z., and the simulation part, written by C.N. S.S.
advised on the study and manuscript. All authors discussed the data and reviewed the manuscript. \section{Additional information} {\bf Competing interests:} The authors declare no competing financial or non-financial interests. {\bf Data availability:} The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. {\bf Code availability:} The codes of the computer simulations are available from the corresponding author upon request.
\section{Application to Zero-Loss Payment System}\label{ssec:payment}
In this section, we describe how the Longlasting Blockchain\xspace can be used to implement a zero-loss payment system as long as assets are fungible. The payment system requires users to deposit an amount of assets before they participate in the ASMR\xspace consensus; the deposits of identified fraudsters are then used to refund assets that were spent in conflicting transactions.

\paragraph{Zero-loss guarantee.} Our payment system offers the following zero-loss property: if buyers got two conflicting transactions agreed upon, then after the reconciliation phase of ASMR\xspace one transaction persists whereas the other is refunded.
\begin{property}[Zero-Loss]\label{prop:zeroloss} A user not issuing conflicting transactions concurrently never loses any assets. \end{property}

\paragraph{Fungible assets assumption.} We assume that participants can transfer assets that are \emph{fungible}, in that one unit is interchangeable and indistinguishable from another of the same value. An example of a fungible asset is a cryptocurrency. If assets are not fungible, we assume that there exists a function $\ms{fungibility}:\mathds{D}^a \rightarrow \mathds{C}^{a-1} \times \mathds{D}$ that, given $a$ conflicting decisions $d^{P_0},...,d^{P_{a-1}}$, outputs only one of them, $d^{P_r}$, together with a list of fungible assets that the rest of the partitions are willing to take as a refund for their decision being forgotten from that instance on. We refer to the function $\ms{fungibility}$ as the \emph{fungibility function}. An example is a participant willing to take the value of a product if the product was accidentally sent to a different participant.

\paragraph{Double spending.} A successful disagreement can mean that correct replicas assumed two conflicting decisions for some period of time. Even with fungible assets, extending the decisions to the union of all of them is not always immediate if some are conflicting. For example, one decision may transfer a fungible asset to replica $p_i$ while another may transfer the same fungible asset to replica $p_k \neq p_i$ (i.e., a double-spend). We introduce the notion of deposit for these cases.

\paragraph{Deposit refund.} The deposit is refunded upon each attack by exploiting the funds of the fraudsters: once discovered, the fraudsters' keys are blacklisted, meaning that their transactions are no longer taken into account by the system. Instead, a corresponding amount of digital assets managed by the system is transferred into the deposit. As the total circulating amount of usable coins remains unchanged, this does not impact the value of the coins on exchanges. As we explain later (\cref{subsec:theoretical}), we bound the deposit so that refunds are always possible and no correct replica loses coins.

\paragraph{Refund to avoid losses.} The deposit manages the cost of recovery to guarantee that recovery (Property~(1) of Def.~\ref{def:properties}) is ensured and that all decisions either stay decided or are refunded. The deposit can be filled either by taking fees per decision or by taking all assets from fraudsters once they are identified and the slashing removes them. If the deposit does not have enough assets to fund the conflicting decisions, then it creates new virtual assets from the fungible set, increasing the circulating supply and thus decreasing the value of existing assets.
The deposit can later burn assets to rebalance the circulating supply if a set of fraudsters is subsequently punished, refilling the deposit.

\paragraph{Merging forks and slashing fraudsters.} Thanks to partial synchrony, in the worst case fraudsters manage to maximize their spending, and thus the cost of the recovery for the deposit can be the same as that of leaving a fork without merging. For instance, if the fraudsters had all the assets and they managed to double-spend them all, then the deposit would need to fund every single asset, doubling the circulating supply. The advantage is that at least the fork is merged back and extended into one agreeing state, and the fraudsters are removed. Furthermore, in any other case where the fraudsters did not double-spend all assets, the use of the deposit decreases the circulating supply compared to not merging a fork.

\paragraph{Simulating the cost of coalition attacks.} Later (\cref{subsec:theoretical}) we measure the expected cost of a refund depending on the probability of message arrival and the adversarial size $f$ under a probabilistic synchrony assumption. To this end, we say that an LLB\xspace is \emph{gainful} if the expected deposit flux of a recovery is zero or positive. A positive deposit flux means that the deposit is expected to actually gain funds from attacks, incurring no cost to refund, and that it can use this gain later on to cover the expenses of future attacks, increasing the security of the system.

\paragraph{Dissatisfaction despite zero-loss.} Note that the zero-loss property is not synonymous with user satisfaction. To illustrate the difference, consider an extremely rare good that two buyers attempt to buy. Consider that the two transactions are committed and included in two blocks at the same index of the blockchain due to a disagreement of the ASMR\xspace consensus. Upon reconciliation of LLB\xspace, one transaction is preserved, whereas the second transaction is refunded as it was actually aborted. Although one buyer and the seller are satisfied, the buyer who could not acquire the precious good is likely to be dissatisfied: although he thought he had acquired the precious good in the first place, it turns out that the good is already gone and it is unlikely that another opportunity to acquire it will appear soon. Despite this disappointment, the buyer is guaranteed not to have lost any assets thanks to Property~\ref{prop:zeroloss}.

\subsection{Gainful Longlasting Blockchain\xspace}
\label{subsec:theoretical}
In this section, we compute the deposit needed for the payment system to be zero loss. We consider a classic proof-of-stake blockchain system, where the voting power of each participant is proportional to its stake expressed in coins~\cite{GHM17}.

\paragraph{Network control restriction.} For a bounded deposit to ensure that the payment system is zero loss, we need to restrict the power of the adversary over the network. In particular, we need to prevent Byzantine processes from communicating infinitely faster than correct processes. To this end, we assume that the expected delay of messages between correct nodes in different partitions is at most $1-\varepsilon$ times the expected delay of messages exchanged within the Byzantine coalition (or between correct nodes of the same partition). More formally, let $X_1$ (resp. $X_2$) be the random variables that indicate the time it takes for a message between two correct processes from different partitions (resp. two Byzantine processes or correct processes within the same partition).
We have $E(X_1) / E(X_2) < 1-\varepsilon$. Next, we can compute the probability $\rho$ of a coalition attack succeeding in forking LLB\xspace at a given consensus instance. An attack attempt has two possible outcomes: either the punishment $\mathcal{P}$, withdrawn from the attackers' accounts and credited to the deposit when the attack fails, or the gain $\mathcal{G}$, obtained by the attackers when they succeed. To ensure zero loss, every gain, which corresponds to stolen assets, must be reimbursed using the deposit as a compensation. Note that $\mathcal{P}$ decreases with the duration of the attack, as attackers can launder their theft by exchanging their stolen assets for non-stolen assets.

\paragraph{Gain as a function of the blockchain depth.} Let $\mathfrak{G}$ be the sum of all assets that belong to a replica in the coalition. The maximum gain $\mathcal{G}$ the attackers can earn out of a fork is $\mathfrak{G} \cdot (a-1)$, where $a$ is the number of branches in which the attackers spent the same coins. Let $T_0,\,...,\, T_{a-1}$ be the $a$ sets of disjoint transactions that have sent $\mathfrak{G}$ coins to different replicas. Let $T = \{T_0,\,...,\, T_{a-1}\}$ be the set containing these transaction sets. Suppose a consensus instance in the LLB\xspace can perform up to $s$ transactions; then, if $|T_0|>s$, the coalition requires the attack to last for $m=\lceil |T_0|/s \rceil$ consensus instances in order to steal the whole gain $\mathfrak{G}$, while the attackers steal just a percentage of the gain if the attack lasts fewer instances than $m$. For this purpose, we define a random variable $Y$ with a geometric distribution, denoting the probability of an attack failing by $\hat{\rho}=1-\rho$. We can now derive the expected number of successes before a failure (i.e., the number of consensus instances an attack is expected to succeed in a row) as $E(Y)=\frac{1-\hat{\rho}}{\hat{\rho}}$. Additionally, some transactions in the set $T_0$ might have more funds than others. We assume the coalition chooses its best strategy of sorting the set by decreasing funds, and define the function $\alpha:[0, m]\cap \mathds{Z} \rightarrow [0,1]$ that, given a number of consensus instances, returns the percentage of the total gain that fits in that number of instances. For example, if all $|T_0|$ transactions carry the same funds $\mathfrak{G}/|T_0|$, then the value is directly $\alpha(x)=\frac{x\cdot s}{|T_0|}$. We define $m$ as the number of consecutive blocks (proposals) required to include all transactions, a value that the LLB\xspace can enforce by limiting the amount of assets to be transferred per block. This limit $m$ is analogous to the notion of \textit{finality} in state-of-the-art blockchains (e.g., the six-block confirmation convention in Bitcoin), and in fact this limit can define a finality period instead.
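Before deriving closed-form expressions, the expected fraction of the gain that a coalition secures can be estimated by direct simulation. The sketch below (illustrative Python of our own, assuming the uniform case $\alpha(x)=x/m$ of the example above) draws the number $Y$ of consecutive successful instances from a geometric law and averages the stolen fraction $\alpha(\min(Y,m))$; the estimate matches the closed form $\rho^{m+1}+h(\rho)$ derived in the next paragraph.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def stolen_fraction_mc(rho, m, trials=200_000):
    """Monte Carlo estimate of E[alpha(min(Y, m))] with alpha(x) = x/m:
    each instance succeeds with probability rho, and the full theft
    requires m consecutive successful instances."""
    # Y = number of successes before the first failure (geometric law)
    y = rng.geometric(1.0 - rho, size=trials) - 1
    return np.mean(np.minimum(y, m) / m)

def stolen_fraction_exact(rho, m):
    """Closed form rho^(m+1) + h(rho) derived in the next paragraph."""
    return rho**(m + 1) + (1 - rho) / m * sum(i * rho**i
                                              for i in range(m + 1))

for rho in (0.3, 0.5, 0.9):
    m = 6
    print(f"rho={rho}: MC {stolen_fraction_mc(rho, m):.4f}"
          f" vs exact {stolen_fraction_exact(rho, m):.4f}")
\end{verbatim}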
\paragraph{Expected gain and punishment.} The expected gain for the attackers in a disagreement attempt is as follows:
\begin{align*} \mathcal{G}(\hat{\rho}) =& (a-1)\cdot \bigg(\mathds{P}(Y> m)\cdot \mathfrak{G} +\sum_{i=0}^{m} \mathds{P}(Y=i)\cdot \alpha(i) \cdot \mathfrak{G}\bigg)\\ =&(a-1)\cdot\mathfrak{G}\cdot\bigg(\rho^{m+1}\;+\;\frac{(1-\rho)}{m} \cdot \sum_{i=0}^{m}i\rho^i\bigg)\\ =&(a-1)\cdot \mathfrak{G}\cdot \bigg( \rho^{m+1} \;+\; h(\rho) \bigg), \end{align*}
where $h(\rho)=\frac{1-\rho}{m}\sum_{i=0}^{m}i\rho^{i}$. We can calculate this series by differentiating the geometric sum $\sum_{i=0}^m\rho^i$ with respect to $\rho$ and multiplying the result by $\rho$:
\begin{align*} \sum_{i=0}^{m}i\rho^i=\rho\cdot\bigg(\frac{d}{d\rho}\sum_{i=0}^m \rho^i\bigg)=\frac{\rho\big(m\rho^{m+1}-(m+1)\rho^{m}+1\big)}{(\rho-1)^2}. \end{align*}
Similarly, we derive the expected punishment:
\begin{align*} \mathcal{P}(\hat{\rho}) =& \sum_{i=0}^{m} \mathds{P}(Y=i)\cdot (1-\alpha(i)) \cdot \mathfrak{G}\\ =&\bigg(\sum_{i=0}^{m} \mathds{P}(Y=i)-\sum_{i=0}^{m} \mathds{P}(Y=i)\cdot \alpha(i)\bigg)\cdot \mathfrak{G}\\ =&\bigg(\sum_{i=0}^m\rho^i(1-\rho) - h(\rho)\bigg)\cdot\mathfrak{G}\\ =&\bigg((1-\rho^{m+1})-h(\rho)\bigg)\cdot \mathfrak{G}. \end{align*}
We define the difference between the punishment and the gain from an attack, $\Delta\mathfrak{b}=\mathcal{P}(\hat{\rho})-\mathcal{G}(\hat{\rho})$, as the expected \textit{deposit flux} per attack attempt. $\Delta\mathfrak{b}\geq 0$ means that the deposit never needs funds other than the punishment $\mathcal{P}$ to cover the recovery of successful disagreements $\mathcal{G}$. If $\mathcal{P}(\hat{\rho})<\mathcal{G}(\hat{\rho})$, the deposit incurs an expected cost of $\mathcal{G}(\hat{\rho})-\mathcal{P}(\hat{\rho})$ per attack attempt, while LLB\xspace is gainful otherwise. The deposit flux is:
\begin{align*} \Delta\mathfrak{b}(\hat{\rho})=&\mathcal{P}(\hat{\rho})-\mathcal{G}(\hat{\rho})\\ =&\bigg(1-a\cdot \big(h(\rho)+\rho^{m+1}\big)\bigg)\mathfrak{G}\\ =&g(a,\rho,m)\mathfrak{G}. \end{align*}
If $\Delta\mathfrak{b}<0$, then $|\Delta\mathfrak{b}|$ is the expected amount to be paid per attack attempt, which gives a metric of the cost of guaranteeing refunds on LLB\xspace. Otherwise, the deposit is expected to gain exactly $\Delta\mathfrak{b}$ units per attack attempt. As such, we only consider values that keep $g(a,\rho,m)\geq 0$, so that the deposit flux is not negative. Notice that, while $a$ and $\rho$ depend on the size and communication speed of the coalition, the value $m$ can be enforced by LLB\xspace itself:
\begin{align*} &g(a,\rho,m)\geq\,0\iff\\ 1-a\cdot&\big(\frac{(1-\rho)}{m}\cdot \frac{\rho\big(m\rho^{m+1}-(m+1)\rho^{m}+1\big)}{(\rho-1)^2} \big)- a\cdot \rho^{m+1}\geq 0. \end{align*}
If we assume $m\geq \frac{a\rho}{1-\rho}$, then $\frac{a\rho}{m(1-\rho)}\leq 1$ and it suffices that:
\begin{align*} & 1-\big(m\rho^{m+1}-(m+1)\rho^{m}+1 \big)- a\cdot \rho^{m+1}\geq 0\\ \iff &(m+1)\rho^{m}- \rho^{m+1}\big(a+m\big)\geq 0\\ \iff &(m+1)-\rho\big(a +m\big)\geq 0\\ \iff & 1+\frac{1}{m}\geq \rho +\frac{a\rho}{m}, \end{align*}
which holds because the assumption gives $\frac{a\rho}{m}\leq 1-\rho$, so that $\rho+\frac{a\rho}{m}\leq 1\leq 1+\frac{1}{m}$. Hence, for $0\leq\rho< 1$ and $a\geq 2$, the condition $m\geq\frac{a\rho}{1-\rho}$ guarantees $\Delta\mathfrak{b}\geq 0$.
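Since such closed forms are easy to mis-transcribe, the following sketch (again our own illustrative Python, under the same uniform-$\alpha$ assumption) evaluates $g(a,\rho,m)$ and checks numerically that $m \geq \frac{a\rho}{1-\rho}$ indeed yields a non-negative deposit flux.
\begin{verbatim}
import numpy as np

def h(rho, m):
    # h(rho) = (1-rho)/m * sum_{i=0..m} i rho^i, via the closed form
    s = rho * (m * rho**(m + 1) - (m + 1) * rho**m + 1) / (rho - 1)**2
    return (1 - rho) / m * s

def g(a, rho, m):
    """Deposit flux per unit of attacker holdings: Delta b = g * G."""
    return 1 - a * (h(rho, m) + rho**(m + 1))

# check that m >= a*rho/(1-rho) yields a non-negative deposit flux
for rho in np.linspace(0.05, 0.95, 19):
    for a in (2, 3, 6):
        m = int(np.ceil(a * rho / (1 - rho))) or 1
        assert g(a, rho, m) >= -1e-12, (a, rho, m)
print("g(a, rho, ceil(a rho/(1-rho))) >= 0 on the whole grid")
\end{verbatim}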
Furthermore, assuming a direct proportion between a replica's voting power and its percentage of assets, LLB\xspace is gainful if the total amount to be transferred in a consensus instance is limited to $\frac{\mathds{C}}{3m}$, where $\mathds{C}$ is the total circulating supply and $m\geq \frac{a\rho}{1-\rho}$, since attackers will always lose at least as much as they manage to steal. As an example, a probability of success $\rho=0.5$ for a coalition of $f=n/2$ (which can create up to $a=3$ branches, see \cref{sec:proofs}) would imply $m\geq 3$; i.e., as long as the LLB\xspace forces an attack to last for at least $3$ consensus instances, the LLB\xspace will be gainful. In general, if $c=\frac{\rho}{1-\rho}$ then $m\geq a\cdot c$, and to prevent any multiple-disagreement up to some number of branches $a$, LLB\xspace must limit the maximum worth of assets transferred per consensus instance to:
\begin{align*} \min_{i\in\mathds{N},\,2\leq i\leq a}\bigg(\frac{\mathds{C}\cdot f_i}{n\cdot m_i}\bigg)=&\,\mathds{C}\cdot \min_{i\in\mathds{N},\,2\leq i\leq a}\bigg(\frac{2i-3}{(3i-3)\cdot i\cdot c}\bigg)\\=&\,\mathds{C}\cdot \frac{2a-3}{(3a^2-3a)\cdot c}, \end{align*}
where $f_i=n\cdot\frac{2i-3}{3i-3}$ is the minimum coalition size able to create $i$ branches (see \cref{sec:proofs}) and $m_i=i\cdot c$ is the corresponding minimum attack depth. Notice that, by this result, LLB\xspace needs to impose the same limit per consensus instance to prevent both a triple-disagreement and a double-disagreement, namely $\frac{\mathds{C}}{6c}$, if their probability of success is the same. Additionally, to prevent anything from a sextuple-disagreement down to a double-disagreement, the maximum worth of assets per consensus instance must be $\frac{\mathds{C}}{10c}$. Figure~\ref{fig:tap}(left) shows the maximum number of branches $a$ that a coalition of size $f$ can create in a disagreement, for different values of $f$ and $n$. It is especially interesting to observe that a coalition of fewer than $n/2$ replicas cannot perform anything other than a double-disagreement. We can see that coalition sizes close to a threshold converge asymptotically towards it, but can never generate one more branch without at least one more replica. Furthermore, our analysis shows that a coalition of fewer than $11n/18$ replicas can at most perform a sextuple-disagreement, while being significantly close, at a distance of only $n/18$, to the threshold value of $2n/3$.
\begin{figure} \begin{subfigure}{0.495\columnwidth} \includegraphics[width=\textwidth,trim=12 14 13 12,clip]{../plots/eps/plot_1_py.pdf} \end{subfigure} \begin{subfigure}{0.485\columnwidth} \includegraphics[width=\textwidth,trim=15 17 25 12,clip]{../plots/eps/plot_5_py.pdf} \end{subfigure} \caption{Left: number of branches (y axis) per number of replicas (x axis) for different coalition sizes. Right: minimum depth of an attack for a zero-cost recovery, assuming a direct proportion between voting power and percentage of assets, per coalition size, for different probabilities $\rho$ of an attack succeeding on a given proposal.} \label{fig:tap} \end{figure}
Figure~\ref{fig:tap}(right) shows the required number of consecutive consensus instances $m$ for LLB\xspace to obtain at least enough assets from punishments to fund the recovery from attacks.
For this purpose, we assume $\alpha(x)$ to follow a uniform distribution of funds among transactions, i.e., all transactions carry the same amount, which again LLB\xspace can enforce by limiting the maximum amount of funds allowed to be transferred per transaction, per block, or per consensus instance. The right figure shows the value $m$ per coalition size $f/n$. Notice how $m$ grows linearly with $a$, but exponentially with $f/n$. For $f/n \leq 0.6$ (i.e., allowing up to a sextuple-disagreement), we require $m\geq 54$ for a probability of success as high as $\rho=0.9$.
\subsection{Evaluation of the Payment System}\label{ssec:eval-payment}
Taking the experimental results of~\cref{sec:expe} and based upon our theoretical analysis, we show in Fig.~\ref{fig:fig6} the expected deposit flux for a variety of uniform delays, with $l$ being the percentage of the total circulating supply allowed to be transferred per proposal, for $f=5n/9$. Again, we can see that the expected deposit flux increases with the number of replicas, confirming the scalability, in this sense, of the implementation. We can see that for lower uniform delays the system can allow up to the entire circulating supply to be transferred in one single proposal, since attackers are discovered before they can even cause one disagreement, for an increasing number of replicas, while for very large uniform delays LLB\xspace needs to lower the value $l$ to guarantee the gainful property. Although omitted in the figure, our experiments show that for a uniform delay of 5 or 10 seconds, restricting $l=1\%$ still yields a non-negative deposit flux. Nevertheless, if the network performs as expected, LLB\xspace is expected to be gainful for a large value of $f$, and to actually benefit from attackers (i.e., obtaining more from punishing them than they can steal by attacking).
\begin{figure}[h] \includegraphics[width=\textwidth/2]{./plots_python/plot_recovery_cost.eps} \caption{Expected deposit flux for different uniform delays and different limits $l$ on the maximum amount of coins allowed to be transferred per proposal.} \label{fig:fig6} \end{figure}
\section{Introduction}
Blockchain systems promise to track ownership of assets without a central authority and thus rely heavily on solving the consensus problem~\cite{Nak08}. As consensus is known to be impossible in the general model~\cite{FLP85}, blockchains seem doomed to fail. Classic blockchains~\cite{Nak08,Woo15,LNZ16,EGSvR16} assume \emph{synchrony}, i.e., that all messages are delivered within a maximum amount of time, and guarantee, despite transient disagreements, that consensus will be reached with some probability on the next block to be appended. Hence, the probability that such a blockchain fails grows exponentially fast with the number of blocks, and this probability tends to 1 as the chain length tends to infinity~\cite{ZMR18}.
The resulting disagreements can lead to double spending, as illustrated by the recent losses of $\mathdollar 70,000$ in Bitcoin Gold\footnote{\href{https://news.bitcoin.com/bitcoin-gold-51-attacked-network-loses-70000-in-double-spends/}{https://news.bitcoin.com/bitcoin-gold-51-attacked-network-loses-70000-in-double-spends/}} and $\mathdollar 5.6$ million in Ethereum Classic\footnote{\href{https://news.bitcoin.com/5-6-million-stolen-as-etc-team-finally-acknowledge-the-51-attack-on-network/}{https://news.bitcoin.com/5-6-million-stolen-as-etc-team-finally-acknowledge-the-51-attack-on-network/}}. More recent blockchains~\cite{Buc16,CNG18,GAG19,Fac19} assume partial synchrony~\cite{DLS88} and offer deterministic Byzantine fault-tolerant consensus solutions, but can only tolerate less than a third of malicious or Byzantine permissioned replicas~\cite{PSL80}. By contrast with synchronous blockchains, attacking the network is not sufficient to double-spend; instead, the attacker must form a coalition of a third of the replicas to steal assets. But given the value of the assets that can be stolen, it is reasonable to assume that a third of the replicas can get bribed after some time. Mitigation strategies require a regular arrival of new replicas to take over the role of reaching consensus, so that the coalition never exceeds a third of the current replicas. While this works in theory, these strategies fail as soon as not enough honest newcomers can join. As a result, one may wonder whether any blockchain is inherently doomed to fail in a sufficiently long execution.

In this paper, we answer this question in the negative by proposing a blockchain with deterministic guarantees that tolerates a majority of failures. More precisely, our system assumes partial synchrony~\cite{DLS88}, i.e., that message delivery time has an unknown upper bound, to tolerate $\lceil 2n/3\rceil -1$ deceitful replicas and $\lceil n/3\rceil -1$ benign replicas. LLB\xspace comprises an accountable state machine replication that ignores messages that are not properly signed and justified, and ensures the following: if deceitful faults lead honest replicas to disagree, then these honest replicas produce an undeniable proof-of-fraud to rightfully exclude at least $\lfloor n/3 \rfloor+1$ deceitful replicas, and include new replicas, while resolving the disputed disagreement. To this end, we build upon recent theoretical accountability results~\cite{CGG19,CGG20} to extend PeerReview~\cite{HKD07}, which merely suspects failures forever in a partially synchronous model, into LLB\xspace, which excludes undeniably identified deceitful replicas. After a finite number of these exclusions, LLB\xspace guarantees that the remaining $n'$ replicas contain less than $n'/3$ Byzantine faults, which leads to consensus.

Note that the deceitful failure model differs radically from the failures assumed in traditional distributed systems and is interesting in its own right. As opposed to datacenters~\cite{CKL09}, cloud services~\cite{LLM19} or distributed databases~\cite{KWQ12}, where most faults are omissions and rare commissions are due to unlucky events (e.g., disk errors~\cite{CKL09}), in blockchain payment systems deceitful faults are more frequent than benign faults. Indeed, blockchains are more subject to well-orchestrated attacks to steal assets~\cite{LSP82,EE18}; but, when not deceitful, blockchain replicas are more likely up and running than crashed: they are carefully monitored to generate financial rewards for their owners~\cite{Nak08,Woo15}.
We thus distinguish two new classes of faults: (i)~\emph{deceitful faults} that could directly affect safety (e.g., sending correctly formatted messages with malicious content to try to influence the execution of honest replicas) and (ii)~\emph{benign faults} that do not risk violating safety (e.g., omission faults, unintelligible messages or out-of-date well-formatted messages).

To demonstrate the applicability of LLB\xspace, we build a zero loss payment application with Bitcoin transactions and illustrate its performance under two distinct coalition attacks. The key to zero loss is that we consider fungible assets and request replicas to deposit sufficient assets before participating in the consensus. If two conflicting transactions are found after the exclusion of the deceitful replicas responsible for a transient disagreement, then the zero loss payment system executes the first of these conflicting transactions and withdraws the deposited amount of the deceitful replicas to reimburse the second transaction. As opposed to classic blockchain payment systems~\cite{Nak08,Woo15,LNZ16,EGSvR16}, where recovering from forks a posteriori is the norm, our LLB\xspace guarantees deterministic agreement---no forks---in good executions and recovers only in unlucky cases. As opposed to more recent Byzantine fault tolerant blockchains~\cite{Buc16,GHM17,CNG18,GAG19,Fac19}, LLB\xspace recovers eventually from a state with $q<\min(n/3, n-f)$ benign replicas and $f-q<2n/3$ deceitful replicas. Our results show that the impact of the attacks decreases rapidly with the system size.

\sloppy{LLB\xspace presents some limitations. First, to hold nodes accountable for their actions, we need an expensive cryptographic primitive that adds to the CPU demand of blockchain systems and cannot be minimized with threshold encryption or message authentication codes. Despite this, we show that by deciding multiple proposals, LLB\xspace scales to 90 simple 4-vCPU machines (\cref{sec:expe}) and outperforms the HotStuff raw state machine replication~\cite{MNR19} at the heart of the Libra blockchain~\cite{Fac19} by $5.6$ times. Second, LLB\xspace cannot guarantee termination when $2n/3\leq f \leq n$; hence, even though no assets are lost, the system risks becoming unavailable. Third, the zero loss payment system does not imply full satisfaction of honest users: consider a user who thinks they succeeded in buying a rare item sold on a first-come first-served basis, say Guernica by Picasso; they might be disappointed when they realize (even though they were not charged) that, due to a transient fork, someone else acquired it. We present the background (\cref{sec:bck}), our problem (\cref{sec:pb}), our LLB\xspace solution (\cref{sec:solution}), our evaluation (\cref{sec:expe}) and the payment system application (\cref{ssec:payment}) before presenting the related work (\cref{sec:rw}) and concluding (\cref{sec:conclusion}).}
\section{Model}
\label{sec:mod}
Like in~\cite{CL02,KADC07}, we consider the classic distributed system model of $n$ replicas, $f$ of which can be Byzantine. These replicas communicate through a partially synchronous network: there exists an upper bound on the time it takes to deliver a message, but this bound is unknown~\cite{DLS88}. Let $p_1, ..., p_n$ be the unique identifiers of these replicas.
Note that in this model, it is well known that consensus cannot be solved if the number $f$ of Byzantine replicas satisfies $f\geq n/3$~\cite{LSP82}; due to this limitation, no system can recover from such an adversarial situation. As we would like to offer properties when the number of failures is too large for consensus to be solved, we denote this threshold as $t_0n-1=\ceil{\frac{n}{3}}-1$ (i.e., $t_0=1/3$) in the remainder of the paper.
\paragraph{Adversary.}
We consider an adversary $A$ that can read and delay messages but cannot drop messages. $A$ can also create new messages out of thin air. We assume the presence of a public key infrastructure, and $A$ cannot forge signatures. $A$ controls a coalition of $f<2n/3$ replicas. We refer to the replicas under the control of the adversary as \emph{Byzantine} replicas, as they can fail in an arbitrary way~\cite{LSP82}. In particular, Byzantine replicas do not follow the protocol specification, they can send arbitrary messages, and they can communicate infinitely faster than \emph{honest} replicas. To be more precise, we assume that Byzantine replicas can send messages to other honest or Byzantine replicas that are received with zero delay. Note that, to analyze performance and the gainful property, we will later bound (\cref{sec:other-assumption}) the speed at which messages of the coalition are sent and delivered to honest replicas.
\paragraph{Fungible assets.}
We assume that participants can transfer assets that are \emph{fungible} in that one unit is interchangeable and indistinguishable from another of the same value. (An example of this are cryptocurrencies.) If assets are not fungible, we assume that there exists a function $\mathit{fungibility}:\mathds{D}^a \rightarrow \mathds{C} \times \mathds{D}^{a-1}$ that, given $a$ conflicting decisions $d^{i_{P_0}}_j,\ldots,d^{i_{P_{a-1}}}_j$, outputs only one of them, $d^{i_{P_r}}_j$, and a list of fungible assets that the rest of the partitions are willing to take as a refund for their decision being forgotten from that instance on. We refer to the function $\mathit{fungibility}$ as the \textit{fungibility function}. An example is a participant willing to take the value of a product if the product was accidentally sent to a different participant.
\subsection{Proof of Correctness}\label{sec:proofs}
In this section, we show the properties that can be satisfied depending on the size of the adversary, summarised in Table~\ref{table:thresholds}. We defer the proofs to the appendix.
\paragraph{Upper-bounding the number of fork branches.}
In the lemma below, we compute the number $a$ of branches that Byzantine replicas can create when causing a fork, depending on the assumption on the size of the adversary. We show that, for the case $t_0n=\lceil\frac{n}{3}\rceil$ (i.e., the consensus tolerates up to $\lceil\frac{n}{3}\rceil-1$ faults), if $f<n/3$ then no fork is possible (i.e., only one branch exists and $a=1$), while 2 and 3 branches are possible for $f<n/2$ and $f<5n/9$, respectively. The maximum is reached at $\lfloor n/3\rfloor+1$ branches for $f<2n/3$, in which case there is only one correct replica in each branch, assuming that all Byzantine replicas are deceitful. Previous work already showed this result~\cite{singh2009zeno,Li}; however, our proof will be useful for other theorems.
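To make these bounds concrete, the following minimal Python sketch evaluates the branch bound of Lemma~\ref{lem:branches} below together with the slashing threshold of Lemma~\ref{lem:01}; it is purely illustrative (all names are ours) and is not part of LLB\xspace's implementation.
\begin{verbatim}
from math import floor

def max_branches(n, deceitful, t0=1/3):
    """Branch bound: a <= (n-(f-q)) / ((1-t0)*n - (f-q)), where
    `deceitful` stands for f-q; a = 1 means no fork is possible."""
    quorum = (1 - t0) * n            # votes needed to decide in a branch
    if deceitful >= quorum:          # deceitful replicas decide alone:
        return n - deceitful         # at most one correct replica per branch
    return max(1, floor((n - deceitful) / (quorum - deceitful)))

def slashing_threshold(n, t0=1/3):
    """Largest number f_d of proofs-of-fraud a correct replica may
    wait for before slashing: f_d <= (1 - 2*t0) * n."""
    return floor((1 - 2 * t0) * n)

for fq in [2, 3, 4, 5]:              # n = 9 replicas, t0 = 1/3
    print(fq, max_branches(9, fq))   # prints 1, 2, 2 and 4 branches
\end{verbatim}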
\begin{lemma}
\statement The maximum number of branches that the Byzantine replicas can create in a fork is $a\leq\frac{n-(f-q)}{(1-t_0)n-(f-q)}$, where $a=1$ is the normal case with no disagreement; creating $a>1$ branches requires $(f-q)\geq \frac{(a(1-t_0)-1)n}{a-1}$.
\label{lem:branches}
\end{lemma}
\begin{proof}
The maximum is obtained with one correct replica in each branch, which happens for $f-q=(1-t_0)n-1$; the number of branches is then $a=t_0n+1$. Let $|C| = n-(f-q)$ be the number of replicas that are not deceitful and let $a$ be the number of branches in a fork, so that $|C|=ax+r$ for some integers $x$ and $r$. W.l.o.g.\ we assume $|C|$ to be divisible by $a$, i.e., $r=0$, so that each branch contains $|C|/a$ non-deceitful replicas. For the attackers to be able to create $a$ branches, each branch must gather a quorum while the deceitful replicas alone must not:
\begin{align*}
(f-q)+\frac{|C|}{a}\,\geq\, &(1-t_0)n,\\
f-q \leq\,&\,(1-t_0)n-1.
\end{align*}
Solving this system gives the following expression for $f-q$:
\begin{align}
f-q\geq& \frac{(a(1-t_0)-1)n}{a-1},
\end{align}
and for $a$:
\begin{align}
a\leq&\frac{n-(f-q)}{(1-t_0)n-(f-q)}.
\end{align}
\end{proof}
\paragraph{Upper-bounding the number of detected replicas.}
Lemma~\ref{lem:01} gives an upper bound on the number $f_d$ of proofs-of-fraud (of different replicas) that a correct replica must collect in order to start a slashing phase. In particular, for $t_0n=\lceil \frac{n}{3}\rceil-1$, a correct replica can wait for up to $f_d=\lfloor \frac{n}{3}\rfloor+1$ proofs-of-fraud; otherwise, some disagreements do not cause a slashing phase. Recall that this is possible as Polygraph~\cite{CGG19} guarantees that at least $\lfloor \frac{n}{3}\rfloor+1$ Byzantine replicas are eventually detected after a disagreement occurs.
\begin{lemma}
\statement If the ASMR\xspace guarantees recovery and slashing, then the number $f_d$ of proofs-of-fraud that a correct replica must collect to start the slashing consensus must satisfy $f_d\leq (1-2t_0)n$.
\label{lem:01}
\end{lemma}
\begin{proof}
It is clear that $f_d$ must not be greater than the minimum number of deceitful replicas required to cause a disagreement, which we know from Lemma~\ref{lem:branches} is, for $a=2$, $f-q\geq 2(1-t_0)n-n= (1-2t_0)n$. Therefore, to guarantee that any disagreement leads correct replicas to slash $f_d$ provably deceitful replicas, the slashing must start when correct replicas collect $f_d\leq (1-2t_0)n$ proofs-of-fraud; otherwise, there may be disagreements that do not lead to a slashing.
\end{proof}
Notice however that if a disagreement takes place between conflicting views on the set of correct replicas (due to some correct replicas being outdated), then updated correct replicas can launch a new slashing, regardless of their value of $f_d$; we explore such cases in Lemma~\ref{lem:011}.
\paragraph{Upper-bounding the number of branches after the recovery.}
The slashing phase guarantees that at least $f_d$ deceitful replicas are punished and removed, and that the number of branches strictly decreases. However, depending on the adversarial size, a recovery will be able to reduce the number of branches to just one, or only to multiple ones. For example, for $t_0n=\lceil \frac{n}{3}\rceil-1$, if $f<n/2$ then the recovery merges all branches into one.
However, if $f<5n/9$ then it is possible that some correct replicas still participate in an outdated branch and are not aware that the slashing of $n/3$ nodes resulted in a new threshold $f'<n'/3$, while some other replicas participate in the updated branch. Additionally, for $f<2n/3$, there may be up to two updated branches whose participants learn that the slashing led to $f'<n'/2$, and as many outdated branches as there are outdated replicas that do not know about the slashing. Recall however that if more than one branch progresses after a slashing execution, a new execution of the slashing will eventually start, punishing and removing more deceitful replicas, and leading correct replicas to eventually converge to one branch. For example, if $5/9\leq f/n< 2/3$ initially (cf. the case with two updated branches in Fig.~\ref{fig:forks}), then after the first slashing execution we obtain $1/3 \leq f'/n' < 1/2$, and after the second slashing execution we obtain $0 \leq f''/n''<1/4$, in which case consensus is solvable (cf. the case with a single branch in Fig.~\ref{fig:forks}). Notice also that one recovery is enough to guarantee eventually converging to just one updated branch for $f<5n/9$, while awareness is only possible if $f<n/2$, or for higher $f$ when upper-bounding the number of benign failures.
\begin{lemma}
Consider an execution in which ASMR\xspace is about to execute a consensus instance $\Gamma^{t_0}_i$, which tolerates $t_0n$ failures, $t_0\in(0,1)$, and a recovery that removes $f_d$ deceitful replicas from the $n$ replicas. Suppose that this execution of the recovery leads to a set $UP$ of correct replicas that updated their set of replicas (i.e., removed the $f_d$ deceitful ones), while a set $OP$ of outdated correct replicas did not yet find out about the recovery (i.e., replicas that have not yet removed the $f_d$ deceitful ones), meaning $n=f+|OP|+|UP|$. Suppose that $q$ of the $f$ faults are not deceitful. Then, the maximum number $a$ of branches that faulty replicas can create is $a\leq \frac{|OP|}{(1-t_0)n-(f-q)}+a_{up}$, for $f-q\geq \frac{(a-a_{up})(1-t_0)n-|OP|}{a-a_{up}}$, where $a_{up}$ is the number of branches that the remaining $f-f_d$ replicas can create after the recovery.
\label{lem:011}
\end{lemma}
\begin{proof}
The number $a_{up}$ of branches after the recovery follows directly from Lemma~\ref{lem:branches}, so we focus on the outdated replicas. Since they are outdated, the $f_d$ excluded deceitful replicas can still participate with the $|OP|$ correct replicas, and the number of replicas that participate in the consensus of their branch is still $n$. Therefore, analogously to Lemma~\ref{lem:branches}, we explore for which values $(f-q)+|OP|/a_{op}\geq (1-t_0)n$. Solving this gives:
\begin{align*}
a_{op}\leq \frac{|OP|}{(1-t_0)n-(f-q)},
\end{align*}
and also:
\begin{align*}
f-q\geq \frac{a_{op}(1-t_0)n-|OP|}{a_{op}}.
\end{align*}
Finally, setting $a_{op}=a-a_{up}$ to account for the branches of updated replicas gives the result.
\end{proof}
Note that from Lemma~\ref{lem:01} we have $f_d\leq (1-2t_0)n$. It is easy to see from Lemma~\ref{lem:011} that $a$ increases with $|OP|$, meaning that $a$ is minimized by maximizing the number of correct replicas that are in $UP$. Additionally, since $f-f_d$ is the number of failures that the slashing must tolerate, $n-f-(f-f_d)=n+f_d-2f\leq|UP|$ is the minimum number of correct replicas that must participate in the slashing consensus for it to terminate, and thus $|OP|\leq f-f_d$.
What's more, $f-f_d$ decreases when $f_d$ increases, and thus the number of correct replicas that must participate in the recovery for it to terminate increases, also minimizing $a$. Therefore, when not specified, we assume the system waits for at least $f_d=(1-2t_0)n$ proofs-of-fraud before launching a recovery, which gives the best fault tolerance.
\paragraph{Detection.}
Theorem~\ref{th:det} shows in which cases the ASMR\xspace can guarantee detection. Informally, a correct replica can detect that no more branches are possible if the relative size of the deceitful faults is smaller than a third of the remaining replicas, and enough correct replicas confirm that they agree on the remaining replicas. Notice that a disagreement on the output of a recovery still requires $f_d$ removed replicas on each branch and is eventually also merged, causing even more provably deceitful replicas than $f_d$ to be removed from the set of $n$ replicas. We thus omit this more favorable scenario and focus on disagreements on the consensus. For example, for $t_0n=\lceil \frac{n}{3}\rceil-1$, if $f<n/2$ and $q<n/3$ then detection is immediately guaranteed after one recovery terminates.
\begin{theorem}
Let ASMR\xspace [...]. Let $a_{OP}$ be the possible number of branches before the last recovery ($a_{OP}=0$ if no recovery took place yet), and let $a_{UP}$ be the possible number of branches after the last recovery that a correct replica terminated. Then the ASMR\xspace guarantees detection if and only if it guarantees that there will be a maximum number of recoveries after which the number of outdated and updated branches will eventually be $a_{OP}<1$ and $a_{UP}<2$.
\label{th:det}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{th:det}]
First we prove the ``if'' direction. Since the total number of branches $a=a_{OP}+a_{UP}$ is the sum of the outdated and the updated ones, if $a_{OP}<1$ then not even one branch is possible in the outdated partition, while $a_{UP}<2$ allows up to one branch in the updated one. Now we prove the ``only if'' direction by contradiction. Suppose we have detection while either $a_{OP}\geq 1$ or $a_{UP}\geq 2$ after all correct replicas have executed all possible recoveries. Detection requires the total number of branches $a=a_{OP}+a_{UP}$ to satisfy $a<2$, which is only possible if $a_{OP}<1$ and $a_{UP}<2$, a contradiction.
\end{proof}
\paragraph{Recovery.}
In general, the recovery is correct as long as every branch requires at least one correct replica to progress, but it is only for some values of $f$ that correct replicas can be aware that the recovery has finished (detection). We show this in the following two theorems. For instance, for $t_0n=\lceil \frac{n}{3}\rceil-1$, if $f-q<2n/3$, then after a recovery the adversary can still hold more than a third of the relative power, meaning they can still cause disagreements, including on the recovery itself. Thanks to the recovery also being accountable, the more disagreements they cause, the lower their relative size becomes (a disagreement always leads to a non-zero number of fraudsters being removed, as long as $f-q<2n/3$), until it reaches strictly lower than a third of the remaining replicas, at which point all correct replicas can be sure that disagreements are no longer possible.
Notwithstanding, if Byzantine replicas commit benign faults once the recovery starts, correct replicas cannot confirm their decisions to be the only decisions, even if they actually are, meaning they cannot guarantee detection even if there is in fact just one branch. Furthermore, we choose $f-q<(1-t_0)n-1$ by generalizing the proof of impossibility of accountability described by Civit et al.~\cite{CGG19}.
\begin{theorem}
\statement Assume that $q$ is the number of benign faults. Then, the recovery and slashing will eventually terminate if and only if $q\leq f$ and $f-q<(1-t_0)n$.
\label{theorem:recovery}
\end{theorem}
\begin{proof}
First and foremost, we discard the case $f-q\geq (1-t_0)n$, which is trivial since the deceitful faults can then cause disagreements without even communicating with correct replicas. It is also easy to see that, after a recovery, the relative size of the deceitful faults always decreases, since only deceitful faults are removed. Then, we prove that if $q'\leq f$, where $q'$ denotes the actual number of benign faults, then the recovery terminates. Suppose that $q'=f'=f$, which is the best case for the adversary against termination. If $f>n-2$ then the threshold of $n-f$ replies is immediately met once one correct replica starts the recovery. For $f<n-2$, consider a recovery-broadcast that tolerates $f$ failures. If Byzantine replicas do not reply correctly, it is easy to see that the $n-f$ correct replicas will eventually reply, guaranteeing termination. Byzantine replicas can instead cause a disagreement on the recovery-broadcast, but correct replicas will eventually detect this and launch another recovery-broadcast to remove an additional number of Byzantine replicas (including the union of both recovery decisions and, if any, additional proofs-of-fraud derived from this disagreement). This can happen for a number of recursive recoveries, but every recovery-broadcast is guaranteed to remove a non-zero number of Byzantine replicas, meaning that, after a finite number of recovery-broadcasts, one recovery-broadcast will guarantee both safety and termination. Now we consider $q'>f$. In this case, $n-q'$ is the number of replicas that can actually be expected to reply, but $n-q'<n-f$, which is the number of replies that the recovery waits for. Therefore, the recovery does not guarantee termination in this case.
\end{proof}
\paragraph{Termination.}
Notice however that, even if the recovery is guaranteed to terminate and at most one branch is guaranteed to progress, this does not mean that one branch will necessarily progress. For example, for $t_0n=\lceil \frac{n}{3}\rceil-1$, if $f=q>n/3$, it is immediate that no disagreements are possible, but consensus will not terminate either, given the relative size of the benign faults.
\begin{theorem}
\statement Then, the ASMR\xspace guarantees termination if and only if $f<(1-t_0)n$ and $q<\frac{t_0(n-f)}{1-t_0}$.
\label{th:ter}
\end{theorem}
\begin{proof}
It is easy to see that the recovery terminates for any $q$, since the number of replies it waits for is $n-f$ and $q\leq f$. We now consider the ASMR\xspace consensus.
Since all deceitful faults $f-q$ might be removed during a recovery, it is necessary to guarantee that, among the $n'=n-(f-q)$ replicas remaining after a recovery, the relative size of the benign faults is not large enough to threaten termination. Therefore, as the number of tolerated benign failures is $t_0n'$, we have $q<t_0n'\iff q<t_0(n-f+q)\iff q<\frac{t_0(n-f)}{1-t_0}$. At the same time, if $f\geq (1-t_0)n$ then the Byzantine replicas can terminate without communicating with any correct replica, which does not satisfy the definition of termination.
\end{proof}
Note that we could trivially guarantee termination if the value of $q$ were known, but we want to preserve the standard consensus guarantees for any $f<n/3$.
\paragraph{Confirmation of consensus.}
Decisions that take place without the detection property are suspected by correct replicas to potentially require a merge, even if they will never be undecided. Confirmation allows correct replicas to establish that a particular decision will not require a merge with another, still unknown, decision. In the following theorem, we show the number of replies that correct replicas must collect in order to confirm their decision and discard a disagreement on that decision. For example, for $t_0n=\lceil\frac{n}{3}\rceil -1$, if $f<2n/3$ then a correct replica must wait to receive $n$ replies from different replicas; otherwise, it cannot discard the possibility of a disagreement on that decision.
\begin{theorem}
\statement Then a correct replica confirms its decision if and only if it collects proofs that $c=t_0n+(f-q)+1$ replicas agree on that decision.
\label{th:partialawareness}
\end{theorem}
\begin{corollary}
\statement Then, if a correct replica delivers at least $c=t_0n+(f-q)+1$ decisions from other replicas, it will either confirm its decision or start a slashing.
\label{cor:partialawareness}
\end{corollary}
\begin{proof}
Theorem~\ref{th:partialawareness} shows that if all delivered decisions agree, then the decision is confirmed. From Lemma~\ref{lem:01}, we know that if there is a disagreement then the slashing will start.
\end{proof}
Table~\ref{table:thresholds} shows the guarantees that ASMR\xspace can offer, depending on the sizes of $f$ and $q$. Notice that awareness is not a property we show in Table~\ref{table:thresholds}. In general, awareness can only be guaranteed if $f<n/3$; however, confirmation can succeed for particular past consensus instances, either by waiting to deliver enough agreeing decisions to confirm them, as shown in Theorem~\ref{th:partialawareness}, or because a disagreeing decision led to a recovery that succeeded, confirming the merge. Notice that recovery, i.e., the merging of disagreeing decisions, and slashing, i.e., punishing provably deceitful replicas, only take place if correct replicas can guarantee that, if there is a disagreement, they will eventually know it. We show in Theorem~\ref{theorem:recovery} why ASMR\xspace guarantees recovery if and only if $f-q<2n/3$.
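Before detailing Table~\ref{table:thresholds}, the following minimal Python sketch (ours, purely illustrative and not part of the protocol) collects the bounds proved above; it can be used to recompute some of the entries of the table for concrete values of $n$, $f$ and $q$.
\begin{verbatim}
def guarantees(n, f, q, t0=1/3):
    """Summary of the bounds proved above: recovery & slashing
    (Theorem on recovery), termination (Theorem on termination)
    and the confirmation quorum c (Theorem on confirmation)."""
    deceitful = f - q  # number of deceitful faults
    return {
        "recovery&slashing": q <= f and deceitful < (1 - t0) * n,
        "termination": f < (1 - t0) * n and q < t0 * (n - f) / (1 - t0),
        "confirmation_quorum": int(t0 * n + deceitful + 1),
    }

print(guarantees(n=9, f=4, q=2))
# {'recovery&slashing': True, 'termination': True, 'confirmation_quorum': 6}
\end{verbatim}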
We show two columns for detection, depending on whether correct replicas have an accurate estimate of the number of benign faults. Notice that we prove both cases in Theorem~\ref{th:det}; however, the values of $a_{OP}$ and $a_{UP}$ vary depending on the case, as we show in Lemmas~\ref{lem:011} and~\ref{lem:branches}. In particular, if correct replicas do not have an estimate of the number of benign failures, or do not rely on it, then the results from Lemma~\ref{lem:011} and Lemma~\ref{lem:branches} consider $q=0$, which maximizes the values of $a_{OP}$ and $a_{UP}$. Finally, we formally prove all the bounds for termination from Table~\ref{table:thresholds} in Theorem~\ref{th:ter}.
\begin{table}[ht!]
\footnotesize{
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{cc|cccc}
\toprule
\multicolumn{2}{c|}{fault tolerance} &recovery & detection & termination & detection \\
byzantine ($f$) & benign ($q$) & \& slashing & knowing $q$ & & \\
\midrule
$0<f<\frac{n}{3}$ & $0<q\leq f$ & \ding{51} & \ding{51} & \ding{51} &\ding{51} \\
\hline
$\frac{n}{3}\leq f<\frac{n}{2}$ & $0<q< \frac{n}{3}$ & \ding{51} & \ding{51} & \ding{51}&\ding{51} \\
 & $\frac{n}{3} \leq q \leq f$ & \ding{51} & \ding{51} & \ding{55} &\ding{55} \\
\hline
 & $0<q<\frac{2n}{9}$ & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\
$\frac{n}{2}\leq f<\frac{5n}{9}$ & $\frac{2n}{9}\leq q<\frac{n}{3}$& \ding{51} & \ding{51} & \ding{51} &\ding{55} \\
 & $\frac{n}{3} \leq q \leq f$ & \ding{51} & \ding{51} & \ding{55} &\ding{55} \\
\hline
 & $0<q<\frac{n}{6}$ & \ding{51} & \ding{51} & \ding{51} &\ding{55} \\
$\frac{5n}{9}\leq f <\frac{2n}{3}$ & $\frac{n}{6}\leq q<\frac{n}{3}$ & \ding{51} & \ding{51} & \ding{55} &\ding{55} \\
 & $\frac{n}{3}\leq q\leq f$ & \ding{51} & \ding{51} & \ding{55} &\ding{55} \\
\hline
$\frac{2n}{3}\leq f \leq n$ & $f-q< \frac{2n}{3}$ & \ding{51} & \ding{51} & \ding{55} &\ding{55} \\
 & $f-q\geq\frac{2n}{3}$ & \ding{55} & \ding{55} & \ding{55} &\ding{55} \\
\bottomrule
\end{tabular}
}
\caption{The guarantees offered by ASMR\xspace depend on the number $f$ of Byzantine failures and the number $q$ of benign failures, where $f-q$ is thus the number of deceitful failures. Note that the system is safe and live when $f<2n/3$ as long as $q<n/6$, which guarantees progress towards recovery.}\label{table:thresholds}
\end{table}
\section{Additional Material}\label{sec:app1}
\include{inc/06proofs.tex}
\section{Background and Preliminaries}
\label{sec:bck}
A blockchain system~\cite{Nak08} is a distributed system maintaining a sequence of blocks that contains \emph{valid} (cryptographically signed) transactions indicating how assets are exchanged between \emph{accounts}.
\vspace{-0.6em}
\paragraph{Accountability.}
The replicas of a blockchain system are, by default, not accountable in that their faults often go undetected. For example, when a replica creates a fork, it manages to double spend after one of the branches where it spent coins vanishes. This prevents other replicas from detecting this misbehavior in order to hold this replica accountable for its actions. Recently, Polygraph~\cite{CGG19,CGG20} introduced accountable consensus (Def.~\ref{def:accountability}) as the problem of solving consensus if $f<n/3$ and, in case of disagreement, eventually detecting $f_d \geq n/3$ faulty replicas.
\begin{definition}[Accountable Consensus]\label{def:accountability}
The problem of \emph{accountable consensus} is to solve consensus when the number of Byzantine faults is $f<n/3$ and for every honest replica to eventually output at least $f_d= \lceil n/3 \rceil$ faulty replicas if two honest replicas output distinct decisions.
\end{definition}
\vspace{-0.6em}
\paragraph{Byzantine state machine replication.}
A Byzantine State Machine Replication (SMR)~\cite{CL02,KADC07} is a replicated service that accepts deterministic commands from clients and totally orders these commands using a consensus protocol so that, upon execution of these commands, every honest replica ends up with the same state despite \emph{Byzantine} or malicious replicas. The instances of the consensus execute in sequence, one after the other, starting from index 0. We refer to the consensus instance at index $i$ as $\Gamma_i$. Traditionally, provided honest replicas propose a value, the Byzantine consensus problem~\cite{PSL80} is for every honest replica to eventually decide a value (consensus termination), for no two honest replicas to decide different values (agreement) and for the decided value to be one of the proposed values (validity). In this paper, we consider however a variant of the Byzantine consensus (Def.~\ref{def:sbc}) useful for blockchains~\cite{MSC16,DRZ18,CNG18}, where the validity requires the decided value to be a subset of the union of the proposed values.
\begin{definition}[Set Byzantine Consensus]\label{def:sbc}
Assuming that each honest replica proposes a proposal, the \emph{Set Byzantine Consensus} (SBC) problem is for each of them to decide on a set in such a way that the following properties are satisfied:
\begin{itemize}
\item SBC-Termination: every honest replica eventually decides a set of transactions;
\item SBC-Agreement: no two honest replicas decide on different sets of transactions;
\item SBC-Validity: a decided set of transactions is a non-conflicting set of valid transactions taken from the union of the proposed sets; and if all replicas are honest and propose a common valid non-conflicting set of transactions, then this set is the decided set.
\end{itemize}
\end{definition}
SBC-Validity includes two predicates: the first states that transactions proposed by Byzantine proposers can be decided as long as they are valid and \emph{non-conflicting} (i.e., they do not withdraw more assets from an account than its balance); the second is necessary to prevent any trivial algorithm that decides a pre-determined value from solving the problem. As a result, we consider that a consensus instance $\Gamma_i$ outputs a set of enumerable decisions $out(\Gamma_i)=d_i$, $|d_i|\in\mathds{N}$, that all $n$ replicas replicate. We refer to the state of the SMR at the $i$-th consensus instance $\Gamma_i$ as all the decisions of all consensus instances up to the $i$-th consensus instance.
\vspace{-0.6em}
\paragraph{Solving the Set Byzantine Consensus (SBC).}
A classic reduction~\cite{BCG93,BKR94,CGLR18,CGG19} of the problem of multi-value consensus, which accepts any ordered set of input values, to the problem of binary consensus, which accepts binary input values, proved promising to solve the SBC problem (Def.~\ref{def:sbc}) when $f<n/3$~\cite{CNG18}. The idea consists of executing an all-to-all reliable broadcast~\cite{B87} to exchange the $n$ proposals: any delivered proposal is stored in an array $\ms{proposals}$ at the index corresponding to the identifier of the broadcaster.
A binary consensus at index $k$ is started with input value 1 for each index $k$ where a proposal has been recorded. Once $n-f$ proposals are delivered locally, a binary consensus is started with input value 0 at each of the remaining indices $0\leq \ell < n$, $\ell\neq k$. The results of these concurrent binary consensus instances are stored in a $\ms{bitmask}$ array, and applying the $\ms{bitmask}$ to the $\ms{proposals}$ array yields a sequence of proposals whose content is the output of consensus. Polygraph is the accountable variant where replicas broadcast \emph{certificates}, sets of $2n/3$ messages signed by distinct replicas, each time they reliably broadcast or decide a binary value.
\section{The Longlasting Blockchain\xspace Problem}\label{sec:pb}
Considering the classic distributed system model~\cite{CL02,KADC07} where messages are delivered within a bounded but unknown time (i.e., partial synchrony~\cite{DLS88}) and failures are Byzantine, solving the longlasting blockchain problem is to solve consensus when possible ($f<n/3$), and to recover from a situation where consensus is violated ($n/3 \leq f < 2n/3$) to a situation where this violation is resolved (with $f'<n'/3$).
\begin{figure}[t]
\includegraphics[scale=0.45]{fig/failures2}
\caption{Unlike closed networks, blockchain networks are more likely to experience deceitful faults, due to well-orchestrated attacks, than benign faults (which could be either omission or some commission faults) that are harmless.\label{fig:failures}}
\vspace{-1.1em}
\end{figure}
\vspace{-0.6em}
\paragraph{Attacking the SBC solution.}
In the SBC solution presented above, deceitful replicas can form a coalition of $f\geq n/3$ replicas to lead honest replicas to a disagreement by \emph{equivocating} (sending distinct messages) to different partitions of honest replicas, with one of two coalition attacks:
\begin{enumerate}
\item {\bf Reliable broadcast attack:} deceitful replicas misbehave during the reliable broadcast by sending different proposals to different partitions, leading honest replicas to end up with distinct proposals at the same index $k$.
\item {\bf Binary consensus attack:} deceitful replicas vote for each binary value in each of two partitions for the same binary consensus, leading honest replicas to end up with different bits at the same index of their bitmask.
\end{enumerate}
Note that deceitful replicas do not benefit from combining these attacks: if two honest replicas deliver different proposals at index $k$, the disagreement comes from them outputting 1 at the corresponding binary consensus instance. Similarly, forcing two honest replicas to disagree during the $k^{\ms{th}}$ binary consensus only makes sense if they both have the same corresponding proposal at index $k$.
\vspace{-0.6em}\noindent
\paragraph{The deceitful Byzantine failure model.}
We introduce the deceitful failure model, a refinement of the Byzantine failure model~\cite{LSP80}, radically different from the closed-network failure models~\cite{CKL09,KWQ12,LLM19} where most Byzantine failures are omissions and very few are commissions (cf. Fig.~\ref{fig:failures}). This contrast stems from the observation that blockchain payment systems incentivize replicas either to fail by attacking the network or to minimize downtime to maximize profit~\cite{Nak08,Woo15}.
A \emph{deceitful fault} consists of sending a message that violates the protocol to deceive honest replicas and try to reach a disagreement, whereas a \emph{benign fault} consists of a non-deceitful Byzantine fault (e.g., sending a stale message). We refer to a replica that commits a deceitful (resp. benign) fault as a \emph{deceitful replica} (resp. \emph{benign replica}, which fails without trying to cause any harm to the system). We also denote the deceitful ratio $(f-q)/n$ by $\delta$. A replica that is not Byzantine is \emph{honest}. Let $n$ be the initial number of replicas in our system; we thus assume a maximum of $f-q<2n/3$ deceitful replicas and $q<\min(n/3, n-f)$ benign replicas. The adversary that controls these Byzantine replicas is dynamic in that $f$ can change over time; however, we assume that the adversary experiences \emph{static periods} during which each consensus instance fully executes and the honest replicas do not become faulty, and that there exists a pool of replicas among which at least $2n/3$ are honest and the rest are deceitful. Like in classic blockchain models, we assume the presence of unforgeable signatures.
\vspace{-0.6em}
\paragraph{Longlasting Blockchain\xspace.}
A Longlasting Blockchain\xspace (LLB\xspace) is a Byzantine SMR that allows some consensus instances to reach a disagreement before fixing the disagreement by merging the branches of the resulting fork and deciding the union of all the past decisions using SBC (Def.~\ref{def:sbc}). More formally, an SMR is an LLB\xspace if it ensures termination, agreement and convergence, as defined below:
\begin{definition}[Longlasting Blockchain Problem]\label{def:properties}
An SMR is an LLB\xspace if the following properties are all satisfied:
\begin{enumerate}
\item {\bf Termination:} For all $k>0$, consensus instance $\Gamma_k$ terminates, either with agreement or disagreement.
\item {\bf Agreement:} If $f<n/3$ during $\Gamma_k$, then honest replicas executing $\Gamma_k$ reach agreement.
\item {\bf Convergence:} If $f-q < 2n/3 $ and $q<\min(n/3, n-f)$, then in any sufficiently long static period of the adversary there is a finite number of disagreements after which honest replicas agree.
\end{enumerate}
\end{definition}
Termination does not necessarily imply agreement among honest replicas, whereas agreement is the classic property of the consensus problem. Convergence preserves the assets of honest replicas by guaranteeing that there is a limited number of disagreements (this number is 0 if $f-q<n/3$) after which agreement is maintained, within a sufficiently long static period.
\section{Description of the Longlasting Blockchain\xspace}\label{sec:solution}
In this section we detail our system. Its two main ideas are to replace the deceitful replicas undeniably responsible for a fork by new replicas, so as to converge towards a state where consensus can be reached, and to fund conflicting transactions that were wrongly decided. We will show that LLB\xspace solves the longlasting blockchain problem. As depicted in Figure~\ref{fig:architecture}, we present below the components of our system, namely the ASMR\xspace (\cref{ssec:asmr}) and the BM\xspace (\cref{ssec:blockchain}), but we defer the description of the zero loss payment system (\cref{ssec:payment}).
\begin{figure}[t]
\includegraphics[scale=0.47]{fig/architecture}
\caption{The distributed architecture of our LLB\xspace system relies on ASMR\xspace, BM\xspace and the payment system deployed on several replicas.
{\color{darkgray}\ding{203}} Each replica batches some payment requests, illustrated with {\color{darkgray}\ding{202}} a transfer $t$ (resp. $t'$) of \$1M from Alice's account (A) to Bob's (B) (resp. Carol's (C)). Consider that Alice has \$1M initially and attempts to double spend by modifying the code of a replica $p_k$ under her control so as to execute the coalition attack. {\color{darkgray}\ding{204}--\ding{206}} The ASMR\xspace component detects the deceitful replica $p_k$ that tried to double spend, the associated transactions $t$ and $t'$, and the account $A$ with insufficient funds. It chooses transaction $t$ and discards $t'$, {\color{darkgray}\ding{207}} notifies BM\xspace, which {\color{darkgray}\ding{208}} excludes or replaces replica $p_k$ and {\color{darkgray}\ding{209}} refunds $B$ with $p_k$'s deposit.\label{fig:architecture}
}
\vspace{-0.5em}
\end{figure}
As long as new requests are submitted by a client to a replica, the payment system component of the replica converts them into payments that are passed to the BM\xspace component. As depicted in Figure~\ref{fig:architecture}, when sufficiently many payments are present, the BM\xspace issues a batch of requests to the ASMR\xspace that, in turn, proposes it to the consensus component, which exchanges messages through the network for agreement among honest replicas to be reached. If a disagreement is detected, then the account of the deceitful replica is slashed. Consider that Alice (A) attempts to double spend by spending her \$1M with both Bob (B) and Carol (C) in $t$ and $t'$, respectively, and by hacking the code of replica $p_k$, which commits deceitful faults for a disagreement to occur.
Once the ASMR\xspace{} detects the disagreement, BM\xspace is notified, which results in excluding the replica $p_k$ from the ASMR\xspace (potentially replacing it with newly joined replicas) and funding $B$ with $p_k$'s deposit.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.52]{fig/phases}
\caption{If there are enqueued requests that wait to be served, then a replica starts a new instance $\Gamma_k$ by participating in an ASMR\xspace consensus phase \ding{192}; a series of optional phases may follow: \ding{193}~the replica tries to confirm this decision to make sure no other honest replica disagrees; \ding{194}~it invokes an exclusion consensus if it finds enough proofs-of-fraud (PoFs); \ding{195}~it then potentially includes new replicas to compensate for the exclusion; and \ding{196}~it merges the two batches of decided transactions. Some of these phases complete upon consensus termination (in black) whereas other phases terminate upon simple notification reception (in grey). The replica starts a new instance $\Gamma_{k+1}$ if there are other enqueued requests to be served, hence participating in a new ASMR\xspace consensus phase \ding{192} that may succeed, in which case none of the optional phases immediately follow.\label{fig:phases}
}
\end{center}
\vspace{-1.5em}
\end{figure*}
\subsection{Accountable SMR\xspace (ASMR\xspace)}\label{ssec:asmr}
We present our ASMR\xspace, which consists of running an infinite sequence of five consecutive actions: \ding{192}~the accountable consensus (Def.~\ref{def:accountability}) that tries to decide upon a new set of transactions, \ding{193}~a confirmation that aims at confirming that agreement was reached, \ding{194}--\ding{195}~a membership change that aims at replacing the deceitful replicas responsible for a disagreement by new replicas, and \ding{196}~a reconciliation phase that combines all the disagreeing decisions, as depicted in Figure~\ref{fig:phases}.
\vspace{-0.6em}
\paragraph{The phases of ASMR\xspace.}
For each index, ASMR\xspace first runs the accountable consensus (\cref{sec:bck}) to try to agree on a set of transactions; then it optionally runs the four subsequent phases to recover from a possible disagreement.
\begin{enumerate}
\item[\ding{192}] {\bf ASMR\xspace consensus:} Honest replicas propose a set of transactions, which they received from clients, to the accountable consensus (Def.~\ref{def:accountability}) in the hope of reaching agreement. When the consensus terminates, either all honest replicas agree on the same decision or honest replicas disagree: they decide distinct sets of transactions.
\item[\ding{193}] {\bf Confirmation:} As honest replicas could be unaware of the other decisions, they enter a confirmation phase waiting for messages coming from more distinct replicas than what consensus requires. If the confirmation leads honest replicas to detect disagreements, i.e., they receive certificates supporting distinct decisions, then they start the membership change. This phase may not terminate, as an honest replica needs to deliver messages from more than $(\delta + 1/3) \cdot n$ replicas (due to the number of `conflicting histories'~\cite{singh2009zeno}) to guarantee that no disagreement was possible with a deceitful ratio $\delta$; however, this does not prevent $\Gamma_k$ from terminating, as the two proceed in parallel.
\item[\ding{194}] {\bf Membership change (exclusion):} Honest replicas may identify deceitful replicas by cross-checking the sets of signed messages or certificates.
Once a disagreement is detected, the certificates are used as proofs-of-fraud (PoFs). These PoFs cannot be falsified as they contain conflicting signed messages that the same replica should not have both sent if it were honest. If honest replicas detect $f_d= \lceil n/3 \rceil$ deceitful replicas, they start an exclusion consensus to agree on the PoFs. Note that they could instead start as soon as they detect one deceitful replica; however, waiting for at least $f_d$ guarantees that a membership change is necessary and helps remove many deceitful replicas from the same coalition at once. As these membership consensuses are also accountable, they may trigger another confirmation or another exclusion consensus, hence the arrow from \ding{195} to \ding{193} in Figure~\ref{fig:phases}.
\item[\ding{195}] {\bf Membership change (inclusion):} To compensate for the excluded replicas, an inclusion consensus adds new candidate replicas (among the pool of candidates). This inclusion consensus waits to deliver at least $\ceil{n'/2}$ proposals, each containing $n-n'$ new replicas to include, where $n'$ is the number of replicas at the start of the inclusion consensus (after the exclusion consensus). Once the consensus finishes, replicas deterministically choose $\frac{n-n'}{\ceil{n'/2}}$ replicas from each proposal, to guarantee a fair distribution of the included replicas across all honest replicas. To prevent the deceitful ratio from increasing after an inclusion consensus, we rely on a deterministic function that, given a disagreement upon replicas to include, chooses some of them instead, so as to maintain the total number $n$ of replicas.
\item[\ding{196}] {\bf Reconciliation:} Once the membership change finishes, the reconciliation starts by combining all transactions that were decided by distinct honest replicas in the preceding disagreement. These transactions are ordered through a deterministic function, a simple example of which is a lexicographical order, but which can be made fair by rotating over the indices of the instances.
\end{enumerate}
Once the current instance $\Gamma_k$ terminates, another instance $\Gamma_{k+1}$ can start, even if it runs concurrently with a confirmation or a reconciliation at index $k$.
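The following self-contained Python sketch illustrates this per-instance pipeline; every function is a simplistic stand-in of ours (the ``consensus'' is a trivial local union and PoFs are plain tuples), not LLB\xspace's actual code.
\begin{verbatim}
from typing import List, Set, Tuple

PoF = Tuple[str, str, str]  # (replica id, message 1, message 2)

def asmr_consensus(proposals: List[Set[str]]) -> Set[str]:
    # Phase 1 stand-in: decide the union of the delivered proposals.
    return set().union(*proposals)

def confirmation(decisions: List[Set[str]]) -> List[PoF]:
    # Phase 2 stand-in: conflicting decisions yield a (dummy) PoF.
    if len({frozenset(d) for d in decisions}) > 1:
        return [("p_k", "decided A", "decided B")]  # illustrative PoF
    return []

def run_instance(proposals, other_decisions, members, pool, f_d=1):
    decision = asmr_consensus(proposals)               # phase 1
    pofs = confirmation([decision] + other_decisions)  # phase 2
    if len(pofs) >= f_d:                               # disagreement found
        deceitful = {p for p, _, _ in pofs}
        members -= deceitful                           # phase 3: exclusion
        for _ in range(len(deceitful)):                # phase 4: inclusion
            if pool:
                members.add(pool.pop())
        for d in other_decisions:                      # phase 5: merge
            decision |= d
    return decision, members

print(run_instance([{"t1"}, {"t2"}], [{"t3"}],
                   {"p1", "p2", "p3", "p_k"}, {"p4"}))
\end{verbatim}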
\begin{algorithm}[ht]
\caption{Membership change at replica $p_i$}
\label{alg:recconsensus}
\smallsize{
\begin{algorithmic}[1]
\Part{{\bf State}}{
\State $\Gamma_k$, $k^{\ms{th}}$ instance of ASMR consensus $p_i$ participates to with field:
\State \T $\ms{proc} \subset I$, the set of participating replicas not proved deceitful yet
\State $\ms{pofs}$, the set of proofs-of-fraud (PoFs), initially $\emptyset$
\State $\ms{cons-exclude}$, the set of PoFs output by consensus, initially $\emptyset$
\State $\ms{cons-include}$, the set of new replicas output by consensus, initially $\emptyset$
\State $\ms{inc-prop}$, the set of new replicas that replica $p_i$ proposes to add
\State $\ms{deceitful}\in I$, the identity of an agreed deceitful replica, initially $\emptyset$
\State $f_d$, the threshold of proofs-of-fraud to recover, $\lceil n/3 \rceil$ by default
}\EndPart
\small
\Statex \rule{0.45\textwidth}{0.4pt}
\smallsize{
\Part{\textbf{Upon receiving a list of proofs-of-fraud} $\ms{\_pofs}$}{
\SmallIf{$\lit{verify(}\_\ms{pofs}\lit{)}$}{} \label{line:verify-pof}\Comment{if PoFs are correctly signed}
\State $\ms{pofs}.\lit{add(}\_\ms{pofs}{)}$ \Comment{add PoFs on distinct replicas}
\SmallIf{$\lit{size(}\ms{pofs}\lit{)}\geq f_d$}{} \label{line:pof-size}\Comment{enough to change members}
\If{pending $\Gamma_k$} $v\gets \Gamma_k$.\lit{stop}() \label{line:instance-paused}\Comment{it may violate agreement}
\EndIf
\State $\ms{cons-exclude}\gets\Gamma_{k}.\lit{ex-propose}(\ms{pofs})$ \label{line:slashing-consensus}\Comment{exclusion consensus}
\For{all \ms{pof} in \ms{cons-exclude}} \Comment{for all decided PoFs}
\State $\ms{deceitful} \gets \ms{pof}.\lit{get-deceitful}()$ \Comment{get deceitful}
\State $\lit{detected-fraud(}\ms{deceitful}\lit{)}$\label{line:punish} \Comment{application punishment}
\State $\ms{pofs}\gets \ms{pofs} \setminus \{\ms{pof}\}$ \Comment{discard the treated PoFs}
\State $\Gamma_k.\ms{proc}\gets \Gamma_k.\ms{proc} \setminus \{\ms{deceitful}\}$ \Comment{exclude deceitful}
\EndFor
\State $\ms{cons-include}\gets\Gamma_{k}.\lit{inc-propose}(\ms{inc-prop})$ \Comment{inclusion cons.}\label{line:inc1}
\For{all \ms{new-replica} in \ms{cons-include}} \Comment{for all new to inc.}
\State $\lit{set-up-connection}(\ms{new-replica})$ \Comment{new replica joins}
\State $\lit{send-catchup}(\ms{new-replica})$\label{line:cu}\Comment{get latest state}
\State $\Gamma_k.\ms{proc}\gets \Gamma_k.\ms{proc} \cup \{\ms{new-replica}\}$ \Comment{include new replica}\label{line:inc2}
\EndFor
\If{$\Gamma_k$ is stopped} {\bf goto} \ding{192}\label{line:instance-restarted} \Comment{restart cons.}
\EndIf
\EndSmallIf
\EndSmallIf}\EndPart
}
\end{algorithmic}
}
\end{algorithm}
\vspace{-0.6em}
\paragraph{Membership change details.}
The membership change (Alg.~\ref{alg:recconsensus}) proposes PoFs $\ms{pofs}$ to an exclusion consensus $\lit{ex-propose}$ to exclude deceitful replicas and runs an inclusion consensus $\lit{inc-propose}$ to potentially add newly joined replicas. To this end, replica $p_i$ maintains the current consensus instance $\Gamma_k$, the $\ms{deceitful}$ replicas among the whole set $\ms{proc}$ of replica ids, the new candidate replicas $\ms{inc-prop}$, a threshold $f_d$ of detected deceitful replicas, and two sets: $\ms{cons-exclude}$ of decided PoFs and $\ms{cons-include}$ of decided new replicas. Each PoF is \emph{valid} if it contains equivocating messages signed by the same replica (line~\ref{line:verify-pof}).
The exclusion consensus starts once there are more PoFs than $f_d$ (line~\ref{line:pof-size}), but as opposed to waiting for $n-f$ signed responses, it only waits for $\ceil{\frac{n'}{2}}$ signed responses from distinct replicas, where $n'=n-f_d$, as PoF validity can be checked locally even during a disagreement. It may however be necessary to stop a pending consensus (line~\ref{line:instance-paused}) before restarting it with the new set of replicas (line~\ref{line:instance-restarted}). At the end, the excluded replicas are punished by the application layer (line~\ref{line:punish}) and the new replicas are included (lines~\ref{line:inc1} to~\ref{line:inc2}) to compensate for the loss. Once the inclusion consensus finishes, the honest replicas share their state with the newly added replicas (line~\ref{line:cu}). Typically, new replicas simply verify the received certificates and PoFs.
\vspace{-0.6em}
\paragraph{Remarks.}
Honest replicas should still accept certificates with signatures from excluded replicas in the consensus instances that follow the membership change, or from included replicas whose inclusion was eventually reverted in the event of a disagreement. This transient acceptance allows these honest replicas to catch up on the new set of members without reverting any decision. We separate inclusion and exclusion into two consensus instances to avoid deciding to exclude and include the replicas proposed by the same replica.
\subsection{Blockchain Manager\xspace (BM\xspace)}\label{ssec:blockchain}
We now present the Blockchain Manager\xspace (BM\xspace), which builds upon ASMR\xspace to merge the blocks of multiple branches when forks are detected. Once a fork is identified, the conflicting blocks are not discarded, as would be the case in classic blockchains when a double spending occurs, but are merged. Upon merging blocks, BM\xspace also copes with conflicting transactions, such as the ones of a payment system, by keeping one transaction and reimbursing its conflicting ones. We defer to~\cref{ssec:payment} the details of the amount replicas must keep on deposit to guarantee this reimbursement.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.62]{fig/forks}
\caption{After a finite number of membership changes during a static period of the adversary, the deceitful ratio decreases from the worst case $\delta<2/3$ \ding{204} to the agreement case where $\delta<1/3$ \ding{202}. This is guaranteed even though the deceitful ratio passes through intermediate values, for example $\delta<5/9$ and $\delta<1/2$ \ding{203}. In particular, each membership change guarantees that the deceitful ratio never increases and eventually decreases. \label{fig:forks}}
\end{center}
\vspace{-1.5em}
\end{figure}
Similarly to Bitcoin~\cite{Nak08}, BM\xspace accepts transaction requests from a permissionless set of users. In particular, this allows users to use different devices or wallets to issue distinct transactions withdrawing from the same account---a feature that is not offered in payment systems without consensus~\cite{CGK20}. In contrast with Bitcoin, our system does not incentivize all users to take part in trying to decide upon every block; instead, a restricted set of permissioned replicas has this responsibility for a given block. This is why BM\xspace, combined with ASMR\xspace, offers an open permissioned blockchain.
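As an illustration, the following self-contained Python sketch mirrors the PoF validity check of Alg.~\ref{alg:recconsensus} (line~\ref{line:verify-pof}): a PoF is valid if it carries two correctly signed, conflicting messages from the same replica in the same instance. The message structure and the mocked signature check are our assumptions, not LLB\xspace's actual data types.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedMsg:
    replica: str    # claimed sender
    instance: int   # consensus instance index
    payload: str    # message content
    signature: str  # assumed signature over (replica, instance, payload)

def check_signature(msg: SignedMsg) -> bool:
    # Stand-in for the actual ECDSA (secp256k1) verification.
    return msg.signature == f"sig({msg.replica},{msg.instance},{msg.payload})"

def valid_pof(m1: SignedMsg, m2: SignedMsg) -> bool:
    return (m1.replica == m2.replica         # same signer...
            and m1.instance == m2.instance   # ...in the same instance...
            and m1.payload != m2.payload     # ...with conflicting contents,
            and check_signature(m1) and check_signature(m2))
\end{verbatim}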
\vspace{-0.6em}
\paragraph{Guaranteeing consistency across replicas.}
Building upon the accountability of the underlying ASMR\xspace that resolves disagreements, BM\xspace features a block merge to resolve forks by excluding deceitful replicas and including new replicas. As depicted in Figure~\ref{fig:forks}, a consensus may reach a disagreement if $f\geq n/3$, resulting in the creation of multiple branches or blockchain forks. BM\xspace builds upon the membership change of ASMR\xspace in order to recover from forks. In particular, the fact that ASMR\xspace excludes $f_d$ deceitful replicas each time a disagreement occurs guarantees that the deceitful ratio $\delta = (f-q)/n$ converges to a state where consensus is guaranteed (Lemma~\ref{lemma:recovery}). The maximum number of branches that can result from forks depends on the number $q$ of benign faults and the number $f-q$ of deceitful faults, as was already shown for histories of SMRs~\cite{singh2009zeno}.
\vspace{-0.6em}
\paragraph{Updated/outdated replicas and branches.}
Figure~\ref{fig:forks} depicts an example with updated and outdated branches, as described below. After the merge of some branches terminates, some replicas can become aware of the resulting branch---we call them \emph{updated replicas} and we call the resulting branch the \emph{updated branch}. Other replicas may remain unaware of the resulting branch---we call them \emph{outdated replicas}. As a result, outdated replicas may wrongly believe that an old branch still exists---we call such a branch an \emph{outdated branch}.
\vspace{-0.6em}
\paragraph{Fork resolution.}
Figure~\ref{fig:forks} also depicts a series of membership changes leading eventually to a state in which an agreement can be reached ($\delta<1/3$). \ding{204}~If $5/9 \leq \delta < 2/3$, then deceitful replicas can create as many as $n-f$ branches (i.e., the number of honest replicas), but again replicas will learn about the updated branches. After a finite number of membership changes, the deceitful ratio strictly decreases, for example to $1/2\leq\delta'<5/9$; at this point there are at most 3 updated branches. \ding{203}~From $1/2 \leq \delta < 5/9$, the process continues, and a finite number of membership changes can lead to, say, $1/3\leq\delta'<1/2$; at this point there are at most 2 updated branches. \ding{202}~From $1/3 \leq \delta < 1/2$, the process continues, and a finite number of membership changes leads for example to $0\leq \delta'<1/3$; at this point there is at most 1 updated branch. Hence the system eventually recovers a state where consensus is guaranteed until the static period ends, as we show in Theorem~\ref{theorem:convergence}, at which point new deceitful replicas may lead the system back to \ding{204}.
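The following minimal Python sketch (ours, illustrative only) replays this progression; for simplicity it excludes $f_d=\lceil n/3\rceil$ provably deceitful replicas per membership change and does not model the inclusion of newcomers.
\begin{verbatim}
from math import ceil

def recoveries_until_agreement(n, deceitful):
    # Count membership changes until the deceitful ratio drops below 1/3.
    rounds = 0
    while deceitful / n >= 1/3:            # consensus not yet guaranteed
        f_d = min(ceil(n / 3), deceitful)  # provably deceitful, excluded
        deceitful -= f_d
        n -= f_d                           # simplification: no inclusion
        rounds += 1
    return rounds

print(recoveries_until_agreement(n=90, deceitful=59))  # prints 2
\end{verbatim}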
\begin{algorithm}[ht]
\caption{Block merge at replica $p_i$}
\label{alg:blkmerg}
\smallskip
\smallsize{
\begin{algorithmic}[1]
\Part{{\bf State}}{
\State $\Omega$, a blockchain record with fields:
\State \T$\ms{deposit}$, an integer, initially $0$ \label{line:deposit}
\State \T$\ms{inputs-deposit}$, a set of deposit inputs, initially in the first deposit \label{line:inputs-deposit}
\State \T$\ms{punished-acts}$, a set of punished account addresses, initially $\emptyset$ \label{line:punished-accounts}
\State \T$\ms{txs}$, a set of UTXO transaction records, initially in the genesis block
\State \T $\ms{utxos}$, a list of unspent outputs, initially in the genesis block
}\EndPart
\Statex \rule{0.45\textwidth}{0.4pt}
\Part{Upon receiving conflicting block $\ms{block}$} { \label{line:safe-binpropose} \Comment{merge block}
\For{$\ms{tx}$ \textbf{in} $\ms{block}$} \Comment{go through all txs}
\SmallIf{$\ms{tx}$ \textbf{not in} $\Omega.\ms{txs}$}{} \Comment{check inclusion}
\State $\lit{CommitTxMerge(\ms{tx})}$ \label{line:merge-invocation} \Comment{merge tx, go to line~\ref{line:merge}}
\For{$\ms{out}$ \textbf{in} $\ms{tx.outputs}$}\Comment{go through all outputs}
\SmallIf{$\ms{out.account}$ \textbf{in} $\Omega.\ms{punished-acts}$}{} \Comment{if punished}
\State $\lit{PunishAccount(\ms{out.account})}$ \Comment{punish also this new output}
\EndSmallIf
\EndFor
\EndSmallIf
\EndFor
\State $\lit{RefundInputs()}$ \Comment{refill deposit, go to line~\ref{line:refund}}
\State $\lit{StoreBlock(\ms{block})}$ \Comment{write block in blockchain}
}\EndPart
\Statex \rule{0.45\textwidth}{0.4pt}
\Part{$\lit{CommitTxMerge(\ms{tx})}$}{ \label{line:merge}
\For{$\ms{input}$ \textbf{in} $\ms{tx.inputs}$}\Comment{go through all inputs}
\SmallIf{$\ms{input}$ \textbf{not in} $\Omega.\ms{utxos}$}{}\Comment{not spendable, need to use deposit}
\State $\Omega.\ms{inputs-deposit}.\lit{add(\ms{input})}$ \Comment{use deposit to refund}
\State $\Omega.\ms{deposit}\gets \Omega.\ms{deposit}-\ms{input.value}$\label{line:deposit-withdrawal} \Comment{deposit decreases in value}
\EndSmallIf
\SmallElse{} $\Omega.\lit{consumeUTXO(\ms{input})}$\Comment{spendable, normal case}
\EndSmallElse
\EndFor\label{line:merge-end}
}\EndPart
\Statex \rule{0.45\textwidth}{0.4pt}
\Part{$\lit{RefundInputs()}$}{ \label{line:refund}
\For{$\ms{input}$ \textbf{in} $\Omega.\ms{inputs-deposit}$}\Comment{go through inputs that used deposit}
\SmallIf{$input$ \textbf{in} $\Omega.\ms{utxos}$}{}\Comment{if they are now spendable}
\State $\Omega.\lit{consumeUTXO(}input\lit{)}$\Comment{consume them}
\State $\Omega.\ms{deposit}\gets \Omega.\ms{deposit}+\ms{input.value}$\Comment{and refill deposit}
\EndSmallIf
\EndFor
}\EndPart
\end{algorithmic}
}
\end{algorithm}
\vspace{-0.6em}
\paragraph{In-memory transactions.}\label{ssec:utxo}
LLB\xspace is a blockchain that inherits the \emph{Unspent Transaction Output (UTXO)} model of Bitcoin~\cite{Nak08}: the balance of each account in the system is stored in the form of a UTXO table. In contrast with Bitcoin, the number of maintained UTXOs is kept to a minimum in order to allow in-memory optimizations. Each entry in this table is a UTXO that indicates some amount of coins that a particular account, the `output', owns. When a transaction transferring from source accounts $s_1, ..., s_x$ to recipient accounts $r_1, ..., r_y$ executes, it checks the UTXOs of the accounts $s_1, ..., s_x$.
If the UTXO amounts for these accounts are sufficient, then this execution consumes as many UTXOs as possible and produces another series of UTXOs, now outputting the transferred amounts to $r_1, ..., r_y$ as well as whatever is left to the source accounts $s_1, ..., s_x$. Maximizing the number of UTXOs consumed helps keep the table compact. Each replica can thus generally access the UTXO table directly in memory for faster execution of transactions. \vspace{-0.6em} \paragraph{Protocol to merge blocks.} As depicted in Alg.~\ref{alg:blkmerg}, the state of the blockchain $\Omega$ consists of a set of inputs $\ms{inputs-deposit}$ (line~\ref{line:inputs-deposit}), a set of account addresses $\ms{punished-acts}$ (line~\ref{line:punished-accounts}) that have been used by deceitful replicas, a $\ms{deposit}$ (line~\ref{line:deposit}) used by the protocol, a set $\ms{txs}$ of transactions, and a list $\ms{utxos}$ of UTXOs. The algorithm relies on the usual propagation of blocks broadcast over the network. It starts upon reception of a block that conflicts with a known block of the blockchain $\Omega$, and tries to merge all transactions of the received block with the transactions of $\Omega$ (line~\ref{line:merge-invocation}). This is done by invoking the function $\lit{CommitTxMerge}$ (lines~\ref{line:merge}--\ref{line:merge-end}), where the inputs get appended to the UTXO table and conflicting inputs are funded with the deposit (line~\ref{line:deposit-withdrawal}) of a deceitful replica. We explain in~\cref{ssec:payment} how to build a payment system with a deposit sufficient to remedy double spending attempts. \vspace{-0.6em} \paragraph{Cryptographic techniques.} To provide authentication and integrity, transactions are signed using the Elliptic Curve Digital Signature Algorithm (ECDSA) with parameters \texttt{secp256k1}, as in Bitcoin~\cite{Nak08}. Each honest replica assigns a strictly monotonically increasing sequence number to its transactions. The network communications use gRPC between clients and replicas and raw TCP sockets between replicas; all communication channels are encrypted through SSL. Finally, the exclusion protocol (Alg.~\ref{alg:recconsensus}) also makes use of ECDSA for PoFs and for authenticating the senders of messages responsible for a disagreement. One may think that message authentication codes (MACs) or threshold encryption could be more efficient alternatives to this classic public-key cryptosystem; however, threshold encryption cannot be used to trace back the faulty users, as they are encoded in fewer bits than what is needed to differentiate users, and MACs are insufficient to provide this transferable authentication~\cite{CJK12}.
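For illustration, the following minimal Python sketch mirrors the $\lit{CommitTxMerge}$ and $\lit{RefundInputs}$ logic of Alg.~\ref{alg:blkmerg} on the in-memory UTXO table described above; the data structures (e.g., UTXOs as \texttt{(account, value)} tuples) are simplifying assumptions of ours, not LLB\xspace's actual code.
\begin{verbatim}
# Minimal sketch of Alg. 1's merge logic (illustrative, not LLB's code).
class Chain:
    def __init__(self, genesis_utxos, deposit):
        self.utxos = set(genesis_utxos)  # unspent outputs: (account, value)
        self.deposit = deposit           # coins deposited by replicas
        self.inputs_deposit = set()      # inputs refunded from the deposit

    def commit_tx_merge(self, tx):
        """Merge one transaction from a conflicting block."""
        for inp in tx["inputs"]:
            if inp not in self.utxos:        # not spendable: double spend
                self.inputs_deposit.add(inp) # cover it with the deposit
                self.deposit -= inp[1]       # deposit decreases in value
            else:
                self.utxos.discard(inp)      # spendable: normal case
        self.utxos.update(tx["outputs"])     # append outputs to the table

    def refund_inputs(self):
        """Refill the deposit with inputs that became spendable."""
        for inp in list(self.inputs_deposit):
            if inp in self.utxos:
                self.utxos.discard(inp)      # consume them
                self.deposit += inp[1]       # and refill the deposit
                self.inputs_deposit.discard(inp)
\end{verbatim}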
\subsection{Fault tolerance and intuition of the proof}\label{sec:proof-sketch} In this section, we provide the intuition of the proof that LLB\xspace solves the Longlasting Blockchain problem (Def.~\ref{def:properties}), hence recovering from a majority of failures. To this end, we show that LLB\xspace satisfies termination, agreement when $f<n/3$, and convergence when $f-q < 2n/3$ and $q<\min(n/3, n-f)$. A membership change is called \emph{successful} if its series of inclusion and exclusion consensus instances ends up with all honest replicas deciding the same set of replicas. \begin{lemma} In LLB\xspace, if the number of deceitful replicas is $f-q < 2n/3$ and the number of benign replicas is $q<n-f$, then any disagreement on $\Gamma$ is followed by a membership change that terminates successfully and removes at least $f_d\geq n/3$ replicas. \label{lemma:recovery} \end{lemma} \begin{proof}[Proof] The proof is in two parts, termination and success, but we first need to show that a membership change follows any disagreement on $\Gamma$. Note that $f-q<\frac{2n}{3}$ implies that deceitful replicas cannot decide on their own. Since each honest replica decides on a single decision, there can only be as many (finite) decisions as honest replicas. The number of branches produced by $\Gamma$ is thus upper-bounded by the number $n-f$ of honest replicas. Since honest replicas exchange certificates as in Polygraph~\cite{CGG19,CGG20}, they identify at least $f_d\geq n/3$ deceitful replicas responsible for a disagreement. \begin{enumerate} \item {\bf Termination.} The existence of a disagreement implies $q<n/3$. As deceitful replicas help termination by definition, consider the most difficult scenario, in which all remaining failures are benign. The exclusion consensus thus excludes all deceitful replicas, leading to $n'=n-(f-q)$. Since both inclusion and exclusion consensus instances wait for at least $\frac{n'}{2} = \frac{n-(f-q)}{2}$ messages from distinct replicas, it follows that we need $\frac{n'}{2} > q$ for them to terminate. Thus, $\frac{n-(f-q)}{2}>q \iff q < n - f$, which is always true if $f<2n/3$ and $q<n/3$. \item {\bf Success.} Now that we know that the membership change completes, we show that it succeeds. By construction, honest replicas only start the membership change if they gather at least $f_d$ PoFs from distinct replicas, which they later propose during the exclusion consensus. As a result, since the exclusion consensus decides on at least one correct proposal, at least $f_d$ replicas are removed. If there is a disagreement during this exclusion consensus, the union of both sets of PoFs is eventually adopted as the decision, which excludes even more deceitful replicas, hence leading to success. \end{enumerate}% \end{proof} Recall that by assumption (\cref{sec:pb}) there exists a pool with $2n/3$ honest replicas from which honest replicas select a subset to propose to the inclusion consensus. For simplicity, we also assume that no replica from this pool is included twice. \begin{theorem}[Convergence] In LLB\xspace, if the number of deceitful replicas is $f-q < 2n/3$ and the number of benign replicas is $q<\min(n/3, n-f)$, then in any sufficiently long static period of the adversary there is a finite number of disagreements after which honest replicas agree. \label{theorem:convergence} \end{theorem} \begin{proof}[Proof] By Lemma~\ref{lemma:recovery}, we know that every disagreement leads to a membership change in which $f_d \geq n/3$ replicas are removed. Note first that the existence of a decision implies $q<n/3$. The inclusion consensus does not increase the deceitful ratio, since it does not include more replicas than the number excluded by the exclusion consensus, and all excluded replicas are deceitful. As the inclusion consensus decides at least $n'/2$ proposals, where $n' = n-f_d$, and the remaining deceitful replicas are $f'=f-q-f_d<n/3< n'/2$, it follows that at least one proposal from one honest replica will be decided.
As the pool of joining candidates is finite and no replica is included more than once, it follows that the deceitful ratio eventually decreases in any sufficiently long static period of the adversary. Some inclusion consensus will thus eventually lead to a deceitful ratio $\delta < 1/3$, and agreement is reached from then on, since $q<n/3$. \end{proof} Agreement and termination follow from the fact that the consensus algorithm at the heart of LLB\xspace terminates when $f<n/3$, as was previously shown~\cite{CGLR18,TG19}. \section{Coalition Attack Against Consensus}\label{sec:consensus} Here we describe the coalition attack in which sufficiently many deceitful replicas lead honest replicas to a disagreement. First, we restate the accountable consensus protocol of ASMR\xspace, Polygraph~\cite{CGG19}. \paragraph{Polygraph.} Polygraph~\cite{CGG19} executes a reliable broadcast to exchange consensus proposals and then a binary consensus for each of these proposals. The reliable broadcast guarantees that each honest replica reliably delivers the same values. As each binary consensus instance outputs the same value at all correct replicas, applying the resulting binary decisions $\ms{bin-decisions}$ as a bitmask over the reliably delivered values leads to SBC-Agreement (Def.~\ref{def:sbc}). To provoke a disagreement, deceitful replicas have two strategies: equivocating during the reliable broadcast so as to deliver different proposals at distinct honest replicas, or equivocating during the binary consensus so as to make distinct honest replicas decide 0 while others decide 1, leading to different bitmasks.
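For illustration, the following toy Python sketch shows how the binary decisions act as a bitmask over the reliably delivered proposals; the variable names and toy values are our assumptions, not Polygraph's code.
\begin{verbatim}
# Toy illustration of Polygraph's reduction (not its actual code):
# the decided set is the reliably delivered proposals filtered by
# the bitmask of binary decisions, so identical bitmasks over
# identical deliveries yield SBC-Agreement.
def sbc_decision(delivered, bin_decisions):
    """delivered[i] is the proposal reliably delivered from replica i,
    bin_decisions[i] is the outcome of the i-th binary consensus."""
    return [p for p, bit in zip(delivered, bin_decisions) if bit == 1]

delivered = ["blockA", "blockB", "blockC"]
print(sbc_decision(delivered, [1, 0, 1]))  # ['blockA', 'blockC']
\end{verbatim}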
\paragraph{Attacking the reliable broadcast.} Polygraph applies a reduction from one consensus among $n$ replicas to $n$ binary consensus instances among $n$ replicas. In each binary consensus, a different replica broadcasts a proposal to all the other replicas, after which the replicas must decide to include that proposal in their decision ($1$) or not ($0$). To this end, an accountable reliable broadcast ensures that, as long as $q<n/3$ and $f-q<n/3$, all replicas deliver the same value; otherwise, at least $f_d$ deceitful replicas will eventually be detected. A coalition attack on the reliable broadcast consists in making all honest replicas decide the same binary value (i.e., $1$) but on different proposals. This is not ideal for the attackers for two reasons. First, attackers of the reliable broadcast can only attack the proposals that they themselves propose, but no other proposal. Second, since replicas need to locally validate the proposals before executing the binary consensus, and since validating proposals is the most time-consuming operation of the LLB\xspace protocol, attacking the accountable reliable broadcast takes effort and has negligible impact compared to a coalition attack that causes a disagreement on binary consensus instances. \paragraph{Attacking the binary consensus.} Once replicas reliably deliver a proposal, they start an accountable binary consensus for this proposal. This protocol consists of three phases: a first phase in which replicas reliably broadcast (using a binary-value reliable broadcast called $\lit{bv-broadcast}$) their binary estimate ($1$ or $0$), along with a ledger that collects enough signatures from replicas to justify adopting this estimate, in a \text{\sc bval}\xspace message; a second phase, starting once replicas deliver enough estimates in phase 1, in which replicas suggest a decision through a signed \text{\sc echo}\xspace message for the current round; and a decision phase in which replicas either construct a certificate with enough signed \text{\sc echo}\xspace messages to decide and broadcast the decision, or construct a ledger to adopt a new estimate for the next round, depending on the parity of both the round and the estimate. \renewcommand{\Comment}[1]{\textsl{\scriptsize \hfill \color{blue}{$\rhd$ #1}}} \begin{algorithm}[ht] \caption{Coalition attack at process $p_i$} \label{alg:coalattack} \footnotesize{ \begin{algorithmic}[1] \Part{{\bf State}}{ \State $\ms{decided}\gets \emptyset$, set of decided values \State $\ms{ledger}_{\{0,1\}}$, ledgers used to justify values 0 or 1, initially empty \State $P_{\{0,1\}}$, lists of replicas to whom to send values 0 or 1 \State $r_i$, current round number \State $\ms{rec\_aux}$, an array of two sets of signatures, initially both empty \State $\ms{delivered}$, a set of binary values $v$ delivered in round $\neg v$, initially $\emptyset$ \State $\ms{cert}$, certificate containing the signatures of the coalition, initially $\emptyset$ }\EndPart \Statex \rule{0.45\textwidth}{0.4pt} \Part{$\lit{propose(\star)}$}{ \Comment{the attacker proposes anything} \While{$\lit{true}$} \State $\ms{broadcast\_vals}\gets \emptyset$ \For{$\ms{val}$ \textbf{in} $\{0,1\}\textbackslash \ms{decided}$} \Comment{force to decide non-decided value}\label{line:bvb1} \SmallIf{$\lit{valid(}\ms{ledger_{val}}\lit{)}$}{} \Comment{if correct signatures or empty ledger} \State $\lit{multicast}_{\ms{P_{val}}}(\text{\sc bval}\xspace,\, \langle \ms{val},\,\ms{ledger_{val}},\,\ms{i}\rangle\lit{)}$\Comment{send valid ledger}\label{line:mcast} \State $\ms{broadcast\_vals}.\lit{add}(\ms{val})$ \EndSmallIf \EndFor \WUntil{$\ms{broadcast\_vals}=\{0,\,1\}\textbackslash\ms{decided}$} \State {\bf receive} $(\text{\sc bval}\xspace,\langle \ms{val},\,\ms{ledger_{val}},\,\ms{j}\rangle\lit{)}$ \SmallIf{$\ms{val}$ \textbf{not in} $\ms{broadcast\_vals}$}{} \Comment{wait for valid ledger send} \State $\lit{multicast}_{P_{val}}(\text{\sc bval}\xspace,\, \langle \ms{val},\,\ms{ledger_{val}},\,\ms{i}\rangle\lit{)}$ \State $\ms{broadcast\_vals}.\lit{add}(\ms{val})$ \EndSmallIf \EndWUntil\label{line:bvb2} \State $\ms{rec\_aux}\gets\{\emptyset,\,\emptyset\},\;\ms{delivered}\gets \emptyset$\label{line:aux1} \For{$\ms{val}$ \textbf{in} $\{0,\,1\}\textbackslash\ms{decided}$} \State $\ms{signature}\gets \lit{sign}(\ms{val,\,r_i,\,i})$ \Comment{create signature} \State $\lit{multicast}_{P_{val}}\lit{(\text{\sc echo}\xspace[r_i]},\, \langle \ms{val,signature}\rangle\lit{)}$ \EndFor \WUntil{$\ms{decided}\cup \ms{delivered}=\{0,\,1\}$} \State {\bf receive} $(\text{\sc echo}\xspace[r_i],\,\langle \ms{val,signature}\rangle\lit{)}$ \State $\ms{rec\_aux[val]}\lit{.add(}\ms{signature}\lit{)}$ \SmallIf{$\lit{size(}\ms{rec\_aux[val]}\lit{)}\geq n-t_0\cdot n-f$}{} \Comment{enough $\ms{val}$ replies}\label{line:aux2} \State $\ms{cert}\gets \lit{fill\_certificate(}\ms{val,rec\_aux}[val]\lit{)}$ \Comment{coalition sign.}\label{line:end1}
\SmallIf{$r_i\;$\textbf{mod}$\;2=\ms{val}\;$\textbf{and}$\;\ms{val}\;$\textbf{not in}$\;\ms{decided}$}{}\Comment{can decide} \State $\lit{multicast}_{P_{val}}\lit{(\text{\sc cert}\xspace[r_i]},\, \langle \ms{val,cert}\rangle\lit{)}$ \Comment{notification}\label{line:cert} \State $\ms{decided}.\lit{add(}\ms{val}\lit{)}$ \SmallIf{$\ms{decided}=\{0,\,1\}$}{\textbf{return}}\Comment{attack is done} \EndSmallIf \EndSmallIf \SmallElseIf{$\ms{val}\;$\textbf{not in}$\;\ms{decided}$}{}\Comment{cannot decide this round}\label{line:ledger} \State $\ms{ledger_{val}}\gets \ms{cert}$\Comment{record ledger} \State $\ms{delivered}\lit{.add(}\ms{val}\lit{)}$ \EndSmallElseIf \EndSmallIf \EndWUntil\label{line:end2} \State $r_i\gets r_i+1$ \Comment{increment the round number} \EndWhile }\EndPart \end{algorithmic} } \end{algorithm} We now present the coalition attack against the binary consensus, which simply consists of having a sufficiently large coalition of deceitful replicas run the hacked binary consensus version depicted in Algorithm~\ref{alg:coalattack} instead of the correct version of~\cite{CGG19}. At a high level, the coalition attack consists of collecting enough \text{\sc echo}\xspace messages to be able to justify estimates and decisions in both the $\lit{bv-broadcast}$ and the broadcast $\ms{certificate}$. Additionally, attackers need to sign and send \text{\sc echo}\xspace messages in order to provide correct replicas with enough messages to continue to the next round and/or decide. Attackers can thus reuse signed messages from correct replicas across partitions if the partitions share the same bitmask for that binary consensus. We denote by $P_0$ and $P_1$ the sets of honest replicas that attackers choose to decide $0$ or $1$, respectively, for a particular binary consensus. More precisely, at lines~\ref{line:bvb1} to~\ref{line:bvb2}, the deceitful replicas multicast a valid ledger to each set of correct replicas, waiting to receive one if they do not have one locally, since correct replicas would not use an invalid ledger. At lines~\ref{line:aux1} to~\ref{line:aux2}, the deceitful replicas wait for enough signed \text{\sc echo}\xspace messages from correct replicas for each value. Once the coalition receives enough correct replies, it can conclude the round (lines~\ref{line:end1} to~\ref{line:end2}) with either a certificate to multicast (line~\ref{line:cert}) or a ledger to update for the next round (line~\ref{line:ledger}), depending on the parity of the value. The loop continues simply to help other replicas reach a decision.
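The crux of the attack is that each partition supplies enough \text{\sc echo}\xspace signatures to justify a different decision. The following toy sketch illustrates this harvesting for two partitions; the quorum expression follows line~\ref{line:aux2}, while the message format and toy values are our assumptions.
\begin{verbatim}
# Sketch of the coalition's signature harvesting (illustrative only):
# for each value v, collect enough ECHO signatures from partition P_v,
# then complete the certificate with the coalition's own signatures.
from math import ceil

def harvest_certificates(n, f, t0, echoes):
    """echoes[v] is the set of signed ECHO messages received for v;
    a certificate needs n - t0*n - f signatures from correct replicas
    (the coalition adds its own f signatures on both values)."""
    quorum = ceil(n - t0 * n - f)   # correct replies needed per value
    certs = {}
    for v in (0, 1):
        if len(echoes[v]) >= quorum:
            certs[v] = set(echoes[v]) | {
                f"attacker_sig_{i}_{v}" for i in range(f)}
    return certs  # two conflicting certificates => disagreement

certs = harvest_certificates(
    n=9, f=5, t0=1/3,
    echoes={0: {"sigA0", "sigB0"}, 1: {"sigC1", "sigD1"}})
print(sorted(certs))  # [0, 1]: both values justified
\end{verbatim}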
\begin{algorithm}[h!] \caption{Reliable Broadcast} {\footnotesize \begin{algorithmic}[1] \Part{$\lit{RB-broadcast}(v|\textcolor{red}{\{v_j\}_{j=0}^{a-1}}) $} \Comment{only executed by the source} \State $\lit{broadcast}(\text{\sc init}\xspace, v)$ | \textcolor{red}{$\lit{multicast_j}(\text{\sc init}\xspace, v_{\textcolor{red}{j}})$} \Comment{broadcast value $v$ to all} \EndPart \Part{\textbf{upon} {\rm receiving a message $(\text{\sc init}\xspace, v_{\textcolor{red}{j}})$ from $p_i$}} \State $\lit{broadcast}(\text{\sc echo}\xspace, v, i)$| \textcolor{red}{$\lit{multicast_j}(\text{\sc echo}\xspace, v_j, i)$} \Comment{echo value $v$ to all} \EndPart \Part{\textbf{upon} {\rm receiving $n - t_0 \textcolor{red}{ - f}$ distinct messages $(\text{\sc echo}\xspace, v_{\textcolor{red}{j}}, i)$ and not having sent a \text{\sc ready}\xspace~ message}} \State Construct a ledger $\ell_{\textcolor{red}{j}}$ containing $n-t_0$ signed messages $(\text{\sc echo}\xspace, v_{\textcolor{red}{j}}, i)$. \State $\lit{broadcast}(\text{\sc ready}\xspace, v, \ell, i)$| \textcolor{red}{$\lit{multicast_j}(\text{\sc ready}\xspace, v_j, \ell_j, i)$} \Comment{send \text{\sc ready}\xspace message and ledger for $v$ to all.} \State \textcolor{red}{$\lit{deliver_j}(v_j, i)$$\square$} \EndPart \Part{\textbf{upon} {\rm receiving $t_0+1$ distinct messages $(\text{\sc ready}\xspace, v, \cdot, j)$ and not having sent a \text{\sc ready}\xspace~ message}} \State Set $\ell_i$ to be one of the (valid) ledgers received $(\text{\sc ready}\xspace, v, \ell, j)$. \State $\lit{broadcast}(\text{\sc ready}\xspace, v, \ell, j)$ \Comment{send \text{\sc ready}\xspace message for $v$ to all.} \EndPart \Part{\textbf{upon} {\rm receiving $n-t_0$ distinct messages $(\text{\sc ready}\xspace, v, \cdot, i)$ and not having delivered a message from $i$}} \State Let $\ell$ be one of the (valid) ledgers received $(\text{\sc ready}\xspace, v, \ell, i)$. \State $\lit{deliver}(v, i)$ \Comment{deliver value $v$ from $p_i$} \EndPart \end{algorithmic} } \end{algorithm} \section{Blockchain Experimental Evaluation}\label{sec:expe} This section is dedicated to answering the following questions: Does LLB\xspace offer practical performance in a geo-distributed environment? When $f<n/3$, how does ASMR\xspace perform compared to the raw state machine replication at the heart of Facebook Libra? When $f\geq n/3$, what is the impact of large-scale coalition attacks on the recovery of ASMR\xspace? We defer the evaluation of a zero-loss payment application to~\cref{ssec:eval-payment}. \vspace{-0.6em} \paragraph{Experimental settings.} To evaluate LLB\xspace, we compare its performance to (i)~HotStuff~\cite{YMR19}, the state machine replication with linear message complexity, a variant of which has been integrated in Facebook's Libra~\cite{Fac19}; (ii)~Red Belly Blockchain~\cite{CNG18}, whose performance scales to hundreds of nodes; and (iii)~Polygraph~\cite{CGG19}, an accountable blockchain prototype that neither excludes nor includes replicas, and whose verification technique is not accountable. For all three, we deployed the original code from the corresponding authors~\cite{YMR19,CNG18,CGG19} in two distributed settings of c4.xlarge Amazon Web Services (AWS) instances equipped with 4 vCPUs and 7.5\,GiB of memory: (i)~a LAN with up to 100 machines and (ii)~a WAN with up to 90 machines.
We evaluate LLB\xspace with a number of failures $f$ up to $\lceil \frac{2n}{3} \rceil - 1$; unless specified otherwise, we fix $f-q=\lceil 5n/9\rceil-1$ and $q=0$. All error bars represent the 95\% confidence intervals and the plotted values are averaged over 3 to 5 runs. All transactions are \textasciitilde{}400-byte Bitcoin transactions with ECDSA signatures~\cite{Nak08}. \begin{figure}[t] \centering \includegraphics[height=6em]{./plots_eurosys/plot_decisions_compared} \caption{Throughput of LLB\xspace (decisions and confirmations) compared to that of Polygraph~\cite{CGG19}, HotStuff~\cite{YMR19} and the Red Belly Blockchain~\cite{CNG18}.} \label{fig:fig1} \vspace{-1em} \end{figure} \subsection{LLB\xspace vs. HotStuff, Red Belly and Polygraph} Figure~\ref{fig:fig1} compares the performance of LLB\xspace, HotStuff, Red Belly Blockchain and Polygraph deployed over 5 availability zones across 2 continents (California, Oregon, Ohio, Frankfurt and Ireland), exactly like the Polygraph experiments~\cite{CGG19}. For LLB\xspace, we only represent the decision throughput, which reaches $16,626$ tx/sec at $n=90$, as the confirmation throughput is similar ($16,492$ tx/sec). As only LLB\xspace tolerates $f\geq n/3$, we fix $f=0$. First, Red Belly Blockchain offers the highest throughput. As expected, it outperforms LLB\xspace due to its lack of accountability: it does not require messages to piggyback the certificates used to detect PoFs. Both solutions solve SBC, so they decide more transactions as the number of proposals increases, and both use the same batch size of $10,000$ transactions per proposal. As a result, ASMR\xspace scales well: the cost of tolerating $f\geq n/3$ failures even appears negligible at 90 replicas. Second, HotStuff offers the lowest throughput even though it does not verify transactions. Note that HotStuff is benchmarked with its dedicated clients in their default configuration: clients transmit each proposal to all servers, which saves bandwidth by having servers exchange only a digest of each transaction. This performance is explained by the fact that HotStuff decides only one proposal per consensus instance, regardless of the number of submitted transactions, which is confirmed by previous observations~\cite{VG19} and~\cite[Sect.~8.3]{YMR19b}. By contrast, LLB\xspace becomes faster as the system size increases, outperforming HotStuff by $5.6\times$ at $n=90$. Finally, Polygraph is faster than LLB\xspace at small scale, because Polygraph's distributed verification and reliable broadcast implementations~\cite{CGG19} are not accountable, performing fewer verifications. After 40 nodes, Polygraph becomes slower than LLB\xspace because of our optimizations: e.g., its RSA signatures are larger than our ECDSA signatures, consuming more bandwidth and more CPU.
\begin{figure}[t] \includegraphics[height=9.6em]{./plots_eurosys/plot_recovery_realistic_delays_rbbcast_too.pdf} \caption{Disagreeing decisions per number of replicas for a variety of uniform delays, as well as for delays generated from a Gamma distribution and from a distribution that draws from observed AWS latencies, when equivocating while voting for a decision (top) and when equivocating while broadcasting the proposals (bottom), for $f=\lceil 5n/9\rceil-1$ and $q=0$.} \label{fig:fig2} \vspace{-1em} \end{figure} \subsection{Scalability of LLB\xspace despite each coalition attack} To evaluate LLB\xspace under failures, we implemented both coalition attacks: the reliable broadcast attack and the binary consensus attack (\cref{sec:pb}), with $f-q=\lceil 5n/9\rceil-1$ deceitful replicas and $q=0$ benign replicas. To disrupt communications between partitions of honest replicas, we inject random communication delays between partitions based on uniform and Gamma distributions, as well as on the AWS delays obtained in previously published measurement traces~\cite{mukherjee1992dynamics,crovella1995dynamic,CNG18}. (Deceitful replicas communicate normally with each partition.) Fig.~\ref{fig:fig2}(top) depicts the number of disagreements, measured as the number of distinct proposals decided by honest replicas, under the binary consensus attack. First, we select uniformly distributed delays between the two partitions with means as high as 200, 500 and 1000 milliseconds. Then, we select delays following a Gamma distribution with parameters taken from~\cite{mukherjee1992dynamics,crovella1995dynamic} and a distribution that randomly samples the fixed latencies previously measured between AWS regions~\cite{CNG18}. We automatically calculate the maximum number of branches that the deceitful replicas can create (i.e., 3 branches for $f-q<5n/9$), we then create one partition of honest replicas for each branch, and we apply these delays between any pair of partitions. Interestingly, we observe that our agreement property is scalable: the greater the number of replicas (for the same relative deceitful ratio), the harder it is for attackers to cause disagreements. This scalability phenomenon is due to an unavoidable increase of the communication latency between attackers as the scale increases, which gives relatively more time for the partitions of honest replicas to detect the deceitful replicas that have equivocated and to construct PoFs. This latency increase translates into a higher chance of detecting PoFs before deciding, which automatically cancels the upcoming decision and hence limits the number of disagreements. With more realistic network delays (Gamma distribution and AWS latencies), which are lower in expectation than the uniform delays, deceitful replicas can barely generate a single disagreement, even less so as the number of replicas increases. This confirms the scalability of our system. Fig.~\ref{fig:fig2}(bottom) depicts the number of disagreements under the reliable broadcast attack. Recall that this attack consists of having deceitful replicas equivocate when expected to reliably broadcast the same proposal, hence sending different proposals to different partitions. We can observe that the number of disagreements is substantially higher during this attack than during the previous one; however, it drops faster as the system grows, because the attackers expose themselves earlier than in the binary consensus attack.
\begin{figure}[t] \centering \includegraphics[height=10.6em]{./plots_eurosys/plot_100_f_changing_recovery_worst_case_uniforms_pdaxis.pdf} \caption{Total disagreements (\#distinct decisions) per: deceitful faults (top) for a setting with 100 replicas and a 2-second uniform delay between partitions, and number of replicas (bottom) for a variety of uniform delays.} \label{fig:fig3new} \vspace{-1em} \end{figure} \begin{figure*}[t] \begin{subfigure}{0.495\columnwidth} \includegraphics[width=\textwidth]{./plots_eurosys/plot_detection_pdxaxis.pdf} \end{subfigure} \begin{subfigure}{0.495\columnwidth} \includegraphics[width=\textwidth]{./plots_eurosys/plot_recovery_time_new_pdxaxis.pdf} \end{subfigure} \begin{subfigure}{0.495\columnwidth} \includegraphics[width=\textwidth]{./plots_eurosys/plot_join_time_pdxaxis.pdf} \end{subfigure} \begin{subfigure}{0.495\columnwidth} \includegraphics[width=\textwidth]{./plots_eurosys/plot_catchup_time_nxaxis.pdf} \end{subfigure} \caption{Time to detect $\ceil{\frac{n}{3}}$ deceitful replicas (left), exclude them (center-left), include new replicas (center-right), per delay distribution and number of replicas; and catch-up time per number of blocks and replicas (right), with $f=\lceil 5n/9\rceil-1$.} \label{fig:fig7} \vspace{-0.5em} \end{figure*} \subsection{Disagreements due to failure rates and delays} To evaluate the impact of even larger coalitions and delays on LLB\xspace, we measure the number of disagreements as we increase the deceitful ratio and the partition delays in systems of 20 to 100 replicas. Note that these delays could theoretically be achieved with man-in-the-middle attacks, but such attacks are notoriously difficult on real blockchains due to direct peering between the autonomous systems of mining pools~\cite{EGJ18}. Figure~\ref{fig:fig3new}(top) depicts the number of disagreements as the coalition size, and hence the number of branches created, increases with a 2-second uniform delay between partitions. As expected, the larger the coalition, the more disagreements, with up to 23 disagreements for $f=\lceil \frac{2n}{3}\rceil-1$. This is due to deceitful replicas speeding up the reliable broadcast by skipping verifications and rapidly gathering the signatures for each partition. Although omitted in the figure, the same result occurs for the reliable broadcast attack as the deceitful ratio increases. While LLB\xspace is quite resilient to attacks under realistic but not catastrophic delays, attackers can wait and try to attack when the network collapses for a few seconds between regions. Fig.~\ref{fig:fig3new}(bottom) shows the number of proposals on which attackers can cause disagreement in such a catastrophic scenario, reaching up to 52 disagreeing proposals for a uniform delay of 10 seconds between partitions of honest replicas under the binary consensus attack. Although omitted here for the sake of space, the reliable broadcast attack reaches up to 165 disagreeing proposals with a 5-second uniform delay. \subsection{Latencies of block merge and membership change} To gain a deeper understanding of the causes of LLB\xspace delays, we measured the time needed to merge blocks and to change the membership by replacing deceitful replicas with new ones.
\begin{table}[h] \footnotesize{ \setlength{\tabcolsep}{17pt} \centering \begin{tabular}{l|rrr} \toprule Block size (txs) & 100 & 1000 & 10000 \\ Time (ms) & 0.5 & 4.2 & 41.3 \\ \bottomrule \end{tabular} } \caption{Time to locally merge two blocks of different sizes with all transactions conflicting.} \label{tab:01} \vspace{-2em} \end{table} Table~\ref{tab:01} shows the worst-case times to locally merge block proposals for different numbers of transactions per block, assuming all transactions conflict. These are worst-case times because replicas can merge proposals that they receive concurrently (i.e., without halting consensus). It is clear that this time to merge blocks locally is negligible compared to the time it takes to run the consensus algorithm. Figure~\ref{fig:fig7} shows the time to detect $f_d \geq \ceil{\frac{n}{3}}$ deceitful replicas (left), to run the exclusion consensus (center-left), and to run the inclusion consensus (center-right), for a variety of delays and numbers of replicas. The time to detect reflects the time from the start of the attack until honest replicas detect the attack: if the first $\ceil{\frac{n}{3}}$ deceitful replicas form a coalition and create the maximum number of branches, then the times to detect the first deceitful replica and the first $\ceil{\frac{n}{3}}$ deceitful replicas overlap entirely, as they are all detected at the same time by cross-checking conflicting certificates. Moreover, the times to detect, exclude, and include increase as the communication delays increase; the time to exclude (57 seconds) is significantly larger than the time to include (21 seconds). This is because the proposals of the exclusion consensus carry PoFs, leading replicas to execute a time-consuming cryptographic verification. With shorter communication delays, performance becomes practical. Finally, Figure~\ref{fig:fig7} (right) depicts the time to catch up depending on the number of proposals (i.e., blocks). As expected, this time increases linearly with the number of replicas, because the catch-up requires verifying larger certificates. \section{Related Work}\label{sec:rw} Several works have tried to circumvent the upper bound on the number of Byzantine failures~\cite{PSL80} to reach agreement. \vspace{-0.6em} \paragraph{Permissionless blockchains.} LLB\xspace is open permissioned, as it requires an initial list of replicas that have the permission to propose to the consensus. This is different from permissionless blockchains~\cite{Nak08,Woo15}, whose replicas can solve a cryptopuzzle on the fly to propose a block to the consensus. LLB\xspace builds upon membership change, or view change (e.g.,~\cite{LMZ10}), in order to change the permissions at runtime. In-production blockchains, like Hyperledger Fabric~\cite{ABB18}, also support dynamic membership\footnote{\url{https://hyperledger-fabric.readthedocs.io/en/latest/glossary.html\#dynamic-membership}.}. Other membership change protocols adjust permissions without disrupting the blockchain service under synchrony~\cite{PS17,AMN17} or partial synchrony~\cite{VG19b,BAS20}. \paragraph{Slashing.} Slashing stakes has become a popular way to disincentivize blockchain participants from misbehaving. The Casper~\cite{Casper} algorithm incurs a penalty in case of double votes. While safe, Casper does not ensure consensus termination even when $f<n/3$, as one replica can always start a new consensus instance in a later layer of its directed acyclic graph~\cite{Casper}.
Tendermint~\cite{Buc16} aims at slashing replicas, but its consensus is not accountable~\cite{BKM18}. SUNDR~\cite{LKM04} assumes honest clients that can communicate directly in order to detect some Byzantine failures. Polygraph~\cite{CGG19} solves accountable consensus but does not provide slashing. \paragraph{Weak guarantees.} It was shown that consensus is necessary to implement a cryptocurrency when different replicas can issue conflicting transactions~\cite{GKM19}. Assuming that they cannot, one may simply use reliable broadcast~\cite{CGK20}. Zeno~\cite{singh2009zeno} guarantees eventual consistency by decoupling requests into weak requests (i.e., requests that may suffer reordering) and strong requests. It provides availability in periods where $f$ goes beyond $n/3$ by committing only weak requests, which can later be reordered if a merge procedure is required. Note that LLB\xspace could not be built upon Zeno, because Zeno requires wrongly ordered transactions to be rolled back, whereas blockchain transactions can have irrevocable side effects, like the shipping of goods to a buyer. BFT2F~\cite{Li} offers fork* consistency, which forces the adversary to keep correct clients in one fork only, while also allowing accountability. These proposals do not aim at excluding deceitful replicas. \vspace{-0.6em} \paragraph{Failure model.} Distributed systems within closed networks usually consider that omission faults (omitting messages) are more frequent than commission faults (sending wrong messages)~\cite{CKL09,KWQ12,LLM19}. Eve~\cite{KWQ12} upper-bounds the number of commission faults in order to respond correctly to database requests. Depot~\cite{LLM19} offers a cloud service with a guarantee stronger than fork-join consistency when all faults are omission faults. This is because traditional datacenters build upon an isolated network whose rare commission faults can be due to disk errors~\cite{CKL09}. The BAR model~\cite{AAC05}, with its Byzantine, altruistic and rational classification, is motivated by multiple administrative domains and corresponds better to open blockchain networks, but it does not distinguish benign from deceitful faults. \vspace{-0.6em} \paragraph{Strengthening fault tolerance.} Various non-blockchain systems have already refined the types of failures to strengthen guarantees. Upright~\cite{CKL09} proposes a system of $n=2u+r+1$ replicas, where $u$ and $r$ are the numbers of tolerated commission and omission faults, respectively. It can either tolerate $n/3$ commission faults or $n/2$ omission faults. Flexible BFT~\cite{MNR19} offers a failure model and theoretical results to support $\lceil 2n/3\rceil -1$ alive-but-corrupt replicas. An alive-but-corrupt replica only behaves maliciously when it can violate safety, but behaves correctly otherwise. This assumption is too strong for our needs: for example, it is difficult for a replica to detect whether its coalition is large enough for its attack to succeed. Some hybrid failure models tolerate crash failures and Byzantine failures but prevent Byzantine failures from partitioning the network~\cite{LVC16}. Others aim at guaranteeing that well-behaved quorums are responsive~\cite{LLM19} or combine crash-recovery with Byzantine behaviors to implement reliable broadcast~\cite{BC03}. \section{Conclusion}\label{sec:conclusion} In this paper, we proposed LLB\xspace, an open permissioned blockchain whose membership change makes it converge, despite up to $\lceil 2n/3\rceil-1$ deceitful failures, to a state where it can solve consensus deterministically.
We show that the cost of this convergence does not prevent LLB\xspace from scaling and performing almost as fast as an efficient blockchain, even outperforming a recent state machine replication in a geo-distributed network. \section{Application to Zero-Loss Payment System}\label{ssec:payment} In this section, we describe how LLB\xspace can be used to implement a \emph{zero-loss payment system} in which no honest replica loses any coin. The key idea is to require the consensus replicas to deposit a sufficient amount of coins so that, in case of an attack, the coins of deceitful replicas can be spent to compensate any honest replica loss. In order to measure the expected impact of a coalition attack succeeding with probability $\rho$ in forking LLB\xspace by leading a consensus to disagreement, we first need to make the following assumptions: \begin{enumerate} \item {\bf Fungible assets.} We assume that users can transfer assets (like coins) that are \emph{fungible}, in that one unit is interchangeable and indistinguishable from another of the same value. An example of a fungible asset is a cryptocurrency. We assume that an honest client does not issue conflicting transactions, i.e., concurrent transactions that withdraw from the same account. \item {\bf Deposit refund per block.} To limit the impact of one successful double spending on a block, we upper-bound the amount of deposited coins that can be refunded per block. We denote by $\ell$ the maximum amount of deposit per account that can be refunded per block. This limit implies that transferring any amount \textcent{} can be done in one block, but there must be an associated deposit of the same amount that cannot be refunded in fewer than $m=\text{\textcent}/\ell$ blocks---a portion of the deposit is refunded in each of these blocks. We call $m$ the \emph{depth} of the attack. We show below that we ensure zero loss when $m\geq \frac{a\rho}{1-\rho}$, where $a$ is the number of branches. \item {\bf Network control restriction.} Once Byzantine replicas select the disjoint subsets of honest replicas to double spend (i.e., the partitions), we need to prevent Byzantine replicas from communicating infinitely faster than honest replicas in different partitions. More formally, let $X_1$ (resp. $X_2$) be the random variable that indicates the time it takes for a message between two replicas within the same partition (resp. two honest replicas from different partitions). We have $E(X_1) / E(X_2) > \varepsilon$ for some $\varepsilon > 0$. Note that the definition of $X_1$ also implies that it is the random variable of the communication time of either two honest replicas of the same partition or two Byzantine replicas. Notice that this probabilistic synchrony assumption is similar to that of Bitcoin and other blockchains that guarantee exponentially fast convergence, a result that is no different in LLB\xspace under the same assumptions. Nevertheless, the following analysis focuses on the attack on each consensus instance, counting a disagreement as successful if there is a fork in a single consensus instance, even for a short period of time. \end{enumerate} \vspace{-0.6em} \paragraph{Theoretical analysis.} We show that attackers always lose at least as much as they manage to steal, leading to zero loss. We consider that a membership change either starts before a disagreement occurs or does not start at all, which is safer than the general case.
Out of one attack attempt, the attacker may gain $\mathcal{G}$ coins by forking or lose $\mathcal{P}$ coins as a punishment from the system. In case of a successful attack, the system compensates the coin losses using the deposited coins. The attack represents a Bernoulli trial that succeeds with probability $\rho$, which can be derived from $\varepsilon$. The random variable $Y$ measures the number of attempts for the attack to be successful and follows a geometric distribution with mean $E(Y)=\frac{1-\hat{\rho}}{\hat{\rho}}$, where $\hat{\rho} = 1 - \rho$ is the probability that the attack fails. The gain of an attack depends on the amount of coins of the coalition spent in the $a$ branches of a fork. That is, the maximum gain of a successful attack is $\mathfrak{G} \cdot (a-1)$, where $\mathfrak{G}$ is the sum of all assets of the attacker coalition. We define the function $\alpha:[0, m]\cap \mathds{Z} \rightarrow [0,1]$ that, given a number of consensus instances, returns the percentage of the total gain that is refunded in that number of instances. We consider that an attack tries to steal an amount divisible by $\ell$, thus $\alpha(i)=\frac{i}{m}$. The expected gain and punishment for the attackers in a disagreement attempt are as follows: \begin{align*} \mathcal{G}(\hat{\rho}) =& (a-1)\cdot \bigg(\mathds{P}(Y> m)\cdot \mathfrak{G} +\sum_{i=0}^{m} \mathds{P}(Y=i)\cdot \alpha(i) \cdot \mathfrak{G}\bigg),\\ \mathcal{P}(\hat{\rho}) =& \sum_{i=0}^{m} \mathds{P}(Y=i)\cdot (1-\alpha(i)) \cdot \mathfrak{G}. \end{align*} We refer to the expected \textit{deposit flux} per attack attempt as the difference $\Delta=\mathcal{P}(\hat{\rho})-\mathcal{G}(\hat{\rho})$ between the punishment and the gain from an attack. Therefore, the deposit flux is: $$\Delta=\bigg(1-a\cdot \big(\sum_{i=0}^{m} \mathds{P}(Y=i)\cdot \alpha(i)+\rho^{m+1}\big)\bigg)\mathfrak{G}=g(a,\rho,m)\mathfrak{G}.$$ If $\Delta< 0$ then a cost of $\mathcal{G}(\hat{\rho})-\mathcal{P}(\hat{\rho})$ is incurred by the system; otherwise the punishment is enough to fund the deposit, so we have to ensure that $\Delta\geq 0$, hence we are interested in $g(a,\rho,m)\geq 0$: $$1-a\cdot \bigg(\frac{(1-\rho)}{m}\cdot \frac{\rho\big(m\rho^{m+1}-(m+1)\rho^{m}+1\big)}{(\rho-1)^2} \bigg)- a\cdot \rho^{m+1}\geq 0.$$ Choosing $m$ such that $m\geq \frac{a\rho}{1-\rho}$, we have: $$1-\big(m\rho^{m+1}-(m+1)\rho^{m}+1 \big)- a\cdot \rho^{m+1}\geq 0 \iff 1 \geq \rho,$$ which always holds for any $a \geq 2$, hence guaranteeing that $\Delta \geq 0$. We can conclude that LLB\xspace implements a zero-loss payment system if the total amount to be refunded per account from the deposit in a consensus instance is limited to $\ell \leq \text{\textcent}\frac{1-\rho}{a\rho}$, with $m=\frac{\text{\textcent}}{\ell}$, since the remaining deposited funds will always be at least as much as what the attackers manage to steal. If $\text{\textcent}=\mathds{C}$, then the attackers cannot even steal the entire circulating supply, provided $\ell$ and $m$ are chosen accordingly. The complete proof is given in~\cref{app:analysis}. \begin{theorem}[Zero Loss Payment System]\label{thm:zeroloss} Given $\rho$ the probability of success of an attack, if $m\geq \frac{a\rho}{1-\rho}$ then LLB\xspace implements a zero-loss payment system. \end{theorem} \paragraph{Simulating the depth and width of a coalition attack.} Considering a deceitful ratio $\delta=\frac{f-q}{n}$, one can derive the maximum number of branches from the bound $a\leq\frac{n-(f-q)}{\ceil{2n/3}-(f-q)}$~\cite{singh2009zeno}. Thus, given a deceitful ratio $\delta$, it is possible to estimate the maximum probability of an attack succeeding for a particular attack depth, or vice versa. For example, for a deceitful ratio $\delta=0.5$ (hence $a=3$) and a probability $\rho=0.5$, fixing the depth of the attack to $m=3$ blocks already guarantees zero loss, but increasing $\rho$ to $0.9$ requires a depth of at least $m=27$ blocks. Note that, whereas $m$ increases polynomially with $\rho$, it increases exponentially as the deceitful ratio $\delta$ approaches the asymptotic limit $2/3$, since the number of branches also increases: with $\rho=0.5$, we need $m=6$ blocks for $\delta=0.6$, $m=14$ for $\delta=0.64$, and $m=51$ for $\delta=0.66$.
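These numbers can be reproduced with the following minimal Python sketch (our illustration, not part of LLB\xspace's artifact); rounding the branch bound down to an integer is our assumption:
\begin{verbatim}
# Sketch: estimate the attack width a and depth m from the deceitful
# ratio delta and the success probability rho, using the bounds
# a <= (n-(f-q)) / (ceil(2n/3)-(f-q)) and m >= a*rho/(1-rho).
from fractions import Fraction as F
from math import ceil, floor

def width_and_depth(delta, rho, n=900):  # large n approximates ratios
    deceitful = floor(delta * n)         # f - q deceitful replicas
    a = floor((n - deceitful) / (ceil(F(2, 3) * n) - deceitful))
    m = ceil(a * rho / (1 - rho))        # depth guaranteeing zero loss
    return a, m

print(width_and_depth(F(1, 2), F(1, 2)))    # (3, 3)
print(width_and_depth(F(1, 2), F(9, 10)))   # (3, 27)
print(width_and_depth(F(3, 5), F(1, 2)))    # (6, 6)
print(width_and_depth(F(33, 50), F(1, 2)))  # (51, 51), near the 2/3 limit
\end{verbatim}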
\begin{figure}[t] \includegraphics[height=8em]{./plots_eurosys/plot_recovery_cost.pdf} \caption{Expected deposit flux $\Delta$ per uniform delay for $\ell=10\%$ of the coins to be refunded per block. $\Delta\geq 0$ indicates zero loss and `rbbcast' indicates the reliable broadcast attack.} \vspace{-1em} \label{fig:fig6} \end{figure} \vspace{-0.6em} \paragraph{Experimental evaluation of the payment system.}\label{ssec:eval-payment} Building on the experimental results of~\cref{sec:expe} and on our theoretical analysis above, Figure~\ref{fig:fig6} depicts the expected deposit flux for a variety of uniform communication delays with $f=\lceil 5n/9\rceil-1$, where $\ell$ is the maximum amount to be refunded per block. Again, we can see that the expected deposit flux increases with the number of replicas, confirming that the zero-loss property scales well. Additionally, small uniform delays yield zero loss at larger values of $\ell$. Although omitted in the figure, all delays shown lead to zero loss for at least 90 replicas with $\ell\leq 50\%$ of the total circulating supply. This means that if LLB\xspace simply returns half the deposit immediately after deciding a transaction, and the other half after the following block is decided, no honest user will lose any asset whatsoever, even if some attacks succeed. Also omitted in the figure, our experiments showed that even for a uniform delay of 5 or 10 seconds, restricting $\ell$ to $1\%$ still yields a non-negative deposit flux in the case of a binary consensus attack, while the reliable broadcast attack requires $\ell=0.3\%$. Nevertheless, if the network performs normally, LLB\xspace is expected to be gainful for a large value of $f$, and to actually benefit from attackers (i.e., obtaining more from punishing them than they can steal by attacking).
\section{Analysis of Zero Loss Payment System}\label{app:analysis} \begin{table} \setlength{\tabcolsep}{1pt} \begin{small} \begin{tabular}{ll} \toprule Not. & Definition \\ \midrule $s$ & Maximum number of transactions per block\\ $\ell$ & Maximum deposit amount refunded per account per block\\ $a$ & Maximum number of branches resulting from a disagreement \\ $\rho$ & Probability of success of the attack \\ $m$ & Number of blocks to fully refund the deposit (attack depth)\\ $T_i$ & The conflicting transaction in the $i^{th}$ branch \\ $\alpha$ & Mapping from consensus instances to percentage of total gain \\ \bottomrule \end{tabular} \caption{Notations for the analysis of a zero-loss deposit.\label{table:notations}} \end{small} \end{table} We show that the Longlasting Blockchain\xspace can be used to implement a zero-loss payment system in which no honest replica loses any coin. The key idea is to require the consensus participants to deposit a sufficient amount of coins, so that the coins of deceitful participants can be spent to compensate any honest replica loss. In order to measure the expected impact of a coalition attack succeeding with probability $\rho$ in forking LLB\xspace (Table~\ref{table:notations} lists our notations), we first need to make the following assumptions: \begin{enumerate} \item {\bf Fungible assets assumption.} We assume that participants can transfer assets that are \emph{fungible}, in that one unit is interchangeable and indistinguishable from another of the same value. An example of a fungible asset is a cryptocurrency. If assets are not fungible, we assume that there exists a function $\ms{fungibility}:\mathds{D}^a \rightarrow \mathds{C}^{a-1} \times \mathds{D}$ that, given $a$ conflicting decisions $d^{P_0},\ldots,d^{P_{a-1}}$, outputs one of them, $d^{P_r}$, together with a list of fungible assets that the remaining partitions are willing to accept as a refund for their decision being discarded from that instance on. We refer to the function $\ms{fungibility}$ as the \emph{fungibility function}. An example is a participant willing to accept the value of a product if the product was accidentally sent to a different participant. \item {\bf Deposit refund per block.} We upper-bound the amount of coins that can be refunded per block. Let $s$ be the maximum number of transactions per block. Note that such a limit is imposed by classic blockchain systems~\cite{Nak08,Woo15} in the form of a block size~\cite{Nak08} or a gas limit~\cite{Woo15}. As the amount of coins that can be spent per transaction is bounded, it follows that the amount refunded per block is also upper-bounded. We denote by $\ell$ the maximum amount of deposit per account that can be refunded per block. This limit implies that transferring a large amount can only be done across $m$ blocks (until the corresponding portion of the deposit is fully refunded); we refer to $m$ as the \emph{depth} of the attack. We show that we ensure zero loss when $m\geq \frac{a\rho}{1-\rho}$. \item {\bf Network control restriction.} We need to prevent Byzantine replicas from communicating infinitely faster than honest replicas. To this end, we assume the ratio of the communication time of the two slowest Byzantine replicas over the communication time of the two fastest honest replicas (of different partitions) is lower-bounded in expectation. More formally, let $X_1$ (resp. $X_2$) be the random variable that indicates the time it takes for a message between two replicas within the same partition (resp. two honest replicas from different partitions). We have $E(X_1) / E(X_2) > \varepsilon$, where $\varepsilon > 0$. Note that the definition of $X_1$ also implies that it is the random variable of the communication time of either two honest replicas of the same partition or two Byzantine replicas. \end{enumerate} Below, we show that attackers will always lose at least as much as they manage to steal. \begin{theorem}[Theorem~\ref{thm:zeroloss}] Given $\rho$ the probability of success of an attack, if $m\geq \frac{a\rho}{1-\rho}$, then LLB\xspace implements a zero-loss payment system. \end{theorem} We now compute the probability $\rho$ of a coalition attack succeeding in forking LLB\xspace and the probability of failure $\hat{\rho}=1-\rho$.
Out of one attack attempt, the attacker may gain $\mathcal{G}$ coins by forking or lose $\mathcal{P}$ coins as a punishment from the system. In case of a successful attack, the system compensates the coin losses using the deposited coins. Note that $\mathcal{P}$ decreases with the duration of the attack, as attackers can launder their theft by exchanging their stolen assets for non-stolen assets. \paragraph{Expected gain.} The attack represents a Bernoulli trial that succeeds with probability $\rho$. The random variable $Y$ that measures the number of attempts for the attack to be successful thus follows a geometric distribution with mean $E(Y)=\frac{1-\hat{\rho}}{\hat{\rho}}$. The gain of an attack depends on the amount of coins of the coalition spent in the $a$ branches of a fork. That is, the maximum gain of a successful attack is $\mathfrak{G} \cdot (a-1)$, where $\mathfrak{G}$ is the sum of all assets of the attacker coalition. We define the function $\alpha:[0, m]\cap \mathds{Z} \rightarrow [0,1]$ that, given a number of consensus instances, returns the percentage of the total gain that is refunded in that number of instances. We consider that an attack tries to steal an amount divisible by $\ell$, thus $\alpha(i)=\frac{i}{m}$. \paragraph{Expected gain and punishment.} The expected gain for the attackers in a disagreement attempt is as follows: \begin{align*} \mathcal{G}(\hat{\rho}) =& (a-1)\cdot \bigg(\mathds{P}(Y> m)\cdot \mathfrak{G} +\sum_{i=0}^{m} \mathds{P}(Y=i)\cdot \alpha(i) \cdot \mathfrak{G}\bigg)\\ =&(a-1)\cdot\mathfrak{G}\cdot\bigg(\rho^{m+1}\;+\;\frac{(1-\rho)}{m} \cdot \sum_{i=0}^{m}i\rho^i\bigg)\\ =&(a-1)\cdot \mathfrak{G}\cdot \bigg( \rho^{m+1} \;+\; h(\rho) \bigg). \end{align*} We can evaluate the series by differentiating the power series $\sum_{i=0}^m\rho^i$ with respect to $\rho$ and multiplying the result by $\rho$: \begin{align*} \sum_{i=0}^{m}i\rho^i=\rho\cdot\bigg(\frac{d}{d\rho}\sum_{i=0}^m \rho^i\bigg)=\frac{\rho\big(m\rho^{m+1}-(m+1)\rho^{m}+1\big)}{(\rho-1)^2}. \end{align*} Similarly, we derive the expected punishment: \begin{align*} \mathcal{P}(\hat{\rho}) =& \sum_{i=0}^{m} \mathds{P}(Y=i)\cdot (1-\alpha(i)) \cdot \mathfrak{G}\\ =&\bigg(\sum_{i=0}^{m} \mathds{P}(Y=i)-\sum_{i=0}^{m} \mathds{P}(Y=i)\cdot \alpha(i)\bigg)\cdot \mathfrak{G}\\ =&\bigg(\sum_{i=0}^m\rho^i(1-\rho) - h(\rho)\bigg)\cdot\mathfrak{G}\\ =&\bigg((1-\rho^{m+1})-h(\rho)\bigg)\cdot \mathfrak{G}. \end{align*} \paragraph{Deposit needed for zero loss.} We refer to the expected \textit{deposit flux} per attack attempt as the difference $\Delta=\mathcal{P}(\hat{\rho})-\mathcal{G}(\hat{\rho})$ between the punishment and the gain from an attack.
Therefore, the deposit flux is: $$\Delta(\hat{\rho})=\bigg(1-a\cdot \big(h(\rho)+\rho^{m+1}\big)\bigg)\mathfrak{G}=g(a,\rho,m)\mathfrak{G}.$$ If $\Delta< 0$ then a cost of $\mathcal{G}(\hat{\rho})-\mathcal{P}(\hat{\rho})$ is incurred by the system; otherwise the punishment is enough to fund the deposit. We thus have to ensure that $\Delta\geq 0$, hence we are interested in $g(a,\rho,m)\geq 0$, which requires: $$1-a\cdot \bigg(\frac{(1-\rho)}{m}\cdot \frac{\rho\big(m\rho^{m+1}-(m+1)\rho^{m}+1\big)}{(\rho-1)^2} \bigg)- a\cdot \rho^{m+1}\geq 0.$$ Recalling that $m\geq \frac{a\rho}{1-\rho}$, we have: \begin{align*} & 1-\big(m\rho^{m+1}-(m+1)\rho^{m}+1 \big)- a\cdot \rho^{m+1}\geq 0\\ \iff &(m+1)\rho^{m}- \rho^{m+1}\big(a+m\big)\geq 0\\ \iff &(m+1)-\rho\big(a +m\big)\geq 0\\ \iff &m+1 \geq m\rho+a\rho\\ \iff & 1+\frac{1}{m}\geq \rho +\frac{a\rho}{m}\\ \iff & 1 + \frac{1-\rho}{a\rho}\geq \rho+1-\rho\\ \iff& 1 \geq \rho. \end{align*} This holds for $0\leq \rho < 1$ and any $a \geq 2$, hence guaranteeing that $\Delta \geq 0$. We can conclude that LLB\xspace implements a zero-loss payment system if the total amount to be refunded per account from the deposit in a consensus instance is limited to $\ell \leq \mathds{C}\frac{1-\rho}{a\rho}$, with $m=\frac{\mathds{C}}{\ell}$, since the remaining deposited funds will always be at least as much as what the attackers manage to steal. If $c=\frac{\rho}{1-\rho}$ then $m\geq a\cdot c$. To prevent any multiple disagreement up to some deceitful ratio $\delta$, and assuming that attackers hold a fraction of the total coins at most equal to their deceitful ratio, LLB\xspace guarantees zero loss if it limits the maximum amount to refund per consensus instance to: \begin{align*} \ell\leq \frac{\mathds{C}\cdot\delta}{m}. \end{align*} Since $m$ is expressed as a function of $a$, we can also express $\delta$ as a function of $a$ using Lemma~\ref{lem:branches}, calculating the value of $\ell$ that ensures zero loss when no more than $a$ branches are possible: \begin{align*} \ell\leq \mathds{C}\cdot \frac{2a-3}{(3a-3)\cdot a\cdot c}. \end{align*} Notice that the same refund limit per account and per consensus instance, $\ell\leq \frac{\mathds{C}}{6c}$, prevents losses for both 2 and 3 branches (if their probability of success is the same), since both cases yield the same bound. Additionally, to prevent losses for any fork of 2 up to 6 branches, the maximum amount to refund per consensus instance must be $\ell\leq \frac{\mathds{C}}{10c}$. In this case, one can delay confirmation by $m\geq 3c$ blocks to guarantee zero loss with up to 3 branches (i.e., $f-q< 5n/9$), or instead by $m\geq 6c$ blocks with up to 6 branches (i.e., $f-q< 3n/5$).
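As a sanity check of this derivation, the following sketch (our illustration) evaluates $\mathcal{G}$, $\mathcal{P}$ and $\Delta$ numerically from $\mathds{P}(Y=i)=\rho^i(1-\rho)$ and $\alpha(i)=i/m$, and verifies that $\Delta\geq 0$ whenever $m\geq \frac{a\rho}{1-\rho}$:
\begin{verbatim}
# Numeric sanity check of the zero-loss derivation (illustrative).
from math import ceil

def deposit_flux(a, rho, m, G=1.0):
    """Delta = P - G per attack attempt, with P(Y=i) = rho**i*(1-rho),
    alpha(i) = i/m and P(Y > m) = rho**(m+1)."""
    p = lambda i: rho**i * (1 - rho)
    tail = rho**(m + 1)
    gain = (a - 1) * (tail + sum(p(i) * i / m for i in range(m + 1))) * G
    punish = sum(p(i) * (1 - i / m) for i in range(m + 1)) * G
    return punish - gain

for a, rho in [(2, 0.5), (3, 0.5), (3, 0.9), (6, 0.5)]:
    m = ceil(a * rho / (1 - rho))  # depth required by the theorem
    assert deposit_flux(a, rho, m) >= 0
print("Delta >= 0 for all tested (a, rho, m)")
\end{verbatim}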
\section{Proof of Correctness}\label{app:proofs} In this section, we show the properties that can be satisfied depending on the size of the adversary, summarized in Table~\ref{table:thresholds}. \paragraph{Upper-bounding the number of fork branches.} In the lemma below, we compute the number $a$ of branches that Byzantine replicas can create when causing a fork, depending on the assumed size of the adversary, where $a=1$ is the normal case with no disagreement. We show that, for the case $t_0n=\lceil\frac{n}{3}\rceil$ (i.e., the consensus tolerates up to $\lceil\frac{n}{3}\rceil-1$ faults), if $f<n/3$ then no fork is possible (i.e., only one branch exists and $a=1$), while 2 and 3 branches are possible for $f<n/2$ and $f<5n/9$, respectively. The maximum of $\lceil \frac{n}{3}\rceil$ branches is reached for $f<2n/3$, in which case there is only one honest replica in each branch, assuming that all Byzantine replicas are deceitful. Previous work already showed this result~\cite{singh2009zeno,Li}; however, our proof will be useful for the following theorems. \begin{lemma} \statement The maximum number of branches that the Byzantine replicas can create in a fork is $a\leq\frac{n-(f-q)}{\ceil{(1-t_0)n}-(f-q)}$, for $(f-q)\geq \frac{(a(1-t_0)-1)n}{a-1}$. \label{lem:branches} \end{lemma} \begin{proof} It is easy to see that the maximum is obtained with one honest replica in each branch, which clearly occurs for $f-q=(1-t_0)n$, and this number of branches is $a=t_0n+1$. Let $|C| = n-(f-q)$ be the number of replicas that are not deceitful, and let $a$ be the number of branches in a fork, with $|C|=n-(f-q)=ax+r$ for some integers $x$ and $r<a$. W.l.o.g.\ we assume $|C|$ to be divisible by $a$, i.e., $r=0$. Then, for the attackers to be able to create $a$ branches, the following must hold: $$\begin{cases} (f-q)+x\,\geq\, (1-t_0)n,\\ f-q \leq\,\ceil{(1-t_0)n}-1,\\ ax=n-(f-q). \end{cases}$$ Solving this system gives the following expression for $f-q$: \begin{align} f-q\geq& \frac{(a(1-t_0)-1)n}{a-1}, \end{align} and for $a$: \begin{align} a\leq&\frac{n-(f-q)}{\ceil{(1-t_0)n}-(f-q)}. \end{align} \end{proof} \paragraph{Upper-bounding the number of detected replicas.} Lemma~\ref{lem:01} gives an upper bound on the number $f_d$ of proofs-of-fraud (of distinct replicas) that an honest replica must collect in order to start a membership change. In particular, for $t_0n=\lceil \frac{n}{3}\rceil$, an honest replica can wait for up to $f_d=\lceil \frac{n}{3}\rceil$ proofs-of-fraud; waiting for more could mean that some disagreements never cause a membership change. This is possible because Polygraph~\cite{CGG19} guarantees that at least $\lceil \frac{n}{3}\rceil$ Byzantine replicas are eventually detected after a disagreement occurs. \begin{lemma} \statement If the ASMR\xspace guarantees block merge and membership change, then the number $f_d$ of proofs-of-fraud that an honest replica must collect to start the exclusion consensus must satisfy $f_d\leq (1-2t_0)n$. \label{lem:01} \end{lemma} \begin{proof} It is clear that $f_d$ must not be greater than the minimum number of Byzantine replicas required to cause a disagreement, which we know from Lemma~\ref{lem:branches} is, for $a=2$, $f-q\geq (2(1-t_0)-1)n\iff f-q\geq(1-2t_0)n$. Therefore, to guarantee that any disagreement leads honest replicas to launch a membership change that excludes $f_d$ provably deceitful replicas, the exclusion consensus must start once honest replicas collect $f_d\leq (1-2t_0)n$ proofs-of-fraud; otherwise there may be disagreements that do not lead to a membership change. \end{proof} Notice, however, that if a disagreement takes place between conflicting views on the set of honest replicas, then updated honest replicas simply merge both decided values, effectively removing more malicious replicas than $f_d$.
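A quick numeric check of these bounds (our illustration; $t_0$ is the fault-tolerance ratio of the consensus, $t_0n=\lceil n/3\rceil$ here):
\begin{verbatim}
# Check the branch bound of Lemma lem:branches for t0 = 1/3
# (illustrative sketch, exact arithmetic via fractions).
from fractions import Fraction as F
from math import ceil, floor

def max_branches(n, deceitful, t0=F(1, 3)):
    """Bound a <= (n-(f-q)) / (ceil((1-t0)*n) - (f-q))."""
    denom = ceil((1 - t0) * n) - deceitful
    return floor((n - deceitful) / denom) if denom > 0 else n

n = 900
for deceitful in (299, 449, 499):  # f-q just below n/3, n/2, 5n/9
    print(deceitful, max_branches(n, deceitful))  # 1, 2, 3 branches
# PoF threshold of Lemma lem:01: f_d <= (1-2*t0)*n = n/3 for t0 = 1/3
\end{verbatim}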
We explore further the cases in which a disagreement occurs during or after the exclude consensus in Lemma~\ref{lem:011}. \paragraph{Upper-bounding the number of branches after the recovery.} The exclude consensus guarantees that at least $f_d$ deceitful replicas are punished and removed and that the number of branches strictly decreases. However, depending on the adversarial size, an exclude consensus may reduce the number of branches to just one, or instead leave multiple branches. For example, for $t_0n=\lceil \frac{n}{3}\rceil$ (i.e., the consensus tolerates $\lceil\frac{n}{3}\rceil-1$ faults), if $f<n/2$ then the recovery merges all branches into one. However, if $f<5n/9$ then it is possible that some honest replicas still participate in an outdated branch and are not aware that the exclusion of $n/3$ replicas resulted in a new threshold $f'<n'/3$, while some other replicas participate in the updated branch. Additionally, for $f<2n/3$, there may be up to two updated branches whose participants learn that the exclude consensus led to $f'<n'/2$, and as many outdated branches as outdated replicas that do not know about the membership change. Recall however that if more than one branch progresses after a membership change, a further membership change will eventually start, punishing and removing more deceitful replicas, and leading to honest replicas eventually converging to one branch. For example, ignoring the including consensus, if $5/9\leq f/n< 2/3$ initially (cf. case with two updated branches in Fig.~\ref{fig:forks}), then after the first excluding execution we obtain $1/3 \leq f'/n' < 1/2$, and after the second excluding execution we obtain $0 \leq f''/n''<1/4$, in which case consensus is solvable (cf. case with a single branch in Fig.~\ref{fig:forks}). Notice also that one recovery is enough to guarantee eventually converging to just one updated branch for $f<5n/9$, while awareness is only possible if $f<n/2$, or, for higher $f$, by upper-bounding the number of benign failures. \begin{lemma}[Lemma~\ref{lem:011}] Consider an execution in which ASMR\xspace is about to execute a consensus instance $\Gamma^{t_0}_i$, which tolerates strictly less than $t_0\cdot n, \,t_0\in(0,1)$ failures, and a membership change that leads to a set $UP$ of honest replicas that updated their set of replicas and a set $OP$ of outdated honest replicas that did not update their set of replicas yet, meaning $n=f+|OP|+|UP|$. The maximum number $a$ of branches that faulty replicas can create is $a\leq \frac{|OP|}{\ceil{(1-t_0)n}-(f-q)}+a_{\ms{UP}}$, for $f-q\geq \frac{(a-a_{\ms{UP}})(1-t_0)n-|OP|}{a-a_{\ms{UP}}}$, where $a_{\ms{UP}}$ is the number of branches that the remaining deceitful replicas can create after the membership change. \end{lemma} \begin{proof} The number $a_{\ms{UP}}$ of branches after the membership change can be directly derived from Lemma~\ref{lem:branches}, so we focus on the outdated replicas. Since these replicas are outdated, the removed deceitful replicas can still participate alongside the $|OP|$ honest replicas, and the number of replicas that participate in the consensus of their branch is still $n$. Therefore, analogously to Lemma~\ref{lem:branches}, we explore for which values $f-q+|OP|/a_{\ms{OP}}\geq (1-t_0)n$ holds. Solving this gives: \begin{align*} a_{\ms{OP}}\leq \frac{|OP|}{\ceil{(1-t_0)n}-(f-q)} \end{align*} and also: \begin{align*} f-q\geq \frac{a_{\ms{OP}}(1-t_0)n-|OP|}{a_{\ms{OP}}}. \end{align*} Finally, substituting $a_{\ms{OP}}=a-a_{\ms{UP}}$ to account for the branches of updated replicas gives the result.
\end{proof} Note that from Lemma~\ref{lem:01} we have $f_d\leq (1-2t_0)n$. It is easy to see from Lemma~\ref{lem:011} that $a$ grows with $|OP|$, meaning that $a$ is minimized by maximizing the number of honest replicas in $UP$. Additionally, since $q\leq f-f_d$ is the number of benign failures that both the exclude and include consensus must tolerate, the minimum number of honest replicas that must participate in the exclude consensus for it to terminate is $n-f-(f-f_d)=n+f_d-2f\leq|UP|$, and thus $|OP|\leq f-f_d$. Moreover, $f-f_d$ decreases when $f_d$ increases, and thus the number of honest replicas that must participate in the membership change for it to terminate increases, also minimizing $a$. Therefore, when not specified, we assume the system waits for at least $f_d=(1-2t_0)n$ proofs-of-fraud before launching the membership change, which gives the best fault tolerance. \paragraph{Awareness.} Theorem~\ref{th:det} shows in which cases the ASMR\xspace can guarantee awareness. Informally, an honest replica can detect that no more branches are possible if the relative size of deceitful faults is smaller than a third of the remaining replicas, and enough honest replicas confirm that they agree on the remaining replicas. Notice that a disagreement on the output of a recovery still requires $f_d$ removed replicas on each branch and is eventually also merged, causing even more than $f_d$ provably deceitful replicas to be removed from the set of $n$ replicas. We thus set aside this more favourable scenario and focus on disagreements on the consensus. For example, for $t_0n=\lceil \frac{n}{3}\rceil$, if $f<n/2$ and $q<n/3$ then awareness is immediately guaranteed after one recovery terminates. \begin{theorem}[Theorem~\ref{th:det}] \statement Let $a_{\ms{OP}}$ be the possible number of branches before the last recovery ($a_{\ms{OP}}=0$ if no recovery took place yet), and let $a_{\ms{UP}}$ be the maximum possible number of branches after the last recovery that an honest replica terminated; then the ASMR\xspace guarantees awareness if and only if it guarantees that there will be a maximum number of recoveries after which the number of outdated and updated branches will eventually be $a_{\ms{OP}}<1$ and $a_{\ms{UP}}<2$. \end{theorem} \begin{proof} First we prove the `if' direction. It is easy to see that, since the total number of branches $a=a_{\ms{OP}}+a_{\ms{UP}}$ is the sum of the outdated and the updated ones, if $a_{\ms{OP}}<1$ then not even one branch is possible in the outdated partition, while $a_{\ms{UP}}<2$ allows up to 1 branch in the updated one. Now we prove the `only if' direction by contradiction. Suppose we have awareness while either $a_{\ms{OP}}\geq 1$ or $a_{\ms{UP}}\geq 2$ after all honest replicas have executed all possible excluding consensus. Since the total number of branches is $a=a_{\ms{OP}}+a_{\ms{UP}}$, $a$ must satisfy $a<2$, but that is only possible if $a_{\ms{OP}}<1$ and $a_{\ms{UP}}<2$, which is a contradiction. \end{proof} \paragraph{Recovery.} In general, LLB\xspace guarantees recovery as long as every branch of a fork has at least one honest replica participating in it, but it is only for some values of $f$ that honest replicas can be aware that the recovery has finished (awareness). We show this below. For instance, for $t_0n=\lceil \frac{n}{3}\rceil$, if $f-q<2n/3$, then after a recovery the adversary can still hold more than a third of the relative power, meaning they can still cause disagreements, including on the recovery itself.
Thanks to the exclude consensus also being accountable, the more disagreements they cause, the lower the deceitful ratio becomes, until it drops strictly below a third of the remaining replicas, at which point all honest replicas can be sure that disagreements are no longer possible. Notwithstanding, if Byzantine replicas commit benign failures once the recovery starts, honest replicas cannot confirm that their decisions are the only decisions, even if they actually are, meaning they cannot guarantee awareness even if there is in fact just one branch. \begin{theorem}[Theorem~\ref{theorem:recovery}] \statement Let the exclude consensus tolerate less than $t_0'n=n/2$ benign failures, and let $f_d=(1-2t_0)n$ be the number of PoFs correct replicas must collect to start an exclude consensus. Assume a set of deceitful replicas tried to cause disagreement. The block merge and exclude consensus will start for such disagreement and eventually terminate, and there will be a finite number of recursive exclude consensus instances after which one of them will also guarantee safety, if and only if $f-q<(1-t_0)n$ and $n>q+f$. \end{theorem} \begin{proof} First we assume $f-q<(1-t_0)n$. We consider the deceitful ratio to be big enough to cause disagreement, i.e., $f-q\geq (1-2t_0)n$. Note that the relative size of the benign faults takes its maximum when LLB\xspace excludes all deceitful faults; we therefore consider the worst case of an exclude consensus that excludes all $f-q$ deceitful faults. Let the number of faults that the exclude consensus tolerates be less than $t_0'n,\;t_0'\in(0,1)$. We need to guarantee that $t_0'(n-(f-q)) > q$ so that correct replicas can form a certificate with enough signatures; otherwise they would wait forever for replies from benign faults. Since $t_0'n=n/2$ (i.e., the recovery consensus tolerates up to $n/2-1$ benign faults), we have that $q<n-f\iff n>f+q$. Notice also from Lemma~\ref{lem:01} that the recovery starts only once a correct replica collects enough proofs-of-fraud to cause a disagreement, i.e., $f_d= (1-2t_0)n$. Therefore, if $f-q<(1-2t_0)n$ the membership change will not even be necessary. This guarantees termination of the exclude consensus, the membership change and the block merge. It is easy to see that safety is also guaranteed, thanks to the accountability of both exclude and include consensus. If safety is attacked by some remaining deceitful faults, a new membership change starts. This can happen only a finite number of times, since every new membership change excludes a non-zero number of newly discovered deceitful faults. There will thus be one last membership change in the recursive iteration that will have a deceitful ratio too small to cause disagreement, as in the case treated at the beginning of this proof, guaranteeing termination and safety. For the remainder of the `only if' direction, we already showed above that $n>q+f$ must hold. Note now that if $f-q\geq(1-t_0)n$ then deceitful replicas do not need any correct replica to output two disagreeing decisions, meaning that such a disagreement guarantees neither a block merge nor the exclusion of the provably malicious participants. \end{proof} \paragraph{Termination.} Notice however that, even if the membership change is guaranteed to terminate and up to one branch is guaranteed to progress, this does not mean that one branch will necessarily progress.
For example, for $t_0n=\lceil \frac{n}{3}\rceil$, if $f=q>n/3$, it is immediate that no disagreements are possible, but consensus will also not terminate, given the relative size of the benign faults. Notice that we can guarantee termination for greater deceitful ratios $\delta\geq 1-t_0$ at the cost of forgoing the resilience to up to $f< t_0n$ Byzantine faults. \begin{theorem}[Theorem~\ref{th:ter}] \statement If $f-q<(1-t_0)n$ and $q<\frac{t_0(n-f)}{1-t_0}$, then ASMR\xspace guarantees termination. \end{theorem} \begin{proof} We consider the ASMR\xspace consensus. Since all the $f-q$ deceitful faults might be removed during a membership change, it is necessary to guarantee that, among the $n'=n-(f-q)$ replicas remaining after a membership change, the relative size of the benign faults is not big enough to threaten termination. Therefore, as the number of benign faults tolerated is less than $t_0n'$, we have that $q<t_0n'\iff q<t_0(n-f+q)\iff q<\frac{t_0(n-f)}{1-t_0}$. At the same time, if $f-q\geq (1-t_0)n$ then the deceitful replicas do not need honest replicas to terminate, and thus correct replicas may never terminate. \end{proof} \paragraph{Confirmation of consensus.} Decisions that take place without the awareness property are suspected by honest replicas to potentially require a merge, even though they will never be undecided. Confirmations allow honest replicas to confirm that a particular decision will not require a merge with another, yet unknown, decision. In the following theorem, we show the number of replies honest replicas must collect in order to confirm their decision and discard a disagreement on that decision. For example, for $t_0n=\lceil\frac{n}{3}\rceil$, if $f<2n/3$ then an honest replica must wait to receive $n$ replies from different replicas; otherwise it cannot discard the possibility of a disagreement on that decision. \begin{theorem}[Theorem~\ref{cor:partialawareness}] \statement If an honest replica delivers $c>t_0n+(f-q)$ decisions from replicas, it will either confirm its decision or start a membership change. \end{theorem} \begin{proof} First, we consider confirmation. Suppose a process confirms a decision after delivering $c$ decisions from distinct replicas; then at least $c-(f-q)$ of the delivered decisions must come from correct replicas. Thus there are $n-(c-(f-q))-(f-q)=n-c$ correct replicas from which no decision has yet been delivered. If replicas already confirmed their decision, these remaining replicas must also eventually decide the same value. This means that the deceitful faults must not be able to mislead the remaining replicas into deciding a different value: $(f-q)+(n-c)<(1-t_0)n\iff c> (f-q)+t_0n$. Suppose now that the deceitful replicas managed to cause a disagreement. It is clear that a correct replica will not be able to confirm its decision, by the above-shown construction. Instead, when a correct replica delivers $c>(f-q)+t_0n$ certificates (i.e., decisions), at least one of the certificates will decide a different value, and thanks to Lemma~\ref{lem:01} we can guarantee that, when comparing the signatures of such a decision with those of another received certificate for a different decision, the correct replica will find enough proofs-of-fraud to start a membership change. \end{proof} \section*{Acknowledgments} This research is supported under the Australian Research Council Discovery Projects funding scheme (project number 180100496) entitled ``The Red Belly Blockchain: A Scalable Blockchain for Internet of Things''.
\bibliographystyle{plain}
\section{Introduction} \label{sec:intro} The rapidly growing adoption of Artificial Intelligence (AI) has led to the development of supervised machine learning models, in particular deep neural networks, that generate predictions of high accuracy~\cite{corr/abs-1901-04592}. While this advancement has the potential to significantly improve the state of the art in operational decision making across various business domains and processes, the underlying models are often opaque and do not provide the decision-maker with any understanding of their internal predictive mechanisms. This opaqueness in machine learning models is known as the \textit{black-box} problem. Blindly trusting predictions from opaque models might result in severe losses for businesses (and people), unfair job losses, or even negative impacts on certain societal groups (for instance, racial and gender discrimination)~\cite{corr/abs-2001-02478}. This has posed an open challenge to data scientists and business analysts: how to endow machine intelligence with capabilities to explain the underlying predictive mechanisms in a way that helps decision-makers understand and scrutinize the machine-learned predictions. The recent body of literature in machine learning has emphasised the need to interpret and explain (machine) learned predictions. Methods and techniques have been proposed for explaining black-box models, a field known as interpretable machine learning~\cite{guidotti2018} or, in a broader context, explainable AI (XAI)~\cite{lakkaraju2019}. So far, there exist two different mechanisms to address model interpretability. One is to have an interpretable model that provides transparency at three levels: the entire model, the individual components and the learning algorithm~\cite{lipton2018}. For example, both linear regression models and decision tree models are interpretable models. The other mechanism to address model interpretability is via \textit{post hoc} interpretation, in which case explanations and visualisations are extracted from a learned model, that is, after the model has been trained, and as such they are model-agnostic. This is particularly useful for generating model interpretations for those complex machine learning models (such as deep neural networks) that have low transparency and are hard to transform into an interpretable model (i.e., a `white box') due to their sophisticated internal representations. The existing post hoc interpretation techniques (see a review in~\cite{guidotti2018}) present knowledge about the various levels of impact of individual input features on the corresponding prediction. In this paper, we propose a novel approach underpinned by an extended framework of Bayesian networks for generating post hoc interpretations of a black-box predictive model, with a focus on providing interpretations for any single prediction learned by the model (known as local interpretations). We name this framework the \textit{Local Interpretation-Driven Abstract Bayesian Network} (LINDA-BN), which supports extracting a Bayesian network as an approximation (or an abstraction) of a black-box model for a specific prediction learned from any given input. We implemented our approach, applied it in the context of two well-known public datasets and analysed the results, which are made available in an open-source repository. Compared to the existing post hoc interpretation methods, the contribution of our approach is three-fold.
\begin{itemize} \item The extracted Bayesian network not only provides interpretations about \textit{what} input features contributed to the corresponding prediction; as a probabilistic graphical model, it also represents knowledge about dependencies (in the form of conditional probabilities) between input features and the prediction, thus generating interpretations about \textit{why} certain input features contributed to the prediction. \item For complex decision problems with a large number of features, the extracted Bayesian network is often too complicated to be analysed by a human. In this case, LINDA-BN supports generating a Markov blanket from the extracted Bayesian network. The Markov blanket determines the boundaries of a decision system in a statistical sense, and presents a graph structure covering a decision (e.g., a prediction), its parents, children, and the parents of the children. As such, the Markov blanket of the extracted Bayesian network provides an interpretation with a focused view on those input features that directly contributed to the corresponding prediction. \item The extracted Bayesian network enables the identification of four different rules which can inform the decision-maker about the confidence level in a given prediction. As such, the interpretations provided by our approach can help the decision-maker assess the reliability of predictions learned by a black-box model. \end{itemize} In the rest of the paper, we introduce the relevant concepts and review the related research efforts in Section~\ref{sec:backg}. We present our approach underpinned by the framework LINDA-BN in Section~\ref{sec:frame}. Next, we report the experiments and discuss the results of our analysis in Section~\ref{sec:eval}. Finally, we conclude the paper with an outlook on future work (Section 5). \section{Background and Related Work} \label{sec:backg} In this section, we present the main concepts that are used throughout our work, and review research efforts that are related to the proposed framework. \subsection{Concepts} Prior to discussing existing work that relates to our approach of providing interpretations of a \textit{black-box} machine learning model prediction, we note the following definitions: \begin{itemize} \item \textbf{Black box predictor:} An opaque machine learning model whose internals are either unknown to the observer, or known but not understandable by humans. \item \textbf{Interpretability:} The ability to extract symbolic information out of a black box that can provide meaning in terms understandable to a human \citep{doshivelez2017}. \item \textbf{Explainability:} The ability to highlight decision-relevant parts of the used representations of the algorithms and active parts in the algorithmic model that contribute either to the model accuracy on the training set or to a specific prediction for one particular observation~\citep{Holzinger19}. \end{itemize} One can see interpretability as the extraction of symbolic information from the black box (machine-level), which already needs some degree of semantics, and explainability as the conversion of this symbolic information into a human-understandable form (human-level). \subsection{Related Work} Various approaches have been proposed in the literature to address the problem of interpretability. Generally, they can be classified into two major categories: interpretable models and model-agnostic (post-hoc) methods.
Interpretable models are by design already interpretable, providing the decision-maker a transparent white-box approach for prediction. Decision trees, logistic regression, and linear regression are commonly used interpretable models. These models have been used to explain predictions of specific prediction problems~\cite{Siering2018}. Model-agnostic approaches, on the other hand, refer to deriving explanations from a black-box predictor by extracting information about the underlying mechanisms of the system. In addition, studies have focused on providing model-specific post-hoc explanations~\cite{KIMDSS2020}. The focus of our work is to build model-agnostic post-hoc methods, as they have the flexibility of being applied to any predictive model, in contrast to model-specific post-hoc approaches. To demystify predictive black-box models, we focus in this work on the widely cited post-hoc methods LIME~\cite{Ribeiro16}, SHAP~\cite{Lundberg17}, and counterfactual explanations. \subsubsection{LIME} Local Interpretable Model-agnostic Explanations (LIME)~\cite{Ribeiro16} explains the predictions of any classifier by approximating it with a locally faithful interpretable model. Hence, LIME generates local interpretations by perturbing a sample around the input vector within a local decision boundary \citep{Elshawi19,Ribeiro16}. Each feature is associated with a weight that is computed using a similarity function that measures the distances between the original instance prediction and the predictions of the sampled points in the local decision boundary. A linear regression model is then fitted to determine the local importance of each feature. LIME has been extensively applied in the literature. For instance, \citet{Stiffler18} used LIME to generate salience maps of a certain region showing which parts of the image affect how the black-box model reaches a classification for a given test image. \citet{Tan17} apply LIME to demonstrate the presence of uncertainty in the explanations, which could raise concerns about the use of the black-box model and diminish the value of the explanations. Their work demonstrates the presence of three sources of uncertainty: randomness in the sampling procedure, variation with sampling proximity, and variation in the explained model across different data points. Anchor \citep{Ribeiro18} is an extension of LIME that attempts to address some of its limitations by maximizing the likelihood that a certain feature contributes to a prediction. Anchor introduces IF-THEN rules as explanations, as well as the notion of coverage, which allows the decision-maker to understand the boundaries in which the generated explanations are valid. \subsubsection{SHAP} SHAP (SHapley Additive exPlanations) is an explanation method which uses Shapley values \cite{Shapley52} from coalitional game theory to fairly distribute the gain among players whose contributions are unequal~\citep{Lundberg17}. Shapley values are a concept in economics and game theory and consist of a method to fairly distribute the payout of a game among a set of players. One can map these game-theoretic concepts directly to an XAI approach: a game is the prediction task for a single instance; the players are the feature values of the instance that collaborate to receive the gain.
This gain consists of the difference between the Shapley value of the prediction and the average of the Shapley values of the predictions among the feature values of the instance to be explained \cite{Strumbelj13}. \citet{Strumbelj13} claim that in a coalition game it is usually assumed that $n$ players form a grand coalition that has a certain value. Given that we know how much each smaller (subset) coalition would have been worth, the goal is to distribute the value of the grand coalition among players fairly (that is, each player should receive a fair share, taking into account all sub-coalitions). \citet{Lundberg17}, on the other hand, present an explanation using SHAP values and the differences between them to estimate the gains of each feature. In order to fairly distribute the payoff amongst players in a collaborative game, SHAP makes use of two fairness properties: (1) additivity, which states that the amounts must sum up to the final game result, and (2) consistency, which states that if one player contributes more to the game, (s)he cannot get less reward. In terms of related literature, \citet{Ariza20} adopted SHAP values to assess a logistic regression model and several machine learning algorithms for granting scoring in P2P (peer-to-peer) lending; the authors point out that SHAP values can reflect dispersion, non-linearity and structural breaks in the relationships between each feature and the target variable. They concluded that SHAP can provide accurate and transparent results on the credit scoring model. \citet{Parsa20} also highlight that SHAP can bring insightful meanings to interpret prediction outcomes. For instance, one of the techniques in their model, XGBoost, is not only capable of evaluating the global importance of the impacts of features on the output of a model, but it can also extract complex and non-linear joint impacts of local features. \subsubsection{Probabilistic graphical model} The literature on interpretable methods for explainable AI based on probabilistic graphical models (PGM) is mostly dominated by models based on counterfactual reasoning to derive explanations for a specific local datapoint. The counterfactual explanation based on PGM comprises a conditional assertion whose antecedent is false and whose consequent describes how the world would have been if the antecedent had occurred. It provides interpretations as a means to point out which changes would be necessary to accomplish the desired goal, rather than supporting the understanding of why the current situation had a certain predictive outcome \citep{Wachter18}. For instance, in a scenario where a machine learning algorithm assesses whether a person should be granted a loan or not, a counterfactual explanation of \textit{why} a person did not have a loan granted could be in the form of a scenario: \textit{if your income were greater than $ \$15,000$, you would be granted a loan} \citep{Mothila20}. Unlike other explanation methods that depend on approximating an interpretable model within a perturbed decision boundary, counterfactual explanations have the strength of always being truthful to the underlying model, since they provide direct outputs of the algorithms \citep{Ribeiro16}. Counterfactual explanations are part of causal inference methods, which are based on causal reasoning and focus on the estimation of the causal effects of treatments and actions \citep{Pear19}.
In 2000, Pearl proposed a framework (the ladder of causation) that distinguishes different levels of causal relationships during causal inference. Level 1, \textit{Association}, entails the sensing of regularities or patterns in the input data, expressed as relations; it focuses on the question \textit{what}. Level 2, \textit{Intervention}, predicts the effects of deliberate actions, expressed as causal relationships. And Level 3, \textit{Counterfactuals}, involves constructing a theory of the world that explains why certain actions have specific effects and what happens in the absence of such actions \citep{Pear19}. A simple and naive approach for generating counterfactual explanations is searching by trial and error. In this approach, the feature values of the instance of interest are randomly changed, and the search stops when the desired output is predicted. The notion of a counterfactual model has been investigated by several researchers. In 2013, the counterfactual approach was proposed for the evaluation of advertisement placement in search engines~\citep{Bottou13}. \citet{johansson2016learning} claim that counterfactual thinking has been adopted in the context of machine learning applications to predict the results of several different actions, policies, and interventions using non-experimental data. Moreover, the Counterfactual Gaussian Process (CGP) approach was created by \citet{schulam2017reliable} for modelling the effects of sequences of actions on continuous time series data and facilitating the reliability of medical decisions \citep{Neto20}. Although counterfactual explanations are useful, they do not explain why a certain prediction is made. On the contrary, they assume a hypothetical scenario where the prediction would be contrary to the output for that particular datapoint. Our approach aims to use a probabilistic model to provide local explanations that give insights into the features influencing a datapoint. \section{The Local Interpretation-Driven Abstract Bayesian Network Framework} \label{sec:frame} In this section, we present our framework built upon an extended framework of Bayesian networks that can generate post hoc interpretations for a single prediction datapoint: the Local Interpretation-Driven Abstract Bayesian Network (LINDA-BN). We start with a brief introduction to Bayesian networks (Section~\ref{subsec:bn}) and structure learning (Section~\ref{subsec:learning}). Readers familiar with these concepts can proceed directly to the proposed framework (Sections~\ref{subsec:model} to~\ref{subsec:rules}). \subsection{Bayesian Networks} \label{subsec:bn} A Bayesian Network (BN) is a directed acyclic graph in which each node represents a random variable, and each edge represents a direct influence from the source node to the target node. The graph represents (in)dependence relationships between variables, and each node is associated with a conditional probability table that specifies a distribution over the values of the node given each possible joint assignment of the values of its parents~\citep{Pearl88}. Bayesian networks can represent essentially any full joint probability distribution, which can be computed using the chain rule in probability theory~\citep{russel10}. Let $\mathcal{G}$ be a BN graph over the variables $X_1, \cdots, X_n$.
We say that a probability distribution, $Pr$, over the same space factorizes according to $\mathcal{G}$, if $Pr$ can be expressed using the following equation~\cite{koller09prob}: \vspace*{-.25\baselineskip} \begin{equation} Pr( X_1, \dots, X_n ) = \prod_{i=1}^n Pr( X_i | Pa_{X_i} ). \label{eq:joint} \end{equation} In Equation~\ref{eq:joint}, $Pa_{X_i}$ corresponds to all the parent variables of $X_i$. The graph structure of the network, together with the associated factorization of the joint distribution, allows the probability distribution to be used effectively for inference (i.e., answering queries using the distribution as our model of the world). For some query $Y$ and some observed variable $e$, exact inference in Bayesian networks is given by the following equation~\citep{koller09prob}: \vspace*{-\baselineskip} \begin{equation} Pr( Y | E = e ) = \alpha Pr(Y, e) = \alpha \sum_{ w \in W } Pr(Y, e, w), \text{~~~~~with~} \alpha = \frac{1}{\sum_{y \in Y} Pr(y, e)}. \label{eq:inference} \end{equation} Each instantiation of the expression $Pr(Y = y, e)$ can be computed by summing up all joint entries that correspond to assignments consistent with $y$ and the evidence variable~$e$. The set of random variables $W$ corresponds to variables that are neither query nor evidence. The~$\alpha$ parameter specifies the normalization factor for the distribution~$Pr(Y,e)$, and this normalization factor is informed by certain assumptions made in Bayes' rule~\citep{russel10}. \subsection{Structure Learning in Bayesian Networks} \label{subsec:learning} Bayesian networks are made of two important components: a directed acyclic graph~$\mathcal{G}$ representing the network structure, and a set of probability parameters~$\Theta$ representing the conditional dependence relations. Learning a BN is a challenging problem when the network representation $\mathcal{G}$ is unknown. Given a dataset $\mathcal{D}$ with $m$ observations, the estimation of $Pr \left( \mathcal{G}, \Theta | \mathcal{D} \right)$ is composed of two steps, structure learning and parameter learning, as follows~\cite{Scutari19}: \vspace*{-.5\baselineskip} \begin{equation} Pr \left( \mathcal{G}, \Theta | \mathcal{D} \right) = \underbrace{ Pr \left( \mathcal{G} | \mathcal{D} \right) }_\text{structure learning} \cdot \underbrace{ Pr\left( \Theta| \mathcal{G}, \mathcal{D} \right). }_\text{parameter learning} \end{equation} Structure learning aims to find the directed acyclic graph~$\mathcal{G}$ by maximising $Pr(\mathcal{G} | \mathcal{D})$. Parameter learning, on the other hand, focuses on the estimation of the parameters $\Theta$ given the graph $\mathcal{G}$ obtained from structure learning. According to~\cite{Heckerman95,Heckerman95Bayesian}, considering that the parameters $\Theta$ represent independent distributions (as assumed in Na\"{i}ve Bayes), the learning process can be formalised as follows~\cite{Scutari19}: \vspace*{-\baselineskip} \begin{equation} Pr( \Theta | \mathcal{G}, \mathcal{D} ) = \prod_i Pr( \Theta_{X_i} | \Pi_{X_i}, \mathcal{D}).
\end{equation} It is important to note that structure learning is well known to be both NP-hard~\citep{Chickering94} and NP-complete~\citep{Chickering96}, since it requires evaluating the marginal likelihood \vspace*{-.5\baselineskip} \begin{equation} Pr( \mathcal{G} | \mathcal{D} ) \propto Pr( \mathcal{G} ) Pr( \mathcal{D} | \mathcal{G} ), \end{equation} \noindent which can be decomposed into: \vspace*{-\baselineskip} \begin{equation} \begin{split} Pr( \mathcal{D} | \mathcal{G} ) = \int Pr( \mathcal{D} | \mathcal{G}, \Theta) Pr(\Theta | \mathcal{G}) d\Theta ~~~~~~~~~~~~~~~~~~~ \\ ~~~~~~~~~~~~~~~~~~~~~~~~ = \prod_i \int Pr( X_i | \Pi_{X_i}, \Theta_{X_i} ) Pr( \Theta_{X_i} | \Pi_{X_i} ) d\Theta_{X_i} \end{split} \end{equation} In structure learning, the BIC score, a frequentist measure, is often used to maximise $Pr( \mathcal{G}, \Theta | \mathcal{D})$, due to its simplicity: \vspace*{-.25\baselineskip} \begin{equation} Score(\mathcal{G}, \mathcal{D}) = BIC( \mathcal{G}, \Theta | \mathcal{D} ) = \sum_i \log Pr( X_i | \Pi_{X_i}, \Theta_{X_i}) - \frac{\log(n)}{2} \left| \Theta_{X_i} \right|. \end{equation} According to~\citet{Scutari19}, structure learning via score maximisation is performed using general-purpose optimisation techniques, typically heuristics, adapted to take advantage of these properties to increase the speed of structure learning. The most common are greedy search strategies that employ local moves designed to affect only a few local distributions, so that new candidate DAGs can be scored without recomputing the full $Pr(\mathcal{D} | \mathcal{G})$. This can be done in the space of DAGs with hill climbing or tabu search~\cite{russel10}. In this paper, we opted for a greedy hill climbing approach to learn the structure $\mathcal{G}$, due to its simplicity and effective results~\cite{Heckerman95}. \subsection{Local Interpretation-Driven Abstract Bayesian Network (LINDA-BN)} \label{subsec:model} State-of-the-art techniques for constructing predictive models underpinned by machine intelligence usually adopt a `black-box' approach, where the reasoning behind the predictions remains opaque (particularly in regard to deep learning models). Consequently, the underlying predictive mechanisms remain largely incomprehensible to the decision-maker. The challenge is how to endow machine intelligence with capabilities to explain the underlying predictive mechanisms in a way that helps decision-makers understand and scrutinize the machine-learned decisions. In the following, we propose an extended framework of Bayesian networks for generating post hoc local interpretations of black-box predictive models. We name this framework the \textit{Local Interpretation-Driven Abstract Bayesian Network} (LINDA-BN). It supports extracting a Bayesian network as an approximation (or an abstraction) of a black-box model for a specific prediction learned from any given input. Note that explanations can be constructed from the graphical representations of LINDA-BN, and we will address the explanation generation component as a direction for future work. The basic idea behind the proposed framework LINDA-BN rests in three main steps: i) permutation generation, ii) Bayesian network learning, and iii) computation of the Markov blanket of the class variable (representing the result of a prediction).
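To make steps ii) and iii) concrete, the following is a minimal sketch using the open-source \texttt{pgmpy} library; the use of \texttt{pgmpy} here is our own assumption for illustration purposes (the accompanying repository may rely on different tooling), and the exact API may vary across library versions.
\begin{verbatim}
# Sketch of steps ii)-iii): learn a BN over the discretised permutations
# and extract the Markov blanket of the class variable. `perms_discr` is
# assumed to be a pandas DataFrame with one column per feature plus a
# "Class" column (illustrative names; pgmpy API as of version ~0.1.x).
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore
from pgmpy.models import BayesianNetwork

def learn_local_bn(perms_discr: pd.DataFrame, class_var: str = "Class"):
    # Greedy hill climbing over DAGs, scored with BIC (Section 3.2).
    dag = HillClimbSearch(perms_discr).estimate(
        scoring_method=BicScore(perms_discr))
    bn = BayesianNetwork(dag.edges())
    bn.fit(perms_discr)  # parameter learning (maximum likelihood)
    return bn, bn.get_markov_blanket(class_var)
\end{verbatim}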
It is important to stress that the proposed model aims to augment a decision-maker's intelligence towards a specific decision problem, providing interpretations that can either reinforce the predictions of the black box or lead to a complete distrust in these predictions (identification of misclassifications). Figure~\ref{fig:lia-bn} shows a general illustration of the proposed framework. \begin{figure}[!h] \resizebox{\columnwidth}{!} { \includegraphics{bn_model.pdf} } \caption{A general illustration of the proposed framework LINDA-BN} \label{fig:lia-bn} \end{figure} Given a vector of input features $\vec{X} = \{ x_1, x_2, ..., x_n \}$ and a black-box predictor, $\hat{y}(\vec{X})$, the goal is to introduce a set of permutations $\vec{X_i}'$ of the features of $\vec{X}$ within a permutation variance $\epsilon \in \left[ 0, 1 \right]$, in such a way that each feature is permuted using a uniform distribution over the interval $\left[ x_i - \epsilon, x_i + \epsilon \right]$. The aim is to analyse how introducing a small perturbation impacts the predictions of the black box, $\hat{y}(\vec{X_i}')$, generating a new statistical distribution describing small variations of the input vector $\vec{X}$. A Bayesian network structure is then learned from this statistical sample using a greedy hill climbing approach. Our hypothesis is the following: if the datapoint falls within the correct decision region of the black-box predictor, leading to a correct class classification, $c$, then the predictions of all the permutations, $\hat{y}(\vec{X_i}')$, should be close to certainty, i.e., favouring one of the assignments of the class variable with $Pr( Class = c~|~X_i') \approx 1$. This can strengthen the decision-maker's trust in the predictions of the black-box predictor. If, however, the datapoint, $\vec{X}$, is very close to the black box's decision boundary, then one would expect the permutations to be spread around the different regions demarcated by the decision boundary, leading to a more diversified statistical distribution of predictions and a higher uncertainty in the classification of the respective class, $Pr( Class = c ~|~ X_i') \ll 1$. Such situations have the potential to alert the decision-maker that the black-box predictor is not very certain about the classification of the given datapoint. Section~\ref{subsec:rules} elaborates on this topic.
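To make step i) concrete, the sketch below shows one possible implementation of the permutation step (illustrative Python under our own naming, not the repository's exact code); it assumes features scaled to $[0,1]$ and a black box exposing a \texttt{predict} method.
\begin{verbatim}
# Sketch of step i): uniform permutations around a single input vector x
# (a 1-D numpy array), labelled with the black-box predictions
# (names and signatures are ours, for illustration only).
import numpy as np
import pandas as pd

def generate_permutations(x, black_box, feature_names, class_var="Class",
                          eps=0.1, n_samples=300, seed=0):
    rng = np.random.default_rng(seed)
    # One uniform draw per feature in [x_i - eps, x_i + eps], clipped to
    # [0, 1] since features are assumed scaled as in standard pipelines.
    perms = rng.uniform(x - eps, x + eps, size=(n_samples, len(x)))
    perms = np.clip(perms, 0.0, 1.0)
    sample = pd.DataFrame(perms, columns=feature_names)
    sample[class_var] = black_box.predict(perms)  # label each permutation
    return sample  # statistical sample fed to the BN learning step
\end{verbatim}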
\begin{algorithm} [!h] \caption{Local Interpretation-Driven Abstract Bayesian Network Generator} \label{alg:algorithm} \begin{algorithmic}[1] \REQUIRE $local\_vec$, single vector from which we want to generate interpretations \\ ~~~~~~~~~~$black\_box$, a predictive model \\ ~~~~~~~~~~$\epsilon$, variance range to permute the features (default = 0.1) \\ ~~~~~~~~~~$n\_samples$, number of permuted samples to generate (default = 300) \\ ~~~~~~~~~~$class\_var$, string with the name of the class variable \\ \ENSURE $\mathcal{G}$, the Local Interpretable Abstract Bayesian Network\\ ~~\\ \STATE /* Generate permutations via a uniform distribution within a permutation range */ \STATE perms = \textit{GeneratePermutations}( local\_vec, black\_box, $\epsilon$, n\_samples ) \\ \STATE ~\\ \STATE /* Discretise continuous features according to the number of quartiles */ \STATE perms\_discr = \textit{DiscretisePermutations}( perms, quartiles = 4 ) \STATE ~\\ \STATE /* Learn BN from discrete permutations using a Greedy Hill Climbing Search */ \STATE bn = \textit{LearnBN\_GreedyHillClimbing}( perms\_discr ) \STATE ~\\ \STATE /* Compute BN's marginal distributions */ \STATE bn\_inf = \textit{ComputeMarginalDistributions}( bn ) \STATE ~\\ \STATE /* Compute BN's Markov blanket */ \STATE bn\_markov = \textit{ComputeMarkovBlanket}( bn, class\_var ) \STATE ~~ \IF{ $bn.nodes \leq 10$ } \RETURN $bn\_inf$ /* return full network */ \\ \ELSE \RETURN $bn\_markov $ /* return Markov blanket */ \ENDIF \STATE~~\\ \end{algorithmic} \end{algorithm} Since the network structure shows dependencies between the input features and the class variable, it is possible to extract \textit{what} features contributed to the prediction and \textit{why}, allowing a deeper understanding of the impact that the features have on the class variable, or even providing the decision-maker additional insights about the decision problem. For complex decision problems with a large number of features, the local interpretable network learned from the generated permutations is extremely complicated for a human to analyse, so a Markov blanket is returned instead, as a summary of the main variables influencing the class variable. The Markov blanket determines the boundaries of a system in a statistical sense. It includes a node's parents, children, and the parents of its children. It can be shown that a node is conditionally independent of all other nodes given values for the nodes in its Markov blanket. Hence, if a node is absent from the class attribute's Markov blanket, its value is completely irrelevant to the classification~\citep{koller09prob}. Algorithm~\ref{alg:algorithm} describes the procedure we used to generate the proposed local interpretable abstract Bayesian network. \subsection{Interpreting Graphical Representations through Reasoning} \label{subsec:repr} This section analyses how to interpret the different situations in which a random variable can influence another in the local interpretable Bayesian network model. In a common-cause structure, Figure~\ref{fig:reasoning}~(a), the local interpretable model approximates a Na\"{i}ve Bayes classifier, which means that having knowledge about the class variable will make the feature variables $X_1, X_2, \cdots, X_N$ conditionally independent, and consequently uncorrelated. This means that knowing about $X_1$ does not bring any additional information to the decision-maker.
Although human decision-makers tend to assess and interpret these structures as cause/effect relationships as a way to simplify and linearise the decision problem due to bounded rationality constraints, statistically, common-cause structures do not imply causal effects in Bayesian networks~\citep{Pearl88}. The consideration of the class variable as a prior in interpretations for a \textit{single datapoint} may indicate a high uncertainty in the statistical sample of the permuted features, suggesting that the datapoint being interpreted may be very close to the predictive black-box decision boundary (Section~\ref{subsec:rules} addresses this in more detail). \begin{figure}[h!] \resizebox{\columnwidth}{!} { \includegraphics[scale=0.1]{reasoning.pdf} } \caption{Different graph structures for probabilistic reasoning} \label{fig:reasoning} \end{figure} The other type of structure that one can often find in the local interpretable model is the $v$-structure, also called common effect (Figure~\ref{fig:reasoning}~(b)), which approximates a linear regression representation. This means that the features are marginally independent of each other and become conditionally dependent if and only if one has knowledge about the class variable; being uncertain about the class leads to an influence from the features, $X_1, X_2, \cdots, X_N$, on the class. In terms of the proposed local interpretable model, this means that the features have a direct effect on the class variable, and humans can interpret it through an abductive reasoning process. Abduction is a mode of human reasoning which was brought into prominence by the American philosopher C.S. Peirce \citep{Gabbay:Woods:2006}. In Peirce's view, abduction is an inference of the form: ``The surprising fact $C$ is observed. But if $A$ were true, $C$ would be a matter of course. Hence, there is reason to suspect that $A$ is true''. Abductive inference is thus a process of justifying an assumption, hypothesis or conjecture in producing the class of interest. Peirce states that abduction might explain a given set of data, or might facilitate observationally valid predictions, or might permit the discounting of other hypotheses. By engaging in abduction, the decision-maker interpreting the graph structure is afforded a \emph{simpler} and \emph{more compact} account \cite{Gabbay:Woods:2006}. Abduction is not a sound form of inference like deduction, and so even though the decision-maker might suspect $A$, there is a degree of uncertainty. Abduction is sometimes termed ``inference to the best explanation'', where there is no guaranteed certainty in the explanation. In other words, given a set of observations, the decision-maker uses abduction to find the simplest, most likely and most compact explanation from the graph structure. The Markov blanket of the class variable is a way of supporting the decision-maker's abductive reasoning process. \subsection{Rules for Local Interpretations} \label{subsec:rules} The graphical nature of the proposed framework LINDA-BN enables the identification of certain patterns that can help the decision-maker assess the reliability of the predictions of the black box for single datapoints. To this end, we propose a set of four rules that correspond to four different patterns that the proposed model can identify, depending on how close to the decision boundary a datapoint is.
By analysing the confidence of the interpretable model with regard to the class variable, together with the structure of the network, one can provide useful guidelines to the decision-maker that can later be used to generate human-centric and understandable explanations (which is not the focus of this work). The proposed rules to assess the confidence of the black-box predictions using the proposed framework are the following: \begin{itemize} \item \textbf{Rule 1: High confidence in predictions.} \textit{If the black box predicts a class $c$ for a given datapoint, $\vec{X}$, and the class variable is contained in a common-effect structure in $\mathcal{G}$ with a probability $Pr(Class = c) \approx 1$, then the interpretable model, $\mathcal{G}$, supports the prediction of $\vec{X}$ and its respective Markov blanket determines the most relevant features.} As mentioned in Section~\ref{subsec:repr}, common-effect structures in $\mathcal{G}$ approximate a linear regression representation in which there is a direct influence from the features on the class. When $Pr(Class = c) \approx 1$, this means that the datapoint falls in a well-defined decision region, as illustrated in Figure~\ref{fig:rule1}. Since the likelihood of the class is close to certainty, the decision-maker can make use of the class's respective Markov blanket for explanation and perform an abductive reasoning process, seeking the simplest and most likely conclusion out of the Markov blanket. \begin{figure}[!h] \centering \includegraphics[scale=0.22]{rule1.pdf} \caption{Graphical representation of a pattern representing Rule 1, a high confidence in the prediction of the black box, supported by an interpretable graph showing the most relevant features influencing the class variable.} \label{fig:rule1} \end{figure} \item \textbf{Rule 2: Unreliable predictions.} \textit{If the interpretable network, $\mathcal{G}$, has a structure where the class variable is independent of all other feature variables, that is, $ Class \perp \{ X_1, \cdots, X_N \}$, then this corresponds to an unrealistic decision scenario, because the features are uncorrelated with the class variable and providing information about them does not change the probability $Pr( Class = c )$. Thus, the classification $\hat{y}(\vec{X})$ is incorrect and it should be communicated to the decision-maker as an unreliable prediction}.\\ \begin{figure}[!h] \centering \includegraphics[scale=0.22]{rule2.pdf} \caption{Graphical representation of a pattern representing Rule 2, a distrusted prediction of the black box, supported by an interpretable graph showing that knowing information about the features does not make any change in the class variable.} \label{fig:rule2} \end{figure} Sometimes, due to problems in generalising the black-box predictor, there can be classifications that are erroneous and unrealistic. In these rare scenarios, the local interpretable model can learn, from the permuted instances of $\vec{X}$, a graphical structure in which $ Class \perp \{ X_1, \cdots, X_N \}$ (Figure~\ref{fig:rule2} shows an example). In these situations, the Markov blanket contains only the class variable, which makes it easy to identify the independence of the class variable. Moreover, it can easily be concluded that the classification $\hat{y}(\vec{X})$ is incorrect and it should be communicated to the decision-maker as an unreliable and unrealistic prediction that results from poor generalisation of the black box.
\item \textbf{Rule 3: Contrast effects.} \textit{If the black box predicts $\hat{y}(\vec{X}) = c$, and the maximum likelihood of the class variable in $\mathcal{G}$ is $Pr( Class = \bar{c})$, then there is a contradiction between the local interpretable abstract model and the prediction computed by the black box, suggesting that the datapoint is very close to the decision boundary and can be either correctly or incorrectly classified. Thus, the decision-maker should analyse the Markov blanket of the class variable representing $\vec{X}$, and assess whether the relationships between the features justify the class.} \\ \begin{figure}[!h] \centering \includegraphics[scale=0.22]{rule3.pdf} \caption{Graphical representation of a pattern representing Rule 3, a contrast effect, where the local interpretable abstract model reinforces a class that is different from the one predicted by the black box.} \label{fig:rule3} \end{figure} In situations where the datapoint is very close to a decision boundary, the permutation of the datapoint $\vec{X}$ will generate a statistical distribution within a certain neighbourhood of $\vec{X}$. Due to the complexity and non-linearity of the decision boundary, the statistical distribution can increase the likelihood $Pr( Class = \bar{c} )$, contradicting the prediction of the black box, $\hat{y}(\vec{X}) = c$. In these situations, even if the black box managed to predict $\vec{X}$ correctly, there is a high uncertainty in the prediction, and the decision-maker should be advised to assess the features of $\vec{X}$ in order to judge its reliability. Figure~\ref{fig:rule3} shows an example of a contrast effect. \item \textbf{Rule 4: Uncertainty in predictions.} \textit{If the black box predicts $\hat{y}(\vec{X}) = c$, and the probability of the class variable in $\mathcal{G}$ is $Pr( Class = c ) \ll 1$, but still with a maximum likelihood favouring class $c$, then the datapoint $\vec{X}$ falls near the decision boundary. Even if the class is in accordance with the prediction of the black box, there is an underlying uncertainty attached to its prediction. Thus, the decision-maker should analyse the Markov blanket of the class variable representing $\vec{X}$, and assess whether the relationships between the features justify the class.} \begin{figure}[!h] \centering \includegraphics[scale=0.22]{rule4.pdf} \caption{Graphical representation of a pattern representing Rule 4, uncertainty in the prediction, where the local interpretable abstract model shows that the black-box prediction is as good as flipping a coin.} \label{fig:rule4and5} \end{figure} This situation is very similar to the contrast effect (Rule 3), with the difference that the class variable in $\mathcal{G}$ is still consistent with the prediction $\hat{y}(\vec{X})$. However, the statistical distribution of the predictions of the permutations of $\vec{X}$ has a high uncertainty and does not allow the decision-maker to be fully confident in the prediction $\hat{y}(\vec{X})$. Thus, depending on the degree of uncertainty of $Pr(Class = c)$, the decision-maker should analyse the Markov blanket of the class variable representing $\vec{X}$, and assess whether the relationships between the features justify the class. Figure~\ref{fig:rule4and5} shows an example of an uncertain prediction. Although the likelihood of the variable $Class$ is in accordance with $\hat{y}(\vec{X}) = c$, the local interpretable abstract model shows full uncertainty in the prediction: the prediction is as good as flipping a coin.
\end{itemize} \section{Evaluation} \label{sec:eval} Given that there are no standard evaluation metrics for XAI~\citep{guidotti2018}, in this section we present a thorough analysis of the proposed LINDA-BN model in accordance with the rules that we put forward in Section~\ref{subsec:rules}. We performed an analysis on two well-known public datasets from the literature, namely the \textit{Pima Indians Diabetes} and the \textit{Breast Cancer Wisconsin} datasets~\citep{Piri18}, both from the \textit{UCI Machine Learning Repository}\footnote{\url{https://archive.ics.uci.edu/ml/index.php}}. We have made available a public repository with Jupyter notebooks containing the proposed model and all the experiments that we conducted for this research work: \url{https://github.com/catarina-moreira/LINDA_DSS}. In Section~\ref{sec:setup}, we present the main experimental setup for our analysis. Section~\ref{sec:params} presents an analysis of the impact of the permutation variance in the proposed LINDA model. In Section~\ref{sec:analysis}, we make a statistical analysis of the distribution of the interpretations generated by LINDA over both datasets and the different rules, together with existing interpretable approaches such as LIME~\citep{Ribeiro16} and SHAP~\citep{Lundberg17}. Finally, Section~\ref{sec:complex} describes how the proposed interpretable model performs in more complex decision scenarios. \subsection{Design of Experiments}\label{sec:setup} In order to assess the performance and interpretations generated by the proposed LINDA model, we trained a deep neural network for each of the two public well-known datasets from the literature, namely the \textit{Pima Indians Diabetes} and the \textit{Breast Cancer Wisconsin} datasets. Both datasets are highly unbalanced, and for that reason we had to balance them in order not to obtain a biased predictive model. We performed a grid search to find the best-performing neural network model. The characteristics of the models can be found in Table~\ref{tab:models}. As such, the learned models apply sophisticated internal working mechanisms and run as a black box when making predictions. \begin{table}[!h] \centering \begin{tabular}{l|c|c } \textbf{Parameters} & \textbf{Diabetes} & \textbf{Breast Cancer} \\ \hline Model Accuracy & 0.7380 & 0.9840 \\ \hline Num. Hidden Layers & 5 & 4 \\ \hline Num. Neurons per Hidden Layer & 12 & 12 \\ \hline Hidden Layer Activation Function & ReLU & ReLU \\ \hline Output Layer Activation Function & Softmax & Softmax \\ \hline \end{tabular} \caption{Deep neural network architecture found for the best-performing model in the Diabetes and the Breast Cancer datasets.} \label{tab:models} \end{table} \subsection{Analysis of the Impact of Different Permutation Variances}\label{sec:params} As in other representative interpretable models in the literature (like LIME and SHAP), LINDA-BN performs permutations within an interval in the range $\left[x_i - \epsilon, x_i + \epsilon \right]$ on the input vector's features $\vec{X} = \{ x_1, x_2, \cdots, x_N\}$ in order to generate a statistical distribution of how the predictions of the black box change with the features. To investigate the impact of the permutation variance, $\epsilon$, we performed a set of experiments for both datasets, where we varied $\epsilon \in \left[ 0, 1 \right]$ and analysed how many times the proposed interpretable model returned a structure that is consistent with the rules proposed in Section~\ref{subsec:rules}.
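This sweep can be summarised by the following minimal sketch (illustrative Python; \texttt{run\_linda} and \texttt{classify\_rule} are hypothetical helpers standing in for the LINDA-BN pipeline of Algorithm~\ref{alg:algorithm} and the rule matching of Section~\ref{subsec:rules}).
\begin{verbatim}
# Sketch of the permutation-variance sweep. `run_linda` (permute + learn
# BN) and `classify_rule` (map a learned BN to Rule 1..4) are hypothetical
# helpers; only the experimental loop itself is shown.
import numpy as np
from collections import Counter

def sweep_epsilon(X, black_box, eps_grid=np.linspace(0.05, 1.0, 20)):
    results = {}
    for eps in eps_grid:
        counts = Counter()
        for x in X:                                  # all datapoints
            bn = run_linda(x, black_box, eps)        # LINDA-BN pipeline
            counts[classify_rule(bn, black_box, x)] += 1
        total = sum(counts.values())
        results[eps] = {rule: cnt / total for rule, cnt in counts.items()}
    return results           # rule frequencies per permutation variance
\end{verbatim}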
The obtained results are summarised in Figure~\ref{fig:permutations}. \begin{figure}[!h] \resizebox{\columnwidth}{!} { \includegraphics{permutation_analysis.pdf} } \caption{Impact of the permutation variance boundary in the different proposed rules. } \label{fig:permutations} \end{figure} Taking a close look at the Diabetes dataset in Figure~\ref{fig:permutations}, the results show that a low variance makes the interpretable model very confident in its interpretations, with $92\%$ of the datapoints (both from the training set and the test set) falling in rule 1 (high confidence in the prediction). This is due to the fact that, for a very small permutation interval, the probability of hitting a decision boundary is very small. This is confirmed by the low number of datapoints falling in rule 4 (uncertainty in the predictions) or rule 3 (contrast effects). When the permutation variance starts to increase, the model becomes less certain about the predictions. Consequently, the proportion of datapoints in rule 1 decreases rapidly, while rule 4 starts to capture a lot of uncertainty in the interpretations generated for the different datapoints. One can see that when the permutation variance reaches half of the feature space (note that we assume that the features of the black box are scaled between 0 and 1, as in standard machine learning applications), the uncertainty becomes so high that the interpretable model loses confidence in almost 90\% of its interpretations. Additionally, almost $80\%$ of the datapoints start falling in rule 4. In terms of the impact of the permutation variance on the breast cancer dataset, the scenario tends to be slightly different. Just like in the diabetes dataset, when the permutation variance interval is very small, the statistical distribution of the predictions learned by the proposed LINDA model falls mostly in rule 1. However, when the permutation variance starts to increase together with the uncertainty levels, we start to notice an increase of contrast effects (rule 3), rather than uncertainty in the predictions (rule 4). The reason for this lies in the effectiveness of the black box. Note that the accuracy of the deep neural network model for the diabetes dataset was $73.80 \%$, while the learned model for the breast cancer dataset achieved an accuracy of $98.40\%$. Since the majority of the datapoints fall in well-defined decision regions, when the permutation variance increases, the statistical distribution of predictions will tend to show misclassifications (a statistical distribution more concentrated on the opposite side of the decision boundary). Figure~\ref{fig:permutations} also shows that permutation variances greater than 0.2 do not further decrease the certainty of the interpretations of correctly predicted datapoints; however, they do increase the number of contrast effects (rule 3), reinforcing again the idea that larger permutation intervals make the distributions point towards the opposite side of the decision boundary.
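Throughout this analysis, each datapoint is assigned to exactly one rule by comparing the class-node marginal of the learned network with the black-box prediction. The following is a minimal sketch consistent with the rule definitions of Section~\ref{subsec:rules}; the confidence threshold and the boolean independence flag are illustrative stand-ins for the actual structural tests on the learned network:
\begin{verbatim}
def assign_rule(class_marginal, predicted_class, class_is_independent,
                high_confidence=0.9):
    """Map the class-node marginal Pr(Class) of the local Bayesian network
    (a dict class -> probability) and the black-box prediction to a rule."""
    if class_is_independent:          # rule 2: Class disconnected from features
        return 2
    bn_class = max(class_marginal, key=class_marginal.get)
    if bn_class != predicted_class:   # rule 3: contrast effect
        return 3
    if class_marginal[bn_class] >= high_confidence:
        return 1                      # rule 1: high confidence in the prediction
    return 4                          # rule 4: agreement, but high uncertainty
\end{verbatim}
For instance, a marginal of $\{0.55,\,0.45\}$ that agrees with the black-box class is assigned to rule 4, matching the coin-flip situation of Figure~\ref{fig:rule4and5}.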
\begin{table}[!h] \resizebox{\columnwidth}{!} { \begin{tabular}{l|c||c|c|c|c||c|c|c|c} \multicolumn{2}{c}{\textbf{}} & \multicolumn{4}{|c|}{\textbf{DIABETES}} & \multicolumn{4}{|c}{\textbf{BREAST CANCER}} \\ \hline \multicolumn{1}{c|}{\textbf{Rules}} & \textbf{Set} & \multicolumn{1}{c|}{\textbf{TP}} & \multicolumn{1}{c|}{\textbf{TN}} & \multicolumn{1}{c|}{\textbf{FP}} & \multicolumn{1}{c|}{\textbf{FN}} & \multicolumn{1}{|c|}{\textbf{TP}} & \multicolumn{1}{c|}{\textbf{TN}} & \multicolumn{1}{c|}{\textbf{FP}} & \multicolumn{1}{c|}{\textbf{FN}} \\ \hline \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Rule 1}}} & \textbf{Train} & 0.8662 & 0.91 & 0.7627 & 0.7419 & 0.9931 & 0.8786 & 1.0000 & 0.1667 \\ \cline{2-10} \multicolumn{1}{c|}{} & \textbf{Test} & 0.7576 & 0.96 & 0.57 & 0.8571 & 1.0000 & 0.8286 & 1.0000 & 0.0000 \\ \hline \hline \multirow{2}{*}{\textbf{Rule 2}} & \textbf{Train} & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \cline{2-10} & \textbf{Test} & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline \hline \multirow{2}{*}{\textbf{Rule 3}} & \textbf{Train} & 0.0828 & 0.016 & 0.1186 & 0.0645 &0.0000 & 0.0643 & 0.0000 & 0.6667 \\ \cline{2-10} & \textbf{Test} & 0.1818 & 0.0000 & 0.14 & 0.0000 & 0.0000 & 0.1143 & 0.0000 & 0.0000 \\ \hline \hline \multirow{2}{*}{\textbf{Rule 4}} & \textbf{Train} & 0.0509 & 0.0787 & 0.1186 & 0.1935 & 0.0069 & 0.0571 & 0.0000 & 0.1667 \\ \cline{2-10} & \textbf{Test} & 0.0606 & 0.04 & 0.29 & 0.1429 & 0.0000 & 0.0571 & 0.0000 & 0.0000 \\ \hline \end{tabular} } \caption{Overview of the distribution of the datapoints for the Diabetes and Breast Cancer datasets over the proposed rules using LINDA in order to determine the confidence of the computed interpretations.} \label{tab:results} \end{table} In order to obtain interpretations that are both highly confident and able to flag possible misclassifications, we set the permutation variance to $0.1$ for the remaining parts of this analysis. \subsection{Analysis of Rules for Local Interpretations}\label{sec:analysis} In this section, we analyse the impact of the proposed rules on the different classification outcomes: true positives, true negatives, false positives, and false negatives. The goal of this analysis is to understand whether the proposed rules can give the decision-maker some insight into whether a given prediction is a correct classification or a possible misclassification. Table~\ref{tab:results} shows the percentage of datapoints over the different proposed rules for both the diabetes and breast cancer datasets, using $\epsilon = 0.1$. \begin{itemize} \item \textbf{Correct classifications mostly coincide with rule 1, leading to highly confident predictions}. For the breast cancer dataset, for instance, nearly all datapoints classified as true positives, and most of the true negatives, were categorised as rule 1 with highly confident interpretations. For the diabetes dataset, since the black box had a poorer performance, the percentage of correctly classified datapoints is smaller. Still, 86\% of the true positives in the training set fell under rule 1, and 75\% in the test set. Regarding the true negatives, more than 90\% of the datapoints had interpretations supporting a true classification of a non-diabetes case.
When compared with interpretations from state-of-the-art algorithms, one can see that both LIME and SHAP also tend to reinforce the features that are contributing positively to the class diabetes. Figure~\ref{fig:rule1_comp} shows an example of an interpretation of a correctly classified datapoint ($\epsilon = 0.1$) and the respective interpretations for LIME and SHAP. For the case of misclassifications, there was a significant number of datapoints among the false positives and the false negatives that were also categorised as rule 1. In these examples, the interpretable model could not provide significant insight into why there was a misclassification. \begin{figure}[H] \resizebox{\columnwidth}{!} { \includegraphics{Rule1_comp.pdf} } \caption{Correct prediction with high confidence (a true positive).} \label{fig:rule1_comp} \end{figure} \item \textbf{Nonexistence of rule 2.} As illustrated in Figure~\ref{fig:permutations}, rule 2, which corresponds to the cases where the class variable is independent of the features, is extremely rare. This rule only starts to emerge for permutation variances greater than 0.2 in the diabetes dataset, and is nonexistent in the breast cancer dataset. This suggests that a permutation variance of 0.1 does not point towards unrealistic and erroneous classifications. Figure~\ref{fig:rule2_comp} shows an example of an unreliable prediction that was found in the test set of the diabetes dataset, using a variance of $\epsilon = 0.25$, and the respective LIME and SHAP interpretations. Note that this specific datapoint corresponds to a misclassification (a false positive). \begin{figure}[!h] \resizebox{\columnwidth}{!} { \includegraphics{Rule2_comp.pdf} } \caption{Misclassification in accordance with rule 2, an unreliable prediction (a false positive).} \label{fig:rule2_comp} \end{figure} \item \textbf{Contrast effects, rule 3, mostly coincide with misclassifications.} When a datapoint falls very close to the decision boundary, permuting it can produce a significant portion of the statistical distribution of predictions falling on the side of the decision boundary of the opposite class. According to the analysis in Table~\ref{tab:results}, in the Diabetes dataset, rule 3 mostly occurred in the set of false positives, that is, datapoints that were misclassified. But there is also a significant percentage of datapoints in the set of true positives. Although these points were correctly classified, the contrast effect that is captured by the proposed interpretable model might indicate that these are cases correctly classified near the decision boundary. Thus, the decision-maker should be aware that although there was a correct classification, this classification might not have been driven by the appropriate input feature values. Figure~\ref{fig:rule3_comp} shows an example of rule 3 in the Diabetes dataset for $\epsilon = 0.1$. When comparing with LIME and SHAP, one can notice that it is not very clear that this datapoint represents a misclassification.
\begin{figure}[!h] \resizebox{\columnwidth}{!} { \includegraphics{Rule3_comp.pdf} } \caption{Misclassification in accordance with rule 3, a contrast effect.} \label{fig:rule3_comp} \end{figure} \item \textbf{Uncertainty in predictions, rule 4, mostly coincides with misclassifications.} In the experiments that were performed, using $\epsilon = 0.1$, Table~\ref{tab:results} shows that the datapoints showing higher uncertainty in the likelihood of the class variable are mostly present in the sets of the false positives and the false negatives; in other words, they are concentrated among the misclassified datapoints. This is more noticeable in the diabetes dataset, in which the black-box predictor achieved an average accuracy of $73.8\%$ and therefore produced more misclassified datapoints. On the other hand, when one looks at the breast cancer dataset, since there were almost no misclassified datapoints (accuracy of 0.9840), the percentage of datapoints falling in rule 4 is nearly zero. Figure~\ref{fig:rule4_comp} illustrates an example of a false positive datapoint, in which the interpretable model shows maximum uncertainty in the class variable node. In terms of LIME and SHAP, it is hard to identify any pattern that could alert the decision-maker to a possible misclassification. \begin{figure}[!h] \resizebox{\columnwidth}{!} { \includegraphics{Rule4_comp.pdf} } \caption{Correct classification in accordance with rule 4, uncertainty in predictions (true positive).} \label{fig:rule4_comp} \end{figure} \end{itemize} In the next section, we describe how more complex decision problems are addressed by the proposed interpretable model. \subsection{Interpretations for Complex Decision Scenarios}\label{sec:complex} For small decision problems (at most 10 random variables), the proposed LINDA model displays the full interpretable network. For more complex decision problems, however, this would become unreadable for a human decision-maker. The breast cancer dataset is an example of such a complex decision problem: it contains a set of 30 features, which are mapped into random variables. This results in a graphical structure too complex for any human to analyse. For such datasets, the proposed LINDA model provides the decision-maker with the Markov blanket of the variable of interest, instead of the full local interpretable Bayesian network, together with information about which rule the network pattern corresponds to and the respective marginal probabilities. \begin{figure}[H] \resizebox{\columnwidth}{!} { \includegraphics{cancer_markov.pdf} } \caption{Markov blanket representation of a local interpretable Bayesian network with 30 nodes for the breast cancer dataset. The Markov blanket is in accordance with rule 1 and represents a correct classification.} \label{fig:complex} \end{figure} This representation summarises the information, enabling a fast and compact data-driven interpretation of a local datapoint. Figure~\ref{fig:complex} shows an example of an interpretable network that was extracted from a true positive datapoint. The complexity of the full network prevents any direct human interpretation. However, when looking at the Markov blanket together with the marginal probabilities of the random variables, one can clearly identify a common-effect structure in which six features directly influence the class variable and contribute to its value (a sketch of the Markov blanket extraction step is given below).
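The extraction step itself is standard: the Markov blanket of a node consists of its parents, its children, and the other parents of those children. A minimal sketch follows, assuming the learned network is available as a child-to-parents mapping (the encoding is illustrative, not the exact one used in the repository):
\begin{verbatim}
def markov_blanket(parents, node):
    """Markov blanket of `node` in a DAG encoded as {child: set_of_parents}:
    the parents of node, its children, and the children's other parents."""
    children = {c for c, ps in parents.items() if node in ps}
    spouses = set().union(*[parents[c] for c in children]) - {node}
    return parents.get(node, set()) | children | spouses
\end{verbatim}
Applied to the class node, this returns exactly the sub-network displayed to the decision-maker in Figure~\ref{fig:complex}.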
Moreover, the statistical distribution of the permutations shows full confidence in the diagnosis, suggesting that the datapoint falls within rule 1 and, consequently, that there is high confidence that it is a correct prediction. This Markov blanket can potentially be extended to different depths, depending on the decision-maker's needs (for example, a layperson would be satisfied with the Markov blanket in Figure~\ref{fig:complex}, whereas a medical doctor would probably explore other depths and analyse the relationships between more variables and their indirect influences on the class variable). \section{Conclusions} \label{sec:concl} In this paper, we proposed a new post hoc interpretable framework, the \textit{Local Interpretation-Driven Abstract Bayesian Network} (LINDA-BN). This framework consists of learning a Bayesian network as an approximation of a black-box model from a statistical distribution of predictions generated around a local datapoint. The major contribution of the proposed framework is the ability to identify four different rules which can inform the decision-maker about the confidence level in a given prediction for a specific datapoint. As such, the interpretations provided by our approach can help the decision-maker assess the reliability of predictions learned by a black-box model. These rules correspond to the different patterns that can be found in the learned Bayesian network, and they are summarised as follows: \begin{itemize} \item Rule 1 - High confidence in predictions: a common-effect structure in which the features directly influence the class and the maximum likelihood of the class is close to one, suggesting a correct classification. \item Rule 2 - Unreliable predictions: when the class variable is independent of the features, suggesting a misclassification. \item Rule 3 - Contrast Effects: when the maximum likelihood of the class variable in the learned Bayesian network favours a class opposite to the one predicted by the black-box model, suggesting a misclassification. \item Rule 4 - Uncertainty in the predictions: when the likelihood of the class variable has very high levels of uncertainty, suggesting that the decision-maker should assess the network in order to understand whether the features support the class. \end{itemize} Experimental findings showed that rules 3 and 4 usually occurred in the sets of false positives and false negatives, suggesting that the proposed framework might provide a possible approach to identify misclassifications in black-box models. On the other hand, the correct classifications (true positives and true negatives) were mostly associated with rule 1, with common-effect graph structures and a maximum likelihood of the class variable close to one, again providing a potential method to identify correct classifications and promote the decision-maker's trust. For future work, we would like to extend the proposed approach from an interpretable framework to an explainable one. This mainly consists of converting the symbolic rules proposed in this study into explainable arguments that can communicate to the decision-maker \textit{why} a certain prediction was computed by the black box, and the reasons \textit{why / why not} the decision-maker should trust its predictions.
\section{Introduction} Mean Field Games (MFG in short) theory concerns the study of differential games with a large number of rational, indistinguishable agents and the characterization of the corresponding Nash equilibria. In the original model introduced in \cite{hcm,ll}, an agent can typically act on its velocity (or other first order dynamical quantities) via a control variable. Mean Field Games where agents control the acceleration have been recently proposed in \cite{ammt,bc,cm}.\par A prototype of a stochastic process involving acceleration is given by the Langevin diffusion process, which can be formally defined as \begin{equation}\label{eq:int_Langevin} \ddot{X}(t)=-b(X(t))+\s\dot B(t), \end{equation} where $\ddot{X}$ is the second time derivative of the stochastic process $X$, $B$ is a Brownian motion and $\s$ is a positive parameter. The solution of \eqref{eq:int_Langevin} can be rewritten as a Markov process $(X,V)$ solving \begin{equation*} \left\{ \begin{array}{ll} \dot X(t)=V(t),\\ \dot V(t)=-b(X(t))+\s\dot B(t). \end{array} \right. \end{equation*} The probability density function of the previous process satisfies the kinetic Fokker-Planck equation \begin{equation*} \pd_t p-\frac{\s^2}{2}\Delta_v p-b(x)\cdot D_v p+v\cdot D_x p=0\qquad \text{in}\quad (0,\infty)\times \R^d\times\R^d. \end{equation*} The previous equation, in the case $b\equiv 0$, was first studied by Kolmogorov \cite{K}, who provided an explicit formula for its fundamental solution. It was then considered by H\"ormander \cite{H} as a motivating example for the general theory of hypoelliptic operators (see also \cite{AM,bou,LP}). \par We consider a Mean Field Games model where the dynamics of the single agent is given by a controlled Langevin diffusion process, i.e. \begin{equation}\label{eq:int_Langevin_controlled} \left\{ \begin{array}{ll} \dot X(s)=V(s),\, &s\ge t\\ \dot V(s)=-b(X(s))+\a(s)+\s\dot B(s)&s\ge t\\ X(t)=x,\,V(t)=v. \end{array} \right. \end{equation} In \eqref{eq:int_Langevin_controlled}, the control law $\a:[t,T]\to\R^d$, which is a progressively measurable process with respect to a fixed filtered probability space such that $\EE[\int_t^T|\a(t)|^2dt]<+\infty$, is chosen to {\it maximize} the functional \begin{align*} J(t,x,v;\a)= \EE_{t,(x,v)}\Big\{\int_t^T& \left[f(X(s),V(s),m(s))-\half |\a(s)|^2\right] ds\\ &+u_T(X(T),V(T))\Big\}, \end{align*} where $m(s)$ is the distribution of the agents at time $s$. Let $u$ be the value function associated with the previous control problem, i.e. \[ u(t,x,v)=\sup_{\a\in \mA_t}\{J(t,x,v;\a)\} \] where $\mA_t$ is the set of control laws. Formally, the couple $(u,m)$ satisfies the MFG system (see \cite[Section 4.1]{ammt} for more details) \begin{equation}\label{eq:int_MFG} \left\{ \begin{array}{ll} \pd_t u+\frac{\s^2}{2}\Delta_v u-b(x)\cdot D_v u+v\cdot D_x u+\half |D_vu|^2=-f(x,v,m)\\[4pt] \pd_t m-\frac{\s^2}{2}\Delta_v m-b(x)\cdot D_v m+v\cdot D_x m+\diver_v(mD_vu)=0\\[4pt] m(0,x,v)=m_0(x,v),\quad u(T,x,v)=u_T(x,v) \end{array} \right. \end{equation} for $(t,x,v)\in (0,T)\times\R^d\times\R^d$. The first equation is a backward Hamilton-Jacobi-Bellman equation, degenerate in the $x$-variable and with a quadratic Hamiltonian in the $v$-variable, and the second equation is a forward kinetic Fokker-Planck equation.
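For the reader's convenience, we record the standard formal computation behind the second equation in \eqref{eq:int_MFG}. The generator of the process $(X,V)$ solving \eqref{eq:int_Langevin_controlled} with a feedback control $\a$ and its formal adjoint are
\begin{equation*}
\mathcal{A}\varphi=v\cdot D_x\varphi+(\a-b(x))\cdot D_v\varphi+\frac{\s^2}{2}\Delta_v\varphi, \qquad \mathcal{A}^*p=-v\cdot D_x p+b(x)\cdot D_v p-\diver_v(\a p)+\frac{\s^2}{2}\Delta_v p,
\end{equation*}
where we used that $b$ depends only on $x$, so that $\diver_v(b(x)p)=b(x)\cdot D_v p$. Writing $\pd_t m=\mathcal{A}^*m$ with the (formal) optimal feedback $\a=D_vu$, the maximum point of $\a\mapsto\a\cdot D_vu-\half|\a|^2$, which also produces the quadratic Hamiltonian $\half|D_vu|^2$ in the first equation, yields precisely the Fokker-Planck equation in \eqref{eq:int_MFG}.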
In the standard setting, MFG systems with quadratic Hamiltonians have been extensively considered in the literature, both as a reference model for the general theory and also since, thanks to the Hopf-Cole change of variable, the nonlinear Hamilton-Jacobi-Bellman equation can be transformed into a linear equation, allowing one to use all the tools developed for this type of problem (see for example \cite{gm,Gomesbook,G,gll,ll,ugs}). Recently, a similar procedure has been used for ergodic hypoelliptic MFG with quadratic cost in \cite{fgt} and for a flocking model involving kinetic equations in \cite[Section 4.7.3]{cd}.\\ We study \eqref{eq:int_MFG} by means of a change of variable introduced in \cite{G,gll} for the standard case. By defining the new unknowns $\phi =e^{u/{\s^2}}$ and $\psi =me^{-u/ {\s^2}}$, the system \eqref{eq:int_MFG} is transformed into a system of two kinetic Fokker-Planck equations \begin{equation}\label{eq:int_MFG_kinetic} \left\{ \begin{array}{ll} \pd_t \phi+\frac{\s^2}{2}\Delta_v \phi-b(x)\cdot D_v \phi+v\cdot D_x \phi=-\sqi f(x,v,\psi\phi)\phi\\[4pt] \pd_t \psi-\frac{\s^2}{2}\Delta_v \psi-b(x)\cdot D_v \psi+v\cdot D_x \psi=\sqi f(x,v,\psi\phi)\psi\\[4pt] \psi(0,x,v)=\frac{m_0(x,v)}{\phi(0,x,v)},\quad \phi(T,x,v)=e^{\frac{ u_T(x,v)}{ \s^2}}. \end{array} \right. \end{equation} for $(t,x,v)\in (0,T)\times\R^d\times\R^d$. In the previous problem, the coupling between the two equations is only in the source terms. Following \cite{G}, we prove existence of a (weak) solution to \eqref{eq:int_MFG_kinetic} by showing the convergence of an iterative scheme defined, starting from $\psi^{(0)}\equiv 0$, by solving alternatively the backward problem \begin{equation}\label{eq:iterate_phi} \left\{ \begin{array}{ll} \pd_t \phi^{(k+\half)} +\frac{\s^2}{2}\Delta_v \phi^{(k+\half)}&\hskip-16pt -b(x)\cdot D_v \phi^{(k+\half)}+v\cdot D_x \phi^{(k+\half)}\\[4pt] &=-\sqi f(\psi^{(k)}\phi^{(k+\half)})\phi^{(k+\half)}\\[6pt] \phi^{(k+\half)} (T,x,v)=e^{\frac{ u_T(x,v)}{ \s^2}}, \end{array} \right. \end{equation} and the forward one \begin{equation}\label{eq:iterate_psi} \left\{ \begin{array}{ll} \pd_t \psi^{(k+1)}-\frac{\s^2}{2}\Delta_v \psi^{(k+1)}&\hskip-34pt -b(x)\cdot D_v \psi^{(k+1)}+v\cdot D_x \psi^{(k+1)}\\[4pt] &= \sqi f(\psi^{(k+1)}\phi^{(k+\half)})\psi^{(k+1)}\\[6pt] \psi^{(k+1)}(0,x,v)=\frac{m_0(x,v)}{\phi^{(k+\half)}(0,x,v)}. \end{array} \right. \end{equation} We show that the resulting sequence $(\phi^{(k+\half)},\psi^{(k+1)})$, $k\in\N$, monotonically converges to the solution of \eqref{eq:int_MFG_kinetic}. Hence, by the inverse change of variable \begin{equation}\label{eq:change} u=\s^2\ln(\phi),\qquad m=\phi\psi, \end{equation} we obtain a solution of the original problem \eqref{eq:int_MFG}. We have \begin{theorem}\label{thm:main} The sequence $(\phi^{(k+\half)},\psi^{(k+1)})$ defined by \eqref{eq:iterate_phi}-\eqref{eq:iterate_psi} converges in $L^2([0,T]\times \R^d\times\R^d)$ and a.e. to a weak solution $(\phi,\psi)$ of \eqref{eq:int_MFG_kinetic}. Moreover, the couple $(u,m)$ defined by \eqref{eq:change} is a weak solution to \eqref{eq:int_MFG}. \end{theorem} The previous iterative procedure also suggests a monotone numerical method for the approximation of \eqref{eq:int_MFG_kinetic}, hence for \eqref{eq:int_MFG}. Indeed, by approximating \eqref{eq:iterate_phi} and \eqref{eq:iterate_psi} by finite differences and solving alternatively the resulting discrete equations, we obtain an approximation of the sequence $(\phi^{(k+\half)},\psi^{(k+1)})$.
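As a sanity check on the change of variable \eqref{eq:change}, one can verify by a direct (formal) computation that the Hamilton-Jacobi-Bellman equation in \eqref{eq:int_MFG} is equivalent to the first equation in \eqref{eq:int_MFG_kinetic}. Indeed, writing $u=\s^2\ln\phi$, we have
\begin{equation*}
\pd_t u=\s^2\frac{\pd_t\phi}{\phi},\qquad D_vu=\s^2\frac{D_v\phi}{\phi},\qquad \Delta_v u=\s^2\frac{\Delta_v\phi}{\phi}-\s^2\frac{|D_v\phi|^2}{\phi^2},
\end{equation*}
and analogous identities for $D_xu$. Substituting into the first equation of \eqref{eq:int_MFG}, the term $-\frac{\s^4}{2}\frac{|D_v\phi|^2}{\phi^2}$ coming from $\frac{\s^2}{2}\Delta_vu$ cancels exactly with $\half|D_vu|^2=\frac{\s^4}{2}\frac{|D_v\phi|^2}{\phi^2}$, and multiplying by $\phi/\s^2$ we obtain
\begin{equation*}
\pd_t\phi+\frac{\s^2}{2}\Delta_v\phi-b(x)\cdot D_v\phi+v\cdot D_x\phi=-\sqi f(x,v,\psi\phi)\phi,
\end{equation*}
which is the first equation in \eqref{eq:int_MFG_kinetic}, since $m=\phi\psi$.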
A corresponding alternating procedure for the standard quadratic MFG system was studied in \cite{G}, where the convergence of the method is proved. We plan to study the properties of the previous numerical procedure in a future work. \section{Well posedness of the kinetic Fokker-Planck system} In this section, we study the existence of a solution to system \eqref{eq:int_MFG_kinetic}. The proof of the result follows the strategy implemented in \cite[Section 2]{G} for the case of a standard MFG system with quadratic Hamiltonian and relies on the results for linear kinetic Fokker-Planck equations in \cite[Appendix A]{d}.\\ We now fix the assumptions that will hold throughout the paper. The vector field $b: \R^{d} \to\R^d$ and the coupling cost $f:\R^d \times \R^d\times\R\to\R$ are assumed to satisfy \begin{eqnarray*} &b\in L^\infty(\R^{d}), \\ &\text{$f\in L^\infty(\R^d\times\R^d\times\R)$, $f\le 0$ and $f(x,v,\cdot)$ strictly decreasing.} \end{eqnarray*} Moreover, the diffusion coefficient $\s$ is positive and the initial and terminal data satisfy \begin{equation}\label{hyp:i_m} \begin{aligned} \,m_0\in L^\infty(\R^d\times\R^d),\, m_0\ge 0,\,\iint m_0(x,v)dxdv=1,\\ \text{and $\exists\, R_0>0$ s.t. $\supp\{m_0\}\subset \R^d\times B(0,R_0)$} \end{aligned} \end{equation} and \begin{equation}\label{hyp:i_u} \begin{aligned} u_T\in C^0(\R^d\times\R^d)\,\text{ and $\exists\,C_0$, $C_1>0$ s.t. $\forall (x,v)\in \R^d\times\R^d$}\\ -C_0(|v|^2+|x|)-C_0\le u_T(x,v)\le -C_1(|v|^2+|x|)+C_1. \end{aligned} \end{equation} Note that \eqref{hyp:i_u} implies that $e^{u_T/\s^2}\in L^\infty(\R^d\times\R^d)\cap L^2(\R^d\times\R^d)$. We denote by $(\cdot,\cdot)$ the scalar product in $L^2([0,T]\times \R^d\times\R^d)$ and by $\langle\cdot,\cdot\rangle$ the pairing between $\mX=L^2([0,T]\times\R^d_x;H^1(\R^d_v))$ and its dual $\mX'=L^2([0,T]\times\R^d_x;H^{-1}(\R^d_v))$. We define the following functional space \begin{equation* \mY=\left\{g\in L^2([0,T]\times\R^d_x, H^1(\R^d_v)),\partial_t g+v\cdot D_x g\in L^2([0,T]\times\R^d_x, H^{-1}(\R^d_v))\right\} \end{equation*} and we set $\mY_0=\{g\in \mY:\,g\ge 0 \}$. If $g\in \mY$, then it admits (continuous) trace values $g(0,x,v)$, $g(T,x,v)\in L^2(\R^d\times\R^d)$ (see \cite[Lemma A.1]{d}) and therefore the initial/terminal conditions for \eqref{eq:int_MFG_kinetic} are well defined in the $L^2$ sense. We first prove the well posedness of problems \eqref{eq:iterate_phi} and \eqref{eq:iterate_psi}. \begin{proposition}\label{prop:well_posed_eq_phi} We have \begin{itemize} \item[(i)] For any $\psi\in\mY_0$, there exists a unique solution $\phi\in\mY_0$ to \begin{equation}\label{eq:equation_phi} \left\{ \begin{array}{ll} \pd_t \phi+\frac{\s^2}{2}\Delta_v \phi-b(x)\cdot D_v \phi+v\cdot D_x \phi=-\sqi f(x,v,\psi\phi)\phi\\[4pt] \phi(T,x,v)=e^{\frac{ u_T(x,v)}{ \s^2}}. \end{array} \right. \end{equation} Moreover, $\phi\in L^\infty([0,T]\times \R^d\times\R^d)$ and, for any $R>0$, there exist $\d_R\in\R$ and $\rho>0$ such that \begin{equation}\label{eq:lower_bound_phi} \phi(t,x,v)\ge C_R:= e^{\sqi(\d_R-\rho T)}\quad\forall t\in [0,T],\, (x,v)\in B(0,R)\subset \R^d\times\R^d. \end{equation} \item[(ii)] Let $\Phi: \mY_0\to\mY_0$ be the map which associates to $\psi$ the unique solution of \eqref{eq:equation_phi}. Then, if $\psi_2 \le\psi_1$, we have $\Phi(\psi_2)\ge \Phi(\psi_1)$.
\end{itemize} \end{proposition} \begin{proof} Fixed $\psi\in\mY_0$, consider the map $F=F(\varphi)$ from $L^2([0,T]\times \R^d\times\R^d)$ into itself that associates with $\varphi$ the weak solution $\phi\in L^2([0,T]\times \R^d\times\R^d)$ of the linear problem \begin{equation}\label{eq:equation_phi_1} \left\{ \begin{array}{ll} \pd_t \phi+\frac{\s^2}{2}\Delta_v \phi-b(x)\cdot D_v \phi+v\cdot D_x \phi=-\sqi f(\psi\varphi)\phi\\[4pt] \phi(T,x,v)=e^{\frac{ u_T(x,v)}{ \s^2}}. \end{array} \right. \end{equation} By \cite[Prop. A.2]{d}, $\phi$ belongs to $\mY$ and it coincides with the unique solution of \eqref{eq:equation_phi_1} in this space. Moreover, the following estimate \begin{equation}\label{eq:energy_est} \|\phi\|_{L^2([0,T]\times\R^d_x;H^1(\R^d_v))}+\|\pd_t\phi +v\cdot D_x\phi\|_{L^2([0,T]\times\R^d_x;H^{-1}(\R^d_v))}\le C \end{equation} holds for some constant $C$ which depends only on $\|e^{u_T/\s^2}\|_{L^2} $, $\|f\|_{L^\infty}$ and $\s $. Hence $F$ maps $B_C$, the closed ball of radius $C$ of $L^2([0,T]\times \R^d\times\R^d)$, into itself.\\ To show that the map $F$ is continuous on $B_C$, consider $\{\varphi_n\}_{n\in\N},\,\varphi\in L^2([0,T]\times \R^d\times\R^d)$ such that $\|\varphi_n-\varphi\|_{L^2}\to 0$ and set $\phi_n=F(\varphi_n)$. Then $\phi_n \in \mY$ and, by the estimate \eqref{eq:energy_est}, we get that, up to a subsequence, there exists $\bphi\in \mY$ such that $\phi_n\to \bphi$, $D_v\phi_n\to D_v\bphi$ in $L^2([0,T]\times \R^d\times\R^d)$ and $\pd_t\phi_n +v\cdot D_x\phi_n \to \pd_t\bphi +v\cdot D_x\bphi$ in $L^2([0,T]\times\R^d_x;H^{-1}(\R^d_v))$. Moreover, $\varphi_n\to\varphi$ almost everywhere. By the definition of weak solution to \eqref{eq:equation_phi_1}, we have \begin{equation}\label{eq:weak} \langle\pd_t \phi_n+v\cdot D_x\phi_n,w \rangle - \frac{\s^2}{2}(D_v\phi_n,D_vw) -(b\cdot D_v\phi_n,w)=(-\sqi f(\varphi_n \psi)\phi_n,w), \end{equation} for any $w\in \mD([0,T]\times \R^d\times\R^d)$, the space of infinitely differentiable functions with compact support in $[0,T]\times \R^d\times\R^d$. Employing the weak convergence for the left-hand side of \eqref{eq:weak} and the Dominated Convergence Theorem for the right-hand side, we get, for $n\to\infty$, \[ \langle\pd_t \bphi+v\cdot D_x\bphi,w \rangle - \frac{\s^2}{2}(D_v\bphi ,D_vw) -(b\cdot D_v\bphi,w)=(-\sqi f(\varphi \psi)\bphi,w) \] for any $w\in \mD([0,T]\times \R^d\times\R^d)$. Hence $\bphi=F(\varphi)$ and $F(\varphi_n)\to F(\varphi)$ for $n\to\infty$ in $L^2([0,T]\times \R^d\times\R^d)$. The compactness of the map $F$ in $L^2([0,T]\times \R^d\times\R^d)$ follows by the compactness of the set of the solutions to \eqref{eq:equation_phi_1}, see \cite[Theorem 1.2]{cep}. We conclude, by Schauder's Theorem, that there exists a fixed point of the map $F$ in $L^2$, hence in $\mY$, and therefore a solution to the nonlinear parabolic equation \eqref{eq:equation_phi}.\par Observe that, if $\phi$ is a solution of \eqref{eq:equation_phi}, then $\tilde \phi=e^{\l t}\phi$ is a solution of \begin{equation}\label{eq:equation_phi_equiv} \pd_t \tilde\phi+\frac{\s^2}{2}\Delta_v \tilde\phi-b(x)\cdot D_v \tilde\phi+v\cdot D_x\tilde \phi-\l\tilde \phi=-\sqi f(e^{-\l t}\psi\tilde\phi)\tilde\phi \end{equation} with the corresponding final condition. In the following, we assume that $\lambda>0$.
To show that $\phi$ is non-negative, we will exploit the following property (see \cite[Lemma A.3]{d}): given $\phi\in \mY$ and setting $\phi^{\pm} =\max(\pm \phi ,0)$, we have $\phi^\pm\in \mX$ and \begin{equation}\label{eq:property_phi_minus} \langle \pd_t \phi+v\cdot D_x\phi,\phi^-\rangle=\half\left(\iint|\phi(0,x,v)^-|^2dxdv-\iint|\phi(T,x,v)^-|^2dxdv\right). \end{equation} Let $\phi$ be a solution of \eqref{eq:equation_phi_equiv}, multiply the equation by $\phi^-$ and integrate. Then, since $\phi(T,x,v)$ is non-negative, by \eqref{eq:property_phi_minus} we get \begin{align*} -\sqi (\phi f(e^{-\l t}\phi \psi),\phi^-) =\langle\pd_t \phi+v\cdot D_x\phi,\phi^-\rangle-\\ \frac{\s^2}{2}(D_v\phi,D_v\phi^-) -(b\cdot D_v\phi,\phi^-) -\l (\phi,\phi^-) =\\ \half \iint|\phi(0,x,v)^-|^2dxdv+\frac{\s^2}{2}(D_v\phi^-,D_v\phi^-) + \l (\phi^-,\phi^-) \ge \\ \l (\phi^-,\phi^-), \end{align*} where it has been exploited that, by integration by parts, $(b\cdot D_v\phi,\phi^-)=0$. Since $f\le 0$, and therefore \[ -(\phi f(e^{-\l t}\phi \psi),\phi^-) =(\phi^- f(e^{-\l t}\phi \psi),\phi^-) \le 0, \] we get $(\phi^-,\phi^-)= 0$, hence $\phi\ge0$. To prove the uniqueness of the solution to \eqref{eq:equation_phi}, consider two solutions $\phi_1$, $\phi_2$ of \eqref{eq:equation_phi_equiv} and set $\bphi=\phi_1-\phi_2$. Multiplying the equation for $\bphi$ by $\bphi$, integrating and using $\bphi(T,x,v)=0$, we get \begin{equation}\label{eq:uniq_phi} \begin{aligned} -\sqi( f(e^{-\l t}\psi\phi_1)\phi_1- f(e^{-\l t}\psi\phi_2)\phi_2, \phi_1-\phi_2) =\langle\pd_t \bphi+v\cdot D_x\bphi,\bphi\rangle -\\ \frac{\s^2}{2}(D_v\bphi,D_v\bphi) -(b\cdot D_v\bphi,\bphi) -\l (\bphi,\bphi) =\\ -\half\iint|\bphi(0,x,v)|^2dxdv-\frac{\s^2}{2}(D_v\bphi,D_v\bphi) -\l (\bphi,\bphi) \le -\l(\phi_1-\phi_2,\phi_1-\phi_2) \end{aligned} \end{equation} and, by the strict monotonicity of $f$, we conclude that $\phi_1=\phi_2$.\par To prove that $\phi$ is bounded from above, we observe that the function $\bphi(t,x,v)=e^{C_1+(T-t)\|f\|_{\infty}/\s^2}$, where $C_1$ is as in \eqref{hyp:i_u}, is a supersolution of the linear problem \eqref{eq:equation_phi_1} for any $\varphi\in L^2([0,T]\times \R^d\times\R^d)$, i.e. $\bphi(T,x,v)\ge e^{u_T(x,v)/\s^2}$ and \[\pd_t \bphi+\frac{\s^2}{2}\Delta_v \bphi-b(x)\cdot D_v \bphi+v\cdot D_x \bphi\le -\sqi f(\psi\varphi)\bphi.\] By the Maximum Principle (see \cite[Prop. A.3 (i)]{d}), we get that $\bphi\ge \phi$, where $\phi$ is the solution of \eqref{eq:equation_phi_1}. Since the previous property holds for any $\varphi\in L^2([0,T]\times \R^d\times\R^d)$, we conclude that $\bphi\ge \phi$, where $\phi$ is the solution of the nonlinear problem \eqref{eq:equation_phi}.\\ A similar argument allows one to show that $\underline\phi(t,x,v)=e^{(-C_0(|v|^2+|x|+1)-\rho(T-t))/\s^2}$, where $C_0$ is as in \eqref{hyp:i_u} and $\rho$ is sufficiently large, is a subsolution of \eqref{eq:equation_phi_1} for any $\varphi\in L^2([0,T]\times \R^d\times\R^d)$. Indeed, substituting $\uphi$ into the equation, we get that the inequality \begin{align*} \pd_t \uphi+\frac{\s^2}{2}\Delta_v \uphi-b(x)\cdot D_v \uphi+v\cdot D_x \uphi=\\ \frac{\uphi}{\s^2}\left(\rho- C_0d \s^2+ 2C_0^2 |v|^2+2C_0b(x)\cdot v-C_0v\cdot \frac{x}{|x|}\right)\ge \\-\frac {1 }{\s^2} f(\psi\varphi)\uphi \end{align*} is satisfied for $\r$ large enough and, moreover, $\uphi(T,x,v)\le e^{u_T(x,v)/\s^2}$.
Hence $\underline \phi\le \phi$, where $\phi$ is the solution of the nonlinear problem \eqref{eq:equation_phi}, and, from this estimate, we deduce \eqref{eq:lower_bound_phi}.\par We finally prove the monotonicity of the map $\Phi$. Set $\phi_i=\Phi(\psi_i)$, $i=1,2$, and consider the equation satisfied by $\bphi=e^{\l t}\phi_1-e^{\l t}\phi_2$, multiply it by $\bphi^+$ and integrate. Performing a computation similar to \eqref{eq:uniq_phi}, we get \begin{align*} -\sqi (f(\phi_1\psi_1)\phi_1-f(\phi_2\psi_2)\phi_2,\bphi^+)\le - \l(\bphi^+,\bphi^+). \end{align*} Since, by the monotonicity of $f$ and the non-negativity of $\phi_i$, we have \begin{align*} -(f(\phi_1\psi_1)\phi_1-f(\phi_2\psi_2)\phi_2,\bphi^+)= -(f(\phi_1\psi_1)(\phi_1-\phi_2),\bphi^+)-\\ ((f(\phi_1\psi_1) -f(\phi_2\psi_2))\phi_2,\bphi^+)\ge 0, \end{align*} we get $(\bphi^+,\bphi^+)=0$ and therefore $\phi_1\le \phi_2$. \end{proof} We set $$\mY_R=\{\phi\in \mY_0:\phi\ge C_R\quad \forall (x,v)\in B(0,R),\,t\in [0,T] \},$$ where $C_R$ is defined as in \eqref{eq:lower_bound_phi}. \begin{proposition}\label{prop:well_posed_eq_psi} Given $R>R_0$, where $R_0$ is as in \eqref{hyp:i_m}, we have \begin{itemize} \item[(i)] For any $\phi\in\mY_R$, there exists a unique solution $\psi\in\mY_0$ to \begin{equation}\label{eq:equation_psi} \left\{ \begin{array}{ll} \pd_t \psi-\frac{\s^2}{2}\Delta_v \psi-b(x)\cdot D_v \psi+v\cdot D_x \psi=\sqi f(x,v,\psi\phi)\psi\\[4pt] \psi(0,x,v)=\frac{m_0(x,v)}{\phi(0,x,v)}. \end{array} \right. \end{equation} Moreover, \begin{equation}\label{eq:upper_bound_psi} \psi(t,x,v)\le \frac{\|m_0\|_{L^\infty}}{ C_R}\qquad\forall t\in [0,T],\, (x,v)\in \R^d\times\R^d, \end{equation} where $C_R$ is as in \eqref{eq:lower_bound_phi}. \item[(ii)] Let $\Psi: \mY_R\to\mY_0$ be the map which associates with $\phi\in \mY_R$ the unique solution of \eqref{eq:equation_psi}. Then, if $\phi_2 \le \phi_1$, we have $\Psi(\phi_2)\ge \Psi(\phi_1)$. \end{itemize} \end{proposition} \begin{proof} First observe that, since $R>R_0$, $\psi(0,x,v)$ is well defined for $\phi\in\mY_R$. The proof of the first part of $(i)$ is very similar to that of the corresponding result in Proposition \ref{prop:well_posed_eq_phi}, hence we only prove the bound \eqref{eq:upper_bound_psi}. If $\psi$ is a solution of \eqref{eq:equation_psi}, then $\tilde \psi=e^{-\l t}\psi$ is a solution of \begin{equation} \label{eq:equation_psi_equiv} \pd_t \tilde\psi-\frac{\s^2}{2}\Delta_v \tilde\psi-b(x)\cdot D_v \tilde\psi+v\cdot D_x \tilde\psi+\l\tilde\psi=\sqi f(x,v,e^{\l t}\tilde\psi\phi)\tilde\psi. \end{equation} Let $\psi$ be a solution of \eqref{eq:equation_psi_equiv}, set $\bpsi=\psi- e^{-\l t} \|m_0\|_{L^\infty}/ C_R$ and observe that $\bpsi(0)\le 0$. Multiply the equation for $\bpsi$ by $\bpsi^+$ and integrate to obtain \begin{align*} \sqi(\psi f(e^{\l t}\psi\phi),\bpsi^+)=\\ \langle\pd_t\bpsi+v\cdot D_x\bpsi,\bpsi^+\rangle+\frac{\s^2}{2}(D_v\bpsi,D_v\bpsi^+)-(b(x)D_v\bpsi,\bpsi^+)+\l (\bpsi,\bpsi^+)\ge\\ \half\iint|\bpsi^+(T,x,v)|^2dxdv+\l (\bpsi^+,\bpsi^+)\ge \l(\bpsi^+,\bpsi^+). \end{align*} Since $\psi\ge 0$ and $f\le 0$, we have \[\sqi(\psi f(e^{\l t}\psi\phi),\bpsi^+)\le 0\] and therefore $\bpsi^+\equiv 0$. Hence the upper bound \eqref{eq:upper_bound_psi} follows.\\ Now we prove {\it (ii)}. Set $\psi_i=\Psi(\phi_i)$, $i=1,2$, and $\bpsi=e^{-\l t}\psi_1-e^{-\l t}\psi_2$. Multiply the equation satisfied by $\bpsi$ by $\bpsi^+$ and integrate.
By the monotonicity and negativity of $f$, we have \begin{align*} (f(e^{\l t}\phi_1\psi_1)\psi_1-f(e^{\l t}\phi_2\psi_2)\psi_2,\bpsi^+)= (f(e^{\l t}\phi_1\psi_1) (\psi_1-\psi_2),\bpsi^+)+\\ (\psi_2(f(e^{\l t}\phi_1\psi_1 ) -f(e^{\l t}\phi_2\psi_2 )),\bpsi^+)\le 0. \end{align*} Then \begin{align*} 0\ge \langle\pd_t\bpsi+v\cdot D_x\bpsi,\bpsi^+\rangle+\frac{\s^2}{2}(D_v\bpsi,D_v\bpsi^+)-(b(x)D_v\bpsi,\bpsi^+)+\l (\bpsi,\bpsi^+)\ge\\ \half\iint|\bpsi^+(T,x,v)|^2dxdv+\l (\bpsi^+,\bpsi^+)\ge \l(\bpsi^+,\bpsi^+). \end{align*} Hence $\bpsi^+\equiv 0$ and therefore $\psi_1\le\psi_2$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] Given $\psi^{(0)}\equiv 0$, consider the sequence $(\phi^{(k+\half)},\psi^{(k+1)})$, $k\in\N$, defined in \eqref{eq:iterate_phi}-\eqref{eq:iterate_psi}. It can be rewritten as \begin{equation}\label{eq:iterative_seq} \left\{\ \begin{array}{l} \phi^{(k+\half)}=\Phi(\psi^{(k)})\\[6pt] \psi^{(k+1)}=\Psi(\phi^{(k+\half)}) \end{array} \right. \end{equation} where the maps $\Phi$, $\Psi$ are as in Propositions \ref{prop:well_posed_eq_phi} and, respectively, \ref{prop:well_posed_eq_psi}. Observe that, by \eqref{eq:lower_bound_phi}, we have $\phi^{(k+\half)} \in\mY_R$ for $R>R_0$ and $\psi^{(k+1)}\ge 0$ for any $k$. Hence the sequence $(\phi^{(k+\half)},\psi^{(k+1)})$ is well defined. We first prove by induction the monotonicity of the components of $(\phi^{(k+\half)},\psi^{(k+1)})$. By the non-negativity of solutions to \eqref{eq:equation_psi}, we have $\psi^{(1)}=\Psi(\phi^{(\half)})\ge 0$ and therefore $\psi^{(1)}\ge\psi^{(0)}$. Moreover, by the monotonicity of $\Phi$, $\phi^{(\frac 3 2)}=\Phi(\psi^{(1)})\le \Phi(\psi^{(0)})=\phi^{(\half)}$. Now assume that $\psi^{(k+1)}\ge \psi^{(k)}$. Then \[ \phi^{(k+\frac 3 2)}=\Phi(\psi^{(k+1)})\le \Phi(\psi^{(k)})=\phi^{(k+\half)}\] and \[ \psi^{(k+2)} =\Psi (\phi^{(k+\frac 3 2)}) \ge \Psi (\phi^{(k+\half)})=\psi^{(k+1)}, \] which proves the monotonicity of the two sequences.\\ Since the sequence $\phi^{(k+\half)}$ is non-increasing with $\phi^{(k+\half)}\ge 0$ and, by \eqref{eq:upper_bound_psi}, the sequence $\psi^{(k+1)}$ is non-decreasing with $\psi^{(k+1)}\le \|m_0\|_{L^\infty}/C_R$, the couple $(\phi^{(k+\half)},\psi^{(k+1)})$ converges, as $k\to\infty$, a.e. and in $L^2([0,T]\times \R^d\times\R^d)$ to a couple $(\phi,\psi)$. Taking into account the estimate \eqref{eq:energy_est}, the a.e. convergence of the two sequences and repeating an argument similar to the one employed for the continuity of the map $F$ in Proposition \ref{prop:well_posed_eq_phi}, we get that the couple $(\phi,\psi)$ satisfies, in the weak sense, the first two equations in \eqref{eq:int_MFG_kinetic}. The terminal condition for $\phi$ is obviously satisfied, while the initial condition for $\psi$, in the $L^2$ sense, follows by the convergence of $\phi^{(k+\half)}(0)$ to $\phi(0)$.\par We now consider the couple $(u,m)$ given by the change of variable in \eqref{eq:change}. We first observe that, by \cite[Theorem 1.5]{bou}, we have $\pd_t\phi+v\cdot D_x\phi$, $D_v\phi$, $\Delta_v\phi\in L^2([0,T]\times \R^d\times\R^d)$ and a corresponding regularity for $\psi$. Taking into account the boundedness of $\phi$ and the estimate in \eqref{eq:lower_bound_phi}, we have that $u$, $\pd_t u+v\cdot D_x u$, $D_v u$, $\Delta_v u\in L^2_{loc}([0,T]\times \R^d\times\R^d)$. Hence we can write the equation for $u$ in weak form, i.e. \[ (\pd_t u+v\cdot D_x u,w )- \frac{\s^2}{2}(D_vu,D_vw) -(b\cdot D_vu,w)+\half(|D_vu|^2,w)=-( f(m),w), \] for any $w\in \mD([0,T]\times \R^d\times\R^d)$, with the final datum in the trace sense.
In a similar way, since $m$, $\pd_t m+v\cdot D_x m$, $D_v m$, $\Delta_v m\in L^2_{loc}([0,T]\times \R^d\times\R^d)$ and $m$ is locally bounded, we can also rewrite the equation for $m$ in weak form, i.e. \[ (\pd_t m+v\cdot D_x m,w )+ \frac{\s^2}{2}(D_vm,D_vw) -(b\cdot D_vm,w)-(mD_vu,D_vw)=0, \] for any $w\in \mD([0,T]\times \R^d\times\R^d)$, with the initial datum in the trace sense. \end{proof} \noindent{\bf Acknowledgements.} The author wishes to thank Alessandro Goffi (Univ. di Padova) and Sergio Polidoro (Univ. di Modena e Reggio Emilia) for useful discussions.
\section{Introduction}\label{section:Introduction} Let $X$ be an $n$-dimensional compact complex manifold. According to Sullivan [Sul76], Harvey-Lawson [HL83] and Streets-Tian [ST10, Definition 1.5] (where the name was coined), a Hermitian metric (namely a $C^\infty$ positive definite $(1,\,1)$-form) $\omega$ on $X$ is said to be {\bf Hermitian-symplectic} if $\omega$ is the component of bidegree $(1,1)$ of a real $C^\infty$ $d$-closed $2$-form $\widetilde{\omega}$ on $X$. Any $X$ admitting such a metric is called a {\bf Hermitian-symplectic manifold}. We will sometimes write H-S for Hermitian-symplectic. These manifolds, which constitute a natural generalisation of compact K\"ahler manifolds, were given the following intrinsic characterisation by Sullivan. \begin{The}\label{The:Sullivan_H-S} ([Sul76, Theorem III.2 and Remark III.11]) A compact complex manifold $X$ is Hermitian-symplectic if and only if $X$ carries no non-zero current $T$ of bidegree $(n-1,\,n-1)$ such that $T\geq 0$ and $T$ is $d$-exact. \end{The} Nevertheless, Hermitian-symplectic manifolds remain poorly understood. As they lie at the interface between symplectic and complex Hermitian geometries, they seem to warrant further probing. When $\mbox{dim}_\C X=2$, it can be shown (see e.g. [LZ09] or [ST10, Proposition 1.6] or Proposition \ref{Prop:S-T_surfaces} below) that $X$ is Hermitian-symplectic if and only if $X$ is K\"ahler. However, very little is known when $\mbox{dim}_\C X\geq 3$. This prompted Streets and Tian to ask the following \begin{Question}\label{Question:S-T} ([ST10, Question 1.7]) Do there exist non-K\"ahler Hermitian-symplectic complex manifolds $X$ with $\mbox{dim}_\C X\geq 3$? \end{Question} While the general case of this question remains open, it has been answered negatively for a handful of special classes of manifolds, including all nilmanifolds endowed with an invariant complex structure by Enrietti, Fino and Vezzoni in [EFV12] and all twistor spaces by Verbitsky in [Ver14]. \vspace{2ex} The Streets-Tian question is complementary to Donaldson's earlier \begin{Question}\label{Question:Don} ([Don06, Question 2]) If $J$ is an almost-complex structure on a compact $4$-manifold which is tamed by a symplectic form, is there a symplectic form compatible with $J$? \end{Question} Indeed, when the almost-complex structure $J$ is integrable, a symplectic form $\widetilde\omega$ is a taming form for $J$ if and only if the $(1,\,1)$-component $\omega$ of $\widetilde\omega$ is a Hermitian-symplectic metric (i.e. positive definite). While $J$ is assumed integrable in Question \ref{Question:S-T}, the dimension of the underlying manifold is allowed to be arbitrary. Meanwhile, Question \ref{Question:Don}, that has come to be known in the literature as {\it Donaldson's tamed-to-compatible conjecture}, is peculiar to four real dimensions but $J$ need not be integrable. Thus, the only known case so far lies at the intersection of Questions \ref{Question:Don} and \ref{Question:S-T}. \subsection{A new energy functional}\label{subsection:Introd_functional} In this work, we investigate Question \ref{Question:S-T} by introducing a functional $F$ on the open convex subset ${\cal S}_{\{\omega_0\}}\subset\{\omega_0\}_A\cap C^\infty_{1,\,1}(X,\,\R)$ of all the Hermitian-symplectic metrics $\omega$ lying in the Aeppli cohomology class $\{\omega_0\}_A\in H^{1,\,1}_A(X,\,\R)$ of an arbitrary Hermitian-symplectic metric $\omega_0$. 
Specifically, with every Hermitian-symplectic metric $\omega$ on $X$, we associate a unique differential form $\rho^{2,\,0}_\omega\in C^\infty_{2,\,0}(X,\,\C)$ that we call the {\bf $(2,\,0)$-torsion form} of $\omega$. (See Lemma and Definition \ref{Lem-Def:minimal_rho}.) We call its conjugate $\rho^{0,\,2}_\omega\in C^\infty_{0,\,2}(X,\,\C)$ the {\bf $(0,\,2)$-torsion form} of $\omega$. We then define our functional $F : {\cal S}_{\{\omega_0\}} \to [0,\,+\infty)$ (see Definition \ref{Def:F_energy-functional_H-S}) as the {\it squared $L^2$-norm} of $\rho^{2,\,0}_\omega$: \begin{equation}\label{eqn:Introd_F_energy-functional_H-S} F(\omega) = \int\limits_X|\rho^{2,\,0}_\omega|^2_\omega\,dV_\omega = ||\rho^{2,\,0}_\omega||^2_\omega.\end{equation} By construction, $F\geq 0$ and $F(\omega)=0$ if and only if the metric $\omega$ is K\"ahler. In the remaining part of $\S.$\ref{subsection:H-S_n=3_critical-points}, we go on to compute the {\it first variation} of $F$ in arbitrary dimension $n$ (see (ii) of Proposition \ref{Prop:F-tilde_properties}) and then reach the following conclusion in dimension $3$. (See also Corollary \ref{Cor:critical-points_energy_n=3}.) \begin{The}\label{The:Introd_critical-points_energy_n=3} Let $X$ be a $3$-dimensional compact Hermitian-symplectic manifold. For any H-S metric $\omega_0$ on $X$, the {\bf critical points} of $F : {\cal S}_{\{\omega_0\}} \to [0,\,+\infty)$ are the {\bf K\"ahler metrics} (if any) lying in the Aeppli cohomology class $\{\omega_0\}_A$. \end{The} In particular, the only possible critical points for $F$ are minima. Thus, the Streets-Tian Question \ref{Question:S-T} on $3$-dimensional compact complex manifolds $X$ is reduced to the existence of {\it critical points}, or equivalently {\it minimisers}, for the functional $F$. \subsection{Generalised volume of Hermitian-symplectic Aeppli classes}\label{subsection:Introd_gen-volume} Another consequence of Proposition \ref{Prop:F-tilde_properties} in the case of threefolds is that {\it minimising} our functional $F$ is equivalent to {\it maximising} the volume $\mbox{Vol}_\omega(X)$ of the Hermitian-symplectic metrics $\omega$ lying in a given Aeppli cohomology class. Specifically, we obtain the following result (see Proposition \ref{Prop:F-tilde_properties} and Definition \ref{Def:A-invariant}) that gives rise to a {\it volume-like invariant} for Aeppli cohomology classes of Hermitian-symplectic metrics. \begin{The-Def}\label{The-Def:Introd_A-invariant} Let $X$ be a $3$-dimensional compact Hermitian-symplectic manifold. For any Hermitian-symplectic metric $\omega$ on $X$, the quantity \begin{equation}\label{eqn:Introd_A-invariant}A=A_{\{\omega\}_A}:= F(\omega) + \mbox{Vol}_\omega(X)>0\end{equation} is independent of the choice of metric $\omega$ in its Aeppli cohomology class $\{\omega\}_A$, where $\mbox{Vol}_\omega(X):=\int_X\omega^3/3!$. The invariant $A$ is called the {\bf generalised volume} of the Hermitian-symplectic Aeppli class $\{\omega\}_A$. \end{The-Def} Recall that, when $\omega$ is a K\"ahler metric (provided it exists) on an $n$-dimensional compact complex manifold $X$, the quantity $\mbox{Vol}_\omega(X)=\int_X\omega^n/n!$ depends only on the Bott-Chern class of $\omega$ and is standardly called the {\it volume} of the K\"ahler class $\{\omega\}_{BC}$ and denoted by $\mbox{Vol}(\{\omega\}_{BC})$. 
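In particular, since $F$ vanishes exactly on the K\"ahler metrics, the generalised volume specialises to the standard volume in the K\"ahler case; we spell this out for the reader's convenience:
\begin{equation*}
\omega\,\,\mbox{K\"ahler}\,\,\Longrightarrow\,\,\rho^{2,\,0}_\omega=0\,\,\Longrightarrow\,\,A_{\{\omega\}_A}=F(\omega)+\mbox{Vol}_\omega(X)=\int\limits_X\frac{\omega^3}{3!}.
\end{equation*}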
However, when $\omega$ is not K\"ahler, a major source of difficulty, for example in solving Monge-Amp\`ere equations, stems from $\mbox{Vol}_{\omega + i\partial\bar\partial\varphi}(X)$ depending on $i\partial\bar\partial\varphi$ when the real-valued smooth function $\varphi$ on $X$ varies such that $\omega + i\partial\bar\partial\varphi>0$. Thus, $\mbox{Vol}(\{\omega\}_{BC})$ is meaningless for non-K\"ahler classes, but it can be replaced by our generalised volume $A_{\{\omega\}_A}$, which coincides with the standard volume $\mbox{Vol}(\{\omega\}_{BC})$ when the class $\{\omega\}_{BC}$ is K\"ahler. See $\S.$\ref{subsection:vol-M-A_eq_H-S} for details on a new volume form and a natural Monge-Amp\`ere-type equation that we propose in the Hermitian-symplectic case. \vspace{3ex} A related problem appears in the study of compact {\bf SKT manifolds}, i.e. compact complex manifolds admitting a Hermitian metric $\omega$ such that $\partial\bar{\partial}\omega=0$. It is well known that this class of manifolds is strictly larger than the class of K\"ahler manifolds. For example, by [Gau77a], every compact complex surface is SKT. It is of interest to investigate under what assumptions SKT manifolds are K\"ahler. In particular, the following special case of the Streets-Tian Question \ref{Question:S-T} is still open. \begin{Question}\label{Question:SKT_ddbar} Let $X$ be a compact SKT manifold. Assume additionally that $X$ is a $\partial\bar{\partial}$-manifold. Does $X$ admit a K\"ahler metric? \end{Question} It is immediate to see that, on a $\partial\bar{\partial}$-manifold, a metric $\omega$ is Hermitian-symplectic if and only if $\omega$ is SKT. Our observation in $\S.$\ref{subsection:SKT_ddbar} is that a minor adaptation of the functional $F$ once again reduces the above problem to the existence of critical points. \subsection{Search for critical points}\label{subsection:Introd_search-crit} In complex dimension $3$, we propose a family of Monge-Amp\`ere-type equations and relate the maximisation of a numerical constant appearing therein to the local minimisers of the functional $F$. \vspace{2ex} Ideally, if a Monge-Amp\`ere-type equation with solutions in a given H-S {\bf Aeppli class} could be solved, its solutions would be K\"ahler metrics. Specifically, in $\S.$\ref{section:M-A_eq} we get the following result. \begin{Prop}\label{Prop:Introd_consequence_MA-eq} Let $X$ be a compact complex Hermitian-symplectic manifold with $\mbox{dim}_\C X=3$. Fix an arbitrary Hermitian metric $\gamma$ and an H-S metric $\omega$ on $X$. Let $A = A_{\{\omega\}_A}>0$ be the generalised volume of the class $\{\omega\}_A$ and let $c = c_{\omega,\,\gamma}>0$ be the constant defined by the requirement \begin{equation*}\frac{(\int\limits_X\omega\wedge\gamma^2/2!)^3}{(\int\limits_X\gamma^3/3!)^2} = \frac{6A}{c}.\end{equation*} If there exists a solution $\eta\in C^\infty_{1,\,0}(X,\,\C)$ of the Monge-Amp\`ere-type equation \begin{equation*} (\omega + \partial\bar\eta + \bar\partial\eta)^3 = c\,(\Lambda_\gamma\omega)^3\,\,\frac{\gamma^3}{3!} \hspace{6ex} (\star)\end{equation*} such that $\omega_\eta:=\omega + \partial\bar\eta + \bar\partial\eta>0$, then $\omega_\eta$ is a {\bf K\"ahler metric} lying in the Aeppli cohomology class $\{\omega\}_A$ of $\omega$. \end{Prop} \vspace{2ex} However, equation $(\star)$ is heavily underdetermined and it is hard to see how one could go about solving it.
For this reason, we replace it in $\S.$\ref{section:stratification} by a family of Monge-Amp\`ere-type equations of the familiar kind after we have stratified the given Hermitian-symplectic Aeppli class $\{\omega\}_A$ by {\it Bott-Chern subclasses} (or {\it strata}) in the following way. We consider a {\it partition} of ${\cal S}_{[\omega]}$ of the shape ${\cal S}_{[\omega]}=\cup_{j\in J}{\cal D}_{[\omega_j]}$, where $(\omega_j)_{j\in J}$ is a family of H-S metrics in $\{\omega\}_A$ and $${\cal D}_{[\omega_j]}:=\{\omega'>0\,\mid\,\omega' - \omega_j\in\mbox{Im}\,(\partial\bar\partial)\}, \hspace{3ex} j\in J.$$ For each $j\in J$, we choose an arbitrary Hermitian metric $\gamma_j$ on $X$ such that $\Lambda_{\gamma_j}\omega_j=1$ at every point of $X$. Then, on each Bott-Chern stratum ${\cal D}_{[\omega_j]}\subset{\cal S}_{[\omega]}$, the Tosatti-Weinkove result in [TW10, Corollary 1] ensures the existence of a {\it unique} constant $b_j>0$ such that the equation \begin{equation*}\frac{(\omega_j + i\partial\bar\partial\varphi)^3}{3!} = b_jA\,\frac{dV_{\gamma_j}}{\int\limits_XdV_{\gamma_j}}\hspace{6ex} (\star\star\star_j),\end{equation*} subject to the extra condition $\omega_j + i\partial\bar\partial\varphi>0$, is {\it solvable}, where $A>0$ is the {\it generalised volume} of $\{\omega\}_A$ introduced in Theorem and Definition \ref{The-Def:Introd_A-invariant}. (Hence, $A$ is independent of $j$.) In this way, we associate a constant $b_j\in(0,\,1]$ with every Bott-Chern stratum ${\cal D}_{[\omega_j]}$ of ${\cal S}_{[\omega]}$. The problem of {\it minimising} the functional $F$ in ${\cal S}_{[\omega]}$ (equivalently, {\it maximising} ${\cal S}_{[\omega]}\ni\omega'\mapsto\mbox{Vol}_{\omega'}(X)$) becomes equivalent to proving that the value $1$ is attained by one of the mysterious constants $b_j$. \begin{Prop}\label{Prop:Introd_b_j1} If there exists $j\in J$ such that $b_j=1$, the solution $\omega_j + i\partial\bar\partial\varphi_j$ of equation $(\star\star\star_j)$ is a {\bf K\"ahler metric} lying in the Bott-Chern subclass ${\cal D}_{[\omega_j]}$, hence also in the Aeppli class $\{\omega\}_A$. \end{Prop} We go on to observe in Lemma \ref{Lem:BC-subclass_volume-Gauduchon} that if a Bott-Chern stratum of an H-S Aeppli class contains a {\it Gauduchon} metric, then all the metrics $\omega$ on that stratum are Gauduchon and have the same volume $\mbox{Vol}_\omega(X)$. On the other hand, the restriction of the volume function $\omega\mapsto\mbox{Vol}_\omega(X)$ to a non-Gauduchon stratum cannot achieve any local extremum, thanks to Lemma \ref{Lem:BC-subclass_volume-nonGauduchon}. Thus, we have a good understanding of the behaviour of the volume in the {\it horizontal} directions (i.e. those of the Bott-Chern strata). The variation in the {\it vertical} directions remains mysterious for now. \subsection{Obstruction to the existence of a K\"ahler metric in a given Hermitian-symplectic Aeppli class}\label{subsection:Introd_obstruction} While the Streets-Tian Question \ref{Question:S-T} asks whether a K\"ahler metric exists on every Hermitian-symplectic manifold $X$, if our functional $F : {\cal S}_{\{\omega_0\}} \to [0,\,+\infty)$ admits critical points for {\it any} Hermitian-symplectic metric $\omega_0$ on $X$, much more will be true: there will exist a K\"ahler metric in the Aeppli cohomology class of {\it every} Hermitian-symplectic metric on $X$. 
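Returning to Proposition \ref{Prop:Introd_b_j1}, the mechanism behind it can be spelt out in one line using the generalised volume; we record this short computation as a reading aid. If $b_j=1$, integrating $(\star\star\star_j)$ over $X$ gives
\begin{equation*}
\mbox{Vol}_{\omega_j+i\partial\bar\partial\varphi_j}(X)=\int\limits_X\frac{(\omega_j + i\partial\bar\partial\varphi_j)^3}{3!}=A, \hspace{3ex}\mbox{hence}\hspace{3ex} F(\omega_j+i\partial\bar\partial\varphi_j)=A-\mbox{Vol}_{\omega_j+i\partial\bar\partial\varphi_j}(X)=0
\end{equation*}
by Theorem and Definition \ref{The-Def:Introd_A-invariant}, while $F$ is non-negative and vanishes precisely on the K\"ahler metrics.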
However, in $\S.$\ref{section:obs-est} we identify an obstruction to the existence of a K\"ahler metric that is Aeppli cohomologous to a given Hermitian-symplectic metric. In fact, we first show (see Lemma and Definition \ref{Def:E_2_H-S}) that the $(0,\,2)$-torsion form $\rho^{0,\,2}_\omega$ of any H-S metric $\omega$ on an $n$-dimensional compact complex manifold $X$ is {\bf $E_2$-closed} in the sense that it defines an $E_2$-cohomology class $$\{\rho^{0,\,2}_\omega\}_{E_2}\in E_2^{0,\,2}(X)$$ on the second page of the {\it Fr\"olicher spectral sequence} of $X$. Moreover, $\{\rho^{0,\,2}_\omega\}_{E_2}$ depends only on the Aeppli class $\{\omega\}_A$. We call it the {\bf $E_2$-torsion class} of the Hermitian-symplectic Aeppli class $\{\omega\}_A$ and prove the following fact (see Corollary \ref{Cor:necessary-cond_K}). \begin{Prop}\label{Prop:Introd_} Let $X$ be a $3$-dimensional compact complex manifold supposed to carry a Hermitian-symplectic metric $\omega$. The {\bf vanishing} of the $E_2$-torsion class $\{\rho^{0,\,2}_\omega\}_{E_2}\in E_2^{0,\,2}(X)$ is a necessary condition for the Aeppli class $\{\omega\}_A$ to contain a K\"ahler metric. \end{Prop} This throws up the natural question of whether there exist compact $3$-dimensional Hermitian-symplectic manifolds on which all or some $E_2$-torsion classes are non-vanishing. \subsection{Cohomological interpretations of the generalised volume}\label{subsection:Introd_coh-interpretations} In $\S.$\ref{subsection:cohom_A} and $\S.$\ref{subsection:minimal-completion}, we give two cohomological interpretations of our generalised volume invariant $A$. \vspace{2ex} The one in $\S.$\ref{subsection:minimal-completion} is valid on any $3$-dimensional compact Hermitian-symplectic manifold. We first observe that the real $d$-closed $2$-form $$\widetilde\omega = \rho_\omega^{2,\,0} + \omega + \rho_\omega^{0,\,2},$$ that we call the {\bf minimal completion} of the given H-S metric $\omega$, lies in a De Rham cohomology class that depends only on the Aeppli cohomology class of $\omega$. This follows from the study of our functional $F$, specifically from Corollary \ref{Cor:energy_M-A_mass}. We then observe (see (a) of Proposition \ref{Prop:same-DR-class}) that the {\it generalised volume} is a kind of {\it volume of the minimal completion}: \begin{equation}\label{eqn:Introd_min-comp_integral}A = A_{\{\omega\}_A} = \int\limits_X\frac{\widetilde\omega^3}{3!} = \frac{1}{6}\,\{\widetilde\omega\}_{DR}^3,\end{equation} so it only depends on the De Rham cohomology class of the minimal completion $\widetilde\omega$. \vspace{2ex} The cohomological interpretation given in $\S.$\ref{subsection:cohom_A} is only valid on $3$-dimensional compact Hermitian-symplectic manifolds that lie in the new class of {\bf page-$1$-$\partial\bar\partial$-manifolds} that were very recently introduced in [PSU20a]. This class is strictly larger than the one of $\partial\bar\partial$-manifolds, so we get a link with Question \ref{Question:SKT_ddbar}. As observed in [PSU20b, Proposition 6.2], on a $3$-dimensional manifold $X$ an H-S metric $\omega$ induces an {\it $E_2$-Aeppli cohomology} class $\{\omega\}_{E_2,\,A}\in E_{2,\,A}^{1,\,1}(X)$. Together with the {\it $E_r$-Bott-Chern cohomologies}, the {\it $E_r$-Aeppli cohomologies} have been recently introduced in [PSU20b, Definition 3.4] for every integer $r\geq 2$. They coincide with the standard Bott-Chern and Aeppli cohomologies when $r=1$. 
On the other hand, using results from [PSU20b], we show in Corollary \ref{Cor:E_2BC_lifts} that on a $3$-dimensional page-$1$-$\partial\bar\partial$-manifold $X$, an {\it $E_2$-Bott-Chern class} $\mathfrak{c}_\omega\in E_{2,\,BC}^{2,\,2}(X)$ can be canonically associated with the $E_2$-Aeppli class $\{\omega\}_{E_2,\,A}$ of any Hermitian-symplectic metric $\omega$ on $X$. Finally, using the duality between $E_{2,\,BC}^{n-1,\,n-1}(X)$ and $E_{2,\,A}^{1,\,1}(X)$ proved in [PSU20b, Theorem 3.11] for every compact complex manifold $X$ of any dimension $n$, we get the following cohomological interpretation (see Theorem \ref{The:cohom_A}) of the generalised volume as a multiple of the {\it intersection number} between the cohomology classes $\mathfrak{c}_\omega$ and $\{\omega\}_{E_2,\,A}$: \begin{equation}\label{eqn:Introd_cohom_A} A= A_{\{\omega\}_A} = \frac{1}{6}\,\mathfrak{c}_\omega.\{\omega\}_{E_2,\,A}.\end{equation} Corollary \ref{Cor:energy_M-A_mass} in the study of our functional $F$ is again used in a key way to obtain this result. \vspace{3ex} \noindent {\bf Acknowledgments.} This work started in the spring of 2016 when the second-named author was visiting the Jagiellonian University in Krak\'ow at the invitation of S\l{}awomir Kolodziej, to whom he is very grateful for the hospitality offered under the NCN grant 2013/08/A/ST1/00312 and the very stimulating discussions on various topics. The first-named author was partially supported by the National Science Centre, Poland grant no 2017/27/B/ST1/01145. \section{Preliminaries}\label{section:preliminaries} In this section, we present a mixture of well-known and new results that will come in handy. \subsection{Background}\label{subsection:background} Let $\omega$ be a Hermitian metric on a compact complex manifold $X$ with $\mbox{dim}_\C X=n$. To outline the context of our study, we start by recalling the definitions of six classes of special metrics and the known implications among them: \\ \vspace{3ex} \noindent$\begin{array}{lllll}d\omega=0 & \Longrightarrow & \exists\,\, \rho^{0,\,2}\in C^{\infty}_{0,\,2}(X,\,\C)\,\, \mbox{s.t.} & \Longrightarrow & \partial\bar\partial\omega=0 \\ & & d(\overline{\rho^{0,\,2}}+\omega+\rho^{0,\,2})=0 & & \\ (\omega\,\,\mbox{is K\"ahler}) & & (\omega\,\,\mbox{is Hermitian-symplectic}) & & (\omega\,\,\mbox{is SKT}) \\ \rotatebox{-90}{$\implies$} & & & & \hspace{33ex} (P) \\ d\omega^{n-1}=0 & \Longrightarrow & \exists\,\, \Omega^{n-2,\,n}\in C^{\infty}_{n-2,\,n}(X,\,\C)\,\, \mbox{s.t.} & \Longrightarrow & \partial\bar\partial\omega^{n-1}=0 \\ & & d(\overline{\Omega^{n-2,\,n}}+\omega^{n-1}+\Omega^{n-2,\,n})=0 & & \\ (\omega\,\,\mbox{is balanced}) & & (\omega\,\,\mbox{is strongly Gauduchon (sG)}) & & (\omega\,\,\mbox{is Gauduchon}).\end{array}$ \vspace{5ex} \noindent The manifold $X$ is called {\it K\"ahler}, {\it Hermitian-symplectic} (H-S), {\it SKT}, {\it balanced}, {\it strongly Gauduchon} (sG) if it carries a Hermitian metric $\omega$ of the corresponding type. Meanwhile, Gauduchon metrics always exist on any $X$ by [Gau77a]. Balanced metrics were introduced in [Gau77b] under the name {\it semi-K\"ahler} and then discussed again in [Mic83], while strongly Gauduchon (sG) metrics were introduced in [Pop13] by requiring $\partial\omega^{n-1}\in\mbox{Im}\,\bar\partial$, a definition that was then proved in [Pop13, Proposition 4.2] to be equivalent to the description on the second line in the above picture (P). 
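Let us spell out the second line of (P), since the same type of pure-type comparison will be used several times in what follows (see e.g. Lemma and Definition \ref{Lem-Def:minimal_rho} below). Since $\overline{\Omega^{n-2,\,n}}$ is of bidegree $(n,\,n-2)$, the only components of the $(2n-1)$-form $d(\overline{\Omega^{n-2,\,n}} + \omega^{n-1} + \Omega^{n-2,\,n})$ that do not vanish identically for bidegree reasons are those of bidegrees $(n,\,n-1)$ and $(n-1,\,n)$, and these are conjugate to each other. Hence the sG condition amounts to the single equation $$\partial\omega^{n-1} = -\bar\partial\,\overline{\Omega^{n-2,\,n}}\in\mbox{Im}\,\bar\partial,$$ which is the description $\partial\omega^{n-1}\in\mbox{Im}\,\bar\partial$ recalled above from [Pop13].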
In particular, the notion of H-S metric is the analogue in bidegree $(1,\,1)$ of the notion of sG metric. These special metrics define Dolbeault, Bott-Chern or Aeppli cohomology classes, according to the case. Recall the by now standard definitions of these cohomologies: $$H^{\bullet,\,\bullet}_{\bar\partial}(X,\,\C)=\ker\bar\partial/\mbox{Im}\,\bar\partial, \hspace{1ex} H^{\bullet,\,\bullet}_{BC}(X,\,\C)=\ker\partial\cap\ker\bar\partial/\mbox{Im}\,(\partial\bar\partial), \hspace{1ex} H^{\bullet,\,\bullet}_A(X,\,\C)=\ker(\partial\bar\partial)/(\mbox{Im}\,\partial + \mbox{Im}\,\bar\partial).$$ The metrical notions in bidegree $(n-1,\, n-1)$ listed on the second line in (P) offer more flexibility than their bidegree $(1,\, 1)$ analogues. Moreover, the cohomology classes they define eventually give information on those defined by the metrics on the first line thanks to the classical {\it Serre duality} for the Dolbeault cohomology and the analogous duality between the Bott-Chern and Aeppli cohomologies (see [Sch07] for the latter): $$H^{p,\,q}_{\bar\partial}(X,\,\C)\times H^{n-p,\,n-q}_{\bar\partial}(X,\,\C)\longrightarrow\C, \hspace{3ex} (\{\alpha\}_{\bar\partial},\,\{\beta\}_{\bar\partial})\longmapsto\int\limits_X\alpha\wedge\beta,$$ \noindent and $$H^{p,\,q}_{BC}(X,\,\C)\times H^{n-p,\,n-q}_A(X,\,\C)\longrightarrow\C, \hspace{3ex} (\{\alpha\}_{BC},\,\{\beta\}_A)\longmapsto\int\limits_X\alpha\wedge\beta.$$ As for other possible vertical implications in (P), besides the trivial ``$\omega$ {\it K\"ahler $\implies$ $\omega$ balanced}'', it is easy to see that there is no counterpart at the SKT/Gauduchon level: \vspace{1ex} \hspace{25ex} $\omega$ {\it SKT $\centernot\implies$ $\omega$ Gauduchon}. \subsection{H-S and sG manifolds}\label{subsection:H-S_sG} We will now prove the implication ``{\it Hermitian-symplectic $\implies$ strongly Gauduchon}'' at the level of manifolds, which seems new and goes some way in the direction of the Streets-Tian Question \ref{Question:S-T}. Note that this implication does not hold at the level of metrics $\omega$. \begin{Prop}\label{Prop:H-S_sG} Every compact complex manifold $X$ that admits a {\bf Hermitian-symplectic} metric also admits a {\bf strongly Gauduchon (sG)} metric. \end{Prop} \noindent {\it Proof.} Let $n=\mbox{dim}_\C X$. As recalled in (P), a strongly Gauduchon (sG) structure on $X$ can be regarded as a real $C^\infty$ $d$-closed $(2n-2)$-form $\Omega$ on $X$ such that its $(n-1,\,n-1)$-component $\Omega^{n-1,\,n-1}$ is positive definite (see [Pop13]). This also uses the fact that, if $\Omega^{n-1,\,n-1}>0$, there exists a unique smooth positive definite $(1,\,1)$-form $\omega$ on $X$, called the $(n-1)$-st root of $\Omega^{n-1,\,n-1}$, such that $\omega^{n-1} = \Omega^{n-1,\,n-1}$. (This fact, noticed in [Mic83], is well known and can be easily checked pointwise in appropriately chosen local coordinates.) Now, suppose that an H-S structure $\widetilde\omega$ exists on $X$. This means that $\widetilde\omega = \rho^{2,\,0} + \omega + \rho^{0,\,2}$ is a real $C^\infty$ $d$-closed $2$-form on $X$ such that its $(1,\,1)$-component $\omega$ is positive definite.
Thus, $d\widetilde\omega^{n-1}=0$ and \begin{eqnarray}\nonumber\widetilde\omega^{n-1} = [\omega + (\rho^{2,\,0} + \rho^{0,\,2})]^{n-1} = \sum\limits_{k=0}^{n-1}\sum\limits_{l=0}^k{n-1 \choose k}{k \choose l}\,(\rho^{2,\,0})^l\wedge(\rho^{0,\,2})^{k-l}\wedge\omega^{n-k-1}.\end{eqnarray} \noindent In particular, the $(n-1,\,n-1)$-component of $\widetilde\omega^{n-1}$ is the sum of the terms for which $l=k-l$ in the above expression, i.e. $$\Omega^{n-1,\,n-1} = \omega^{n-1} + \sum\limits_{l=1}^{[\frac{n-1}{2}]}{n-1 \choose 2l}{2l \choose l}\,(\rho^{2,\,0})^l\wedge(\rho^{0,\,2})^l\wedge\omega^{n-2l-1}.$$ Thus, to prove the existence of an sG structure on $X$, it suffices to prove that the $(n-1,\,n-1)$-form $\Omega^{n-1,\,n-1}$ is positive definite. Its $(n-1)$-st root will then be an sG metric on $X$, by construction. To show that $\Omega^{n-1,\,n-1}>0$, it suffices to check that the real form $(\rho^{2,\,0})^l\wedge(\rho^{0,\,2})^l\wedge\omega^{n-2l-1}$ is weakly (semi)-positive at every point of $X$. (Recall that $\rho^{0,\,2}$ is the conjugate of $\rho^{2,\,0}$.) To this end, note that the $(2l,\,2l)$-form $(\rho^{2,\,0})^l\wedge(\rho^{0,\,2})^l$ is weakly semi-positive as the wedge product of a $(2l,\,0)$-form and its conjugate (see [Dem97, Chapter III, Example 1.2]). Therefore, the $(n-1,\,n-1)$-form $(\rho^{2,\,0})^l\wedge(\rho^{0,\,2})^l\wedge\omega^{n-2l-1}$ is (semi)-positive since the product of a weakly (semi)-positive form and a strongly (semi)-positive form is weakly (semi)-positive and $\omega$ is strongly positive (see [Dem97, Chapter III, Proposition 1.11]). (Recall that in bidegrees $(1,\,1)$ and $(n-1,\,n-1)$, the notions of weak and strong positivity coincide.)\hfill $\Box$ \vspace{3ex} In the case $n=2$, the notions of H-S and sG metrics coincide. Meanwhile, as explained in [Pop13, Observation 4.4], every strongly Gauduchon compact complex surface is K\"ahler. This answers the two-dimensional analogue of the Streets-Tian Question \ref{Question:S-T}, a fact that has been known for a while (cf. e.g. [LZ09] or [ST10, Proposition 1.6]). \begin{Prop}\label{Prop:S-T_surfaces} Let $X$ be a compact complex surface. If $X$ carries a Hermitian-symplectic metric, then $X$ carries a K\"ahler metric. \end{Prop} Note that the proof given in [Lam99] of Theorem 6.1 therein, according to which every compact complex surface whose first Betti number $b_1$ is odd carries a non-zero positive $d$-exact $(1,\,1)$-current, is based on the Hahn-Banach separation theorem and uses a duality argument. On the other hand, Observation 4.4 in [Pop13] uses in a key way Lamari's result and the fact that a compact complex manifold carries a strongly Gauduchon metric if and only if there is no non-zero positive $d$-exact $(1,\,1)$-current on it (see [Pop13, Proposition 4.3]). This characterisation of sG manifolds given in [Pop13] uses again the Hahn-Banach theorem and a Harvey-Lawson-type duality argument harking back to Sullivan. In particular, we get no information in this way on the Aeppli cohomology class of the K\"ahler metric whose existence is given by the above Proposition \ref{Prop:S-T_surfaces}. \vspace{3ex} Independently, let us notice that the existence of Hermitian-symplectic metrics on a compact complex threefold implies a property that is well known to hold on compact K\"ahler manifolds (and even on $\partial\bar\partial$-manifolds, and indeed on all compact complex manifolds whose Fr\"olicher spectral sequence degenerates at $E_1$).
Thus, the next observation takes Hermitian-symplectic threefolds a little closer to K\"ahler ones. \begin{Cor}\label{Cor:holomorphic_1-forms} Let $X$ be a compact complex Hermitian-symplectic manifold with $\mbox{dim}_\C X=3$. Then, every holomorphic $1$-form (i.e. every smooth $\bar\partial$-closed $(1,\,0)$-form) on $X$ is $d$-closed. \end{Cor} \noindent {\it Proof.} Let $\omega$ be an H-S metric on $X$. Then, $\partial\omega\in\mbox{Im}\,\bar\partial$ and $\bar\partial\omega\in\mbox{Im}\,\partial$ (see e.g. (\ref{eqn:H-S_condition})). Choose any form $\rho^{2,\,0}\in C^\infty_{2,\,0}(X,\,\C)$ such that $\partial\omega = -\bar\partial\rho^{2,\,0}$. Hence, $\bar\partial\omega = -\partial\rho^{0,\,2}$, where $\rho^{0,\,2}:=\overline{\rho^{2,\,0}}$. Now, let $\xi\in C^{\infty}_{1,\,0}(X,\,\C)$ such that $\bar\partial\xi=0$. We want to show that $\partial\xi=0$. On the one hand, if $\star = \star_\omega$ is the Hodge star operator induced by $\omega$, the general formula (\ref{eqn:prim-form-star-formula-gen}) applied to the (necessarily primitive) $(0,\,2)$-form $\bar\partial\bar\xi$ yields: $\star(\bar\partial\bar\xi) = \bar\partial\bar\xi\wedge\omega$. Hence, \begin{equation}\label{eqn:prelim_d-closedness1}\partial\xi\wedge\bar\partial\bar\xi\wedge\omega = |\partial\xi|_\omega^2\,dV_\omega \geq 0\end{equation} at every point of $X$. Meanwhile, an immediate calculation and the use of the identities $\bar\partial\xi=0$ and $\partial\bar\xi=0$ show that \begin{eqnarray*}\partial\xi\wedge\bar\partial\bar\xi\wedge\omega & = & -\partial\bar\partial(\xi\wedge\bar\xi\wedge\omega) + \xi\wedge\bar\partial\bar\xi\wedge\partial\omega + \partial\xi\wedge\bar\xi\wedge\bar\partial\omega + \xi\wedge\bar\xi\wedge\partial\bar\partial\omega \\ & = & -\partial\bar\partial(\xi\wedge\bar\xi\wedge\omega) - \xi\wedge\bar\partial\bar\xi\wedge\bar\partial\rho^{2,\,0} - \partial\xi\wedge\bar\xi\wedge\partial\rho^{0,\,2} \\ & = & -\partial\bar\partial(\xi\wedge\bar\xi\wedge\omega) + \bar\partial(\xi\wedge\bar\partial\bar\xi\wedge\rho^{2,\,0}) + \partial(\partial\xi\wedge\bar\xi\wedge\rho^{0,\,2})\in\mbox{Im}\,\partial + \mbox{Im}\,\bar\partial,\end{eqnarray*} where for the second identity we also used the property $\partial\bar\partial\omega=0$ of the H-S metric $\omega$. Using Stokes's theorem, we infer: \begin{equation}\label{eqn:prelim_d-closedness2}\int\limits_X\partial\xi\wedge\bar\partial\bar\xi\wedge\omega = 0.\end{equation} Putting together (\ref{eqn:prelim_d-closedness1}) and (\ref{eqn:prelim_d-closedness2}), we get $\partial\xi=0$ on $X$ and we are done. \hfill $\Box$ \subsection{Toolbox}\label{subsection:toolbox} \hspace{2ex} (I)\, Let $\omega$ be an arbitrary Hermitian metric on an $n$-dimensional compact complex manifold $X$. 
We know from [KS60, $\S.6$] (see also [Sch07] or [Pop15]) that $\omega$ induces the following $4^{th}$-order elliptic differential operator $\Delta_{BC}: C^{\infty}_{r,\,s}(X,\,\C)\longrightarrow C^{\infty}_{r,\,s}(X,\,\C)$, called the {\it Bott-Chern Laplacian}, in every bidegree $(r,\,s)$: \begin{equation}\label{eqn:BC-Laplacian}\Delta_{BC} : = \partial^{\star}\partial + \bar\partial^{\star}\bar\partial + (\partial\bar\partial)^{\star}(\partial\bar\partial) + (\partial\bar\partial)(\partial\bar\partial)^{\star} + (\partial^{\star}\bar\partial)^{\star}(\partial^{\star}\bar\partial) + (\partial^{\star}\bar\partial)(\partial^{\star}\bar\partial)^{\star}.\end{equation} \noindent From the ellipticity and self-adjointness of $\Delta_{BC}$, coupled with the compactness of $X$, we get the following $L^2_\omega$-orthogonal $3$-space decomposition: \begin{equation}\label{eqn:BC-3sp-decomp}C^{\infty}_{r, \, s}(X, \C)=\ker\Delta_{BC} \oplus \mbox{Im}\,\partial\bar\partial \oplus (\mbox{Im}\,\partial^{\star} + \mbox{Im}\,\bar\partial^{\star})\end{equation} \noindent in which $\ker\partial\cap\ker\bar\partial = \ker\Delta_{BC} \oplus \mbox{Im}\,\partial\bar\partial$. \vspace{3ex} The following Neumann-type formula for the minimal $L^2_\omega$-norm solution of a $\bar\partial$-equation with an extra constraint seems to be new. It will be used in $\S.$\ref{section:obs-est}. \begin{Lem}\label{Lem:Neumann_del-0} Let $(X,\,\omega)$ be a compact Hermitian manifold. For every $p,q=0,\dots , n=\mbox{dim}_\C X$ and every form $v\in C^\infty_{p,\,q}(X,\,\C)$, consider the following $\bar\partial$-equation problem: \begin{equation}\label{eqn:Neumann_del-0}\bar\partial u = v \hspace{3ex} \mbox{subject to the condition} \hspace{2ex} \partial u = 0.\end{equation} \noindent If problem (\ref{eqn:Neumann_del-0}) is solvable for $u$, the solution of minimal $L^2_\omega$-norm is given by the Neumann-type formula: \begin{equation}\label{eqn:Neumann_formula_del-0}u =\Delta_{BC}^{-1}[\bar\partial^\star v + \bar\partial^\star\partial\partial^\star v].\end{equation} \end{Lem} \noindent {\it Proof.} The solution $u$ of problem (\ref{eqn:Neumann_del-0}) is unique up to $\ker\partial\cap\ker\bar\partial= \ker\Delta_{BC} \oplus \mbox{Im}\,\partial\bar\partial$. Thanks to (\ref{eqn:BC-3sp-decomp}), the minimal $L^2_\omega$-norm solution of problem (\ref{eqn:Neumann_del-0}) is uniquely determined by the condition $u\in\mbox{Im}\,\partial^{\star} + \mbox{Im}\,\bar\partial^{\star}$. In other words, there exist forms $\xi$ and $\eta$ such that $$u = \partial^\star\xi + \bar\partial^\star\eta, \hspace{3ex} \mbox{hence} \hspace{3ex} \partial^{\star}u = -\bar\partial^{\star}\partial^{\star}\eta, \hspace{2ex} \bar\partial^{\star}u = -\partial^{\star}\bar\partial^{\star}\xi \hspace{2ex} \mbox{and} \hspace{2ex} (\partial\bar\partial)^{\star}u = 0.$$ \noindent Applying $\Delta_{BC}$, we get $$\Delta_{BC} u = \bar\partial^{\star}(\bar\partial u) + \bar\partial^{\star}\partial\partial^\star(\bar\partial u),$$ \noindent since the first, third (after writing $\partial\bar\partial = -\bar\partial\partial$) and sixth (after writing $(\partial^{\star}\bar\partial)^{\star} = \bar\partial^\star\partial$) terms in $\Delta_{BC}$ end with $\partial$ and $\partial u=0$, while the fourth term in $\Delta_{BC}$ ends with $(\partial\bar\partial)^\star$ and $(\partial\bar\partial)^\star u=0$. 
Now, the restriction of $\Delta_{BC}$ to the orthogonal complement of $\ker\Delta_{BC}$ is an isomorphism onto this same orthogonal complement, so using the inverse $\Delta_{BC}^{-1}$ of this restriction ($=$ the Green operator of $\Delta_{BC}$), we get $$u=\Delta_{BC}^{-1}[\bar\partial^{\star}(\bar\partial u) + \bar\partial^{\star}\partial\partial^\star(\bar\partial u)],$$ \noindent since both $u$ and $\bar\partial^{\star}(\bar\partial u) + \bar\partial^{\star}\partial\partial^\star(\bar\partial u)$ lie in $(\ker\Delta_{BC})^\perp$. Since $\bar\partial u =v$, the last formula for $u$ is precisely (\ref{eqn:Neumann_formula_del-0}). \hfill $\Box$ \vspace{3ex} (II)\, We now remind the reader of the following notion. \begin{Def}\label{Def:dd-bar-lemma} A compact complex manifold $X$ is said to be a $\partial\bar\partial$-{\bf manifold} if for any $d$-closed {\it pure-type} form $u$ on $X$, the following exactness properties are equivalent: \\ \hspace{10ex} $u$ is $d$-exact $\Longleftrightarrow$ $u$ is $\partial$-exact $\Longleftrightarrow$ $u$ is $\bar\partial$-exact $\Longleftrightarrow$ $u$ is $\partial\bar\partial$-exact. \end{Def} Recall that $\partial\bar\partial$-manifolds are precisely the compact complex manifolds that have the {\it canonical Hodge Decomposition property} in the sense that every Dolbeault cohomology class $\{\alpha\}_{\bar\partial}\in H^{p,\,q}_{\bar\partial}(X,\,\C)$ of any bidegree $(p,\,q)$ can be represented by a {\it $d$-closed} form and the identity map induces, via $d$-closed representatives, an {\it isomorphism} $$H^k_{DR}(X,\,\C)\simeq\bigoplus\limits_{p+q=k}H^{p,\,q}_{\bar\partial}(X,\,\C)$$ in every degree $k$. \vspace{3ex} (III)\, We now recall the following standard formula (cf. e.g. [Voi02, Proposition 6.29, p. 150]) for the Hodge star operator $\star = \star_\omega$ of any Hermitian metric $\omega$ applied to {\it primitive} forms $v$ of arbitrary bidegree $(p, \, q)$: \begin{eqnarray}\label{eqn:prim-form-star-formula-gen}\star\, v = (-1)^{k(k+1)/2}\, i^{p-q}\, \frac{\omega^{n-p-q}\wedge v}{(n-p-q)!}, \hspace{2ex} \mbox{where}\,\, k:=p+q.\end{eqnarray} \vspace{3ex} (IV)\, Let us also mention the following observation that will play a key part in this work. It was first noticed in [IP13] and in some of the references therein as a consequence of more general results. A quick proof, made even shorter below, appeared in [Pop15, Proposition 1.1]. \begin{Prop}\label{Prop:SKT+bal} If a Hermitian metric $\omega$ on a compact complex manifold $X$ is both {\bf SKT} and {\bf balanced}, then $\omega$ is {\bf K\"ahler}. \end{Prop} \noindent {\it Proof.} The {\it SKT} assumption on $\omega$ translates to any of the following equivalent properties: \begin{eqnarray}\label{eqn:pluriclosed-equiv}\partial\bar\partial\omega=0 \Longleftrightarrow \partial\omega\in\ker\bar\partial \Longleftrightarrow \star(\partial\omega)\in\ker\partial^{\star},\end{eqnarray} \noindent where the last equivalence follows from the standard formula $\partial^{\star}=-\star\bar\partial\star$ involving the Hodge-star isomorphism $\star=\star_{\omega}:\Lambda^{p,\,q}T^{\star}X\rightarrow \Lambda^{n-q,\,n-p}T^{\star}X$ defined by $\omega$ for arbitrary $p,q=0,\dots , n$.
Meanwhile, the {\it balanced} assumption on $\omega$ translates to any of the following equivalent properties: \begin{eqnarray*}\label{eqn:bal-equiv}d\omega^{n-1}=0 \Longleftrightarrow \partial\omega^{n-1}=0 \Longleftrightarrow \omega^{n-2}\wedge\partial\omega = 0 \Longleftrightarrow \partial\omega \,\,\mbox{is primitive}.\end{eqnarray*} \noindent Moreover, since $\partial\omega$ is primitive when $\omega$ is balanced, the general formula (\ref{eqn:prim-form-star-formula-gen}) yields: \begin{eqnarray}\label{eqn:consequence_balanced}\star(\partial\omega) = i\,\frac{\omega^{n-3}}{(n-3)!}\wedge\partial\omega = \frac{i}{(n-2)!}\,\partial\omega^{n-2}\in\mbox{Im}\,\partial.\end{eqnarray} \vspace{1ex} Thus, if $\omega$ is both SKT and balanced, we get from (\ref{eqn:pluriclosed-equiv}) and (\ref{eqn:consequence_balanced}) that \begin{eqnarray*}\star(\partial\omega)\in\ker\partial^{\star}\cap\mbox{Im}\,\partial = \{0\},\end{eqnarray*} where the last identity follows from the subspaces $\ker\partial^{\star}$ and $\mbox{Im}\,\partial$ of $C^\infty_{n-1,\,n-2}(X,\,\C)$ being $L^2_\omega$-orthogonal. We infer that $\partial\omega=0$, i.e. $\omega$ is K\"ahler. \hfill $\Box$ \section{The energy functional}\label{section:alternative_functional} We will define and discuss our new energy functional in the general Hermitian-symplectic setting in $\S.$\ref{subsection:H-S_n=3_critical-points}. We will then discuss a variant of it in the special case of SKT $\partial\bar\partial$-manifolds in $\S.$\ref{subsection:SKT_ddbar}. Additional discussions are included in $\S.$\ref{subsection:variation_torsion} and $\S.$\ref{subsection:vol-M-A_eq_H-S}. \subsection{Case of H-S metrics on compact complex manifolds}\label{subsection:H-S_n=3_critical-points} Let $X$ be a compact complex manifold with $\mbox{dim}_{\C}X=n$ such that $X$ admits {\it Hermitian-symplectic} metrics. Recall that these are $C^{\infty}$ positive definite $(1,\,1)$-forms $\omega>0$ for which there exists $\rho^{2,\,0}\in C^{\infty}_{2,\,0}(X,\,\C)$ such that \begin{equation}\label{eqn:H-S_condition}d(\rho^{2,\,0} + \omega + \rho^{0,\,2}) = 0,\end{equation} where $\rho^{0,\,2}:=\overline{\rho^{2,\,0}}$. Alternatively, we say that $\widetilde{\omega}:=\rho^{2,\,0} + \omega + \rho^{0,\,2}$ is a Hermitian-symplectic $2$-form. \begin{Lem-Def}\label{Lem-Def:minimal_rho} For every Hermitian-symplectic metric $\omega$ on $X$, there exists a unique smooth $(2,\,0)$-form $\rho_\omega^{2,\,0}$ on $X$ such that \begin{equation}\label{eqn:H-S_condition_bis}(i)\,\, \partial\rho_\omega^{2,\,0} = 0 \hspace{3ex} \mbox{and} \hspace{3ex} (ii)\,\, \bar\partial\rho_\omega^{2,\,0} = -\partial\omega \hspace{3ex} \mbox{and} \hspace{3ex} (iii)\,\,\rho_\omega^{2,\,0}\in \mbox{Im}\,\partial^{\star}_\omega + \mbox{Im}\,\bar\partial^{\star}_\omega.\end{equation} \noindent Moreover, property $(iii)$ ensures that $\rho_\omega^{2,\,0}$ has {\bf minimal $L^2_\omega$ norm} among all the $(2,\,0)$-forms satisfying properties $(i)$ and $(ii)$. We call $\rho_\omega^{2,\,0}$ the {\bf $(2,\,0)$-torsion form} and its conjugate $\rho_\omega^{0,\,2}$ the {\bf $(0,\,2)$-torsion form} of the Hermitian-symplectic metric $\omega$. 
One has the explicit {\bf Neumann-type formula}: \begin{equation}\label{eqn:Neumann_torsion_H-S}\rho_\omega^{2,\,0} = -\Delta_{BC}^{-1}[\bar\partial^\star\partial\omega + \bar\partial^\star\partial\partial^\star\partial\omega],\end{equation} \noindent where $\Delta_{BC}^{-1}$ is the Green operator of the Bott-Chern Laplacian $\Delta_{BC}$ induced by $\omega$, while $\partial^\star=\partial^\star_\omega$ and $\bar\partial^\star=\bar\partial^\star_\omega$ are the formal adjoints of $\partial$, resp. $\bar\partial$, w.r.t. the $L^2$ inner product defined by $\omega$. \end{Lem-Def} \noindent {\it Proof.} Condition (\ref{eqn:H-S_condition}) is equivalent to the vanishing of each of the components of pure types $(3,\,0)$, $(2,\,1)$, $(1,\,2)$ and $(0,\,3)$ of the real $3$-form $d(\rho^{2,\,0} + \omega + \rho^{0,\,2})$. Since the $(3,\,0)$- and $(2,\,1)$-components are the conjugates of the $(0,\,3)$- and resp. $(1,\,2)$-components, these vanishings are equivalent to conditions $(i)$ and $(ii)$ of (\ref{eqn:H-S_condition_bis}) being satisfied by $\rho^{2,\,0}$ in place of $\rho^{2,\,0}_\omega$. Now, the forms $\rho^{2,\,0}$ satisfying equations $(i)$ and $(ii)$ of (\ref{eqn:H-S_condition_bis}) are unique modulo $\ker\partial\cap\ker\bar\partial$. On the other hand, considering the $3$-space decomposition (\ref{eqn:BC-3sp-decomp}) of $C^{\infty}_{2, \, 0}(X, \C)$ induced by the Bott-Chern Laplacian $\Delta_{BC}:C^{\infty}_{2,\,0}(X,\,\C)\to C^{\infty}_{2,\,0}(X,\,\C)$ associated with the metric $\omega$, we see that the form $\rho^{2,\,0}$ with minimal $L^2_\omega$ norm satisfying equations $(i)$ and $(ii)$ of (\ref{eqn:H-S_condition_bis}) is the unique such form lying in the orthogonal complement of $\ker\partial\cap\ker\bar\partial = \ker\Delta_{BC} \oplus \mbox{Im}\,\partial\bar\partial$ in $C^{\infty}_{2, \, 0}(X, \C)$, which is $\mbox{Im}\,\partial^{\star}_\omega + \mbox{Im}\,\bar\partial^{\star}_\omega $. For the proof of formula (\ref{eqn:Neumann_torsion_H-S}), see Lemma \ref{Lem:Neumann_del-0} with $v=-\partial\omega$. \hfill $\Box$ \begin{Obs}\label{Obs:torsion-formula_dim3} When $\mbox{dim}_\C X=3$, formula (\ref{eqn:Neumann_torsion_H-S}) for the $(2,\,0)$-torsion form $\rho_\omega^{2,\,0}$ of any Hermitian-symplectic metric $\omega$ simplifies to \begin{equation}\label{eqn:Neumann_torsion_H-S_dim3}\rho_\omega^{2,\,0} = -\Delta^{''-1}\bar\partial^\star(\partial\omega),\end{equation} \noindent where $\Delta^{''-1} = \Delta^{''-1}_\omega$ is the Green operator of the $\bar\partial$-Laplacian $\Delta'' = \Delta''_\omega:=\bar\partial\bar\partial^\star + \bar\partial^\star\bar\partial$ induced by $\omega$ via $\bar\partial^\star = \bar\partial^\star_\omega$. \end{Obs} \noindent {\it Proof.} It is a standard and easily-verified fact that on any compact complex $n$-dimensional manifold, any $\bar\partial$-closed $(n-1,\,0)$-form is $\partial$-closed. Now, the $(2,\,0)$-form $\rho^{2,\,0}$ satisfying $\partial\rho^{2,\,0}=0$ and $\bar\partial\rho^{2,\,0}=-\partial\omega$ (cf.\!\! (\ref{eqn:H-S_condition_bis})) is unique up to the addition of an arbitrary $(2,\,0)$-form $\zeta\in\ker\partial\cap\ker\bar\partial$. When $n=3$, $n-1=2$, so $\ker\partial\cap\ker\bar\partial = \ker\bar\partial$ in bidegree $(2,\,0)$. Therefore, $\rho_\omega^{2,\,0}\in\mbox{Im}\,\bar\partial^{\star}_\omega$, i.e. $\rho_\omega^{2,\,0} = \bar\partial^{\star}\xi$ for some $(2,\,1)$-form $\xi$. 
We get $\Delta''\rho_\omega^{2,\,0} = \bar\partial^{\star}\bar\partial(\bar\partial^{\star}\xi) = -\bar\partial^{\star}(\partial\omega)$. This is equivalent to (\ref{eqn:Neumann_torsion_H-S_dim3}). \hfill $\Box$ \vspace{3ex} If $\omega_0$ is a Hermitian-symplectic metric on $X$, any $C^{\infty}$ positive definite $(1,\,1)$-form $\omega$ lying in the Aeppli cohomology class of $\omega_0$ is a Hermitian-symplectic metric. Indeed, by $(i)$ and $(ii)$ of (\ref{eqn:H-S_condition_bis}), $\partial\omega_0 = -\bar\partial\rho_0^{2,\,0}$ for some $\partial$-closed $(2,\,0)$-form $\rho_0^{2,\,0}$ on $X$. Meanwhile, $\omega = \omega_0 + \partial\bar{u} + \bar\partial u$ for some $(1,\,0)$-form $u$, so $\partial\omega = \partial\omega_0 + \partial\bar\partial u = -\bar\partial(\rho_0^{2,\,0} + \partial u)$. Moreover, $\rho_0^{2,\,0} + \partial u$ is $\partial$-closed since $\rho_0^{2,\,0}$ is. Therefore, $\omega$ is Hermitian-symplectic (cf. $(i)$ and $(ii)$ of (\ref{eqn:H-S_condition_bis}) which characterise the H-S property). By a {\it Hermitian-symplectic (H-S) Aeppli class} $\{\omega\}_A\in H^{1,\,1}_A(X,\,\R)$ we shall mean a real Aeppli cohomology class of bidegree $(1,\,1)$ that contains an H-S metric $\omega$. The set of all H-S classes $${\cal HS}_X:=\bigg\{\{\omega\}_A\in H_A^{1,\,1}(X,\,\R)\,\mid\,\omega \hspace{1ex}\mbox{is an H-S metric on}\hspace{1ex} X\bigg\}\subset H_A^{1,\,1}(X,\,\R)$$ is an open convex cone. Moreover, for every Hermitian-symplectic Aeppli class $\{\omega\}_A$, we denote by $${\cal S}_{\{\omega\}}:=\bigg\{\omega + \partial\bar{u} + \bar\partial u\,\mid\,u\in C^\infty_{1,\,0}(X,\,\C) \hspace{1ex}\mbox{such that}\hspace{1ex} \omega + \partial\bar{u} + \bar\partial u>0\bigg\}\subset\{\omega\}_A\cap C^\infty_{1,\,1}(X,\,\R)$$ the set of all (necessarily H-S) metrics in $\{\omega\}_A$. It is an {\it open convex subset} of the real affine space $\{\omega\}_A\cap C^\infty_{1,\,1}(X,\,\R) = \{\omega + \partial\bar{u} + \bar\partial u\,\mid\,u\in C^\infty_{1,\,0}(X,\,\C)\}$. \begin{Def}\label{Def:F_energy-functional_H-S} Let $X$ be a compact complex Hermitian-symplectic manifold with $\mbox{dim}_{\C}X=n$. For the Aeppli cohomology class $\{\omega_0\}_A\in{\cal HS}_X$ of any Hermitian-symplectic metric $\omega_0$, we define the following {\bf energy functional}: \begin{equation}\label{eqn:F_energy-functional_H-S}F : {\cal S}_{\{\omega_0\}} \to [0,\,+\infty), \hspace{3ex} F(\omega) = \int\limits_X|\rho_\omega^{2,\,0}|^2_\omega\,dV_\omega = ||\rho_\omega^{2,\,0}||^2_\omega, \end{equation} \noindent where $\rho_{\omega}^{2,\,0}$ is the $(2,\,0)$-torsion form of the Hermitian-symplectic metric $\omega\in{\cal S}_{\{\omega_0\}}$ defined in Lemma and Definition \ref{Lem-Def:minimal_rho}, while $|\,\,\,|_\omega$ is the pointwise norm and $||\,\,\,||_\omega$ is the $L^2$ norm induced by $\omega$. \end{Def} The first trivial observation that justifies the introduction of the functional $F$ is the following. \begin{Lem}\label{Lem:vanishing-F-Kaehler} Let $\{\omega_0\}_A\in{\cal HS}_X$ and $\omega\in{\cal S}_{\{\omega_0\}}$. Then, the following equivalence holds: \begin{equation}\label{eqn:vanishing-F-Kaehler}\omega \hspace{1ex} \mbox{is K\"ahler} \iff F(\omega)=0.\end{equation} \end{Lem} \noindent {\it Proof.} If $\omega$ is K\"ahler, $\partial\omega=0$, so the minimal $L^2$-norm solution of the equation $\bar\partial\rho=-\partial\omega=0$ vanishes. Thus $\rho_\omega^{2,\,0}=0$, hence $F(\omega)=0$.
Conversely, if $F(\omega)=0$, then $\rho_\omega^{2,\,0}$ vanishes identically on $X$, hence $\partial\omega = -\bar\partial\rho_\omega^{2,\,0} = 0$, so $\omega$ is K\"ahler. \hfill $\Box$ \vspace{2ex} We now compute the critical points of the energy functional $F$. Note that definition (\ref{eqn:F_energy-functional_H-S}) of $F$ translates to \begin{equation}\label{eqn:F_energy-functional_bis} F(\omega) = \int\limits_X\rho_\omega^{2,\,0}\wedge\star\overline{\rho^{2,\,0}_\omega} = \int\limits_X\rho_\omega^{2,\,0}\wedge\rho_\omega^{0,\,2}\wedge\frac{\omega^{n-2}}{(n-2)!}.\end{equation} \noindent Indeed, $\overline{\rho^{2,\,0}_\omega} = \rho_\omega^{0,\,2}$ is primitive since it is of bidegree $(0,\,2)$, so $\star\overline{\rho^{2,\,0}_\omega} =\overline{\rho^{2,\,0}_\omega}\wedge\omega^{n-2}/(n-2)!$ by (\ref{eqn:prim-form-star-formula-gen}). We now fix a Hermitian-symplectic metric $\omega$ on $X$ and we vary it in its Aeppli class along the path $\omega + t\gamma$, where $\gamma=\partial\bar{u} + \bar\partial u\in C^\infty_{1,\,1}(X,\,\R)$ is a fixed real $(1,\,1)$-form chosen to be Aeppli cohomologous to zero. Recall that the $(2,\,0)$-torsion form $\rho_{\omega}^{2,\,0}$ satisfies the condition $\bar\partial\rho_{\omega}^{2,\,0} = -\partial\omega$ and has minimal $L^2_\omega$-norm with this property. We get \begin{equation}\label{eqn:rho_plus_t_u}\bar\partial(\rho_{\omega}^{2,\,0} + t\partial u) = -\partial(\omega + t\gamma),\end{equation} \noindent although $\rho_{\omega}^{2,\,0} + t\partial u$ need not be of minimal $L^2_{\omega + t\gamma}$-norm with this property. For every $t\in\R$ close to $0$, we define the new functional: \begin{eqnarray}\label{eqn:F-tilde_def}\nonumber\widetilde{F}_t(\omega) & := & \int\limits_X|\rho_{\omega}^{2,\,0} + t\partial u|^2_{\omega + t\gamma}\,\frac{(\omega + t\gamma)^n}{n!} = \int\limits_X(\rho_{\omega}^{2,\,0} + t\partial u)\wedge\star_{\omega + t\gamma}\,(\overline{\rho_{\omega}^{2,\,0}} + t\,\bar\partial\bar{u}) \\ & = & \int\limits_X(\rho_{\omega}^{2,\,0} + t\partial u)\wedge(\overline{\rho_{\omega}^{2,\,0}} + t\bar\partial\bar{u})\wedge\frac{(\omega + t\gamma)^{n-2}}{(n-2)!}.\end{eqnarray} The properties of $\widetilde{F}_t$ are summed up in the following statement. \begin{Prop}\label{Prop:F-tilde_properties} $(i)$\, The two energy functionals are related by the inequality: \begin{equation}\label{eqn:F_energy_inequality}\widetilde{F}_t(\omega) \geq F(\omega + t\gamma) \hspace{2ex} \mbox{for all} \hspace{1ex} t\in\R \hspace{1ex} \mbox{close to} \hspace{1ex} 0.\end{equation} \vspace{1ex} $(ii)$\, The differential at $\omega$ of $F$ is given by the formula: \begin{eqnarray}\label{eqn:differential_F}\nonumber(d_{\omega}F)(\gamma) = \frac{d}{dt}_{|t=0}\widetilde{F}_t(\omega) & = & -2\,\mbox{Re}\,\langle\langle u,\,\bar\partial^{\star}\omega\rangle\rangle_\omega + 2\,\mbox{Re}\,\int\limits_X u\wedge\rho_\omega^{2,\,0}\wedge\overline{\rho_\omega^{2,\,0}}\wedge\bar\partial\bigg(\frac{\omega^{n-3}}{(n-3)!}\bigg) \\ & = & - \langle\langle\gamma\,,\omega\rangle\rangle + 2\,\mbox{Re}\,\int\limits_X u\wedge\rho_\omega^{2,\,0}\wedge\overline{\rho_\omega^{2,\,0}}\wedge\bar\partial\bigg(\frac{\omega^{n-3}}{(n-3)!}\bigg),\end{eqnarray} \noindent for every $(1,\,1)$-form $\gamma = \partial\bar{u} + \bar\partial u$. \end{Prop} \noindent {\it Proof.} $(i)$\, If $t$ is sufficiently close to $0$, $\omega + t\gamma>0$, hence $\omega + t\gamma$ is a Hermitian-symplectic metric.
By $(ii)$ in Lemma and Definition \ref{Lem-Def:minimal_rho}, we have $\bar\partial\rho_{\omega + t\gamma}^{2,\,0} = -\partial(\omega + t\gamma)$ and $\rho_{\omega + t\gamma}^{2,\,0}$ has minimal $L^2_{\omega+t\gamma}$-norm with this property. Since $\rho_{\omega}^{2,\,0} + t\partial u$ solves the same equation as $\rho_{\omega + t\gamma}^{2,\,0}$ (cf. (\ref{eqn:rho_plus_t_u})), we conclude that $$\widetilde{F}_t(\omega) \geq \int\limits_X|\rho_{\omega + t\gamma}^{2,\,0}|^2_{\omega + t\gamma}\,\frac{(\omega + t\gamma)^n}{n!} = F(\omega + t\gamma), \hspace{2ex} t\in\R \hspace{1ex} \mbox{close to} \hspace{1ex} 0.$$ $(ii)$\, Since $\widetilde{F}_t(\omega) - F(\omega + t\gamma)\geq 0$ for all $t\in\R$ close to $0$ and since $\widetilde{F}_0(\omega) = F(\omega)$, the smooth function $t\mapsto \widetilde{F}_t(\omega) - F(\omega + t\gamma)$ achieves a minimum at $t=0$. Hence, its derivative vanishes at $t=0$. We get: $$\frac{d}{dt}_{|t=0}\widetilde{F}_t(\omega) = \frac{d}{dt}_{|t=0} F(\omega + t\gamma) = (d_{\omega}F)(\gamma),$$ \noindent which is precisely the first identity in (\ref{eqn:differential_F}). We now prove the second identity in (\ref{eqn:differential_F}) starting from (\ref{eqn:F-tilde_def}). For all $t\in\R$ close to $0$, we get: \begin{eqnarray}\nonumber\frac{d}{dt}_{|t=0}\widetilde{F}_t(\omega) & = & \frac{d}{dt}_{|t=0}\int\limits_X(\rho_{\omega}^{2,\,0} + t\partial u)\wedge(\overline{\rho_{\omega}^{2,\,0}} + t\bar\partial\bar{u})\wedge\frac{(\omega + t\gamma)^{n-2}}{(n-2)!} \\ \nonumber & = & \int\limits_X\partial u\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-2}}{(n-2)!} + \int\limits_X\bar\partial\bar{u}\wedge\rho_{\omega}^{2,\,0}\wedge\frac{\omega^{n-2}}{(n-2)!} \\ \nonumber & & + \int\limits_X\rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-3}}{(n-3)!}\wedge(\partial\bar{u} + \bar\partial u).\end{eqnarray} \noindent Applying Stokes's theorem in each integral to remove the derivatives from $u$ and $\bar{u}$, we get: \begin{eqnarray}\nonumber\frac{d}{dt}_{|t=0}\widetilde{F}_t(\omega) & = & \int\limits_X u\wedge\partial\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-2}}{(n-2)!} +\int\limits_X u\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\partial\bigg(\frac{\omega^{n-2}}{(n-2)!}\bigg) \\ \nonumber & & + \int\limits_X\bar{u}\wedge\bar\partial\rho_{\omega}^{2,\,0}\wedge\frac{\omega^{n-2}}{(n-2)!} + \int\limits_X\bar{u}\wedge\rho_{\omega}^{2,\,0}\wedge\bar\partial\bigg(\frac{\omega^{n-2}}{(n-2)!}\bigg)\\ \nonumber & & +\int\limits_X\bar{u} \wedge\partial\rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-3}}{(n-3)!} +\int\limits_X\bar{u}\wedge\rho_{\omega}^{2,\,0}\wedge\partial\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-3}}{(n-3)!} \\ \nonumber & & + \int\limits_X\bar{u}\wedge\rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\partial\bigg(\frac{\omega^{n-3}}{(n-3)!}\bigg) + \int\limits_X u\wedge\rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\bar\partial\bigg(\frac{\omega^{n-3}}{(n-3)!}\bigg) \\ \nonumber & & +\int\limits_X u\wedge\bar\partial\rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-3}}{(n-3)!} +\int\limits_X u\wedge\rho_{\omega}^{2,\,0}\wedge\bar\partial\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-3}}{(n-3)!}.\end{eqnarray} \noindent Grouping the terms on the r.h.s. 
according to whether the integrands are divisible by $u$ or by $\bar{u}$ and using the identities $\partial\overline{\rho_\omega^{2,\,0}} = -\bar\partial\omega$ and $\bar\partial\rho_\omega^{2,\,0} = -\partial\omega$, we get: \begin{eqnarray}\nonumber\frac{d}{dt}_{|t=0}\widetilde{F}_t(\omega) & = & -\int\limits_X u\wedge\bigg[\bar\partial\omega\wedge\frac{\omega^{n-2}}{(n-2)!} + \bigg(-\overline{\rho_{\omega}^{2,\,0}}\wedge\partial\frac{\omega^{n-2}}{(n-2)!} + \partial\omega\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-3}}{(n-3)!}\bigg)\bigg] \\ \nonumber & & + \int\limits_X u\wedge\bigg[\rho_{\omega}^{2,\,0}\wedge\bar\partial\overline{\rho_{\omega}^{2,\,0}}\wedge\frac{\omega^{n-3}}{(n-3)!} + \rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\bar\partial\frac{\omega^{n-3}}{(n-3)!}\bigg] \\ \nonumber & & -\int\limits_X \bar{u}\wedge\bigg[\partial\omega\wedge\frac{\omega^{n-2}}{(n-2)!} + \bigg(-\rho_{\omega}^{2,\,0}\wedge\bar\partial\frac{\omega^{n-2}}{(n-2)!} + \bar\partial\omega\wedge\rho_{\omega}^{2,\,0}\wedge\frac{\omega^{n-3}}{(n-3)!}\bigg)\bigg] \\ \nonumber & & + \int\limits_X \bar{u}\wedge\bigg[\overline{\rho_{\omega}^{2,\,0}}\wedge\partial\rho_{\omega}^{2,\,0}\wedge\frac{\omega^{n-3}}{(n-3)!} + \rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\partial\frac{\omega^{n-3}}{(n-3)!}\bigg].\end{eqnarray} \noindent Now, the terms on the first two lines on the r.h.s. above are respectively the conjugates of the terms on the third and fourth lines, while the two inner large parentheses on lines $1$ and $3$ vanish since $\partial\omega^{n-2}/(n-2)! = \partial\omega\wedge\omega^{n-3}/(n-3)!$. On the other hand, we recall that $\partial\rho_\omega^{2,\,0}=0$, hence also $\bar\partial\overline{\rho_\omega^{2,\,0}}=0$. Thus, the two integrals containing these factors on the r.h.s. above vanish. We are reduced to \begin{eqnarray}\nonumber\frac{d}{dt}_{|t=0}\widetilde{F}_t(\omega) & = & -\int\limits_X u\wedge\bigg[\bar\partial\frac{\omega^{n-1}}{(n-1)!} - \rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\bar\partial\frac{\omega^{n-3}}{(n-3)!}\bigg] \\ & & -\int\limits_X \bar{u}\wedge\bigg[\partial\frac{\omega^{n-1}}{(n-1)!} - \rho_{\omega}^{2,\,0}\wedge\overline{\rho_{\omega}^{2,\,0}}\wedge\partial\frac{\omega^{n-3}}{(n-3)!}\bigg],\end{eqnarray} \noindent or equivalently, to \begin{eqnarray}\label{eqn:d_dt_F_tilde}\frac{d}{dt}_{|t=0}\widetilde{F}_t(\omega) = -2\,\mbox{Re}\,\int\limits_X u\wedge\bar\partial\bigg(\frac{\omega^{n-1}}{(n-1)!}\bigg) + 2\,\mbox{Re}\,\int\limits_X u\wedge\rho_\omega^{2,\,0}\wedge\overline{\rho_\omega^{2,\,0}}\wedge\bar\partial\bigg(\frac{\omega^{n-3}}{(n-3)!}\bigg).\end{eqnarray} Now, from $\star_\omega\omega = \omega^{n-1}/(n-1)!$ and $\partial^{\star} = -\star\bar\partial\star$, we get: $$\bar\partial\bigg(\frac{\omega^{n-1}}{(n-1)!}\bigg) = \bar\partial\star\omega = \star(-\star\bar\partial\star)\,\omega = \star\partial^{\star}\omega = \star\,\overline{\bar\partial^{\star}\omega},$$ \noindent hence \begin{equation}\label{eqn:u_wedge_inner-product}u\wedge \bar\partial\bigg(\frac{\omega^{n-1}}{(n-1)!}\bigg) = \langle u,\,\bar\partial^{\star}\omega\rangle_\omega \, dV_\omega.\end{equation} Thus, (\ref{eqn:d_dt_F_tilde}) and (\ref{eqn:u_wedge_inner-product}) prove the second identity in (\ref{eqn:differential_F}). The third identity in (\ref{eqn:differential_F}) is obvious. \hfill $\Box$ \begin{Cor}\label{Cor:critical-points_energy_n=3} Suppose $n=3$.
Then a Hermitian-symplectic metric $\omega$ on a compact complex manifold $X$ of dimension $3$ is a {\bf critical point} of the energy functional $F$ {\bf if and only if} $\omega$ is {\bf K\"ahler}. \end{Cor} \noindent {\it Proof.} It is obvious that every K\"ahler metric $\omega$ is a critical point for $F$: indeed, $\partial\omega=0$ implies $\rho_\omega^{2,\,0} = 0$, hence $F(\omega)=0$, so $\omega$ is a minimum point, hence a critical point, of the non-negative functional $F$. If $n=3$, $\bar\partial\omega^{n-3}=0$, so (\ref{eqn:differential_F}) reduces to $(d_\omega F)(\gamma) = -2\,\mbox{Re}\,\langle\langle u,\,\bar\partial^{\star}\omega\rangle\rangle_\omega$. Now, a metric $\omega$ is a critical point of $F$ if and only if $(d_\omega F)(\gamma) = 0$ for every $\gamma = \partial\bar{u} + \bar\partial u$. By the above discussion, this amounts to $\mbox{Re}\,\langle\langle u,\,\bar\partial^{\star}\omega\rangle\rangle_\omega = 0$ for every $(1,\,0)$-form $u$. Thus, if $\omega$ is a critical point of $F$, by taking $u=\bar\partial^{\star}\omega$ we get $\bar\partial^{\star}\omega = 0$. This is equivalent to $\omega$ being balanced. However, $\omega$ is already SKT since it is Hermitian-symplectic, so $\omega$ must be K\"ahler by Proposition \ref{Prop:SKT+bal}. \hfill $\Box$ \vspace{3ex} \begin{Cor}\label{Cor:energy_M-A_mass} Let $X$ be a compact complex manifold of dimension $n=3$ admitting Hermitian-symplectic metrics. Then, for any two Aeppli-cohomologous Hermitian-symplectic metrics $\omega$ and $\omega_\eta$: \begin{equation}\label{eqn:A_cohomologous_H-S}\omega_\eta = \omega + \partial\bar\eta + \bar\partial\eta >0, \hspace{3ex} \mbox{with} \hspace{1ex} \eta\in C^{\infty}_{1,\,0}(X,\,\C),\end{equation} \noindent the respective $(2,\,0)$-torsion forms $\rho_\omega^{2,\,0}$ and $\rho_\eta^{2,\,0}:=\rho_{\omega_\eta}^{2,\,0}$ satisfy the identity: \begin{equation}\label{eqn:torsion_A_cohomologous_H-S1}||\rho_\eta^{2,\,0}||^2_{\omega_\eta} + \int\limits_X\frac{\omega_\eta^3}{3!} = ||\rho_\omega^{2,\,0}||^2_\omega + \int\limits_X\frac{\omega^3}{3!}\end{equation} \noindent and are related by \begin{equation}\label{eqn:torsion_A_cohomologous_H-S2}\rho_\eta^{2,\,0} = \rho_\omega^{2,\,0} + \partial\eta.\end{equation} In particular, if $\partial\eta=0$ (a condition that is equivalent to $\omega_\eta-\omega$ being $d$-exact), we are reduced to $\rho_\eta^{2,\,0} = \rho_\omega^{2,\,0}$ and \begin{equation}\label{eqn:torsion_A_cohomologous_H-S_bis}||\rho_\omega^{2,\,0}||^2_{\omega_\eta} = ||\rho_\omega^{2,\,0}||^2_\omega + \int\limits_X \rho_\omega^{2,\,0}\wedge\overline{\rho_\omega^{2,\,0}}\wedge(\omega_\eta - \omega).\end{equation} \end{Cor} \noindent{\it Proof.} In arbitrary dimension $n$, we compute the differential of the map $${\cal S}_{\{\omega_0\}}\ni\omega\mapsto\int\limits_X\frac{\omega^n}{n!}=:\mbox{Vol}_\omega(X)$$ \noindent when the metric $\omega$ varies in its Aeppli cohomology class $\{\omega_0\}_A$.
For any real, Aeppli null-cohomologous $(1,\,1)$-form $\gamma = \partial\bar{u} + \bar\partial u$ (with $u\in C^{\infty}_{1,\,0}(X,\,\C)$), we have \begin{eqnarray}\nonumber\frac{d}{dt}_{|t=0}\int\limits_X\frac{(\omega + t\gamma)^n}{n!} & = & \frac{1}{(n-1)!}\,\int\limits_X\omega^{n-1}\wedge\gamma = 2\,\mbox{Re}\,\int\limits_X\bar\partial u \wedge\frac{\omega^{n-1}}{(n-1)!} = 2\,\mbox{Re}\,\int\limits_X u \wedge\bar\partial\star\omega \\ \nonumber & = & 2\,\mbox{Re}\,\int\limits_X u \wedge\star\bigg(-\star\bar\partial\star\omega\bigg) = 2\,\mbox{Re}\,\int\limits_X u \wedge\star\partial^{\star}\omega = 2\,\mbox{Re}\,\int\limits_X u \wedge\star\overline{\bar\partial^{\star}\omega} \\ \nonumber & = & 2\,\mbox{Re}\,\langle\langle u,\, \bar\partial^{\star}\omega\rangle\rangle.\end{eqnarray} \noindent Together with (\ref{eqn:differential_F}) (recall that $n=3$ here), this identity shows that the differential at $\omega$ of the map $${\cal S}_{\{\omega_0\}}\ni\omega\mapsto ||\rho_\omega^{2,\,0}||^2_\omega + \int\limits_X\frac{\omega^3}{3!}$$ \noindent vanishes identically. Therefore, this map is constant on the Hermitian-symplectic metrics lying in the same Aeppli cohomology class $\{\omega_0\}_A$. This proves (\ref{eqn:torsion_A_cohomologous_H-S1}). To prove (\ref{eqn:torsion_A_cohomologous_H-S2}), recall that definition (\ref{eqn:H-S_condition_bis}) of the $(2,\,0)$-torsion forms implies the following relations: \begin{equation}\label{eqn:torsion-forms_comparison_def} (i)\,\bar\partial(\rho_\eta^{2,\,0} - \partial\eta) = -\partial\omega \hspace{2ex} \mbox{and} \hspace{2ex} (ii)\, ||\rho_\eta^{2,\,0} - \partial\eta||_\omega \geq ||\rho_\omega^{2,\,0}||_\omega,\end{equation} \noindent where $(ii)$ follows from $(i)$ and from the $L^2_\omega$-norm minimality of $\rho_\omega^{2,\,0}$ among the $(2,\,0)$-forms $\rho$ solving the equation $\bar\partial\rho = -\partial\omega$. Now, (\ref{eqn:torsion_A_cohomologous_H-S1}) gives the first of the following identities: \begin{eqnarray}\label{eqn:torsion_L2_M-A_computation1}||\rho_\omega^{2,\,0}||^2_\omega + \int\limits_X\frac{\omega^3}{3!} & = & ||\rho_\eta^{2,\,0}||^2_{\omega_\eta} + \int\limits_X\frac{\omega_\eta^3}{3!} = \int\limits_X \rho_\eta^{2,\,0}\wedge\overline{\rho_\eta^{2,\,0}}\wedge\omega_\eta + \int\limits_X\frac{\omega_\eta^3}{3!}.\end{eqnarray} \noindent On the other hand, we have: \begin{eqnarray}\label{eqn:torsion_L2_M-A_computation2}\nonumber(\omega_\eta + \rho_\eta^{2,\,0} + \overline{\rho_\eta^{2,\,0}})^3 & = & \omega_\eta^3 + 3\,\omega_\eta^2\wedge(\rho_\eta^{2,\,0} + \overline{\rho_\eta^{2,\,0}}) + 3\,\omega_\eta\wedge(\rho_\eta^{2,\,0} + \overline{\rho_\eta^{2,\,0}})^2 + (\rho_\eta^{2,\,0} + \overline{\rho_\eta^{2,\,0}})^3 \\ & = & \omega_\eta^3 + 6\,\omega_\eta\wedge\rho_\eta^{2,\,0}\wedge\overline{\rho_\eta^{2,\,0}},\end{eqnarray} \noindent where the last identity follows from the cancellation of several terms for bidegree reasons.
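For the reader's convenience, we spell out these bidegree reasons on our $3$-dimensional $X$: the forms $\omega_\eta^2\wedge\rho_\eta^{2,\,0}$ and $\omega_\eta^2\wedge\overline{\rho_\eta^{2,\,0}}$ are of respective bidegrees $(4,\,2)$ and $(2,\,4)$; the forms $\omega_\eta\wedge(\rho_\eta^{2,\,0})^2$ and $\omega_\eta\wedge(\overline{\rho_\eta^{2,\,0}})^2$ are of respective bidegrees $(5,\,1)$ and $(1,\,5)$; and every term in the expansion of $(\rho_\eta^{2,\,0} + \overline{\rho_\eta^{2,\,0}})^3$ contains $(\rho_\eta^{2,\,0})^2$ or $(\overline{\rho_\eta^{2,\,0}})^2$ as a factor, hence is of bidegree $(p,\,q)$ with $p\geq 4$ or $q\geq 4$. All of these forms vanish identically since $\mbox{dim}_\C X=3$.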
Putting (\ref{eqn:torsion_L2_M-A_computation1}) and (\ref{eqn:torsion_L2_M-A_computation2}) together, we get: \begin{eqnarray}\label{eqn:torsion_L2_M-A_computation3}\nonumber ||\rho_\omega^{2,\,0}||^2_\omega + \frac{1}{3!}\,\int\limits_X\omega^3 & = & \frac{1}{3!}\,\int\limits_X(\omega_\eta + \rho_\eta^{2,\,0} + \overline{\rho_\eta^{2,\,0}})^3 = \frac{1}{3!}\, \int\limits_X[\omega + (\rho_\eta^{2,\,0} - \partial\eta) + (\overline{\rho_\eta^{2,\,0}} - \bar\partial\bar\eta) + d(\eta + \bar\eta)]^3 \\ \nonumber & \stackrel{(a)}{=} & \frac{1}{3!}\, \int\limits_X[\omega + (\rho_\eta^{2,\,0} - \partial\eta) + (\overline{\rho_\eta^{2,\,0}} - \bar\partial\bar\eta)]^3 \stackrel{(b)}{=} \frac{1}{3!}\, \int\limits_X\omega^3 + \int\limits_X(\rho_\eta^{2,\,0} - \partial\eta)\wedge(\overline{\rho_\eta^{2,\,0}} - \bar\partial\bar\eta)\wedge\omega \\ & = & ||\rho_\eta^{2,\,0} - \partial\eta||^2_\omega + \frac{1}{3!}\, \int\limits_X\omega^3 \stackrel{(c)}{\geq} ||\rho_\omega^{2,\,0}||^2_\omega + \frac{1}{3!}\, \int\limits_X\omega^3.\end{eqnarray} $\cdot$ Identity $(a)$ followed from Stokes's theorem and the $d$-closedness of the form $\omega + (\rho_\eta^{2,\,0} - \partial\eta) + (\overline{\rho_\eta^{2,\,0}} - \bar\partial\bar\eta)$ that is seen through the following very simple computation: \begin{eqnarray}\nonumber d[\omega + (\rho_\eta^{2,\,0} - \partial\eta) + (\overline{\rho_\eta^{2,\,0}} - \bar\partial\bar\eta)] & = & \partial\omega + \bar\partial\omega + \bar\partial\rho_\eta^{2,\,0} - \bar\partial\partial\eta + \partial\overline{\rho_\eta^{2,\,0}} - \partial\bar\partial\bar\eta \\ \nonumber & = & \partial\omega + \bar\partial\omega -\partial(\omega + \partial\bar\eta + \bar\partial\eta) - \bar\partial\partial\eta - \bar\partial(\omega + \bar\partial\eta + \partial\bar\eta) - \partial\bar\partial\bar\eta \\ \nonumber & = & - (\partial\bar\partial\eta + \bar\partial\partial\eta) - (\bar\partial\partial\bar\eta + \partial\bar\partial\bar\eta) = 0,\end{eqnarray} \noindent where the second identity followed from $\bar\partial\rho_\eta^{2,\,0} = -\partial\omega_\eta = -\partial(\omega + \bar\partial\eta + \partial\bar\eta)$ and from the conjugated expression. $\cdot$ Identity $(b)$ in (\ref{eqn:torsion_L2_M-A_computation3}) followed from the analogue of (\ref{eqn:torsion_L2_M-A_computation2}) in this context, while inequality $(c)$ in (\ref{eqn:torsion_L2_M-A_computation3}) followed from part $(ii)$ of (\ref{eqn:torsion-forms_comparison_def}). We see that the first and the last terms in (\ref{eqn:torsion_L2_M-A_computation3}) are equal. This forces $(c)$ to be an equality, hence part $(ii)$ of (\ref{eqn:torsion-forms_comparison_def}) must be an equality. This means that $\rho_\eta^{2,\,0}-\partial\eta$ and $\rho_\omega^{2,\,0}$ are both solutions of the equation $\bar\partial\rho^{2,\,0} = -\partial\omega$ (see part $(i)$ of (\ref{eqn:torsion-forms_comparison_def}) and part $(ii)$ of (\ref{eqn:H-S_condition_bis})) and have {\it equal} $L^2_\omega$-norms. Since $\rho_\omega^{2,\,0}$ is the minimal $L^2_\omega$-norm solution, we infer that $\rho_\eta^{2,\,0}-\partial\eta = \rho_\omega^{2,\,0}$ by the uniqueness of the minimal $L^2_\omega$-norm solution. This proves (\ref{eqn:torsion_A_cohomologous_H-S2}).
Finally, we write $\omega_\eta = \omega + (\omega_\eta-\omega)$ and $$||\rho_\omega^{2,\,0}||^2_{\omega_\eta} = \int\limits_X\rho_\omega^{2,\,0}\wedge\overline{\rho_\omega^{2,\,0}}\wedge\omega_\eta = ||\rho_\omega^{2,\,0}||^2_\omega + \int\limits_X\rho_\omega^{2,\,0}\wedge\overline{\rho_\omega^{2,\,0}}\wedge(\omega_\eta-\omega).$$ \noindent This is (\ref{eqn:torsion_A_cohomologous_H-S_bis}). \hfill $\Box$ \vspace{3ex} The main takeaway from Corollary \ref{Cor:energy_M-A_mass} is that the sum $F(\omega) + \mbox{Vol}_\omega(X)$ (where $\mbox{Vol}_\omega(X):=\int_X\omega^3/3!$) remains {\bf constant} when $\omega$ ranges over the (necessarily Hermitian-symplectic) metrics in the Aeppli cohomology class of a fixed Hermitian-symplectic metric $\omega_0$. This invariant attached to any Aeppli class of Hermitian-symplectic metrics generalises the classical volume of a K\"ahler class and constitutes one of our main findings in this work. \begin{Def}\label{Def:A-invariant} Let $X$ be a $3$-dimensional compact complex manifold supposed to carry Hermitian-symplectic metrics. For any such metric $\omega$ on $X$, the constant \begin{equation}\label{eqn:A-invariant}A=A_{\{\omega\}_A}:= F(\omega) + \mbox{Vol}_\omega(X)>0\end{equation} depending only on $\{\omega\}_A$ is called the {\bf generalised volume} of the Hermitian-symplectic Aeppli class $\{\omega\}_A$. \end{Def} \subsection{Case of SKT metrics on $\partial\bar\partial$-manifolds}\label{subsection:SKT_ddbar} In this subsection, we discuss an analogous functional in the special case of a $\partial\bar\partial$-manifold admitting SKT metrics. Note that any Hermitian-symplectic metric is SKT on any manifold $X$. Moreover, for every Hermitian-symplectic metric $\omega$ on $X$, the set of all Hermitian-symplectic metrics in the Aeppli class $\{\omega\}_A$ coincides with the set of all SKT metrics in the Aeppli class $\{\omega\}_A$. Conversely, SKT manifolds need not be Hermitian-symplectic, but on $\partial\bar\partial$-manifolds, the notions of H-S and SKT metrics coincide. \begin{Lem}\label{Lem:torsion_2-0_form_SKT} For every SKT metric $\omega$ on a compact $\partial\bar\partial$-manifold $X$, there exists a unique smooth $(2,\,0)$-form $\Gamma_\omega$ on $X$ such that \begin{equation}\label{eqn:torsion_2-0_form}(i)\,\,\bar\partial\Gamma_\omega = -\partial\omega \hspace{2ex} \mbox{and} \hspace{2ex} (ii)\,\,\Gamma_\omega\in\mbox{Im}\,\bar\partial^{\star}_\omega,\end{equation} \noindent where the subscript $\omega$ indicates that the formal adjoint is computed w.r.t. the $L^2$ inner product defined by $\omega$. The form $\Gamma_\omega$ will be called the {\bf $(2,\,0)$-torsion form} of the SKT metric $\omega$. It is given by the Neumann-type formula: \begin{equation}\label{eqn:torsion_2-0_form_formula}\Gamma_\omega = -\Delta^{''-1}\bar\partial^{\star}(\partial\omega),\end{equation} \noindent where $\Delta^{''-1}$ is the Green operator of the Laplacian $\Delta'' = \bar\partial\bar\partial^{\star} + \bar\partial^{\star}\bar\partial$ induced by the metric $\omega$. \end{Lem} \noindent {\it Proof.} The $(2,\,1)$-form $\partial\omega$ is $d$-closed (thanks to the SKT assumption on $\omega$) and $\partial$-exact, hence by the $\partial\bar\partial$-assumption on $X$ it is also $\bar\partial$-exact. This means that the equation $\bar\partial\Gamma = -\partial\omega$ is solvable.
Its solutions $\Gamma$ are unique up to the addition of any element in $\ker\bar\partial$, so the minimal $L^2_\omega$-norm solution is the unique solution lying in the orthogonal complement of $\ker\bar\partial$, which is $\mbox{Im}\,\bar\partial^{\star}_\omega$. The Neumann-type formula is well known and easily proved: $\bar\partial(-\Delta^{''-1}\bar\partial^{\star}(\partial\omega)) = -\partial\omega$ (immediate verification) and $\Delta^{''-1}\bar\partial^{\star}(\partial\omega) = \bar\partial^{\star}\Delta^{''-1}(\partial\omega)\in\mbox{Im}\,\bar\partial^{\star}$. \hfill $\Box$ \vspace{2ex} The following is a very simple observation. \begin{Lem}\label{Lem:Gamma_del-closed} The $(2,\,0)$-torsion form $\Gamma_\omega$ of any SKT metric $\omega$ on a compact $\partial\bar\partial$-manifold $X$ has the property: \begin{equation}\label{eqn:Gamma_del-closed}\partial\Gamma_\omega = 0.\end{equation} \end{Lem} \noindent {\it Proof.} The $(3,\,0)$-form $\partial\Gamma_\omega$ is $\partial$-exact (obviously) and $d$-closed (since $\bar\partial(\partial\Gamma_\omega) = -\partial(\bar\partial\Gamma_\omega) = \partial^2\omega = 0$), hence it must be $\partial\bar\partial$-exact thanks to the $\partial\bar\partial$ assumption on $X$. This means that there exists a $(2,\,-1)$-form $\zeta$ such that $\partial\bar\partial\zeta = \partial\Gamma_\omega$. Since $\zeta$ must vanish for bidegree reasons, we get $\partial\Gamma_\omega = 0$. \hfill $\Box$ \vspace{2ex} We now define a new energy functional as the squared $L^2$-norm of the $(2,\,0)$-torsion form $\Gamma_\omega$. \begin{Def}\label{Def:F_energy-functional} Let $X$ be a compact SKT $\partial\bar\partial$-manifold with $\mbox{dim}_{\C}X=n$. For every Aeppli cohomology class $\{\omega_0\}_A$ representable by an SKT metric, we define the following {\bf energy functional}: \begin{equation}\label{eqn:F_energy-functional}F : {\cal S}_{\{\omega_0\}} \to [0,\,+\infty), \hspace{3ex} F(\omega) = \int\limits_X|\Gamma_\omega|^2_\omega\,dV_\omega = ||\Gamma_\omega||^2_\omega,\end{equation} \noindent where $\Gamma_{\omega}$ is the $(2,\,0)$-torsion form of the SKT metric $\omega\in{\cal S}_{\{\omega_0\}}$ defined in Lemma \ref{Lem:torsion_2-0_form_SKT}. \end{Def} The remaining arguments are identical to those given in $\S.$\ref{subsection:H-S_n=3_critical-points} if we replace $\rho_\omega^{2,\,0}$ with $\Gamma_\omega$. Recall that by (\ref{eqn:Gamma_del-closed}) we have $\partial\Gamma_\omega=0$ (cf. $(i)$ of (\ref{eqn:H-S_condition_bis})). The first variation of $F$ can be computed as in $\S.$\ref{subsection:H-S_n=3_critical-points}. We get \begin{Prop}\label{Prop:F-tilde_properties_SKT} The differential of $F$ at any SKT metric $\omega$ is given by the formula: \begin{eqnarray}\label{eqn:differential_F_H-S}\nonumber(d_{\omega}F)(\gamma) & = & -2\,\mbox{Re}\,\langle\langle u,\,\bar\partial^{\star}\omega\rangle\rangle_\omega + 2\,\mbox{Re}\,\int\limits_X u\wedge\Gamma_\omega\wedge\overline{\Gamma}_\omega\wedge\bar\partial\bigg(\frac{\omega^{n-3}}{(n-3)!}\bigg) \\ & = & - \langle\langle\gamma\,,\omega\rangle\rangle + 2\,\mbox{Re}\,\int\limits_X u\wedge\Gamma_\omega\wedge\overline{\Gamma}_\omega\wedge\bar\partial\bigg(\frac{\omega^{n-3}}{(n-3)!}\bigg) \end{eqnarray} \noindent for every $(1,\,1)$-form $\gamma = \partial\bar{u} + \bar\partial u$. In particular, {\bf if $n=3$}, an {SKT} metric $\omega$ on $X$ is a {\bf critical point} of the energy functional $F$ {\bf if and only if} $\omega$ is {\bf K\"ahler}.
\end{Prop} \subsection{Variation of the $(2,\,0)$-torsion form for $\partial\bar\partial$-cohomologous metrics}\label{subsection:variation_torsion} We first show that the $(2,\,0)$-torsion form of a Hermitian-symplectic metric does not change when the metric changes only by an element in $\mbox{Im}\,\partial\bar\partial$. The next statement can be compared with Corollary \ref{Cor:energy_M-A_mass}: it supposes more and achieves more. \begin{Prop}\label{Prop:torsion_variation_BC} Let $X$ be a compact complex manifold with $\mbox{dim}_\C X=3$. Suppose that $\omega>0$ and $\widetilde\omega = \omega + i\partial\bar\partial\varphi>0$ are {\bf SKT} metrics on $X$. $(i)$\, For every form $\rho^{2,\,0}\in C^{\infty}_{2,\,0}(X,\,\C)$ such that $\partial\rho^{2,\,0}=0$ and $\bar\partial\rho^{2,\,0} = -\partial\omega$, the $L^2$-norms of $\rho^{2,\,0}$ w.r.t. $\widetilde\omega$ and $\omega$ are related in the following way: \begin{equation}\label{eqn:torsion_variation_BC_L2}||\rho^{2,\,0}||^2_{\widetilde\omega} = ||\rho^{2,\,0}||^2_\omega - \frac{1}{2}\,\int\limits_X(\widetilde\omega - \omega)\wedge\omega^2.\end{equation} \noindent This relation is equivalent to \begin{equation}\label{eqn:torsion_variation_BC_L2_bis}||\rho^{2,\,0}||^2_{\widetilde\omega} + \int\limits_X\frac{\widetilde\omega^3}{3!} = ||\rho^{2,\,0}||^2_\omega + \int\limits_X\frac{\omega^3}{3!}.\end{equation} $(ii)$\, If $\omega>0$ and $\widetilde\omega = \omega + i\partial\bar\partial\varphi>0$ are {\bf Hermitian-symplectic} metrics, their $(2,\,0)$-torsion forms coincide, i.e. \begin{equation}\label{eqn:torsion_variation_BC}\rho^{2,\,0}_{\widetilde\omega} = \rho^{2,\,0}_\omega.\end{equation} \end{Prop} \noindent {\it Proof.} $(i)$\, From the assumptions, we get the following identities: \begin{eqnarray}\label{eqn:L2_norms_torsions_computations}\nonumber ||\rho^{2,\,0}||^2_{\widetilde\omega} & = & \int\limits_X\rho^{2,\,0}\wedge\overline{\rho^{2,\,0}}\wedge\widetilde\omega = \int\limits_X\rho^{2,\,0}\wedge\overline{\rho^{2,\,0}}\wedge\omega + \int\limits_X\rho^{2,\,0}\wedge\overline{\rho^{2,\,0}}\wedge i\partial\bar\partial\varphi \\ \nonumber & \stackrel{(a)}{=} & ||\rho^{2,\,0}||^2_\omega -i\,\int\limits_X\rho^{2,\,0}\wedge\partial\overline{\rho^{2,\,0}}\wedge\bar\partial\varphi \\ & \stackrel{(b)}{=} & ||\rho^{2,\,0}||^2_\omega -i\,\int\limits_X\varphi\, \bar\partial\rho^{2,\,0}\wedge\partial\overline{\rho^{2,\,0}} \stackrel{(c)}{=} ||\rho^{2,\,0}||^2_\omega -i\,\int\limits_X\varphi\,\partial\omega\wedge\bar\partial\omega,\end{eqnarray} \noindent where $(a)$ and $(b)$ follow from Stokes combined with the identity $\partial\rho^{2,\,0}=0$ and its conjugate $\bar\partial\overline{\rho^{2,\,0}}=0$, while $(c)$ follows from the identity $\bar\partial\rho^{2,\,0} = -\partial\omega$ and its conjugate $\partial\overline{\rho^{2,\,0}} = -\bar\partial\omega$. 
Now, the SKT property of $\omega$ implies that $\partial\omega\wedge\bar\partial\omega = \partial(\omega\wedge\bar\partial\omega) = (1/2)\,\partial\bar\partial\omega^2$, so two further applications of Stokes yield the second identity below: \begin{eqnarray}\label{eqn:L2_norms_torsions_computations_1}i\,\int\limits_X\varphi\,\partial\omega\wedge\bar\partial\omega = \frac{i}{2}\,\int\limits_X\varphi\,\partial\bar\partial\omega^2 = \frac{1}{2}\,\int\limits_X i\partial\bar\partial\varphi\wedge\omega^2 = \frac{1}{2}\,\int\limits_X (\widetilde\omega - \omega)\wedge\omega^2.\end{eqnarray} Taken together, (\ref{eqn:L2_norms_torsions_computations}) and (\ref{eqn:L2_norms_torsions_computations_1}) prove (\ref{eqn:torsion_variation_BC_L2}). To prove the equivalence of (\ref{eqn:torsion_variation_BC_L2_bis}) and (\ref{eqn:torsion_variation_BC_L2}), we have to show that $$(1/6)\,\int_X(\widetilde\omega^3 - \omega^3) = (1/2)\,\int_X(\widetilde\omega - \omega)\wedge\omega^2.$$ Now, since $\widetilde\omega^2 = \omega^2 + 2\,i\partial\bar\partial\varphi\wedge\omega + (i\partial\bar\partial\varphi)^2$, we get: \begin{eqnarray}\nonumber \frac{1}{6}\,\int\limits_X(\widetilde\omega^3 - \omega^3) & = & \frac{1}{6}\,\int\limits_X(\widetilde\omega - \omega)\wedge(\widetilde\omega^2 + \widetilde\omega\wedge\omega + \omega^2) \\ \nonumber & = & \frac{1}{6}\,\int\limits_X(\widetilde\omega - \omega)\wedge\omega^2 + \frac{1}{3}\,\int\limits_X(\widetilde\omega - \omega)\wedge i\partial\bar\partial\varphi\wedge\omega + \frac{1}{6}\,\int\limits_X(\widetilde\omega - \omega)\wedge(i\partial\bar\partial\varphi)^2 \\ \nonumber & + & \frac{1}{6}\,\int\limits_X(\widetilde\omega - \omega)\wedge\omega^2 + \frac{1}{6}\,\int\limits_X(\widetilde\omega - \omega)\wedge i\partial\bar\partial\varphi \wedge\omega + \frac{1}{6}\,\int\limits_X(\widetilde\omega - \omega)\wedge\omega^2 \\ \nonumber & = & 3\cdot \frac{1}{6}\,\int\limits_X(\widetilde\omega - \omega)\wedge\omega^2,\end{eqnarray} \noindent since all the other terms vanish by Stokes, the identities $\partial(\widetilde\omega - \omega) = 0$ and $\bar\partial(\widetilde\omega - \omega) = 0$ and the SKT assumption on $\omega$. $(ii)$\, The stronger H-S assumption on $\widetilde\omega$ and $\omega$ is only made to ensure the existence of the $(2,\,0)$-torsion forms $\rho^{2,\,0}_{\widetilde\omega}$ and $\rho^{2,\,0}_\omega$. The assumption $\widetilde\omega = \omega +i\partial\bar\partial\varphi$ implies $\partial\widetilde\omega = \partial\omega$, so $\rho^{2,\,0}_{\widetilde\omega}$ and $\rho^{2,\,0}_\omega$ are the minimal $L^2_{\widetilde\omega}$-norm solution, resp. the minimal $L^2_\omega$-norm solution, of the same equation $\bar\partial\rho^{2,\,0} = -\partial\omega$. However, (\ref{eqn:torsion_variation_BC_L2}) shows that, when $\rho^{2,\,0}$ ranges over the set of smooth $(2,\,0)$-forms satisfying the conditions $\partial\rho^{2,\,0}=0$ and $\bar\partial\rho^{2,\,0} = -\partial\omega$ (conditions satisfied by both $\rho^{2,\,0}_{\widetilde\omega}$ and $\rho^{2,\,0}_\omega$), $||\rho^{2,\,0}||_{\widetilde\omega}$ is minimal if and only if $||\rho^{2,\,0}||_\omega$ is minimal, since the discrepancy term $$-\frac{1}{2}\,\int\limits_X(\widetilde\omega - \omega)\wedge\omega^2$$ \noindent is independent of $\rho^{2,\,0}$ (it depends only on the given metrics $\widetilde\omega$ and $\omega$). This means that the same $\rho^{2,\,0}$ achieves the minimal $L^2$ norm w.r.t. either of the metrics $\widetilde\omega$ and $\omega$.
By uniqueness of the minimal $L^2$-norm solution of the $\bar\partial$ equation, we get $\rho^{2,\,0}_{\widetilde\omega} = \rho^{2,\,0}_\omega$. \hfill $\Box$ \subsection{Volume form and Monge-Amp\`ere-type equation associated with an H-S metric}\label{subsection:vol-M-A_eq_H-S} We now digress briefly to point out another possible future use of the new invariant given by the generalised volume. In fact, a new volume form that seems better suited to featuring in the right-hand side of complex Monge-Amp\`ere equations can be associated with every Hermitian-symplectic metric on a $3$-dimensional compact complex manifold. \begin{Def}\label{Def:Introd_tilde-volume-form} If $\omega$ is a Hermitian-symplectic metric on a compact complex manifold $X$ with $\mbox{dim}_\C X=3$ and $\rho_\omega^{2,\,0}$ is the $(2,\,0)$-torsion form of $\omega$, we define the following volume form on $X$: $$d\widetilde{V}_\omega := (1 + |\rho_\omega^{2,\,0}|^2_\omega)\,dV_\omega.$$ \end{Def} The main interest in this volume form stems from the fact that its total mass is independent of the choice of metric in a given Hermitian-symplectic Aeppli class, as follows from Corollary \ref{Cor:energy_M-A_mass}: \begin{eqnarray*}\int\limits_X d\widetilde{V}_{\omega_1} = \int\limits_X d\widetilde{V}_{\omega_2} = A, \hspace{3ex} \mbox{for all metrics}\hspace{1ex} \omega_1,\omega_2\in\{\omega\}_A,\end{eqnarray*} where $A = A_{\{\omega\}_A}>0$ is the {\it generalised volume} of the H-S Aeppli class $\{\omega\}_A$. Now, if $\omega$ is a Hermitian-symplectic metric on a manifold $X$ as above, it seems natural to consider the Monge-Amp\`ere equation $$\frac{(\omega + i\partial\bar\partial\varphi)^3}{3!} = b\,d\widetilde{V}_{\omega},$$ subject to the condition $\omega + i\partial\bar\partial\varphi>0$, where $b>0$ is a given constant. By [TW10, Corollary 1], there exists a unique $b$ such that this equation is solvable. Moreover, for that $b$, the solution $\omega + i\partial\bar\partial\varphi>0$ is unique. Note that $$b= \frac{\mbox{Vol}_{\omega + i\partial\bar\partial\varphi}(X)}{A_{\{\omega\}_A}}\in(0,\,1]$$ since $A_{\{\omega\}_A} = F(\omega + i\partial\bar\partial\varphi) + \mbox{Vol}_{\omega + i\partial\bar\partial\varphi}(X)\geq\mbox{Vol}_{\omega + i\partial\bar\partial\varphi}(X)$. We hope that this can shed some light on the mysterious constant $b$ in this context. \section{Obstruction and estimates}\label{section:obs-est} In this section, we point out an obstruction to the existence of a K\"ahler metric in the Aeppli cohomology class of a given Hermitian-symplectic metric. We start by observing that a class in the vector space $E_2^{0,\,2} = E_2^{0,\,2}(X)$ featuring on the second page of the Fr\"olicher spectral sequence of $X$ can be uniquely associated with every Hermitian-symplectic metric on $X$ and, in dimension $3$, even with every Aeppli cohomology class of such metrics. (Cf. [PSU20b, Proposition 6.2].) \begin{Lem-Def}\label{Def:E_2_H-S} Suppose that $\omega$ is a {\bf Hermitian-symplectic metric} on a compact complex $n$-dimensional manifold $X$. (i)\, The $(0,\,2)$-torsion form $\rho_\omega^{0,\,2}\in C^\infty_{0,\,2}(X,\,\C)$ of $\omega$ represents an $E_2$-cohomology class $\{\rho_\omega^{0,\,2}\}_{E_2}\in E_2^{0,\,2}(X)$. Moreover, $\{\rho_\omega^{0,\,2}\}_{E_2}\in\ker(d_2:E_2^{0,\,2}(X)\to E_2^{2,\,1}(X))$. \vspace{1ex} (ii)\, Suppose that $n=3$. Then, the class $\{\rho_\omega^{0,\,2}\}_{E_2}\in E_2^{0,\,2}(X)$ is constant when the Hermitian-symplectic metric $\omega$ varies in a fixed Aeppli cohomology class.
The class $\{\rho_\omega^{0,\,2}\}_{E_2}\in E_2^{0,\,2}(X)$ will be called the {\bf $E_2$-torsion class} of the Hermitian-symplectic Aeppli class $\{\omega\}_A$. \end{Lem-Def} \noindent {\it Proof.} (i)\, By construction, the $(0,\,2)$-torsion form $\rho_\omega^{0,\,2}$ has the properties: $$\bar\partial\rho_\omega^{0,\,2} = 0 \hspace{3ex} \mbox{and} \hspace{3ex} \partial\rho_\omega^{0,\,2}\in\mbox{Im}\,\bar\partial \hspace{2ex} (\mbox{since} \hspace{1ex} \partial\rho_\omega^{0,\,2} = -\bar\partial\omega),$$ \noindent which translate precisely to $\rho_\omega^{0,\,2}$ being $E_2$-closed (see terminology in [Pop19, Proposition 3.1]), namely to $\rho_\omega^{0,\,2}$ representing an $E_2$-cohomology class. Moreover, the class $d_2(\{\rho_\omega^{0,\,2}\}_{E_2})\in E_2^{2,\,1}(X)$ is represented by $-\partial\omega$ since $-\omega$ is such that $\bar\partial(-\omega) = \partial\rho_\omega^{0,\,2}$. (See [CFGU97].) However, $\partial\omega$ is $\bar\partial$-exact, so, in particular, $\{\partial\omega\}_{E_2}=0$. We get $$d_2(\{\rho_\omega^{0,\,2}\}_{E_2}) = -\{\partial\omega\}_{E_2}=0\in E_2^{2,\,1}(X).$$ \vspace{1ex} (ii)\, When $n=3$, Corollary \ref{Cor:energy_M-A_mass} tells us that the respective $(2,\,0)$-torsion forms $\rho_\omega^{2,\,0}$ and $\rho_\eta^{2,\,0}:=\rho_{\omega_\eta}^{2,\,0}$ of any two Aeppli-cohomologous Hermitian-symplectic metrics $\omega$ and $\omega_\eta = \omega + \partial\bar\eta + \bar\partial\eta >0$, with $\eta\in C^{\infty}_{1,\,0}(X,\,\C)$, are related by $\rho_\eta^{2,\,0} = \rho_\omega^{2,\,0} + \partial\eta$. Hence, for their conjugates, we get: $$\rho_\eta^{0,\,2} = \rho_\omega^{0,\,2} + \bar\partial\bar\eta, \hspace{3ex} \mbox{so} \hspace{3ex} \{\rho_\eta^{0,\,2}\}_{\bar\partial} = \{\rho_\omega^{0,\,2}\}_{\bar\partial}, \hspace{3ex} \mbox{hence also} \hspace{3ex} \{\rho_\eta^{0,\,2}\}_{E_2} = \{\rho_\omega^{0,\,2}\}_{E_2}\in E_2^{0,\,2}(X).$$ \hfill $\Box$ \vspace{2ex} Since $\omega$ is K\"ahler if and only if $\rho_\omega^{0,\,2}=0$, we get the following necessary condition for a given Hermitian-symplectic Aeppli class $\{\omega\}_A$ to contain a K\"ahler metric. \begin{Cor}\label{Cor:necessary-cond_K} Suppose that $n=3$. If a given Hermitian-symplectic Aeppli class $\{\omega\}_A$ contains a K\"ahler metric, then its $E_2$-torsion class $\{\rho_\omega^{0,\,2}\}_{E_2}\in E_2^{0,\,2}(X)$ vanishes. Moreover, the condition $\{\rho_\omega^{0,\,2}\}_{E_2}=0$ in $E_2^{0,\,2}(X)$ is equivalent to $\rho_\omega^{0,\,2}\in\mbox{Im}\,\bar\partial$ for some (hence every) metric $\omega$ lying in $\{\omega\}_A$. \end{Cor} \noindent {\it Proof.} Only the latter statement still needs a proof. The $E_2$-exactness condition on $\rho_\omega^{0,\,2}$ is equivalent to the existence of a $(0,\,1)$-form $\xi$ and of a $(-1,\,2)$-form $\zeta$ such that $\rho_\omega^{0,\,2} = \partial\zeta + \bar\partial\xi$ and $\bar\partial\zeta=0$. (See [Pop19, Proposition 3.1].) However, for bidegree reasons, every $(-1,\,2)$-form $\zeta$ is trivially the zero form, so the $E_2$-exactness condition on $(0,\,2)$-forms is equivalent to the $\bar\partial$-exactness condition. \hfill $\Box$ \vspace{2ex} Therefore, we are led to restricting attention to Hermitian-symplectic Aeppli classes of vanishing torsion class on $3$-dimensional manifolds. 
In this case, $\rho_\omega^{0,\,2}$ is $\bar\partial$-exact and we let \begin{equation}\label{eqn:xi_min-sol_formula}\xi_\omega^{0,\,1} = \Delta^{''-1}\bar\partial^\star\rho_\omega^{0,\,2}\in\mbox{Im}\,\bar\partial^\star\subset C^\infty_{0,\,1}(X,\,\C)\end{equation} \noindent be the minimal $L^2_\omega$-norm solution of the equation $\bar\partial\xi=\rho_\omega^{0,\,2}$. Our functional $F : {\cal S}_{\{\omega_0\}} \to [0,\,+\infty)$ of Definition \ref{Def:F_energy-functional_H-S} takes the form: \begin{equation}\label{eqn:F_energy-functional_H-S_zero-torsion} F(\omega) = \int\limits_X|\rho_\omega^{2,\,0}|^2_\omega\,dV_\omega = \int\limits_X\rho_\omega^{2,\,0}\wedge\rho_\omega^{0,\,2}\wedge\omega = \int\limits_X\partial\xi_\omega^{1,\,0}\wedge\bar\partial\xi_\omega^{0,\,1}\wedge\omega,\end{equation} \noindent where $\xi_\omega^{1,\,0}$ is the conjugate of $\xi_\omega^{0,\,1}$. Meanwhile, using formulae (\ref{eqn:Neumann_formula_del-0}) and (\ref{eqn:xi_min-sol_formula}) and applying the {\it a priori estimate} in various Sobolev norms to the elliptic operator $\Delta''$, we get the following estimates for every $k\in\N$: \begin{eqnarray}\label{eqn:xi-rho-omega_apriori-estimates}\nonumber||\xi_\omega^{0,\,1}||_{W^{k+2}} & \leq & C_k\,||\bar\partial^\star\rho_\omega^{0,\,2}||_{W^k}\leq A_kC_k\,||\rho_\omega^{0,\,2}||_{W^{k+1}} = A_kC_k\,||\Delta^{'-1}\partial^\star\bar\partial\omega||_{W^{k+1}} \\ & = & A_kC_k\,||\Delta^{''-1}\bar\partial^\star\partial\omega||_{W^{k+1}} \leq A_kC_k^2\,||\bar\partial^\star\partial\omega||_{W^{k-1}}\leq A_k^2C_k^2\,||\partial\omega||_{W^k},\end{eqnarray} \noindent where $A_k, C_k>0$ are constants independent of $\omega$. This shows that $\xi_\omega^{0,\,1}$ is small when $\partial\omega$ is small (i.e. $\omega$ is close to being K\"ahler). \vspace{3ex} Now, recall that a Hodge theory for the second page $E_2$ of the Fr\"olicher spectral sequence of $X$ was constructed in [Pop16] by means of the introduction of the {\it pseudo-differential Laplace-type operator} $$\widetilde\Delta=\partial p''\partial^\star + \partial^\star p''\partial + \Delta'' : C^\infty_{p,\,q}(X,\,\C)\to C^\infty_{p,\,q}(X,\,\C)$$ \noindent whose kernel in every bidegree $(p,\,q)$ is isomorphic to $E_2^{p,\,q}(X)$, where $p'':C^\infty_{p,\,q}(X,\,\C)\to\ker\Delta''\subset C^\infty_{p,\,q}(X,\,\C)$ is the orthogonal projection onto the $\Delta''$-harmonic space. Thus, every $E_2$-class $\{\alpha\}_{E_2}\in E_2^{p,\,q}$ contains a unique element lying in $$\ker\widetilde\Delta = \ker\bar\partial\cap\ker\bar\partial^\star\cap\ker(p''\partial)\cap\ker(p''\partial^\star).$$ \noindent This construction applies for any fixed Hermitian metric $\omega$ on $X$ with respect to which $\partial^\star$, $\bar\partial^\star$, $p''$, $\Delta''$ and $\widetilde\Delta$ are computed. We will add a subscript $\omega$ when we wish to stress the dependence of the respective operator on $\omega$. \begin{Lem}\label{Lem:rho_antiharmonicity} Let $X$ be a compact complex manifold such that $\mbox{dim}_\C X=3$. Suppose that there exists a Hermitian-symplectic metric $\omega$ on $X$ such that $\{\rho_\omega^{0,\,2}\}_{E_2}=0$. Then: \vspace{2ex} (i)\, $\rho_\omega^{0,\,2}\in\ker\bar\partial\cap\ker(p''\partial)\cap\ker(p''\partial^\star)$, hence $\rho_\omega^{0,\,2}\in\ker\widetilde\Delta$ if and only if $\bar\partial^\star\rho_\omega^{0,\,2}=0$; \vspace{2ex} (ii)\, $\rho_\omega^{0,\,2}\perp\ker\widetilde\Delta$, hence $\rho_\omega^{0,\,2}\in\ker\widetilde\Delta$ if and only if $\rho_\omega^{0,\,2}=0$. 
\end{Lem} \noindent {\it Proof.} (i)\, As noticed in Corollary \ref{Cor:necessary-cond_K}, the hypothesis $\{\rho_\omega^{0,\,2}\}_{E_2}=0$ is equivalent to $\rho_\omega^{0,\,2}\in\mbox{Im}\,\bar\partial$. Hence, $\bar\partial\rho_\omega^{0,\,2}=0$ and $\partial\rho_\omega^{0,\,2}\in\mbox{Im}\,\bar\partial$, so $(p''\partial)\,\rho_\omega^{0,\,2}=0$ as well. Meanwhile, $\rho_\omega^{2,\,0}\in\mbox{Im}\,\bar\partial^\star$ by construction (see Observation \ref{Obs:torsion-formula_dim3}). Taking conjugates, we get $\rho_\omega^{0,\,2}\in\mbox{Im}\,\partial^\star$, hence $\partial^\star\rho_\omega^{0,\,2}=0$ since $\partial^\star$ squares to zero. \vspace{1ex} (ii) The following $L^2_\omega$-orthogonal $3$-space decomposition was proved in [Pop16, Lemma 3.3] in every bidegree $(p,\,q)$: \begin{equation}\label{eqn:3-space-decomp_Delta-tilde}C^{\infty}_{p,\,q}(X,\,\C) = \ker\widetilde{\Delta}\bigoplus\bigg(\mbox{Im}\,\bar\partial + \mbox{Im}\,(\partial_{|\ker\bar\partial})\bigg)\bigoplus\bigg(\mbox{Im}\,(\partial^{\star}\circ p'') + \mbox{Im}\,\bar\partial^{\star}\bigg).\end{equation} \noindent In our case, $(p,\,q)=(0,\,2)$, hence $\mbox{Im}\,(\partial_{|\ker\bar\partial}) = \{0\}$ for bidegree reasons. Since $\rho_\omega^{0,\,2}\in\mbox{Im}\,\bar\partial$, we get $\rho_\omega^{0,\,2}\perp\ker\widetilde\Delta$, by orthogonality. \hfill $\Box$ \vspace{3ex} It follows from the above lemma that the following equivalence holds: $$\rho_\omega^{0,\,2}=0\iff\bar\partial^\star\rho_\omega^{0,\,2}=0.$$ On the other hand, $\bar\partial^\star\rho_\omega^{0,\,2} = \bar\partial^\star\bar\partial\xi_\omega^{0,\,1} = \Delta''\xi_\omega^{0,\,1}$, hence it is tempting to introduce the following \begin{Def}\label{Def:G-functional} Let $X$ be a compact complex manifold such that $\mbox{dim}_\C X=3$. Suppose that there exists a Hermitian-symplectic metric $\omega_0$ on $X$ such that $\{\rho_{\omega_0}^{0,\,2}\}_{E_2}=0$. We define the following {\bf energy functional}: $$G:{\cal S}_{\{\omega_0\}}\to[0,\,+\infty), \hspace{3ex} G(\omega)=\int\limits_X|\bar\partial^\star_\omega\rho_\omega^{0,\,2}|^2_\omega\,dV_\omega = ||\bar\partial^\star_\omega\rho_\omega^{0,\,2}||^2_\omega.$$ \end{Def} From the above discussion, we get the identities \begin{equation}\label{eqn:G_identities}G(\omega)=||\bar\partial^\star_\omega\rho_\omega^{0,\,2}||^2_\omega = \langle\langle\Delta''_\omega\rho_\omega^{0,\,2},\,\rho_\omega^{0,\,2}\rangle\rangle_\omega = ||\Delta''_\omega\xi_\omega^{0,\,1}||^2_\omega\end{equation} \noindent and the equivalences $$F(\omega)=0 \iff G(\omega)=0 \iff \omega \hspace{1ex}\mbox{is K\"ahler}.$$ \section{Approach via a Monge-Amp\`ere-type equation}\label{section:M-A_eq} Let $X$ be a compact complex manifold with $\mbox{dim}_\C X=3$. Suppose there exists a Hermitian-symplectic metric $\omega$ on $X$. As usual, let $A = A_{\{\omega\}_A}:= F(\omega) + \mbox{Vol}_\omega(X)>0$ be the generalised volume of the Aeppli cohomology class $\{\omega\}_A\in H^{1,\,1}_A(X,\,\R)$. Let $\gamma$ be an arbitrary Hermitian metric on $X$ and let $c>0$ be the constant defined by \begin{equation}\label{eqn:def_c}\bigg(\int_X\omega\wedge\frac{\gamma^2}{2!}\bigg)^3\bigg\slash\bigg(\int_X\frac{\gamma^3}{3!}\bigg)^2 = 6A/c.\end{equation} Now, consider the following Monge-Amp\`ere-type equation: \begin{equation}\label{eqn:M-A_eq}\nonumber (\omega + \partial\bar\eta + \bar\partial\eta)^3 = c\,(\Lambda_\gamma\omega)^3\,\,\frac{\gamma^3}{3!} \hspace{6ex} (\star)\end{equation} \noindent whose unknown is $\eta\in C^\infty_{1,\,0}(X,\,\C)$, subject to the positivity condition $\omega + \partial\bar\eta + \bar\partial\eta>0$.
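In the computations that follow, we will freely use the standard pointwise trace identity $$\Lambda_\gamma\omega\,\frac{\gamma^3}{3!} = \omega\wedge\frac{\gamma^2}{2!},$$ \noindent valid for any Hermitian metric $\gamma$ and any $(1,\,1)$-form $\omega$ when $\mbox{dim}_\C X=3$; it is the case $n=3$ of the general identity $(\Lambda_\gamma\omega)\,\frac{\gamma^n}{n!} = \omega\wedge\frac{\gamma^{n-1}}{(n-1)!}$.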
On the one hand, for any $\eta$ such that $\omega + \partial\bar\eta + \bar\partial\eta>0$, the H\"older inequality with conjugate exponents $p=3$ and $q=3/2$ gives: \begin{eqnarray}\label{eqn:Holder_cube}\int\limits_X\sqrt[3]{\frac{(\omega + \partial\bar\eta + \bar\partial\eta)^3}{\gamma^3}}\,\frac{\gamma^3}{3!} \leq \bigg(\int\limits_X\frac{(\omega + \partial\bar\eta + \bar\partial\eta)^3}{3!}\bigg)^{\frac{1}{3}}\,\bigg(\int\limits_X\frac{\gamma^3}{3!}\bigg)^{\frac{2}{3}}.\end{eqnarray} On the other hand, if the form $\eta$ solves equation ($\star$), we have \begin{eqnarray}\label{eqn:star_cubic-root_consequence}\int\limits_X\sqrt[3]{\frac{(\omega + \partial\bar\eta + \bar\partial\eta)^3}{\gamma^3}}\,\frac{\gamma^3}{3!} = (c/3!)^{\frac{1}{3}}\, \int\limits_X\Lambda_\gamma\omega\,\frac{\gamma^3}{3!} = (c/3!)^{\frac{1}{3}}\,\int\limits_X\omega\wedge\frac{\gamma^2}{2!}.\end{eqnarray} Putting together (\ref{eqn:def_c}), (\ref{eqn:Holder_cube}) and (\ref{eqn:star_cubic-root_consequence}), we get, whenever $\eta$ solves equation ($\star$): \begin{eqnarray}\nonumber A\geq\int\limits_X\frac{(\omega + \partial\bar\eta + \bar\partial\eta)^3}{3!}\geq\frac{1}{(\mbox{Vol}_\gamma(X))^2}\,\bigg(\int\limits_X\sqrt[3]{\frac{(\omega + \partial\bar\eta + \bar\partial\eta)^3}{\gamma^3}}\,\frac{\gamma^3}{3!}\bigg)^3 & = & \frac{c}{3!}\,\frac{\bigg(\int\limits_X\omega\wedge\frac{\gamma^2}{2!}\bigg)^3}{(\mbox{Vol}_\gamma(X))^2 } \\ \nonumber & = & \frac{c}{3!}\,\frac{6A}{c} = A,\end{eqnarray} \noindent where $\mbox{Vol}_\gamma(X) := \int_X\gamma^3/3!$. This implies, thanks to (\ref{eqn:A-invariant}), that $F(\omega_\eta)=0$ in this case, which is equivalent to $\omega_\eta$ being a K\"ahler metric. We have thus proved Proposition \ref{Prop:Introd_consequence_MA-eq} stated in the Introduction. \section{Stratification of the Aeppli class}\label{section:stratification} In $\S.$\ref{section:M-A_eq} we defined the Monge-Amp\`ere-type equation $(\star)$ and observed that its solutions yield K\"ahler metrics. Unfortunately, little is known about the solvability of equations of this type. Therefore, in this section we consider the special case of equation $(\star)$ where the solution $\eta$ is of the shape $\eta = -(i/2)\,\partial\varphi$, with $\varphi:X\to\R$ a $C^\infty$ function (see the elementary computation following this paragraph). Equation $(\star)$ becomes: \begin{equation}\label{eqn:M-A_eq_BCc}\nonumber (\omega + i\partial\bar\partial\varphi)^3 = c\,(\Lambda_\gamma\omega)^3\,\,\frac{\gamma^3}{3!}, \hspace{6ex} (\star\star)\end{equation} subject to the condition $\omega + i\partial\bar\partial\varphi>0$, where $\gamma$ is an arbitrary Hermitian metric fixed on $X$, $A$ is the generalised volume of $\{\omega\}_A$ defined in (\ref{eqn:A-invariant}) and the constant $c>0$ is defined in (\ref{eqn:def_c}). The advantage is that we are now dealing with a scalar equation and the existence theory is much more developed in this set-up (cf. [Che87], [GL09], [TW10]). The drawback is that the perturbation of $\omega$ by $i\partial\bar{\partial}\varphi$ is non-generic within its Aeppli class and this forces us to break $\{\omega\}_A$ into subclasses (to be defined below) and study equation $(\star\star)$ in each subclass.
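For the reader's convenience, we spell out the elementary computation behind this special shape of $\eta$. Since $\varphi$ is real-valued, $\bar\eta = (i/2)\,\bar\partial\varphi$, so $$\partial\bar\eta + \bar\partial\eta = \frac{i}{2}\,\partial\bar\partial\varphi - \frac{i}{2}\,\bar\partial\partial\varphi = i\partial\bar\partial\varphi,$$ \noindent where we used the identity $\bar\partial\partial = -\partial\bar\partial$. Thus, for $\eta$ of this shape, the Aeppli-type perturbation $\partial\bar\eta + \bar\partial\eta$ of $\omega$ reduces to the Bott-Chern-type perturbation $i\partial\bar\partial\varphi$.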
A conformal rescaling of $\gamma$ by a $C^\infty$ function $f:X\to(0,\,+\infty)$ will change the constant $c>0$ to some constant $c_f>0$ defined by the analogue of (\ref{eqn:def_c}): \begin{equation}\label{eqn:def_c_f}\nonumber\frac{6A}{c_f} = \frac{\bigg(\int\limits_Xf^2\,\omega\wedge\frac{\gamma^2}{2!}\bigg)^3}{\bigg(\int\limits_X\frac{f^3\,\gamma^3}{3!}\bigg)^2} = \frac{\bigg(\int\limits_Xf^2\,(\Lambda_\gamma\omega)\,dV_\gamma\bigg)^3}{\bigg(\int\limits_Xf^3\,dV_\gamma\bigg)^2}\leq\int\limits_X(\Lambda_\gamma\omega)^3\,dV_\gamma,\end{equation} where the last inequality is H\"older's inequality applied to the functions $f^2$ and $\Lambda_\gamma\omega$ with the conjugate exponents $p=3/2$ and $q=3$. This translates to the following eligibility condition for $c_f$: \begin{equation*}c_f\geq\frac{6A}{\int\limits_X(\Lambda_\gamma\omega)^3\,dV_\gamma}.\end{equation*} \noindent Now, H\"older's inequality is an equality if $f=\Lambda_\gamma\omega$. In particular, if the metric $\gamma>0$ is chosen such that $\Lambda_\gamma\omega\equiv 1$, no conformal rescaling of $\gamma$ is necessary (i.e. we can choose $f\equiv 1$) to get the {\it minimal} constant $$c = \frac{6A}{\int\limits_XdV_\gamma}.$$ \begin{Def}\label{Def:omega-normalised} Let $\omega$ be a fixed Hermitian metric on a compact complex manifold $X$. A Hermitian metric $\gamma$ on $X$ is said to be {\bf $\omega$-normalised} if $\Lambda_\gamma\omega=1$ at every point of $X$.\end{Def} The following observation is trivial. \begin{Lem}\label{Lem:conformal-class_omega-normalised} Every conformal class of Hermitian metrics on $X$ {\bf contains} a {\bf unique} $\omega$-normalised representative. \end{Lem} \noindent{\it Proof.} Let $\gamma$ be an arbitrary Hermitian metric on $X$. We are looking for $C^\infty$ functions $f:X\to(0,\,+\infty)$ such that $\Lambda_{f\gamma}\omega=1$ on $X$. Since $\Lambda_{f\gamma}\omega=(1/f)\,\Lambda_\gamma\omega$, the only possible choice for $f$ is $f=\Lambda_\gamma\omega$. \hfill $\Box$ \vspace{2ex} We saw in $\S.$\ref{section:M-A_eq} that if equality is achieved in H\"older's inequality (i.e. if the constant $c>0$ assumes its {\it minimal} value computed above) and if equation $(\star\star)$ is solvable with this minimal constant $c$ on the right, then its solution is a K\"ahler metric. In other words, Proposition \ref{Prop:Introd_consequence_MA-eq} and the above considerations lead to \begin{Conc}\label{Conc:BC-subclass_after-H-eq} Let $X$ be a compact complex manifold with $\mbox{dim}_\C X=3$. Suppose there exists a {\bf Hermitian-symplectic} metric $\omega$ on $X$ and fix an arbitrary {\bf $\omega$-normalised} Hermitian metric $\gamma$ on $X$. Let $A>0$ be the generalised volume of $\{\omega\}_A$ defined in (\ref{eqn:A-invariant}). If there exists a $C^\infty$ solution $\varphi:X\to\R$ of the equation \begin{equation}\label{eqn:M-A_eq_BC}\nonumber \frac{(\omega + i\partial\bar\partial\varphi)^3}{3!} = A\,\frac{dV_\gamma}{\int\limits_XdV_\gamma} \hspace{6ex} (\star\star)\end{equation} such that $\omega_\varphi:=\omega + i\partial\bar\partial\varphi>0$, then $\omega_\varphi$ is a K\"ahler metric lying in the Aeppli cohomology class $\{\omega\}_A$. \end{Conc} \subsection{The strata}\label{subsection:strata} A Hermitian-symplectic metric $\omega$ need not be $d$-closed, but let us still call the affine space $\{\omega\}_{BC}:=\{\omega + i\partial\bar\partial\varphi\,\mid\,\varphi\in C^\infty(X,\,\R)\}$ the {\bf Bott-Chern subclass} (or {\bf stratum}) of $\omega$. 
It is an affine subspace of the Aeppli class $\{\omega\}_A$ of $\omega$. Similarly, by analogy with the open convex subset $${\cal S}_{\{\omega\}}:=\{\omega'>0\,\mid\,\omega'\in\{\omega\}_A\}\subset\{\omega\}_A\cap C^\infty_{1,\,1}(X,\,\R)$$ of metrics in the Aeppli class of a given H-S metric $\omega$, we define the open convex subset $${\cal D}_{[\omega']}:=\{\omega''>0\,\mid\,\omega''\in\{\omega'\}_{BC}\}\subset\{\omega'\}_{BC}$$ of metrics in the Bott-Chern subclass of a given H-S metric $\omega'$. If we fix a Hermitian-symplectic metric $\omega$, we can {\bf partition} ${\cal S}_{\{\omega\}}$ as \begin{equation}\label{eqn:stratification}{\cal S}_{\{\omega\}}=\bigcup\limits_{j\in J}{\cal D}_{[\omega_j]},\end{equation} where $(\omega_j)_{j\in J}$ is a system of representatives of the Bott-Chern subclasses ${\cal D}_{[\omega']}$ when $\omega'$ ranges over ${\cal S}_{\{\omega\}}$. Moreover, for every $j\in J$, let $\gamma_j$ be an $\omega_j$-normalised Hermitian metric on $X$ and let us consider the equation: \begin{equation*}\frac{(\omega_j + i\partial\bar\partial\varphi)^3}{3!} = A\,\frac{dV_{\gamma_j}}{\int\limits_XdV_{\gamma_j}} \hspace{6ex} (\star\star_j)\end{equation*} such that $\omega_j + i\partial\bar\partial\varphi>0$. (No other condition is imposed at this point on $\gamma_j$.) By the Tosatti-Weinkove theorem [TW10, Corollary 1], there exists a unique constant $b_j>0$ such that the equation \begin{equation*}\frac{(\omega_j + i\partial\bar\partial\varphi)^3}{3!} = b_jA\,\frac{dV_{\gamma_j}}{\int\limits_XdV_{\gamma_j}}\hspace{6ex} (\star\star\star_j),\end{equation*} subject to the extra condition $\omega_j + i\partial\bar\partial\varphi>0$, is solvable. Integrating and using the inequality $\int_X(\omega_j + i\partial\bar\partial\varphi)^3/3!\leq A$, which follows from (\ref{eqn:A-invariant}), we get: $$b_j\leq 1, \hspace{3ex} j\in J.$$ From this and from Conclusion \ref{Conc:BC-subclass_after-H-eq}, we infer Proposition \ref{Prop:Introd_b_j1} stated in the Introduction. \vspace{2ex} The next observation is that, within Bott-Chern subclasses of Hermitian-symplectic metrics that contain a {\it Gauduchon} metric, the volume remains {\it constant} and all the metrics are {\it Gauduchon}. These Bott-Chern subclasses will be called {\it Gauduchon strata}. \begin{Lem}\label{Lem:BC-subclass_volume-Gauduchon} Let $\mbox{dim}_\C X=3$. Suppose that a metric $\omega$ on $X$ is both {\bf SKT} and {\bf Gauduchon}. Then, for every $\varphi\in C^\infty(X,\,\R)$, we have $$(a)\,\int\limits_X(\omega + i\partial\bar\partial\varphi)^3 = \int\limits_X\omega^3 \hspace{3ex} \mbox{and} \hspace{3ex} (b)\,\,\,\partial\bar\partial(\omega + i\partial\bar\partial\varphi)^2 = 0.$$ \end{Lem} \noindent {\it Proof.} (a)\, Straightforward calculations give: \begin{eqnarray*}\int\limits_X(\omega + i\partial\bar\partial\varphi)^3 & = & \int\limits_X\omega^3 + 3\int\limits_X\omega^2\wedge i\partial\bar\partial\varphi + 3\int\limits_X\omega\wedge(i\partial\bar\partial\varphi)^2 + \int\limits_X(i\partial\bar\partial\varphi)^3 \\ & = & \int\limits_X\omega^3 + 3i\int\limits_X\varphi\,\partial\bar\partial\omega^2 - 3\int\limits_X\varphi\,\partial\bar\partial\omega\wedge\partial\bar\partial\varphi + \int\limits_X\partial(i\bar\partial\varphi\wedge(i\partial\bar\partial\varphi)^2) = \int\limits_X\omega^3,\end{eqnarray*} \noindent where $\partial\bar\partial\omega=0$ since $\omega$ is SKT, while $\partial\bar\partial\omega^2=0$ since $\omega$ is Gauduchon.
\vspace{1ex} (b)\, Straightforward calculations give: \begin{eqnarray*}\partial\bar\partial(\omega + i\partial\bar\partial\varphi)^2 = \partial\bar\partial\omega^2 + 2\,\partial\bar\partial\omega\wedge(i\partial\bar\partial\varphi) + \partial\bar\partial(i\partial\bar\partial\varphi)^2 =0,\end{eqnarray*} since $\partial\bar\partial\omega=0$ and $\partial\bar\partial\omega^2=0$ for the same reasons as in (a). \hfill $\Box$ \vspace{2ex} \subsection{Volume comparison within a Bott-Chern stratum}\label{subsection:volume_BC-stratum} We have seen that for any SKT metric $\omega$ on a $3$-dimensional compact complex manifold $X$, we have: \begin{eqnarray}\label{eqn:vol_BC_comparison1}\nonumber\int\limits_X\frac{(\omega + i\partial\bar\partial\varphi)^3}{3!} & = & \int\limits_X\frac{\omega^3}{3!} + \int\limits_X\omega^2\wedge\frac{i}{2}\partial\bar\partial\varphi = \int\limits_X\frac{\omega^3}{3!} + \int\limits_X \varphi\,\frac{i}{2}\partial\bar\partial\omega^2 \\ & = & \int\limits_X\frac{\omega^3}{3!} + \int\limits_X \varphi\,i\partial\omega\wedge\bar\partial\omega,\end{eqnarray} where the SKT hypothesis on $\omega$ is used both to dispose, via Stokes, of the terms of degree $\geq 2$ in $i\partial\bar\partial\varphi$ (cf. the proof of Lemma \ref{Lem:BC-subclass_volume-Gauduchon}) and to get the last identity. Thus, to understand the variation of $\mbox{Vol}_{\omega_\varphi}(X)=\int_X(\omega + i\partial\bar\partial\varphi)^3/3!$ when $\varphi$ ranges over the $C^\infty$ real-valued functions on $X$ such that $\omega + i\partial\bar\partial\varphi>0$, the following observation will come in handy. \begin{Lem}\label{Lem:almost_lck-bal_formula} Let $X$ be a $3$-dimensional complex manifold and let $\omega$ be an arbitrary Hermitian metric on $X$. If $$\partial\omega = (\partial\omega)_{prim} + \alpha^{1,\,0}\wedge\omega$$ is the Lefschetz decomposition of $\partial\omega$ into a primitive part (w.r.t. $\omega$) and a part divisible by $\omega$, with $\alpha^{1,\,0}\in C^\infty_{1,\,0}(X,\,\C)$, then \begin{eqnarray}\label{eqn:almost_lck-bal_formula}i\partial\omega\wedge\bar\partial\omega = \bigg(|\alpha^{1,\,0}\wedge\omega|^2_\omega - |(\partial\omega)_{prim}|^2_\omega\bigg)\,dV_\omega.\end{eqnarray} \end{Lem} \noindent {\it Proof.} From the Lefschetz decomposition, we get: \begin{eqnarray*}i\partial\omega\wedge\bar\partial\omega = i(\partial\omega)_{prim}\wedge(\bar\partial\omega)_{prim} + i\alpha^{1,\,0}\wedge\alpha^{0,\,1}\wedge\omega^2,\end{eqnarray*} where $\alpha^{0,\,1}=\overline{\alpha^{1,\,0}}$. This is because $(\partial\omega)_{prim}\wedge\omega=0$ and $(\bar\partial\omega)_{prim}\wedge\omega=0$. Indeed, $(\partial\omega)_{prim}$ and $(\bar\partial\omega)_{prim}$ are primitive $3$-forms on a $3$-dimensional complex manifold, so they lie in the kernel of $\omega\wedge\cdot$. From the general formula (\ref{eqn:prim-form-star-formula-gen}), we get: $$(\bar\partial\omega)_{prim} = i\,\star(\bar\partial\omega)_{prim} \hspace{3ex} \mbox{and} \hspace{3ex} \alpha^{0,\,1}\wedge\frac{\omega^2}{2!} = -i\,\star\alpha^{0,\,1},$$ where $\star$ is the Hodge star operator induced by $\omega$.
Hence, \begin{eqnarray*}i(\partial\omega)_{prim}\wedge(\bar\partial\omega)_{prim} & = & -(\partial\omega)_{prim}\wedge\star(\bar\partial\omega)_{prim} = - |(\partial\omega)_{prim}|^2\,dV_\omega \\ i\alpha^{1,\,0}\wedge\alpha^{0,\,1}\wedge\omega^2 & = & 2\,\alpha^{1,\,0}\wedge\star\alpha^{0,\,1} = 2\,|\alpha^{1,\,0}|^2_\omega\,dV_\omega.\end{eqnarray*} Formula (\ref{eqn:almost_lck-bal_formula}) follows from these computations after further noticing that \begin{eqnarray*}|\alpha^{1,\,0}\wedge\omega|^2_\omega & = & \langle\alpha^{1,\,0}\wedge\omega,\,\alpha^{1,\,0}\wedge\omega\rangle_\omega = \langle\Lambda_\omega(\alpha^{1,\,0}\wedge\omega),\,\alpha^{1,\,0}\rangle_\omega \\ & = & \langle[\Lambda_\omega,\,\omega\wedge\cdot](\alpha^{1,\,0}),\,\alpha^{1,\,0}\rangle_\omega = 2\,|\alpha^{1,\,0}|^2_\omega,\end{eqnarray*} where the last identity follows from the well-known formula $[\Lambda_\omega,\,\omega\wedge\cdot] = (n-k)\,\mbox{Id}$ on $k$-forms on an $n$-dimensional complex manifold. (In our case, $k=1$ and $n=3$.) \hfill $\Box$ \vspace{3ex} Notice that, in the setting of Lemma \ref{Lem:almost_lck-bal_formula}, $\omega$ is {\it balanced} if and only if $\partial\omega = (\partial\omega)_{prim}$, while $\omega$ is {\it lck} (i.e. {\it locally conformally K\"ahler}) if and only if $\partial\omega = \alpha^{1,\,0}\wedge\omega$. This accounts for the terminology used in the next \begin{Cor}\label{Cor:almost_lck-bal} Let $X$ be a $3$-dimensional compact complex manifold equipped with an {\bf SKT metric} $\omega$. \vspace{1ex} (a)\, If $|\alpha^{1,\,0}\wedge\omega|_\omega\geq|(\partial\omega)_{prim}|_\omega$ at every point of $X$ (we will say in this case that $\omega$ is {\bf almost lck}), then $\omega$ is {\bf Gauduchon} and $|\alpha^{1,\,0}\wedge\omega|_\omega=|(\partial\omega)_{prim}|_\omega$ at every point of $X$. \vspace{1ex} (b)\, If $|(\partial\omega)_{prim}|_\omega\geq|\alpha^{1,\,0}\wedge\omega|_\omega$ at every point of $X$ (we will say in this case that $\omega$ is {\bf almost balanced}), then $\omega$ is {\bf Gauduchon} and $|\alpha^{1,\,0}\wedge\omega|_\omega=|(\partial\omega)_{prim}|_\omega$ at every point of $X$. \vspace{1ex} (c)\, $\omega$ is {\bf almost lck} $\iff$ $\omega$ is {\bf almost balanced} $\iff$ $\omega$ is {\bf Gauduchon} $\iff$ $|\alpha^{1,\,0}\wedge\omega|_\omega=|(\partial\omega)_{prim}|_\omega$ at every point of $X$. \end{Cor} \noindent {\it Proof.} The SKT assumption on $\omega$ implies that $i\partial\omega\wedge\bar\partial\omega = \frac{i}{2}\,\partial\bar\partial\omega^2$. Integrating this identity and using the Stokes theorem and formula (\ref{eqn:almost_lck-bal_formula}), we get: \begin{eqnarray}\label{eqn:integrals-equal_Lefschetz}\int\limits_X|\alpha^{1,\,0}\wedge\omega|^2_\omega\,dV_\omega = \int\limits_X|(\partial\omega)_{prim}|_\omega^2\,dV_\omega.\end{eqnarray} Therefore, if $|(\partial\omega)_{prim}|^2_\omega-|\alpha^{1,\,0}\wedge\omega|^2_\omega$ has constant sign on $X$, it must vanish identically. This is equivalent to $\frac{i}{2}\,\partial\bar\partial\omega^2$ vanishing identically, hence to $\omega$ being Gauduchon. 
\hfill $\Box$ \vspace{3ex} Based on these observations, let us introduce the following \begin{Not}\label{Not:puddles} For any SKT metric $\omega$ on a $3$-dimensional compact complex manifold $X$, we put: \begin{eqnarray*}U_\omega &:=&\bigg\{x\in X\,\bigm| \,|\alpha^{1,\,0}\wedge\omega|_\omega(x) < |(\partial\omega)_{prim}|_\omega(x)\bigg\}, \\ V_\omega &:=&\bigg\{x\in X\,\bigm| \,|\alpha^{1,\,0}\wedge\omega|_\omega(x) > |(\partial\omega)_{prim}|_\omega(x)\bigg\}, \\ Z_\omega &:=&\bigg\{x\in X\,\bigm| \,|\alpha^{1,\,0}\wedge\omega|_\omega(x) = |(\partial\omega)_{prim}|_\omega(x)\bigg\}.\end{eqnarray*} \end{Not} Clearly, $U_\omega$ and $V_\omega$ are open subsets of $X$, while $Z_\omega$ is closed. The three of them form a partition of $X$. Moreover, Corollary \ref{Cor:almost_lck-bal} ensures that $\omega$ is Gauduchon if and only if $U_\omega = V_\omega = \emptyset$. This happens if and only if either $U_\omega = \emptyset$ or $V_\omega = \emptyset$. \vspace{2ex} Returning to the variation of the volume of $\omega_\varphi:=\omega + i\partial\bar\partial\varphi$, we now observe a stark contrast between the non-Gauduchon strata dealt with below and the Gauduchon ones treated in Lemma \ref{Lem:BC-subclass_volume-Gauduchon}. \begin{Lem}\label{Lem:BC-subclass_volume-nonGauduchon} Let $X$ be a $3$-dimensional compact complex manifold. Suppose that $\omega$ is an {\bf SKT non-Gauduchon} metric on $X$. Then, the map $$\bigg\{\varphi\in C^\infty(X)\,\big|\omega + i\partial\bar\partial\varphi>0\bigg\}\ni\varphi\longmapsto\int\limits_X\frac{(\omega + i\partial\bar\partial\varphi)^3}{3!}=:\mbox{Vol}_{\omega_\varphi}(X)\in(0,\,+\infty)$$ does not achieve any local extremum. \end{Lem} \noindent {\it Proof.} Suppose this map achieves, say, a local maximum at some metric $\omega_0=\omega + i\partial\bar\partial\varphi_0>0$. Without loss of generality, we may assume that $\omega_0=\omega$ (i.e. that $\varphi_0$ is a constant, say $\varphi_0\equiv 1$). Since $\omega$ is not Gauduchon, neither $U_\omega$ nor $V_\omega$ is empty. Thanks to (\ref{eqn:vol_BC_comparison1}) and (\ref{eqn:almost_lck-bal_formula}), the local maximality of $\omega$ translates to \begin{eqnarray}\label{eqn:local-max_ineq}\int\limits_X\varphi\,|\alpha^{1,\,0}\wedge\omega|^2_\omega\,dV_\omega \leq \int\limits_X\varphi\,|(\partial\omega)_{prim}|_\omega^2\,dV_\omega\end{eqnarray} for every $\varphi\in C^\infty(X,\,\R)$ such that $\omega + i\partial\bar\partial\varphi>0$ and $\varphi$ is close enough to $\varphi_0\equiv 1$ in $C^2$ norm.
Now, (\ref{eqn:integrals-equal_Lefschetz}) translates to \begin{eqnarray}\label{eqn:local-max_ineq_1}\nonumber\int\limits_{U_\omega}(|\alpha^{1,\,0}\wedge\omega|^2_\omega - |(\partial\omega)_{prim}|_\omega^2)\,dV_\omega & + & \int\limits_{V_\omega}(|\alpha^{1,\,0}\wedge\omega|^2_\omega - |(\partial\omega)_{prim}|_\omega^2)\,dV_\omega \\ & + & \int\limits_{Z_\omega}(|\alpha^{1,\,0}\wedge\omega|^2_\omega - |(\partial\omega)_{prim}|_\omega^2)\,dV_\omega = 0.\end{eqnarray} Thus, if we can find a $\varphi\in C^\infty(X,\,\R)$ sufficiently close to $\varphi_0\equiv 1$ in $C^2$ norm (this will also imply that $\omega + i\partial\bar\partial\varphi>0$) such that $$\varphi\equiv 1 \hspace{1ex}\mbox{on}\hspace{1ex} U_\omega\cup Z_\omega, \hspace{3ex} \varphi\equiv 1+\varepsilon \hspace{1ex}\mbox{on}\hspace{1ex} V'_\omega\Subset V_\omega, \hspace{3ex} \mbox{and}\hspace{3ex} 1\leq\varphi\leq 1+\varepsilon \hspace{1ex}\mbox{on}\hspace{1ex} V_\omega\setminus V'_\omega,$$ for some constant $\varepsilon>0$, where $V'_\omega$ is a pregiven relatively compact open subset of $V_\omega$, we will have $$\int\limits_{V_\omega}(\varphi - 1)\,(|\alpha^{1,\,0}\wedge\omega|^2_\omega-|(\partial\omega)_{prim}|_\omega^2)\,dV_\omega>0.$$ Thanks to (\ref{eqn:local-max_ineq_1}), this will imply that \begin{eqnarray*}\int\limits_X\varphi\,|\alpha^{1,\,0}\wedge\omega|^2_\omega\,dV_\omega > \int\limits_X\varphi\,|(\partial\omega)_{prim}|_\omega^2\,dV_\omega,\end{eqnarray*} which will contradict (\ref{eqn:local-max_ineq}). Now, if $\varepsilon>0$ is chosen small enough, it is obvious that a function $\varphi\in C^\infty(X,\,\R)$ with the above properties exists. \hfill $\Box$ \vspace{3ex} Summing up, the volume of $\omega_\varphi:=\omega + i\partial\bar\partial\varphi$ is {\it constant} on the {\it Gauduchon strata} (if any), while it {\it achieves no local extremum} on the {\it non-Gauduchon strata}. \section{Cohomological interpretations of the generalised volume}\label{Gen-volume_cohom} Before turning to these interpretations of our invariant $A$ in $\S.$\ref{subsection:cohom_A} and $\S.$\ref{subsection:minimal-completion}, we first display it in the context of Hermitian-symplectic and strongly Gauduchon metrics in $\S.$\ref{subsection:sG_H-S}. \subsection{sG metrics induced by H-S metrics}\label{subsection:sG_H-S} From Proposition \ref{Prop:H-S_sG}, we infer the following construction. Let $\mbox{dim}_\C X=3$. With any Hermitian-symplectic metric $\omega$ on $X$, we uniquely associate the $C^\infty$ positive definite $(2,\,2)$-form \begin{equation}\label{eqn:Omega-omega_def}\Omega_\omega:=\omega^2 + 2\rho_\omega^{2,\,0}\wedge\rho_\omega^{0,\,2},\end{equation} where $\rho_\omega^{2,\,0}$ is the $(2,\,0)$-{\it torsion form} of $\omega$ and $\rho_\omega^{0,\,2}=\overline{\rho_\omega^{2,\,0}}$. As is well known (see e.g. [Mic83]), there exists a unique positive definite $(1,\,1)$-form $\gamma_\omega$ such that $$\gamma_\omega^2 =\Omega_\omega.$$ By construction and the proof of Proposition \ref{Prop:H-S_sG}, $\gamma_\omega$ is a {\it strongly Gauduchon} metric on $X$ that will be called the {\bf sG metric associated with $\omega$}. Of course, $\gamma_\omega = \omega$ if and only if $\omega$ is K\"ahler. Since $\gamma_\omega^2$ and $\Omega_\omega$ determine each other uniquely, we will often identify them. In particular, we will also refer to $\Omega_\omega$ as the {\it sG metric associated with $\omega$}. 
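For later use, we record a pointwise identity that has already been used implicitly (e.g. in the proof of Proposition \ref{Prop:torsion_variation_BC}). Since every $(0,\,2)$-form on a $3$-dimensional complex manifold is primitive, the general formula (\ref{eqn:prim-form-star-formula-gen}) yields $\star\,\rho_\omega^{0,\,2} = \rho_\omega^{0,\,2}\wedge\omega$, hence $$\rho_\omega^{2,\,0}\wedge\rho_\omega^{0,\,2}\wedge\omega = \rho_\omega^{2,\,0}\wedge\star\,\overline{\rho_\omega^{2,\,0}} = |\rho_\omega^{2,\,0}|^2_\omega\,dV_\omega,$$ \noindent where $\star = \star_\omega$ is the Hodge star operator induced by $\omega$.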
We get: $$\frac{1}{3!}\,\Omega_\omega\wedge\omega = \frac{1}{3!}\,\omega^3 + \frac{1}{3}\,|\rho_\omega^{2,\,0}|_\omega^2\,dV_\omega.$$ Hence, \begin{equation}\label{eqn:Omega_omega-omega}\frac{1}{6}\,\int\limits_X\Omega_\omega\wedge\omega = \frac{2}{3}\,\mbox{Vol}_\omega(X) + \frac{1}{3}\,A,\end{equation} where $A=\mbox{Vol}_\omega(X) + F(\omega)>0$ is the generalised volume of the H-S Aeppli class $\{\omega\}_A$. Thus, the problem of maximising $\mbox{Vol}_\omega(X)$ when $\omega$ ranges over the metrics in $\{\omega\}_A$ is equivalent to maximising the quantity $\int_X\Omega_\omega\wedge\omega$. \subsection{The first cohomological interpretation of the generalised volume $A$}\label{subsection:cohom_A} We first observe that the Aeppli cohomology class of $\Omega_\omega$ depends only on the Aeppli class of $\omega$. \begin{Lem}\label{Lem:Aeppli-class_Omega_omega} Suppose that $\mbox{dim}_\C X=3$. For any {\bf Aeppli cohomologous} Hermitian-symplectic metrics $\omega$ and $\omega_\eta = \omega + \partial\bar\eta + \bar\partial\eta$ on $X$, with $\eta\in C^\infty_{1,\,0}(X,\,\C)$, the associated sG metrics $\Omega_\omega$ and $\Omega_{\omega_\eta}$ are again {\bf Aeppli cohomologous}. Specifically, we have: \begin{eqnarray}\label{eqn:Aeppli-class_Omega_omega}\nonumber\Omega_{\omega_\eta} - \Omega_\omega & = & \partial(\bar\eta\wedge\partial\bar\eta) + \bar\partial(\eta\wedge\bar\partial\eta) + 2\,\partial(\bar\eta\wedge\bar\partial\eta) + 2\,\bar\partial(\partial\eta\wedge\bar\eta) \\ & + & 2\,\partial(\eta\wedge\rho^{0,\,2}_\omega) + 2\,\bar\partial(\bar\eta\wedge\rho^{2,\,0}_\omega) + 2\,\partial(\bar\eta\wedge\omega) + 2\,\bar\partial(\eta\wedge\omega),\end{eqnarray} so $\Omega_{\omega_\eta} - \Omega_\omega\in\mbox{Im}\,\partial + \mbox{Im}\,\bar\partial$. \end{Lem} \noindent {\it Proof.} We know from Corollary \ref{Cor:energy_M-A_mass} that the $(2,\,0)$-torsion forms of $\omega_\eta$ and $\omega$ are related by $\rho^{2,\,0}_\eta = \rho^{2,\,0}_\omega + \partial\eta$. We get: \begin{eqnarray}\nonumber\Omega_{\omega_\eta} & = & \omega_\eta^2 + 2\,\rho^{2,\,0}_{\omega_\eta}\wedge\rho^{0,\,2}_{\omega_\eta} = (\omega + \partial\bar\eta + \bar\partial\eta)^2 + 2\,(\rho^{2,\,0}_\omega + \partial\eta)\wedge(\rho^{0,\,2}_\omega + \bar\partial\bar\eta) \\ \nonumber & = & \omega^2 + (\partial\bar\eta + \bar\partial\eta)^2 + 2\,\omega\wedge(\partial\bar\eta + \bar\partial\eta) + 2\,\rho^{2,\,0}_\omega\wedge\rho^{0,\,2}_\omega + 2\,\partial\eta\wedge\bar\partial\bar\eta + 2\,\rho^{2,\,0}_\omega\wedge\bar\partial\bar\eta + 2\,\partial\eta\wedge\rho^{0,\,2}_\omega \\ \nonumber & = & \Omega_\omega + \partial(\bar\eta\wedge\partial\bar\eta) + \bar\partial(\eta\wedge\bar\partial\eta) + 2\,\partial(\bar\eta\wedge\bar\partial\eta) + 2\,\bar\eta\wedge\partial\bar\partial\eta \\ \nonumber & + & 2\,\partial\bar\eta\wedge\omega + 2\,\bar\partial\eta\wedge\omega + 2\,\bar\partial(\partial\eta\wedge\bar\eta) + 2\,\partial\bar\partial\eta\wedge\bar\eta \\ \nonumber & + & 2\,\bar\partial(\bar\eta\wedge\rho^{2,\,0}_\omega) + 2\,\partial(\bar\eta\wedge\omega) - 2\,\partial\bar\eta\wedge\omega + 2\,\partial(\eta\wedge\rho^{0,\,2}_\omega) + 2\,\bar\partial(\eta\wedge\omega) - 2\,\bar\partial\eta\wedge\omega.\end{eqnarray} This proves (\ref{eqn:Aeppli-class_Omega_omega}) since all the terms that are neither in $\mbox{Im}\,\partial$ nor in $\mbox{Im}\,\bar\partial$ reoccur with the opposite sign and cancel. \hfill $\Box$ \vspace{3ex} We will now use some notions introduced in [PSU20]. 
We will need the following \begin{Def}(Definition 3.1 in [PSU20b])\label{Def:E_rE_r-bar} Let $X$ be a compact complex manifold. Fix any $r\geq 1$. \vspace{1ex} (i)\, A form $\alpha\in C^\infty_{p,\,q}(X)$ is said to be {\bf $E_r\overline{E}_r$-closed} if $\partial\bar\partial\alpha=0$ and if there exist smooth forms $\eta_1,\dots , \eta_{r-1}$ and $\rho_1,\dots , \rho_{r-1}$ such that the following two towers of $r-1$ equations are satisfied: \begin{align*}\label{eqn:towers_E_rE_r-bar-closedness} \partial\alpha & = \bar\partial\eta_1 & \bar\partial\alpha & = \partial\rho_1 & \\ \partial\eta_1 & = \bar\partial\eta_2 & \bar\partial\rho_1 & = \partial\rho_2 & \\ \vdots & & \\ \partial\eta_{r-2} & = \bar\partial\eta_{r-1}, & \bar\partial\rho_{r-2} & = \partial\rho_{r-1}. &\end{align*} \vspace{1ex} (ii)\, A form $\alpha\in C^\infty_{p,\,q}(X)$ is said to be {\bf $E_r\overline{E}_r$-exact} if there exist smooth forms $\zeta, \xi, \eta$ such that \begin{equation}\label{eqn:main-eq_E_rE_r-bar-exactness}\alpha = \partial\zeta + \partial\bar\partial\xi + \bar\partial\eta\end{equation} \noindent and such that $\zeta$ and $\eta$ further satisfy the following conditions. There exist smooth forms $v_{r-3},\dots , v_0$ and $u_{r-3},\dots , u_0$ such that the following two towers of $r-1$ equations are satisfied: \begin{align*}\label{eqn:towers_E_rE_r-bar-exactness} \bar\partial\zeta & = \partial v_{r-3} & \partial\eta & = \bar\partial u_{r-3} & \\ \bar\partial v_{r-3} & = \partial v_{r-4} & \partial u_{r-3} & = \bar\partial u_{r-4} & \\ \vdots & & \\ \bar\partial v_0 & = 0, & \partial u_0 & = 0. &\end{align*} \end{Def} \vspace{2ex} Note that when $r=1$, the two towers in (i) are empty, so the $E_1\overline{E}_1$-closedness condition is equivalent to $\partial\bar\partial$-closedness. Meanwhile, when $r\geq 2$, the two towers in (i) imply that $\partial\bar\partial\alpha = 0$. As for (ii), when $r=1$, the $E_1\overline{E}_1$-exactness condition is equivalent to $\partial\bar\partial$-exactness. We will also need the following \begin{Def}(Definition 3.4 in [PSU20b])\label{Def:E_r-BC_E_r-A} Let $X$ be a compact complex manifold. Fix $r\in\N^\star$ and a bidegree $(p,\,q)$. \vspace{1ex} (i)\, The {\bf $E_r$-Bott-Chern} cohomology group of bidegree $(p,\,q)$ of $X$ is defined as the following quotient complex vector space: \[E_{r,\,BC}^{p,\,q}(X):=\frac{\{\alpha\in C^\infty_{p,\,q}(X)\,\mid\,d\alpha=0\}}{\{\alpha\in C^\infty_{p,\,q}(X)\,\mid\,\alpha\hspace{1ex}\mbox{is}\hspace{1ex} E_r\overline{E}_r\mbox{-exact}\}}.\] \vspace{1ex} (ii)\, The {\bf $E_r$-Aeppli} cohomology group of bidegree $(p,\,q)$ of $X$ is defined as the following quotient complex vector space: \[E_{r,\,A}^{p,\,q}(X):=\frac{\{\alpha\in C^\infty_{p,\,q}(X)\,\mid\,\alpha\hspace{1ex}\mbox{is}\hspace{1ex} E_r\overline{E}_r-\mbox{closed}\}}{\{\alpha\in C^\infty_{p,\,q}(X)\,\mid\,\alpha\in\mbox{Im}\,\partial + \mbox{Im}\,\bar\partial\}}.\] \end{Def} We will also need the following \begin{Lem}(Proposition 6.2 in [PSU20b])\label{Lem:sG-HS_E_2A} Let $X$ be a compact complex manifold with $\mbox{dim}_\C X=n$ and let $\omega$ be a Hermitian metric on $X$. \vspace{1ex} (i)\, The metric $\omega$ is {\bf strongly Gauduchon (sG)} if and only if $\omega^{n-1}$ is {\bf $E_2\overline{E}_2$-closed}. \vspace{1ex} (ii)\, The metric $\omega$ is {\bf Hermitian-symplectic (H-S)} if and only if $\omega$ is {\bf $E_3\overline{E}_3$-closed}. If $n=3$, $\omega$ is {\bf Hermitian-symplectic (H-S)} if and only if $\omega$ is {\bf $E_2\overline{E}_2$-closed}. 
\end{Lem} \vspace{3ex} In our case, the consequence of Proposition \ref{Prop:H-S_sG} and of Lemmas \ref{Lem:Aeppli-class_Omega_omega} and \ref{Lem:sG-HS_E_2A} is the following \begin{Lem}\label{Lem:Omega_omega_same-E2A-class} Let $X$ be a compact complex manifold with $\mbox{dim}_\C X=3$. For any {\bf Aeppli cohomologous} Hermitian-symplectic metrics $\omega$ and $\omega_\eta = \omega + \partial\bar\eta + \bar\partial\eta$ on $X$, with $\eta\in C^\infty_{1,\,0}(X,\,\C)$, the corresponding sG metrics $\Omega_\omega$ and $\Omega_{\omega_\eta}$ represent the {\bf same $E_2$-Aeppli class}: $$\{\Omega_{\omega_\eta}\}_{E_{2,\,A}} = \{\Omega_\omega\}_{E_{2,\,A}}\in E_{2,\,A}^{2,\,2}(X).$$ \end{Lem} \noindent {\it Proof.} We know from Proposition \ref{Prop:H-S_sG} that $\Omega_\omega$ and $\Omega_{\omega_\eta}$ are sG metrics, so by (i) of Lemma \ref{Lem:sG-HS_E_2A} they represent $E_2$-Aeppli classes. Meanwhile, by Lemma \ref{Lem:Aeppli-class_Omega_omega}, $\Omega_\omega$ and $\Omega_{\omega_\eta}$ are Aeppli cohomologous, hence also $E_2$-Aeppli cohomologous. \hfill $\Box$ \vspace{3ex} We will now use the main notion introduced in [PSU20a]. \begin{Def}(Theorem and Definition 1.2 in [PSU20a])\label{Def:page-r-ddbar_def} Fix an arbitrary $r\in\N^\star$. An $n$-dimensional compact complex manifold $X$ is said to be a {\bf page-$(r-1)$-$\partial\bar\partial$-manifold} if $X$ has the {\bf $E_r$-Hodge Decomposition} property in the following sense. For every bidegree $(p,\,q)$, every class $\{\alpha^{p,\,q}\}_{E_r}\in E_r^{p,\,q}(X)$ can be represented by a {\bf $d$-closed $(p,\,q)$-form} and for every $k$, the linear map $$\bigoplus_{p+q=k}E_r^{p,\,q}(X)\ni\sum\limits_{p+q=k}\{\alpha^{p,\,q}\}_{E_r}\mapsto\bigg\{\sum\limits_{p+q=k}\alpha^{p,\,q}\bigg\}_{DR}\in H^k_{DR}(X,\,\C)$$ \noindent is {\bf well-defined} by means of $d$-closed pure-type representatives and {\bf bijective}. \end{Def} \vspace{3ex} We will also need the following result from [PSU20]. \begin{The}(Theorem 3.53 in [PSU20])\label{The:page_r-1_ddbar_prop-B_BC-A} Let $X$ be a compact complex manifold with $\mbox{dim}_\C X=n$. Fix an arbitrary integer $r\geq 1$. The following statements are {\bf equivalent}. \vspace{1ex} (a)\, $X$ is a {\bf page-$(r-1)$-$\partial\bar\partial$-manifold}. \vspace{1ex} (b)\, For all $p,q\in\{0,\dots , n\}$, the canonical linear maps $T_r^{p,\,q}:E_{r,\,BC}^{p,\,q}(X)\longrightarrow E_r^{p,\,q}(X)$ and $S_r^{p,\,q}:E_r^{p,\,q}(X)\longrightarrow E_{r,\,A}^{p,\,q}(X)$ induced by the identity are {\bf isomorphisms}. \end{The} \vspace{3ex} In our case, as a consequence of Theorem \ref{The:page_r-1_ddbar_prop-B_BC-A}, we get a {\it unique lift} $\mathfrak{c}_\omega\in E_{2,\,BC}^{2,\,2}(X)$ of $\{\Omega_\omega\}_{E_{2,\,A}}\in E_{2,\,A}^{2,\,2}(X)$ under the appropriate assumption on $X$. \begin{Cor}\label{Cor:E_2BC_lifts} Let $X$ be a {\bf page-$1$-$\partial\bar\partial$-manifold} with $\mbox{dim}_\C X=3$. For any {\bf Aeppli cohomologous} Hermitian-symplectic metrics $\omega$ and $\omega_\eta = \omega + \partial\bar\eta + \bar\partial\eta$ on $X$, with $\eta\in C^\infty_{1,\,0}(X,\,\C)$, there exists a unique {\bf $E_2$-Bott-Chern class} $\mathfrak{c}_\omega\in E_{2,\,BC}^{2,\,2}(X)$ such that $$(S_2^{2,\,2}\circ T_2^{2,\,2})(\mathfrak{c}_\omega) = \{\Omega_{\omega_\eta}\}_{E_{2,\,A}} = \{\Omega_\omega\}_{E_{2,\,A}}\in E_{2,\,A}^{2,\,2}(X),$$ where $\Omega_\omega$ and $\Omega_{\omega_\eta}$ are the sG metrics associated with $\omega$, resp. $\omega_\eta$.
In particular, the {\bf $E_2$-Bott-Chern class} $\mathfrak{c}_\omega\in E_{2,\,BC}^{2,\,2}(X)$ depends only on the {\bf $E_2$-Aeppli class} $\{\omega\}_{E_2,\,A}\in E_{2,\,A}^{1,\,1}(X)$. \end{Cor} \vspace{2ex} We can now state and prove the main result of this subsection. It will use the {\bf duality} between the $E_r$-Bott-Chern cohomology of any bidegree $(p,\,q)$ and the $E_r$-Aeppli cohomology of the complementary bidegree $(n-p,\,n-q)$ proved as Theorem 3.11 in [PSU20b]. In our case, $n=3$, $r=2$ and $(p,\,q)=(2,\,2)$. \begin{The}\label{The:cohom_A} Let $X$ be a {\bf page-$1$-$\partial\bar\partial$-manifold} with $\mbox{dim}_\C X=3$. Suppose there exists a \newline Hermitian-symplectic metric $\omega$ on $X$ whose {\bf $E_2$-torsion class vanishes} (i.e. $\{\rho^{0,\,2}_\omega\}_{E_2} = 0\in E_2^{0,\,2}(X)$). Then, the generalised volume $A=F(\omega) + \mbox{Vol}_\omega(X)$ of $\{\omega\}_A$ is given as the following intersection number in cohomology: \begin{equation}\label{eqn:cohom_A}A=\frac{1}{6}\,\mathfrak{c}_\omega.\{\omega\}_{E_2,\,A}.\end{equation} \end{The} \noindent {\it Proof.} $\bullet$ We will first construct a smooth $d$-closed $(2,\,2)$-form $\widetilde\Omega_\omega$ that represents the $E_2$-Bott-Chern class $\mathfrak{c}_\omega\in E_{2,\,BC}^{2,\,2}(X)$ in the most economical way possible. We will proceed in two stages that correspond to lifting the $E_2$-Aeppli class $\{\Omega_\omega\}_{E_2,\,A}\in E_{2,\,A}^{2,\,2}(X)$ to $E_2^{2,\,2}(X)$ under the isomorphism $S_2^{2,\,2}:E_2^{2,\,2}(X)\longrightarrow E_{2,\,A}^{2,\,2}(X)$ induced by the identity, respectively to lifting the resulting $E_2$-class in $E_2^{2,\,2}(X)$ to $E_{2,\,BC}^{2,\,2}(X)$ under the isomorphism $T_2^{2,\,2}:E_{2,\,BC}^{2,\,2}(X)\longrightarrow E_2^{2,\,2}(X)$ induced by the identity. \vspace{1ex} {\it Stage $1$.} To lift $\{\Omega_\omega\}_{E_2,\,A}\in E_{2,\,A}^{2,\,2}(X)$ to $E_2^{2,\,2}(X)$ under the isomorphism $S_2^{2,\,2}:E_2^{2,\,2}(X)\longrightarrow E_{2,\,A}^{2,\,2}(X)$, we need to find a $(2,\,2)$-form $\Gamma^{2,\,2}$ such that $\Gamma^{2,\,2}\in\mbox{Im}\,\partial + \mbox{Im}\,\bar\partial$ (because we need $\Omega_\omega + \Gamma^{2,\,2}$ to represent the same $E_{2,\,A}$-class as the original $\Omega_\omega$) and such that $$\bar\partial(\Omega_\omega + \Gamma^{2,\,2}) = 0 \hspace{3ex} \mbox{and} \hspace{3ex} \partial(\Omega_\omega + \Gamma^{2,\,2})\in\mbox{Im}\,\bar\partial,$$ (because we need $\Omega_\omega + \Gamma^{2,\,2}$ to represent an $E_2$-class). The last two conditions are equivalent to \begin{eqnarray}\label{eqn:cohom_A_proof_1}\bar\partial\Gamma^{2,\,2} = -\bar\partial\Omega_\omega \hspace{3ex} \mbox{and} \hspace{3ex} \partial\Gamma^{2,\,2}\in\mbox{Im}\,\bar\partial,\end{eqnarray} because, for the last condition, we already have $\partial\Omega_\omega\in\mbox{Im}\,\bar\partial$ by the sG property of $\Omega_\omega$. If we denote by ${\cal Z}_{2\bar{2}}^{2,\,2}$ the space of smooth $E_2\overline{E}_2$-closed $(2,\,2)$-forms on $X$, we have $\Omega_\omega\in{\cal Z}_{2\bar{2}}^{2,\,2}$, hence $-\bar\partial\Omega_\omega\in\bar\partial({\cal Z}_{2\bar{2}}^{2,\,2})$. On the other hand, part (ii) of Lemma 3.52 in [PSU20] ensures that $\bar\partial({\cal Z}_{2\bar{2}}^{2,\,2})\subset\mbox{Im}\,(\partial\bar\partial)$ because $X$ is a {\it page-$1$-$\partial\bar\partial$-manifold}. (Actually, this inclusion is equivalent to the surjectivity of the map $S_2^{2,\,2}$.) 
We conclude that $-\bar\partial\Omega_\omega\in\mbox{Im}\,(\partial\bar\partial)$, so the equation \begin{equation}\label{eqn:u12_def}\partial\bar\partial u^{1,\,2} = \bar\partial\Omega_\omega\end{equation} admits solutions $u^{1,\,2}\in C^\infty_{1,\,2}(X,\,\C)$. Let $u^{1,\,2}_\omega$ be the minimal $L^2_\omega$-norm such solution and put $\Gamma^{2,\,2}_\omega:=\partial u^{1,\,2}_\omega$. Thus, $\Gamma^{2,\,2}_\omega=\partial u^{1,\,2}_\omega$ satisfies conditions (\ref{eqn:cohom_A_proof_1}) and $\Gamma^{2,\,2}_\omega\in\mbox{Im}\,\partial\subset\mbox{Im}\,\partial + \mbox{Im}\,\bar\partial$. So, we have got the minimal lift $\{\Omega_\omega + \partial u^{1,\,2}_\omega\}_{E_2}$ of $\{\Omega_\omega\}_{E_2,\,A}\in E_{2,\,A}^{2,\,2}(X)$ to $E_2^{2,\,2}(X)$, i.e. $$S_2^{2,\,2}(\{\Omega_\omega + \partial u^{1,\,2}_\omega\}_{E_2}) = \{\Omega_\omega\}_{E_2,\,A}.$$ \vspace{1ex} {\it Stage $2$.} To lift $\{\Omega_\omega + \partial u^{1,\,2}_\omega\}_{E_2}\in E_2^{2,\,2}(X)$ to $E_{2,\,BC}^{2,\,2}(X)$ under the isomorphism \newline $T_2^{2,\,2}:E_{2,\,BC}^{2,\,2}(X)\longrightarrow E_2^{2,\,2}(X)$, we need to find a $(2,\,2)$-form $V^{2,\,2}$ such that $V^{2,\,2}\in\partial(\ker\bar\partial) + \mbox{Im}\,\bar\partial$ (because we need $\Omega_\omega + \partial u^{1,\,2}_\omega + V^{2,\,2}$ to represent the same $E_2$-class as $\Omega_\omega + \partial u^{1,\,2}_\omega$) and such that $$\partial(\Omega_\omega + \partial u^{1,\,2}_\omega + V^{2,\,2}) = 0 \hspace{3ex} \mbox{and} \hspace{3ex} \bar\partial(\Omega_\omega + \partial u^{1,\,2}_\omega + V^{2,\,2}) = 0,$$ (because we need $\Omega_\omega + \partial u^{1,\,2}_\omega + V^{2,\,2}$ to represent an $E_{2,\,BC}$-class). The last two conditions are equivalent to \begin{eqnarray}\label{eqn:cohom_A_proof_2}\partial V^{2,\,2} = -\partial(\Omega_\omega + \partial u^{1,\,2}_\omega) \hspace{3ex} \mbox{and} \hspace{3ex} \bar\partial V^{2,\,2} = 0,\end{eqnarray} because, for the last condition, we already have $\bar\partial(\Omega_\omega + \partial u^{1,\,2}_\omega) = 0$. Now, $\Omega_\omega + \partial u^{1,\,2}_\omega\in{\cal Z}_{2\bar{2}}^{2,\,2}$, hence $-\partial(\Omega_\omega + \partial u^{1,\,2}_\omega)\in\partial({\cal Z}_{2\bar{2}}^{2,\,2})\subset\mbox{Im}\,(\partial\bar\partial)$, the last inclusion being a consequence of part (ii) of Lemma 3.52 in [PSU20] and of $X$ being a {\it page-$1$-$\partial\bar\partial$-manifold}. (Actually, this inclusion is equivalent to the surjectivity of the map $T_2^{2,\,2}$.) We conclude that $-\partial(\Omega_\omega + \partial u^{1,\,2}_\omega)\in\mbox{Im}\,(\partial\bar\partial)$, so the equation \begin{equation}\label{eqn:u21_def}\partial\bar\partial u^{2,\,1} = -\partial(\Omega_\omega + \partial u^{1,\,2}_\omega)\end{equation} admits solutions $u^{2,\,1}\in C^\infty_{2,\,1}(X,\,\C)$. Let $u^{2,\,1}_\omega$ be the minimal $L^2_\omega$-norm such solution and put $V^{2,\,2}_\omega:=\bar\partial u^{2,\,1}_\omega$. Clearly, $u^{2,\,1}_\omega = \overline{u^{1,\,2}_\omega}$ since the equations whose minimal solutions are $u^{2,\,1}_\omega$ and $u^{1,\,2}_\omega$ are conjugated to each other. Thus, $V^{2,\,2}_\omega=\bar\partial u^{2,\,1}_\omega$ satisfies conditions (\ref{eqn:cohom_A_proof_2}) and $V^{2,\,2}_\omega\in\partial(\ker\bar\partial) + \mbox{Im}\,\bar\partial$. 
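As a consistency check, note that equations (\ref{eqn:u12_def}) and (\ref{eqn:u21_def}) indeed force the completed form to be $d$-closed: $$\bar\partial(\Omega_\omega + \partial u^{1,\,2}_\omega + \bar\partial u^{2,\,1}_\omega) = \bar\partial\Omega_\omega - \partial\bar\partial u^{1,\,2}_\omega = 0 \hspace{3ex} \mbox{and} \hspace{3ex} \partial(\Omega_\omega + \partial u^{1,\,2}_\omega + \bar\partial u^{2,\,1}_\omega) = \partial(\Omega_\omega + \partial u^{1,\,2}_\omega) + \partial\bar\partial u^{2,\,1}_\omega = 0,$$ \noindent where we used $\bar\partial\partial u^{1,\,2}_\omega = -\partial\bar\partial u^{1,\,2}_\omega = -\bar\partial\Omega_\omega$ (by (\ref{eqn:u12_def})) in the first identity and $\partial\bar\partial u^{2,\,1}_\omega = -\partial(\Omega_\omega + \partial u^{1,\,2}_\omega)$ (by (\ref{eqn:u21_def})) in the second.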
\vspace{2ex} The upshot of the above construction is that the form $$\widetilde\Omega_\omega:=\Omega_\omega + \partial u^{1,\,2}_\omega + \bar\partial u^{2,\,1}_\omega\in C^\infty_{2,\,2}(X,\,\C)$$ is the minimal completion of $\Omega_\omega$ to a {\it $d$-closed} pure-type form of bidegree $(2,\,2)$. Moreover, the class $\{\widetilde\Omega_\omega\}_{E_{2,\,BC}}\in E_{2,\,BC}^{2,\,2}(X)$ has the property that $$(S_2^{2,\,2}\circ T_2^{2,\,2})(\{\widetilde\Omega_\omega\}_{E_{2,\, BC}}) = \{\Omega_\omega\}_{E_2,\,A}.$$ Hence, $\{\widetilde\Omega_\omega\}_{E_{2,\,BC}} = \mathfrak{c}_\omega$ since the map $S_2^{2,\,2}\circ T_2^{2,\,2}:E_{2,\, BC}^{2,\,2}(X)\longrightarrow E_{2,\, A}^{2,\,2}(X)$ is bijective (thanks to the {\it page-$1$-$\partial\bar\partial$-assumption} on $X$). \vspace{2ex} $\bullet$ We will now use the representative $\widetilde\Omega_\omega$ of the class $\mathfrak{c}_\omega$ to relate the intersection number in (\ref{eqn:cohom_A}) to the generalised volume of $\{\omega\}_A$. We have: \begin{eqnarray*}\mathfrak{c}_\omega.\{\omega\}_{E_2,\,A} = \int\limits_X\widetilde\Omega_\omega\wedge\omega = \int\limits_X\Omega_\omega\wedge\omega + \int\limits_X\partial u^{1,\,2}_\omega\wedge\omega + \int\limits_X\bar\partial u^{2,\,1}_\omega\wedge\omega.\end{eqnarray*} Since $\rho^{2,\,0}_\omega = \partial\xi^{1,\,0}_\omega$ thanks to the hypothesis $\{\rho^{0,\,2}_\omega\}_{E_2} = 0\in E_2^{0,\,2}(X)$ (see Corollary \ref{Cor:necessary-cond_K} and the minimal choice (\ref{eqn:xi_min-sol_formula}) of $\xi^{0,\,1}_\omega=\overline{\xi^{1,\,0}_\omega}$), we get: \begin{eqnarray*}\int\limits_X\partial u^{1,\,2}_\omega\wedge\omega & = & \int\limits_Xu^{1,\,2}_\omega\wedge\partial\omega = -\int\limits_Xu^{1,\,2}_\omega\wedge\bar\partial\rho^{2,\,0}_\omega = \int\limits_Xu^{1,\,2}_\omega\wedge\partial\bar\partial\xi^{1,\,0}_\omega \\ & = & \int\limits_X\partial\bar\partial u^{1,\,2}_\omega\wedge\xi^{1,\,0}_\omega \stackrel{(a)}{=} \int\limits_X\bar\partial\Omega_\omega\wedge\xi^{1,\,0}_\omega \stackrel{(b)}{=} -2\,\int\limits_X\partial(\rho^{0,\,2}_\omega\wedge\omega)\wedge\xi^{1,\,0}_\omega \\ & = & 2\,\int\limits_X\rho^{0,\,2}_\omega\wedge\omega\wedge\partial\xi^{1,\,0}_\omega = 2\,\int\limits_X\rho^{2,\,0}_\omega\wedge\rho^{0,\,2}_\omega\wedge\omega = 2\,||\rho^{2,\,0}_\omega||^2_\omega = 2F(\omega),\end{eqnarray*} where (a) follows from (\ref{eqn:u12_def}) and (b) follows from the formula \begin{equation}\label{eqn:dbar_Omega-omega_formula}\bar\partial\Omega_\omega = -2\,\partial(\rho^{0,\,2}_\omega\wedge\omega)\end{equation} which in turn follows at once from (\ref{eqn:Omega-omega_def}). By conjugation, we also have $\int_X\bar\partial u^{2,\,1}_\omega\wedge\omega = 2F(\omega)$. Putting the various pieces of information together, we get: \begin{eqnarray*}\mathfrak{c}_\omega.\{\omega\}_{E_2,\,A} = \int\limits_X\Omega_\omega\wedge\omega + 4F(\omega) = 4\,\mbox{Vol}_\omega(X) + 2A + 4F(\omega) = 6A,\end{eqnarray*} where the second identity follows from (\ref{eqn:Omega_omega-omega}). The proof of Theorem \ref{The:cohom_A} is complete. \hfill $\Box$ \subsection{The second cohomological interpretation of the generalised volume}\label{subsection:minimal-completion} We will now work in the general case (i.e. without the extra assumptions made in Theorem \ref{The:cohom_A}). The result will show, yet again, that the generalised volume $A = A_{\{\omega\}_A}>0$ of a Hermitian-symplectic Aeppli class $\{\omega\}_A$ is a natural analogue in this more general context of the volume of a K\"ahler class. 
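Note that when $\omega$ is K\"ahler, its torsion forms vanish, so the minimal completion defined below is $\widetilde\omega = \omega$ and $F(\omega)=0$; the generalised volume then reduces to the usual volume $A_{\{\omega\}_A} = \int_X\omega^3/3!$ of the K\"ahler class.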
\begin{Def}\label{Def:minimal-completion} Let $X$ be a compact complex manifold with $\mbox{dim}_\C X=3$. For any Hermitian-symplectic metric $\omega$ on $X$, the $d$-closed real $2$-form $$\widetilde\omega = \rho_\omega^{2,\,0} + \omega + \rho_\omega^{0,\,2}$$ is called the {\bf minimal completion} of $\omega$, where $\rho_\omega^{2,\,0}$, resp. $\rho_\omega^{0,\,2}$, is the $(2,\,0)$-torsion form, resp. the $(0,\,2)$-torsion form, of $\omega$.\end{Def} We will now note the following consequence of Corollary \ref{Cor:energy_M-A_mass}. It gives a new cohomological interpretation of the generalised volume $A = A_{\{\omega\}_A}>0$. \begin{Prop}\label{Prop:same-DR-class} Let $X$ be a compact complex Hermitian-symplectic manifold of dimension $n=3$. \vspace{1ex} (a)\, For any Hermitian-symplectic metric $\omega$ on $X$, its minimal completion $2$-form $\widetilde\omega$ has \newline the property: \begin{equation}\label{eqn:min-comp_integral}\int\limits_X\frac{\widetilde\omega^3}{3!} = \mbox{Vol}_\omega(X) + F(\omega) = A_{\{\omega\}_A}.\end{equation} \vspace{1ex} (b)\, For any Aeppli-cohomologous Hermitian-symplectic metrics $\omega$ and $\omega_\eta$ \begin{equation}\omega_\eta = \omega + \partial\bar\eta + \bar\partial\eta >0 \hspace{3ex} (\mbox{where} \hspace{1ex} \eta\in C^{\infty}_{1,\,0}(X,\,\C)),\end{equation} \noindent the respective minimal completion $2$-forms $\widetilde\omega_\eta$ and $\widetilde\omega$ lie in the same De Rham cohomology class. \vspace{1ex} Thus, $A_{\{\omega\}_A} = \{\widetilde\omega\}_{DR}^3/3!$. \end{Prop} \noindent {\it Proof.} (a)\, Using (\ref{eqn:Omega_omega-omega}) for identity (a) below and the above notation, we get: \begin{eqnarray*}\int\limits_X \widetilde\omega^3 & = & \int\limits_X\widetilde\omega^2\wedge(\rho_\omega^{2,\,0} + \omega + \rho_\omega^{0,\,2}) = \int\limits_X\Omega_\omega\wedge(\rho_\omega^{2,\,0} + \omega + \rho_\omega^{0,\,2})\\ &+& 2\,\int\limits_X\rho_\omega^{2,\,0}\wedge\omega\wedge\rho_\omega^{0,\,2} + 2\,\int\limits_X\rho_\omega^{0,\,2}\wedge\omega\wedge\rho_\omega^{2,\,0} \\ & = & \int\limits_X\Omega_\omega\wedge\omega + 4 F(\omega) \stackrel{(a)}{=} 4\,\mbox{Vol}_\omega(X) + 2\,\mbox{Vol}_\omega(X) + 2F(\omega) + 4 F(\omega) = 6A.\end{eqnarray*} \vspace{1ex} (b)\, We know from Corollary \ref{Cor:energy_M-A_mass} that the $(2,\,0)$-torsion forms of $\omega_\eta$ and $\omega$ are related by $\rho^{2,\,0}_\eta = \rho^{2,\,0}_\omega + \partial\eta$. We get: \begin{eqnarray*}\widetilde\omega_\eta = \rho^{2,\,0}_\eta + \omega_\eta + \rho^{0,\,2}_\eta = \widetilde\omega + d(\eta + \bar\eta).\end{eqnarray*} This proves the contention. \hfill $\Box$ \vspace{3ex} \noindent {\bf References.} \\ \vspace{1ex} \noindent [Che87]\, P. Cherrier --- {\it \'Equations de Monge-Amp\`ere sur les vari\'et\'es hermitiennes compactes} --- Bull. Sc. Math. (2) {\bf 111} (1987), 343-385. \vspace{1ex} \noindent [Dem97]\, J.-P. Demailly --- {\it Complex Analytic and Algebraic Geometry} --- http://www-fourier.ujf-grenoble.fr/~demailly/books.html \vspace{1ex} \noindent [Don06]\, S. K. Donaldson --- {\it Two-forms on Four-manifolds and Elliptic Equations} --- Inspired by S. S. Chern, 153–172, Nankai Tracts Math., 11, World Sci. Publ., Hackensack, NJ, 2006. \vspace{1ex} \noindent [EFV12]\, N. Enrietti, A. Fino, L. Vezzoni --- {\it Tamed Symplectic Forms and Strong K\"ahler with Torsion Metrics} --- J. Symplectic Geom. {\bf 10}, No. 2 (2012) 203-223. \vspace{1ex} \noindent [Gau77a]\, P. Gauduchon --- {\it Le th\'eor\`eme de l'excentricit\'e nulle} --- C.R. Acad. Sc.
Paris, S\'erie A, t. {\bf 285} (1977), 387-390. \vspace{1ex} \noindent [Gau77b]\, P. Gauduchon --- {\it Fibr\'es hermitiens \`a endomorphisme de Ricci non n\'egatif} --- Bull. Soc. Math. France {\bf 105} (1977) 113-140. \vspace{1ex} \noindent [GL09]\, B. Guan, Q. Li --- {\it Complex Monge-Amp\`ere Equations on Hermitian Manifolds} --- arXiv e-print DG 0906.3548v1. \vspace{1ex} \noindent [IP13]\, S. Ivanov, G. Papadopoulos --- {\it Vanishing Theorems on $(l/k)$-strong K\"ahler Manifolds with Torsion} --- Adv. Math. {\bf 237} (2013) 147-164. \vspace{1ex} \noindent [HL83]\, R. Harvey, H. B. Lawson --- {\it An intrinsic characterization of K\"ahler manifolds} --- Invent. Math. {\bf 74} (1983), 169-198. \vspace{1ex} \noindent [KS60]\, K. Kodaira, D.C. Spencer --- {\it On Deformations of Complex Analytic Structures, III. Stability Theorems for Complex Structures} --- Ann. of Math. {\bf 71}, no. 1 (1960), 43-76. \vspace{1ex} \noindent [Lam99]\, A. Lamari --- {\it Courants k\"ahl\'eriens et surfaces compactes} --- Ann. Inst. Fourier {\bf 49}, no. 1 (1999), 263-285. \vspace{1ex} \noindent [LZ09]\, T.-J. Li, W. Zhang --- {\it Comparing Tamed and Compatible Symplectic Cones and Cohomological Properties of Almost Complex Manifolds} --- Comm. Anal. Geom. {\bf 17}, no. 4 (2009), 651–683. \vspace{1ex} \noindent [Mic83]\, M. L. Michelsohn --- {\it On the Existence of Special Metrics in Complex Geometry} --- Acta Math. {\bf 143} (1983) 261-295. \vspace{1ex} \noindent [Pop13]\, D. Popovici --- {\it Deformation Limits of Projective Manifolds: Hodge Numbers and Strongly Gauduchon Metrics} --- Invent. Math. {\bf 194} (2013), 515-534. \vspace{1ex} \noindent [Pop15]\, D. Popovici --- {\it Aeppli Cohomology Classes Associated with Gauduchon Metrics on Compact Complex Manifolds} --- Bull. Soc. Math. France {\bf 143}, no. 3 (2015), 1-37. \vspace{1ex} \noindent [Pop16]\, D. Popovici --- {\it Degeneration at $E_2$ of Certain Spectral Sequences} --- Internat. J. of Math. {\bf 27}, no. 13 (2016), DOI: 10.1142/S0129167X16501111. \vspace{1ex} \noindent [PSU20]\, D. Popovici, J. Stelzig, L. Ugarte --- {\it Some Aspects of Higher-Page Non-K\"ahler Hodge Theory} --- arXiv e-print AG 2001.02313v1. \vspace{1ex} \noindent [PSU20a]\, D. Popovici, J. Stelzig, L. Ugarte --- {\it Higher-Page Hodge Theory of Compact Complex Manifolds} --- arXiv e-print AG 2001.02313v2. \vspace{1ex} \noindent [PSU20b]\, D. Popovici, J. Stelzig, L. Ugarte --- {\it Higher-Page Bott-Chern and Aeppli Cohomologies and Applications} --- arXiv e-print AG 2007.03320v1. \vspace{1ex} \noindent [Sch07]\, M. Schweitzer --- {\it Autour de la cohomologie de Bott-Chern} --- arXiv e-print math.AG/0709.3528v1. \vspace{1ex} \noindent [ST10]\, J. Streets, G. Tian --- {\it A Parabolic Flow of Pluriclosed Metrics} --- Int. Math. Res. Notices, {\bf 16} (2010), 3101-3133. \vspace{1ex} \noindent [Sul76]\, D. Sullivan --- {\it Cycles for the dynamical study of foliated manifolds and complex manifolds} --- Invent. Math. {\bf 36} (1976), 225-255. \vspace{1ex} \noindent [TW10]\, V. Tosatti, B. Weinkove --- {\it The Complex Monge-Amp\`ere Equation on Compact Hermitian Manifolds} --- J. Amer. Math. Soc. {\bf 23} (2010), no. 4, 1187-1195. \vspace{1ex} \noindent [Ver14]\, M. Verbitsky --- {\it Rational Curves and Special Metrics on Twistor Spaces} --- Geometry and Topology {\bf 18} (2014), 897–909. \vspace{1ex} \noindent [Voi02]\, C. Voisin --- {\it Hodge Theory and Complex Algebraic Geometry. I.} --- Cambridge Studies in Advanced Mathematics, 76, Cambridge University Press, Cambridge, 2002.
\vspace{6ex} \noindent Department of Mathematics and Computer Science \hfill Institut de Math\'ematiques de Toulouse, \noindent Jagiellonian University \hfill Universit\'e Paul Sabatier, \noindent 30-409 Krak\'ow, Ul. Lojasiewicza 6, Poland \hfill 118 route de Narbonne, 31062 Toulouse, France \noindent Email: Slawomir.Dinew@im.uj.edu.pl \hfill Email: popovici@math.univ-toulouse.fr \end{document}
\section{Introduction} When can the set of homotopy classes of maps between spaces $X$ and $Y$ be computed? That is, when can this (possibly infinite) set be furnished with a finitely describable and computable structure? A reasonable first requirement is that $X$ and $Y$ should be finite complexes; this ensures that at least the spaces can be represented as computational objects. Moreover, the question of whether this set has more than one element is undecidable for $X=S^1$, as shown by Novikov as early as 1955\footnote{This is the triviality problem for group presentations, translated into topological language. This work was extended by Adian and others to show that many other properties of nonabelian group presentations are likewise undecidable.}. Therefore it is also reasonable to require the fundamental group not to play a role; in the present work, $Y$ is always assumed to be simply connected.\footnote{The results can plausibly be extended to nilpotent spaces.} We answer this question with the following choice of quantifiers: for what $Y$ and $n$ can the set of homotopy classes $[X,Y]$ be computed for \emph{every} $n$-dimensional $X$? Significant partial results in this direction were obtained by E.~H.~Brown \cite{Brown} and much more recently by \v{C}adek et al.~\cite{CKMSVW,CKMVW2,CKMVW} and Vok\v r\'inek \cite{Vok}. The goal of the present work is to push their program to its logical limit. To state the precise result, we need to sketch the notion of an H-space, which is defined precisely in \S\ref{S:H}. Essentially, an H-space is a space equipped with a binary operation which can be more or less ``group-like''; if it has good enough properties, this allows us to equip sets of mapping classes to the H-space with a group structure. The \emph{cohomological dimension} $\cd(X,A)$ of a simplicial or CW pair $(X,A)$ is the least integer $d$ such that for all $n>d$ and every coefficient group $\pi$, $H^n(X,A;\pi)=0$. \begin{thmA} \label{main} Let $Y$ be a simply connected simplicial complex of finite type and $d \geq 2$. Then the following are equivalent: \begin{enumerate}[(i)] \item For any simplicial pair $(X,A)$ of cohomological dimension $d+1$ and simplicial map $f:A \to Y$, the existence of a continuous extension of $f$ to $X$ is decidable. \item $Y$ has the rational homotopy type of an H-space through dimension $d$. That is, there is a map from $Y$ to an H-space (or, equivalently, to a product of Eilenberg--MacLane spaces) which induces isomorphisms on $\pi_n \otimes \mathbb{Q}$ for $n \leq d$. \label{isH} \end{enumerate} Moreover, there is an algorithm which, given a simply connected simplicial complex $Y$, a simplicial pair $(X,A)$ of finite complexes of cohomological dimension $d$ and a simplicial map $f:A \to Y$, \begin{enumerate} \item Determines whether the equivalent conditions are satisfied; \item If they are, outputs the set of homotopy classes rel $A$ of extensions $[X,Y]^f$ in the format of a (perhaps empty) set on which a finitely generated abelian group acts virtually freely and faithfully (that is, with a finite number of orbits each of which has finite stabilizer). \end{enumerate} \end{thmA} We give a couple of remarks about the statement. First of all, it is undecidable whether $Y$ is simply connected; therefore, when given a non-simply connected input, the algorithm cannot detect this and returns nonsense, like previous algorithms of this type.
Secondly, the difference between $d+1$ in the first part of the theorem and $d$ in the second is important: if $\cd(X,A)=d+1$, then we can decide whether $[X,Y]^f$ is nonempty, but there may not be a group with a natural virtually free and faithful action on it. For example, consider $[S^1 \times S^2,S^2]$. This set can be naturally equipped with the structure \[[S^1 \times S^2,S^2] \cong \bigsqcup_{r \in \mathbb{Z}} \mathbb{Z}/2r\mathbb{Z};\] as such, it has a surjection from $\mathbb{Z}^2$, but not an action of it. \subsection{Examples} The new computability result encompasses several previous results, as well as new important corollaries. Here are some examples of spaces which satisfy condition \eqrefb{isH} of Theorem \ref{main}: \begin{enumerate}[(a)] \item Any simply connected space with finite homology groups (or, equivalently, finite homotopy groups) in every dimension is rationally equivalent to a point, which is an H-space. The computability of $[X,Y]$ when $Y$ is of this form was already established by Brown \cite{Brown}. \item Any $d$-connected space is rationally an H-space through dimension $n=2d$. Thus we recover the result of \v{C}adek et al.~\cite{CKMVW2} that $[X,Y]^f$ is computable whenever $X$ is $2d$-dimensional and $Y$ is $d$-connected. This implies that many ``stable'' homotopical objects are computable. One example is the group of oriented cobordism classes of $n$-manifolds, which is isomorphic to the set of maps from $S^n$ to the Thom space of the tautological bundle over $\Gr_n(\mathbb{R}^{2n+1})$. \item The sphere $S^n$ for $n$ odd is rationally equivalent to the Eilenberg--MacLane space $K(\mathbb{Z},n)$. Therefore $[X,S^n]^f$ is computable for any finite simplicial pair $(X,A)$ and map $f:A \to S^n$; this is the main result of Vok\v r\'inek's paper \cite{Vok}. \item Any Lie group or simplicial group $Y$ is an H-space, so if $Y$ is simply connected then $[X,Y]^f$ is computable for any $X$, $A$, and $f$. \item Classifying spaces of connected Lie groups also have the rational homotopy type of an H-space \cite[Prop.~15.15]{FHT}. Therefore we have: \begin{cor} Let $G$ be a connected Lie group. Then: \begin{enumerate}[(i)] \item The set of isomorphism classes of principal $G$-bundles over a finite complex $X$ is computable. \item Let $(X,A)$ be a finite CW pair. Then it is decidable whether a given principal $G$-bundle over $A$ extends over $X$. \end{enumerate} \end{cor} \noindent In particular, given a representation $G \to GL_n(\mathbb{R})$, we can understand the set of vector bundles with a $G$-structure. This includes real oriented, complex, and symplectic bundles, as well as spin and metaplectic structures on bundles. \item More generally, some classifying spaces of topological monoids have the rational homotopy type of an H-space. This includes the classifying space $BG_n=\operatorname{BAut}(S^n)$ for $S^n$-fibrations \cite[Appendix 1]{Milnor} \cite{Smith}; therefore, the set of fibrations $S^n \to E \to X$ over a finite complex $X$ up to fiberwise homotopy equivalence is computable. \end{enumerate} Conversely, most sufficiently complicated simply connected spaces do not satisfy condition \eqref{isH}. The main result of \cite{CKMVW} shows that the extension problem is undecidable for even-dimensional spheres, which are the simplest example. Other examples include complex projective spaces and most Grassmannians and Stiefel manifolds. \subsection{Proof ideas} The proof of the main theorem splits naturally into two pieces. 
Suppose that $Y$ has the rational homotopy type of an H-space through dimension $d$, but not through dimension $d+1$. We must show that the extension question is undecidable for pairs of cohomological dimension $d+2$. We must also provide an algorithm which computes $[X,Y]^f$ if $\cd(X,A) \leq d$ and decides whether $[X,Y]^f$ is nonempty if $\cd(X,A)=d+1$. Both of these build on work of \v Cadek, Kr\v c\'al, Matou\v sek, Vok\v r\'inek, and Wagner, in \cite{CKMVW} and \cite{CKMVW2} respectively. To show undecidability of the extension problem for a given $Y$, we reduce a form of Hilbert's Tenth Problem to it. Recall that Hilbert asked for an algorithm to determine whether a system of Diophantine equations has a solution. Work of Davis, Putnam, Robinson, and Matiyasevich showed that no such algorithm exists. It turns out that the problem is still undecidable for very restricted classes of systems of quadratic equations; this was used in \cite{CKMVW} to show that the extension problem for maps to $S^{2n}$ is undecidable. We generalize their work: extension problems with target a given $Y$ are shown to encode systems of Diophantine equations whose terms are values of a fixed bilinear form (or sequence of forms), depending on $Y$, evaluated on vectors of variables. We show that Hilbert's Tenth Problem restricted to any such subtype is undecidable. To provide an algorithm, we use the rational H-space structure of the $d$th Postnikov stage $Y_d$ of $Y$. In this case, we can build an H-space $H$ of finite type together with rational equivalences $$H \to Y_d \to H$$ as well as an ``H-space action'' of $H$ on $Y_d$, that is, a map $\act:H \times Y_d \to Y_d$ which satisfies various compatibility properties. These ensure that the set $[X/A,H]$ (where $A$ is mapped to the basepoint) acts via composition with $\act$ on $[X,Y_d]^f$. When $\cd(X,A) \leq d$, the obvious map $[X,Y]^f \to [X,Y_d]^f$ is a bijection; when $\cd(X,A)=d+1$, this map is a surjection. This gives the result. \subsection{Computational complexity} Unlike \v Cadek et al.~\cite{CKMVW2,CKV}, whose algorithms are polynomial for fixed $d$, and like Vok\v r\'inek \cite{Vok}, we do not give any kind of complexity bound on the run time of the algorithm which computes $[X,Y]^f$. In fact, there are several steps in which the procedure is to iterate until we find a number that works, with no a priori bound on the size of the number, although it is likely possible to bound it in terms of dimension and other parameters such as the cardinality of the torsion subgroups in the homology of $Y$. There is much space to both optimize the algorithm and discover bounds on the run time. \subsection{The fiberwise case} In a paper of \v Cadek, Kr\v c\'al, and Vok\v r\'inek \cite{CKV}, the results of \cite{CKMVW2} are extended to the \emph{fiberwise} case, that is, to computing the set of homotopy classes of lifting-extensions completing the diagram \begin{equation} \label{axyb} \begin{gathered} \xymatrix{ A \ar[r]^f \ar@{^(->}[d]_i & Y \ar@{->>}[d]^p \\ X \ar@{-->}[ru] \ar[r]^g & B, } \end{gathered} \end{equation} where $X$ is $2d$-dimensional and the fiber of $Y \xrightarrow{p} B$ is $d$-connected. Vok\v r\'inek \cite{Vok} also remarks that his results for odd-dimensional spheres extend to the fiberwise case. Is there a corresponding fiberwise generalization for the results of this paper? The na\"\i ve hypothesis would be that $[X,Y]^f_p$ is computable whenever the fiber of $Y \xrightarrow{p} B$ is a rational H-space through dimension $n$.
This is false; as demonstrated by the following example, rational homotopy obstructions may still crop up in the interaction between base and fiber. \begin{ex} Let $B=S^6 \times S^2$ and $Y$ be the total space of the fibration \[S^7 \to Y \xrightarrow{p_0} B \times (S^3)^2\] whose Euler class (a.k.a.\ the $k$-invariant of the corresponding $K(\mathbb{Z},7)$-bundle) is \[[S^6 \times S^2]+[(S^3)^2 \times S^2] \in H^8(B \times(S^3)^2).\] Then the fiber of $p=\pi_1 \circ p_0:Y \to B$ is the H-space $(S^3)^2 \times S^7$, but the intermediate $k$-invariant given above has a term which is quadratic in the previous part of the fiber. Given a system of $s$ polynomial equations each of the form $$\sum_{1 \leq i<j \leq r} a_{ij}^{(k)}(x_iy_j-x_jy_i)=b_k,$$ with variables $x_1,\ldots,x_r,y_1,\ldots,y_r$ and coefficients $b_k$ and $a_{ij}^{(k)}$, we form a space $X'$ by taking $\bigvee_r S^3$ and attaching $s$ $6$-cells, the $k$th one via an attaching map whose homotopy class is $$\sum_{1 \leq i<j \leq r} a_{ij}^{(k)}[\id_i,\id_j],$$ where $\id_i$ is the inclusion map of the $i$th $3$-sphere. We fix a map $f':X' \to S^6$ which collapses the $3$-cells and restricts to a map of degree $-b_k$ on the $k$th $6$-cell. This induces a map $f=f' \times \id$ from $X=X' \times S^2$ to $B$. A lift of $f$ to $B \times (S^3)^2$ corresponds to an assignment of the variables $x_i$ and $y_i$. The existence of a further lift to $Y$ is then equivalent to whether this assignment is a solution to the system of equations above. Since the existence of such a solution is in general undecidable by \cite[Lemma 2.1]{CKMVW}, so is the existence of a lift of $f$ through $p$. \end{ex} The correct fiberwise statement should relate to rational fiberwise H-spaces, as discussed for example in \cite{LupS}. However, some technical difficulties have thus far prevented the author from obtaining such a result. \subsection{Structure of the paper} I have tried to make this paper readable to any topologist as well as anyone who is familiar with the work of \v Cadek et al. Thus \S\ref{S:RHT} and \ref{S:H} attempt to introduce all the necessary algebraic topology background which is not used in \v Cadek et al.'s papers: a bit of rational homotopy theory and some results about H-spaces. For the benefit of topologists, I have tried to separate the ideas that go into constructing a structure on mapping class sets from those required to compute this structure. The construction of the group and action in Theorem \ref{main} is discussed in \S\ref{S:XYA}. In \S\ref{S:comp}, we introduce previous results in computational homotopy theory from \cite{CKMVW2,CKV,FiVo}, and in \S\ref{S:XYAcomp} we use them to compute the structure we built earlier. Finally, in \S\ref{S:H10} and \ref{S:undec}, we prove the negative direction of Theorem \ref{main}. \subsection*{Acknowledgements} I would like to thank Shmuel Weinberger for explaining some facts about H-spaces, and Marek Filakovsk\'y, Luk\'a\v s Vok\v r\'inek, and Uli Wagner for other useful conversations and encouragement. I was partially supported by NSF grant DMS-2001042. \section{Rational homotopy theory} \label{S:RHT} Rational homotopy theory is a powerful algebraicization of the topology of simply connected topological spaces first introduced by Quillen \cite{Qui} and Sullivan \cite{SulLong}. The subject is well-developed, and the texts \cite{GrMo} and \cite{FHT} are recommended as a comprehensive reference. 
This paper requires only a very small portion of the considerable machinery that has been developed, and this short introduction should suffice for the reader who is assumed to be familiar with Postnikov systems and other constructs of basic algebraic topology. The key topological idea is the construction of rationalized spaces: to any simply connected CW complex $X$ one can functorially (at least up to homotopy) associate a space $X_{(0)}$ whose homology (equivalently, homotopy) groups are $\mathbb{Q}$-vector spaces.\footnote{It's worth pointing out that this fits into a larger family of \emph{localizations} of spaces, another of which is used in the proof of Lemma \ref{lem:torsion}.} There are several ways of constructing such a rationalization, but the most relevant to us is by induction up the Postnikov tower: the rationalization of a point is a point, and then given a Postnikov stage \[\xymatrix{ K(\pi_n(X),n) \ar[r] & X_n \ar[r] \ar@{->>}[d] & E(\pi_n(X),n+1) \ar@{->>}[d]\\ & X_{n-1} \ar[r]^-{k_n} & K(\pi_n(X),n+1), }\] one replaces it with \[\xymatrix{ K(\pi_n(X) \otimes \mathbb{Q},n) \ar[r] & X_{n(0)} \ar[r] \ar@{->>}[d] & E(\pi_n(X) \otimes \mathbb{Q},n+1) \ar@{->>}[d] \\ & X_{n-1(0)} \ar[r]^-{k_n \otimes \mathbb{Q}} & K(\pi_n(X) \otimes \mathbb{Q},n+1). }\] This builds $X_{n(0)}$ given $X_{n-1(0)}$, and then $X_{(0)}$ is the homotopy type of the limit of this construction. We say two spaces are \emph{rationally equivalent} if their rationalizations are homotopy equivalent. The second key fact is that the homotopy category of rationalized spaces of \emph{finite type} (that is, for which all homology groups, or equivalently all homotopy groups, are finite-dimensional vector spaces) is equivalent to several purely algebraic categories. The one most relevant for our purpose is the Sullivan DGA model. A \emph{differential graded algebra} (DGA) over $\mathbb{Q}$ is a cochain complex of $\mathbb{Q}$-vector spaces equipped with a graded commutative multiplication which satisfies the (graded) Leibniz rule. A familiar example is the algebra of differential forms on a manifold. A key insight of Sullivan was to associate to every space $X$ of finite type a \emph{minimal} DGA $\mathcal{M}_X$ constructed by induction on degree as follows: \begin{itemize} \item $\mathcal{M}_X(1)=\mathbb{Q}$ with zero differential. \item For $n \geq 2$, the algebra structure is given by \[\mathcal{M}_X(n)= \mathcal{M}_X(n-1) \otimes \Lambda\Hom(\pi_n(X);\mathbb{Q}),\] where $\Lambda V$ denotes the free graded commutative algebra generated by $V$. \item The differential is given on the elements of $\Hom(\pi_n(X);\mathbb{Q})$ (\emph{indecomposables}) by the dual of the $n$th $k$-invariant of $X$, \[\Hom(\pi_n(X);\mathbb{Q}) \xrightarrow{k_n^*} H^{n+1}(X_{n-1};\mathbb{Q}),\] and extends to the rest of the algebra by the Leibniz rule. Although it is only well-defined up to a coboundary, this definition makes sense because one can show by induction that $H^k(\mathcal{M}_X(n-1))$ is naturally isomorphic to $H^k(X_{n-1};\mathbb{Q})$, independent of the choices made in defining the differential at previous steps. Note that from this definition, it follows that for an indecomposable $y$ of degree $n$, $dy$ is an element of degree $n+1$ which can be written as a polynomial in the indecomposables of degree $<n$. In particular, it has no linear terms. \end{itemize} The DGA $\mathcal{M}_X$ is the functorial image of $X_{(0)}$ under an equivalence of homotopy categories.
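Two standard examples may help to fix ideas. For an odd sphere, $\mathcal{M}_{S^{2k+1}} = (\Lambda(a),\, da=0)$ with a single generator $a$ of degree $2k+1$; this coincides with the minimal model of $K(\mathbb{Q},2k+1)$, reflecting the rational equivalence between $S^{2k+1}$ and $K(\mathbb{Z},2k+1)$ mentioned in the introduction. For an even sphere, $\mathcal{M}_{S^{2k}} = (\Lambda(a,b),\, da=0,\, db=a^2)$ with $\deg a = 2k$ and $\deg b = 4k-1$; the nonzero differential witnesses the fact that $S^{2k}$ is not rationally an H-space (cf.~Prop.~\ref{prop:Q}).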
Many topological constructions can thus be translated into algebraic ones. This paper will use the following: \begin{itemize} \item The Eilenberg--MacLane space $K(\pi,n)$ corresponds to the DGA $\Lambda \Hom(\pi,\mathbb{Q})$ with generators concentrated in dimension $n$ and zero differential. \item Product of spaces corresponds to tensor product of DGAs. In particular: \begin{prop} \label{prop:Q} The following are equivalent for a space $X$: \begin{enumerate}[(a)] \item $X$ is rationally equivalent to a product of Eilenberg--MacLane spaces. \item The minimal model of $X$ has zero differential. \item The rational Hurewicz map $\pi_*(X) \otimes \mathbb{Q} \to H_*(X;\mathbb{Q})$ is injective. \end{enumerate} \end{prop} \end{itemize} Finally, we note the following theorem of Sullivan: \begin{thm}[Sullivan's finiteness theorem {\cite[Theorem 10.2(i)]{SulLong}}] Let $X$ be a finite complex and $Y$ a simply connected finite complex. Then the map $[X,Y] \to [X,Y_{(0)}]$ induced by the rationalization functor is finite-to-one. \end{thm} Note that this implies that if the map $Y \to Z$ between finite complexes induces a rational equivalence, then the induced map $[X,Y] \to [X,Z]$ is also finite-to-one. \section{H-spaces} \label{S:H} A pointed space $(H,o)$ is an H-space if it is equipped with a binary operation $\add:H \times H \to H$ satisfying $\add(x,o)=\add(o,x)=x$ (the basepoint acts as an identity). In addition, an H-space is \emph{homotopy associative} if \[\add \circ (\add,\id) \simeq \add \circ (\id,\add)\] and \emph{homotopy commutative} if $\add \simeq \add \circ \tau$, where $\tau$ is the ``twist'' map sending $(x,y) \mapsto (y,x)$. We will interchangeably denote our H-space operations (most of which will be homotopy associative and commutative) by the usual binary operator $+$, as in $x+y=\add(x,y)$. A classic result of Sugawara \cite[Theorem 3.4]{StaBook} is that a homotopy associative H-space which is a connected CW complex automatically admits a \emph{homotopy inverse} $x \mapsto -x$ with the expected property $\add(-x,x)=o=\add(x,-x)$. Examples of H-spaces include topological groups and Eilenberg--MacLane spaces. If $H$ is simply connected, then it is well-known that it has the rational homotopy type of a product of Eilenberg--MacLane spaces. Equivalently, from the Sullivan point of view, $H$ has a minimal model $\mathcal{M}_H$ with zero differential; see~\cite[\S12(a) Example 3]{FHT} for a proof. On the other hand, a product of H-spaces is clearly an H-space. Therefore we can add ``$X$ is rationally equivalent to an H-space'' to the list of equivalent conditions in Prop.~\ref{prop:Q}. We will generally use the sloppy phrase ``$X$ is a rational H-space'' to mean the same thing. It is easy to see that an H-space operation plays nice with the addition on higher homotopy groups. That is: \begin{prop} \label{additive} Let $(H,o,\add)$ be an H-space. Given $f,g:(S^n,*) \to (H,o)$, \[[f]+[g]=[\add \circ (f,g)] \in \pi_n(H,o).\] \end{prop} Another important and easily verified fact is the following: \begin{prop} If $(H,o,\add)$ is a homotopy associative H-space, then for any pointed space $(X,*)$, the set $[X,H]$ forms a group, with the operation given by $[\varphi]\cdot[\psi]=[\add \circ (\varphi,\psi)]$. If $H$ is homotopy commutative, then this group is likewise commutative. Moreover, suppose that $H$ is homotopy commutative, and let $A \to X$ be a cofibration (such as the inclusion of a CW subcomplex), and $f:A \to H$ a map with an extension $\tilde f:X \to H$. 
Then the set $[X,H]^f$ of extensions of $f$ forms an abelian group with operation given by \[[\varphi]+[\psi]=[\varphi+\psi-\tilde f].\] \end{prop} Throughout the paper, we denote the ``multiplication by $r$'' map $$\underbrace{\id + \cdots + \id}_{r\text{ times}}:H \to H$$ by $\chi_r$. The significance of this map is in the following lemmas, which we will repeatedly apply to various obstruction classes: \begin{lem} \label{lem:torsion} Let $H$ be an H-space of finite type, and let $\alpha \in H^n(H)$ be a cohomology class of finite order. Then there is an $r>0$ such that $\chi_r^*\alpha=0$. \end{lem} In other words, faced with a finite-order obstruction, we can always get rid of it by precomposing with a multiplication map. \begin{lem} \label{lem:non-torsion} Let $H$ be a simply connected H-space of finite type. Then for every $r>0$, $$\chi_r^*(H^*(H)) \subseteq rH^*(H)+\text{torsion}.$$ \end{lem} \begin{proof} By Prop.~\ref{additive}, $\chi_r$ induces multiplication by $r$ on $\pi_n(H)$. Therefore by Prop.~\ref{prop:Q}(c), it induces multiplication by $r$ on the indecomposables of the minimal model $\mathcal{M}_H$. Therefore it induces multiplication by $r^k$ on every class in $H^n(H;\mathbb{Q})$ which is a product of $k$ indecomposables; since every class is a sum of such products with $k \geq 1$, the result follows. \end{proof} Combining the two lemmas gives us a third: \begin{lem} \label{lem:both} Let $H$ be a simply connected H-space of finite type. Then for any $r>0$ and any $n>0$, there is an $s>0$ such that $$\chi_s^*(H^n(H)) \subseteq rH^n(H).$$ \end{lem} \begin{proof}[Proof of Lemma \ref{lem:torsion}.] I would like to thank Shmuel Weinberger for suggesting this proof. Let $q$ be the order of $\alpha$. By Prop.~\ref{additive}, for $f:S^k \to H$, $(\chi_q)_*[f]=q[f]$. Let $H[1/q]$ be the universal cover of the mapping torus of $\chi_q$; this should be thought of as an infinite mapping telescope. By the above, the homotopy groups of $H[1/q]$ are $\mathbb{Z}[1/q]$-modules (the telescope localizes them away from $q$). This implies, see \cite[Thm.~2.1]{SulLoc}, that the reduced homology and cohomology groups are also $\mathbb{Z}[1/q]$-modules. Now we would like to show that for some $t$, $(\chi_q^*)^t\alpha=0$, so that we can take $r=q^t$. Suppose not, so that $(\chi_q^*)^t\alpha$ is nonzero for every $t$. Clearly every element in the sequence $$\alpha,\chi_q^*\alpha,(\chi_q^*)^2\alpha,\ldots$$ has order which divides $q$; moreover, since there are finitely many such elements, the sequence eventually cycles. Extrapolating this cycle backward gives us a nonzero element of $$H^n(H[1/q])=\varprojlim\bigl(\cdots \xrightarrow{\chi_q^*} H^n(H) \xrightarrow{\chi_q^*} H^n(H)\bigr)$$ which likewise has order dividing $q$. Since the cohomology groups of $H[1/q]$ are $\mathbb{Z}[1/q]$-modules, this is a contradiction. \end{proof} Note that this proof does not produce an effective bound on $t$. This prevents our algorithmic approach from yielding results that are as effective as those of Vok\v{r}\'inek in \cite{Vok}. We will also require the following similar but more involved fact. \begin{lem} \label{lem:product} Let $H$ be an H-space of finite type, $U$ a finite complex, and $n>0$. Let $i_2:U \to H \times U$ be the obvious inclusion $u \mapsto (*,u)$. \begin{enumerate}[(i)] \item Suppose that $\alpha \in H^n(H \times U)$ is torsion and $i_2^*\alpha=0$. Then there is an $r>0$ such that $(\chi_r,\id)^*\alpha=0$. \item Suppose that $H$ is simply connected and $\alpha \in H^n(H \times U)$ is such that $i_2^*\alpha=0$.
Then for every $r>0$, $$(\chi_r,\id)^*\alpha \in rH^n(H \times U)+\text{torsion}.$$ \item Suppose that $H$ is simply connected, and consider $$G=\ker i_2^* \subseteq H^n(H \times U).$$ Then for every $r>0$ there is an $s>0$ such that $$(\chi_s,\id)^*G \subseteq rH^n(H \times U).$$ \end{enumerate} \end{lem} \begin{proof} We use the K\"unneth formula, which gives a natural short exact sequence $$0 \to \bigoplus_{k+\ell=n} H^k(H) \otimes H^\ell(U) \to H^n(H \times U) \to \bigoplus_{k+\ell=n+1} \Tor(H^k(H),H^\ell(U)) \to 0.$$ To demonstrate (i), we will first show that there is an $r_0$ such that $(\chi_{r_0},\id)^*\alpha$ is in the image of $\bigoplus_{k+\ell=n} H^k(H) \otimes H^\ell(U)$. In other words, we show that the projection of $(\chi_{r_0},\id)^*\alpha$ to $\bigoplus_{k+\ell=n+1} \Tor(H^k(H),H^\ell(U))$ is zero. Now, this group is generated by elementary tensors $\eta \otimes \nu$ where $\eta \in H^k(H)$ and $\nu \in H^\ell(U)$ are torsion elements. By Lemma \ref{lem:torsion}, for each such elementary tensor, we can pick $r(\eta)$ such that $\chi_{r(\eta)}^*\eta=0$ and therefore $$(\chi_{r(\eta)},\id)^*(\eta \otimes \nu)=0 \in \Tor(H^k(H),H^\ell(U)).$$ We then choose $r_0$ to be the least common multiple of all the $r(\eta)$'s. Now fix a decomposition of each $H^k(H)$ and $H^\ell(U)$ into cyclic factors to write $(\chi_{r_0},\id)^*\alpha$ as a sum of elementary tensors. Since $i_2^*\alpha=0$, there are no summands of the form $1 \otimes u$; moreover, each summand is itself torsion. For every other elementary tensor $h \otimes u$, we can use Lemma \ref{lem:torsion} (if $h$ is torsion) or Lemma \ref{lem:non-torsion} (otherwise, since then $u$ is torsion) to find an $s(h,u)$ such that $\chi_{s(h,u)}^*h \otimes u=0$. Finally, we can take $r$ to be the product of $r_0$ with the least common multiple of the $s(h,u)$'s. This completes the proof of (i). To demonstrate (ii), we only need to apply Lemma \ref{lem:non-torsion} to $H^k(H)$ for all $0<k\leq n$. Finally, (iii) follows from (i) and (ii). \end{proof} \section{The algebraic structure of $[X,Y]^f$} \label{S:XYA} We start by constructing the desired structure on $[X,Y]^f$ when $Y$ is a rational H-space. From the previous section, such a $Y$ is rationally equivalent to a product of Eilenberg--MacLane spaces. In particular, it is rationally equivalent to $H=\prod_{n=2}^\infty K(\pi_n(Y),n)$, which we give the product H-space structure. We will harness this to prove the following result. \begin{thm} Suppose that $Y$ is a rational H-space through dimension $d$, denote by $Y_d$ the $d$th Postnikov stage of $Y$, and let $H_d=\prod_{n=2}^d K(\pi_n(Y),n)$. Suppose $(X,A)$ is a finite simplicial pair and $f:A \to Y$ a map. Then $[X,Y_d]^f$ admits a virtually free and faithful action by $[X/A,H_d]$ induced by a map $H_d \to Y_d$. \end{thm} Before proving this, we see how computing this structure gives the algorithms of Theorem \ref{main}. If $(X,A)$ has cohomological dimension $d+1$, then there is no obstruction to lifting an extension $X \to Y_d$ of $f$ to $Y$, as the first obstruction lies in $H^{d+2}(X,A;\pi_{d+1}(Y))$. Therefore $[X,Y]^f$ is nonempty if and only if $[X,Y_d]^f$ is nonempty. If $(X,A)$ has cohomological dimension $d$, then in addition every such lift is unique: the first obstruction to homotoping two lifts lies in $H^{d+1}(X,A;\pi_{d+1}(Y))$. Therefore $[X,Y]^f \cong [X,Y_d]^f$.
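The multiplication maps $\chi_r$ and Lemmas \ref{lem:torsion} and \ref{lem:non-torsion} will be used repeatedly in the induction below; the simplest examples to keep in mind are the following. For $H=K(\mathbb{Z},2) \simeq \mathbb{CP}^\infty$, where $H^*(H)=\mathbb{Z}[x]$ with $|x|=2$, the map $\chi_r$ induces multiplication by $r$ on $\pi_2$, so $\chi_r^*x=rx$ and hence $\chi_r^*(x^n)=r^nx^n \in rH^{2n}(H)$, as predicted by Lemma \ref{lem:non-torsion}. For $H=K(\mathbb{Z}/q,n)$, the map $\chi_q$ induces the zero endomorphism of $\pi_n$ and is therefore null-homotopic, so $\chi_q^*$ annihilates all reduced cohomology; this is the mechanism behind Lemma \ref{lem:torsion}.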
\subsection{An H-space action on $Y_n$} Denote the $n$th Postnikov stages of $Y$ and $H$ by $Y_n$ and $H_n$, respectively, and the H-space zero and multiplication on $H_n$ by $o_n$ and by $+$ or $\add_n:H_n \times H_n \to H_n$. We will inductively construct the following additional data: \begin{enumerate}[(i)] \item Maps $H_n \xrightarrow{u_n} Y_n \xrightarrow{v_n} H_n$ inducing rational equivalences such that $v_nu_n$ is homotopic to the multiplication map $\chi_{r_n}$ for some integer $r_n$. \item A map $\act_n:H_n \times Y_n \to Y_n$ defining an \emph{H-space action}, that is such that $\act_n(o,x)=x$ and the following diagram homotopy commutes: \begin{equation} \label{H:action} \begin{gathered} \xymatrixcolsep{3pc} \xymatrix{ H_n \times H_n \times Y_n \ar[r]^-{(\add_n,\id)} \ar[d]^{(\id,\act_n)} & H_n \times Y_n \ar[d]^{\act_n} \\ H_n \times Y_n \ar[r]^-{\act_n} & Y_n, } \end{gathered} \end{equation} which is ``induced by $u_n$'' in the sense of the homotopy commutativity of \begin{equation} \label{H:xr} \begin{gathered} \xymatrix{ H_n \times H_n \ar[r]^{(\id,u_n)} \ar[d]^{\add_n} & H_n \times Y_n \ar[r]^{(\chi_{r_n},v_n)} \ar[d]^{\act_n} & H_n \times H_n \ar[d]^{\add_n} \\ H_n \ar[r]^{u_n} & Y_n \ar[r]^{v_n} & H_n. } \end{gathered} \end{equation} \end{enumerate} Note that when we pass to rationalizations, the existence of such a structure is obvious: one takes $u_{n(0)}$ to be the identity, $\act_{n(0)}=\add_{n(0)}$, and $v_{n(0)}$ to be multiplication by $r_n$. \subsection{The action of $[X/A,H_d]$ on $[X,Y_d]^f$} \label{S:VFF} Now suppose that we have constructed the above structure. Then $\add_d$ induces the structure of a finitely generated abelian group on the set $[X/A,H_d]$, which we identify with the set of homotopy classes of maps $X \to H_d$ sending $A$ to $o \in H_d$. Moreover, this group acts on $[X,Y_d]^f$ via the action $[\varphi]\cdot[\psi]=[\act_d\circ(\varphi,\psi)]$. It remains to show that this action is virtually free and faithful. Indeed, notice that pushing this action forward along $v_d$ gives the action of $[X/A,H_d]$ on $[X,H_d]^{v_df}$ via $[\varphi] \cdot [\psi]=r_d[\varphi]+[\psi]$, which is clearly virtually free and faithful. This implies that the action on $[X,Y_d]^f$ is virtually free. Moreover, the map $v_d \circ{}:[X,Y_d]^f \to [X,H_d]^{v_df}$ is finite-to-one by Sullivan's finiteness theorem. Thus the action on $[X,Y_d]^f$ is also virtually faithful. \subsection{The Postnikov induction} \label{S:2.2} Now we construct the H-space action. For $n=1$ all the spaces are points and all the maps are trivial. So suppose we have constructed the maps $u_{n-1}$, $v_{n-1}$, and $\act_{n-1}$, and let $k_n:Y_{n-1} \to K(\pi_n(Y),n+1)$ be the $n$th $k$-invariant of $Y$. For the inductive step, it suffices to prove the following lemma: \begin{lem} There is an integer $q>0$ such that we can define $u_n$ to be a lift of $u_{n-1}\chi_q$, and construct $v_n$ and a solution $\act_n:H_n \times Y_n \to Y_n$ to the homotopy lifting-extension problem \begin{equation} \label{XYA:a} \begin{gathered} \xymatrix{ H_n \times H_n \ar[d]_{(\id, u_n)} \ar[rr]^-{\add_n} && H_n \ar[r]^{u_n} & Y_n \ar@{->>}[d] \\ H_n \times Y_n \ar@{-->}[rrru]|-{\act_n} \ar[r]_{(\chi_q,\id)} & H_n \times Y_n \ar@{->>}[r] & H_{n-1} \times Y_{n-1} \ar[r]^-{\act_{n-1}} & Y_{n-1} } \end{gathered} \end{equation} so that the desired conditions are satisfied.
\end{lem} \begin{proof} First, since $Y$ is rationally a product, $k_n$ is of finite order, so by Lemma \ref{lem:torsion} there is some $q_0$ such that $k_nu_{n-1}\chi_{q_0}=0$, and therefore \[\xymatrixcolsep{3.5pc} \xymatrix{ H_n \ar@{->>}[d] \ar[r]^{\hat u} & Y_n \ar@{->>}[d] \\ H_{n-1} \ar[r]^{u_{n-1}\chi_{q_0}} & Y_{n-1} }\] is a pullback square. We will define $u_n=\hat u\chi_{q_2q_1}$, with $q_1$ and $q_2$ to be determined and $q=q_2q_1q_0$. Now we construct $\act_n$. We will in fact construct a lifting-extension \[\xymatrix{ H_n \times H_n \ar[d]_{(\id, \hat u)} \ar[r]^-{(\chi_{q_1},\id)} & H_n \times H_n \ar[r]^-{\add_n} & H_n \ar[r]^{\hat u} & Y_n \ar@{->>}[d] \\ H_n \times Y_n \ar@{-->}[rrru]|-{\widehat\act} \ar[r]_{(\chi_{q_1q_0},\id)} & H_n \times Y_n \ar@{->>}[r] & H_{n-1} \times Y_{n-1} \ar[r]^-{\act_{n-1}} & Y_{n-1}. }\] It is easy to see that then for any $q_2>0$, $\act_n=\widehat\act \circ (\chi_{q_2},\id)$ satisfies \eqrefb{XYA:a}. Note that the outer rectangle commutes since we know \eqref{H:xr} holds in degree $n-1$. Moreover, the obstruction $\mathcal{O} \in H^{n+1}(H_n \times Y_n,H_n \times H_n;\pi_n(Y))$ to finding the lifting-extension is of finite order since $(\id,\hat u):H_n \times H_n \to H_n \times Y_n$ is a rational equivalence. We will show that when $q_1$ is large enough, this obstruction is zero. The obstruction group fits into the exact sequence \[\cdots \to H^n(H_n \times H_n;\pi_n(Y)) \xrightarrow{\delta} H^{n+1}(H_n \times Y_n,H_n \times H_n;\pi_n(Y)) \xrightarrow{\rel^*} H^{n+1}(H_n \times Y_n;\pi_n(Y)) \to \cdots,\] and so the image $\rel^*\mathcal{O}$ in $H^{n+1}(H_n \times Y_n;\pi_n(Y))$ is torsion. By Lemma \ref{lem:product}(i), that means that $(\chi_s,\id)^*(\rel^*\mathcal{O})=0$ for some $s>0$. Now we look at a preimage $\alpha \in H^n(H_n \times H_n;\pi_n(Y))$ of $(\chi_s,\id)^*\mathcal{O}$ under $\delta$, which exists by exactness. Applying Lemma \ref{lem:product}(iii), we can find a $t$ such that $(\chi_t,\id)^*\alpha \in \ker\delta$ and therefore \[\delta((\chi_t,\id)^*\alpha)=(\chi_{st},\id)^*\mathcal{O}=0.\] Thus for $q_1=st$, we can find a map $\widehat\act$ completing the diagram. Now we ensure that \eqref{H:action} commutes by picking an appropriate $q_2$. Note that the diagram \[\xymatrixcolsep{3pc} \xymatrix{ H_n \times H_n \times Y_n \ar[r]^-{(\add_n,\id)} \ar[d]^{(\id,\widehat\act)} & H_n \times Y_n \ar[d]^{\widehat\act} \\ H_n \times Y_n \ar[r]^-{\widehat\act} & Y_n }\] commutes up to finite order; namely, the sole obstruction to commutativity is a torsion class in $H^{n+1}(H_n\times H_n\times Y_n;\pi_n(Y))$. Therefore we can again apply Lemma \ref{lem:product}(i), this time with $H=H_n \times H_n$ and $U=Y_n$, to find a $q_2$ which makes the obstruction zero. Finally, \eqref{H:action} implies that $\act_n|_{\{o\} \times Y_n}$ is homotopic to the identity, so we modify $\act_n$ by a homotopy to make it the identity on the nose. All that remains is to define $v_n$. But we know that $u_n$ is rationally invertible, and so we can find some $v_n$ such that $v_nu_n$ is multiplication by some $r_n$. Moreover, for any such $v_n$, the right square of \eqref{H:xr} commutes up to finite order. Thus by increasing $r_n$ (that is, replacing $v_n$ by $\chi_{\hat r}v_n$ for some $\hat r>0$) we can make it commute up to homotopy. \end{proof} \section{Building blocks of homotopy-theoretic computation} \label{S:comp} We now turn to describing the algorithms for performing the computations outlined in the previous two sections.
This relies heavily on machinery and results from \cite{CKMVW2}, \cite{CKV}, and \cite{FiVo} as building blocks. This section is dedicated to explaining them. Our spaces are stored as simplicial sets \emph{with effective homology}. Roughly speaking, this means a computational black box equipped with: \begin{itemize} \item Algorithms which output its homology and cohomology in any degree and with respect to any finitely generated coefficient group. \item A way to refer to individual simplices and compute their face and degeneracy operators. This allows us to, for example, represent a function from a finite simplicial complex or simplicial set to a simplicial set with effective homology. \end{itemize} Now we summarize the operations which are known to be computable from previous work. \begin{thm} \label{blocks} \begin{enumerate}[(a)] \item \label{K(pi,n)} Given a finitely generated abelian group $\pi$ and $n \geq 2$, a model of the Eilenberg--MacLane space $K(\pi,n)$ can be represented as a simplicial set with effective homology and a computable simplicial group operation. Moreover, there are algorithms implementing a chain-level bijection between $n$-cochains in a finite simplicial complex or simplicial set $X$ with coefficients in $\pi$ and maps from $X$ to $K(\pi,n)$ \cite[\S3.7]{CKMVW2}. \item \label{product} Given a finite family of simplicial sets with effective homology, there is a way of representing their product as a simplicial set with effective homology \cite[\S3.1]{CKMVW2}. \item Given a simplicial map $f:X \to Y$ between simplicial sets with effective homology, there is a way of representing the mapping cylinder $M(f)$ as a simplicial set with effective homology. (In \cite{CKV} this is remarked to be ``very similar to but easier than Prop.~5.11''.) \item Given a map $p:Y \to B$, we can compute the $n$th stage of the Moore--Postnikov tower for $p$, in the form of a sequence of Kan fibrations between simplicial sets with effective homology \cite[Theorem 3.3]{CKV}. \item \label{prop3.7} Given a diagram \[\xymatrix{ A \ar@{^{(}->}[d] \ar[r] & P_n \ar@{->>}[d] \\ X \ar[r] \ar@{-->}[ru] & P_{n-1} }\] where $P_n \to P_{n-1}$ is a step in a (Moore--)Postnikov tower as above, there is an algorithm to decide whether a diagonal exists and, if it does, compute one \cite[Prop.~3.7]{CKV}. \item \label{pullback} Given a fibration $p:Y \to B$ of simply connected simplicial complexes and a map $f:X \to B$, we can compute any finite Moore--Postnikov stage of the pullback of $p$ along $f$ \cite[Addendum 3.4]{CKV}. \item \label{homotopy} Given a diagram \[\xymatrix{ A \ar[r]^f \ar@{^{(}->}[d]_i & Y \ar@{->>}[d]^p \\ X \ar@{-->}[ru] \ar[r]^g & B, }\] where $A$ is a subcomplex of a finite complex $X$ and $p$ is a fibration of simply connected complexes of finite type, we can compute whether two maps $u,v:X \to Y$ completing the diagram are homotopic relative to $A$ and over $B$ \cite[see ``Equivariant and Fiberwise Setup'']{FiVo}. \item \label{rel-finite} Given a diagram \[\xymatrix{ A \ar[r]^f \ar@{>->}[d]_i & Y \ar[d]^p \\ X \ar@{-->}[ru] \ar[r]^g & B }\] where $A$ is a subcomplex of a finite complex $X$, $Y$ and $B$ are simply connected, and $p$ has finite homotopy groups, we can compute the (finite and perhaps empty) set $[X,Y]^f_p$ of homotopy classes of maps completing the diagram up to homotopy. \end{enumerate} \end{thm} \begin{proof} We prove only the part which is not given a citation in the statement. \subsection*{Part \eqref{rel-finite}} Let $d=\dim X$.
One starts by computing the $d$th stage of the Moore--Postnikov tower of $p:Y \to B$ using \eqref{pullback}. From there, we induct on dimension. At the $k$th step, we have computed the (finite) set of lifts to the $k$th stage $P_k$ of the Moore--Postnikov tower. For each such lift, we use \eqref{prop3.7} to decide whether it lifts to the $(k+1)$st stage, and compute a lift $u:X \to P_{k+1}$ if it does. Then we compute all lifts by computing representatives of each element of $H^{k+1}(X,A;\pi_{k+1}(p))$ and modifying $u$ by each of them. Finally, we use \eqref{homotopy} to decide which of the maps we have obtained are duplicates and choose one representative for each homotopy class in $[X,P_{k+1}]^f_p$. We are done after step $d$ since $[X,P_d]^f_p \cong [X,Y]^f_p$. \end{proof} \section{Computing $[X,Y]^f$} \label{S:XYAcomp} We now explain how to compute the group and action described in \S\ref{S:XYA}. We work with a representation of $(X,A)$ as a finite simplicial set and a Postnikov tower for $Y$, and perform the induction outlined in that section to compute $[X,Y_d]^f$ for a given dimension $d$. The algorithm verifies that $Y$ is indeed a rational H-space through dimension $d$; however, it assumes that $Y$ is simply connected and returns nonsense otherwise. \subsection{Setup} Let $d$ be such that $Y_d$ is a rational H-space. Since the homotopy groups of $Y$ can be computed, we can use Theorem \ref{blocks}\eqrefb{K(pi,n)} and \eqrefb{product} to compute once and for all the space $$H_d=\prod_{n=2}^d K(\pi_n(Y),n),$$ and the binary operation $\add_d:H_d \times H_d \to H_d$ is given by the product of the simplicial group operations on the individual $K(\pi_n(Y),n)$'s. The group of homotopy classes $[X/A,H_d]$ is naturally isomorphic to $\prod_{n=2}^d H^n(X,A;\pi_n(Y))$, making this also easy to compute. Finally, given an element of this group expressed as a word in the generators, we can compute a representative map $X \to H_d$, constant on $A$, by generating the corresponding cochains of each degree on $(X,A)$ and using them to build maps to $K(\pi_n(Y),n)$. We then initialize the induction which will compute maps $u_d$, $v_d$, and $\act_d$ and an integer $r_d$ satisfying the conditions of \S\ref{S:XYA}. Since $H_1=Y_1$ is a point, we can set $r_1=1$ and $u_1$, $v_1$, and $\act_1$ to be the trivial maps. \subsection{Performing the Postnikov induction} The induction is performed as outlined in \S\ref{S:2.2}, although we have to be careful to turn the homotopy lifting and extension problems into genuine ones. Suppose that maps $u_{n-1}$, $v_{n-1}$, and $\act_{n-1}$ as desired have been constructed, along with a map \[\Hact_{n-1}:H_{n-1} \times M(u_{n-1}) \to Y_{n-1}\] which restricts to $\add_{n-1}$ on $H_{n-1} \times H_{n-1}$ and $\act_{n-1}$ on $H_{n-1} \times Y_{n-1}$ (here $M(f)$ refers to the mapping cylinder of $f$). There are five steps to the construction at stage $n$: \begin{enumerate}[1.] \item Find $q_0$ such that $u_{n-1}\chi_{q_0}$ lifts to a map $\hat u:H_n \to Y_n$. \item Find $q_1$ such that the diagram \[\xymatrix{ (H_n \times H_n) \cup (o_n \times M(\hat u)) \ar@{>->}[d] \ar[rr]^-{\add_n \cup \id} && M(\hat u) \ar[r]^{\text{project}} & Y_n \ar@{->>}[d] \\ H_n \times M(\hat u) \ar@{-->}[rrru]|-{\widehat\Hact} \ar[r]_{(\chi_{q_1q_0},\id)} & H_n \times M(\hat u) \ar@{->>}[r] & H_{n-1} \times M(u_{n-1}) \ar[r]^-{\Hact_{n-1}} & Y_{n-1} }\] has a lifting-extension along the dotted arrow.
Note the modifications to diagram \eqref{XYA:a} which are designed to make it commute on the nose rather than up to homotopy and to make sure that $\widehat\act(o,x)=x$. \item Find $q_2$ such that $\widehat\Hact|_{H_n \times Y_n} \circ (\chi_{q_2},\id)$ makes the diagram \eqref{H:action} commute up to homotopy. Now we can set $$\Hact_n=\widehat\Hact \circ (\chi_{q_2},\id); \qquad \act_n=\Hact_n|_{H_n \times Y_n}; \qquad u_n=\hat u\chi_{q_1q_2}.$$ \item Find $q_3$ so that the diagram $$\xymatrix{H_n \ar@{>->}[r] \ar@/_1pc/[rr]_{\chi_{q_3}} & M(u_n) \ar@{-->}[r] & H_n}$$ can be completed by some $\hat v$. \item Find $q_4$ so that setting $$v_n=\chi_{q_4}\hat v \quad\text{and}\quad r_n=r_{n-1}q_0q_1q_2q_3q_4$$ makes the diagram \eqref{H:xr} commute. \end{enumerate} The first step is done by determining the order of the $k$-invariant $k_n \in H^{n+1}(Y_{n-1};\pi_n(Y))$. If this order is infinite, then $Y$ is not rationally a product of Eilenberg--MacLane spaces, and the algorithm returns failure. Otherwise we compute $q_0$ by trying various multiples of the order. The rest of the steps are guaranteed to succeed for some value of $q_i$, and each of the conditions can be checked using the operations of Theorem \ref{blocks}, so this part can be completed by iterating over all possible values until we find one that works. \subsection{Computing the action} \label{S:XYAcomp:action} Let $G=[X/A,H_d]$; we now explain how to compute $[X,Y]^f$ as a set with a virtually free and faithful action by $G$. First we must decide whether there is a map $X \to H_d$ extending $v_df:A \to H_d$. If the set $[X,Y_d]^f$ has an element $e$, then $v_df$ has an extension $v_de$, so if we find that there is no such extension, we return the empty set. Otherwise we compute such an extension $\psi_0$. \begin{lem} We can determine whether an extension $\psi_0:X \to H_d$ of $v_df$ exists, and compute one if it does. \end{lem} \begin{proof} Recall that $H_d=\prod_{n=2}^d K(\pi_n(Y),n)$. Write $\proj_n$ for the projection to the $K(\pi_n(Y),n)$ factor. Then the extension we desire exists if and only if for each $n \leq d$, the cohomology class in $H^n(A;\pi_n(Y))$ represented by $\proj_nv_df$ has a preimage in $H^n(X;\pi_n(Y))$ under the map $i^*$. We look for an explicit cocycle $\sigma_n \in C^n(X;\pi_n(Y))$ whose restriction to $A$ is the cocycle corresponding to $\proj_nv_df$. We can compute cocycles which generate $H^n(X;\pi_n(Y))$ (because $X$ has effective homology) as well as generators for $\delta C^{n-1}(X;\pi_n(Y))$ (the coboundaries of individual $(n-1)$-simplices in $X$). Then finding $\sigma_n$ or showing it does not exist is an integer linear programming problem with the coefficients of these cochains as variables. Now if $\sigma_n$ exists, then it also determines a map $X \to K(\pi_n(Y),n)$. Taking the product of these maps for all $n \leq d$ gives us our $\psi_0$. \end{proof} We now compute a representative $a_N$ for each coset $N$ of $r_dG \subseteq G$. Since this is a finite-index subgroup of a fully effective abelian group, this can be done algorithmically, for example by trying all words of increasing length in a generating set until a representative of each coset is obtained. For each $a_N$, we compute a representative map $\varphi_N:X \to H_d$ which is constant on $A$.
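To illustrate the coset enumeration in the simplest setting, here is a minimal sketch in Python (the names and input convention are ours: we assume $G$ is given in the invariant-factor form $G \cong \mathbb{Z}^k \oplus \bigoplus_i \mathbb{Z}/m_i$, which the word-enumeration procedure just described does not require):
\begin{verbatim}
from itertools import product
from math import gcd

def coset_reps(free_rank, torsion_orders, r):
    # Representatives of G/rG for G = Z^free_rank (+) sum_i Z/m_i.
    # In a Z factor, rZ has index r; in a Z/m factor, r(Z/m) equals
    # gcd(r, m)(Z/m), which has index gcd(r, m).
    ranges = [range(r)] * free_rank
    ranges += [range(gcd(r, m)) for m in torsion_orders]
    return list(product(*ranges))

# Example: G = Z (+) Z/4 with r = 6 gives 6 * gcd(6, 4) = 12 cosets.
assert len(coset_reps(1, [4], 6)) == 12
\end{verbatim}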
Then the finite set \[S=\{\psi_N=\psi_0+\varphi_N:N \in G/r_dG\}\] contains representatives of the orbits of the action of $[X/A,H_d]$ on $[X,H_d]^{v_df}$ obtained by pushing the action on $[X,Y]^f$ forward along $v_d$. Now, for each element of $S$ we apply Theorem \ref*{blocks}\eqref{rel-finite} to the square $$\xymatrix{ A \ar[r]^f \ar[d]_i & Y_d \ar[d]^{v_d} \\ X \ar@{-->}[ru] \ar[r]^{\psi_N} & H_d }$$ to compute the finite set of preimages under $v_d$ in $[X,Y_d]^f$. To obtain a set of representatives of each orbit for the action of $[X/A,H_d]$ on $[X,Y_d]^f$, we must then eliminate any preimages that are in the same orbit. In other words, we must check whether two preimages $\tilde\psi$ and $\tilde\psi'$ of $\psi_N$ differ by an element of $[X/A,H_d]$; any such element stabilizes $v_d\tilde\psi$, and so its order must divide $r_d$. Since there are finitely many elements whose order divides $r_d$, we can check for each such element $\varphi$ in turn whether $[\varphi]\cdot[\tilde\psi] = [\tilde\psi']$. Finally, to finish computing $[X,Y_d]^f$ we must compute the finite stabilizer of each orbit. This stabilizer is contained in the finite subgroup of $[X/A,H_d]$ of elements whose order divides $r_d$. Therefore we can again go through all elements of this subgroup and check whether they stabilize our representative. \section{Variants of Hilbert's tenth problem} \label{S:H10} In \cite{CKMVW}, the authors show that the existence of an extension is undecidable by using the undecidability of the existence of solutions to systems of Diophantine equations of particular shapes: \begin{lem}[Lemma 2.1 of \cite{CKMVW}] The solvability in the integers of a system of equations of the form \begin{align} \sum_{1 \leq i<j \leq r} a_{ij}^{(q)}x_ix_j &= b_q, & q &= 1,\ldots,s \label{Q-SYM} \quad \text{or} \tag{Q-SYM} \\ \sum_{1 \leq i<j \leq r} a_{ij}^{(q)}(x_iy_j-x_jy_i) &= b_q, & q &= 1,\ldots,s \label{Q-SKEW} \tag{Q-SKEW} \end{align} for unknowns $x_i$ and (for \eqrefb{Q-SKEW}) $y_i$, $1 \leq i \leq r$, is undecidable. \end{lem} For our purposes, we will need to show the same for systems of one more form, as well as an infinite family generalizing it. \begin{lem} The solvability in the integers of a system of equations of the form \begin{equation} \sum_{i,j=1}^r a_{ij}^{(q)}x_iy_j=c_q, \qquad q=1,\ldots,s \label{Q-DIFF} \tag{Q-DIFF} \end{equation} for unknowns $x_i$ and $y_i$, $1 \leq i \leq r$, is undecidable. More generally, for any (not all zero) family of $m \times n$ matrices $\{B_p\}_{p=1,\ldots,t}$, the solvability in the integers of a system of equations of the form \begin{equation} \sum_{i,j=1}^r a_{ij}^{(q)}\vec u_i^TB_p\vec v_j = c_{pq}, \qquad q = 1,\ldots,s, \qquad p = 1,\ldots, t \label{Q-BIL} \tag{Q-BLIN$\{B_p\}$} \end{equation} for unknowns $u_{i1},\ldots,u_{im}$ and $v_{j1},\ldots,v_{jn}$, $1 \leq i,j \leq r$, is undecidable. \end{lem} \begin{proof} Systems of the form \eqrefb{Q-DIFF} are a subset of those of the form \eqrefb{Q-SYM}. In fact, the proof in \cite{CKMVW} of the undecidability of \eqrefb{Q-SYM} only uses systems of the form \eqrefb{Q-DIFF}, and so proves that \eqrefb{Q-DIFF} is undecidable. To show that \eqrefb{Q-BIL}, for any $\{B_p\}_{p=1,\ldots,t}$ which are not all zero, is undecidable, we show that a system of the form \eqrefb{Q-DIFF} can be simulated with one of the form \eqrefb{Q-BIL}. This proof is closely related to that of the undecidability of \eqrefb{Q-SYM} in \cite{CKMVW}. First, suppose that $t=1$, so we just have one matrix $B$.
We first show that we may replace $B$ with an invertible square matrix. \begin{lem} Given an $m \times n$ matrix $B$, there is a square invertible matrix $B'$ such that for every choice of $\{a_{ij}^{(q)}\}$ and $c_q$, the system \[\sum_{i,j=1}^r a_{ij}^{(q)}\vec u_i^TB\vec v_j = c_q, \qquad q = 1,\ldots,s \] has a solution if and only if the system \[\sum_{i,j=1}^r a_{ij}^{(q)}(\vec u'_i)^TB'\vec v'_j = c_q, \qquad q = 1,\ldots,s \] has a solution. \end{lem} \begin{proof} The rows of $B$ generate a subgroup of $\mathbb{Z}^n$, and by plugging in different $\vec u_i$ we can get any vector in that subgroup. So let $B''$ be a $t \times n$ matrix whose rows are linearly independent vectors generating that subgroup. Then the set of possible values of $(\vec u'_i)^TB''$ is the same as the set of possible values of $\vec u_i^TB$. Now the columns of $B''$ generate a subgroup of full rank in $\mathbb{Z}^t$, and by plugging in different $\vec v_j$ we can get any vector in that subgroup. So let $B'$ be a $t \times t$ matrix whose columns are linearly independent vectors generating that subgroup. Then the set of possible values of $B'\vec v_j'$ is the same as the set of possible values of $B''\vec v_j$. \end{proof} Thus we may assume from the start that $m=n$ and $B=(b_{k\ell})$ is invertible. Moreover, by shuffling indices we may assume that $b_{11}$ is nonzero. Now consider a general system of the form \eqrefb{Q-DIFF}. We use it to build a system of the form (Q-BLIN$\{B\}$) with variables \begin{align*} & u_{i1},\ldots,u_{in}\text{ and }v_{j1},\ldots,v_{jn}, & 1 &\leq i \leq r, \\ & z_{k\ell}\text{ and }w_{k\ell}, & 1 &\leq k,\ell \leq n. \end{align*} Define $n \times n$ matrices $Z=(z_{k\ell})$ and $W=(w_{k\ell})$. Then the equations of our new system are \begin{equation} \label{inst-BIL} \left\{\begin{aligned} \sum_{i,j=1}^r a_{ij}^{(q)}\vec u_i^TB\vec v_j&=b_{11}c_q, & q&=1,\ldots,s,\\ Z^TBW &= B, \\ (\vec u_i^TBW)_\ell &= 0, & i &= 1,\ldots,r, & \ell &= 2,\ldots,n, \\ (Z^TB\vec v_j)_k &= 0, & j &= 1,\ldots,r, & k &= 2,\ldots,n. \end{aligned}\right. \end{equation} We show that this has a solution if and only if \eqrefb{Q-DIFF} does. It is easy to see that $\{x_i,y_j\}_{1 \leq i,j \leq r}$ is a solution to \eqrefb{Q-DIFF} if and only if $$Z=W=I_n, \qquad \vec u_i=x_i\vec e_1, \qquad \vec v_j=y_j\vec e_1,$$ where $\vec e_1$ is the basis vector $(1,0,\ldots,0)$, is a solution to \eqrefb{inst-BIL}. In particular, if \eqrefb{Q-DIFF} has a solution, then so does \eqrefb{inst-BIL}. Conversely, suppose that we have a solution for \eqrefb{inst-BIL}. Since $Z^TBW=B$ and $B$ is invertible, taking determinants gives $\det Z \cdot \det W = 1$; as $Z$ and $W$ are integer matrices, both must have determinant $\pm1$. Therefore $Z^{-1}$ and $W^{-1}$ are also integer matrices. Then \eqrefb{inst-BIL} also has the solution $$\vec u_i'=Z^{-1}\vec u_i, \qquad \vec v_j'=W^{-1}\vec v_j, \qquad Z'=W'=I_n,$$ which gives us a corresponding solution for \eqrefb{Q-DIFF}. Now we take on the general case. Write $B_p=(b^{(p)}_{k\ell})$; again by reshuffling indices we can assume that $b^{(1)}_{11} \neq 0$.
We again use the variables \begin{align*} & u_{i1},\ldots,u_{im}\text{ and }v_{j1},\ldots,v_{jn}, & 1 &\leq i \leq r, \\ & z_{k\ell}\text{ and }w_{k\ell}, & 1 &\leq k \leq m, & 1 &\leq \ell \leq n \end{align*} and the very similar system of equations \begin{equation} \label{inst-BILp} \left\{\begin{aligned} \sum_{i,j=1}^r a_{ij}^{(q)}\vec u_i^TB_p\vec v_j&=b^{(p)}_{11}c_q, & q &= 1,\ldots,s, & p &= 1,\ldots,t, \\ Z^TB_pW &= B_p, & p &= 1,\ldots,t, \\ (\vec u_i^TB_pW)_\ell &= 0, & p &= 1,\ldots,t, & i &= 1,\ldots,r, & \ell &= 2,\ldots,n, \\ (Z^TB_p\vec v_j)_k &= 0, & p &= 1,\ldots,t, & j &= 1,\ldots,r, & k &= 2,\ldots,m. \end{aligned}\right. \end{equation} Once again, \eqrefb{Q-DIFF} has a solution $\{x_i,y_j\}_{1 \leq i,j \leq r}$ if and only if \eqrefb{inst-BILp} has the solution $$Z=I_m,\qquad W=I_n,\qquad \vec u_i=x_i\vec e_1, \qquad \vec v_j=y_j\vec e_1.$$ Conversely, any solution to \eqrefb{inst-BILp} is also a solution to the subsystem consisting of equations involving $B_1$; by the argument above this can be turned into a solution for \eqrefb{Q-DIFF}. \end{proof} \section{Undecidability of extension problems} \label{S:undec} \begin{thm} Let $Y$ be a simply connected finite complex which is not a rational H-space. Then the problem of deciding, for a finite simplicial pair $(X,A)$ and a map $\varphi:A \to Y$, whether an extension to $X$ exists is undecidable. Moreover, $\cd(X,A)=d+1$, where $d$ is the smallest degree such that $Y_{d}$ is not a rational H-space. \end{thm} \begin{proof} We reduce from the problem \eqref{Q-BIL}, for an appropriate set of matrices $B_p$. For each instance of this problem, we construct a pair $(X,A)$ and a map $f:A \to Y$ such that an extension exists if and only if the instance has a solution. Fix a minimal model $\mathcal{M}_Y$ for $Y$ and a basis of generators for the indecomposables $V_k$ in each degree $k$ which is dual to a basis for $\pi_k(Y)/$torsion. Since $Y$ is not a rational H-space, there is some least $d$ such that the differential of the minimal model $\mathcal{M}_Y$ is nontrivial on $V_d$. Recall that for a minimal model, each nonzero term in the differential is at least quadratic. For each of the generators $\eta$ of $V_d$, $d\eta$ is a polynomial in the lower-degree generators. Denote by \emph{P-degree} the degree of an element of the minimal model as a polynomial in these generators, as opposed to the degree imposed by the grading. Of all the terms in all these polynomials, we choose one with the smallest P-degree and write it as $C\alpha\beta\mu$, where $C$ is a rational coefficient, $\alpha$ and $\beta$ are elements of $V_{d_1}$ and $V_{d_2}$, respectively, and $\mu$ is some shorter monomial, perhaps $1$. Some of the $d\eta$ may have other terms of the form $\alpha'\beta'\mu$, for various $\alpha'$ and $\beta'$. We write $$d\eta=P_\eta(\vec\alpha,\vec\beta)\mu+\nu_\eta,$$ with $\nu_\eta$ consisting of all the terms which either have higher P-degree or are not multiples of $\mu$. We note here the connection, first investigated in \cite{AA}, between the differential in the minimal model and higher-order Whitehead products. Given spheres $S^{n_1},\ldots,S^{n_t}$, their product can be given a cell structure with one cell for each subset of $\{1,\ldots,t\}$. Define their \emph{fat wedge} $\mathbb{V}_{i=1}^t S^{n_i}$ to be this cell structure without the top face. Let $N=-1+\sum_{i=1}^t n_i$, and let $\tau:S^N \to \mathbb{V}_{i=1}^t S^{n_i}$ be the attaching map of the missing face.
By definition, $\alpha \in \pi_N(Y)$ is contained in the \emph{$r$th-order Whitehead product} $[\alpha_1,\ldots,\alpha_t]$, where $\alpha_i \in \pi_{n_i}(Y)$, if it has a representative which factors through a map $$S^N \xrightarrow{\tau} \mathbb{V}_{i=1}^t S^{n_i} \xrightarrow{f_\alpha} Y$$ such that $[f_\alpha|_{S^{n_i}}]=\alpha_i$. Note that there are many potential indeterminacies in how higher-dimensional cells are mapped, so $[\alpha_1,\ldots,\alpha_t]$ is a set of homotopy classes rather than a unique class. Some properties of the Whitehead product set $[\alpha_1,\ldots,\alpha_t]$ are easy to deduce. It is nonempty if all the $(t-1)$st-order product sets $[\alpha_1,\ldots,\hat \alpha_i,\ldots,\alpha_t]$ contain zero. Moreover, higher-order Whitehead products are multilinear, in the sense that $$[c\alpha_1,\ldots,\alpha_t] \supseteq c[\alpha_1,\ldots,\alpha_t],$$ and the factors commute or anticommute as determined by the grading. The main theorem of \cite{AA}, Theorem 5.4, gives a formula for the pairing between an indecomposable $\eta \in V_n$ and any element of an $r$th-order Whitehead product set, assuming that every term of $d\eta$ has P-degree at least $r$. This formula is somewhat complicated, but is $r$-linear in the pairings between factors of the terms of $d\eta$ and factors of the Whitehead product. In particular, let $\mu=\gamma_1 \cdots \gamma_{t}$, and let $e_1,\ldots,e_{t}$ be the generators of $\pi_*(Y)$ dual to the $\gamma_i$. Then any element $f$ of the Whitehead product set $[a,b,e_1,\ldots,e_{t}]$, for $a \in \pi_{d_1}(Y)$ and $b \in \pi_{d_2}(Y)$, satisfies \begin{equation} \label{eqn-generator} \langle \eta, f \rangle= P'_\eta(\langle \alpha_i,a \rangle,\langle \beta_i,b \rangle) \end{equation} where $P'_\eta$ is an integer bilinear form in the two arguments, and $\alpha_i$ and $\beta_i$ range over generators of $V_{d_1}$ and $V_{d_2}$, respectively, which occur in terms of $d\eta$ of the form $\alpha_i\beta_i\mu$ (these are the same set if $d_1=d_2$). In general, Whitehead product sets may be empty. However, since the rational homotopy of $Y$ below degree $d$ is that of a product of Eilenberg--MacLane spaces, for any $a_1,\ldots,a_s$ whose degrees add up to $\leq d+1$, there are integers $p_1,\ldots,p_s$ such that $[p_1a_1,\ldots,p_sa_s]$ is nonempty, and if the degrees add up to $\leq d$ then there are $p_1,\ldots,p_s$ such that $0 \in [p_1a_1,\ldots,p_sa_s]$. In particular, we can fix integers $p_1,\ldots,p_{t}$ such that $[p_1e_1,\ldots,p_{t}e_{t}]$ contains zero, as well as integers $\rho_1$ and $\rho_2$ such that for any $g_1 \in \pi_{d_1}(Y)$ and $g_2 \in \pi_{d_2}(Y)$, $[\rho_1g_1,\rho_2g_2,p_1e_1,\ldots,p_te_t]$ is nonempty. Let $\eta_1,\ldots,\eta_r$ be the generators of $V_d$, $g_1,\ldots,g_m$ a generating set for $\rho_1\pi_{d_1}(Y)/$torsion, $h_1,\ldots,h_n$ a generating set for $\rho_2\pi_{d_2}(Y)/$torsion, and for $p=1,\ldots,r$, let $B_p$ be the matrix which gives $P'_{\eta_p}$ in terms of those two bases. Now given a system of the form \eqref{Q-BIL}, we will build a $(d+1)$-dimensional pair $(X,A)$ and a map $f:A \to Y$ such that the extension problem has a solution if and only if the system does. We define $$A=\bigvee_{q=1}^s S^d_q \vee \bigvee_{i=1}^{t} S^{n_i},$$ where $n_i$ is the degree of $e_i$, and let $f:A \to Y$ send \begin{itemize} \item $S^{n_i}$ to $Y$ via a representative of $p_ie_i$; \item $S^d_q$ to $Y$ via an element whose pairing with $\eta_p$ is $c_{pq}$.
\end{itemize} Finally, we build $X$ from $A'=A \vee \bigvee_{i=1}^r S^{d_1}_i \vee \bigvee_{j=1}^r S^{d_2}_j$ as follows: \begin{itemize} \item Add on cells so that for every $i$ and $j$, $X$ includes the fat wedge $\mathbb{V}(S^{d_1}_i,S^{d_2}_j,S^{n_1},\ldots,S^{n_{t}})$, and these fat wedges only intersect in $A'$. Let $\varphi_{ij}:S^d \to X$ be the attaching map of the missing $(d+1)$-cell for the $(i,j)$th fat wedge. \item Add on spheres $S^{d_1\prime}_i$ together with the mapping cylinder of a map $S^{d_1}_i \to S^{d_1\prime}_i$ of degree $\rho_1$, and spheres $S^{d_2\prime}_j$ together with the mapping cylinder of a map $S^{d_2}_j \to S^{d_2\prime}_j$ of degree $\rho_2$. \item Then, for each $q$, add a $(d+1)$-cell whose boundary is a representative of $\rho([S^d_q]-\sum_{i,j=1}^r a_{ij}^{(q)}[\varphi_{ij}])$, where $\rho$ is the exponent of the torsion part of $\pi_d(Y)$. \end{itemize} It is easy to see that $H_n(X,A)=0$ for $n>d$. We claim that $(X,A)$ and $f$ pose the desired extension problem. Indeed, any extension of $f$ to $\tilde f:X \to Y$ sends each $S^{d_1}_i$ to an element of $\rho_1\pi_{d_1}(Y)$ and each $S^{d_2}_j$ to an element of $\rho_2\pi_{d_2}(Y)$, as constrained by the mapping cylinders. Now if we write \begin{equation} \label{assign} \tilde f_*[S^{d_1}_i]=\text{torsion}+\sum_{k=1}^m u_kg_k \qquad\text{and}\qquad \tilde f_*[S^{d_2}_j]=\text{torsion}+\sum_{\ell=1}^n v_\ell h_\ell, \end{equation} then the $(d+1)$-cells force, via \eqref{eqn-generator}, a relationship between the $u_k$, $v_\ell$, and $c_{pq}$ which is exactly \eqref{Q-BIL}. Conversely, given $u_k$ and $v_\ell$ satisfying \eqref{Q-BIL}, there is an extension $\tilde f:X \to Y$ satisfying \eqref{assign}. To see this, note that there is clearly an extension to the fat wedges and the mapping cylinders. Moreover, under any such extension, $f_*[S^d_q]$ and $\sum_{i,j=1}^r a_{ij}^{(q)}\tilde f_*[\varphi_{ij}] \in \pi_d(Y)$ are rationally equivalent; thus when multiplied by $\rho$ they are equal, and the map extends to the $(d+1)$-cells of $X$. \end{proof} \bibliographystyle{amsalpha}
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Preliminary} \label{sec:pre} We assume that a dataset $D$ consists of samples of the form $({\bm{x}},y)\in \mathcal{X}\times\mathcal{Y}$, where ${\bm{x}}$ is the feature and $y$ is the label. A model $f$ is a mapping from the feature space to the labels, i.e., $f: \mathcal{X}\rightarrow \mathcal{Y}$. We assume that the model is parameterized by $\theta\in\mathbb{R}^{p}$. We further define a loss function $\ell(f({\bm{x}}),y)$ which measures the performance of the model on a data point, e.g., the cross-entropy loss for a classification task. We also write the loss function as $\ell(\theta, d)$ for a data point $d=({\bm{x}},y)$ in this paper. The learning process is conducted by minimizing the \emph{empirical loss}: $\sum_{d\in D}\ell(\theta, d)$. The data are often divided into a training set $D_{train}$ and a test set $D_{test}$ to properly evaluate the model performance on unseen samples. The generalization gap $G$ represents the difference of the model performance between the training set and the test set, \begin{equation} \begin{aligned} \label{eq:generalization} G=\mathbb{E}_{d\sim D_{test}}[\ell(\theta, d)]-\mathbb{E}_{d\sim D_{train}}[\ell(\theta,d)]. \end{aligned} \end{equation} \subsection{Data Augmentation} Data augmentation is well known as a good way to improve generalization. It transforms each sample into similar variants and uses the transformed variants as the training samples. We use $\mathcal{T}$ to denote the set of all possible transformations. For a given data point $d$, each transformation $t\in \mathcal{T}$ generates one augmented instance $t(d)=(\tilde {\bm{x}},y)$. For example, if ${\bm{x}}$ is a natural image, the transformation could be a rotation by a specific degree or a flip over the horizontal direction. The set $\mathcal{T}$ then contains the transformations with all possible rotation degrees and all directional flips. The size of $\mathcal{T}$ may be infinite and we usually only use a subset in practice. Let $T\subset \mathcal{T}$ be a subset of transformations. The cardinality $|T|$ controls the strength of the data augmentation. We use $T(d)=\{t(d);t\in T\}$ and $\ell_{T}(\theta, d)=\{\ell(\theta,\tilde d);\tilde d\in T(d)\}$ to denote the set of augmented instances and the corresponding loss values. With data augmentation, the learning objective is to fit the augmented instances \begin{equation} \begin{aligned} \label{eq:augmented_obj} \theta=\argmin_{\theta}\sum_{d\in D}\;\sum_{ \tilde d \in T(d)} \ell(\theta,\tilde d). \end{aligned} \end{equation} \subsection{Membership Inference} Membership inference is a widely used tool to quantitatively analyze the information leakage of a trained model. Suppose the whole dataset consists of $n$ i.i.d. samples $d_{1},\ldots,d_{n}$ from a data distribution, from which we choose a subset as the training set. We decide membership using $n$ i.i.d. Bernoulli samples $\{m_{1},\ldots,m_{n}\}$ with a positive probability $\mathbb{P}(m_{i}=1)=q$. Sample $d_i$ is used to train the model if $m_{i}=1$ and is not used if $m_{i}=0$. Given the learned parameters $\theta$ and $d_{i}$, membership inference is the task of inferring $m_{i}$, which amounts to computing $\mathbb{P}(m_{i}=1|\theta,d_{i})$. That is to say, membership inference aims to find the posterior distribution of $m_{i}$ for given $\theta$ and $d_{i}$. Specifically, \citet{sablayrolles2019white} show that it is sufficient to use the loss of the target model to determine the membership $m_i$ under some assumption on the posterior distribution of $\theta$.
They predict $m_{i}=1$ if $\ell(\theta,d_{i})$ is smaller than a threshold $\tau$, i.e. \begin{equation} \label{eq:m_loss} \begin{aligned} M_{loss}(\theta,d_{i})=1\;\;\;\; \text{if} \;\;\;\; \ell(\theta,d_{i})<\tau. \end{aligned} \end{equation} This membership inference is well formulated for models trained on the original samples. However, it is not clear how to conduct membership inference, and what the optimal algorithm is, when data augmentation is used in the training process\footnote{\citet{sablayrolles2019white} directly applies the algorithm (Equation~\ref{eq:m_loss}) to the case with data augmentation.}. We analyze these questions in the next sections. \section{Experiments} \label{sec:exp} In this section, we empirically compare the proposed inference algorithms with the state-of-the-art membership inference attack algorithm, and demonstrate that the proposed algorithms achieve superior performance across different datasets, models, and choices of data augmentation. \begin{table*} \small \renewcommand{\arraystretch}{1.} \centering \caption{Membership inference success rates (in $\%$). We report top-1 test accuracy for CIFAR10 and top-5 accuracy for ImageNet. The numbers under each algorithm name are the attack success rates. When $k=0$, we run the proposed methods with 10 randomly augmented instances as input anyway. The baseline attack $M_{loss}$ is introduced in Section~\ref{sec:pre}. The row with $k=0$ denotes that the model is trained without data augmentation. Test accuracy denotes the target model's classification accuracy on the test set. } \label{tbl:brief_attack_success_rates} \begin{tabular}{llllllll} \hline \hline Model & Dataset & $k=|T|$ & Test accuracy & $M_{loss}$ & $M_{NN\_loss}$ & $M_{mean}$ & $M_{moments}$\\ \hline \multirow{2}{*}{2-layer ConvNet}& CIFAR10 & \multirow{1}{*}{$k=0$}& 59.7 & 83.7 & 83.6 & 83.7 & 83.7 \\\cline{2-8} & CIFAR10 &\multirow{1}{*}{$ k=3$} & 64.6 & 82.2 & 85.7 & 90.3 & \textbf{91.3} \\\hline \multirow{2}{*}{ResNet110} & CIFAR10 & \multirow{1}{*}{$k=0$}& 84.9 & 65.4 & 65.4 & 65.4 & 65.6 \\\cline{2-8} & CIFAR10 &\multirow{1}{*}{$ k=10$} & 92.7 & 58.8 & 61.8 & 66.3 & \textbf{67.1} \\\hline \multirow{2}{*}{WRN16-8} & CIFAR10 &\multirow{1}{*}{$k=0$}& 89.7 & 62.9 & 62.8 & 62.8 & 62.9 \\\cline{2-8} & CIFAR10 & \multirow{1}{*}{$ k=10$} & 95.2 & 61.9 & 63.1 & 68.9 & \textbf{70.1} \\\hline \multirow{1}{*}{ResNet101} & ImageNet & \multirow{1}{*}{$ k=10$} & 93.9 & 68.3 & 68.9 & 73.9 & \textbf{75.2} \\\hline \hline \end{tabular} \end{table*} We first introduce the datasets and target models with the details of the experiment setup. Our source code is publicly available\footnote{\url{https://github.com/dayu11/MI_with_DA}}. \subsubsection{Datasets} We use benchmark datasets for image classification: CIFAR10, CIFAR100, and ImageNet1000. CIFAR10 and CIFAR100 both have 60000 examples including 50000 training samples and 10000 test samples. CIFAR10 and CIFAR100 have 10 and 100 classes, respectively. ImageNet1000 contains more than one million high-resolution images with 1000 classes. We use the training and validation sets provided by ILSVRC2012\footnote{\url{http://image-net.org/challenges/LSVRC/2012/}.}. \subsubsection{Details of used data augmentation} We consider $6$ standard transformations from the image processing literature, including flipping, cropping, rotation, translation, shearing, and cutout \cite{devries2017improved}. For each $t\in\mathcal{T}$, the operations are applied in a random order and each operation is conducted with a randomly chosen parameter (e.g.
random rotation degrees). Following common practice, we sample different transformations for different training samples. \subsubsection{Target models} We choose target models with varying capacity, including a small convolution model used in previous work \cite{shokri2017membership,sablayrolles2019white}, a deep ResNet \cite{he2016deep}, and a wide ResNet \cite{zagoruyko2016wide}. The small convolution model contains $2$ convolution layers with $64$ kernels, a global pooling layer, and a fully connected layer of size $128$. The small model is trained for $200$ epochs with initial learning rate 0.01. We decay the learning rate by a factor of $10$ at the 100th epoch. Following \citet{shokri2017membership, sablayrolles2019white}, we randomly choose $15000$ samples as the training set for the small model. The ResNet models for CIFAR are a deep ResNet with 110 layers and a wide ResNet, WRN16-8. The detailed configurations and training recipes for the deep/wide ResNets can be found in the original papers. For ImageNet1000, we use the ResNet101 model and follow the training recipe in \citet{sablayrolles2019white}. \subsubsection{Implementation details of membership inference algorithms} All the augmented instances are randomly generated. We use $k$ to denote the number of augmented instances for one image. The number of augmented images is the same for training the target models and for conducting membership inference attacks. The benchmark algorithm is $M_{loss}$, which achieves the state-of-the-art black-box membership inference success rate \cite{sablayrolles2019white}. For $M_{loss}$, \emph{we report the best result among using every element in $\ell_{T}(\theta,d)$ and the loss of the original image}. We tune the thresholds of $M_{loss}$ and $M_{mean}$ on validation data following previous work \cite{sablayrolles2019white, song2019privacy}. For $M_{NN\_loss}$ and $M_{moments}$, we use $200$ samples from the training set of the target model and $200$ samples from the test set to build the training data of the inference network. The inference network has two hidden layers with $20$ neurons each and Tanh as the activation function. We randomly choose $2500$ samples from the training set of the target model and $2500$ samples from the test set to evaluate the inference success rate. The samples used to evaluate the inference success rate have no overlap with the inference model's training data. Other implementation details can be found in our submitted code. \subsubsection{Experiment Results} We first present the inference success rate for a single $k$. We use $k=10$ by default. For the 2-layer ConvNet, we choose $k=3$ because of its small capacity. The results are presented in Table~\ref{tbl:brief_attack_success_rates}. When data augmentation is used, algorithms using $\ell_{T}(\theta,d)$ universally outperform $M_{loss}$. Algorithm~\ref{alg:losses} has an inferior inference success rate compared to $M_{mean}$ and $M_{moments}$ because it is not robust to permutations of the input features. The best inference success rate is achieved by $M_{moments}$, which utilizes the most information while being invariant to permutations of $\ell_{T}(\theta,d)$. Remarkably, when $k=10$, $M_{moments}$ achieves an inference success rate higher than $70\%$ against WRN16-8, whose top-1 test accuracy on CIFAR10 is more than $95\%$! Moreover, in Table~\ref{tbl:brief_attack_success_rates}, our algorithm on models trained with data augmentation obtains a higher inference success rate than the previous algorithm ($M_{loss}$) on models trained without data augmentation.
We note that the generalization gap of models with data augmentation is much smaller than that of models without data augmentation. \emph{This observation challenges the common belief that models with better generalization provide better privacy. } We further plot the inference success rates of $M_{loss}$, $M_{mean}$ and $M_{moments}$ with varying $k$ in Figure~\ref{fig:varying_N}. For all algorithms, the inference success rate gradually decreases as $k$ becomes large. Nonetheless, our algorithms consistently outperform $M_{loss}$ by a large margin for all $k$. \section{Connection with Differential Privacy} \label{sec:dp} Differential privacy (DP) measures how a single data point affects the parameter posterior in the worst case. In this section, we show that an algorithm with a DP guarantee provides an upper bound on membership inference. DP is defined for a randomized algorithm $\mathcal{A}$ applied to two datasets $D$ and $D'$ that differ from each other in one sample, denoted as $D\sim^{1}D'$. Differential privacy ensures that the change of an arbitrary instance does not significantly change the algorithm's output. \begin{definition} ($\epsilon$-differential privacy \cite{dwork2006calibrating}) A randomized learning algorithm $\mathcal{A}$ is $\epsilon$-differentially private with respect to $D$ if for any subset of possible outcomes $S$ we have $\max_{D\sim^{1}D^{'}} \frac{\mathbb{P}(\mathcal{A}=S|D)}{\mathbb{P}(\mathcal{A}=S|D^{'})}\leq e^{\epsilon}.$ \end{definition} However, in the formula of Theorem~\ref{lma:optimal_mi}, the change/removal of one sample $d_1$ implies the change/removal of a set of training instances $T(d_1)$. We need \emph{group differential privacy} to give an upper bound on the quantity in Theorem~\ref{lma:optimal_mi}. Let $D$ be a training set with $n$ samples and let $D\sim^{k}D'$ denote that the two datasets differ in $k$ instances. Group differential privacy and differential privacy are connected via the following property. \begin{remark} (Group differential privacy) \label{rmk:gdp} If $\mathcal{A}$ is $\epsilon$-differentially private with respect to $D$, then it is also $k\epsilon$-group differentially private for the group size $k$. \end{remark} Let $D_{aug}=\{T(d_{i});m_{i}=1,i\in [n]\}$ be the augmented training set with $k$ transformations, i.e., $|T(d_i)| = k$. For mean-query-based algorithms (e.g., gradient descent), the sensitivity of any single instance is reduced by a factor of $k$. Therefore, a learning algorithm $\mathcal{A}$ that is $\epsilon$-differentially private with respect to dataset $D$ is $\frac{\epsilon}{k}$-differentially private with respect to $D_{aug}$\footnote{The $\frac{\epsilon}{k}$-DP is at instance level, i.e. $D_{aug}\sim^1 D_{aug}'$.}. With this observation, we have an upper bound on the optimal membership inference of Theorem~\ref{lma:optimal_mi}. \begin{proposition} \label{pro:upper_bound} If the learning algorithm is $\frac{\epsilon}{k}$-differentially private with respect to $D_{aug}$, we have \[\mathbb{P}(m_{1}=1|\theta,T(d_{1}))\leq \sigma\left(\epsilon+\log(q/(1-q))\right).\] \end{proposition} \begin{proof} For any given $\mathcal{K}$, we have \begin{flalign} &\frac{\mathbb{P}(\theta|m_{1}=1,T(d_{1}),\mathcal{K})}{\mathbb{P}(\theta|m_{1}=0,T(d_{1}),\mathcal{K})} \leq \max_{D_{aug}\sim^{k}D^{'}_{aug}}\frac{\mathbb{P}(\mathcal{A}=S|D_{aug})}{\mathbb{P}(\mathcal{A}=S|D^{'}_{aug})}\nonumber\\ &\leq e^{\epsilon}.
\label{eq:upper_bound_dp} \end{flalign} The first inequality holds because the two conditional distributions correspond to augmented datasets that differ in the $k$ instances $T(d_{1})$, and the second inequality follows from group differential privacy (Remark~\ref{rmk:gdp}) applied to the $\frac{\epsilon}{k}$-differentially private algorithm. Substituting Eq~(\ref{eq:upper_bound_dp}) into Theorem~\ref{lma:optimal_mi} yields the desired bound. \end{proof} Proposition~\ref{pro:upper_bound} shows that if the learning algorithm is $\frac{\epsilon}{k}$-DP with respect to $D_{aug}$, which is true for differentially private gradient descent \cite{bassily2014private}, the upper bound on the optimal membership inference is not affected by the number of transformations $k$. This is in contrast with the previous membership inference algorithm that only considers a single instance \cite{sablayrolles2019white}, i.e., formulated as $\mathbb{P}(m_{1}=1|\theta, \tilde d_{1})$, where $\tilde d_{1}$ can be any element in $T(d_{1})$. By the result in \citet{sablayrolles2019white}, the upper bound of $\mathbb{P}(m_{1}=1|\theta, \tilde d_{1})$ scales with $\frac{\epsilon}{k}$ for mean-query-based algorithms, which decreases monotonically with $k$. This suggests that the algorithm in \citet{sablayrolles2019white} has limited performance, especially when $k$ is large. \section{Conclusion} \label{sec:conclusion} In this paper, we revisit the influence of data augmentation on the privacy risk of machine learning models. We show that the optimal membership inference in this case explicitly depends on the augmented dataset (Theorem~\ref{lma:optimal_mi}). When the posterior distribution of the parameters follows the Bayesian form of Eq~(\ref{eq:assumption}), we give an explicit expression for the optimal membership inference (Theorem~\ref{thm:opt_bayesian}). Our theoretical analysis inspires us to design practical attack algorithms. Our algorithms achieve state-of-the-art membership inference success rates against well-generalized models, suggesting that the privacy risk of existing deep learning models may be largely underestimated. An important future research direction is to mitigate the privacy risk incurred by data augmentation. \section*{Acknowledgments} Da Yu and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1711261,U1811264,U1811261,U1911203,U2001211), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R\&D Program of Guangdong Province (2018B010107005). Huishuai Zhang and Jian Yin are corresponding authors. \section{Introduction} The training process of a machine learning model often needs access to private data, e.g., in applications in the financial and medical fields. Recent works have shown that the trained model may leak information about its private training set \citep{fredrikson2015model, wu2016methodology, shokri2017membership, hitaj2017deep}. As machine learning models are ubiquitously deployed in real-world applications, it is important to quantitatively analyze the information leakage about their training sets. One fundamental approach reflecting the privacy leakage of a model about its training set is \emph{membership inference} \cite{shokri2017membership,yeom2018privacy,salem2018ml,nasr2018machine,long2018understanding,jia2019memguard,song2019privacy, chen2020machine}, i.e., an adversary, who has access to a target model, determines whether a data point was used to train the target model (a member) or not (a non-member). A membership inference (MI) attack is formulated as a binary classification task.
A widely adopted measure for the performance of an MI attack algorithm in the literature is the MI success rate over a balanced set that contains half training samples and half test samples. A randomly guessing attack has a success rate of $50\%$, and hence a good MI algorithm should have a success rate above $50\%$. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{imgs/MI.png} \caption{Overview of black-box membership inference in machine learning. The adversary has access to the target model's outputs for given samples. The adversary then infers whether a sample is in the target model's training set or not. A higher inference success rate indicates more severe privacy leakage.} \label{fig:MI} \end{figure} It is widely believed that the capability of membership inference is largely attributed to the generalization gap \cite{shokri2017membership,yeom2018privacy,li2020membership}. The larger the performance difference of the target model between the training set and the test set, the easier it is to determine the membership of a sample with respect to the target model. \emph{Data augmentation} is known to be an effective approach to produce well-generalized models. Indeed, existing MI algorithms obtain significantly lower MI success rates against models trained with data augmentation than those trained without data augmentation \cite{sablayrolles2019white}. It seems that the privacy risk is largely mitigated when data augmentation is used. We challenge this belief by carefully analyzing how data augmentation affects the MI attack. We first establish, from the Bayesian perspective, the optimal membership inference when the model is trained with data augmentation. The optimal membership inference indicates that we should use the set of augmented instances of a given sample, rather than a single sample, to decide the membership. This matches the intuition because the model is trained to fit the augmented data points instead of a single data point. We also explore the connection between optimal membership inference and group differential privacy, and obtain an upper bound on the success rate of MI attacks. In this paper, we focus on \emph{black-box} membership inference \cite{shokri2017membership,yeom2018privacy,salem2018ml,song2019privacy,sablayrolles2019white}. We give an illustration of black-box MI in Figure~\ref{fig:MI}. The black-box setting naturally arises in the \emph{machine learning as a service (MLaaS)} paradigm. In MLaaS, a service provider trains an ML model on private crowd-sourced data and releases the model to users through a prediction API. Under the black-box setting, one has access to the model's output for a given sample. Typical outputs are the loss value \cite{yeom2018privacy,sablayrolles2019white} and the predicted logits \cite{shokri2017membership,salem2018ml}. We use the loss value of a given sample as it is shown to be better than the logits \cite{sablayrolles2019white}. Motivated by the optimal membership inference, we formulate membership inference as a set classification problem, where the set consists of the loss values of the augmented instances of a sample evaluated on the target model. We design two new algorithms for the set classification problem. The first algorithm applies a threshold to the average of the loss values of the augmented instances, which is inspired by the expression of the optimal membership inference. The second algorithm uses a neural network as the membership classifier.
For the second algorithm, we show it is important to design features that are invariant to permutations of the loss values. Extensive experiments demonstrate that our algorithms significantly improve the success rate over existing membership inference algorithms. We even find that the proposed approaches, applied to models trained with data augmentation, achieve a higher MI attack success rate than existing methods applied to models trained without data augmentation. Notably, our approaches achieve $>70\%$ MI attack success rate against a wide residual network whose test accuracy on CIFAR10 is more than $95\%$. Our contributions can be summarized as follows. First, we establish the optimal membership inference when the model is trained with data augmentation. Second, we formulate the membership inference as a set classification problem and propose two new approaches to conduct membership inference, which achieve significant improvement over existing methods. This suggests that \emph{the privacy risk of models trained with data augmentation could be largely underestimated}. To the best of our knowledge, this is the first work to systematically study the effect of data augmentation on membership inference and reveal non-trivial theoretical and empirical findings. \subsection{Related Work} Recent works have explored the relation between the generalization gap and the success rate of membership inference. \citet{shokri2017membership, sablayrolles2019white} empirically observe that better generalization leads to a worse inference success rate. \citet{yeom2018privacy} show that the success rates of some simple attacks are directly related to the model's generalization gap. For a given model, \citet{li2020membership} empirically verify that the success rate of an MI attack is upper bounded by the generalization gap. However, whether or not the target model is trained with data augmentation, the analyses and algorithms of previous work only use a single instance to decide membership. Our work fills this gap by formulating and analyzing membership inference when data augmentation is applied. \citet{song2019privacy} show that \emph{adversarially robust} \cite{madry2018towards} models are more vulnerable to MI attacks. They identify that one major reason for this phenomenon is the increased generalization gap caused by adversarial training. They also design an empirical attack algorithm that leverages adversarially perturbed images (this process requires white-box access to the target model). In this paper, we choose perturbations following the common practice of data augmentation, which reduce the generalization gap and do not require white-box access to the target model. Differential privacy \citep{dwork2006calibrating, algofound} controls how a single sample could change the parameter distribution in the worst case. Understanding how data augmentation affects the DP guarantee helps us understand how data augmentation affects membership inference. In Section~\ref{sec:dp}, we discuss the relation between data augmentation, differential privacy, and membership inference. \section{Optimal Membership Inference with Augmented Data} \label{sec:optimal_mi} When data augmentation is applied, the process \[\{d_{i}\} \rightarrow \{T(d_{i})\} \rightarrow \{\theta, m_i\}\] forms a \emph{Markov chain}; this follows from the learning process described above. That is to say, given $T(d_{i})$, $d_i$ is independent of $\{\theta, m_i\}$.
Hence we have \[H(m_i| \theta, T(d_i)) = H(m_i|\theta, T(d_i), d_i) \le H(m_i |\theta, d_i),\] where $H(\cdot| \cdot)$ is the conditional entropy \cite{ghahramani2006information}; the equality is due to the Markov chain and the inequality holds because conditioning cannot increase entropy. This indicates that there is less uncertainty about $m_i$ given $\{\theta, T(d_i)\}$ than given $\{\theta, d_i\}$. Based on this observation, we give the following definition. \begin{definition} (Membership inference with augmented data) \label{def:definition_mi_aug} For given parameters $\theta$, data point $d_{i}$ and transformation set $T$, membership inference computes \begin{equation} \begin{aligned} \mathbb{P}(m_{i}=1|\theta,T(d_{i})). \end{aligned} \end{equation} \end{definition} For the membership inference with augmented data given by Definition~\ref{def:definition_mi_aug}, we establish an equivalent formula in the Bayesian sense, which sets up the optimal limit that any algorithm can achieve. Without loss of generality, suppose we want to infer $m_{1}$. Let $\mathcal{K}=\{m_{2},\ldots,m_{n},T(d_{2}),\ldots,T(d_{n})\}$ be the status of the remaining data points. Theorem~\ref{lma:optimal_mi} provides the Bayes-optimal membership inference. \begin{theorem} \label{lma:optimal_mi} The optimal membership inference for given $\theta$ and $T(d_{1})$ is $\mathbb{P}(m_{1}=1|\theta,T(d_{1}))=$ \[\mathbb{E}_{\mathcal{K}}\left[\sigma\left(\log\left(\frac{\mathbb{P}(\theta|m_{1}=1,T(d_{1}),\mathcal{K})}{\mathbb{P}(\theta|m_{1}=0,T(d_{1}),\mathcal{K})}\right)+\log\left(\frac{q}{1-q}\right)\right)\right],\] where $\sigma(x):=(1+e^{-x})^{-1}$ is the sigmoid function and $q :=\mathbb{P}(m_{1}=1)$ is a constant. \end{theorem} \begin{proof} Applying the law of total expectation and Bayes' theorem, we have \begin{equation} \begin{aligned} \label{eq:pro_lma1_0} &\mathbb{P}(m_{1}=1|\theta,T(d_{1}))=\mathbb{E}_{\mathcal{K}}[\mathbb{P}(m_{1}=1|\theta,T(d_{1}),\mathcal{K})]\\ &=\mathbb{E}_{\mathcal{K}}\left[\frac{\mathbb{P}(\theta|m_{1}=1,T(d_{1}),\mathcal{K})\mathbb{P}(m_{1}=1)}{\mathbb{P}(\theta|T(d_{1}),\mathcal{K})}\right]. \end{aligned} \end{equation} Substituting $q:=\mathbb{P}(m_{1}=1)$, let {\small \begin{equation} \begin{aligned} \label{eq:pro_lma1_defineab} \alpha :=\mathbb{P}(\theta|m_{1}=1,T(d_{1}),\mathcal{K}),\;\beta :=\mathbb{P}(\theta|m_{1}=0,T(d_{1}),\mathcal{K}). \end{aligned} \end{equation} } Notice that $\mathbb{P}(\theta|T(d_{1}),\mathcal{K})=q\alpha+(1-q)\beta$. Then rearranging Eq~(\ref{eq:pro_lma1_0}) gives \begin{equation} \begin{aligned} \label{eq:pro_lma1_1} \mathbb{P}(m_{1}=1|\theta,T(d_{1}))=\mathbb{E}_{\mathcal{K}}\left[\left(1+(\frac{1-q}{q})\frac{\beta}{\alpha}\right)^{-1}\right]. \end{aligned} \end{equation} Since $\left(1+\frac{1-q}{q}\frac{\beta}{\alpha}\right)^{-1}=\sigma\left(\log\frac{\alpha}{\beta}+\log\frac{q}{1-q}\right)$, this concludes the proof. \end{proof} We note that the expression in Theorem~\ref{lma:optimal_mi} measures how a single data point affects the parameter posterior in expectation. This is connected with \emph{differential privacy} \cite{dwork2006calibrating, dwork2006our}, which measures how a single data point affects the parameter posterior in the worst case. We discuss the relation between data augmentation, differential privacy, and membership inference in Section~\ref{sec:dp}. \section{Membership Inference with Augmented Data Under a Posterior Assumption} \label{sec:mi_bayesian} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{imgs/loss_hist.png} \caption{Distribution of single loss values on the CIFAR10 dataset.
The model is ResNet110 trained with $|T|=10$. The plot uses 10000 examples from the training set and 10000 examples from the test set. The dark region is the overlap area between the training and test distributions. The membership of a value inside the overlap region is hard to decide. } \label{fig:loss_hist} \end{figure} In this section we first show that the optimal membership inference explicitly depends on the loss values of the augmented examples when $\theta$ follows a \emph{posterior} distribution. Then we give a membership inference algorithm based on our theory. \subsection{Optimal Membership Inference Under a Posterior Assumption} In order to further explicate the optimal membership inference (Theorem \ref{lma:optimal_mi}), we need knowledge of the probability density function of $\theta$. Following the wisdom of energy-based models \cite{lecun2006tutorial,du2019implicit}, we assume that the posterior distribution has the form \begin{equation} \begin{aligned} \label{eq:assumption} p(\theta|m_{1},T(d_{1}),\mathcal{K})\propto \exp\left(-\frac{1}{\gamma}L(\theta)\right), \end{aligned} \end{equation} where $L(\theta)=\sum_{i=1}^{n}m_{i}\sum \ell_{T}(\theta,d_{i})\geq 0$ is the objective to be optimized and $\gamma$ is the temperature parameter. We note that Eq~(\ref{eq:assumption}) matches the intuition that parameters with lower loss on the training set are more likely to appear after training. Let $p_{\mathcal{K}}(\theta)=\frac{\exp(-\frac{1}{\gamma}\sum_{i=2}^{n}m_{i}\sum \ell_{T}(\theta,d_{i}))}{\int_{z}\exp(-\frac{1}{\gamma}\sum_{i=2}^{n}m_{i}\sum \ell_{T}(z,d_{i}))dz}$ be the PDF of $\theta$ given $\mathcal{K}$. The denominator is a normalizing constant ensuring $\int_{z}p_{\mathcal{K}}(z)dz=1$. Theorem~\ref{thm:opt_bayesian} presents the optimal algorithm under this assumption. \begin{theorem} \label{thm:opt_bayesian} Given parameters $\theta$ and $T(d_{1})$, the optimal membership inference is \[\mathbb{P}\left(m_{1}=1|\theta,T(d_{1})\right)=\mathbb{E}_{\mathcal{K}}\left[\sigma\left(\tau-\frac{1}{\gamma}\sum \ell_{T}(\theta, d_{1})+c_{q}\right)\right],\] where $\tau:=- \log\left(\int_{z}\exp(-\frac{1}{\gamma}\sum \ell_{T}(z, d_{1}))p_{\mathcal{K}}(z) dz\right)$, $c_{q}:=\log(q/(1-q))$ and $\sigma(\cdot)$ is the sigmoid function. \end{theorem} \begin{proof} For the $\alpha$ and $\beta$ defined in Eq~(\ref{eq:pro_lma1_defineab}), we have \begin{equation} \begin{aligned} \label{eq:thm0} \alpha &= \frac{e^{-(1/\gamma)\sum \ell_{T}(\theta,d_{1})}e^{-(1/\gamma)\sum_{i=2}^{n}m_{i}\sum \ell_{T}(\theta,d_{i})}}{\int_{z}e^{-(1/\gamma)\sum \ell_{T}(z,d_{1})}e^{-(1/\gamma)\sum_{i=2}^{n}m_{i}\sum \ell_{T}(z,d_{i})}dz}\\ & =\frac{e^{-(1/\gamma)\sum \ell_{T}(\theta,d_{1})} p_{\mathcal{K}}(\theta)}{\int_{z}e^{-(1/\gamma)\sum \ell_{T}(z,d_{1})}p_{\mathcal{K}}(z) dz} \end{aligned} \end{equation} and $\beta=p_{\mathcal{K}}(\theta)$. Therefore, we have $\log(\frac{\alpha}{\beta}) =$ \begin{equation} \begin{aligned} \label{eq:thm1} -\frac{1}{\gamma}\sum \ell_{T}(\theta,d_{1}) - \log\left(\int_{z}e^{-(1/\gamma)\sum \ell_{T}(z,d_{1})}p_{\mathcal{K}}(z) dz\right). \end{aligned} \end{equation} Then plugging Eq~(\ref{eq:thm1}) into Theorem~\ref{lma:optimal_mi} yields Theorem~\ref{thm:opt_bayesian}. \end{proof} The quantity $\tau$ in Theorem~\ref{thm:opt_bayesian} represents the typical magnitude of $\ell_{T}(\cdot,d_{1})$ on parameters trained without $T(d_{1})$. A smaller $\sum \ell_{T}(\theta,d_{1})$ indicates a higher $\mathbb{P}(m_{1}=1)$. This motivates us to design a membership inference algorithm based on a threshold on the loss values (see Algorithm \ref{alg:mean}).
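For concreteness, here is a minimal numpy sketch of this thresholding rule, anticipating Algorithm~\ref{alg:mean} (the function names and the balanced-accuracy criterion used to pick $\tau$ are our own illustration; the threshold can equally be tuned via shadow models):
\begin{verbatim}
import numpy as np

def mi_mean_attack(aug_losses, tau):
    # Predict membership from the k augmented-instance losses of one
    # sample: member iff the mean loss falls below the threshold tau.
    return float(np.mean(aug_losses)) < tau

def tune_threshold(member_means, nonmember_means):
    # Pick tau maximizing balanced accuracy on held-out data, where the
    # inputs are mean augmented losses of known members / non-members.
    member_means = np.asarray(member_means)
    nonmember_means = np.asarray(nonmember_means)
    best_tau, best_acc = 0.0, 0.0
    for tau in np.unique(np.concatenate([member_means, nonmember_means])):
        acc = 0.5 * np.mean(member_means < tau) \
            + 0.5 * np.mean(nonmember_means >= tau)
        if acc > best_acc:
            best_tau, best_acc = tau, acc
    return best_tau
\end{verbatim}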
Data points with loss values smaller than such a threshold are more likely to be training data. A second observation is that the optimal membership inference \emph{explicitly} depends on the set of loss values. Therefore, membership inference attacks against models trained with data augmentation ought to leverage the loss values of all augmented instances for a given sample. We give more empirical evidence in Section~\ref{subsec:mi_mean}. \subsection{Inference Algorithm in Practice} \label{subsec:mi_mean} Inspired by Theorem~\ref{thm:opt_bayesian}, we predict the membership by comparing $\frac{1}{k}\sum \ell_{T}(\theta,d_{i})$ with a given threshold. The pseudocode is presented in Algorithm~\ref{alg:mean}. \begin{algorithm} \caption{Membership inference with average loss values ($M_{mean}$).} \label{alg:mean} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Set of loss values $\ell_{T}(\theta,d)$, threshold $\tau$.} \Output{Boolean value, $true$ denotes $d$ is a member.} Compute $v=mean(\ell_{T})$. Return $v<\tau$. \end{algorithm} We can set the threshold $\tau$ in Algorithm~\ref{alg:mean} based on the outputs of shadow models or tune it on validation data as done in previous work \cite{sablayrolles2019white,song2019privacy}. Though simple, Algorithm~\ref{alg:mean} outperforms $M_{loss}$ by a large margin. The experiment results can be found in Section~\ref{sec:exp}. We now give some empirical evidence on why $M_{mean}$ is better than $M_{loss}$. We plot the bar chart of single loss values in Figure~\ref{fig:loss_hist} (we randomly sample one loss value for each example). We train the ResNet110 model \cite{he2016deep} on the CIFAR10 dataset\footnote{\url{https://www.cs.toronto.edu/~kriz/cifar.html}.}. We use the same transformation pool $\mathcal{T}$ as \citet{he2016deep}, which contains horizontal flipping and random cropping. As shown in Figure~\ref{fig:loss_hist}, the overlap area of the loss values between the training samples and the test samples is large when data augmentation is used. For values inside the overlap area, it is impossible for $M_{loss}$ to classify membership confidently. Therefore, the overlap area imposes a limit on the success rate of $M_{loss}$. Next, we plot the distribution of $\frac{1}{k}\sum \ell_{T}(\theta,d_{i})$ in Figure~\ref{fig:mean_std_hist}. The overlap area in Figure~\ref{fig:mean_std_hist} is significantly smaller compared to Figure~\ref{fig:loss_hist}. This indicates that classifying the mean of $\ell_{T}(\theta,d_{i})$ is easier than classifying a single loss value. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{imgs/loss_hist_mean_std.png} \caption{Distribution of the mean of $\ell_{T}(\theta,d_{i})$. The experiment setting is the same as in Figure~\ref{fig:loss_hist}. When using the mean as the metric, the overlap area between the training and test distributions is smaller than when using a single loss, which indicates that $\frac{1}{k}\sum \ell_{T}(\theta,d_{i})$ is a better feature.} \label{fig:mean_std_hist} \end{figure} \section{Membership Inference with Augmented Data Using Neural Network} \label{sec:mi_nn} We have shown that thresholding the mean of the loss values is the optimal membership inference when $\theta$ follows the posterior assumption, and we have demonstrated its good empirical performance. However, if in practice $\theta$ does not exactly follow the posterior assumption, it is possible to design features that incorporate more information than the average of the loss values to boost the membership inference success rate.
In this section, we use richer features of $\ell_T(\theta, d)$ as input and train a neural network $\mathcal{N}$ to perform the membership inference. The general algorithm is presented in Algorithm~\ref{alg:NN}. \begin{algorithm} \caption{Membership inference with neural network.} \label{alg:NN} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Set of loss values of a target sample $\ell_{T}(\theta,d)$; MI network $\mathcal{N}$ and hyperparameters $\mathcal{H}$; some raw data $\mathcal{S}:=\{(\ell_T(\theta, \hat{d}), \mathbf{1}_{\hat{d}\in D_{train}})\}$.} \Output{Boolean value, $true$ denotes $d$ is a member.} Build input feature vectors ${\bm{v}}$ from $\ell_{T}(\theta,\hat{d})$ and construct a training set $\mathcal{S}':=\{({\bm{v}}, \mathbf{1}_{\hat{d}\in D_{train}})\}$; Use the training set $\mathcal{S}'$ and hyperparameters $\mathcal{H}$ to train the MI network $\mathcal{N}$; Build the feature vector ${\bm{v}}_d$ from $\ell_{T}(\theta,d)$ in the same way; Return $\mathcal{N}({\bm{v}}_d)$. \end{algorithm} In Algorithm~\ref{alg:NN}, each record in the raw data $\mathcal{S}$ consists of the loss values of a given example and the corresponding membership. The training data of the MI network are built from $\mathcal{S}$. Specifically, the loss values of each record are transformed into an input feature vector for the MI network $\mathcal{N}$. The key point is then the design of the input features of the network $\mathcal{N}$. We first use the raw values in $\ell_{T}(\theta,d)$ as features. We show this solution has poor performance because it is not robust to permutations of the loss values. Then we design permutation invariant features through the \emph{raw moments} of $\ell_{T}(\theta,d)$ and demonstrate their superior performance. \subsection{A Bad Solution} A straightforward implementation is to train a neural network as a classifier whose inputs are the loss values of all the augmented instances for a target sample. The pseudocode of this implementation is presented in Algorithm~\ref{alg:losses}. We refer to this approach as $M_{NN\_loss}$. Surprisingly, the success rate of $M_{NN\_loss}$ is much lower than that of $M_{mean}$, even though $M_{NN\_loss}$ has access to more information. \begin{algorithm} \caption{Generating input features from raw losses.} \label{alg:losses} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Set of loss values $\ell_T(\theta, \hat{d})$.} \Output{Feature vector ${\bm{v}}$.} Concatenate the elements in $\ell_{T}$ into vector ${\bm{v}}$. Return ${\bm{v}}$. \end{algorithm} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{imgs/per_noinv.png} \caption{A neural network is not robust to permutations of its input features. Changing the order of the features will change the prediction. However, the order of the augmented instances is not relevant to the membership. } \label{fig:per_noinv} \end{figure} We note that, unlike in a standard classification task, the order of elements in the set $\ell_{T}(\theta,d)=\{\ell(\theta,\tilde d);\tilde d\in T(d)\}$ should not affect the decision of the MI classifier because of the nature of the problem. However, a standard neural network is not invariant to permutations of its input features. For a neuron with non-trivial weights, changing the positions of the input features would change its output. We illustrate this phenomenon in Figure~\ref{fig:per_noinv}: the order of elements in $\ell_{T}$, which is irrelevant to the target sample's membership, has a large influence on the output of the network.
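As a toy numerical illustration of this sensitivity (entirely our own construction), a randomly initialized one-hidden-layer Tanh network generally produces different outputs on a loss vector and on a permutation of it:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k = 10                                  # number of augmented instances
W1 = rng.normal(size=(k, 32))           # input-to-hidden weights
W2 = rng.normal(size=(32, 1))           # hidden-to-output weights

def mlp(v):
    # one hidden layer with Tanh non-linearity
    return (np.tanh(v @ W1) @ W2).item()

losses = rng.random(k)                  # stand-in for the set l_T(theta, d)
print(mlp(losses), mlp(losses[::-1]))   # differ for generic weights
\end{verbatim}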
\subsection{Building Permutation Invariant Features} Inspired by the failure of $M_{NN\_loss}$, we design features that are invariant to permutations of $\ell_{T}(\theta,d)$. We first define functions whose outputs are permutation invariant with respect to their inputs. Then we use permutation invariant functions to encode the loss values into permutation invariant features. Recall $k=|T|$ is the number of augmented instances for each sample. Let $a\in\mathbb{R}^{k}$ be a vector version of $\ell_{T}(\theta,d)$. Let $\pi\in\Pi$ be a permutation of the indices $\{1,\ldots,k\}$ and $P_{\pi}\in \mathbb{R}^{k\times k}$ be its corresponding permutation matrix. The following definition characterizes functions satisfying the permutation invariance property. \begin{definition} \label{def:invariant} A function $f:\mathbb{R}^{k}\rightarrow\mathbb{R}^{p}$ is permutation invariant if for arbitrary $\pi_{i},\pi_{j}\in\Pi$ and $a\in\mathbb{R}^{k}$: \[f(P_{\pi_{i}}a)=f(P_{\pi_{j}}a).\] \end{definition} Clearly, the \emph{mean} function in Algorithm~\ref{alg:mean} satisfies Definition~\ref{def:invariant}. However, using the $mean$ to encode $\ell_{T}(\theta,d)$ may introduce too much information loss. To better preserve the information, we turn to the raw moments of $\ell_{T}(\theta,d)$. The $i$th raw moment $v_{i}$ of a probability density (mass) function $p(z)$ can be computed as $v_{i}=\int_{-\infty}^{+\infty}z^{i}p(z)dz$. The moments of $\ell_{T}(\theta,d)$ can be computed easily because $\ell_{T}(\theta,d)$ is a valid empirical distribution with uniform probability mass. Shuffling the loss values does not change the moments. More importantly, for probability distributions on bounded intervals, the moments of all orders uniquely determine the distribution (known as the \emph{Hausdorff moment problem} \cite{shohat1943problem}). The pseudocode for generating permutation invariant features through raw moments is in Algorithm~\ref{alg:features}. \begin{algorithm} \caption{Generating permutation invariant features through raw moments.} \label{alg:features} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Set of loss values $\ell_{T}(\theta,d)$; the highest order of moments $m$.} \Output{Permutation invariant features ${\bm{v}}$} \For{$i\in [m]$}{ Compute the normalized $i$th raw moment: $v_{i}:=\left(\frac{1}{|T|}\sum_{l\in \ell_{T}(\theta,d)}l^{i}\right)^{1/i}$,} Concatenate $\{v_i; i\in[m]\}$ into a vector ${\bm{v}}$. Return ${\bm{v}}$. \end{algorithm} We note that any classifier using the features generated by Algorithm~\ref{alg:features} is permutation invariant with respect to $\ell_{T}(\theta,d)$. We then use Algorithm~\ref{alg:features} to construct $\mathcal{S}^{'}$ in Algorithm~\ref{alg:NN}. This approach is referred to as $M_{moments}$. In our experiments, $M_{moments}$ achieves the highest inference success rate. Experiment details and results can be found in Section~\ref{sec:exp}. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{imgs/varyingT_large.png} \caption{Membership inference success rates with varying $k$ on CIFAR10 and CIFAR100. The left y-axis denotes the membership inference attack success rate. The right y-axis denotes the test accuracy of the target models. Our algorithms achieve universally better performance on different datasets and models with varying choices of $k$. } \label{fig:varying_N} \end{figure}
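For concreteness, a direct numpy transcription of the moment features of Algorithm~\ref{alg:features} (only the function and argument names are our own):
\begin{verbatim}
import numpy as np

def moment_features(aug_losses, m):
    # Normalized raw moments v_i = (mean of l^i)^(1/i), i = 1..m, of the
    # empirical loss distribution; invariant to permutations of the input.
    l = np.asarray(aug_losses, dtype=float)
    return np.array([np.mean(l ** i) ** (1.0 / i)
                     for i in range(1, m + 1)])
\end{verbatim}
These feature vectors play the role of ${\bm{v}}$ when constructing the training set $\mathcal{S}'$ in Algorithm~\ref{alg:NN}.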
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} A program synthesis solver aims to find a program or expression $P$ that reasons about a set of input arguments $\vec{x}$ and satisfies some logical specification $\sigma$ for all possible values that those input arguments may take. That is, synthesis solvers find a solution to formulas of the following form: $$ \exists P \forall \vec{x}\,\, \sigma(P,\vec{x}). $$ However, the application of program synthesis to many real-world domains can be limited by the abilities of state-of-the-art solvers to reason about input arguments of types that are typically found in these domains. One instance of this is data structures like arrays, which, in many software applications, may be so large as to be practically considered infinite. The size of these data structures introduces the need for additional quantification, beyond that required for standard program synthesis, in both specifications and solution expressions. \paragraph{Invariant synthesis: }Consider the example of invariant synthesis. Symbolic methods for proving properties of programs use inductive arguments to reason about execution traces of potentially unbounded length. Many programs or software systems have a potentially unbounded state space, which might arise from unbounded data structures in memory, an unbounded number of threads, or an unbounded number of machines on a network. Inductive arguments for non-trivial properties of such systems require quantification over the state-holding elements, e.g., the elements of a potentially unbounded array, i.e., we require invariants of the form $$\forall i \in \{0, \ldots, n\}. I(i)$$ where $n$ denotes the number of components and where $I(i)$ is a property of the state of the component with index $i$. This paper addresses the general case of program synthesis in which we wish to synthesize an expression that may reason about unbounded data structures using quantifiers, and which satisfies a logical specification $\sigma$ that may itself contain additional quantifiers. Both the expression and the specification may contain quantifier alternations. Consider the task of synthesizing a safety invariant for the loop shown in Figure~\ref{fig:alternating_q}. The two arrays $A$ and $B$ are initially equal. In each iteration, two nondeterministically chosen elements of $A$ are swapped, and then every element of both arrays is incremented by $c$. We use~$\ast$~to indicate a non-deterministic choice. The assertion in this snippet checks that for every element of $A$ there exists an equal element in~$B$. The most natural way to formalize this property is to use a formula with one quantifier alternation. \begin{wrapfigure}{l}{0.5\linewidth} \begin{lstlisting}[style=examples] int $A[\,]$; int $B[\,]$; int c; assume: $\forall i \,\,. A[i]==B[i]$ assume: $c>0$ while($\ast$) x'=$\ast$; y'=$\ast$; $A'[x]=A[y]$ $A'[y]=A[x]$ swap($A[x]$, $A[y]$) $\forall i\, A'[i]=A[i]+c \wedge B'[i]=B[i]+c$ assert: $\forall x\,\exists y \,\,.A[x]=B[y]$ \end{lstlisting} \caption{A safety property with a quantifier alternation. \label{fig:alternating_q}} \end{wrapfigure} There are many verification problems like the one above, in many shapes and forms. In this particular instance, while it is easy to verify that the assertion above holds, it is difficult for humans to write such an invariant, and thus, there is a need for algorithmic methods that identify these invariants automatically. Nevertheless, to the best of our knowledge, no existing synthesis engine is able to generate the inductive invariant for the example above.
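For concreteness, the invariant-synthesis task induced by this loop has the standard inductive-invariant shape (our rendering; $\mathit{Init}$ and $\mathit{Trans}$ abbreviate the assumptions and the loop body of Figure~\ref{fig:alternating_q}, and $s=(A,B,c)$ ranges over program states): $$\exists I\;\forall s,s'.\;\big(\mathit{Init}(s)\Rightarrow I(s)\big)\,\wedge\,\big(I(s)\wedge \mathit{Trans}(s,s')\Rightarrow I(s')\big)\,\wedge\,\big(I(s)\Rightarrow \forall x\,\exists y\,.\,A[x]=B[y]\big).$$ A natural candidate for $I$ involves the same $\forall\exists$ alternation as the assertion itself, which is precisely the kind of expression that quantifier-free synthesis engines cannot produce.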
When formulated as a synthesis problem, CVC4~\cite{DBLP:journals/corr/abs-1806-08775} version 1.8 returns ``unknown'', and when formulated as a Constrained Horn Clause problem Z3~\cite{DBLP:conf/tacas/MouraB08,DBLP:journals/fmsd/KomuravelliGC16} (version 4.8.7) also returns ``unknown''. \paragraph{Sketching: }Another example application of program synthesis is program sketching~\cite{DBLP:conf/asplos/Solar-LezamaTBSS06}, in which a programmer provides a sketch of a program with some specification and asks a synthesizer to fill in the holes. The idea behind sketching is that it is often easier to write a specification that describes what some block of code should do than it is to write the code itself. However, writing specifications for programs that reason about arrays often requires quantification that is not currently supported by state-of-the-art solvers. This motivates the need for additional quantification support in specifications. To illustrate this point, we compile a set of program sketching benchmarks that require a program synthesizer to synthesize fragments of the Java StringUtils class, one of the most widely used Java utility classes. A string in Java, or C, is represented by an array of chars, and we model this as an array of integers. In Figure~\ref{fig:sketch_ex} we show an example program sketch asking the synthesizer to synthesize part of a method, denoted by $??$, which returns true if a string contains a given character. \begin{wrapfigure}{r}{0.6\linewidth} \begin{lstlisting}[style=examples] int str[]; int strLen; int ch; bool contains(str, strLen, ch) { for(int i=0; i<strLen; i++) if(??) return true; return false; } assert($\exists$i. str[i]=ch $\iff$contains(str, strLen, ch)) \end{lstlisting} \caption{Sketch example\label{fig:sketch_ex}} \end{wrapfigure} The natural way to write a specification for this code fragment, which should return $true$ if an array of chars contains a target char, and $false$ otherwise, is to use an existential quantifier: $(\exists i \, str[i] = ch)\iff contains(str, strLen, ch)$. Such quantification is not permitted within the syntax-guided synthesis format~\cite{sygus}, nor can such a problem be formulated within the constrained-horn-clause format~\cite{chc}. The existential quantifier puts this problem out of reach of the state-of-the-art Syntax-Guided Synthesis solver CVC4~\cite{DBLP:journals/corr/abs-1806-08775}. \paragraph{\textbf{SynRG}:} In this paper we present \textbf{SynRG}: Synthesis via Restriction and Generalization. The algorithm is based around a CounterExample Guided Inductive Synthesis~(CEGIS)~\cite{DBLP:conf/asplos/Solar-LezamaTBSS06} solver, a well-known paradigm for solving program synthesis problems of the form $\exists P \,\forall \vec{x}\, \sigma(P,\vec{x})$. Current state-of-the-art techniques can only handle quantifier-free specifications, $\sigma$, and programs, $P$. The original CEGIS algorithm performs synthesis in two stages: first it synthesizes a candidate solution that works for a \emph{simpler} version of the specification (i.e., it works for a subset of the possible inputs) and then verifies whether that candidate solution works for the full specification (i.e., for all the possible inputs). Using this principle, we perform synthesis by restricting a specification that contains quantifiers over infinite domains into a \emph{simpler} quantifier-free specification on a restricted domain.
We synthesize candidate solutions for this simpler specification, and then attempt to generalize them to satisfy the full specification. \textbf{SynRG} integrates well with existing syntax-guided synthesis solvers: by reducing complex synthesis specifications with nested quantification over infinite domains to quantifier-free specifications over restricted domains, we take advantage of state-of-the-art synthesis solvers for quantifier-free theories. We present an instance of the algorithm that solves formulas with quantifiers over the indices of arrays. In order to evaluate this algorithm, since there are currently no benchmarks for Syntax-Guided Synthesis that use arrays and quantifiers, we compile a set of benchmarks in the standard Syntax-Guided Synthesis Interchange Format~\cite{sygus}. This set of benchmarks contains basic invariant synthesis tasks, benchmarks adapted from the software verification competition~\cite{10.1007/978-3-030-45237-7_21}, and program sketching examples which require the synthesis solver to synthesize parts of the Java StringUtils class. We observe that the algorithm outperforms the state-of-the-art solvers; we hypothesize that this is owed to the fact that the array theory we target enjoys the \emph{small model property}~\cite{DBLP:conf/vmcai/BradleyMS06}, i.e., the validity of array formulas in this fragment can be determined in a finite domain by computing a set of indices that is often surprisingly small. We conjecture that some variant of this algorithmic framework would be extensible to any theory over infinite domains that has the small model property, that is, any theory in which, for any formula $f$ over an infinite domain, a formula $f_b$ can be constructed over some finite domain such that $f_b$ is satisfiable iff $f$ is satisfiable. \paragraph{Contributions: }The contributions of this paper are: \begin{itemize} \item \textbf{SynRG}: a general program synthesis algorithm which can synthesize expressions containing quantifiers that satisfy specifications which themselves contain quantifiers. As far as we know, this is the first solver general enough to automatically synthesize expressions containing alternating quantifiers. \item a new set of program synthesis benchmarks based on the Java StringUtils class expressed in the Syntax-Guided Synthesis Interchange Format (SyGuS-IF)~\cite{sygus} with additional quantification, basic invariant synthesis tasks and SV-COMP~\cite{10.1007/978-3-030-45237-7_21} benchmarks. \end{itemize} \begin{figure*} \centering \input{new_alg.tikz} \caption{SynRG: algorithm for synthesis of programs with quantifiers and arrays} \label{fig:basic_alg} \end{figure*} \section{Background} \label{sec:prelim} \subsection{Program synthesis} Program synthesis can be framed as the following existential second-order logic formula: $$ \exists P .\, \forall \vec{x}.\, \sigma(\vec{x}, P)$$ where $P$ ranges over functions (where a function is represented by the program computing it) and $\vec{x}$ ranges over ground terms. We interpret the ground terms over some domain, and we use ${\cal I}$ to denote the set of all possible inputs to the function, so in the formula above $\vec{x} \in {\cal I}$. Program sketching is a specific instance of this problem, where the specification constitutes a sketch of some program that contains holes and some assertions about the behavior of that code, and the holes are expressions to be synthesized. We allow $P$ and $\sigma$ to contain linear integer arithmetic, arrays and quantification over array indices.
We restrict the array indices to terms in Presburger arithmetic. CEGIS~\cite{DBLP:conf/asplos/Solar-LezamaTBSS06} is an algorithmic framework often used to tackle the program synthesis problem described above. It consists of two phases, the synthesis phase and the verification phase: Given the specification of the desired program, $\sigma$, the inductive synthesis procedure produces a candidate program~$P$ that satisfies the specification $\sigma(\vec{x},P)$ for all $\vec{x}$ in a set of inputs ${\cal I_G}$ (which is a subset of ${\cal I}$). The candidate program $P$ is passed to the verification phase, which checks whether $P$ is a full solution, i.e., it satisfies the specification $\sigma(\vec{x}, P)$ for all $\vec{x}$. If so, we have successfully synthesized a full solution and the algorithm terminates. Otherwise, the verifier returns a counterexample, i.e., a valuation of $\vec{x}$ for which the specification does not hold; this counterexample is added to the set of inputs ${\cal I_G}$ passed to the synthesizer, and the loop repeats. \subsubsection{Safety invariants} The synthesis of safety invariants can be formulated as a general program synthesis problem, and the Syntax Guided Synthesis Competition~\cite{DBLP:journals/corr/abs-1711-11438} has a track dedicated to this. We use safety invariants as an exemplar of program synthesis in this paper although our algorithm is able to tackle more general synthesis problems. Given a loop with a loop variable $x$, some initial conditions $init(x)$, a transition relation $trans(x,x')$, and a post condition $post(x)$, we synthesize an invariant $inv$ such that \begin{align*} \exists inv\; \forall x,x'.\quad & (init(x) \implies inv(x))\, \wedge \\ & (inv(x) \wedge trans(x,x') \implies inv(x'))\, \wedge \\ & (inv(x) \implies post(x)). \end{align*} Finding the simplest loop invariants involves solving this formula with a single quantifier alternation. This paper extends this to permit quantifiers in specifications as well as in the synthesized invariants. Use cases that benefit from specifications containing quantification include nested loops, the initialization of arrays, and asserting properties about arrays. Consider the example given in Figure~\ref{fig:running_ex}. A reachability problem such as this one requires reasoning with alternating quantification: a candidate inductive invariant for this loop must satisfy the base case of the induction proof, i.e., that $\forall \vec{x}\,\, (init(\vec{x}) \implies inv(\vec{x})),$ where $\vec{x}$ denotes the set of all possible inputs to the program, $init(\vec{x})$ asserts that the initial conditions of the program hold, and $inv(\vec{x})$ is the inductive invariant we wish to synthesize. This is equivalent to $$\forall \vec{x}\,\, (\neg init(\vec{x}) \vee inv(\vec{x})).$$ For Figure~\ref{fig:running_ex}, the initial conditions are $(c>0) \, \wedge \, (\forall i \,\,A[i]\geq 0)$, and thus we obtain the following base case of the induction proof: $$\forall A,c \,\, (c \leq 0) \, \vee \,(\exists i \,\,A[i]<0) \, \vee \,inv(A,c).$$ Note the quantifier alternation in $\forall A,c \exists i$. Nested quantifiers are not supported by recent works in synthesis of quantified invariants~\cite{DBLP:conf/atva/GurfinkelSV18, DBLP:conf/cav/FedyukovichPMG19}, and are explicitly prohibited in the Constrained-Horn Clause~\cite{chc} and in the SyGuS competition~\cite{sygus} formats. \begin{figure} \begin{center} \begin{lstlisting}[style=examples] int $A[\,]$; int $c=\ast$; assume: $c>0$ assume: $\forall i \,\,. A[i]\geq 0$
while($\ast$) $\forall i\,\, A'[i]=A[i]+c$ assert: $\neg \exists i\, A[i]<0$ \end{lstlisting} \caption{Simple running example\label{fig:running_ex}} \end{center} \end{figure} \section{Algorithm overview} The classic CEGIS algorithm breaks down a hard problem into an easier problem for synthesis, and then attempts to generalize a solution to the easier problem to the full problem. That is, the synthesis phase attempts to synthesize a solution that works only for a subset of inputs, i.e., it solves $\exists P\, \forall \vec{x} \in {\cal I_G}\, \sigma(P, \vec{x})$. The verification phase then checks whether a candidate solution satisfies the specification for all possible inputs. We apply a similar approach to CEGIS: We take a specification $\sigma$, which contains quantification over the infinite domain of arrays. We restrict the domain of arrays in $\sigma$, generating a restricted-domain specification $\sigma^b$, which considers only $b$ elements of each array. This specification then only contains quantification over finite domains, which can be removed with exhaustive quantifier instantiation. We synthesize a solution to $\sigma^b$ with an existing state-of-the-art Syntax-Guided Synthesis solver. We use $P^b$ to denote a solution to $\exists P \forall \vec{x} \sigma^b(P, \vec{x})$. We then attempt to generalize $P^b$ to give a candidate solution to the full specification, which we denote $P^*$, and verify whether $P^*$ is a solution to $\sigma$, i.e., we check whether $\exists \vec{x} \,\, \neg\sigma(P^*,\vec{x})$ is satisfiable. The generalization has two phases: a syntactic generalization phase and a synthesis-based generalization phase. If we fail to find a solution, we increase the bound $b$ used for generating $\sigma^b$. An overview of this general algorithmic framework is illustrated in Figure~\ref{fig:basic_alg}. \paragraph{\textbf{Verification phase: }} The verification phase, using a standard SMT-solver, guarantees the soundness of our approach. There are two verification phases; the first verifies the candidate produced by the first (syntactic) generalization phase. If this verification fails, then the incorrect solution is used by the synthesis-based generalization phase. The second verification phase is embedded inside the synthesis-based generalization, and if the synthesis-based generalization phase fails to produce a candidate that passes verification, we relax the restriction on the array size, i.e., the arrays in $\sigma^b$ increase in size, but are still smaller than the arrays in $\sigma$. \paragraph{\textbf{Synthesis phase: }} The synthesis phase solves the formula $\exists P^b\, \forall \vec{x}\, \sigma^b(P^b, \vec{x})$. This formula contains only theories permitted in SyGuS-IF~\cite{sygus}, the universal input format for SyGuS solvers, and we are able to apply standard syntax-guided synthesis algorithms. The synthesis problem is not, in general, decidable, and the syntax-guided synthesis algorithms that tackle this task are not complete~\cite{DBLP:journals/corr/CaulfieldRST15}; consequently \textbf{SynRG} is also not complete. The synthesis phase initially constructs the bounded synthesis query without using a syntactic template, and calls a solver with a time-out of $2$s. If this query returns a solution then we proceed to the generalization phase. If it does not, we add a syntactic template to the synthesis query and call a solver with a time-out of $60$s.
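Putting these phases together, the following listing gives a minimal Python-style sketch of the top-level loop of Figure~\ref{fig:basic_alg}. It is an illustration only: the helper functions \texttt{restrict}, \texttt{synthesize}, \texttt{syntactic\_generalize}, \texttt{synthesis\_generalize} and \texttt{verify} are hypothetical placeholders for the phases described in this section, not part of any existing solver API. \begin{lstlisting}[style=examples]
# Hedged sketch of the top-level loop; all helpers are placeholders.
def synthesize_with_restriction(spec, b=2, max_bound=100):
    while b <= max_bound:
        spec_b = restrict(spec, b)       # bound every array to b elements
        cand_b = synthesize(spec_b)      # quantifier-free SyGuS query
        if cand_b is not None:
            cand = syntactic_generalize(cand_b)  # reintroduce quantifiers
            if verify(cand, spec):               # full SMT verification
                return cand
            cand = synthesis_generalize(cand, spec)  # template-guided phase
            if cand is not None and verify(cand, spec):
                return cand
        b += 1       # relax the restriction and try again
    return None      # incomplete: the procedure may give up
\end{lstlisting}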
The $2$s and $60$s time-outs are heuristic choices: increasing them may allow \textbf{SynRG} to solve more benchmarks, but generally at the cost of speed, since each iteration takes longer. The syntactic template comprises a non-terminal for each non-array parameter or return type, all non-array parameters, all operators in linear integer arithmetic and array select operators for indices up to the bound of $\sigma^b$. For example, the following syntactic template would be used for a function with two array parameters, two integer parameters, a boolean return type, and a bound of $2$: \begin{lstlisting}[style=grammar] Program::= B B::= B $\wedge$ B|B $\vee$ B| not B|I $\geq$ I|I $\leq$ I|I = I I::= 0|1|y|z|I-I|I+I|arr1[0]|arr1[1]|arr2[0]|arr2[1] \end{lstlisting} Although these synthesis queries are executed sequentially, they do not depend on each other, so in practice they could be executed in parallel. \paragraph{\textbf{Restriction and Generalization: }}In the following sections, we discuss the fragments of array logic for which such a restricted-domain specification is guaranteed to exist. We then give details of a specific instance of this algorithmic framework, where the restricted-domain specification considers explicitly $b$ elements of every array, and the generalization is a two-stage approach based on applying a series of syntactic rules and then, if that fails, an additional synthesis-based generalization procedure. \section{The small model property} \label{sec:small-model} \label{sec:decidability} The algorithm we have presented benefits from the existence of a restricted-domain program $P^b$ that can be generalized to the unrestricted domain. In this section we prove that such a restricted-domain program is guaranteed to exist for a restricted fragment of array theory. \subsection{Fragment of array theory} Consider the case where the specification $\sigma$ and the solution to be synthesized $P$ are restricted in such a way that the verification query can be written as a boolean combination of array properties~\cite{DBLP:conf/vmcai/BradleyMS06}. \begin{definition}{Array property: } \label{def:frag} an array theory formula is called an array property~\cite{DBLP:conf/vmcai/BradleyMS06,DBLP:series/txtcs/KroeningS16} \emph{iff} it is of the form $$ \forall i_1\ldots \forall i_k \in T_I, \phi_I(i_1, ..., i_k) \implies \phi_V(i_1, ..., i_k),$$ where $T_I$ is the index theory, $i_1, ..., i_k$ are array index variables, $\phi_I$ is the index guard and $\phi_V$ is the value constraint, and the formula also satisfies the following conditions: \end{definition} \begin{enumerate} \item the index guard $\phi_I$ must follow the grammar \begin{lstlisting}[style=grammar,language=alg] iguard::= iguard$\wedge$iguard|iguard$\vee$iguard|iterm$\le$iterm|iterm$=$iterm iterm::= $i_1$|...|$i_k$|term term::= integer_constant|integer_constant$*$index_identifier|term$+$term \end{lstlisting} The {\rmfamily\mdseries\footnotesize index\_identifier} used in {\rmfamily\mdseries\footnotesize term} must not be one of the universally quantified variables. \item The index variables $i_1, ..., i_k$ can only be used in array read expressions. \end{enumerate} If this is the case, then we know that the verification query is decidable~\cite{DBLP:conf/vmcai/BradleyMS06} and, crucially, that there is a finite set of index terms such that instantiating universally quantified index variables from only this set is sufficient for completeness.
This set of index terms, $\mathcal{R}$, is made up of all expressions that are used as array index terms, or appear inside index guards, and that are not quantified variables. The example shown in Figure~\ref{fig:decidable} is an invariant synthesis problem with one possible target solution. The verification query for checking the inductive step of this target solution is: \begin{align*} \forall x (x < i \implies a[x]=0 ) \\ \wedge\, a'[i]=0 \wedge \forall j\neq i. a'[j]=a[j] \\ \wedge\, \exists x .(x \leq i \wedge a'[x]\neq 0). \end{align*} We instantiate the existential quantifier with a fresh variable $z$: \begin{align*} \forall x (x < i \implies a[x]=0 ) \\ \wedge\, a'[i]=0 \wedge \forall j\neq i. a'[j]=a[j] \\ \wedge\, (z \leq i \wedge a'[z]\neq 0). \end{align*} The set of index terms $\mathcal{R}$ is $\{i, z\}$. If we replace each universal quantifier $\forall x\, P(x)$ with the conjunction $\bigwedge_{t \in \mathcal{R}} P(t)$, we get the following quantifier-free formula: \begin{align*} (z < i \implies a[z]=0 ) \\ \wedge\, (a'[i]=0) \wedge (z\neq i \implies a'[z]=a[z]) \\ \wedge\, (z \leq i \wedge a'[z]\neq 0). \end{align*} Thus, it is sufficient to verify the candidate $P$ by only considering two elements of the array, $a[z]$ and $a[i]$, provided we consider the cases where $z<i$, $z=i$, and $z>i$. There is a restricted-domain candidate program $P^b$ and a restricted-domain specification $\sigma^b$ such that $\exists x \neg \sigma^b(P^b,x)$ is equisatisfiable with the original verification formula. In this case the restricted-domain specification for the inductive step would be $(P^b(a) \wedge a'[i]=0 \wedge (z\neq i \implies a'[z]=a[z])) \implies P^b(a')$ and the restricted-domain program $P^b(a)$ is $(z<i \implies a[z]=0)$. Given that we only need to reason about the array indices $a[z]$ and $a[i]$, if we find a solution that works for arrays of size $2$, we will have a solution that we can generalize to the infinite case by a procedure detailed in Section~\ref{sec:generalize} that is based on reversing the steps we used to obtain the restricted-domain specification. However, without knowing the solution $P$ in advance, we cannot determine the size of the set $\mathcal{R}$ needed for the verification query. Consequently, we use a heuristic approach in this work where we begin with two array elements and increase the number of elements if we are unable to find a solution. The exact process we use is detailed in Section~\ref{sec:restriction}. We note that the restricted-domain synthesis query itself does not fall within a decidable fragment~\cite{DBLP:journals/corr/CaulfieldRST15}, even for this array fragment, and so, perhaps unsurprisingly, the unrestricted-domain synthesis problem is in general undecidable. \begin{figure} \begin{lstlisting}[style=examples,linewidth=5cm] int $A[\,]$; int $i=0$; while(i<50) A'[i]=0; i'=i+1; invariant: $\forall x, (x<i) \implies A[x]=0$ assert: $\forall x, (x<50) \implies A[x]=0$ \end{lstlisting} \caption{Safety invariant expressed in array property fragment\label{fig:decidable}} \end{figure} \subsection{Beyond the array property fragment} The array property fragment in Definition~\ref{def:frag} is restrictive, but expressive enough for many benchmarks we adapted from SV-COMP~\cite{10.1007/978-3-030-45237-7_21}. As an exemplar of a synthesis problem that falls outside this fragment, consider that we have synthesized an invariant that states that array $A$ contains at least one element that is not in array $B$, that is, $\exists i\, \forall j\,\, A[i] \neq B[j]$.
The verification of this invariant will include the subformula $\exists A,B\;\, \forall i\, \exists j.\; A[i]=B[j]$. This formula falls outside the decidable fragment described above, and formulas of the form $\exists x \forall i \exists j$, where $x$ is some array and $i$ and $j$ are indices, are in general undecidable. This can be shown by reduction from the termination of integer loops~\cite{DBLP:conf/vmcai/BradleyMS06}. Consequently we are unable to show that a finite model exists. However, our experimental evaluation shows that the approach we present in this paper is a heuristic that can solve some problems that fall outside of the decidable fragment. \section{Restriction of $\sigma$ to $\sigma^b$} \label{sec:restriction} We aim to generate a modified specification $\sigma^b$ that considers a finite set of $b$ index terms for each array. We do this by bounding the length of the arrays in the specification to length $b$, by replacing any predicate $e$ in $\sigma$ that reasons about an array index $i$ with an implication $(0\leq i < b) \implies e$. \begin{algorithm} \label{alg:bound-arrays} \KwData{$\sigma$, bound b} \KwResult{$\sigma^b$} idx: list of array indices \; Qidx: list of quantified array indices\; $\sigma^b\leftarrow \emptyset$\; idx$\leftarrow \emptyset$\; Qidx$\leftarrow\emptyset$\; \For{constraint $c \in\sigma$}{ $c^b \leftarrow$ boundQuantification($c$, b, idx, Qidx)\; $c^b \leftarrow$\{(idx $< b$ )$\implies c^b$\}\; idx$\leftarrow \emptyset$\; $\sigma^b \leftarrow \sigma^b \cup \{c^b\}$\; } \Return $\sigma^b$ \caption{Pseudocode for generating $\sigma^b$} \end{algorithm} \begin{algorithm} \KwData{expression $e$, bound $b$, idx, Qidx} \KwResult{finite-domain expression $e^b$, updated idx and Qidx} \If{$e$ is $\forall$ or $e$ is $\exists$} { idx $\leftarrow$ Qidx\; Qidx $\leftarrow \emptyset$\; } \For{operand o $\in$ e.operands} { $o \leftarrow$boundQuantification(o, b, idx, Qidx)\; } \uIf{($e $ is $ \forall$ or $e$ is $\exists$) $\wedge$ Qidx $\neq \emptyset$ } { $e.Predicate\leftarrow \{($Qidx$<b)\implies e.Predicate$\}\; Qidx$\leftarrow \emptyset$ \; } \uElseIf{$e$ is an array element} { Qidx $\leftarrow$getIndex($e$)\; } \Return $e$\; \caption{boundQuantification: Algorithm for bounding an expression} \end{algorithm} The algorithm for bounding arrays, shown in Algorithm~\ref{alg:bound-arrays}, applies these rules recursively on the syntax tree of each constraint. This method of considering only the first $b$ elements of the arrays may require us to consider a larger specification than strictly necessary. For instance, suppose we have a specification that reasons only about array element $x[99]$. Our algorithm will not work until we have increased $b$ to $100$, and we then have to solve a synthesis query that considers all elements $x[0]..x[99]$. Future work will explore more sophisticated heuristics for generating this specification. \paragraph{Remove quantification: } Once we have obtained this restricted-domain specification for $\sigma$, all quantification is over finite domains. We can hence use exhaustive quantifier instantiation to remove all quantifiers over array indices, replacing universal quantifiers with conjunctions and existential quantifiers with disjunctions. This exhaustive quantifier instantiation is possible only because we have bounded the size of the arrays to make the data types finite. \paragraph{Running example:} Consider the example presented in Figure~\ref{fig:running_ex}.
The constraints asserting that the invariant holds at the initial conditions, is inductive, and implies the assertion are: \begin{align*} (c>0) \wedge ( \forall i \,\,. A[i]\geq 0) \implies inv(c,A) \,\,\bigwedge\\ inv(c,A) \wedge (\forall i\,\, A'[i]=A[i]+c) \implies inv(c,A') \,\,\bigwedge\\ inv(c,A) \implies \neg \exists i\, A[i]<0. \end{align*} If we apply these steps to our running example, bounding the arrays to size two, we get the following constraints: \begin{align} \begin{split} (c>0) \wedge ( \bigwedge_{0 \leq i<2} A[i]\geq 0) \implies inv(c,A) \,\, \bigwedge\\ inv(c,A) \wedge (\bigwedge_{0 \leq i < 2} A'[i]=A[i]+c)\implies inv(c,A')\,\,\bigwedge\\ inv(c,A) \implies \neg \bigvee_{0 \leq i < 2}A[i]<0 \label{eq:bounded_spec} \end{split} \end{align} \begin{algorithm} \KwData{expression $e$, bound $b$} \KwResult{quantifier-free expression} \For{operand $o \in e.operands$} { removeQuantifiers($o$, $b$)\; } \If{$e$ is $\forall$ or $e$ is $\exists$} { $v \leftarrow e.Binding$ \; $P \leftarrow e.Predicate$ \; \uIf{$e$ is $\forall$} { $e_{qf} \leftarrow$ conjunction\; } \Else { $e_{qf} \leftarrow$ disjunction\; } \For{$0 \leq i < b$} { $P_i \leftarrow $ replaceVarWithConstant($P$, $v$, $i$)\; \tcc{add $P_i$ to the operands of $e_{qf}$, which is a conjunction or disjunction.} $e_{qf}.operands \leftarrow P_i$ \; } \Return $e_{qf}$ } \Return $e$ \caption{removeQuantifiers: This pseudocode is simplified to handle only quantifiers that bind a single variable} \end{algorithm} \section{Generalization} \label{sec:generalize} Assuming the synthesis block has found a solution $P^b$, we now attempt to generalize this solution to obtain a candidate $P^*$ that may satisfy the full specification, by introducing quantifiers in the place of conjunctions or disjunctions. The generalization has two phases: a syntax-based quantifier introduction procedure which, if it fails to produce a solution that works for the full array, is then used to provide syntactic guidance to a synthesis-based generalization phase. \begin{algorithm} \KwData{finite-domain expression $e$} \KwResult{An unrestricted-domain expression} \For{operand $o \in e.operands$} { generalize($o$)\; } \tcc{set of matching operands} $Ops \leftarrow \emptyset $\; \tcc{set of sets of matching operands} $Sets \leftarrow \emptyset $\; \If{$e$ is $\wedge$ or $\vee$} { Ops$\leftarrow${e.operand[0]}\; Sets$\leftarrow$Ops\; N$\leftarrow$e.operands.size()\; \For {$1\leq i < N$} { placed$\leftarrow$false\; \For {set$\in$Sets} { \If{compareExpr(set, e.operand[i])} { set$\leftarrow$e.operand[i]\; placed$\leftarrow$true\; } } \If{!placed} { newSet$\leftarrow$e.operand[i]\; Sets$\leftarrow$newSet\; } } result$\leftarrow$true\; \For{set$\in$Sets} { \tcc{Replace array indices with local variables} P$\leftarrow$replaceIndicesWithVars(set, $v_{loc}$)\; \uIf{$e$ is $\wedge$} { quantifiedExpr$\leftarrow \forall v_{loc} \,\,P$ } \uElseIf{$e$ is $\vee$} { quantifiedExpr$\leftarrow \exists v_{loc} \,\,P$ } result$\leftarrow$result$\wedge$quantifiedExpr\; } \Return result\; } \caption{syntactic generalize: algorithm for reintroducing quantifiers} \label{alg:reintroduce_quantifiers} \end{algorithm} \subsection{Syntactic generalization} We implement a syntax-based quantifier introduction procedure based on identifying conjunctions or disjunctions of predicates that use array indices. We describe the steps for universal quantifiers, and note that existential quantifiers can be introduced by treating disjunctions in the same way.
In order to introduce a universal quantifier in place of an expression: \begin{itemize} \item the expression must be a conjunction of predicates that reason about array elements; \item replacing an array index in the same location in each predicate with a new variable must result in equisatisfiable predicates; \item and the conjunction must cover all possible indices of the (bounded) array. \end{itemize} These three items are sufficient for generalizing expressions that are part of the array fragment given in Section~\ref{sec:decidability}, which disallows Presburger arithmetic on quantified index-variables, and allows quantified variables to only be used in array read operations. \vspace{0.5em} \begin{definition} \label{lemma:matching} Two predicates $\phi_1$ and $\phi_2$ are \emph{matching} predicates if $\phi_1$ contains an array read operation $A[c]$ and $\phi_2$ contains an array read operation $A[d]$, where $c$ and $d$ are constants, and if we replace both $c$ and $d$ with the same fresh variable $z$, $\phi_1$ and $\phi_2$ are equisatisfiable. \end{definition} \vspace{0.5em} Given a predicate $\phi$ which contains an array read $A[c]$, we use $\phi(z)$ as short-hand for the same predicate with the constant $c$ replaced by a fresh variable. This relationship is transitive: if $\phi_1$ and $\phi_2$ are \emph{matching} predicates, and $\phi_3$ and $\phi_2$ are \emph{matching} predicates, then $\phi_1$ and $\phi_3$ are \emph{matching} predicates. It is also symmetric. A conjunction $C$ over predicates $\phi_0, ..., \phi_n$ can be replaced with a universal quantifier over $\phi_0(z)$ if the constant array indices we replaced in $\phi_0, ..., \phi_n$ to obtain $\phi_0(z)$ span the full range of the bounded arrays in the finite-domain specification. \vspace{0.5em} \begin{definition} \label{lemma:conjunctions} A conjunction $C$ over predicates $\phi_0, ...,\phi_n$ is equisatisfiable with the expression $\forall z.\, \phi_0(z)$, in the finite-domain with bounded arrays, if $\phi_0, ..., \phi_n$ are matching predicates, and the original constants span the full range of the finite-domain bounded arrays. \end{definition} \vspace{0.5em} A similar statement to Definition~\ref{lemma:conjunctions} can be written for disjunctions and existential quantifiers. Using Definition~\ref{lemma:matching} and Definition~\ref{lemma:conjunctions}, we are able to apply a procedure that replaces conjunctions and disjunctions iteratively on the syntax-tree of the finite-domain candidate program $P^b$, starting at the leaf nodes and working upwards, as shown in Algorithm~\ref{alg:reintroduce_quantifiers}. \paragraph{Running example: } Consider a possible $P^b$ for our running example, in a finite-domain with arrays of length $2$: $(A[0]\geq 0 \wedge A[1]\geq 0 \wedge c \geq 0)$. This is a conjunction of predicates: \begin{align*} \phi_0 = A[0]\geq 0, \,\, \phi_1 = A[1]\geq 0, \,\, \phi_2 = c\geq 0 \end{align*} If we replace the constant indices in the array read operations in $\phi_0$ and $\phi_1$, we can see that the two predicates are \emph{matching}, and the constants span the full range of the finite-domain array. $\phi_2$ does not match any other predicate. We thus replace the first conjunction with a universal quantifier, and the expression becomes $(\forall z\, A[z]\geq 0) \wedge (c\geq 0)$.
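For illustration, the following Python sketch shows one possible realization of this step for a flat conjunction such as the one above. It is a simplification under stated assumptions, not our implementation: predicates are plain strings, and the equisatisfiability test of Definition~\ref{lemma:matching} is approximated by syntactic equality after replacing the constant index with a fresh variable \texttt{z}. \begin{lstlisting}[style=examples]
# Hedged sketch: group conjuncts that become syntactically equal once the
# constant array index is replaced by a fresh variable z, then reintroduce
# a universal quantifier when the indices cover the whole bounded array.
import re

def abstract_index(pred):
    # e.g. "A[1]>=0" becomes (1, "A[z]>=0"); (None, pred) if no array read
    m = re.search(r"\[(\d+)\]", pred)
    if m is None:
        return None, pred
    return int(m.group(1)), pred[:m.start(1)] + "z" + pred[m.end(1):]

def generalize_conjunction(conjuncts, bound):
    groups, rest = {}, []   # abstracted predicate -> (indices, originals)
    for p in conjuncts:
        idx, abstr = abstract_index(p)
        if idx is None:
            rest.append(p)
        else:
            idxs, origs = groups.setdefault(abstr, (set(), []))
            idxs.add(idx)
            origs.append(p)
    out = []
    for abstr, (idxs, origs) in groups.items():
        if idxs == set(range(bound)):   # matching and spanning the array
            out.append("forall z. " + abstr)
        else:                           # cannot generalize; keep as-is
            rest.extend(origs)
    return out + rest

# Running example with bound 2:
# generalize_conjunction(["A[0]>=0", "A[1]>=0", "c>=0"], 2)
# returns ["forall z. A[z]>=0", "c>=0"]
\end{lstlisting} On this fragment the sketch mirrors the behaviour of Algorithm~\ref{alg:reintroduce_quantifiers}; the real procedure additionally recurses over the syntax tree and uses a semantic matching check rather than string equality.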
\paragraph{Beyond the decidable array fragment: } We add two more checks that allow us to handle limited cases outside the decidable array property fragment: specifically, we consider cases where Presburger arithmetic is applied to quantified index-variables, and limited cases where quantified index-variables are used outside of array read operations. That is: \begin{itemize} \item if more than one element of the same array is indexed, we look for constant difference relationships between the array elements indexed in the predicate, and check these relationships are the same across all predicates; \item and if the predicate contains constants of the same type as the array index that are not used for indexing arrays, we look for constant adjustment relationships between the constants and the array indices, and check if these are the same across all predicates. \end{itemize} The formal definitions of these rules can be found in Appendix~\ref{sec:beyond}. \paragraph{Nested quantifiers: } Although nested quantifiers are outside of the decidable array fragment, transforming a finite-domain candidate solution $P^b$ into a solution with nested quantifiers requires no further transformation rules. Algorithm~\ref{alg:reintroduce_quantifiers} applies the transformations recursively and, given an expression as input, begins by calling itself on all of the operands of that expression, and by doing so is able to introduce nested quantifiers. Consider the expression $(A[0]=B[0] \vee A[0]=B[1])\wedge (A[1]=B[0] \vee A[1]=B[1])$. The syntax tree for this expression is shown in Figure~\ref{fig:syntaxtree}. \begin{figure} \Tree [.$\wedge$ [.$\vee$ [.= $A[0]$ $B[0]$ ].= [.= $A[0]$ $B[1]$ ].= !\qsetw{2.25cm} ].$\vee$ [.$\vee$ [.= $A[1]$ $B[0]$ ].= [.= $A[1]$ $B[1]$ ].= !\qsetw{2.25cm} ].$\vee$ !\qsetw{5cm} ].$\wedge$ \caption{Syntax tree example\label{fig:syntaxtree} } \end{figure} The key comparisons the algorithm makes are: \begin{enumerate} \item Compare the disjunction operands:\\ $(A[0]=B[0])$ and $(A[0]=B[1])$.\\ Replace with:\\ $\exists z_1 \, A[0]=B[z_1]$. \item Compare the disjunction operands:\\ $(A[1]=B[0])$ and $(A[1]=B[1])$.\\ Replace with:\\ $\exists z_2 \, A[1]=B[z_2]$. \item Compare the conjunction operands: \\ $\exists z_1 \, A[0]=B[z_1]$ and $\exists z_2 \, A[1]=B[z_2]$.\\ Replace with: \\ $\forall z_3 \exists z_1 \, A[z_3]=B[z_1]$. \end{enumerate} \subsection{Synthesis-based Generalization} If the syntactic generalization fails, we hypothesize that this is likely because the property described by the generalized candidate solution does not hold for all array indices. We use the predicates found in the generalized solution as a syntactic template for a synthesis query, which asks whether there exists a solution to the unbounded specification $\sigma$ that is constructed using these predicates, and additional simple expressions that can be constructed from non-array parameters, comparison operators, and conjunctions and disjunctions.
As an exemplar, given a candidate $\forall i, x[i]>0 \vee y=0$, for a program $P$ that accepts three input parameters, an array $x$ and integers $y$ and $z$, and returns a boolean, we will generate the following syntactic template: \begin{lstlisting}[style=grammar] Program::= B B::= B $\wedge$ B|B $\vee$ B|I $\geq$ I|I $\leq$ I|I = I|(y = 0)| $(\forall i, x[i]>0)$| $(\forall i\, ($I$\leq i<$I$)\implies x[i]>0$) I::= 0|1|y|z|I-I|I+I \end{lstlisting} One of the key things this synthesis-based generalization procedure achieves is identifying whether a syntactically generalized solution is a component of a valid solution in which it is, for example, applied only to a subsection of the array that is larger than the original bound. For instance, a valid solution may be $\forall i, (i<z)\implies x[i]>0 \vee y=0$. \paragraph{Extensions: } The generalization phase is incomplete. There is scope for syntactic generalization of further expressions outside of the decidable array fragment, for instance, array indices given by expressions outside of Presburger arithmetic. \section{Evaluation} \label{sec:eval} \begin{table} \begin{center} \begin{tabular}{c| p{3.2cm} p{1.7cm} p{1.7cm} p{1.7cm} p{1.7cm}} &Solver & Z3 & QUIC3 & CVC4 & \textbf{SynRG} \\\hline\hline SV-COMP &No. solved & 1/24 & 7/24 & 1/24 & 10/24 \\ &No. unknown & 11/24 & 11/24 & 0/24 & n/a \\ &No. time-out & 12/24 & 6/24 & 23/24 & 14/24 \\ &Avg. solving time & $<$0.1s & $<$0.1s& $<$0.1s & 122.1s \\ &Median. solving time & $<$0.1s & $<$0.1s& $<$0.1s & 121.3s \\ \hline Crafted Inv&No. solved& 0/19 & 1/19 & 3/19 & 16/19 \\ &No. unknown & 18/19 & 18/19 & 0/19 & n/a \\ &No. time-out & 1/19 & 0/19 & 16/19 & 3/19 \\ &Avg. solving time & n/a & $<$0.1s & $<$0.1s & 0.5s \\ &Median. solving time & $<$0.1s & $<$0.1s& $<$0.1s & $<$0.1s \\ \hline Sketching&No. solved & n/a & n/a & 10/22 & 12/22 \\ &No. unknown & n/a & n/a & 0/22 & n/a \\ &No. time-out & n/a & n/a & 12/22 & 10/22 \\ &Avg. solving time & n/a & n/a & 1.7s & 30.5s \\ &Median. solving time & n/a & n/a & $<$0.1s & 0.5s \\ \hline\hline Total&Total solved & 1 & 8 & 14 & 38 \\ &Avg. solving time & $<$0.1s & $<$0.1s & $<$0.1s & 40.8s \\ &Median. solving time & $<$0.1s & $<$0.1s & $<$0.1s & 0.1s \\ \hline\hline \end{tabular} \end{center} \caption{Examples solved by each solver. We ran the experiments with a 300\,s timeout. We differentiate between unsolved benchmarks that time-out and unsolved benchmarks where the solver returns unknown. \textbf{SynRG} does not implement any way of returning unknown.} \end{table} We implement \textbf{SynRG} using CVC4 version 1.9 [pre-release] as the synthesis phase. We use Z3 version 4.8.7 for verification. The communication between the transformation phases and the synthesis phase is done in SyGuS-IF, allowing any existing SyGuS solver to be substituted into the synthesis phase. Furthermore, the verification phase produces standard SMT-LIB, allowing any existing SMT solver to be used as a back-end. We evaluate our algorithm on $65$ benchmarks: $24$ invariant synthesis benchmarks adapted from the Software Verification Competition~\cite{10.1007/978-3-030-45237-7_21}; $19$ challenging invariant synthesis problems crafted to test the capabilities of our algorithm; and $22$ program sketching problems, $19$ of which are adapted from the Java StringUtils class. The Java StringUtils class is a good target for program sketching problems since the functions reason about strings, which are arrays of characters. We represent these as arrays of integers.
Furthermore, each function is provided with a specification in the source code. All benchmarks and code are available to download\footnote{\url{https://drive.google.com/file/d/1Y__q0CPTDLZ5swQZfnUw7R-brjOhTz1_}}. All but $7$ of our benchmarks use additional quantifiers beyond the standard program synthesis formulation. The $7$ benchmarks that do not are taken from the Java StringUtils program sketching benchmarks. We run QUIC3~\cite{DBLP:conf/atva/GurfinkelSV18} and the Z3~\cite{DBLP:journals/fmsd/KomuravelliGC16} Horn solver, both contained within Z3 version 4.8.7, on the examples that can be expressed in CHC format~\cite{DBLP:journals/corr/abs-2008-02939}, and CVC4~\cite{DBLP:journals/corr/abs-1806-08775} version 1.9 [pre-release] on all benchmarks, with a time-out of 300s. None of these solvers officially supports this combination of quantification, and so they are only able to solve a subset of these benchmarks. \textbf{SynRG} solves more than twice as many examples as any other tool. The median solving time is comparable to the other solvers, but the average is larger, since benchmarks that go through several iterations before finding a sufficiently large model size must wait for the synthesis phase to time out in the earlier iterations. The synthesis time-out is a heuristic, and with a greater time-out we may be able to solve more benchmarks, at the cost of longer solving times. It typically takes between $1$ and $4$ iterations to find a model size large enough to solve each benchmark; $23$ of the benchmarks were solved by the syntactic generalization procedure, and a further $15$ required the synthesis-based generalization. We solve $27$ invariant synthesis benchmarks; $18$ of the solutions have single quantifiers and $5$ have alternating quantifiers. QUIC3 is able to solve $8$ invariant synthesis benchmarks, which all have single quantifiers. CVC4 is able to solve $10$ of the program sketching benchmarks, $2$ of which require quantifiers in the specification but not in the synthesized solutions, typically where the benchmark is single invocation (that is, where the function is called only with the same arguments in the same order), for which CVC4 contains special-purpose algorithms~\cite{DBLP:conf/cav/ReynoldsDKTB15}. \textbf{SynRG} is able to solve $4$ benchmarks that CVC4 is unable to solve due to the size of the array that must be reasoned about, but misses solving one benchmark that has a single-invocation solution. A number of the sketching benchmarks would be solvable by \textbf{SynRG} if we included helper functions from the benchmark in the syntactic template for the bounded synthesis problem. \subsection{Threats to validity} \vspace{1em} \textit{Benchmark selection: }We report an assessment of our approach over a set of real-world (SV-COMP and Java StringUtils sketching) and crafted benchmarks designed to test the capability of our algorithm, since the synthesis community is currently lacking a standard set of benchmarks containing arrays and quantifiers. \noindent \textit{Dependency on CVC4: }Our algorithm depends on the abilities of CVC4, or another synthesis solver, to solve the specification $\sigma^b$. For some benchmarks, where the verification query would fall outside the decidable fragment identified, CVC4 was unable to solve even the smallest restricted-domain query we were able to generate within the timeout.
Since the actual solutions for these small models are short (typically $3$--$4$ operations, reasoning about $2$--$4$ array elements), we believe that a valuable direction for future work would be exploring enumerative techniques tailored to these types of problems, where the search space for an enumerative engine is small. These are shown as unsolved by \textbf{SynRG} in the results table. However, in order to validate our generalization procedure, we also ran experiments where we mocked the expected result from CVC4 and showed that our generalization process is capable of producing the correct result. \noindent\textit{Modeling of loops in benchmarks:} Nested or multiple loops are represented in some of our benchmarks using quantifiers. This is an easy representation for a human modeling such benchmarks to understand. However, one could equally express these benchmarks using multiple loops, reducing the number of quantifiers needed, but requiring an invariant synthesis tool to synthesize several inter-dependent invariants. We experimented with this representation and found that the results of QUIC3 and Z3 were not affected by our choice of representation. \section{Related work} \label{sec:related} Program sketching was originally presented by Solar-Lezama et al.~\cite{DBLP:conf/asplos/Solar-LezamaTBSS06}. The Sketch tool allows synthesis of expressions if the user encodes a grammar as part of a generator function; it performs well when the user is able to provide a sufficiently detailed sketch, but it does not support quantifiers and has only limited support for loops. Many of the SyGuS competition benchmarks in the General Track are equivalent to program sketching, albeit without arrays, and the leading solver in this track for 2018 and 2019 was CVC4~\cite{DBLP:journals/corr/abs-1806-08775}, which we compare to in our experimental evaluation. The community has invested a large amount of effort into the problem of invariant synthesis, which is a specific instance of the program synthesis problem we tackle, and has identified a broad variety of special cases in which reachability properties of parametric systems are decidable. For the unrestricted case, the community has devised numerous heuristic methods for guessing and possibly refining the predicate~$I$~\cite{DBLP:conf/aplas/KongJDWY10, DBLP:conf/kbse/NguyenDV17, DBLP:conf/nips/SiDRNS18}. CVC4 and LoopInvGen~\cite{pldi/2016/PadhiSM} both perform well in the syntax-guided synthesis competition, but neither can handle quantifiers in synthesis. There are many approaches that synthesize invariants containing quantifiers over array indices; however, none of them allows for quantification in the specification. QUIC3~\cite{DBLP:conf/atva/GurfinkelSV18} is an adaptation of IC3~\cite{DBLP:conf/vmcai/Bradley11} to synthesize quantified invariants. Larraz et al.~\cite{DBLP:conf/vmcai/LarrazRR13} present an SMT-based array invariant generation approach, which is limited to universally quantified loop invariants over arrays and scalar variables. FreqHorn~\cite{DBLP:conf/cav/FedyukovichPMG19} uses syntax-guided synthesis to synthesize quantified invariants: they identify potentially useful facts about elements accessed in a loop and attempt to generalize these to hypotheses about the entire range of the variables. This is the approach most similar to our work; however, the way they identify the range of elements is specific to a loop invariant synthesis problem.
Our approach relies on a more general program synthesis phase to identify useful elements and so is not restricted to loop invariant synthesis. FreqHorn also does not permit additional quantification in the specification, and so we are unable to compare against this tool. There also exists recent work on using trace logic to verify loops with properties that use alternating quantifiers~\cite{DBLP:journals/corr/abs-2008-01387}; the approach is specific to loop verification and cannot tackle the general program synthesis problem, although it can verify some loops that have properties containing alternating quantifiers. We do not compare directly against this tool since it cannot tackle the general synthesis problem and does not accept the SyGuS-IF or CHC formats, but we would expect it to perform comparably to \textbf{SynRG} on the invariant synthesis benchmarks while being unable to tackle the general program synthesis problems. I4~\cite{DBLP:conf/sosp/MaGJKKS19} is an algorithm that uses a similar insight, based on finding invariants for small instances of protocols using model-checking and generalizing them to larger numbers of nodes. Since the approach is based on model-checking, it is limited to invariant generation, whereas our approach can handle more general synthesis cases. I4 also handles only universal quantifiers over nodes of the distributed protocol, and not quantifier alternations or existential quantifiers. Our algorithm is in part inspired by verification approaches which use the principle of abstracting a verification problem by considering short versions of bit-vectors and arrays~\cite{DBLP:conf/fmcad/SinhaSMSW12,DBLP:journals/sttt/BryantKOSSB09}. Khasidashvili et al.~\cite{DBLP:conf/fmcad/KhasidashviliKV09} verify equivalences of memories by translation into first-order logic, and note that for some specific designs this falls into a decidable fragment. Verification procedures such as CEGAR~\cite{DBLP:journals/jacm/ClarkeGJLV03} iteratively refine an abstraction, and we iteratively refine $\sigma^b$. A key difference is that CEGAR relies on refining the abstraction until it is precise enough that a counterexample is valid on the original program. We only refine $\sigma^b$ until it is precise enough that a satisfying assignment $P^b$ can be generalized to a valid solution $P$ for the original specification. The restricted specification $\sigma^b$ is almost never precise enough that $P^b$ is a valid solution for $\sigma$. \section{Conclusions} We have presented an algorithm that can synthesize expressions containing alternating quantifiers and satisfying specifications containing quantification over arrays. The synthesis algorithm works by bounding unrestricted domains in the synthesis specification, synthesizing a solution to this finite-domain specification, and then attempting to generalize that solution to the unrestricted domain. We are able to synthesize quantified expressions that elude existing solvers and, despite implementing a general program synthesis algorithm, perform well against specialized invariant synthesis solvers. Furthermore, our algorithm is a framework that exploits the strengths of existing state-of-the-art solvers, and so as the speed and scalability of quantifier-free syntax-guided synthesis improves, so will the performance of our algorithmic framework.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} In classical mechanics, the configuration space is a finite dimensional smooth manifold $M$ whose tangent bundle $p_M:TM \to M$ corresponds to the velocity space. This geometrical object plays a relevant role in the Lagrangian formalism between tangent and cotangent bundles (cf. \cite{Kle}). In \cite{Wei}, Weinstein develops a generalized theory of Lagrangian Mechanics on Lie algebroids. In \cite{Lib}, Libermann shows that such a formalism is not possible, in general, if we consider the tangent bundle of a Lie algebroid. The notion of prolongation of a Lie algebroid introduced by Higgins and Mackenzie \cite{HiMa} offers a nice context in which such a formalism was generalized by Mart\'inez (cf. \cite{Ma1} and \cite{Ma2}).\\ The notion of Lie algebroid in the Banach setting was simultaneously introduced in \cite{Ana} and \cite{Pe1}. Unfortunately, in this setting, there exist many obstacles to generalizing all the canonical Lie structures defined on a finite dimensional Lie algebroid and, first of all, to obtaining a nice definition of a Lie bracket (cf. \cite{CaPe}). In this paper, we consider the more general convenient setting (cf. \cite{KrMi}) in which we give a definition of a convenient Lie algebroid (and more generally a partial convenient Lie algebroid) based on a precise notion of sheaf of Lie brackets\footnote{cf. Remark \ref{R_LocalGlobal}} on a convenient anchored bundle (cf. section \ref{___AlmostLieAlgebroidAndLieAlgebroid}).\\ As in finite dimension, given a convenient anchored bundle $ \left( \mathcal{A},\pi, M,\rho \right) $\footnote{The anchor $\rho$ is a vector bundle morphism $\rho: \mathcal{A} \to TM$.}, the total space of the prolongation $ \hat{\bf p}:\mathbf{T}\mathcal{M}\to \mathcal{M}$ of $ \left( \mathcal{A},\pi, M,\rho \right) $ over a fibred manifold $\mathbf{p}:\mathcal{M}\to M$ is the pullback over $\rho$ of the bundle $T\mathbf{p}: T\mathcal{M} \to TM$. Moreover, we have an anchor $\hat{\rho}: \mathbf{T}\mathcal{M} \to T\mathcal{M}$. \\ In finite dimension, the Lie bracket on $\mathcal{A}$ gives rise to a Lie bracket on $\mathbf{T}\mathcal{M}$. Unfortunately, this is no longer true in infinite dimension. If $\tilde{\pi}:\widetilde{\mathcal{A}}\to \mathcal{M}$ is the pullback of $\pi:\mathcal{A}\to M$ over $\mathbf{p}$, then the module of local sections of $\widetilde{\mathcal{A}}$ is no longer finitely generated by local sections along $\mathbf{p}$. For this reason, the way to define the prolongation of the bracket does not work as in finite dimension. Thus the prolongation of such a Lie algebroid is not again a Lie algebroid but only a strong partial Lie algebroid (cf. $\S$ \ref{___StructureofPartialLieAlgebroid}). We also define (non-linear) connections on such a prolongation. As for the tangent bundle of a Banach vector bundle (cf. \cite{AgSu}), we then show that the kernel bundle of $\hat{\bf p}:\mathbf{T}\mathcal{M}\to \mathcal{M}$ is split if and only if there exists a linear connection on $\mathcal{M}$.\\ In the Banach setting, it is proved that if the kernel of the anchor $\rho$ of a Banach Lie algebroid $(\mathcal{A}, \pi, M,\rho, [.,.]_\mathcal{A})$ is split and its range is closed, then the associated distribution on $M$ defines a (singular) foliation (cf. \cite{Pe1}).
Under these assumptions, we show that the prolongation $ \left( \mathbf{T}\mathcal{A}, \hat{\mathbf{p}}, \mathcal{A}, \hat{\rho} \right) $ has the same properties, and the foliation defined by $\hat{\rho}(\mathbf{T}\mathcal{A})$ on $\mathcal{A}$ is exactly the set $\{\mathcal{A}_{| L}, \; L \textrm{ leaf of } \rho(\mathcal{A})\}$ (cf. Theorem \ref{T_tildefol}).\\ As an illustration of these notions, in the convenient setting, and not only in a Banach one, we end this work with the prolongation of projective (resp. direct) limits of projective (resp. ascending) sequences of fibred Banach Lie algebroids with finite or infinite dimensional fibres. \\ \emph{This work can be understood as the basis for further studies on how the Lagrangian formalism on finite dimensional Lie algebroids (cf. \cite{Ma2} for instance) can be generalized in this convenient framework.}\\ Section \ref{_ConvenientLieAlgebroid} is devoted to the presentation of the prerequisites needed in this paper about (partial) convenient Lie algebroids. After some preliminaries on notations, we introduce the notion of convenient anchored bundle. Then we define a notion of almost bracket on such a vector bundle. The definition of a convenient Lie algebroid and some of its properties are given in subsection \ref{___AlmostLieAlgebroidAndLieAlgebroid}. The next subsection presents the concept of partial convenient Lie algebroid. The following subsection is devoted to the definitions of some derivative operators (the Lie derivative of sheaves of sections of $k$-forms and the exterior derivative), in particular for strong partial Lie algebroids. The last subsection recalls some results about the integrability of Banach Lie algebroids when the Banach Lie algebroid is split and the range of its anchor is closed (cf. \cite{Pe2}).\\ The central part of this work is contained in section \ref{__ProlongationOfAConvenientLieAlgebroidAlongAFibration}. In the first subsection, we build the prolongation of a convenient anchored bundle. Then, in subsection \ref{___ProlongationOfTheLieBracket}, we can define a Lie bracket on local projectable sections of the total space of the prolongation. We then explain why this bracket cannot be extended to the whole set of local sections if the typical fibre of the anchored bundle is not finite dimensional, which is the essential difference with the finite dimensional setting. We end this section by showing that if a Banach Lie algebroid is split and the range of its anchor is closed, the same is true for its prolongation, and the range of the anchor of the prolongation also defines a foliation even though its Lie bracket is not defined on the set of all local sections.\\ The last two sections show that, under adequate assumptions, the prolongation of a projective (resp. direct) limit of a projective sequence (resp. ascending sequence) of Banach Lie algebroids is exactly the prolongation of the projective (resp. direct) limit of this sequence. \\ In order to make the last two sections more accessible to non-expert readers, we have added two appendices recalling the necessary concepts and results on projective and direct limits.
\section{Convenient Lie algebroid} \label{_ConvenientLieAlgebroid} \subsection{Local identifications and expressions in a convenient bundle}${}$\\ \label{___LocalIdentificationsAndExpressionsInAConvenientBundle} \emph{Throughout this paper, we work in the convenient setting and we refer to \cite{KrMi}.}\\ Consider a convenient vector bundle $\pi: \mathcal{A}\to M$ whose typical fibre is a convenient linear space $\mathbb{A}$. For any open subset $U\subset M$, we denote by $C^\infty(U)$ the ring of smooth functions on $U$ and by $\Gamma\left( \mathcal{A}_U \right) $ the $C^\infty(U)$-module of smooth sections of the restriction $ \mathcal{A}_U$ of $\mathcal{A}$ over $U$, or simply $\Gamma(\mathcal{A})$ when $U=M$.\\ Consider a chart $(U,\phi)$ on $M$ such that we have a trivialization $\tau:{ \mathcal{A}}_U\to \phi(U)\times \mathbb{A}$. Then $T\phi$ is a trivialization of $TM_U$ on $\phi(U)\times\mathbb{M}$ and $T\tau$ is a trivialization of $T \mathcal{A}_U$ on $\phi(U)\times \mathbb{A}\times\mathbb{M}\times\mathbb{A}$.\\ For the sake of simplicity, we will denote these trivializations: \begin{description} \item[--] $ \mathcal{A}_U= \mathcal{A} _{|U}\equiv \phi(U)\times\mathbb{\mathbb{A} }$; \item[--] $TM_U=TM_{|U}\equiv \phi(U)\times\mathbb{M}$; \item[--] $T \mathcal{A}_U=T \mathcal{A}_{|\mathcal{A}_U}\equiv(\phi(U)\times{\mathbb{A} })\times(\mathbb{M}\times{\mathbb{A} })$; \item[--] $T \mathcal{A}^*_{| \mathcal{A}^*_U}\equiv \phi(U)\times\mathbb{A}^{*}\times\mathbb{M}\times \mathbb{A}^*$. \end{description} where $U \subset M$ is identified with $\phi(U)$.\\ We will also use the following associated local coordinates where $\equiv$ stands for the representation in the corresponding trivialization. \begin{description} \item[] $\mathfrak{a}=(x,a)\equiv(\mathsf{x,a})\in U\times \mathbb{A}$ \item[] $(x,v)\equiv (\mathsf{x,v})\in U\times\mathbb{M}$ \item[] $(\mathfrak{a},\mathfrak{b})\equiv(\mathsf{x,a,v,b})\in U\times\mathbb{A}\times\mathbb{M}\times \mathbb{A}.$ \item[] $(\sigma,w,\eta)\equiv (\mathsf{x,\xi,w,\eta})\in U\times\mathbb{A}^{*}\times\mathbb{M}\times \mathbb{A}^*$ \end{description} \subsection{Convenient anchored bundle} \label{___ConvenientAnchoredBundle} Let $\pi:\mathcal{A}\to M$ be a convenient vector bundle whose fibre is a convenient linear space $\mathbb{A}$. \begin{definition} \label{D_Anchor} A morphism of vector bundles $\rho:\mathcal{A}\to TM$ is called an \textit{anchor} and the quadruple $ \left( \mathcal{A},\pi,M,\rho \right) $ is called a convenient anchored bundle\index{convenient!anchored bundle}. \end{definition} \begin{notations} \label{N_Anchor} ${}$ \begin{enumerate} \item In this section, if there is no ambiguity, the anchored bundle $(\mathcal{A},\pi,M,\rho)$ is fixed and, in all this work, the Lie bracket of vector fields on a convenient manifold will be simply denoted $[.,.]$. \item For any open set $U$ in $M$, the morphism $\rho$ gives rise to a $C^\infty(U)$-morphism of modules ${\rho}_U:\Gamma\left( \mathcal{A}_U\right) \to \mathfrak{X}(U) $ defined, for any $x \in U$ and any smooth section $\mathfrak{a}$ of $\mathcal{A}_U$, by: \[ \left( {\rho}_U\left( \mathfrak{a} \right) \right) \left( x\right) =\rho\left( \mathfrak{a}\left( x\right) \right) \] and still denoted by $\rho$.
\item For any convenient spaces $\mathbb{E}$ and $\mathbb{F}$, we denote by $\operatorname{L}(\mathbb{E},\mathbb{F})$ the convenient space of bounded linear operators from $\mathbb{E}$ to $\mathbb{F}$ and, for $\mathbb{E}=\mathbb{F}$, we set $\operatorname{L}(\mathbb{E}):=\operatorname{L}(\mathbb{E},\mathbb{E})$; $\operatorname{GL}(\mathbb{E})$ is the group of bounded automorphisms of $\mathbb{E}$. \item In local coordinates in a chart $(U,\phi)$, the restriction of $\rho$ to $U$ gives rise to a smooth field $\mathsf{x} \mapsto \mathsf{\rho_x}$ from $\phi(U)\equiv U$ to $\operatorname{L}(\mathbb{A},\mathbb{M})$. \end{enumerate} \end{notations} \subsection{Almost Lie bracket} \label{___AlmostLieBracket} \begin{definition} \label{D_AlmostLieBracketOnAnAnchoredBundle} An almost Lie bracket on an anchored bundle $\mathcal{A}$ is a sheaf of skew-symmetric bilinear maps \[ \lbrack.,.]_{\mathcal{A}_U}:\Gamma\left( \mathcal{A}_U\right) \times\Gamma\left( \mathcal{A}_U\right) \to\Gamma\left( \mathcal{A}_U\right) \] for any open set $U\subseteq M$, which satisfies the following properties: \begin{enumerate} \item [\textbf{(AL 1)}] The Leibniz identity:\index{Leibniz identity} \[ \forall\left( \mathfrak{a}_{1},\mathfrak{a}_{2}\right) \in \Gamma \left( \mathcal{A}_U \right) ^{2}, \forall f \in C^\infty(U) ,\ [\mathfrak{a}_{1},f\mathfrak{a}_{2}]_{\mathcal{A}}=f.[\mathfrak{a}_{1},\mathfrak{a}_{2}]_{\mathcal{A}}+df(\rho(\mathfrak{a}_{1})).\mathfrak{a}_{2}. \] \item[\textbf{(AL 2)}] For any open set $U\subseteq M$, the map \[ (\mathfrak{a}_1,\mathfrak{a}_2)\mapsto [\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}_U} \] only depends on the $1$-jets of the sections $\mathfrak{a}_1$ and $\mathfrak{a}_2$ of $\mathcal{A}_U$. \end{enumerate} By abuse of notation, such a sheaf of almost Lie brackets will be denoted $[.,.]_\mathcal{A}$. \end{definition} \begin{remark} \label{R_LocalGlobal} In finite dimension, the bracket is defined on global sections and induces a Lie bracket on local sections which depends on the $1$-jets of sections. In the convenient setting (as in the Banach one), if $M$ is not smoothly regular, the set of restrictions to some open set $U$ of global sections of $\mathcal{A}$ could be different from $\Gamma(\mathcal{A}_U)$ (unfortunately, we have no example of such a situation). Thus, a bracket defined on the whole space $\Gamma(\mathcal{A})\times \Gamma(\mathcal{A})$ need not give rise to a bracket on local sections of $\mathcal{A}$ and, even when it does, the condition \emph{\textbf{(AL 2)}} will not be true in general. \end{remark} In the context of local trivializations ($\S$ \ref{___LocalIdentificationsAndExpressionsInAConvenientBundle}), if $\operatorname{L}^2_{\operatorname{alt}}(\mathbb{A};\mathbb{A}) $ is the convenient space of bounded skew-symmetric bilinear operators on $\mathbb{A}$ with values in $\mathbb{A}$, there exists a smooth field \[ \begin{array}{cccc} \mathsf{C}: & U & \rightarrow &\operatorname{L}_{\operatorname{alt}}^2(\mathbb{A};\mathbb{A}) \\ & \mathsf{x} & \mapsto & \mathsf{C}_\mathsf{x} \end{array} \] such that, for $\mathfrak{a}_1(x)\equiv(\mathsf{x},\mathsf{a_1(x)})$ and $\mathfrak{a}_2(x)\equiv(\mathsf{x,a_2(x)})$, we have: \begin{eqnarray} \label{eq_loctrivct} [\mathfrak{a}_1,\mathfrak{a}_2]_U(x) \equiv \left(\mathsf{x},\,\mathsf{C_x(a_1(x),a_2(x))}+d \mathsf{a_2}(\mathsf{\rho_x(a_1(x))})-d \mathsf{a_1}(\mathsf{\rho_x(a_2(x))})\right).
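\end{eqnarray}
As a sanity check (a routine verification which is not needed in the sequel), the local expression (\ref{eq_loctrivct}) is compatible with the Leibniz identity \textbf{(AL 1)}: for $f\in C^\infty(U)$, the bilinearity of $\mathsf{C_x}$ and the relation $d(f\mathsf{a_2})=df.\mathsf{a_2}+f.d\mathsf{a_2}$ give
\[
[\mathfrak{a}_1,f\mathfrak{a}_2]_U(x) \equiv \left(\mathsf{x},\, f\,\mathsf{C_x(a_1(x),a_2(x))}+df(\mathsf{\rho_x(a_1(x))}).\mathsf{a_2(x)}+f\,d \mathsf{a_2}(\mathsf{\rho_x(a_1(x))})-f\,d \mathsf{a_1}(\mathsf{\rho_x(a_2(x))})\right),
\]
which is exactly the local expression of $f.[\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}}+df(\rho(\mathfrak{a}_1)).\mathfrak{a}_2$.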
\subsection{Almost Lie algebroid and Lie algebroid} \label{___AlmostLieAlgebroidAndLieAlgebroid} \begin{definition} \label{D_AlmostLieAlgebroid} The quintuple $ \left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $, where $ \left( \mathcal{A},\pi,M,\rho \right) $ is an anchored bundle and $[.,.]_{\mathcal{A}}$ an almost Lie bracket, is called a convenient almost Lie algebroid\index{convenient almost Lie algebroid}\index{almost Lie algebroid!convenient}. \end{definition} In this context, the \emph{Jacobiator}\index{Jacobiator} is the $\mathbb{R}$-trilinear map $J_{\mathcal{A}_U}:\Gamma(\mathcal{A}_{U})^{3}\to\Gamma(\mathcal{A}_{ U})$ defined, for any open set $U$ in $M$ and any sections $\left( \mathfrak{a}_{1}, \mathfrak{a}_{2}, \mathfrak{a}_{3} \right) \in \Gamma(\mathcal{A}_{U})^3$, by \[ J_{\mathcal{A}_U}(\mathfrak{a}_{1},\mathfrak{a}_{2},\mathfrak{a}_{3} )=[\mathfrak{a}_{1},[\mathfrak{a}_{2},\mathfrak{a}_{3}]_{\mathcal{A}}]_{\mathcal{A}}+[\mathfrak{a}_{2},[\mathfrak{a}_{3},\mathfrak{a}_{1}]_{\mathcal{A}}]_{\mathcal{A}}+[\mathfrak{a}_{3},[\mathfrak{a}_{1},\mathfrak{a}_{2}]_{\mathcal{A}}]_{\mathcal{A}}. \] \begin{definition} \label{D_ConvenientLieAlgebroid} A convenient Lie algebroid \index{convenient Lie algebroid}\index{Lie algebroid!convenient} is a convenient almost Lie algebroid $ \left( \mathcal{A},\pi,M,\rho ,[.,.]_{\mathcal{A}} \right) $ such that the associated Jacobiator $J_{\mathcal{A}_U}$ vanishes identically on each module $\Gamma(\mathcal{A}_{U})$ for all open sets $U$ in $M$. \end{definition} We then have the following result (cf. \cite{BCP}, Chapter 3): \begin{proposition} \label{P_EquivalenceMorphismJEtensor} Consider a convenient almost Lie algebroid $\left( \mathcal{A},\pi,M,\rho ,[.,.]_{\mathcal{A}} \right) $. \begin{enumerate} \item For any open set $U\subseteq M$ and for all $\left( \mathfrak{a}_{1},\mathfrak{a}_{2} \right) \in \Gamma\left( \mathcal{A}_{ U} \right) ^2$, the map \[ \left( \mathfrak{a}_{1},\mathfrak{a}_{2} \right)\mapsto \rho \left( [\mathfrak{a}_1,\mathfrak{a}_2]_\mathcal{A} \right) -[\rho(\mathfrak{a}_1), \rho(\mathfrak{a}_2)] \] only depends on the $1$-jet of $\rho$ at any $x\in U$ and on the values of $\mathfrak{a}_1$ and $\mathfrak{a}_2$ at $x$. \item If the Jacobiator $J_{\mathcal{A}_U}$ vanishes identically, then we have: \begin{equation} \label{eq_rhoCompatible} \forall \left( \mathfrak{a}_{1},\mathfrak{a}_{2} \right) \in \Gamma\left( \mathcal{A}_{ U} \right) ^2,\; \rho \left( [\mathfrak{a}_1,\mathfrak{a}_2]_\mathcal{A} \right) =[\rho(\mathfrak{a}_1), \rho(\mathfrak{a}_2)]. \end{equation} \item If the property (\ref{eq_rhoCompatible}) is true, then $J_{\mathcal{A}_U}$ is a bounded trilinear $C^\infty(U)$-morphism from $ \Gamma\left( \mathcal{A}_U \right)^3$ to $\Gamma\left( \mathcal{A}_U \right)$ which takes values in $\ker \rho$ over $U$.\\ \end{enumerate} \end{proposition} If, for each open set $U$, the assumption of (2) in Proposition \ref{P_EquivalenceMorphismJEtensor} is satisfied, then (3) implies that the family $\{ J_{\mathcal{A}_U}, U\textrm{ open in } M \}$ defines a sheaf of trilinear morphisms from the sheaf $\{ (\Gamma(\mathcal{A}_U))^3, U \textrm{ open in } M\}$ into the sheaf $\{ \Gamma(\mathcal{A}_U), U \textrm{ open in } M\}$. This sheaf will be denoted $J_{\mathcal{A}}$. \begin{corollary} \label{C_rhoLieAlgebraMorphism} If $ \left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $ is a convenient Lie algebroid, then $\rho$ induces a morphism of Lie algebras from $\Gamma(\mathcal{A}_{ U})$ into $\mathfrak{X}(U)$ for any open set $U$ in $M$.
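\end{corollary}
Two elementary examples, stated without proof, may help fix ideas. First, $\left( TM, p_M, M, \operatorname{Id}_{TM}, [.,.] \right)$, where $[.,.]$ is the Lie bracket of vector fields, is a convenient Lie algebroid; in this case the Leibniz identity \textbf{(AL 1)} reduces to the classical relation
\[
[X,fY]=f[X,Y]+df(X).Y.
\]
Second, if $\rho\equiv 0$, the Leibniz identity makes each bracket $[.,.]_{\mathcal{A}_U}$ $C^\infty(U)$-bilinear, so a convenient Lie algebroid with zero anchor is precisely a bundle whose fibres carry Lie algebra structures (compare the notion of \emph{Lie algebra Banach bundle} below).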
\begin{definition} \label{D_SplitConvenientLieAlgebroid} A convenient Lie algebroid $\left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $ will be called split\index{split convenient Lie algebroid} if, for each $x\in {M}$, the kernel of $\rho_x=\rho_{| \pi^{-1}(x)}$ is supplemented in $ \pi^{-1}(x)$. \end{definition} For example, if $ \operatorname{ker}\rho_x$ is finite dimensional or finite codimensional for all $x\in M$, or if $\mathbb{A}$ is a Hilbert space, then $(\mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}})$ is split. Another particular situation is the case where the anchor $\rho\equiv 0$: then $\left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $ is a \emph{Lie algebra Banach bundle}. \subsection{Structure of partial Lie algebroid} \label{___StructureofPartialLieAlgebroid} We have the following generalization of the notion of convenient Lie algebroid: \begin{definition} \label{D_PartialConvenientLieAlgebroid} Let $(\mathcal{A},\pi,M,\rho)$ be a convenient anchored bundle. Consider a sub-sheaf $\mathfrak{P}_M$ of the sheaf $\Gamma(\mathcal{A})_M$ of sections of $\mathcal{A}$. Assume that $\mathfrak{P}_M$ can be provided with a structure of sheaf of Lie algebras which satisfies, for any open set $U$ in $M$: \begin{enumerate} \item[(i)] for any $(\mathfrak{a}_1,\mathfrak{a}_2)\in \left( \mathfrak{P}(U) \right) ^2$ and any $f\in C^\infty(U)$, we have the Leibniz condition \begin{eqnarray} \label{eq_rhoCompatibilitySheaf} [\mathfrak{a}_1,f\mathfrak{a}_2]_{\mathfrak{P}(U)}=df(\rho(\mathfrak{a}_1))\mathfrak{a}_2+f[\mathfrak{a}_1,\mathfrak{a}_2]_{\mathfrak{P}(U)}; \end{eqnarray} \item[(ii)] the Lie bracket $[.,.]_{\mathfrak{P}(U)}$ on $\mathfrak{P}(U)$ only depends on the $1$-jets of sections of ${\mathfrak{P}(U)}$; \item[(iii)] $\rho$ induces a Lie algebra morphism from $\mathfrak{P}(U)$ to $\mathfrak{X}(U)$. \end{enumerate} Then $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right)$ is called a convenient partial Lie algebroid\index{partial Lie algebroid}. The family $\{ [.,.]_{\mathfrak{P}(U)}, U \textrm{ open set in }M \}$ is called a sheaf bracket\index{sheaf bracket} and is denoted $[.,.]_\mathcal{A}$. \\ A partial convenient Lie algebroid $(\mathcal{A},\pi, M,\rho,\mathfrak{P}_M)$ is called strong\index{partial Lie algebroid!strong} if, for any $x\in M$, the stalk\index{stalk} \[ \mathfrak{P}_x=\underrightarrow{\lim}\{ \mathfrak{P}(U),\;\; \varrho^U_V:\mathfrak{P}(U) \to \mathfrak{P}(V),\;\; U,V \textrm{ open neighbourhoods of } x \textrm{ with } U \supset V \} \] is equal to $\pi^{-1}(x)$. \end{definition} Any convenient Lie algebroid is a partial Lie algebroid.\\ More generally, if $(\mathcal{A},\pi, M,\rho)$ is a convenient anchored bundle, any convenient subbundle $\mathcal{B}$ of $\mathcal{A}$ such that $ \left( \mathcal{B}, \pi_{\mathcal{B}}=\pi_{| \mathcal{B}},M, \rho_{\mathcal{B}}=\rho_{| \mathcal{B}}, [.,.]_{\mathcal{B}} \right) $ is a convenient Lie algebroid provides $\mathcal{A}$ with a structure of convenient partial Lie algebroid which is not strong in general. Another type of example of convenient partial Lie algebroids will be described in the context of the prolongation of a convenient Lie algebroid in the next section; this convenient partial Lie algebroid will be a strong one. \begin{remark} \label{R_PartialLieBracket} In local coordinates, the Lie bracket $[.,.]_{\mathfrak{P}(U)}$ can be written as in (\ref{eq_loctrivct}).
\end{remark} \subsection{Derivative operators} \label{___DerivativeOperators} \subsubsection{Preliminaries} \label{____Preliminaries} If $U$ is a $c^\infty$-open subset of a convenient space $\mathbb{E}$, the space $C^\infty(U,\mathbb{F})$ of smooth maps from $U$ to a convenient space $\mathbb{F}$ is a convenient space (cf. \cite{KrMi}, 3.7 and 3.11).\\ The space $L(\mathbb{E},\mathbb{F})$ of bounded linear maps from $\mathbb{E}$ to $\mathbb{F}$, endowed with the topology of uniform convergence on bounded subsets of $\mathbb{E}$, is a closed subspace of $C^\infty(\mathbb{E},\mathbb{F})$ and so is a convenient space.\\ More generally, the set $L^{k}_{\operatorname{alt}}(\mathbb{E},\mathbb{F})$ of all bounded $k$-linear alternating mappings from $\mathbb{E}^k$ to $\mathbb{F}$, endowed with the topology of uniform convergence on bounded sets, is a closed subspace of $C^\infty(\mathbb{E}^k,\mathbb{F})$ (cf. \cite{KrMi}, Corollary 5.13) and so $L^{k}_{\operatorname{alt}}(\mathbb{E},\mathbb{F})$ is a convenient space.\\ On the other hand, if $\bigwedge^k(\mathbb{E})$ is the set of alternating $k$-tensors on $\mathbb{E}$, then $L^{k}_{\operatorname{alt}}(\mathbb{E}):=L^{k}_{\operatorname{alt}}(\mathbb{E},\mathbb{R})$ is isomorphic as a locally convex topological space to $\operatorname{L}\left(\bigwedge^k(\mathbb{E}),\mathbb{R}\right)$ (cf. \cite{KrMi}, Corollary 5.9) and so has a natural structure of convenient space.\\ Recall that bounded linear maps are smooth (cf. \cite{KrMi}, Corollary 5.5).\\ Let us consider a convenient vector bundle $\pi:\mathcal{A}\rightarrow M$ with typical fibre $\mathbb{A}$. We study the bundle \[ \begin{array} [c]{cccc} \pi^k: & L_{\operatorname{alt}}^k(\mathcal{A})=\displaystyle\bigcup_{x\in M}L_{\operatorname{alt}}^k(\mathcal{A}_x) & \to & M\\ & (x,\omega) & \mapsto & x \end{array} \] Using any atlas for the bundle structure of $\pi:\mathcal{A}\rightarrow M$, it is easy to prove that $\pi^k:L_{\operatorname{alt}}^k(\mathcal{A})\rightarrow M$ is a convenient vector bundle. The vector space of local sections of $L_{\operatorname{alt}}^k(\mathcal{A}_U)$ is denoted by $\bigwedge^{k}\Gamma^*(\mathcal{A}_U)$ and is called the set of $k$-exterior differential forms on $\mathcal{A}_U$. We denote by $\bigwedge^k\Gamma^*(\mathcal{A})$ the sheaf of sections of $\pi^k: L_{\operatorname{alt}}^k(\mathcal{A})\to M$ and $\bigwedge\Gamma^*(\mathcal{A})=\displaystyle\bigcup_{k=0}^\infty\bigwedge^k\Gamma^*(\mathcal{A})$ the sheaf of associated graded exterior algebras. \\ {\bf In this section, we assume that $(\mathcal{A},\pi, M,\rho)$ is an anchored bundle and that $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right)$ is a fixed strong partial Lie algebroid}.\\ This situation is always satisfied if $(\mathcal{A},\pi, M,\rho, \left[.,.\right]_{\mathcal{A}})$ is a Lie algebroid and occurs for the prolongation of a convenient Lie algebroid (cf. $\S$ \ref{__ProlongationOfAConvenientLieAlgebroidAlongAFibration}). This context also occurs in the setting of partial Poisson manifolds (cf. \cite{PeCa}).
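To illustrate these notations in the simplest case (this is only an unwinding of the definitions): for $k=1$ we have $L^{1}_{\operatorname{alt}}(\mathbb{A},\mathbb{R})=\operatorname{L}(\mathbb{A},\mathbb{R})$, so, in a trivialization $\mathcal{A}_U\equiv\phi(U)\times\mathbb{A}$, a $1$-form $\omega\in\bigwedge^{1}\Gamma^*(\mathcal{A}_U)$ is nothing but a smooth field
\[
\mathsf{x}\mapsto\omega_\mathsf{x}\in\operatorname{L}(\mathbb{A},\mathbb{R}),
\]
and, for a section $\mathfrak{a}(x)\equiv(\mathsf{x},\mathsf{a(x)})$, the function $\omega(\mathfrak{a})$ is $\mathsf{x}\mapsto\omega_\mathsf{x}(\mathsf{a(x)})$.
\subsubsection{Insertion operator} \label{____InteriorProduct} Let $\mathfrak{a}$ be a local section of $\mathcal{A}$ defined on an open set $U$.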
As in \cite{KrMi}, 33.10, we have \begin{proposition} \label{D_InsertionOperator} The insertion operator\index{insertion operator}\index{operator!insertion} $i_\mathfrak{a}$ is the graded endomorphism of degree $-1$ defined by: \begin{enumerate} \item \begin{enumerate} \item[(i)] For any function $f\in C^\infty(U)$, \begin{eqnarray} \label{eq_i0} i_{\mathfrak{a}}\left( f\right) =0. \end{eqnarray} \item[(ii)] For any $k$-form $\omega$ (where $k>0$), \begin{eqnarray} \label{eq_iq} \left( i_{\mathfrak{a}}\omega\right) \left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k-1}\right)(x)=\omega( \mathfrak{a}(x),\mathfrak{a}_1(x),\dots,\mathfrak{a}_{k-1}(x)). \end{eqnarray} \end{enumerate} \item $\hfil{ i_\mathfrak{a}(\omega \wedge \omega^\prime)=i_\mathfrak{a}(\omega)\wedge \omega^\prime+(-1)^{{\rm deg}\,\omega}\,\omega\wedge i_\mathfrak{a}(\omega^\prime). }$ \end{enumerate} \end{proposition} \subsubsection{Lie derivative} \label{____Liederivative} \begin{proposition} \label{P_Liekform} For $k\geq 0$, let $\omega$ be a local $k$-form, that is an element of $\bigwedge^{k}\Gamma^*(\mathcal{A}_U)$ for some open set $U$ of $M$. Given any section $\overline{\mathfrak{a}}\in \mathfrak{P}(U)$, the \emph{Lie derivative}\index{Lie derivative} with respect to $\overline{\mathfrak{a}}$, denoted by $L_{\overline{\mathfrak{a}}}^\rho$, is the graded endomorphism of degree $0$ defined in the following way: \begin{enumerate} \item For any function $f\in C^\infty(U)$, \begin{eqnarray} \label{eq_L0} L_{\overline{\mathfrak{a}}}^{\rho}(f) = i_{{\rho}\circ\overline{ \mathfrak{a}}}\left( df\right) = L_{\rho(\overline{\mathfrak{a}})}f, \end{eqnarray} where $L_{X}$ denotes the usual Lie derivative with respect to the vector field $X$ on $M$. \item For any $k$-form $\omega$ (where $k>0$), \begin{eqnarray} \label{eq_Lq} \begin{aligned} \left( L_{\overline{\mathfrak{a}}}^{\rho}\omega\right) \left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k}\right)(x)= & L_{\overline{\mathfrak{a}}}^{\rho}\left( \omega\left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k}\right) \right)(x)\\ &-{\displaystyle\sum\limits_{i=1}^{k}} \omega\left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{i-1} ,\left[ \overline{\mathfrak{a}},\overline{\mathfrak{a}}_{i}\right] _{\mathcal{A}},\mathfrak{a}_{i+1},\dots,\mathfrak{a}_{k}\right)(x) \end{aligned} \end{eqnarray} where $\overline{\mathfrak{a}}_i$ is any section of $\mathfrak{P}(U)$ such that $\overline{\mathfrak{a}}_i(x)={\mathfrak{a}}_i(x)$ for $i \in \{1, \dots, k\} $. \end{enumerate} \end{proposition} \begin{proof} Since the problem is local, we may assume that $U$ is a $c^\infty$-open set in $\mathbb{M}$ over which $\mathcal{A}$ is trivial. Fix some section $\overline{\mathfrak{a}}\in\mathfrak{P}(U)$ and some $x\in U$. After shrinking $U$ if necessary, if $f$ is a smooth function on $U$, it is clear that (\ref{eq_L0}) is well defined.\\ For $k>0$, let $\omega\in \bigwedge^{k}\Gamma^*(\mathcal{A}_U)$. Since we have a strong partial Lie algebroid, for any $k$-tuple $(\mathfrak{a} _1,\dots,\mathfrak{a}_k)$ of local sections of $\mathcal{A}$ on $U$, the value $\omega(\mathfrak{a}_1(x),\dots,\mathfrak{a}_k(x))$ is well defined. Let $\overline{\mathfrak{a}}_i\in \mathfrak{P}(U)$ be such that $\mathfrak{a}_i(x)=\overline{\mathfrak{a}}_i(x)$ for $i \in \{1,\dots,k\}$; we apply formula (\ref{eq_Lq}) to $(\overline{\mathfrak{a}},\overline{\mathfrak{a}}_1,\dots, \overline{\mathfrak{a}}_k)$.
In our context, $\omega$ is a smooth field over $U$ with values in $L_{\operatorname{alt}}^k(\mathbb{A})$ and each $\overline{\mathfrak{a}}_i$ is a smooth map from $U$ to $\mathbb{A}$. In this way, we have
\begin{align*} L^\rho_{\overline{\mathfrak{a}}}\omega \left( \overline{ \mathfrak{a}}_{1},\dots,\overline{ \mathfrak{a}}_{k}\right)(x) =& d_x\omega \left( \rho(\overline{\mathfrak{a}}(x)); \overline{ \mathfrak{a}}_{1}(x),\dots,\overline{ \mathfrak{a}}_{k}(x) \right)\\ &+\sum_{i=1}^k\omega \left( \overline{ \mathfrak{a}}_{1}(x),\dots, d_x\overline{ \mathfrak{a}}_{i}(\rho(\overline{\mathfrak{a}}(x))), \dots,\overline{ \mathfrak{a}}_{k}(x) \right). \end{align*}
Since $[.,.]_{\mathfrak{P}(U)}$ only depends on the $1$-jets of sections, as for an almost Lie bracket (cf. Remark \ref{R_PartialLieBracket}), we have:
\[ \left[ \overline{\mathfrak{a}},\overline{\mathfrak{a}}_{i}\right]_{\mathcal{A}}(x)=d_x\overline{\mathfrak{a}}_{i}(\rho( \overline{\mathfrak{a}}(x)))-d_x\overline{\mathfrak{a}}(\rho( \overline{\mathfrak{a}}_i(x)))+ \mathsf{C}_x( \overline{\mathfrak{a}}(x), \overline{\mathfrak{a}}_i(x)). \]
It follows that we have
\begin{align*} \left( L_{\overline{\mathfrak{a}}}^{\rho}\omega\right) \left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k}\right)(x)= d_x\omega\left(\rho(\overline{\mathfrak{a}}(x)); \overline{ \mathfrak{a}}_{1}(x),\dots,\overline{ \mathfrak{a}}_{k}(x)\right)\\ + {\displaystyle\sum\limits_{i=1}^{k}} \omega\left( \overline{\mathfrak{a}}_{1}(x),\dots,\overline{\mathfrak{a}}_{i-1}(x) ,d_x\overline{\mathfrak{a}}(\rho( \overline{\mathfrak{a}}_i(x)))- \mathsf{C}_x( \overline{\mathfrak{a}}(x), \overline{\mathfrak{a}}_i(x)),\overline{\mathfrak{a}}_{i+1}(x),\dots,\overline{\mathfrak{a}}_{k}(x) \right), \end{align*}
which implies that $L_{\overline{\mathfrak{a}}}^{\rho}\omega$ is a well defined skew-symmetric $k$-form on $\mathcal{A}_U$, since its value at $x$ only depends on the $1$-jets of $\overline{\mathfrak{a}}$ and $\omega$ and on the values of $(\mathfrak{a}_1,\dots,\mathfrak{a}_k)$ at $x$. Now, since $x\mapsto \mathsf{C}_x$ is a smooth (hence bounded) map from $U$ to $L_{\operatorname{alt}}^2(\mathbb{A};\mathbb{A})$, since the differential of functions is a bounded morphism of convenient spaces (cf. \cite{KrMi}, 3) and since $\rho$ is a bounded morphism of convenient spaces, the proof is completed by the uniform boundedness principle given in \cite{KrMi}, Proposition 30.1.
\end{proof}
\begin{remark} \label{R_Liederivativef} From the relation (\ref{eq_L0}), it is clear that the Lie derivative of a function is defined for any section of $\mathcal{A}_U$. Of course, this is also true for any $k$-form on a Lie algebroid. But, for a strong partial Lie algebroid, this is not true for any $k$-form with $k>0$, since the last formula in the previous proof clearly shows that $L_{\overline{\mathfrak{a}}}^{\rho}\omega$ also depends on the $1$-jet of ${\overline{\mathfrak{a}}}$. \end{remark} \begin{remark} \label{R_AlmostLieDerivative} Assume that $(\mathcal{A},\pi, M,\rho)$ is provided with an almost Lie bracket $\left[.,.\right]_{\mathcal{A}}$. Then the Lie derivative $ L_{{\mathfrak{a}}}^{\rho}\omega$ is again well defined, by an evident adaptation of formula (\ref{eq_Lq}), for any local section $\mathfrak{a}$ and $k$-form $\omega$ defined on some open set $U$. Moreover, if the Lie bracket on $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right)$ is induced by the almost Lie bracket $\left[.,.\right]_{\mathcal{A}}$, then the Lie derivative defined in Proposition \ref{P_Liekform} and the previous global one are compatible.
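\end{remark}
As an elementary illustration, take $\mathcal{A}=TM$, $\rho=\operatorname{Id}_{TM}$ and the Lie bracket of vector fields, so that $\mathfrak{P}(U)=\mathfrak{X}(U)$. For a $1$-form $\omega$ and vector fields $X,Y$ on $U$, formula (\ref{eq_Lq}) reduces to the classical formula
\[
(L_X\omega)(Y)=L_X(\omega(Y))-\omega([X,Y]),
\]
so $L^{\rho}$ is the usual Lie derivative in this case.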
\subsubsection{Exterior derivative} \label{____ExteriorDerivative} First, for any function $f$, we define the $1$-form $d_{\rho}f$ by \begin{eqnarray} \label{eq_d0} d_{\rho}f={{\rho}^t}\circ df \end{eqnarray} where ${\rho}^t:T^{\prime}M\rightarrow \mathcal{A}^{\prime}$ is the transposed mapping of $ \rho$. \smallskip The Lie derivative with respect to any local section $\mathfrak{a}$ of $\mathcal{A}$ commutes with $d_{\rho}$. \medskip The \emph{exterior differential}\index{exterior differential} on $\bigwedge\Gamma^*(\mathcal{A})$ is defined as follows: \begin{proposition} \label{P_Exreriordidderential}${}$ \begin{enumerate} \item[(1)] The exterior differential $d_\rho$ is the graded endomorphism of degree $1$ on $\bigwedge\Gamma^*(\mathcal{A})$ defined in the following way: \begin{enumerate} \item For any function $f$, $d_{\rho}f$ is defined as previously; \item For $k>0$ and any $k$-form $\omega$ defined on an open set $U$, the exterior differential $d_{\rho}\omega$ is the unique $(k+1)$-form such that, for all $\mathfrak{a}_{0},\dots,\mathfrak{a}_{k}\in \Gamma(\mathcal{A}_U)$, \begin{eqnarray} \label{eq_dext} \begin{aligned} \left( d_{\rho}\omega\right) \left( \mathfrak{a}_{0},\dots, \mathfrak{a}_{k}\right)(x) & ={\displaystyle\sum\limits_{i=0}^{k}}\left( -1\right) ^{i}L_{\mathfrak{a}_{i}}^{\rho }\left( \omega\left( \mathfrak{a}_{0},\dots,\widehat{ \mathfrak{a}_{i}},\dots, \mathfrak{a}_{k}\right)(x) \right) \\ & +{\displaystyle\sum\limits_{0\leq i<j\leq k}}\left( -1\right) ^{i+j}\left( \omega\left( \left[ \overline{\mathfrak{a}}_{i}, \overline{\mathfrak{a}}_{j}\right] _{\mathcal{A}}, \mathfrak{a}_{0} ,\dots,\widehat{ \mathfrak{a}_{i}},\dots,\widehat{ \mathfrak{a}_{j}},\dots, \mathfrak{a}_{k}\right) \right)(x) \end{aligned} \end{eqnarray} where $\overline{\mathfrak{a}}_i$ is any section of $\mathfrak{P}(U)$ such that $\overline{\mathfrak{a}}_i(x)={\mathfrak{a}}_i(x)$ for $i \in \{0,\dots, k\}$. \end{enumerate} \item[(2)] For any $k$-form $\eta$ and any $l$-form $\zeta$, with $\left( k,l \right) $ in $\mathbb{N}^2$, we have the following properties: \begin{equation} \label{Eq_WedgeProduct} d_\rho(\eta\wedge\zeta)=d_\rho(\eta)\wedge \zeta+(-1)^k\eta\wedge d_\rho(\zeta), \end{equation} \begin{equation} \label{Eq_dcircd} d_{\rho}\circ d_{\rho}={d_\rho}^2=0.
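\end{equation}
\end{enumerate}
\end{proposition}
For example, for a $1$-form $\omega$ defined on $U$, formula (\ref{eq_dext}) reads
\[
(d_\rho\omega)(\mathfrak{a}_0,\mathfrak{a}_1)(x)=L^{\rho}_{\mathfrak{a}_0}\left(\omega(\mathfrak{a}_1)\right)(x)-L^{\rho}_{\mathfrak{a}_1}\left(\omega(\mathfrak{a}_0)\right)(x)-\omega\left([\overline{\mathfrak{a}}_0,\overline{\mathfrak{a}}_1]_{\mathcal{A}}\right)(x);
\]
in particular, for $\mathcal{A}=TM$ with $\rho=\operatorname{Id}_{TM}$ and the Lie bracket of vector fields, $d_\rho$ is the usual de Rham differential.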
\bigskip \begin{proof}${}$\\ (1) Using the same context as in the proof of Proposition \ref{P_Liekform}, on the one hand, in local coordinates, we have
\begin{align*} L_{\overline{\mathfrak{a}}_{i}}^{\rho} \left( \omega \left( \mathfrak{a}_{0},\dots,\widehat{\mathfrak{a}_{i}},\dots, \mathfrak{a}_{k}\right)(x) \right) &= d_x\omega\left(\rho(\overline{\mathfrak{a}}_i(x)); \overline{ \mathfrak{a}}_{0}(x),\dots,\widehat{\overline{ \mathfrak{a}}_{i}},\dots,\overline{ \mathfrak{a}}_{k}(x)\right)\\ &\;\;\;+\sum_{j\not= i}\omega \left(\overline{ \mathfrak{a}}_{0}(x),\dots, d_x\overline{ \mathfrak{a}}_{j}(\rho(\overline{\mathfrak{a}}_i(x))), \dots,\widehat{\overline{ \mathfrak{a}}_{i}},\dots,\overline{ \mathfrak{a}}_{k}(x)\right). \end{align*}
On the other hand, we have
\begin{eqnarray*} \begin{aligned} &\omega\left( \left[ \overline{\mathfrak{a}}_{i}, \overline{\mathfrak{a}}_{j}\right] _{\mathcal{A}}, \mathfrak{a}_{0} ,\dots,\widehat{ \mathfrak{a}_{i}},\dots,\widehat{ \mathfrak{a}_{j}},\dots, \mathfrak{a}_{k}\right) (x)\\ &=\omega\left( \mathsf{C}_x( \overline{\mathfrak{a}}_i(x), \overline{\mathfrak{a}}_j(x))+d_x\overline{\mathfrak{a}}_{j}(\rho( \overline{\mathfrak{a}}_i(x)))-d_x\overline{\mathfrak{a}}_{i}(\rho( \overline{\mathfrak{a}}_j(x))), \overline{\mathfrak{a}}_{0}(x) ,\dots,\widehat{\overline{ \mathfrak{a}}_{i}},\dots,\widehat{ \overline{\mathfrak{a}}_{j}},\dots, \overline{\mathfrak{a}}_{k}(x)\right). \end{aligned} \end{eqnarray*}
Finally, as $\rho(\mathfrak{a}_i(x))=\rho(\overline{\mathfrak{a}}_i(x))$ for $i \in \{0,\dots, k\}$, we obtain:
\begin{eqnarray*} \begin{aligned} &\left( d_{\rho}\omega\right) \left( \mathfrak{a}_{0},\dots, \mathfrak{a}_{k}\right)(x)={\displaystyle\sum\limits_{i=0}^{k}}\left( -1\right) ^{i} d_x\omega\left(\rho(\overline{\mathfrak{a}}_i(x)); \overline{ \mathfrak{a}}_{0}(x),\dots,\widehat{\overline{ \mathfrak{a}}_{i}},\dots,\overline{ \mathfrak{a}}_{k}(x) \right) \\ &+{\displaystyle\sum\limits_{0\leq i<j\leq k}}\left( -1\right)^{i+j} \omega\left( \mathsf{C}_x\left( \overline{\mathfrak{a}}_i(x), \overline{\mathfrak{a}}_j(x)\right), \overline{\mathfrak{a}}_{0}(x),\dots,\widehat{\overline{ \mathfrak{a}}_{i}},\dots,\widehat{ \overline{\mathfrak{a}}_{j}},\dots, \overline{\mathfrak{a}}_{k}(x) \right). \end{aligned} \end{eqnarray*}
Since this value only depends on the $1$-jet of $\omega$ at $x$ and the value of each $\overline{\mathfrak{a}}_i(x)$ for $i \in \{0,\dots, k\}$, it follows that $d_{\rho}\omega$ is a well defined $(k+1)$-form, by the same arguments as at the end of the proof of Proposition \ref{P_Liekform}.\\ (2) According to the definition of the wedge product, the last formula in local coordinates for $ \left( d_{\rho}\omega\right) \left( \mathfrak{a}_{0},\dots, \mathfrak{a}_{k}\right)$ clearly implies relation (\ref{Eq_WedgeProduct}).\\ Since the Lie bracket on $\mathfrak{P}(U)$ satisfies the Jacobi identity for any open set $U$, and since the differential $d_\rho\omega$ only depends on the $1$-jet of $\omega$, as in finite dimension, it follows that $d_\rho(d_\rho \omega)=0$. \end{proof} \subsubsection{Nijenhuis endomorphism} \label{____NijenhuisEndomorphism} In this subsection, we only consider the case of a convenient Lie algebroid $\left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $. Let $A$ be an endomorphism of $\mathcal{A}$.
The \emph{Lie derivative of $A$} with respect to a local section $\mathfrak{a}$ is defined by \begin{eqnarray} \label{eq_LieDerivativeEndomorphism} L^\rho_\mathfrak{a}A(\mathfrak{b})=[\mathfrak{a},A \left( \mathfrak{b} \right) ]_\mathcal{A}-A \left( [\mathfrak{a},\mathfrak{b}]_\mathcal{A} \right) \end{eqnarray} for all local or global sections $\mathfrak{b}$ with the same domain as $\mathfrak{a}$.\\ The \emph{Nijenhuis tensor}\index{Nijenhuis tensor}\index{tensor!Nijenhuis} of $A$ is the tensor of type $(1,2)$ defined by: \begin{eqnarray} \label{eq_NijenhuisEndomorphism} N_A(\mathfrak{a},\mathfrak{b})=[A\mathfrak{a},A\mathfrak{b}]_\mathcal{A}-A[A\mathfrak{a},\mathfrak{b}]_\mathcal{A}-A[\mathfrak{a},A\mathfrak{b}]_\mathcal{A}+A^2[\mathfrak{a},\mathfrak{b}]_\mathcal{A} \end{eqnarray} for all local or global sections $\mathfrak{a}$ and $\mathfrak{b}$ with same domain. Note that, with this convention, $N_{\operatorname{Id}}$ vanishes identically. \begin{remark} \label{R_PartialAlgebroid} Consider a partial Lie algebroid $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right) $. If $A$ is an endomorphism of sheaves of $\mathfrak{P}_M$, the same formulae are well defined for any local section $\overline{\mathfrak{a}}$ of $\mathfrak{P}_M$. In this way, we can also define the Nijenhuis tensor $N_A$ on the sheaf $\mathfrak{P}_M$. \end{remark} \subsection{Lie morphisms and Lie algebroid morphisms} \label{___LieAlgebroidMorphism} Let $(\mathcal{A}_1, \pi_1, M_1,\rho_1, [.,.]_{\mathcal{A}_1})$ and $(\mathcal{A}_2, \pi_2, M_2,\rho_2, [.,.]_{\mathcal{A}_2})$ be two convenient Lie algebroids.\\ We consider a bundle morphism $\Psi:\mathcal{A}_1 \to \mathcal{A}_2$ over $\psi:M_1 \to M_2$. \\ On the one hand, following \cite{HiMa} in finite dimension, we can introduce: \begin{definition} \label{D_psiRelatedSections} Consider a section $\mathfrak{a}_1$ of $\mathcal{A}_1$ over an open set $U_1$ and a section $\mathfrak{a}_2$ of $\mathcal{A}_2$ over an open set $U_2$ which contains $\psi(U_1)$. We say that the pair of sections $(\mathfrak{a}_1,\mathfrak{a}_2)$ is $\psi$-related\index{related pairs of sections} if we have \begin{description} \item{\bf (RS)} $\Psi\circ \mathfrak{a}_1=\mathfrak{a}_2\circ \psi$. \end{description} \end{definition} \begin{definition} \label{D_LieMorphism} $\Psi$ is called a Lie morphism\index{Lie morphism}\index{morphism!Lie} over $\psi$ if it fulfills the following conditions: \begin{description} \item[\textbf{(LM 1)}] $\rho_2 \circ \Psi= T\psi \circ \rho_1$; \item[\textbf{(LM 2)}] $\Psi \circ [\mathfrak{a}_1,\mathfrak{a}^\prime_1]_{\mathcal{A}_1} = [\mathfrak{a}_2,\mathfrak{a}'_2]_{\mathcal{A}_2}\circ \psi$ for all $\psi$-related pairs of sections $(\mathfrak{a}_1,\mathfrak{a}_2)$ and $(\mathfrak{a}^\prime_1,\mathfrak{a}^\prime_2)$. \end{description} \end{definition} \begin{remark} \label{R_LieMorphismPartialAlgebroid} For $i \in \{1,2\}$, let $ \left( \mathcal{A}_i, \pi_i, M_i,\rho_i, \mathfrak{P}_{M_i} \right) $ be a partial Lie algebroid.\\ We consider a sheaf morphism $\Psi:\mathfrak{P}_{M_1} \to \mathfrak{P}_{M_2}$ over a smooth map $\psi:M_1 \to M_2$. Then Definition \ref{D_psiRelatedSections} makes sense for pairs of sections $(\mathfrak{a}_1,\mathfrak{a}_2)\in \mathfrak{P}_{M_1}\times\mathfrak{P}_{M_2}$, which are then called $\psi$-related.
If $[.,.]_{\mathcal{A}_i}$ is the sheaf of Lie brackets defined on $\mathfrak{P}_{M_i}$, the assumption \emph{\textbf{(LM 2)}} in Definition \ref{D_LieMorphism} also makes sense for two pairs of such $\psi$-related sections.\\ Thus $\Psi$ will be called a {\bf Lie morphism of partial convenient Lie algebroids} if it satisfies the assumptions \emph{\textbf{(LM 1)}} and \emph{\textbf{(LM 2)}} for two pairs of $\psi$-related sections of $\mathfrak{P}_{M_1}\times\mathfrak{P}_{M_2}$.\\ \end{remark} For any local $k$-form $\omega$ on $\mathcal{A}_2$ defined on $U_2$, we denote by $\Psi^*\omega$ the local $k$-form on $\mathcal{A}_1$ defined on $U_1=\psi^{-1}(U_2)$ by: \begin{eqnarray} \label{eq_PullbackOmega} (\Psi^*\omega)_{x_1}(\mathfrak{a}_1,\dots,\mathfrak{a}_k)=\omega_{\psi(x_1)}\left( \Psi(\mathfrak{a}_1(x_1)),\dots,\Psi(\mathfrak{a}_k(x_1)) \right) \end{eqnarray} for all $x_1\in U_1$.\\ On the other hand, as classically in finite dimension, we can introduce: \begin{definition} \label{D_ClassicLieAlgebroidMorphism} $\Psi$ is a Lie algebroid morphism over $\psi$ if and only if we have \begin{description} \item[\textbf{(LAM 1)}] $\Psi^*(d_{\rho _2}f) = d_{\rho _1} \left( f \circ \psi \right) $ for all $f \in C^\infty (U_2)$; \item[\textbf{(LAM 2)}] $\Psi^*(d_{\rho _2} \omega)=d_{\rho _1} \Psi^*(\omega)$ for any $1$-form $\omega$ on ${\mathcal{A}_2}_{| U_2}$. \end{description} \end{definition} It is easy to see that conditions \textbf{(LM 1)} and \textbf{(LAM 1)} are equivalent (cf. proof of Proposition \ref{P_psiDiffeomorphism}). Property \textbf{(LM 2)} implies property \textbf{(LAM 2)} for $\psi $-related sections but, in general, a pair of local sections $(\mathfrak{a}_1,\mathfrak{a}_2)$ of $\mathcal{A}_1$ and $ \mathcal{A}_2$ is not $\psi$-related, while each side of (\ref{eq_PullbackOmega}) is well defined for any such pair. On the other hand, under the assumption \textbf{(LM 2)}, we have \begin{eqnarray} \label{eq_Psia1a2} [\Psi(\mathfrak{a}_1),\Psi(\mathfrak{a}_2)]_{\mathcal{A}_2}\left(\psi(x_1)\right)=([\mathfrak{a}'_1,\mathfrak{a}'_2]_{\mathcal{A}_2})\left(\psi(x_1) \right) \end{eqnarray} for any $x_1\in U_1$. Therefore the relation \textbf{(LAM 2)} is satisfied for any pair $ (\mathfrak{a}_1,\mathfrak{a}_2)$ of sections of $\mathcal{A}_1$ which are $\psi$-related to a pair $ (\mathfrak{a}_1^\prime,\mathfrak{a}_2^\prime)$ of sections of $\mathcal{A}_2$. Of course, this property is no longer true for an arbitrary pair $(\mathfrak{a}_1,\mathfrak{a}^\prime_1)$ of local sections of $\mathcal{A}_1$, and so the bracket "$[\Psi (\mathfrak{a}_1),\Psi (\mathfrak{a}^\prime_1)]_{\mathcal{A}_2}(\psi(x_1))$" is not defined. Thus, in general, both definitions are not comparable. However, if $\psi$ is a local diffeomorphism, we have: \begin{proposition} \label{P_psiDiffeomorphism} Let $ \left( \mathcal{A}_1, \pi_1, M_1,\rho_1, [.,.]_{\mathcal{A}_1} \right) $ and $ \left( \mathcal{A}_2, \pi_2, M_2,\rho_2, [.,.]_{\mathcal{A}_2} \right) $ be two convenient Lie algebroids. We consider a bundle morphism $\Psi:\mathcal{A}_1\to \mathcal{A}_2$ over a local diffeomorphism $\psi:M_1\to M_2$. Then $\Psi$ is a Lie algebroid morphism if and only if it is a Lie morphism.
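\end{proposition}
In the familiar situation $\mathcal{A}_i=TM_i$, $\rho_i=\operatorname{Id}_{TM_i}$ ($i\in\{1,2\}$), with $\psi:M_1\to M_2$ a diffeomorphism and $\Psi=T\psi$, condition \textbf{(RS)} is the usual notion of $\psi$-related vector fields, and \textbf{(LM 2)} is the classical naturality of the Lie bracket of vector fields:
\[
\left[T\psi\circ X\circ\psi^{-1},\,T\psi\circ Y\circ\psi^{-1}\right]=T\psi\circ[X,Y]\circ\psi^{-1}.
\]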
More generally, given any convenient Lie algebroid $ \left( \mathcal{A}, \pi, M,\rho, [.,.]_{\mathcal{A}} \right) $, $\rho$ is a Lie morphism and a Lie algebroid morphism from $ \left( \mathcal{A}, \pi, M,\rho, [.,.]_{\mathcal{A}} \right) $ to the convenient Lie algebroid $(TM, p_M, M, \operatorname{Id}, [.,.])$.\\ \begin{proof} Since the set of differentials $\{df, \; f \textrm{ smooth map around } x_2\in M_2\}$ is a separating family on $T_{x_2}M_2$, by an elementary calculation, we obtain the equivalence $\textbf{(LM 1)}\;\Leftrightarrow\; \textbf{(LAM 1)}$.\\ First, note that \textbf{(LM 2)} and \textbf{(LAM 2)} are properties of germs. Thus the equivalence is in fact a local problem. Fix some $x^0_1\in M_1$ and set $x^0_2=\psi(x^0_1)$. Since $\psi$ is a local diffeomorphism, the models of $M_1$ and $M_2$ are the same convenient space $\mathbb{M}$ and we have charts $(U_1,\phi_1)$ and $(U_2,\phi_2)$ around $x^0_1$ in $M_1$ and $x^0_2$ in $M_2$ such that, for $i \in \{1,2\}$: \begin{description} \item[--] $\phi_i(x^0_i)=0\in \mathbb{M}$; \item[--] $\phi_2\circ \psi\circ \phi_1^{-1}$ is a diffeomorphism between the $c^\infty$-open sets $\mathsf{U}_1:=\phi_1(U_1)$ and $\mathsf{U}_2:=\phi_2(U_2)$; \item[--] we have a trivialization $\tau_i: {\mathcal{A}_i}_{| U_i}\to U_i \times\mathbb{A}_i$. \end{description} Thus, without loss of generality, we may assume that $M_i$ is a $c^\infty$-open neighbourhood of $0\in \mathbb{M}$, that $\psi$ is a diffeomorphism from $M_1$ to $M_2$ and that $\mathcal{A}_i=M_i\times \mathbb{A}_i$. In this way, the anchor $\rho_i$ is a smooth map from $M_i$ to $\operatorname{L}(\mathbb{A}_i,\mathbb{M})$ and each section $\mathfrak{a}_i$ of $\mathcal{A}_i$ is a smooth map from $M_i$ to $\mathbb{A}_i$. In this context, on the one hand, for all $x_1\in M_1$, we have \begin{eqnarray*} \begin{aligned} \Psi^*d_{\rho_2}\omega(\mathfrak{a},\mathfrak{a}')(x_1) & = {d}_{\psi(x_1)} \left( \omega(\Psi(\mathfrak{a}')) \right) \left( \rho_2\circ \Psi(\mathfrak{a}) \right) -{d}_{\psi(x_1)} \left( \omega(\Psi(\mathfrak{a})) \right) \left(\rho_2\circ \Psi(\mathfrak{a}') \right) \\ &\quad -\omega \left( [\Psi(\mathfrak{a}),\Psi(\mathfrak{a}')]_{\mathcal{A}_2} \right) \left( \psi(x_1) \right).\\ \end{aligned} \end{eqnarray*} On the other hand, we have \begin{eqnarray*} \begin{aligned} d_{\rho_1}\Psi^*\omega(\mathfrak{a},\mathfrak{a}')(x_1)& ={d}_{\psi(x_1)}\left(\omega ( \Psi(\mathfrak{a}')) \right) \left( T\psi\circ\rho_1(\mathfrak{a}) \right)-{d}_{\psi(x_1)}\left(\omega ( \Psi(\mathfrak{a})) \right) \left(T\psi\circ\rho_1(\mathfrak{a}') \right)\\ &\quad -\omega \left( \Psi([\mathfrak{a},\mathfrak{a}']_{\mathcal{A}_1}) \right) (\psi(x_1)).\\ \end{aligned} \end{eqnarray*} Note that for any two pairs of $\psi$-related sections $(\mathfrak{a}_1,\mathfrak{a}_1')$ and $(\mathfrak{a}_2,\mathfrak{a}_2')$, as $\psi$ is a diffeomorphism, \textbf{(LM 2)} is equivalent to \begin{eqnarray}\label{eq_Psia1a2Diffeo} [\Psi(\mathfrak{a}_1),\Psi(\mathfrak{a}_2)]_{\mathcal{A}_2}\left(\psi(x_1)\right)=\Psi([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}_1})\left(\psi(x_1)\right) \end{eqnarray} for all $x_1\in M_1$. Thus, if \textbf{(LM 1)} and \textbf{(LM 2)} are true, then, in the previous local context, \textbf{(LAM 2)} is equivalent to \begin{eqnarray} \label{eq_OmegaBracket} \omega \left( \Psi([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}_1}) \right)(\psi(x_1)) =\omega\left( [\Psi(\mathfrak{a}_1),\Psi(\mathfrak{a}_2)]_{\mathcal{A}_2} \right) (\psi(x_1)) \end{eqnarray} for any $1$-form $\omega$ on $\mathcal{A}_2$ and any $x_1\in M_1$.
As $\psi$ is a diffeomorphism, for any pair of sections $(\mathfrak{a}_1, \mathfrak{a}_2)$ of $\mathcal{A}_1$, if we set $ \mathfrak{a} _1^\prime=\Psi(\mathfrak{a}_1)\circ\psi^{-1}$ and $ \mathfrak{a}_2^\prime=\Psi(\mathfrak{a}_2)\circ\psi^{-1}$, then $(\mathfrak{a}_1, \mathfrak{a}^\prime_1)$ and $(\mathfrak{a}_2, \mathfrak{a}^\prime_2)$ are $\psi$-related, and it follows that \textbf{(LM 1)} and \textbf{(LM 2)} imply \textbf{(LAM 1)} and \textbf{(LAM 2)}. Conversely, assume that \textbf{(LAM 1)} and \textbf{(LAM 2)} are true. Consider any two pairs of $\psi$-related sections $(\mathfrak{a}_1,\mathfrak{a}_1')$ and $(\mathfrak{a}_2,\mathfrak{a}_2')$. In this case, the relation (\ref{eq_OmegaBracket}) evaluated on $(\mathfrak{a}_1,\mathfrak{a}_2)$ is equivalent to \textbf{(LAM 2)} for any $1$-form $\omega $ on $U_2$. Since, around each point in $M_2$, the set of germs of $1$-forms on $ \mathcal{A}_2$ is separating for germs of sections of $\mathcal{A}_2$, and as $\psi$ is a diffeomorphism, this implies (\ref{eq_OmegaBracket}).\\ It follows that the relation \textbf{(LM 2)} evaluated on both pairs $(\mathfrak{a}_1,\mathfrak{a}_1')$ and $(\mathfrak{a}_2,\mathfrak{a}_2')$ is satisfied, which ends the proof. \end{proof} \subsection{Foliations and Banach-Lie algebroids} \label{__FoliationsAndBanachLieAlgebroids} We first recall the classical notion of integrability of a distribution on a Banach manifold (cf. \cite{Pe1}). Let $M$ be a Banach manifold. \begin{enumerate} \item A distribution\index{distribution} $\Delta$ on $M$ is an assignment $\Delta: x\mapsto\Delta_{x}\subset T_{x}M$ on $M$ where $\Delta_{x}$ is a subspace of $T_{x}M$. The distribution $\Delta$ is called closed if $\Delta_x$ is closed in $T_xM$ for all $x\in M$. \item A vector field $X$ on $M$, defined on an open set Dom$(X)$, is called tangent to a distribution $\Delta$ if $X(x)$ belongs to $\Delta_{x}$ for all $x\in$Dom$(X)$. \item Let $X$ be a vector field tangent to a distribution $\Delta$ and $\operatorname{Fl}^X_t$ its flow. We say that $\Delta$ is $X$-invariant if $T_x\operatorname{Fl}^X_t(\Delta_x)=\Delta_{\operatorname{Fl}^X_t(x)}$ for all $t$ for which $\operatorname{Fl}^X_t(x)$ is defined. \item A distribution $\Delta$ on $M$ is called integrable if, for all $x_{0}\in M$, there exists a weak submanifold $(N,\phi)$ of $M$ such that $\phi(y_{0})=x_{0}$ for some $y_{0}\in N$ and $T\phi(T_{y}N)=\Delta_{\phi(y)}$ for all $y\in N$. In this case $(N,\phi)$ is called an integral manifold of $\Delta$ through $x_0$. A leaf $L$ is a weak submanifold which is a maximal integral manifold. \item A distribution $\Delta$ is called involutive if for any vector fields $X$ and $Y$ on $M$ tangent to $\Delta$ the Lie bracket $[X,Y]$, defined on Dom$(X)\cap$Dom$(Y)$, is tangent to $\Delta$.\\ \end{enumerate} Classically, in the Banach context, when $\Delta$ is a supplemented subbundle of $TM$, according to the Frobenius Theorem, involutivity implies integrability.\\ In finite dimension, the famous results of H. Sussmann and P. Stefan give necessary and sufficient conditions for the integrability of smooth distributions.\\ A few generalizations of these results in the framework of Banach manifolds can be found in \cite{Ste}. In the context of this section, we have (cf. \cite{Pe1}): \begin{theorem} \label{T_IntegrabilityDistributionRangeAnchor} Let $(\mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}})$ be a split Banach-Lie algebroid.\\ If $\rho(\mathcal{A})$ is a closed distribution, then this distribution is integrable.
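\end{theorem}
Two extreme cases may serve as sanity checks. If $\rho$ is surjective, then $\rho(\mathcal{A})=TM$ and the leaves are the connected components of $M$; if $\rho\equiv 0$, the leaves are the points of $M$. Nontrivial intermediate situations occur, for instance, in the Fredholm situation described below.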
Note that if $\rho$ is a Fredholm morphism, the assumptions of Theorem \ref{T_IntegrabilityDistributionRangeAnchor} are always satisfied. In the Hilbert framework, only the closedness of the range of $\rho$ is required. \section{Prolongation of a convenient Lie algebroid along a fibration} \label{__ProlongationOfAConvenientLieAlgebroidAlongAFibration} \subsection{Prolongation of an anchored convenient bundle } \label{___ProlongationOfAnAnchoredConvenientBundle} Let ${\bf p}:\mathcal{E}\rightarrow M$ be a convenient vector bundle with typical fibre $\mathbb{E}$ and $\mathcal{M}$ an open submanifold of $\mathcal{E}$ such that the restriction of ${\bf p}$ to $\mathcal{M}$ is a surjective fibration over $M$ with typical fibre $\mathbb{O}$ (an open subset of $\mathbb{E}$). We consider some anchored convenient bundle $({\mathcal{A}},\pi,M,\rho)$. \begin{notations} \label{N_locM} If $(U,\phi)$ is a chart such that $\mathcal{E}_U$ and $\mathcal{A}_U$ are trivializable, then $TM_U=TM_{| U}$ and $T\mathcal{M}_U$ are also trivializable. In this case, we have trivializations and local coordinates \begin{description} \item[--] $\mathcal{E}_U\equiv U\times \mathbb{E}$ and $\mathcal{M}_U\equiv U\times \mathbb{O}$ with local coordinates $\mathsf{m}=(\mathsf{x,e})$; \item[--] $T\mathcal{M}_U\equiv (U\times \mathbb{O})\times\mathbb{M}\times \mathbb{E}$ with local coordinates $(\mathsf{m, v, z})$. \end{description} \end{notations} For $m\in {\bf p}^{-1}(x)$ we set \[ {\mathbf{T}}^{\mathcal{A}}_m{\mathcal{M}}=\{(a,\mu)\in{\mathcal{A}}_x\times T_m{\mathcal{M}}: \; \rho(a)=T{\bf p}(\mu)\}. \] An element of ${\mathbf{T}}^{\mathcal{A}}_m{\mathcal{M}}$ will be denoted $(m,a,\mu)$.\\ We set ${\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}=\displaystyle\bigcup_{m\in{\mathcal{M}}}{\mathbf{T}}^{\mathcal{A}}_m{\mathcal{M}}$ and we consider the projection $\hat{{\bf p}}:{\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}\rightarrow {\mathcal{M}}$ defined by $\hat{{\bf p}}(m,a,\mu)=m$.\\ We introduce the following context: \begin{enumerate} \item Let $\widetilde{\pi}: \widetilde{\mathcal{A}}\rightarrow {\mathcal{M}}$ be the pull-back of the bundle $\pi:{\mathcal{A}}\rightarrow M$ by ${\bf p}:{\mathcal{M}}\rightarrow M$. We denote by $\widetilde{\bf p}$ the canonical vector bundle morphism such that the following diagram is commutative: \[ \xymatrix{ \widetilde{\mathcal{A}} \ar[r]^{\widetilde{{\bf p}}}\ar[d]_{\widetilde{\pi}} & \mathcal{A} \ar[d]^{\pi}\\ \mathcal{M} \ar[r]^{{\bf p}} & M\\ } \] \item Consider the map \[ \begin{array} [c]{cccc} \hat{ \rho}: & {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}} & \to & T{\mathcal{M}} \\ & (m,a,\mu) & \mapsto & (m,\mu) \end{array} \] and let $ {\bf p}_{\widetilde{\mathcal{A}}}: {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}\to \widetilde{\mathcal{A}}$ be the map defined by ${\bf p}_{\widetilde{\mathcal{A}}}(m,a,\mu)=(m,a)$. Then, with ${\bf p}_{\mathcal{A}}:=\widetilde{\bf p}\circ{\bf p}_{\widetilde{\mathcal{A}}}$, the following diagrams are commutative \[ \xymatrix{ {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}} \ar[r]^{{{\bf p}}_{\widetilde{\mathcal{A}}}}\ar[d]_{\hat{{\bf p}}} & \widetilde{\mathcal{A}} \ar[d]^{\widetilde{\pi}}\\ \mathcal{M} \ar[r]^{\operatorname{Id}} & \mathcal{M}\\ } \;\;\;\;\;\;\;\;\;\;\;\; \xymatrix{ {\mathbf{T}}^{\mathcal{ A}}\mathcal{M} \ar[r]^{\hat{ \rho}}\ar[d]_{{\bf p}_{\mathcal{A}}} & T\mathcal{M} \ar[d]^{T{\bf p}}\\ \mathcal{A} \ar[r]^{\rho} & TM\\ } \] \item If $ {\bf p}_{\mathcal{M}}: {T}{\mathcal{M}}\to {\mathcal{M}}$ is the tangent bundle, consider the associated vertical bundle ${\bf p}_\mathcal{M}^{V} :\mathbf{V}\mathcal{M}\rightarrow \mathcal{M}$ (where $\mathbf{V}\mathcal{M}=\ker T{\bf p}$).
Then there exists a canonical bundle isomorphism ${\bf \nu}$ from the pull-back $\widetilde{\bf p}:\widetilde{\mathcal{E}}\rightarrow \mathcal{M}$ of the bundle ${\bf p}:\mathcal{E}\rightarrow M$ over ${\bf p}:\mathcal{M}\rightarrow M$ to ${\bf p}_\mathcal{M}^V :\mathbf{V}\mathcal{M}\rightarrow \mathcal{M}$ so that the following diagram is commutative: \begin{eqnarray} \label{eq_nu} \xymatrix{ \widetilde{\mathcal{E}} \ar[r]^{{\bf \nu}} \ar[d]_{\widetilde{\bf p}} & \mathbf{V}\mathcal{M} \ar[d]^{{\bf p}_\mathcal{M}^V }\\ \mathcal{M} \ar[r]^{\operatorname{Id}} & \mathcal{M}\\ } \end{eqnarray} \end{enumerate} \begin{theorem} \label{T_Prolongation} ${}$ \begin{enumerate} \item $\hat{\bf p}:{\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}\rightarrow {\mathcal{M}}$ is a convenient bundle with typical fibre $\mathbb{A}\times \mathbb{E}$ and $(\mathbf{T}^{\mathcal{A}}\mathcal{M},\hat{\bf p},\mathcal{M},\hat{\rho}) $ is an anchored bundle. \item ${{\bf p}}_{\widetilde{\mathcal{A}}}$ is a surjective bundle morphism whose kernel is a subbundle of $\mathbf{T}^{\mathcal{A}}\mathcal{M}$. The restriction of $\hat{\rho}$ to $\ker{{\bf p}}_{\widetilde{\mathcal{A}}}$ is a bundle isomorphism onto $\mathbf{V}\mathcal{M}$. \item Given an open subset $V$ of $M$, for each section $\mathbf{X}$ of $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ defined on the open set $\mathcal{V}={\mathbf{p}}^{-1}(V)\subset \mathcal{M}$, there exists a pair $(\mathfrak{a},X)$ of a section $\mathfrak{a}$ of $\widetilde{\mathcal{A}}$ and a vector field $X$ on $\mathcal{V}$ such that \begin{eqnarray} \label{eq_aX} \forall m\in \mathcal{V},\; T\mathbf{p}(X(m))=\rho \circ \widetilde{\mathbf{p}}(\mathfrak{a}(m)). \end{eqnarray} Conversely, such a pair $(\mathfrak{a},X)$ which satisfies (\ref{eq_aX}) defines a unique section $\mathbf{X}$ on $\mathcal{V}$, the associated pair of $\mathbf{X}$ is precisely $(\mathfrak{a},X)$ and, with these notations, we have $\hat{\rho}(\mathbf{X})=X$. \end{enumerate} \end{theorem} \begin{proof} (1) Let $(U,\phi)$ be a chart on $M$ such that we have trivializations $\tau:\mathcal{A}_U\rightarrow \phi(U)\times \mathbb{A}$ and $\Phi:\mathcal{M}_U\rightarrow \phi(U)\times \mathbb{O}\subset \phi(U)\times \mathbb{E}$. Then $T\phi$ is a trivialization of $TM_U$ on $\phi(U)\times \mathbb{M}$ and $T\Phi$ is a trivialization of $T\mathcal{M}_U$ on $\phi(U)\times \mathbb{O}\times\mathbb{M}\times\mathbb{E}$.\\ To be very precise, according to the notations in $\S$ \ref{___LocalIdentificationsAndExpressionsInAConvenientBundle}, we have \begin{description} \item $\phi(x)\equiv\mathsf{x} $; \item $\tau(x,a)\equiv (\mathsf{x,a})$; \item $\Phi(x,e)\equiv (\mathsf{x,e})$ and, for $m=(x,e)$, $\Phi(m)\equiv \mathsf{m}$; \item $T\phi(x,v)\equiv (\mathsf{x,v})$; \item $T\Phi(m, \mu)= T\Phi(x,e,v,z)\equiv(\mathsf{x,e,v,z})$; \end{description} where $\equiv$ stands for the representation in the corresponding trivialization.\\ In this local context and with these notations, we have \begin{eqnarray} \label{eq_locTAM} \mathbf{T}^{\mathcal{A}}\mathcal{M}_U\equiv\{(\mathsf{x,e,a,v, z})\in \phi(U)\times\mathbb{O}\times\mathbb{A}\times \mathbb{M}\times\mathbb{E}\;:\; \mathsf{v={\bf \rho}_x(a)}\} \end{eqnarray} where ${\bf \rho}$ corresponds to the local expression of the anchor.
It follows that: \begin{eqnarray} \label{eq_LocTAMstrict} \mathbf{T}^{\mathcal{A}}\mathcal{M}_U\equiv\{(\mathsf{x,e,a,\rho_x(a),z}) \;\;: (\mathsf{x,e,a,z})\in \phi(U)\times\mathbb{O}\times \mathbb{A}\times \mathbb{E}\}, \end{eqnarray} and so the map $\mathbf{T}\Phi: \mathbf{T}^{\mathcal{A}}\mathcal{M}_U\rightarrow \phi(U)\times\mathbb{O}\times \mathbb{A}\times \mathbb{E}$ defined by $\mathbf{T}\Phi(x,e,a,v,z)\equiv(\mathsf{x,e,a,z})$ is a smooth bijective map. Moreover, for each $m=(x,e)\in \mathcal{M}_U$, the restriction $\mathbf{T}{\Phi}_m$ of $\mathbf{T}\Phi$ to $\mathbf{T}^{\mathcal{A}}_m\mathcal{M}$ is clearly linear and we have the following commutative diagram \[ \xymatrix{ \mathbf{T}^\mathcal{A}\mathcal{M}_U \ar[r]^{\mathbf{T}{\Phi}}\ar[d]_{\hat{\mathbf{p}}} & \phi(U)\times\mathbb{O}\times \mathbb{A}\times\mathbb{E} \ar[d]^{\hat{\pi}_1}\\ {\mathcal{M}}_U \ar[r]^{\Phi} & \phi(U )\times\mathbb{O}\\ } \] This shows that $\mathbf{T}{\Phi}$ is a local trivialization of $\mathbf{T}^\mathcal{A}\mathcal{M}_U$ modelled on $\mathbb{A}\times \mathbb{E}$.\\ In this local context, the anchor reads $\hat{\rho}(\mathsf{x,e,a,z}) = (\mathsf{x,e,\rho_x(a),z})$.\\ Consider two such chart domains $U$ and $U'$ in $M$ such that $U\cap U'\not=\emptyset$. Then we have: \begin{itemize} \item[--] the transition maps associated to the trivializations $\tau$ and $\tau'$ in $\mathcal{A}$ are of type \[ \mathsf{(x,a)}\mapsto \mathsf{\left( t(x), G_x(a) \right) } \] where $\mathsf{x\mapsto G_{x}}$ takes values in $\operatorname{GL}(\mathbb{A})$ and is a smooth map from $U\cap U^\prime$ into $\operatorname{L}(\mathbb{A})$; \item[--] the transition maps associated to the trivializations $\Phi$ and $\Phi'$ in $\mathcal{M}$ are of type \[ \mathsf{(x,e)} \mapsto \mathsf{\left( t(x),F_x(e) \right) } \] where $\mathsf{x} \mapsto \mathsf{F_{x}}$ takes values in $\operatorname{GL}(\mathbb{E})$ and is a smooth map from $U\cap U^\prime$ into $\operatorname{L}(\mathbb{E})$; \item[--] if $\widetilde{\Phi}$ and $\widetilde{\Phi}'$ are the trivializations of $\widetilde{\mathcal{A}}$ associated to $\Phi$ and $\Phi'$, the transition maps associated to the trivializations $\widetilde{\Phi}$ and $\widetilde{\Phi}'$ in $\widetilde{\mathcal{A}}$ are of type \[ \mathsf{(x,e,a)} \mapsto \mathsf{\left( t(x),F_x(e),G_{x}(a) \right) }; \] \item[--] the transition maps associated to the trivializations $T\Phi$ and $T\Phi'$ in $T\mathcal{M}$ are of type \[ \mathsf{(x,e,v,z)} \mapsto \left( \mathsf{ t(x),F_x(e),} d\mathsf{_xt(v), H_{(x,e)}(z) } \right) \] where $\mathsf{(x,e)} \mapsto \mathsf{H_{(x,e)}}$ takes values in $\operatorname{GL}(\mathbb{E})$ and is a smooth map from $(U\cap U^\prime)\times\mathbb{O}$ to $\operatorname{L}(\mathbb{E})$; \item[--] the transition maps associated to the trivializations ${\bf T}\Phi$ and ${\bf T}\Phi'$ in ${\bf T}^\mathcal{A}\mathcal{M}$ are of type \[ \mathsf{(x,e,a,z)} \mapsto \mathsf{\left( t(x),F_x(e),G_{x}(a), H_{(x,e)}(z) \right) }. \] \end{itemize} Clearly, this implies that $\hat{\bf p}:{\bf T}^{\mathcal{A}}\mathcal{M}\to \mathcal{M}$ is a convenient bundle.\\ Now, in the trivializations $\tau$ and $T\phi$, we write $\rho(x,a)\equiv (\mathsf{x,a}) \mapsto (\mathsf{x,\rho_x(a)})$.
If, in another pair of trivializations $\tau'$ and $ T\phi'$, we write $\rho(x,a)\equiv (\mathsf{x',a'}) \mapsto (\mathsf{x',\rho'_{x'}(a')})$, then, for the associated transition maps, we have \[ \mathsf{\rho'_{x'}}= d \mathsf{_x t}\circ \mathsf{\rho_x}\circ \mathsf{G_x^{-1}}. \] It follows easily that $\hat{\rho}$ is a convenient bundle morphism.\\ (2) By construction, the following diagram is commutative: \[ \xymatrix{ {\bf T}^{\mathcal{A}}\mathcal{M} \ar[r]^{\hat{\rho}}\ar[d]_{{\bf p}_{\widetilde{\mathcal{A}}}} & T{\mathcal{M}} \ar[d]^{p_\mathcal{M}}\\ \widetilde{\mathcal{A}} \ar[r]^{\widetilde{\pi}} & \mathcal{M}\\ } \] In the trivialization $\widetilde{\Phi}:\widetilde{\mathcal{A}}_{\mathcal{M}_U}\rightarrow \phi(U)\times\mathbb{O}\times \mathbb{A}$, using the same convention as previously, we have \[ {{\bf p}}_{\widetilde{\mathcal{A}}}\equiv \{(\mathsf{x,e,a,v,z})\mapsto (\mathsf{x,e,a})\}. \] Thus, by arguments analogous to those in the proof of (1), it is clear that ${{\bf p}}_{\widetilde{\mathcal{A}}}$ is compatible with the transition maps associated to the trivializations over the chart domains $U$ and $U'$ of $M$ for ${\bf T}^{\mathcal{A}}\mathcal{M}$ and for $\widetilde{\mathcal{A}}$. Thus ${\bf p}_{\widetilde{\mathcal{A}}}: {\bf T}^{\mathcal{A}}\mathcal{M}\to \widetilde{\mathcal{A}}$ is a surjective convenient bundle morphism.\\ \noindent From the construction of $\mathbf{T}^{\mathcal{A}}\mathcal{M}$, we have \[ \ker{{\bf p}}_{\widetilde{\mathcal{A}}}=\{(m,0,\mu)\in \mathbf{T}_m^\mathcal{A}\mathcal{M}\;\;: T{\bf p}(\mu)=0\}. \] The definition of $\hat{\rho}$ implies that its restriction to $\ker{{\bf p}}_{\widetilde{\mathcal{A}}}$ is an isomorphism onto $\mathbf{V}\mathcal{M}$, which ends the proof of (2).\\ (3) Let $\mathbf{X}$ be a section of $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ defined on $\mathcal{V}={\mathbf{p}}^{-1}(V)$. According to (2), if we set $\mathfrak{a}=\mathbf{p}_{\widetilde{\mathcal{A}}}\circ \mathbf{X}$ and $X=\hat{\rho}(\mathbf{X})$, the pair $(\mathfrak{a},X)$ is well defined and, from the definition of $\mathbf{T}^\mathcal{A}\mathcal{M}$, the relation (\ref{eq_aX}) is satisfied. Conversely, if $\mathfrak{a}$ is a section of $\widetilde{\mathcal{A}}$ and $X$ a vector field on $\mathcal{V}$, the relation (\ref{eq_aX}) means exactly that $\mathbf{X}(m)=(m,\widetilde{\mathbf{p}}(\mathfrak{a}(m)), X(m))$ belongs to $\mathbf{T}_m^\mathcal{A}\mathcal{M}$; so we get a section $\mathbf{X}$ of $\mathbf{T}^\mathcal{A}\mathcal{M}$. Now it is clear that $\mathfrak{a}=\mathbf{p}_{\widetilde{\mathcal{A}}}\circ \mathbf{X}$ and $X=\hat{\rho}(\mathbf{X})$. \end{proof} \begin{definition} \label{D_ProlongantionOfAnAnchorBundleOverAFibration} The anchored bundle $(\mathbf{T}^{\mathcal{A}}\mathcal{M}, \hat{\mathbf{p}}, \mathcal{M}, \hat{\rho})$ is called the prolongation of $(\mathcal{A},\pi,M,\rho)$ over $\mathcal{M}$. The subbundle $\ker{{\bf p}}_{\widetilde{\mathcal{A}}}$ will be denoted $\mathbf{V}^{\mathcal{A}}\mathcal{M}$ and is called the vertical subbundle. \end{definition} \begin{remark} \label{R_VerticalBundle} According to the proof of Theorem \ref{T_Prolongation}, if ${\bf V}^{\mathcal{A}}\mathcal{M}_U$ is the restriction of ${\bf V}^{\mathcal{A}}\mathcal{M}$ to $\mathcal{M}_U$, we have \[ {\bf V}^{\mathcal{A}}\mathcal{M}_U\equiv\{(\mathsf{x,e,0,0,z}) \;\;: (\mathsf{x,e,z})\in \phi(U)\times\mathbb{O}\times \mathbb{E}\}.
\] \end{remark} \begin{examples} \label{Ex_Class} For simplicity, in these examples we assume that the model space $\mathbb{M}$ of $M$ and the typical fibre $\mathbb{A}$ of $\mathcal{A}$ are Banach spaces. \begin{enumerate} \item If $\mathcal{A}={\mathcal{M}}=TM$ (with $\rho=\operatorname{Id}$), then we have ${\mathbf{T}}{\mathcal{A}}=TTM$ and ${\mathbf{T}}{\mathcal{A}}^*=TT^{*}M$, and the anchor $\hat{\rho}$ is the identity. \item If $\mathcal{A}={\mathcal{M}}$, then $\mathbf{T}^{\mathcal{A}}\mathcal{A}$ is simply denoted $\mathbf{T}\mathcal{A}$ and the anchor $\hat{\rho}$ is the map $(x,a,b,c)\mapsto (x,a,\rho(b),\nu(c))$ from $\mathbf{T}\mathcal{A}$ to $T\mathcal{A}$. \item If $\mathcal{M}=\mathcal{A}^*$, then $\mathbf{T}^{\mathcal{A}}\mathcal{A}^*$ is simply denoted $\mathbf{T}\mathcal{A}^*$ and the anchor $\hat{\rho}$ is the map $(x,\xi,a,\eta)\mapsto (x,\xi,\rho(a),\nu(\eta))$ from $\mathbf{T}\mathcal{A}^*$ to $T\mathcal{A}^*$. \item If $\mathcal{M}=\mathcal{A}\times_M {\mathcal{A}}^*$, then $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ is simply denoted $\mathbf{T}(\mathcal{A}\times\mathcal{A}^*)$ and the anchor $\hat{\rho}$ is the map $(x,a,\xi,b,c,\eta)\mapsto (x,a,\xi,\rho(b),\nu(c),\nu(\eta))$ from $\mathbf{T}(\mathcal{A}\times\mathcal{A}^*)$ to $T(\mathcal{A}\times\mathcal{A}^*)$. \item If $\mathcal{M}$ is a conic submanifold of $\mathcal{A}$ (cf. \cite{Pe2}), then $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ is simply denoted $\mathbf{T}\mathcal{M}$ and the anchor $\hat{\rho}$ has the same expression as in (2). \end{enumerate} \end{examples} \begin{remark} \label{R_Chart} We come back to the previous general context: $(\mathcal{A},\pi, M,\rho,[.,.]_\mathcal{A})$ is a convenient Lie algebroid, $\mathbf{p}:\mathcal{E}\to M$ is a convenient vector bundle and $\mathcal{M}$ is an open submanifold of $\mathcal{E}$ which is fibred over $M$.\\ Let $(U, \phi)$ be a chart of $M$ such that $\mathcal{A}_U$ (resp. $\mathcal{E}_U$) is trivializable and denote by \begin{center} $\tau:\mathcal{A}_U\rightarrow \phi(U)\times \mathbb{A}$ (resp. $\Phi : \mathcal{E}_U\rightarrow \phi(U)\times \mathbb{E}$) \end{center} the associated trivialization (cf. notations in the proof of Theorem \ref{T_Prolongation}).
Then we have a canonical anchored bundle \begin{center} $(\phi(U)\times \mathbb{A}, \pi_1,\phi(U), \mathsf{r}= T\phi\circ \rho \circ\tau^{-1})$ \end{center} where $\pi_1:\phi(U)\times \mathbb{A}\rightarrow \phi(U)$ is the first projection.\\ Therefore, the prolongation of $\phi(U)\times \mathbb{A}$ over $\phi(U)\times \mathbb{E}$ is then \[ \mathbf{T}^{\phi(U)\times \mathbb{A}}(\phi(U)\times \mathbb{E})=\phi(U)\times \mathbb{E}\times\mathbb{A}\times\mathbb{E} \] and the anchor $\hat{\mathsf{r}}$ is the map $(\mathsf{x,e,a,z}) \mapsto (\mathsf{x,e, r_x(a), z})$.\\ Note that the bundle $\mathbf{T}^{\phi(U)\times \mathbb{A}}(\phi(U)\times \mathbb{E})$ can be identified with the subbundle \[ \left\{ (\mathsf{x,e,a,r_x(a),z})\in \phi(U)\times \mathbb{E}\times\mathbb{A}\times\mathbb{M}\times\mathbb{E} \right\} \] of $(\phi(U)\times\mathbb{E})\times\mathbb{A}\times\mathbb{M}\times\mathbb{E}$.\\ An analogous description holds for any open set $\mathcal{U}$ in $\mathcal{E}$ such that $\mathbf{p}(\mathcal{U})=U$, since $\mathcal{U}$ is contained in $\mathcal{M}_U$.\\ \end{remark} Given a fibred morphism $\Psi:{\mathcal{M}}\rightarrow {\mathcal{M}}'$ between the fibred manifolds ${\bf p}:{\mathcal{M}}\rightarrow M$ and ${\bf p}':{\mathcal{M}'}\rightarrow M'$ over $\psi:M\rightarrow M'$ and a morphism of anchored bundles $ \varphi$ between $({\mathcal{A}},\pi, M,\rho)$ and $({\mathcal{A}}',\pi', M',\rho')$ over $\psi:M\rightarrow M'$, we get a map \begin{eqnarray}\label{eq_bfTPsi} \begin{array}[c]{cccc} \label{eq_TPsi} {\mathbf{T}}\Psi: & {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}& \to & {\mathbf{T}}^{\mathcal{A}'}{\mathcal{M}}'\\ & (m,a,\mu) & \mapsto & ( \Psi(m),\varphi(a),T_m\Psi(\mu) ) \end{array} \end{eqnarray} \begin{remark} \label{R_chartTM} As in the proof of Theorem \ref{T_Prolongation}, consider a chart $(\mathcal{U},\Phi)$ of $\mathcal{M}$ where $\mathbf{p}(\mathcal{U})=U$ and let $\tau:\mathcal{A}_U\rightarrow \phi(U)\times\mathbb{A}$ be an associated trivialization of $\mathcal{A}_U$. According to Remark \ref{R_Chart}, since $\Phi$ is a smooth diffeomorphism from $\mathcal{U}$ onto its range in $\phi(U)\times\mathbb{E}$, the map $\mathbf{T}\Phi: \mathbf{T}^{\mathcal{A}}\mathcal{M}_{| \mathcal{U}}\rightarrow \mathbf{T}^{\phi(U)\times \mathbb{A}}\Phi(\mathcal{U})=\Phi(\mathcal{U})\times\mathbb{A}\times\mathbb{E}$ is a bundle isomorphism and so $(\mathbf{T}^{\mathcal{A}}\mathcal{M}_{| \mathcal{U}},\mathbf{T}\Phi)$ is a chart for $\mathbf{T}^\mathcal{A}\mathcal{M}$. \end{remark} \begin{notations} \label{N_Prolongations} From now on, the anchored bundle $(\mathcal{A},\pi,M,\rho)$ is fixed and, if no confusion is possible, we simply denote by $\mathbf{T}\mathcal{M}$ and $\mathbf{V}\mathcal{M}$ the sets $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ and $\mathbf{V}^\mathcal{A}\mathcal{M}$ respectively. In particular, {\bf when $\mathcal{M}=\mathcal{A}$, the prolongation $\mathbf{T}\mathcal{A}$ will be simply called the prolongation of the Lie algebroid $\mathcal{A}$}\index{prolongation of a convenient Lie algebroid}. The bundle $\mathbf{V}\mathcal{M}$ will be considered as a subbundle of $\mathbf{T}\mathcal{M}$ as well as of $T\mathcal{M}$.
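\end{notations}
In the local model of Remark \ref{R_Chart}, Theorem \ref{T_Prolongation} (3) takes the following concrete form (this is only an unwinding of the definitions): a section $\mathbf{X}$ of $\mathbf{T}\mathcal{M}$ over $\mathcal{M}_U$ reads
\[
\mathbf{X}(m)\equiv\left(\mathsf{x,e,a(x,e),z(x,e)}\right),
\]
its associated pair $(\mathfrak{a},X)$ is given by $\mathfrak{a}(m)\equiv(\mathsf{x,e,a(x,e)})$ and $X(m)=\hat{\rho}(\mathbf{X}(m))\equiv(\mathsf{x,e,\rho_x(a(x,e)),z(x,e)})$, and the relation (\ref{eq_aX}) is automatically satisfied since $T\mathbf{p}(X(m))\equiv(\mathsf{x,\rho_x(a(x,e))})$.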
\end{notations} \subsection{Connections on a prolongation} \label{___ConnectionsOnAProlongation} Classically (\cite{KrMi}, 37), a \emph{connection}\index{connection}\footnote{In the finite dimensional context such a connection is sometimes called a nonlinear connection.} on a convenient vector bundle $\mathbf{p}:\mathcal{E}\rightarrow M$ is a Whitney decomposition $T\mathcal{E}=H\mathcal{E}\oplus V\mathcal{E}$. Now, as in finite dimension, we introduce this notion on $\mathbf{T}\mathcal{M}$. \begin{definition} \label{D_NLConnection} A connection on $\mathbf{T}\mathcal{M}$ is a decomposition of this bundle into a Whitney sum $\mathbf{T}\mathcal{M}=\mathbf{H}\mathcal{M}\oplus \mathbf{V}\mathcal{M}$.\end{definition} Such a decomposition is equivalent to the datum of an endomorphism $\mathbf{N}$ of $\mathbf{T}\mathcal{M}$ such that $\mathbf{N}^2=\operatorname{Id}$ with $\mathbf{V}\mathcal{M}=\operatorname{ker} (\operatorname{Id}+\mathbf{N})$ and $\mathbf{H}\mathcal{M}=\operatorname{ker} (\operatorname{Id}-\mathbf{N})$, where $\operatorname{Id}$ is the identity morphism of $\mathbf{T}\mathcal{M}$. We naturally get two projections: \begin{description} \item[ ] $h_\mathbf{N} =\displaystyle\frac{1}{2}(\operatorname{Id}+\mathbf{N}): \mathbf{T}\mathcal{M}\rightarrow \mathbf{H}\mathcal{M}$ \item[ ] $v_\mathbf{N}=\displaystyle\frac{1}{2}(\operatorname{Id}-\mathbf{N}): \mathbf{T}\mathcal{M}\rightarrow \mathbf{V}\mathcal{M}$. \end{description} $v_\mathbf{N}$ and $h_\mathbf{N}$ are called respectively the \emph{vertical}\index{projector!vertical} and \emph{horizontal} projector\index{projector!horizontal} of $\mathbf{N}$.\\ Using again the context of Remark \ref{R_chartTM}, we have charts $$\Phi:\mathcal{M}_U\rightarrow \phi(U)\times \mathbb{O}\;\;\textrm{ and}\;\;\mathbf{T}\Phi:\mathbf{T}\mathcal{M}_U\rightarrow \phi(U)\times \mathbb{O}\times \mathbb{A}\times\mathbb{E}.$$ If $\mathbf{N}$ is a connection on $\mathbf{T}\mathcal{M}$, then $\mathsf{N}=\mathbf{T}\Phi\circ \mathbf{N}\circ \mathbf{T}\Phi^{-1}$ is a nonlinear connection on the trivial bundle $\phi(U)\times \mathbb{O}\times \mathbb{A}\times\mathbb{E}$. Thus $\mathsf{N}$ can be written as a matrix field of endomorphisms of $\mathbb{A}\times\mathbb{E}$ of type \begin{eqnarray} \label{eq_Christoffel} \begin{pmatrix} \operatorname{Id}_\mathbb{A} & 0\\ -2\digamma & -\operatorname{Id}_\mathbb{E} \end{pmatrix} \end{eqnarray} and so the associated horizontal (resp. vertical) projector is given by \begin{description} \item $\mathsf{h_N}_{\mathsf{m}}(\mathsf{a,z})=(\mathsf{a}, -\digamma(\mathsf{a}))$; \item $\mathsf{v_N}_{\mathsf{m}}(\mathsf{a,z})=(\mathsf{0}, \mathsf{z}+\digamma(\mathsf{a}))$. \end{description} The associated horizontal space in $\{\mathsf{m}\}\times \mathbb{A}\times \mathbb{E}$ is \[ \left\{ (\mathsf{a}, -\digamma(\mathsf{a}))\;:\; \mathsf{a}\in \mathbb{A}\right\} \] and the associated vertical space in $\{\mathsf{m}\}\times \mathbb{A}\times \mathbb{E}$ is $\{\mathsf{m}\}\times \{\mathsf{0}\}\times \mathbb{E}$.\\ $\digamma$ is called the \emph{(local) Christoffel symbol of} ${\bf N}$.\\
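Indeed, a direct computation with the matrix (\ref{eq_Christoffel}) provides a consistency check for these expressions: we have
\[
\begin{pmatrix} \operatorname{Id}_\mathbb{A} & 0\\ -2\digamma & -\operatorname{Id}_\mathbb{E} \end{pmatrix}^2
=\begin{pmatrix} \operatorname{Id}_\mathbb{A} & 0\\ -2\digamma+2\digamma & \operatorname{Id}_\mathbb{E} \end{pmatrix}
=\operatorname{Id}_{\mathbb{A}\times\mathbb{E}}
\]
and
\[
\mathsf{h_N}_{\mathsf{m}}(\mathsf{a,z})+\mathsf{v_N}_{\mathsf{m}}(\mathsf{a,z})
=(\mathsf{a},-\digamma(\mathsf{a}))+(\mathsf{0},\mathsf{z}+\digamma(\mathsf{a}))=(\mathsf{a,z}),
\]
so $\mathsf{h_N}$ and $\mathsf{v_N}$ are indeed complementary projectors whose ranges are the horizontal and vertical spaces described above.\\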
Let $\widetilde{\bf p}: \widetilde{\mathcal{A}\times\mathcal{E}}\rightarrow \mathcal{M}$ be the fibered product bundle over $\mathcal{M}$ of $({\pi},{\bf p}): {\mathcal{A}}\times {\mathcal{E}}\rightarrow {M}$. We have natural inclusions $\iota_1 :\widetilde{\mathcal{A}}\rightarrow \widetilde{\mathcal{A}\times\mathcal{E}}$ and $\iota_2 :\widetilde{\mathcal{E}}\rightarrow \widetilde{\mathcal{A}\times\mathcal{E}}$, given respectively by $\iota_1(m,a)=(m,a,0)$ and $\iota_2(m,z)=(m,0,z)$, such that \begin{eqnarray} \label{eq_DecompAE} \widetilde{\mathcal{A}\times\mathcal{E}}=\iota_1 (\widetilde{\mathcal{A}})\oplus \iota_2 (\widetilde{\mathcal{E}}). \end{eqnarray} With these notations, we have \begin{proposition} \label{P_IsoConnection} ${}$ \begin{enumerate} \item There exists a nonlinear connection $\mathbf{N}$ on $\mathbf{T}\mathcal{M}$ if and only if there exists a convenient bundle morphism $\mathbf{H}$ from $\widetilde{\mathcal{A}}$ to $\mathbf{T}\mathcal{M}$ such that $\mathbf{T}\mathcal{M}=\mathbf{H}(\widetilde{\mathcal{A}})\oplus \mathbf{V}\mathcal{M}$. In this case $\mathbf{T}\mathcal{M}$ is isomorphic to $\widetilde{\mathcal{A}\times\mathcal{E}}$. \item Assume that $\mathbf{N}$ is a connection on ${\bf T}\mathcal{M}$. Let $\Upsilon$ be a semi-basic vector valued morphism\footnote{That is, a morphism from ${\bf T}\mathcal{M}$ to ${\bf V}\mathcal{M}$ such that $\Upsilon(\mathbf{Z})=0$ for any local vertical section $\mathbf{Z}$.}; then $\mathbf{N}+\Upsilon$ is a connection on ${\bf T}\mathcal{M}$. Conversely, given any nonlinear connection $\mathbf{N}'$ on ${\bf T}\mathcal{M}$, there exists a unique semi-basic vector valued morphism $\Upsilon$ such that $\mathbf{N}'=\mathbf{N}+\Upsilon$. \end{enumerate} \end{proposition} According to this Proposition we introduce: \begin{definition} \label{D_SplitProlongation} A prolongation of $\mathcal{A}$ over ${\bf p}:\mathcal{M}\to M$ is called a split prolongation\index{split prolongation} if there exists a Whitney decomposition ${\bf T}\mathcal{M}={\bf K}\mathcal{M}\oplus {\bf V}\mathcal{M}$. \end{definition} The following result is a clear consequence of Proposition \ref{P_IsoConnection}. \begin{corollary} \label{C_SplitProlongation} Let ${\bf T}\mathcal{M}$ be a split prolongation. Then there exists a connection on ${\bf T}\mathcal{M}$ and ${\bf T}\mathcal{M}$ is isomorphic to $\widetilde{\mathcal{A}\times\mathcal{E}}$. \end{corollary} \begin{proof}[Proof of Proposition \ref{P_IsoConnection}]${}$\\ (1) Assume that we have a connection $\mathbf{N}$ on $\mathbf{T}\mathcal{M}$ and let $\mathbf{T}\mathcal{M}=\mathbf{H}\mathcal{M}\oplus \mathbf{V}\mathcal{M}$ be the associated Whitney decomposition.\\ Let $\mathbf{H}$ be the restriction of $\mathbf{p}_{\widetilde{\mathcal{A}}}$ to $\mathbf{H}\mathcal{M}$. Since $\mathbf{V}\mathcal{M}$ is the kernel of the surjective morphism $\mathbf{p}_{\widetilde{\mathcal{A}}}$, it follows that $\mathbf{H}$ is an isomorphism. Since we have an isomorphism $\nu:\widetilde{\mathcal{E}}\rightarrow \mathbf{V}\mathcal{M}$, according to (\ref{eq_DecompAE}), it follows easily that $\widetilde{\mathcal{A}\times\mathcal{E}}$ is isomorphic to $\mathbf{T}\mathcal{M}$. The converse is clear.\\ (2) At first, if $\Upsilon$ is semi-basic, then $\operatorname{ker} (\operatorname{Id}+\mathbf{N}+\Upsilon)=\mathbf{V}\mathcal{M}$ and clearly the range of $\operatorname{Id}+\mathbf{N}+\Upsilon$ is a supplement of $\mathbf{V}\mathcal{M}$ in ${\bf T}\mathcal{M}$.\\ On the one hand, if $\mathbf{N}'$ is a connection, we set $\Upsilon=\mathbf{N}'-\mathbf{N}$. Then $\Upsilon(\mathbf{Z})=0$ for all local vertical sections.
On the other hand, $\Upsilon(\mathbf{X})=(\operatorname{Id} +\mathbf{N}')(\mathbf{X})-(\operatorname{Id} +\mathbf{N})(\mathbf{X})$, which belongs to $\mathbf{V}\mathcal{M}$.\\ \end{proof} A sufficient condition for the existence of a connection on $\mathbf{T}\mathcal{M}$ is given by the following result: \begin{theorem} \label{T_CShconnection} Assume that there exists a linear connection on the bundle $\mathbf{p}:\mathcal{E}\rightarrow M$. Then there exists a connection $\mathbf{N}$ on $\mathbf{T}\mathcal{M}$. \end{theorem} \begin{proof} Let $\mathbf{p}^{\ast}TM$ (resp. $\mathbf{p}^{\ast}\mathcal{E}$) be the pull-back over $\mathcal{M}$ of $TM\rightarrow M$ (resp. $\mathcal{E}\rightarrow M$).\\ If there exists a linear connection $N$ on the bundle $\mathcal{E}\rightarrow M$, there exists a convenient bundle isomorphism \[ \kappa=(\kappa_1,\kappa_2): T\mathcal{E}\rightarrow \mathbf{p}^{\ast}TM\oplus \mathbf{p}^{\ast}\mathcal{E} \] (cf. Theorem 3.1 \cite{AgSu} in the Banach setting, \cite{BCP}, Chapter 6 in the convenient setting). Therefore, for an open fibred submanifold $\mathcal{M}$ of $\mathcal{E}$, by restriction, we obtain an isomorphism (again denoted $\kappa$) \[ \kappa:T\mathcal{M}\rightarrow \mathbf{p}^{\ast}TM \oplus \mathbf{p}^{\ast}\mathcal{E}. \] Without loss of generality, we can identify $T\mathcal{M}$ with $\mathbf{p}^{\ast}TM \oplus \mathbf{p}^{\ast}\mathcal{E}$. For $m=(x,e)$, the fibre $\mathbf{T}_m\mathcal{M}$ is then \[ \mathbf{T}_m\mathcal{M}=\{(a,\mu)\in \mathcal{A}_x\times (T_xM\times\mathcal{E}_x)\;\;: \rho(a)=T\mathbf{p}(\mu)\}. \] Indeed, under our identification of $T\mathcal{M}$ with $\mathbf{p}^{\ast}TM\oplus \mathbf{p}^{\ast}\mathcal{E}$, if $m=(x,e)$, any $\mu\in T_m\mathcal{M}$ can be written as a pair $(v,z)\in T_xM\times \mathcal{E}_x$, and so we replace the condition $\rho(a)=T\mathbf{p}(\mu)$ by $\rho_x(a)=v$.\\ Recall that, on the one hand, we have a chart (cf. proof of Theorem \ref{T_Prolongation}): \[ \mathbf{T}\Phi: \mathbf{T}\mathcal{M}_U\rightarrow \Phi(\mathcal{M}_U)\times \mathbb{A}\times \mathbb{E}. \] The value of $\mathbf{T}\Phi(m, a,\mu)$ can be written $(\phi(x),\Phi_x(e),\tau_x(a),T_m\Phi(\mu))$ (value denoted $(\mathsf{x,e,a,z})$) with $T_x\phi (T\mathbf{p}(\mu))=T_x\phi(\rho_x(a))$. But, under our assumption, we have $T_m\Phi(\mu)=(T_x\phi(\rho_x(a)),\mathsf{z})\in \{(\mathsf{x,e})\}\times \mathbb{M}\times \mathbb{E}$ and so we obtain \begin{eqnarray} \label{eq_TPhiassump} \mathbf{T}\Phi(m, a,\mu)\equiv (\mathsf{x,e,a,z}). \end{eqnarray} On the other hand, we have a trivialization $\widetilde{\tau\times\Phi}$ from $\widetilde{\mathcal{A}\times\mathcal{E}}_{\mathcal{M}_U}$ to $\Phi(\mathcal{M}_U)\times\mathbb{A}\times\mathbb{E}$ over $\Phi$. In fact, we have $(\widetilde{\tau\times\Phi})(x,e,a,z)=(\phi(x),\Phi_x(e),\tau_x(a),\Phi_x(z))$. \\ According to our assumption, the map $\widetilde{\Psi}:\widetilde{\mathcal{A}\times\mathcal{E}}\rightarrow \mathbf{T}\mathcal{M}$ given by \[ \widetilde{\Psi}(m,a,z)=(m,a,\rho(a),z) \] is well defined. In local coordinates, we have \[ \widetilde{\Psi}(x,e,a,z)=(x,e,a,\rho(a),z)\equiv(\mathsf{x,e,a,z}). \] Thus $\widetilde{\Psi}$ is the identity in local coordinates and so is a local bundle isomorphism.
To complete the proof, we only have to show that, under our assumption, $\widetilde{\Psi}$ is a convenient bundle morphism.\\ By analogy with the notations used in the proof of Theorem \ref{T_Prolongation} (1), let $(U',\phi')$ be another chart on $M$ and consider all the corresponding trivializations $\tau'$, $\Phi'$ and $\mathbf{T}\Phi'$. We set $(\mathsf{x',e',a',z'})=\mathbf{T}\Phi'(\mathfrak{e},\mathfrak{z})$ (i.e. $(\mathfrak{e},\mathfrak{z})\equiv(\mathsf{x',e',a',z'})$ following our convention); in these new coordinates, we have $\widetilde{\Psi}(\mathfrak{e},\mathfrak{z}) \equiv(\mathsf{x',e',a',z'})$. Assume that $U\cap U'\not=\emptyset$. For the change of coordinates, we set $\theta_\mathsf{x}=\phi'\circ\phi^{-1}(\mathsf{x})$, and each associated transition map gives rise to a smooth field of isomorphisms of convenient spaces as follows: \begin{eqnarray*} \begin{aligned} &T_\mathsf{x}\theta(\mathsf{v})=T_\mathsf{x}(\phi'\circ\phi^{-1})(\mathsf{v})\\ &\mathfrak{T}_\mathsf{x}(\mathsf{a})=\left(\tau' \circ\tau ^{-1}\right)_\mathsf{x}(\mathsf{a})\\ &\Theta_\mathsf{x}(\mathsf{e})=(\Phi'\circ \Phi^{-1})_\mathsf{x}(\mathsf{e}). \end{aligned} \end{eqnarray*} Thus, under our assumption, according to (\ref{eq_TPhiassump}), we have \[ \mathbf{T}\Phi'\circ \mathbf{T}\Phi^{-1}(\mathsf{x,e,a,z})=(\theta(\mathsf{x}),\Theta_\mathsf{x}(\mathsf{e}), \mathfrak{T}_\mathsf{x}(\mathsf{a}), \Theta_\mathsf{x}(\mathsf{z})). \] Now, with the previous notations, we have \[ \left(\widetilde{\tau'\times\Phi'}\right)\circ \left(\widetilde{\tau\times\Phi}\right)^{-1}(\mathsf{x,e,a,z})=(\theta(\mathsf{x}),\Theta_\mathsf{x}(\mathsf{e}), \mathfrak{T}_\mathsf{x}(\mathsf{a}), \Theta_\mathsf{x}(\mathsf{z}))=\mathbf{T}\Phi'\circ \mathbf{T}\Phi^{-1}(\mathsf{x,e,a,z}). \] Since, in such local coordinates, $\widetilde{\Psi}$ is the identity map, under our assumption $\widetilde{\Psi}$ is a convenient bundle isomorphism. \end{proof} \subsection{Prolongation of the Lie bracket} \label{___ProlongationOfTheLieBracket} In the finite dimensional framework, a Lie bracket on the smooth sections of $\hat{\bf p}:{\bf T}\mathcal{M}\to \mathcal{M}$ is well defined. Unfortunately, we will see that this is no longer true in the general convenient context.\\ According to the notations \ref{N_locM}, for each open set $U$ of $M$, we denote by $\Gamma(\textbf{T}\mathcal{M}_U)$, $\Gamma(\textbf{V}\mathcal{M}_U)$ and $\Gamma(\widetilde{\mathcal{A}}_U)$ the $C^\infty(\mathcal{M}_{U})$-modules of sections of $\textbf{T}\mathcal{M}_U$, ${\bf V}\mathcal{M}_U$ and $\widetilde{\mathcal{A}}_U$ respectively. We also denote by $\Gamma(\mathcal{A}_U)$ the $C^\infty({U})$-module of sections of $\pi:\mathcal{A}_U\to U$. \begin{definition} \label{D_Mproj} Let $U$ be an open subset of $M$. A section $\mathbf{X}$ in $\Gamma({\bf T}\mathcal{M}_U)$ is called projectable\index{projectable section} if there exists $\mathfrak{a}\in\Gamma({\mathcal{ A}}_U)$ such that \[ {\bf p}_{\mathcal{A}}\circ\mathbf{ X}=\mathfrak{a}\circ {\bf p}. \] \end{definition} Therefore $\mathbf{ X} $ is projectable if and only if there exists a vector field $X$ on $\mathcal{M}_U$ and $\mathfrak{a}\in\Gamma(\mathcal{ A}_U)$ such that (see Theorem \ref{T_Prolongation}) \[ \mathbf{ X}=(\mathfrak{a}\circ{\bf p},X) \textrm{ with } T{\bf p}(X)=\rho\circ \mathfrak{a}. \] \textbf{Assume now that $(\mathcal {A},\pi,M,\rho,[.,.]_{\mathcal {A}})$ is a convenient Lie algebroid.} \smallskip Let $\mathbf{X}_i=(\mathfrak{a}_i\circ{\bf p},X_i)$, $i \in \{ 1,2 \}$, be two projectable sections defined on $\mathcal{M}_U$.
We set \begin{eqnarray} \label{eq_projectTMbracket} [\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}}=([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}}\circ {\bf p},[X_1,X_2]). \end{eqnarray} Since $\rho([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}})=[\rho(\mathfrak{a}_1),\rho(\mathfrak{a}_2)]$ and $T{\bf p} \left( [X_1,X_2] \right) =[T{\bf p}(X_1),T{\bf p}(X_2)]$, it follows that $[\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}}$ is a well defined projectable section. Moreover, we have \begin{eqnarray}\label{eq_HatRhoMorphism} \hat{\rho} \left( [\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}} \right) =[\hat{\rho}(\mathbf{X}_1),\hat{\rho}(\mathbf{X}_2)]. \end{eqnarray} \begin{comments} \label{Com_PartialBracket} Now, \textbf{in finite dimension}, it is well known that the module of sections of $\widetilde{\mathcal{A}}_U\to \mathcal{M}_U$ is the $C^\infty(\mathcal{M}_{U})$-module generated by the set of sections $\mathfrak{a}\circ {\bf p}$, where $\mathfrak{a}$ is any section of $\mathcal{A}_U\to U$. Therefore, according to Theorem \ref{T_Prolongation}, the module $\Gamma({\bf T}\mathcal{M}_U)$ is generated, as a $C^\infty(\mathcal{M}_{U})$-module, by the set of all projectable sections of $\Gamma({\bf T}\mathcal{M}_U)$. This result is essentially a consequence of the fact that, in the local context used in the proof of Theorem \ref{T_Prolongation}, over such a chart domain $U$, the bundle $\widetilde{\mathcal{A}}_U$ is a finite dimensional bundle and so the module of sections of $\widetilde{\mathcal{A}}_U$ over $\mathcal{M}_U$ is finitely generated as a $C^\infty(\mathcal{M}_U)$-module. Thus, if $\mathcal{A}$ is a finite rank bundle over $M$, then the module $\Gamma({\bf T}\mathcal{M}_U)$ is generated, as a $C^\infty(\mathcal{M}_{U})$-module, by the set of all projectable sections of $\Gamma({\bf T}\mathcal{M}_U)$.\\ \textbf{Unfortunately, this is no longer true in the convenient context, and even in the Banach setting in general}.\\ Note that, under some type of approximation property of $\mathbb{A}$, we can show that the $C^\infty(\mathcal{M}_{U})$-module generated by the set of sections $\mathfrak{a}\circ {\bf p}$, $\mathfrak{a}\in\Gamma(\mathcal{A}_U)$, is dense in $\Gamma(\widetilde{\mathcal{A}}_U)$ as a convenient space. In this case, the $C^\infty(\mathcal{M}_{U})$-module generated by the set of all projectable sections of $\Gamma({\bf T}\mathcal{M}_U)$ will be dense in $\Gamma({\bf T}\mathcal{M}_U)$ (as a convenient space).
We could hope that, in this context, the Lie bracket $[.,.]_{\mathbf{T}\mathcal{M}_U}$ can be extended to $\Gamma({\bf T}\mathcal{M}_U)$.\\ \end{comments} \begin{definition} We denote by $\mathfrak{P}({\bf T}\mathcal{M}_U)$ the $C^\infty(\mathcal{M}_{U})$-submodule of $\Gamma({\bf T}\mathcal{M}_U)$ generated by the set of projectable sections defined on $\mathcal{M}_U$. \end{definition} Each module $\mathfrak{P}({\bf T}\mathcal{M}_U)$ has the following properties: \begin{lemma} \label{L_extbracket} ${}$ \begin{enumerate} \item For any open subset $U$ in $M$, there exists a well defined Lie bracket $[.,.]_{{\bf T}\mathcal{M}_U}$ on $\mathfrak{P}({\bf T}\mathcal{M}_U)$ which satisfies the assumptions of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle} and whose restriction to projectable sections is given by the relation (\ref{eq_projectTMbracket}). \item For each $x\in M$, there exists a chart domain $U$ around $x$ in $M$ such that ${\bf T}\mathcal{M}_U$ is trivializable over $\mathcal{M}_U$ and, for each $(a,v)\in {\bf T}_m\mathcal{M}$ where $m=(x,e)\in \mathcal{M}_U$, there exists a projectable section ${\bf X}$ defined on $\mathcal{M}_U$ such that ${\bf X}(m)=(a,v)$. \item Assume that we have a Whitney decomposition ${\bf T}\mathcal{M}={\bf K}\mathcal{M}\oplus{\bf V}\mathcal{M}$ and let $p_{\bf K}$ be the associated projection on ${\bf K}\mathcal{M}$. Then, for any section ${\bf X}\in \mathfrak{P}({\bf T}\mathcal{M}_U)$, the induced section ${\bf X}^K=p_{\bf K}\circ {\bf X}$ belongs to $\mathfrak{P} \left( {\bf K}\mathcal{M}_U \right) =\Gamma \left( {\bf K}\mathcal{M}_U \right) \cap \mathfrak{P} \left( {\bf T}\mathcal{M}_U \right) $. In particular, ${\bf X}^K$ is projectable if and only if ${\bf X}$ is so. \end{enumerate} \end{lemma} \begin{proof} (1) First of all, using the Leibniz formula, if $\mathbf{X}_1$ and $\mathbf{X}_2$ are projectable sections defined on $\mathcal{M}_U$, we have \[ [\mathbf{X}_1,f\mathbf{X}_2]_{{\bf T}\mathcal{M}_U}=df(\hat{\rho}(\mathbf{X}_1))\mathbf{X}_2+ f[\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}_U} \] for any $f\in C^\infty(\mathcal{M}_U)$. Now any local section $\mathbf{X}$ of $\mathfrak{P}({\bf T}\mathcal{M}_U)$ is a finite sum \[ \mathbf{X}=f_1 \mathbf{X}_1+\cdots+ f_k\mathbf{X}_k \] where, for $i \in \{ 1,\dots,k \}$, each $\mathbf{X}_i$ is projectable and $f_i$ is a local smooth function on $\mathcal{M}_U$. Therefore, such a decomposition allows us to define the bracket $[\mathbf{Y},\mathbf{X}]_{{\bf T}\mathcal{M}_U}$ for all projectable sections $\mathbf{Y}$ defined on the same open set as $\mathbf{X}$. Note that, from (\ref{eq_projectTMbracket}) and the Leibniz formula, the value $[\mathbf{Y},\mathbf{X}]_{{\bf T}\mathcal{M}_U}(m)$ only depends on the $1$-jets of $\mathbf{Y}$ and $\mathbf{X}$ at the point $m$, and so the value $[\mathbf{Y},\mathbf{X}]_{{\bf T}\mathcal{M}_U}(m)$ is well defined. Since such a value is independent of the expression of $\mathbf{X}$, by similar arguments, we can define the bracket $[\mathbf{X}',\mathbf{X}]_{{\bf T}\mathcal{M}_U}(m)$ for any other local section $\mathbf{X}'$ of $\mathfrak{P}({\bf T}\mathcal{M}_U)$.
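For instance, for projectable sections $\mathbf{X}_1,\mathbf{X}_2$ and $f,g\in C^\infty(\mathcal{M}_U)$, the Leibniz formula together with the antisymmetry of the bracket forces
\[
[f\mathbf{X}_1,g\mathbf{X}_2]_{{\bf T}\mathcal{M}_U}
=fg\,[\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}_U}
+f\,dg(\hat{\rho}(\mathbf{X}_1))\,\mathbf{X}_2
-g\,df(\hat{\rho}(\mathbf{X}_2))\,\mathbf{X}_1,
\]
which shows that the extension of the bracket is completely determined by its values on projectable sections.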
Now since, by assumption, $[.,.]_\mathcal{A}$ and the Lie bracket of vector fields satisfy the assumptions of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle}, the restriction of $[.,.]_{{\bf T}\mathcal{M}_U}$ to projectable sections also satisfies these assumptions, and so its extension to $\mathfrak{P}({\bf T}\mathcal{M}_U)$ is an almost Lie bracket for any open set $U$ of $M$.\\ In this context, the jacobiator\index{jacobiator} on ${{\bf T}\mathcal{M}_U}$ is defined by \[ J_{{\bf T}\mathcal{M}_U}(\mathbf{X}_1,\mathbf{X}_2,\mathbf{X}_3) =[[\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}_U},\mathbf{X}_3]_{{\bf T}\mathcal{M}_U} +[[\mathbf{X}_2,\mathbf{X}_3]_{{\bf T}\mathcal{M}_U},\mathbf{X}_1]_{{\bf T}\mathcal{M}_U}+[ [\mathbf{X}_3,\mathbf{X}_1]_{{\bf T}\mathcal{M}_U},\mathbf{X}_2]_{{\bf T}\mathcal{M}_U}. \] But according to relation (\ref{eq_HatRhoMorphism}) for projectable sections, using the Leibniz property, it is easy to see that, for all $\mathbf{X}_1,\mathbf{X}_2$ in $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)$, \[ \hat{\rho}([\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}})=[\hat{\rho}(\mathbf{X}_1),\hat{\rho}(\mathbf{X}_2)]. \] On the other hand, from (\ref{eq_projectTMbracket}), for projectable sections $\mathbf{X}_i$, $i \in \{1,2,3\}$, it follows that \[ J_{{\bf T}\mathcal{M}_U}(\mathbf{X}_1,\mathbf{X}_2,\mathbf{X}_3)=0. \] Therefore, according to these properties, it follows that $J_{{\bf T}\mathcal{M}_U}$ vanishes identically on $\mathfrak{P}({\bf T}\mathcal{M}_U)$, which ends the proof of (1). (2) Choose $x_0\in M$. According to the proof of Theorem \ref{T_Prolongation}, there exists a chart domain $U$ around $x_0$ such that \begin{description} \item $\mathcal{M}_U\equiv U\times\mathbb{O}$ \item $\mathcal{A}_U\equiv U\times\mathbb{A}$ \item $T\mathcal{M}_U\equiv U\times\mathbb{O}\times\mathbb{M}\times\mathbb{E}$ \item $\widetilde{\mathcal{A}}_U=U\times\mathbb{O}\times\mathbb{A}$ \item ${\bf T}\mathcal{M}_U\equiv U\times\mathbb{O}\times\mathbb{A}\times\mathbb{E}$. \end{description} Consider $(a_0,v_0)\in {\bf T}_{m_0}\mathcal{M}$ where $m_0=(x_0,e_0)\in \mathcal{M}_U$. Using local coordinates, if $m_0\equiv \mathsf{(x_0,e_0)}$ and $(a_0,v_0)\equiv \mathsf{(a_0,v_0)}$, we consider the section \[ \mathsf{X}: U\times\mathbb{O}\to U\times\mathbb{O}\times\mathbb{A}\times\mathbb{E} \] given by $ \mathsf{X(x,e)=(x,e, a_0, v_0)}$. Then, by construction, the corresponding local section ${\bf X}\equiv \mathsf{X}$ is a projectable section defined on $\mathcal{M}_U$.\\ (3) Under the assumptions of (3), note that the restriction ${\bf p}_{{\bf K}\mathcal{M}}$ of ${\bf p}_{\widetilde{\mathcal{A}}}$ to ${\bf K}\mathcal{M}$ is an isomorphism onto $\widetilde{\mathcal{A}}$. Let ${\bf X}$ be a section of ${\bf T}\mathcal{M}_U$. Then the difference ${\bf X}-{\bf X}^K$ is a vertical section. Since the sum of two projectable sections is a projectable section, it follows that ${\bf X}$ is projectable if and only if ${\bf X}^K$ is projectable.
\end{proof} \begin{notations} \label{N_SheafSectionBracketlocal} ${}$ \begin{enumerate} \item\textbf{Sheaf $\mathfrak{P}_\mathcal{M}$ of sections of $\mathbf{T}\mathcal{M}$:}\\ On the one hand, for any open set $\mathcal{U}$ of $\mathcal{M}$, if $U=\mathbf{p}(\mathcal{U})$, then $\mathcal{U}\subset \mathcal{M}_U$ and so we set $\mathfrak{P}(\mathcal{U}):=\{\mathbf{X}_{| \mathcal{U}}\;:\; \mathbf{X}\in \mathfrak{P}({\bf T}\mathcal{M}_U)\}$.\\ On the other hand, since the set of smooth sections of a Banach bundle defines a sheaf over its base, it follows that the set $\{\mathfrak{P}({\bf T}\mathcal{U}),\; \mathcal{U} \textrm{ open set in }\mathcal{M}\}$ defines a sub-sheaf of modules $\mathfrak{P}_\mathcal{M}$ of the sheaf of modules $\Gamma_\mathcal{M}$ of sections of ${\bf T}\mathcal{M}$. Thus $\{\mathfrak{P}({\bf T}\mathcal{M}_U),\; U\textrm{ open set in } M\}$ generates a sheaf of modules $\mathfrak{P}_\mathcal{M}$ on $\mathcal{M}$. According to Definition \ref{D_PartialConvenientLieAlgebroid}, in this context, when no confusion is possible, we simply denote by $[.,.]_{{\bf T}\mathcal{M}}$ the sheaf of brackets generated by $\{[.,.]_{{\bf T}\mathcal{M}_U},\; U\textrm{ open set in } M\}$.\\ \item {\bf Local version of the Lie bracket $[.,.]_{{\bf T}\mathcal{M}} $:}\\ Each section ${\bf X}$ in $\mathfrak{P}({\bf T}\mathcal{M}_U)$ has a decomposition \[ {\bf X}=\displaystyle\sum_{i=1}^p f_i\mathbf{X}_i \] where $f_i \in C^\infty(\mathcal{M}_U)$ and the ${\bf X}_i$ are projectable sections for all $i \in \{1,\dots,p \}$. We consider a chart domain $U$ in $M$ for which the situation considered in the proof of Lemma \ref{L_extbracket} (2) is valid. In the associated local coordinates, for any section ${\bf X}$ of ${\bf T}\mathcal{M}_U$, we have ${\bf X}(x,e)\equiv\mathsf{(x,e, a(x,e), z(x,e))}$. Now, ${\bf X}$ is projectable if and only if $\mathsf{a}$ only depends on $\mathsf{x}$. Under these notations, if ${\bf X}'\equiv \mathsf{(x,e, a'(x), z'(x,e))}$ is another projectable section, we have (cf. (\ref{eq_projectTMbracket})): \begin{eqnarray} \label{eq_BracketProject} \begin{aligned} \left[{\bf X},{\bf X}' \right]_{{\bf T}\mathcal{M}}(x,e) \equiv &\mathsf{(x,e, C_x(a,a')} + d \mathsf{a'(\rho_x(a))} - d \mathsf{a(\rho_x(a')),}\\ &d \mathsf{ z'(\rho_x(a),z)} - d \mathsf{ z(\rho_x(a'),z'))}. \end{aligned} \end{eqnarray} Now consider two sections ${\bf X}$ and ${\bf X}'$ in $\mathfrak{P} \left( {\bf T}\mathcal{M}_U \right)$. We can write ${\bf X}=\displaystyle\sum_{i=1}^p f_i\mathbf{X}_i$ and ${\bf X}'=\displaystyle\sum_{j=1}^q f'_j\mathbf{X}'_j$ where $f_i, f'_j \in C^\infty(\mathcal{M}_U)$ and where ${\bf X}_i$ and ${\bf X}'_j$ are projectable sections for all $i \in \{1,\dots,p\}$ and $j \in \{1,\dots,q\}$. Since the value of the Lie bracket $[{\bf X},{\bf X}']_{{\bf T}\mathcal{M}}$ at $m$ only depends on the $1$-jets of ${\bf X}$ and ${\bf X}'$ at $m$, this value does not depend on the previous decompositions. Now ${\bf X}$ and ${\bf X}'$ can also be written as pairs $(\mathfrak{a}, X)$ and $(\mathfrak{a}', X')$ respectively. Of course, if $m=(x,e)$, we have \[ \mathfrak{a}(m)= \displaystyle\sum_{i=1}^p f_i(m)a_i(x) \textrm{ and } \mathfrak{a}'(m)= \displaystyle\sum_{j=1}^q f'_j(m)a'_j(x) \] \[ X= \displaystyle\sum_{i=1}^p f_iX_i \textrm{ and } X'= \displaystyle\sum_{j=1}^q f'_jX'_j.
\] In local coordinates, we then have \[ {\bf X}\equiv \mathsf{(a,z)=\left( \displaystyle\sum_{i=1}^p f_ia_i,\sum_{i=1}^p f_iz_i \right) } \] \[ {\bf X}'\equiv \mathsf{(a',z')= \left( \displaystyle\sum_{j=1}^q f'_ja'_j,\sum_{j=1}^q f'_jz'_j \right) }. \] Thus the Lie bracket $[{\bf X},{\bf X}']_{{\bf T}\mathcal{M}}$ has the following expression in local coordinates: \begin{eqnarray} \label{eq_BracketQuasiprojectable} \begin{aligned} \left[{\bf X},{\bf X}'\right]_{{\bf T}\mathcal{M}}& \equiv \mathsf{( C(a,a') } +d \mathsf{a'(\rho_x(a))}\\ &- d \mathsf{a(\rho_x(a')),} d \mathsf{z'(\rho_x(a),z)} -d \mathsf{z(\rho_x(a'),z'))} \end{aligned} \end{eqnarray} where $\mathsf{C}: U\to {L}_{alt}^2(\mathbb{A})$ is defined in local coordinates by the Lie bracket $[.,.]_\mathcal{A}$ and where \[ \mathsf{C(a,a')(m)= \displaystyle\sum_{i=1}^p \sum_{j=1}^q f_i(m)f'_j(m) C_x(a_i(x),a'_j(x))}. \] \end{enumerate} \end{notations} \begin{remark} \label{R_AFiniterank} According to Comments \ref{Com_PartialBracket}, when $\mathcal{A}$ is a finite rank vector bundle, then $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)=\Gamma(\mathbf{T}\mathcal{M}_U)$ and so $[.,.]_{\mathbf{T}\mathcal{M}_U}$ is defined for all sections of $\Gamma(\mathbf{T}\mathcal{M}_U)$. \end{remark} Now, from Lemma \ref{L_extbracket}, $ \left( \mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}} \right)$ is a Lie algebra and $\hat{\rho}$ induces a Lie algebra morphism from $(\mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}})$ to the Lie algebra of vector fields on $\mathcal{M}_U$. In this way, we get: \begin{theorem} \label{T_PartialLieAlgebroid} The sheaf $\mathfrak{P}_\mathcal{M}$ on $\mathcal{M}$ gives rise to a strong partial convenient Lie algebroid structure on the anchored bundle $({\bf T}\mathcal{M}, \hat{p}, \mathcal{M}, \hat{\rho})$. Moreover, the restriction of the bracket $[.,.]_{{\bf T}\mathcal{M}}$ to the module of vertical sections induces a convenient Lie algebroid structure on the anchored subbundle $({\bf V}\mathcal{M}, \hat{p}_{|{\bf V}\mathcal{M}},\mathcal{M},\hat{\rho})$ which is independent of the bracket $[.,.]_\mathcal{A}$. \end{theorem} From Remark \ref{R_AFiniterank}, we obtain: \begin{corollary} \label{C_LieAlgebroidTM} If $\mathcal{A}$ is a Banach bundle with finite dimensional fibers, then\\ $ \left( {\bf T}\mathcal{M},\hat{p}, \mathcal{M}, \hat{\rho},[.,.]_{{\bf T}\mathcal{M}} \right) $ is a convenient Lie algebroid. \end{corollary} \begin{proof}[Proof of Theorem \ref{T_PartialLieAlgebroid}] From Lemma \ref{L_extbracket}, $(\mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}})$ is a Lie algebra and $\hat{\rho}$ induces a Lie algebra morphism from $(\mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}})$ to the Lie algebra of vector fields on $\mathcal{M}_U$. This implies the same properties for $(\mathfrak{P}({\bf T}\mathcal{U}), [.,.]_{{\bf T}\mathcal{M}})$ for any open set $\mathcal{U}$ in $\mathcal{M}$. Thus we obtain a sheaf $\mathfrak{P}_\mathcal{M}$ of Lie algebras and $C^\infty_\mathcal{M}$-modules on $\mathcal{M}$, which implies that $({\bf T}\mathcal{M},\hat{p}, \mathcal{M}, \hat{\rho},[.,.]_{{\bf T}\mathcal{M}})$ is a partial convenient Lie algebroid which is strong, according to Lemma \ref{L_extbracket} (2).\\ It remains to prove the last property. From its definition, the restriction of $\hat{\rho}$ to the vertical bundle ${\bf V}\mathcal{M}\to \mathcal{M}$ is an isomorphism onto the vertical subbundle of the tangent bundle $T\mathcal{M}\to \mathcal{M}$.
Now, from the definition of the bracket of projectable sections, it is clear that the bracket of two (local) vertical sections of ${\bf T}\mathcal{M}\to \mathcal{M}$ is also a (local) vertical section which is independent of the choice of $[.,.]_\mathcal{A}$.\\ \end{proof} \begin{remark} \label{R_liftmorphism} Consider a fibred morphism $\Psi:{\mathcal{M}}\to {\mathcal{M}}'$ between the fibred manifolds ${\bf p}:{\mathcal{M}}\to M$ and ${\bf p}':{\mathcal{M}'}\to M'$ over $\psi:M\to M'$ and a morphism of anchored bundles $\varphi$ between two Lie algebroids $({\mathcal{A}}, \pi, M,\rho, [.,.]_{\mathcal{A}})$ and $({\mathcal{A}}', \pi',M',\rho',[.,.]_{\mathcal{A}'})$ over $\psi:M\to M'$. Then, from Remark \ref{R_LieMorphismPartialAlgebroid}, the prolongation ${\bf T}\Psi: {{\bf T}}^{\mathcal{A}}{\mathcal{M}}\to {{\bf T}}^{\mathcal{A}'}{\mathcal{M}}'$ is a Lie morphism of partial convenient Lie algebroids from $({\bf T}^{\mathcal{A}}\mathcal{M},\mathcal{M},\hat{\rho},\mathfrak{P}_{\mathcal{M}})$ to $({\bf T}^{\mathcal{A}'}\mathcal{M}',\mathcal{M}',\hat{\rho}',\mathfrak{P}_{\mathcal{M}'})$. \end{remark} \subsection{Derivative operator on ${\bf T}\mathcal{M}$} \label{___LieDerivativeExteriorDifferentialAndNijenhuisTensorOnTM} Consider an open set $\mathcal{U}$ in $\mathcal{M}$. We simply denote by ${\bf T}\mathcal{U}$ the restriction of ${\bf T}\mathcal{M}$ to $\mathcal{U}$. Note that $U={\bf p}(\mathcal{U})$ is an open set in $M$ and, of course, $\mathcal{U}$ is contained in the open set $\mathcal{M}_U$; so we have a natural restriction map from $\Gamma({\bf T}\mathcal{M}_U)$ (resp. $\mathfrak{P}({\bf T}\mathcal{M}_U)$) into $\Gamma({\bf T}\mathcal{U})$ (resp. $\mathfrak{P}({\bf T}\mathcal{U})$).\\ Since ${\bf T}\mathcal{M}$ carries only a strong partial convenient Lie algebroid structure, by application of the results of subsection \ref{___DerivativeOperators}, we then have the following result: \begin{theorem} \label{T_DifferentialOperatorTM} Fix some open set $\mathcal{U}$ in $\mathcal{M}$. \begin{enumerate} \item Fix some projectable section $\mathfrak{u}\in \mathfrak{P}({\bf T}\mathcal{U})$. For any $k$-form $\omega$ on $\mathcal{U}$, the Lie derivative $L^{\hat{\rho}}_\mathfrak{ u}\omega$ is a well defined $k$-form on $\mathcal{U}$. \item For any $k$-form $\omega$ on $\mathcal{U}$, the exterior derivative $d_{\hat{\rho}}\omega$ of $\omega$ is a well defined $(k+1)$-form on $\mathcal{U}$.\\ \end{enumerate} \end{theorem} \subsection{Prolongations and foliations} \label{___ProlongationsAndFoliations} \emph{We assume that $(\mathcal{A},\pi,M,\rho,[.,.]_\mathcal{A})$ is a split Banach Lie algebroid}.\\ Under the above assumptions, by application of Theorem \ref{T_Prolongation} and Theorem 2 in \cite{Pe1}, we obtain the following link between the foliation on $M$ and the foliation on $\mathcal{M}$: \begin{theorem} \label{T_tildefol} Assume that $(\mathcal{A},\pi,M,\rho,[.,.]_\mathcal{A})$ is split and the distribution $\rho(\mathcal{A})$ is closed. \begin{enumerate} \item The distribution $\hat{\rho}({\bf T}\mathcal{M})$ is integrable on $\mathcal{M}$. \item Assume that $\mathcal{M}=\mathcal{A}$. Let $L$ be a leaf of $\rho(\mathcal{A})$ and $(\mathcal{A}_L,\pi_{|L},L,\rho_L,[.,.]_{\mathcal{A}_L})$ be the Banach-Lie algebroid which is the restriction of $(\mathcal{A}, \pi, M,\rho,[.,.]_{\mathcal{A}})$. Then $\hat{L}=\mathcal{A}_L=\pi^{-1}(L)$ is a leaf of $\hat{\rho}( \mathbf{T}\mathcal{A})$ such that $\hat{\mathbf{p}}(\hat{L})=L$.
\item In the previous context, the partial Banach-Lie algebroid which is the prolongation of $(\mathcal{A}_L,\pi_{|L},L,\rho_L,\mathfrak{P}_{\mathcal{A}_L})$ over $\hat{L}$ is exactly the restriction to $\hat{L}$ of the partial Banach-Lie algebroid $({\bf T}\mathcal{A}, \hat{p},\mathcal{A},\hat{\rho}, \mathfrak{P}_{\mathcal{A}})$. \end{enumerate} \end{theorem} \begin{remark} \label{R_prolongAL} ${}$ \begin{enumerate} \item If there exists a weak Riemannian metric on $\mathcal{A}$, the distribution $\rho(\mathcal{A})$ is closed and the assumptions of Theorem \ref{T_tildefol} are satisfied. These assumptions are always satisfied if $\rho$ is a Fredholm morphism. \item The set of leaves of the foliation defined by $\hat{\rho}(\mathbf{T}\mathcal{A})$ is \[ \{\mathcal{A}_L, \; L \textrm{ leaf of } \rho(\mathcal{A})\;\}. \] \end{enumerate} \end{remark} \begin{proof} From Theorem \ref{T_Prolongation}, if $m=(x,e)\in \mathcal{M}$, the fibre ${\bf T}_m\mathcal{M}$ can be identified with ${\mathcal{A}}_x\times \widetilde{\mathcal{E}}_m\equiv \mathbb{A}\times\mathbb{E}$, so we have $\hat{\rho}_m(a, v)\equiv (\mathsf{\rho_x(a),v})$. It follows that $\ker(\hat{\rho}_m)$ can be identified with $\ker\rho_x\times\{0\}\subset {\mathcal{A}}_x\times T_m\mathcal{M}$. Thus, $\ker\hat{\rho}_m$ is split if and only if $\ker\rho_x$ is split. Moreover $\hat{\rho}_m({\bf T}_m\mathcal{M})=\rho_x(\mathcal{A}_x)\times {\bf V}_m\mathcal{M}$ is closed in $T_m\mathcal{M}\equiv\mathbb{M}\times\mathbb{E}$ if and only if $\rho_x(\mathcal{A}_x)$ is closed in $T_xM$. Then (1) will be a consequence of \cite{BCP}, Theorem 8.39, if we show that, for any $(x,e)\in\mathcal{M}$, there exists an open neighbourhood $U$ of $x$ in $M$ such that the set $\mathfrak{P}({\bf T}\mathcal{M}_U)$ is a generating upper set for $\hat{\rho}({\bf T}\mathcal{M})$ around $(x,e)$ (cf. \cite{BCP}, Definition 8.38) and satisfies the condition {\bf (LB)} given in \cite{BCP}, $\S$ Integrability and Lie invariance.\\ The point $m=(x,e)\in \mathcal{M}$ is fixed and we choose a chart domain $U$ of $x\in M$ such that $\mathcal{M}_U$ and $\mathcal{A}_U$ are trivializable. Without loss of generality, according to the notations in $\S$ \ref{___LocalIdentificationsAndExpressionsInAConvenientBundle}, we may assume that $U\subset \mathbb{M}$, $\mathcal{M}_U=U\times\mathbb{O}\subset \mathbb{M}\times\mathbb{E}$, $\mathcal{A}_U=U\times\mathbb{A}$. Thus, according to the proof of Theorem \ref{T_Prolongation}, we have ${\bf T}\mathcal{M}_U=U\times\mathbb{O}\times \mathbb{A}\times\mathbb{E}$. In this context, if $\{\mathsf{x}\}\times\mathbb{A}=\ker \rho_\mathsf{x}\oplus \mathbb{S}$, then \[ \{\mathsf{(x,e)}\}\times \mathbb{A}\times\mathbb{E}=\{\mathsf{(x,e)}\}\times(\ker \rho_\mathsf{x}\oplus \mathbb{S})\times \mathbb{E}. \] Now consider the upper trivialization $\rho: U\times \mathbb{A}\to U\times\mathbb{M}(=TM_U)$ and the associated upper trivialization: \[ \hat{\rho}:U\times\mathbb{O}\times\mathbb{A}\times\mathbb{E}\to U\times\mathbb{O}\times\mathbb{M}\times\mathbb{E}(=T\mathcal{M}_U). \] Then, from the definition of $\hat{\rho}$, any upper vector field is of type \[ {\bf X}_{\mathsf{(a,v)}}=\hat{\rho}\mathsf{(a, v)= (\rho(a), v)} \] for any $\mathsf{(a, v)}\in \mathbb{A}\times\mathbb{E}$. From the proof of Lemma \ref{L_extbracket} (2), it follows that such a vector field is the range, under $\hat{\rho}$, of a projectable section.
Moreover, the stability of projectable sections under the Lie bracket $[.,.]_{{\bf T}\mathcal{M}_U}$ and the fact that $\hat{\rho}$ induces a Lie algebra morphism from $\mathfrak{P}({\bf T}\mathcal{M}_U)$ into the Lie algebra of vector fields on $\mathcal{M}_U$ imply that condition ({\bf LB}) is satisfied for the set $\mathfrak{P}({\bf T}\mathcal{M}_U)$.\\ Assume that $\mathcal{M}=\mathcal{A}$. Fix some leaf $L$ in $M$. If $(\mathcal{A}_L,\pi_{|L},L,\rho_L,[.,.]_{\mathcal{A}_L})$ is the Banach-Lie algebroid which is the restriction of $(\mathcal{A}, \pi, M,\rho,[.,.]_\mathcal{A})$ then $\pi^{-1}(L)=\mathcal{A}_L$ and $\rho_L(\mathcal{A}_L)=TL$. Now by construction, the prolongation $\mathbf{T}\mathcal{A}_L$ of $\mathcal{A}_L$ relative to the Banach-Lie algebroid $(\mathcal{A}_L,\pi_{|L},L,\rho_L,[.,.]_{\mathcal{A}_L})$ is characterized by \[ \mathbf{T}_{(x,a)}\mathcal{A}_L=\{(b, (v,y))\in \mathcal{A}_x \times T_{(x,a)}\mathcal{A}_L\;:\; \rho_x(b)=v\}. \] It follows that $\mathbf{T}\mathcal{A}_L$ is the restriction of $\mathbf{T}\mathcal{A}$ to $\mathcal{A}_L$ and also $\hat{\rho}(\mathbf{T}\mathcal{A}_L)=T\mathcal{A}_L$. Since $L$ is connected, so is $\mathcal{A}_L$, and then $\mathcal{A}_L$ is a leaf of $\hat{\rho}(\mathbf{T}\mathcal{A})$. \end{proof} \section{Projective limits of prolongations of Banach Lie algebroids} \label{__ProjectiveLimitsOfProlongationsOfBanachAnchoredBundles} \begin{definition} \label{D_ProjectiveSequenceofBanachLieAlgebroids} Consider a projective sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_i \right),\left( \zeta_i^j, \xi_i^j, \delta_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ \end{center} (resp. of Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{p}_{i},M_{i} \right),\left( \xi_{i}^{j}, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$).\\ A sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ is called compatible with algebroid prolongations if, for all $\left( i,j \right) \in\mathbb{N}^2$ such that $j\geq i$, we have \begin{description} \item[\textbf{(PSPBLAB 1)}] {\hfil $\xi_i^j(\mathcal{M}_j)\subset \mathcal{M}_i$;} \item[\textbf{(PSPBLAB 2)}] {\hfil $\mathbf{p}_i \circ \xi_i^j = \delta_i^j \circ \mathbf{p}_j$.} \end{description} \end{definition} Under the assumptions of Definition \ref{D_ProjectiveSequenceofBanachLieAlgebroids}, for each $i\in \mathbb{N}$, we denote by $(\mathbf{T}\mathcal{M}_i, \hat{\mathbf{p}}_i, \mathcal{M}_i, \hat{\rho}_i)$ the prolongation of $\mathcal{A}_i$ over $\mathcal{M}_i$ and by $[.,.]_{\bf{T}\mathcal{M}_i}$ the prolongation of the Lie bracket $[.,.]_{\mathcal{A}_i}$ on projectable sections of $\mathbf{T}\mathcal{M}_i$. We then have the following result. \begin{proposition} \label{P_ProjectiveLimitProlongationBracket} Consider a projective sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right),\left( \zeta_i^j, \xi_i^j, \delta_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ \end{center} (resp. of Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{p}_{i},M_{i} \right),\left( \xi_{i}^{j}, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations.
Then \begin{enumerate} \item $\left( \mathbf{T}\mathcal{M}=\underleftarrow{\lim}\mathbf{T}\mathcal{M}_i,\hat{\mathbf{p}}=\underleftarrow{\lim}\hat{\mathbf{p}}_i,\mathcal{M}=\underleftarrow{\lim}\mathcal{M}_i, \hat{\rho}=\underleftarrow{\lim}\hat{\rho}_i \right) $ is a Fr\'{e}chet anchored bundle which is the prolongation of $\left( \mathcal{A}=\underleftarrow{\lim}\mathcal{A}_i,\pi=\underleftarrow{\lim}\pi_i,M=\underleftarrow{\lim}M_i, \rho=\underleftarrow{\lim}\rho_i \right)$ over $\mathcal{M}$.\\ \item Consider any open set $U$ in $M$ and a sequence of open sets $U_i$ in $M_i$ such that $U=\underleftarrow{\lim}U_i$. We denote by $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$ the $C^\infty(\mathcal{M}_U)$-module generated by all projective limits $\mathbf{X}=\underleftarrow{\lim}\mathbf{X}_i$ of projectable sections $\mathbf{X}_i$ of $\mathbf{T}\mathcal{M}_i$ over $\{\mathcal{M}_i\}_{U_i}$. Then there exists a Lie bracket $[.,.]_{\mathbf{T}\mathcal{M}_U} $ defined on $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$ which satisfies the assumptions of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle} and is characterized by \[ [\mathbf{X}, \mathbf{X}']_{\mathbf{T}\mathcal{M}_U}=\underleftarrow{\lim}[\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i} \] where $\mathbf{X}=\underleftarrow{\lim}\mathbf{X}_i$ and $\mathbf{X}'=\underleftarrow{\lim}\mathbf{X}'_i$.\\ \end{enumerate} \end{proposition} Note that $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$ is a submodule of the module $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)$ generated by the projectable sections of $\mathbf{T}\mathcal{M}_U$. Therefore, by the same argument as used in the proof of Theorem \ref{T_PartialLieAlgebroid}, the set $\{\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U),\; U \textrm{ open set in } M\}$ generates a sheaf $\mathfrak{P}^{pl}_\mathcal{M}$ of modules over $ \mathcal{M}$. Moreover, for any open set $\mathcal{U}$ in $ \mathcal{M}$, according to Proposition \ref{P_ProjectiveLimitProlongationBracket}, the restriction of $\hat{\rho}$ to each $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{U})$ is a Lie algebra morphism into the Lie algebra of vector fields on $\mathcal{U}$. Thus we obtain: \begin{theorem} \label{T_ProjectiveLimitOfProlongationOfBanachAnchoredBundles} Consider a projective sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right),\left( \zeta_i^j, \xi_i^j, \delta_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ \end{center} (resp. of Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{p}_{i},M_{i} \right),\left( \xi_{i}^{j}, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations. Then $\left( \mathbf{T}\mathcal{M}=\underleftarrow{\lim}\mathbf{T}\mathcal{M}_i,\hat{\mathbf{p}}=\underleftarrow{\lim}\hat{\mathbf{p}}_i,\mathcal{M}=\underleftarrow{\lim}\mathcal{M}_i, \hat{\rho}=\underleftarrow{\lim}\hat{\rho}_i \right) $ is a Fr\'{e}chet anchored bundle which is the prolongation of $\left( \mathcal{A}=\underleftarrow{\lim}\mathcal{A}_i,\pi=\underleftarrow{\lim}\pi_i,M=\underleftarrow{\lim}M_i, \rho=\underleftarrow{\lim}\rho_i \right)$ over $\mathcal{M}$. Moreover $\left(\mathbf{T}\mathcal{M},\hat{\bf p}, \mathcal{M}, \hat{\rho},\mathfrak{P}^{pl}_{\mathcal{M}}\right)$ is a strong partial Fr\'echet Lie algebroid.
\end{theorem} \begin{remark} \label{R_NotPartialLieAlgebroidProlongation} In general, since the projective limit of a projective sequence of Banach algebroids has only a structure of partial Fr\'echet Lie algebroid, it follows that $\mathfrak{P}^{pl}_\mathcal{M}$ is a subsheaf of modules of $\mathfrak{P}_\mathcal{M}$ and the inclusion is strict, in the sense that for each open set $\mathcal{U}$ in $\mathcal{M}$ the inclusion of $\mathfrak{P}^{pl}(\mathcal{U})$ in $\mathfrak{P}(\mathcal{U})$ is strict. Thus we do not have a structure of strong partial Fr\'echet Lie algebroid defined on $\mathfrak{P}_\mathcal{M}$ as in Theorem \ref{T_PartialLieAlgebroid}. \end{remark} \begin{proof}[Proof of Proposition \ref{P_ProjectiveLimitProlongationBracket}]${}$\\ (1) According to \textbf{(PSPBLAB 1)} and Theorem \ref{T_ProjectiveLimitOfBanachLieAlgebroids}, $\left( \underleftarrow{\lim}\mathcal{A}_i,\underleftarrow{\lim}\pi_i,\underleftarrow{\lim}M_i ,\underleftarrow{\lim}\rho_i \right)$ is a Fr\'echet anchored bundle. \\ From \textbf{(PSPBLAB 2)} and Proposition \ref{P_ProjectiveLimitOfBanachVectorBundles}, we obtain a structure of Fr\'echet vector bundle on $\left( \underleftarrow{\lim}\mathcal{E}_i,\underleftarrow{\lim}\mathbf{p}_i,\underleftarrow{\lim}M_i \right) $. Since each $\mathcal{M}_i$ is an open manifold of $\mathcal{E}_i$ such that the restriction of $\mathbf{p}_i$ is a surjective fibration of $\mathcal{M}_i$ over $M_i$ and we have $\xi_i^j(\mathcal{M}_j) \subset \mathcal{M}_i$, it follows that $(\mathcal{M}_i,\xi_i^j) _{(i,j)\in\mathbb{N}^2, j \geq i}$ is a projective sequence of Banach manifolds and so the restriction of $\mathbf{p}=\underleftarrow{\lim}\mathbf{p}_i$ to $\mathcal{M}=\underleftarrow{\lim}\mathcal{M}_i$ is a surjective fibration onto $M$.\\ Recall that \[ { \bf T}\mathcal{M}_j =\{ \left( a_j,\mu_j \right) \in \mathcal{A}_{x_j} \times T_{m_j}\mathcal{M}_j: \; \rho_j \left( a_j \right) = T_{m_j}{\bf p}_j \left( \mu_j \right) \}. \] Let $\left( a_j,\mu_j \right)$ be in ${ \bf T}{\mathcal{M}_j}$ and consider $a_i=\zeta_{i}^{j} \left( a_j \right) $ and $\mu_i=T\xi_i^j\left( \mu_j \right) $. We then have: \[ \rho_i\left( a_i \right) = \rho_i \circ \zeta_{i}^{j} \left( a_j \right) = T\delta_i^j \circ \rho_j \left( a_j \right) \] and also \[ T_{m_i}{\bf p}_i \left( \mu_i \right) = T_{m_i}{\bf p}_i \circ T\xi_i^j \left( \mu_j \right) = T\delta_i^j \circ T_{m_j}{\bf p}_j \left( \mu_j \right). \] Since $\rho_j \left( a_j \right) = T_{m_j}{\bf p}_j \left( \mu_j \right)$, we then obtain $\rho_i \left( a_i \right) = T_{m_i}{\bf p}_i \left( \mu_i \right)$. So $\mathbf{T}\xi_i^j : \mathbf{T} \mathcal{M}_j \to \mathbf{T} \mathcal{M}_i$ is a morphism of Banach bundles and we have the following commutative diagram \[ \xymatrix{ \mathbf{T} \mathcal{M}_i \ar@{<-}[r]^{\mathbf{T}\xi_i^j}\ar[d]_{\hat{\rho}_i} & \mathbf{T} \mathcal{M}_j \ar[d]^{\hat{\rho}_j}\\ T\mathcal{M}_i \ar@{<-}[r]^{T\xi_i^j} & T\mathcal{M}_j\\ } \] We deduce that $\left( \left( {\mathbf{T}}^{\mathcal{A}_i} \mathcal{M}_i,\hat{\mathbf{p}}_i,\mathcal{M}_i,\hat{\rho}_i \right),\left( \mathbf{T}\xi_i^j,\xi_{i}^{j}\right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ is a projective sequence of Banach anchored bundles.
Applying again Theorem \ref{T_ProjectiveLimitOfBanachLieAlgebroids}, we get a Fr\'echet anchored bundle structure on $\left( \underleftarrow{\lim}{\bf T}\mathcal{M}_i,\underleftarrow{\lim}\hat{\bf{p}}_i,\underleftarrow{\lim}\mathcal{M}_i \right)$ which appears as the prolongation of $\left( \underleftarrow{\lim}\mathcal{A}_i,\underleftarrow{\lim}\pi_i,\underleftarrow{\lim}M_i ,\underleftarrow{\lim}\rho_i \right) $ over $\underleftarrow{\lim}\mathcal{M}_i $.\\ (2) Let $U$ be an open set in $M$. There exist open sets $U_i$ in $M_i$ such that $\delta_i(U)\subset U_i$ for each $i\in \mathbb{N}$ and so that $U=\underleftarrow{\lim}U_i$. Now, from the definition of $\{\mathcal{M}_i\}_{U_i}$, we must have $\mathcal{M}_U=\underleftarrow{\lim}\{\mathcal{M}_i\}_{U_i}$.\\ Recall that a projectable section $\mathbf{X}_i$ on $\{\mathcal{M}_i\}_{U_i}$ is characterized by a pair $(\mathfrak{a}_i, X_i)$ where $\mathfrak{a}_i$ is a section of $\{\mathcal{A}_i\}_{U_i}$ and $X_i$ is a vector field on $\{\mathcal{M}_i\}_{U_i}$ such that $\rho_i\circ \mathfrak{a}_i=T{\mathbf{p}_i}(X_i)$. Assume that $\mathfrak{a}=\underleftarrow{\lim}\mathfrak{a}_i$ and $X=\underleftarrow{\lim}X_i$; then, from the compatibility with the bonding maps of the sequences of sections $(\mathfrak{a}_i)$, anchors $(\rho_i)$ and vector fields $(X_i)$, we must have $T{\bf p}(X)=\rho\circ \mathfrak{a}$. But from Theorem \ref{T_Prolongation}, it follows that $(\mathfrak{a}\circ \mathbf{p}, X)$ defines a projectable section $\mathbf{X}$ over $\mathcal{M}_U$ and so $\mathbf{X}=\underleftarrow{\lim}\mathbf{X}_i$.\\ On the other hand, recall that, from (\ref{eq_projectTMbracket}), for each $i\in \mathbb{N}$, we have \begin{eqnarray} \label{Eq_BracketTMi} [\mathbf{X}_i,\mathbf{X}'_i]_{{\bf T}\mathcal{M}_i}=([\mathfrak{a}_i,\mathfrak{a}'_i]_{\mathcal{A}_i}\circ {\bf p}_i,[X_i,X'_i]). \end{eqnarray} Now since $\zeta_i^j$ is a Lie algebroid morphism over $\delta_i^j$, according to Definition \ref{D_LieMorphism} we have \begin{eqnarray} \label{Eq_bracketfij} \zeta_i^j([\mathfrak{a}_j,\mathfrak{a}'_j]_{\mathcal{A}_j})(x_j)=\left([\mathfrak{a}_i,\mathfrak{a}'_i]_{\mathcal{A}_i}\right)(\delta_i^j(x_j)). \end{eqnarray} Since $\delta_i^j\circ\mathbf{p}_j=\mathbf{p}_i\circ \xi_i^j$, we have: \begin{eqnarray} \label{Eq_bracketfijp} \zeta_i^j([\mathfrak{a}_j,\mathfrak{a}'_j]_{\mathcal{A}_j})\circ\mathbf{p}_j(m_j)=\left([\mathfrak{a}_i,\mathfrak{a}'_i]_{\mathcal{A}_i}\right)\circ \mathbf{p}_i\circ \xi_i^j(m_j). \end{eqnarray} Naturally, since $X_i$ (resp. $X'_i$) and $X_j$ (resp. $X'_j$) are $\xi_i^j$-related, we also have \begin{eqnarray}\label{Eq_bracketxiij} [X_i,X'_i]\left(\xi_i^j(m_j)\right)=T\xi_i^j\left([X_j,X'_j]\right)(m_j). \end{eqnarray} From (\ref{Eq_BracketTMi}) and (\ref{eq_bfTPsi}) we then obtain \begin{eqnarray}\label{Eq_BracketbfTM} \begin{aligned} \mathbf{T}\xi_i^j\left([\mathbf{X}_j,\mathbf{X}^\prime_j]_{\mathbf{T}\mathcal{M}_j}\right)(m_j) &=\left(\zeta_i^j([\mathfrak{a}_j,\mathfrak{a}^\prime_j]_{\mathcal{A}_j})\circ\mathbf{p}_j(m_j),T_{m_j}\xi_i^j([{X}_j,{X}^\prime_j])\right)\\ &=\left([\mathfrak{a}_i,\mathfrak{a}^\prime_i]_{\mathcal{A}_i}\circ\mathbf{p}_i\circ \xi_i^j(m_j),[{X}_i,X^\prime_i]\circ \xi_i^j(m_j)\right)\\ &=\left([\mathbf{X}_i,\mathbf{X}_i^\prime]_{\mathbf{T}\mathcal{M}_i}\right)\circ\xi_i^j(m_j).
\end{aligned} \end{eqnarray} It follows that we can define \begin{eqnarray}\label{Eq_ProjectiveLimitBracketTMi} [\mathbf{X},\mathbf{X}']_{\mathbf{T}\mathcal{M}_U}=\underleftarrow{\lim}[\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i}. \end{eqnarray} Now, since each bracket $[.,.]_{\mathbf{T}\mathcal{M}_i}$ satisfies the Jacobi identity, from (\ref{Eq_ProjectiveLimitBracketTMi}), the same is true for $[.,.]_{\mathbf{T}\mathcal{M}_U}$ on projective limits $\mathbf{X}$ and $\mathbf{X'}$ of projectable sections $(\mathbf{X}_i)$ and $(\mathbf{X}'_i)$. Finally, as \begin{eqnarray}\label{Eq_compatibilityrho} [\hat{\rho}_i(\mathbf{X}_i),\hat{\rho}_i(\mathbf{X}'_i)]=\hat{\rho}_i\left([\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i}\right), \end{eqnarray} from the compatibility with the bonding maps of the sequences of sections $(\mathfrak{a}_i)$, anchors $(\rho_i)$, vector fields $(X_i)$ and Lie brackets $[.,.]_{\mathbf{T}\mathcal{M}_i}$ on the projective sequence $(\mathbf{T}\mathcal{M}_i)$, it follows that $\hat{\rho}$ satisfies the same type of relation as (\ref{Eq_compatibilityrho}).\\ Now, using the same arguments as in the proof of Lemma \ref{L_extbracket}, we can extend this bracket to a Lie bracket on the module $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$ so that we obtain a Lie algebra, and the restriction of $\hat{\rho}$ to this Lie algebra is a morphism of Lie algebras into the Lie algebra of vector fields on $\mathcal{M}_U$. \end{proof} \section{Direct limits of prolongations of Banach Lie algebroids} \label{__DirectLimitsOfProlongationsOfBanachAnchoredBundles} As in the previous section we introduce: \begin{definition} \label{D_DirectSequenceofBanachLieAlgebroids} Consider a direct sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i} ,[.,.]_{\mathcal{A}_i}\right),\left( \eta_i^j, \chi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ \end{center} (resp. of Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{p}_{i},M_{i} \right),\left( \chi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i\leq j}$).\\ A sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ is called compatible with algebroid prolongations if, for all $\left( i,j \right) \in\mathbb{N}^2$ such that $i\leq j$, we have \begin{description} \item[\textbf{(DSPBLAB 1)}] {\hfil $\chi_i^j(\mathcal{M}_i)\subset \mathcal{M}_j$;} \item[\textbf{(DSPBLAB 2)}] {\hfil $\varepsilon_i^j \circ \mathbf{p}_i = \mathbf{p}_j \circ \chi_i^j$}. \end{description} \end{definition} \begin{remark} The context of direct limits in which we work concerns ascending sequences of Banach manifolds $(M_i)_{i\in \mathbb{N}}$ where $M_i$ is a closed submanifold of $M_{i+1}$. The reason for this assumption is essentially that their direct limit then has a natural structure of (n.n.H) convenient manifold. \\ Although each manifold $\mathcal{M}_i$ is open in $\mathcal{E}_i$, since $\mathcal{E}_i$ is a closed subbundle of $\mathcal{E}_j$, it follows that $ \left( \mathcal{M}_i,\chi_i^j \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ is an ascending sequence of convenient manifolds.
\end{remark} As in the previous section, for each $i\in \mathbb{N}$, we denote by $(\mathbf{T}\mathcal{A}_i, \hat{\mathbf{p}}_i, \mathcal{A}_i, \hat{\rho}_i)$ the prolongation of $\mathcal{A}_i$ over $\mathcal{M}_i=\mathcal{A}_i$ and by $[.,.]_{\mathbf{T}\mathcal{A}_i}$ the prolongation of the Lie bracket $[.,.]_{\mathcal{A}_i}$ on projectable sections of $\mathbf{T}\mathcal{A}_i$.\\ Adapting the argument used in the proof of Proposition \ref{P_ProjectiveLimitProlongationBracket} to this setting of strong ascending sequences and direct limits, we have the result below. Note that, in this context, the prolongation is not Hausdorff in general. However, all the arguments used in the proofs are local and so they still work in this context. \begin{proposition} \label{P_DirectLimitProlongationBracket} Consider a direct sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right) , \left( \eta_i^j,\chi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ \end{center} (resp. of Banach bundles $ \left( \left( \mathcal{E}_{i},\mathbf{p}_{i},M_{i} \right) , \left( \chi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations. Then \begin{enumerate} \item $\left( \underrightarrow{\lim}\mathbf{T}\mathcal{M}_i,\underrightarrow{\lim}\hat{\mathbf{p}}_i,\underrightarrow{\lim}\mathcal{M}_i, \underrightarrow{\lim}\hat{\rho}_i \right) $ is a convenient anchored bundle which is the prolongation of $\left( \underrightarrow{\lim}\mathcal{A}_i,\underrightarrow{\lim}\pi_i,\underrightarrow{\lim}M_i \right)$ over $\underrightarrow{\lim}\mathcal{M}_i$. \item Consider any open set $U$ in $M$ and a sequence of open sets $U_i$ in $M_i$ such that $U=\underrightarrow{\lim}U_i$. We denote by $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U)$ the $C^\infty(\mathcal{M}_U)$-module generated by all direct limits $\mathbf{X}=\underrightarrow{\lim}\mathbf{X}_i$ of projectable sections $\mathbf{X}_i$ of $\mathbf{T}\mathcal{M}_i$ over $\{\mathcal{M}_i\}_{U_i}$.\\ Then there exists a Lie bracket $[.,.]_{\mathbf{T}\mathcal{M}_U} $ defined on $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U)$ which satisfies the assumptions of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle} and is characterized by \[ [\mathbf{X}, \mathbf{X}']_{\mathbf{T}\mathcal{M}_U}=\underrightarrow{\lim}[\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i} \] where $\mathbf{X}=\underrightarrow{\lim}\mathbf{X}_i$ and $\mathbf{X}'=\underrightarrow{\lim}\mathbf{X}'_i$.\\ \end{enumerate} \end{proposition} As in the context of projective sequences, $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U)$ is a submodule of the module $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)$ generated by the projectable sections of $\mathbf{T}\mathcal{M}_U$. Therefore, again by the same argument as used in the proof of Theorem \ref{T_PartialLieAlgebroid}, the set $\{\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U),\; U \textrm{ open set in } M\}$ generates a sheaf $\mathfrak{P}^{dl}_\mathcal{M}$ of modules over $\mathcal{M}$. Moreover, for any open set $\mathcal{U}$ in $ \mathcal{M}$, according to Proposition \ref{P_DirectLimitProlongationBracket}, the restriction of $\hat{\rho}$ to each $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{U})$ is a Lie algebra morphism into the Lie algebra of vector fields on $\mathcal{U}$.
Thus we obtain: \begin{theorem} \label{T_DirectLimitOfProlongationOfBanachAnchoredBundles} Consider a direct sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right),\left( \eta_i^j, \chi_{i}^{j}, \varepsilon_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ \end{center} (resp. of Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{p}_{i},M_{i} \right),\left( \chi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations. Then $\left( \mathbf{T}\mathcal{M}=\underrightarrow{\lim}\mathbf{T}\mathcal{M}_i,\hat{\mathbf{p}}=\underrightarrow{\lim}\hat{\mathbf{p}}_i,\mathcal{M}=\underrightarrow{\lim}\mathcal{M}_i, \hat{\rho}=\underrightarrow{\lim}\hat{\rho}_i \right) $ is a convenient anchored bundle which is the prolongation of $\left( \underrightarrow{\lim}\mathcal{A}_i,\underrightarrow{\lim}\pi_i,\underrightarrow{\lim}M_i \right)$ over $\mathcal{M}=\underrightarrow{\lim}\mathcal{M}_i$. Moreover $\left(\mathbf{T}\mathcal{M},\hat{\bf p}, \mathcal{M}, \hat{\rho},\mathfrak{P}^{dl}_{\mathcal{M}}\right)$ is a strong partial convenient Lie algebroid. \end{theorem}
\section{Introduction} \label{sec:intro} Over the last few decades, the impact of globalization has transformed the semiconductor manufacturing and testing industry from vertical to horizontal integration. The continuous trend of device scaling has enabled designers to incorporate more functionality in a system-on-chip~(SoC) by adopting lower technology nodes, increasing performance while reducing the overall area and cost of an SoC. At present, the majority of SoC design companies or design houses no longer manufacture chips or maintain a foundry (fab) of their own, due to the cost of building and maintaining such foundries~\cite{YehFabCost2012} and the increased complexity of the fabrication process as new technology nodes are adopted. The design house integrates intellectual properties~(IP) obtained from different third-party IP vendors along with its own design and outsources the manufacturing to an offshore foundry. Due to this distributed design and manufacturing flow, which includes third-party IPs, manufacturing, testing, and distribution of chips, various threats have emerged in recent years~\cite{alkabani2007active, castillo2007ipp, tehranipoor2011introduction}. The research community has also been extensively involved in proposing countermeasures against these threats~\cite{roy2008epic, rajendran2012security, charbon1998hierarchical, kahng2001constraint, qu2007intellectual, jarvis2007split}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{logic-locking.pdf} \vspace{-20px} \caption{An abstract view of the logic locking technique.} \vspace{-20px} \label{fig:LL} \end{figure} Logic locking has emerged as the most prominent method to address the threats incurred from untrusted manufacturing. In logic locking, the design of a circuit is locked so that the circuit produces incorrect results in normal operation unless a correct secret key is programmed into the chip. Figure~\ref{fig:LL} shows an abstract view of logic locking, where the key is stored in a tamper-proof non-volatile memory. Subramanyan~et al.\@\xspace~\cite{subramanyan2015evaluating} first showed that a locked circuit can efficiently be broken using key-pruning oracle-guided SAT analysis. Since then, many different versions of SAT-based attacks have been launched on logic locking~\cite{shamsi2019ip}, and solutions have been proposed to mitigate these attacks as well~\cite{wang2017secure, sengupta2020truly, guin2016fortis, GuinVTS2017, GuinTVLSI2018, karmakar2018encrypt, potluriseql, chiang2019looplock, juretus2019increasing}. \textit{Can we safely state that a logic locking technique is completely secure even if we achieve complete SAT resistance?} Note that an untrusted foundry has many more effective means to determine the secret key without performing SAT analysis~\cite{rahmankey, jain2019taal, zhang2019tga, jainVTS2020}. Countermeasures have also been developed to partially prevent these attacks~\cite{zhang2015veritrust, shen2018nanopyramid, vashistha2018trojan, zhang2019tga}. In this paper, we show how an adversary can extract the secret key from a locked netlist, even if all the existing countermeasures are in place. An adversary can determine the secret key by injecting faults at the key registers~\cite{rahman2020defense, rahmankey}, which hold the key value during normal operation, and performing differential fault analysis. The entire process can be performed in three steps.
First, an input pattern is selected that produces a different output response when only one key bit changes while the other key bits are held at their faulty states. To generate such a test pattern, we propose to use the constrained automatic test pattern generation (ATPG) algorithm, which is widely used for testing VLSI circuits. A pattern that detects either a stuck-at-1 (\textit{sa1}) or a stuck-at-0 (\textit{sa0}) fault at one of the key lines, with logic 1 or logic 0 constraints on the other key lines respectively, is sufficient to determine that key bit. Note that the fault-free and faulty responses are always complementary for an input test pattern that detects that fault. The same process is repeated for the other key bits to obtain \bm{$|K|$} patterns for determining the entire key of size \bm{$|K|$}. Second, we apply these test patterns to two instances of an unlocked chip obtained from the market and collect the responses. Logic 1 faults are injected into all the key lines, and the responses of the faulty circuit are measured by applying the set of test patterns. Next, logic 1 faults are injected into all the key lines except one key line. The response from this fault-free circuit is obtained by applying the test pattern associated with the fault-free key line. This step is repeated for all the key lines, and responses are obtained by applying the corresponding test patterns. Finally, the results are compared to determine the key. The actual value of a key bit is 1 if the two responses are the same; otherwise, the key is 0. Note that this paper specifically considers countermeasures associated with logic locking techniques, not countermeasures for preventing or detecting fault injection. The contributions of this paper are described as follows: \begin{itemize} \item We propose a novel attack to break any key-based logic locking technique using fault injection. When we apply a constrained \textit{sa1} pattern to a key line, the hypothesized key bit is 1 if the responses of the fault-free and faulty circuits are the same; otherwise, the key value is 0. The proposed attack is self-referencing and does not require any complex analysis ({e.g.}\@\xspace, SAT). \textbf{\textit{To the best of our knowledge, we are the first to demonstrate that stuck-at fault patterns can be used to determine the secret key of a locked circuit.}} \item We demonstrate and validate our proposed attack by performing laser fault injection on a Kintex-7 FPGA. The technology-dependent gate-level netlist created in Synopsys Design Compiler is converted to a technology-independent netlist and implemented in Xilinx Vivado without any optimization, so that the \textit{saf} patterns can be applied to the FPGA. \end{itemize} The rest of the paper is organized as follows: the proposed attack and its methodology to extract the secret key from any locked circuit are described in Section~\ref{sec:fault-injection-attack}. We present the results of implementing the proposed attack on different logic-locked benchmark circuits in Section~\ref{sec:experimental-results}. Finally, we conclude the paper in Section~\ref{sec:conclusion}. \section{Proposed Fault Injection Attack}\label{sec:fault-injection-attack} The differential fault analysis (DFA) attack on logic locking is motivated by test pattern generation for VLSI circuits. A single stuck-at fault is detected using a test pattern that activates the fault and propagates the faulty response to a primary output.
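As a toy illustration of this last principle (our own example, much simpler than the benchmark circuits considered later): for a single locked gate $y = x_0 \wedge k$, the pattern $x_0 = 1$ detects \textit{sa1} at the key line $k$, since the fault-free output equals $k$ while the faulty output is pinned to logic 1. The two responses therefore agree exactly when $k = 1$ and are complementary when $k = 0$, which is precisely the property exploited by the differential analysis.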
The key register, which holds the key value loaded from the tamper-proof non-volatile memory, can be treated as the source of the fault. These registers are the target for an adversary to obtain the secret key from a working chip. \subsection{Threat Model} The threat model defines the traits of an adversary and its position in the IC manufacturing and supply chain. It is very important to know an attacker's capabilities and resources/tools to estimate its potential to launch the attack. The design house or entity designing the chip is assumed to be trusted. The attacker is assumed to be an untrusted foundry or a reverse engineer having access to the following: \begin{itemize} \item The attacker has access to the locked netlist of a circuit. An untrusted foundry has access to all the layout information, which can be extracted from the GDSII or OASIS file. Also, a locked netlist can be constructed from layer-by-layer reverse engineering of the fabricated chip with advanced technological tools~\cite{torrance2009state}. The attacker has the capability to determine the location of the tamper-proof memory. It can be trivial for an adversary to find the location of the key register in a netlist, as it can easily trace the route from the tamper-proof memory. \item The attacker has possession of an unlocked and fully functional chip, which can be easily acquired from the market. \item Fault injection equipment is necessary to launch the attack. It is not necessary to use high-end fault injection equipment. The basic requirement is to inject faults at the key register (flip-flop) locations on a de-packaged or packaged chip. \end{itemize} \noindent\textbf{Notations}: An original circuit and its locked version are denoted by $C_{O}$ and $C_L$, respectively. The two versions of fault-injected $C_L$ are represented as $C_{F}$ and $C_{A}$. $C_{F}$ represents a locked circuit where all the key lines ($|K|$) are injected with logic 1 (logic 0) faults; we denote it as the faulty circuit. $C_{A}$ represents the same locked circuit where $(|K|-1)$ key lines are injected with the same logic 1 (logic 0) faults, leaving one key line fault-free. We denote this circuit as the fault-free circuit for DFA. Both functional chips are loaded with the correct key in their tamper-proof memories. A fault is injected at the key register using a fault injection method (see details in Section~\ref{sec:experimental-results}). For any given circuit, we assume the primary inputs~($PI$) of size~\textit{$|PI|$}, primary outputs~($PO$) of size~\textit{$|PO|$}, and secret key~($K$) of size~\textit{$|K|$}. We use key lines and key registers interchangeably throughout this paper, as their effects on a circuit are the same. Note that a \textit{saf} is an abstract representation of a defect used to generate test patterns, whereas an injected fault is the manifestation of a faulty logic state due to fault injection. \begin{figure}[ht] \centering \vspace{-5px} \includegraphics[width=\linewidth]{proposed-approach.pdf} \vspace{-10px} \caption{The abstract representation of our proposed fault injection attack.} \vspace{-10px} \label{fig:absract-DFA} \end{figure} \subsection{Differential Fault Analysis Attack Methodology}\label{subsec:DFA} The proposed fault injection attack relies on differential fault analysis, where the responses of two instances of faulty and fault-free circuits are compared to determine the secret key. A practical fault injection approach is described in Section~\ref{sec:experimental-results} to create the faulty chip.
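To make these notations concrete, the following minimal simulation (ours, not part of the actual attack toolchain) sketches the DFA loop for \textit{sa1} faults in Python. The chip is modeled by a hypothetical toy netlist $y_i = x_i \oplus k_i$, and laser fault injection is modeled by forcing key register bits to logic 1; the real attack replaces these stand-ins with the hardware steps of Section~\ref{sec:experimental-results}. \begin{verbatim}
SECRET_KEY = [1, 0, 1, 1, 0, 0, 1, 0]   # hidden inside the unlocked chip

def chip_response(pattern, faulty_lines):
    """Output response with logic-1 faults injected on faulty_lines."""
    k = [1 if i in faulty_lines else SECRET_KEY[i]
         for i in range(len(SECRET_KEY))]
    return tuple(x ^ ki for x, ki in zip(pattern, k))

def recover_key(patterns):
    """patterns[i] is the test pattern targeting key line i."""
    K = len(SECRET_KEY)
    recovered = []
    for i in range(K):
        # Faulty instance C_F: logic-1 faults on all |K| key lines.
        r_f = chip_response(patterns[i], set(range(K)))
        # Fault-free instance C_A: the same faults, except on line i.
        r_a = chip_response(patterns[i], set(range(K)) - {i})
        # Matching responses mean the key bit equals the injected
        # fault value (1); otherwise it is the complement (0).
        recovered.append(1 if r_f == r_a else 0)
    return recovered

patterns = [[0] * len(SECRET_KEY) for _ in range(len(SECRET_KEY))]
assert recover_key(patterns) == SECRET_KEY
\end{verbatim} For this toy netlist every input pattern sensitizes each key bit to its own output, so all-zero patterns suffice; for a real locked netlist, the constrained-ATPG patterns of Section~\ref{subsec:test-pattern-gen} play this role.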
Figure~\ref{fig:absract-DFA} shows an abstract representation of our proposed approach. For an input pattern, the output responses are collected for both $C_A$ and $C_F$. The output responses are XORed to find any mismatch. If the two circuits differ in their responses, the XORed output will be 1; otherwise, it will be 0. If we find an input pattern that produces conflicting results for $C_A$ and $C_F$ for a change in only one key bit, the key value can be predicted. The key value is the same as the injected fault value if the XORed output is logic 0; otherwise, the key value is complementary to the injected fault. The proposed attack can be described as follows: \noindent {\tiny $\bullet$} \textbf{\textit{Step-1}}: The first step is to select an input pattern that produces complementary results for the fault-free ($C_A$) and faulty ($C_F$) circuits. The input pattern needs to satisfy the following property -- it must sensitize only one key bit to the primary output(s). In other words, only the response of one key bit is visible at the \textit{PO}, keeping all other key bits at logic 1s (or 0s). If this property is not satisfied, it will be impractical to reach a conclusion regarding the key bit value, since multiple key combinations can produce the same response. \textit{The question now is how we can determine whether such a pattern exists in the entire input space~($\xi$)}. To meet this requirement, our method relies on stuck-at fault~(\textit{saf}) based constrained ATPG to obtain the specific input test patterns (see details in Section~\ref{subsec:test-pattern-gen}). Since the adversary has access to the locked netlist~($C_L$), it can generate test patterns to detect \textit{sa1} or \textit{sa0} at any key line while adding constraints to the other key lines~(logic 1 and logic 0 for \textit{sa1} and \textit{sa0}, respectively). A single fault, either \textit{sa0} or \textit{sa1}, on a key line is sufficient to determine the value of that key bit. Therefore, we have selected \textit{sa1}, and the following sections are explained considering this fault only. This process is iterated over all the key bits to obtain \bm{$|K|$} test patterns. The algorithm to generate the complete test pattern set is provided in Algorithm~\ref{alg:TP-generation} in Section~\ref{subsec:test-pattern-gen}. \noindent {\tiny $\bullet$} \textbf{\textit{Step-2}}: The complete set of generated test patterns is applied to the fault-induced functional circuit~($C_F$). The circuit is obtained by injecting logic 1 faults on the key registers if \textit{sa1} was selected in the previous step; otherwise, logic 0 faults are injected for \textit{sa0}. The responses are collected for later comparison with the fault-free responses. For $C_A$, the test patterns are applied such that they match the fault modifications in the circuit. For example, the test pattern for the first key bit is applied to the circuit when the circuit instance has no fault on the corresponding key register and holds the correct key value, while the remaining key registers are set to logic 1 (for \textit{sa1}) or 0 (for \textit{sa0}). For the next key bit, the $C_A$ instance is created by excluding this selected key bit from any fault while keeping all the other key registers at logic 1 (for \textit{sa1}) or 0 (for \textit{sa0}). This process is repeated for all key bits, and their responses are collected for comparison in the next step.
\noindent {\tiny $\bullet$} \textbf{\textit{Step-3}}: The adversary makes the decision regarding the key value from the observed differences in the output responses of $C_A$ and $C_F$. For any test pattern corresponding to a particular key bit, when the output of both circuits is the same, it implies that the injected fault on the key line in $C_F$ matches the correct key bit; only then will the outputs of both ICs be the same. Otherwise, when $C_F$ and $C_A$ differ in their output responses, we conclude that the correct key bit is complementary to the induced fault. This process is repeated for all key bits. In this manner, the key value can be extracted by comparing the output responses of both circuits for the same primary input pattern. \subsection{Example} \label{subsec:examples} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{2-key_propagation.pdf} \vspace{-20px} \caption{Differential fault attack on a test circuit locked with a 3-bit secret key, where the propagation of $k_0$ is dependent on $k_1$ and vice versa. (a) Test pattern generation considering a \textit{sa1} at key line $k_0$ with constraints $k_1=1$ and $k_2=1$. The test pattern $P_1=[11010X]$ is applied to $C_F$. (b) The same pattern is applied to $C_A$. } \label{fig:fault-propagation} \vspace{-15px} \end{figure*} In this section, we present an example circuit to illustrate the proposed attack. Test pattern generation for detecting stuck-at faults at the key lines is described using the D-Algorithm~\cite{bushnell2004essentials}. A combinational circuit is chosen as an example for simplicity. However, the attack is valid for sequential circuits as well, since a sequential circuit can be transformed into a combinational one in scan mode, where all the internal flip-flops can be reached directly through the scan chains~\cite{bushnell2004essentials}. Figure~\ref{fig:fault-propagation} shows our proposed attack on a test circuit locked with a 3-bit secret key, where the propagation of $k_0$ and $k_1$ is inter-dependent, while the propagation of $k_2$ is independent of the other keys in the circuit. The circuit has six inputs ($|PI|=6$) and two outputs ($|PO|=2$). The attack targets each key bit separately, as mentioned before. First, we aim to find the value of $k_0$. It is necessary to generate a test pattern that detects a \textit{saf} at \textit{$k_0$} with constraints $k_1=1$ and $k_2=1$, which is shown in Figure~\ref{fig:fault-propagation}(a). $\overline{D}$ is assigned after the \textit{sa1} at the key line $k_0$. $D$ is defined as logic 1 for a good circuit and logic 0 for a faulty one~\cite{bushnell2004essentials}. To activate this fault, the ATPG tool will assign a logic 0 at $k_0$. A test pattern $P_1$ needs to be generated to detect a \textit{sa1} fault at $k_0$ with constraints $k_1=1$ and $k_2=1$. As the value of $k_1$ is known during pattern generation, the effect of the \textit{sa1} at $k_0$ will be propagated to the primary output $y_0$. For a fault value $\overline{D}$ at $k_0$, if $[x_0~x_1] = [1~1]$ then $D$ propagates to $n_2$, as $G_1$ is an AND gate. To propagate the value at $n_2$ to the output of $G_3$, its other input ($n_4$) needs to attain logic 1. Since $k_1=1$ due to the injected fault, which is set as a constraint in the ATPG tool, $n_4=1$ for $n_3=0$, which implies $[x_2~x_3] = [0~1]$. Finally, $x_4=0$ propagates the $D$ value at $n_5$ to the primary output $y_0$. The output $y_0$ can be observed as $D$ for the test pattern $P_1=[1~1~0~1~0~X]$.
\textbf{\textit{Note that the output \bm{$y_0$} will have complementary values for \bm{$k_0=0$} and \bm{$k_0=1$} when we apply \bm{$P_1$} at the input.}} This property of the input patterns is used in DFA to recover the secret key. A similar analysis can be performed to detect the fault effect \textit{D} on the other two key lines, $k_1$ and $k_2$. After generating the test pattern $P_1$ for the \textit{sa1} at key line $k_0$, the next step is to perform differential fault analysis between the responses of $C_F$ and $C_A$. The test pattern is applied first to the faulty circuit $C_F$ and its response is captured, as shown in Figure~\ref{fig:fault-propagation}(a). As this pattern detects a \textit{sa1} at line $k_0$, the faulty response is propagated to the output $y_0$. If we inject a logic 1 fault ($D$) using the fault injection method, the value at $y_0$ will be logic 0 ($\overline{D}$). The same test pattern $P_1$ is now applied to the fault-free circuit $C_A$, as shown in Figure~\ref{fig:fault-propagation}(b). The logic value of $y_0$ for $C_A$ will be $\overline{k_0}$. If the value of $y_0$ is the same for both $C_F$ and $C_A$, the value of the key bit ($k_0$) is 1; otherwise, $k_0$ is equal to 0. Similarly, the test pattern for detecting a \textit{sa1} at $k_1$ can be applied to extract its value based on the difference between the two circuit instances. \subsection{Test Pattern Generation} \label{subsec:test-pattern-gen} To generate the test pattern set, an automated process relying on constrained ATPG is performed. The detailed steps are provided in Algorithm~\ref{alg:TP-generation}. Synopsys Design Compiler~\cite{SynopsysDC} is utilized to generate the technology-dependent gate-level netlist and its test protocol from the RTL design. A test protocol is required for specifying signals and initialization requirements associated with design rule checking in Synopsys TetraMAX~\cite{SynopsysTetraMAX}. The automatic test pattern generation tool TetraMAX generates the test patterns for the respective faults, along with constraints, for the locked gate-level netlist. \vspace{-5px} \begin{algorithm}[] \SetAlgoLined \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{~Locked gate-level netlist~($C_L$), test protocol~($T$), and standard cell library} \Output{~Test pattern (\textit{P}) set} Read the locked netlist ($C_L$) \; Read the standard cell library \; Run design rule check with the test protocol generated from Design Compiler \; Determine key size~\bm{$|K|$} from $C_L$ \; \For{$i\gets0$ \KwTo ($|K|-1$) }{ Add a \textit{sa1} fault at key line $k_i$ \; \For{$j\gets0$ \KwTo ($|K|-1$)}{ \If{$i \neq j$}{ Add constraint at $k_j$ to logic 1 \; } } Run ATPG to detect the fault \; Add the test pattern, $P_i$, to the pattern set, \textit{P} \; Remove all faults; Remove all constraints \; } Report the test pattern set, $P$ \; \caption{Test pattern generation for constrained ATPG} \label{alg:TP-generation} \end{algorithm} \vspace{-5px} The inputs to the algorithm are the locked gate-level netlist~($C_L$), the Design Compiler-generated test protocol~($T$), and the standard cell library. The algorithm starts with reading the locked netlist and the standard cell library~(Lines 1-2). The ATPG tool runs the design rule check with the test protocol obtained from the Design Compiler to check for any violation (Line 3). Only upon completion of this step is the fault model environment set up in the tool. The size of the key ($|K|$) is determined by analyzing $C_L$ (Line 4).
The key lines are then selected one by one to generate test patterns (Line 5). A stuck-at-1 fault is added at the $i^{th}$ key line to generate $P_i$ (Line 6). The ATPG constraints (logic 1) are added to the other key lines (Lines 7-10). A test pattern $P_i$ is generated to detect the \textit{sa1} at the $i^{th}$ key line (Lines 12-13) and added to the pattern set, $P$. All the added constraints and faults are removed to generate the $(i+1)^{th}$ test pattern (Line 14). Finally, the algorithm reports all the test patterns, $P$ (Line 16). \subsection{Fault-injection Approach} Fault-injection attacks have been widely used in the past to extract secret assets and bypass security measures in a device~\cite{kim2007faults}. An adversary can use several fault-injection approaches depending on its budget and expertise. The basic fault-injection approaches include voltage, timing, electromagnetic, and laser-based fault-injection methods~\cite{moro2013electromagnetic, alam2019ram, tajik2015laser}. Laser fault injection~(LFI) offers the most precision in both the spatial and temporal domains during the operation of the chip; hence, it is used here for deploying the DFA attack to extract the secret key. A laser with photon energy higher than the silicon bandgap energy is used to induce faults in an integrated circuit~\cite{tajik2015laser}. Therefore, a laser with a wavelength of less than 1.1~$\mu$m is used in our experiment. The LFI attack can be completed in the following steps: \noindent {\tiny $\bullet$} \textbf{Sample Preparation:} Faults can be injected from both the frontside and the backside of the chip. However, the interconnecting metal layers at the front of the die obstruct the optical path of photons. On the other hand, the absence of any metal obstacle or reflective coating at the backside of the die allows an adversary to access the transistors with the laser. In a typical packaged chip (bondwire IC), the backside can be exposed by wet etching. For a flip-chip package, the substrate is typically covered with a metallic lid, which can be easily removed to expose the silicon die. The backside of the silicon can be further polished to 30 -- 100~$\mu$m to reduce the power loss along the laser path due to photon absorption~\cite{champeix2015seu, tajik2015laser}. \noindent {\tiny $\bullet$} \textbf{Target Localization and Fault-injection:} The method of localizing the key registers depends on the capabilities and assets available to an adversary. An adversary, such as an untrusted foundry or an expert reverse engineer, can localize the key locations, i.e., the tamper-proof memory, key registers, and key gates, by analyzing the GDSII or through partial/full-blown reverse engineering. Once the target is localized, an attacker needs to identify the fault-sensitive location for injecting the fault. The most reverse-biased P-N junction in the key register is the potential candidate for fault injection~\cite{champeix2015seu}. Therefore, depending on the logic 1 (logic 0) fault, the laser can be applied to the drain location of the p-type (n-type) MOS transistor for fault injection. Another challenge is that a single laser source can only inject a single fault at a time. Therefore, the faults can be injected in sequential order, where the laser source is moved from one key register to another.
After localizing the targeted key registers, an adversary can automate the sequential fault-injection process with the help of computer vision and image processing~\cite{stellari2018automated, vashistha2018trojan}. Since the key is imperative for the IP operation, it is safe to assume that once secure boot-up is complete, the locking key will remain stored in the key register during the operation of the IP~\cite{rahman2020defense, rahmankey}. Therefore, an adversary can initiate the fault-injection method after the secure boot-up of the chip is complete. An adversary can identify the clock cycles required for secure boot-up by monitoring the power consumption of the circuit. \section{Experimental Results} \label{sec:experimental-results} To evaluate the effectiveness of our proposed attack, we adopted and performed the laser fault injection technique on a Kintex-7 FPGA, which is used as the device-under-test (DUT). Different benchmark circuits are implemented on the Kintex-7 FPGA, where the faults are injected on the key registers. First, the RTL netlists for ISCAS'85 benchmark circuits~\cite{bryan1985iscas} are synthesized using 32nm technology libraries in Synopsys Design Compiler~\cite{SynopsysDC}. The technology-dependent gate-level locked netlist is given to the Synopsys TetraMAX ATPG tool~\cite{SynopsysTetraMAX} to generate the test pattern set~\textit{P} using Algorithm~\ref{alg:TP-generation}. The same netlist is then converted into technology-independent gate-level Verilog code using our in-house Perl script. This is primarily done to ensure that the circuit implemented in the FPGA is exactly the same circuit for which the test pattern set is generated. Otherwise, fault propagation cannot be ensured. Fault injection is performed on the circuit loaded into the FPGA, which yields the faulty and fault-free circuit instances through laser-induced faults on the key registers. Additionally, the implemented design includes a separate universal asynchronous receiver/transmitter~(UART) module, which is used for communication between the computer and the FPGA. The inputs are applied through the RealTerm monitor, and the responses are collected on the same. Once the response for a key bit is obtained, the step is repeated for all the key bits in a benchmark circuit. Finally, the key bits are exposed through the comparison between the corresponding instances of the circuits, as explained in Section~\ref{subsec:DFA}. \begin{figure}[h] \centering \includegraphics[width = 0.9\linewidth]{exp_setup_modified_1.pdf} \vspace{-5px} \caption{The FPGA board placed under the lens for laser fault injection at the target registers.} \vspace{-15px} \label{fig:setup} \end{figure} \subsection{Laser Fault Injection Attack}\label{subsec:laser-fault-injection} The laser fault injection~(LFI) setup is provided by a Hamamatsu PHEMOS-1000 FA microscope, as shown in Figure~\ref{fig:setup}. The equipment consists of a diode pulse laser source (Hamamatsu C9215-06) with a wavelength of 1064 $nm$. Three objective lenses were used during this work: 5x/0.14 NA, 20x/0.4 NA, and 50x/0.76 NA. The 50x lens is equipped with a correction ring for silicon substrate thickness. The laser diode has two operation modes -- a) low power (200 $mW$) pulse mode, and b) high power (800 $mW$) impulse mode. The high power impulse mode is used for laser fault injection. The laser power can be adjusted from 2$\%$ to 100$\%$ in 0.5$\%$ steps. Photon emission analysis~\cite{rahman2019backside} is used to localize the implemented locked circuitry in the DUT.
Thereafter, the DUT is placed under the laser source for LFI. A trigger signal is fed to the PHEMOS-1000 to synchronize the LFI with the DUT operation. Once the device reaches a stable state after power-on, the laser is triggered on the target key registers. After the fault injection, we have to guarantee that the device is still functioning as expected and has not entered a completely dysfunctional state. The laser triggering timing can be checked with a digital oscilloscope for greater precision. We have performed and verified our results for different benchmark circuits implemented with random logic locking (RLL)~\cite{roy2010ending}, strong interference-based logic locking~(SLL)~\cite{rajendran2012security}, and fault-based stripped functionality logic locking~(SFLL-Fault)~\cite{sengupta2018customized}. For RLL, we selected locked instances of the c432 and c2670 benchmark circuits with a 32-bit key and a 128-bit key, respectively, obtained from Trust-Hub~\cite{salmani2018trust}. For SLL, we selected c1355 and c1908 locked benchmarks with 128-bit keys, also obtained from Trust-Hub. We also implemented the attack on a circuit locked with a combination of the SFLL-Fault~(40-bit key) and RLL~(40-bit key) techniques. We successfully recovered the entire key for all the circuits, which proves the effectiveness of our proposed ATPG-guided fault injection attack. \section{Conclusion}\label{sec:conclusion} In this paper, we have presented a novel ATPG-guided stuck-at fault based attack to undermine the security of any logic locking technique. The attack relies on injecting faults on the key lines through hardware to perform differential fault analysis between the faulty and fault-free chips for the ATPG-generated test patterns. We have shown that at most \bm{$|K|$} test patterns are required to recover the entire secret key of size \bm{$|K|$}. We have demonstrated the attack on circuits implemented in an FPGA using the laser fault injection method. The results depict the success of the proposed attack on different logic locking techniques, irrespective of their SAT resiliency. \section*{Acknowledgment} The authors would like to thank Dr. Navid Asadizanjani, University of Florida, for helping with the laser fault injection experimentation. This work was supported in part by the National Science Foundation under grant number CNS-1755733 and the Air Force Research Laboratory under grant AF-FA8650-19-1-1707. \bibliographystyle{IEEEtran}
\section{Introduction} \noindent Fix a natural number $r$. The moduli space ${\mathcal{M}}^f$ of rank $r$ framed sheaves on the plane is an algebro-geometric incarnation of the instanton moduli space that gives rise to supersymmetric ${\mathcal{N}}=2$ $U(r)$--gauge theory on ${\mathbb{R}}^4$ in the $\Omega$--background. In \cite{Nek}, the partition function of this theory was expressed in terms of equivariant integrals over ${\mathcal{M}}^f$. The present note is concerned with the deformation from cohomology to $K$--theory (over ${\mathbb{Q}}$), which corresponds to supersymmetric ${\mathcal{N}}=1$ $U(r)$--gauge theory on ${\mathbb{R}}^4 \times S^1$. In this setting, \cite{CNO} considered the universal sheaf: $$ \xymatrix{{\mathcal{U}} \ar@{.>}[d] \\ {\mathcal{M}}^f \times {\mathbb{A}}^2} $$ and its exterior powers ${\mathcal{U}}_1 \otimes ... \otimes {\mathcal{U}}_k$ on ${\mathcal{M}}^f \times {\mathbb{A}}^{2k}$, where ${\mathcal{U}}_i$ denotes the pull-back of ${\mathcal{U}}$ from the $i$--th factor of ${\mathbb{A}}^{2k} = ({\mathbb{A}}^2)^k$. These exterior powers yield operators: \begin{equation} \label{eqn:operator def} K_T({\mathcal{M}}^f) \xrightarrow{V} \Lambda_{\BA^2} = \bigoplus_{k=0}^\infty K_{{\mathbb{C}}^* \times {\mathbb{C}}^* \times {\mathfrak{S}}(k)} ({\mathbb{A}}^{2k}) \end{equation} (the action $T \curvearrowright {\mathcal{M}}^f$ is explained in Subsection \ref{sub:torus}, the action ${\mathbb{C}}^* \times {\mathbb{C}}^* \curvearrowright {\mathbb{A}}^2$ is the usual scaling, and the symmetric group ${\mathfrak{S}}(k)$ acts on ${\mathbb{A}}^{2k} = ({\mathbb{A}}^2)^k$ by permutations) given by: \begin{equation} \label{eqn:operator} V = \bigoplus_{k=0}^\infty \rho_{k*} \Big({\mathcal{U}}_1 \otimes ... \otimes {\mathcal{U}}_k \otimes \pi_{k}^* \Big) \end{equation} The maps in \eqref{eqn:operator} are the natural projection maps: \begin{equation} \label{eqn:diag 0} \xymatrix{ & {\mathcal{M}}^f \times {\mathbb{A}}^{2k} \ar[ld]_-{\pi_{k}} \ar[rd]^-{\rho_{k}} & \\ {\mathcal{M}}^f & & {\mathbb{A}}^{2k}} \end{equation} As shown in \emph{loc. cit.}, the operator $W(m) : K_T({\mathcal{M}}^f) \rightarrow K_T({\mathcal{M}}^f)$ that encodes the contribution of bifundamental matter to the gauge theory at hand factors as: \begin{equation} \label{eqn:ext} W(m) = V^* \cdot m^{\deg} \cdot V \end{equation} (up to a renormalization that will not concern us in the present paper) where $\deg :$ $\Lambda_{\BA^2} \rightarrow \Lambda_{\BA^2}$ is the operator which scales the $k$--th direct summand in \eqref{eqn:operator def} by $k$. \\ \noindent In \cite{CNO}, from whom we borrowed both the main construction and the title of the present paper, the authors compute the $r=1$ case of $V$ as follows: the moduli space ${\mathcal{M}}^f|_{r = 1}$ is isomorphic to the Hilbert scheme of points on ${\mathbb{A}}^2$, and its $K$--theory is naturally identified with $\Lambda_{\BA^2}$ (see \cite{BKR, H}). Then \cite{CNO} computes $V$ as an explicit exponential in the usual bosonic realization of $\Lambda_{\BA^2}$, times the famous $\nabla$ operator (see \cite{BG}). The resulting formula for $V$ yields a geometric incarnation of a combinatorial identity from \cite{GHT}, and implies the formula for $\Phi_m$ computed in \cite{CO}. \\ \noindent In the present paper, we take a somewhat different route toward computing the operator $V$, for general $r$. 
We recall the actions of the \underline{elliptic Hall algebra} ${\CA}$ on the domain and target of the map \eqref{eqn:operator def}, which were studied in \cite{FHHSY} and \cite{FT, SV 1}: \begin{equation} \label{eqn:action} \Lambda_{\BA^2} \ \stackrel{\Psi}\curvearrowleft \ {\CA} \ \stackrel{\Phi}\curvearrowright \ K_T({\mathcal{M}}^f) \end{equation} (we refer to \cite{K-theory} for an overview of our viewpoint on these actions). We will recall the definition of the elliptic Hall algebra ${\CA}$ in Subsection \ref{sub:enter a}, and in Subsection \ref{sub:more a} we will introduce the subalgebra: \begin{equation} \label{eqn:half 0} {\CA^{(r)}} \subset {\CA} \end{equation} Intuitively, ${\CA^{(r)}}$ is half of ${\CA}$ with respect to a certain triangular decomposition. Consider the following modification of the diagram \eqref{eqn:diag 0}: $$ \xymatrix{{\mathcal{M}}^f \times \{\text{origin}\} \ar@{^{(}->}[r]^-\iota \ar@{=}[d] & {\mathcal{M}}^f \times {\mathbb{A}}^{2k} \ar[ld]_-{\pi_{k}} \ar[rd]^-{\rho_{k}} & \\ {\mathcal{M}}^f & & {\mathbb{A}}^{2k}} $$ from which it is easy to see that: \begin{equation} \label{eqn:gamma} \Gamma_{\BA^2} = \bigoplus_{k=0}^\infty \iota^*\Big({\mathcal{U}}_1 \otimes ... \otimes {\mathcal{U}}_k \otimes \rho_{k}^* \Big)^{{\mathfrak{S}}(k)} : \Lambda_{\BA^2} \longrightarrow K_T({\mathcal{M}}^f) \end{equation} is given by $\Gamma_{\BA^2} = V^* \circ [(1-q_1)(1-q_2)]^{\deg}$. Thus, we will compute $\Gamma_{\BA^2}$ instead of $V$. \\ \begin{theorem} \label{thm:main} For any $r$, the operator $\Gamma_{\BA^2}$ commutes with the ${\CA^{(r)}}$--action: \begin{equation} \label{eqn:main} \xymatrix{\Lambda_{\BA^2} \ar[d]_-{\Psi(a)} \ar[r]^-\Gamma_{\BA^2} & K_T({\mathcal{M}}^f) \ar[d]^-{\Phi(a)} \\ \Lambda_{\BA^2} \ar[r]^-\Gamma_{\BA^2} & K_T({\mathcal{M}}^f)} \end{equation} for all $a \in {\CA^{(r)}}$. Moreover, after localization to $\emph{Frac}(\emph{Rep}_T)$, the operator $\Gamma_{\BA^2}$ is uniquely determined by the commutativity of diagram \eqref{eqn:main}. \\ \end{theorem} \noindent The point of view in Theorem \ref{thm:main}, namely of determining an operator through its commutation with an algebra action rather than through explicit formulas, was used in \cite{W, AGT} to compute the operator \eqref{eqn:ext}. However, the strength of this approach is that it allows us to generalize from the local situation of the moduli spaces ${\mathcal{M}}^f$ of framed sheaves on the affine plane to the global case of moduli spaces of stable sheaves ${\mathcal{M}}^s$ on a general smooth projective surface $S$. We will review the necessary setup in Section \ref{sec:surfaces} (including Assumptions A and S, subject to which we make all the following claims), but the idea is to consider the operator: \begin{equation} \label{eqn:operator def surface} \Lambda_S = \bigoplus_{k=0}^\infty K_{{\mathfrak{S}}(k)} (S^k) \xrightarrow{\Gamma_S} K({\mathcal{M}}^s) \end{equation} explicitly given by: \begin{equation} \label{eqn:operator surface} \Gamma_S = \bigoplus_{k=0}^\infty \pi_{k*} \Big({\mathcal{U}}_1 \otimes ... \otimes {\mathcal{U}}_k \otimes \rho_{k}^* \Big)^{{\mathfrak{S}}(k)} \end{equation} where the notions in the right-hand side of \eqref{eqn:operator surface} are defined just like their counterparts in \eqref{eqn:operator} (Assumption A ensures that there exists a universal sheaf ${\mathcal{U}}$ on ${\mathcal{M}}^s \times S$, and we fix such a sheaf throughout the present paper). 
As for the analogues of the actions \eqref{eqn:action} for a general surface, that of the action $\Phi$ was worked out in \cite{Hecke}: \begin{equation} \label{eqn:phi intro} {\CA} \xrightarrow{\Phi} \textrm{Hom}(K({\mathcal{M}}^s) , K({\mathcal{M}}^s \times S)) \end{equation} and provided a generalization of the classical Heisenberg algebra action on the cohomology groups of Hilbert schemes (\cite{G, Nak}). We will also provide an analogue: \begin{equation} \label{eqn:psi intro} {\CA} \xrightarrow{\Psi} \textrm{Hom}(\Lambda_S , \Lambda_S \times K(S)) \end{equation} of the action $\Psi$ from \eqref{eqn:action}, where by a slight abuse of notation, we write: $$ \Lambda_S \times K(S) = \bigoplus_{k=0}^\infty K_{{\mathfrak{S}}(k)} (S^k \times S) $$ (the symmetric group ${\mathfrak{S}}(k)$ only acts on $S^k$). Then our main result for a smooth surface $S$, subject to the assumptions in Subsection \ref{sub:assumption}, is the following: \\ \begin{theorem} \label{thm:surface} For any $r$, the operator \eqref{eqn:operator surface} commutes with the ${\CA^{(r)}}$--action: \begin{equation} \label{eqn:main surface} \xymatrix{\Lambda_S \ar[d]_-{\Psi(a)} \ar[r]^-{\Gamma_S} & K({\mathcal{M}}^s) \ar[d]^-{\Phi(a)} \\ \Lambda_S \times K(S) \ar[r]^-{\Gamma_S} & K({\mathcal{M}}^s \times S)} \end{equation} for all $a \in {\CA^{(r)}}$. \\ \end{theorem} \noindent As an ${\CA^{(r)}}$--module, $K_T({\mathcal{M}}^f)$ is generated by a single element, namely the fundamental class of the component parametrizing framed sheaves with $c_2 = 0$ (this plays an important role in the uniqueness statement of Theorem \ref{thm:main}). Meanwhile, we will show in Proposition \ref{prop:generators} that $K({\mathcal{M}}^s)$ has countably many generators, namely the fundamental classes $\boldsymbol{1}_d$ of the components ${\mathcal{M}}^s_d \subset {\mathcal{M}}^s$ parametrizing stable sheaves with $c_2 = d$. In our language, we have $\Gamma_S(1) = \prod_{d \in {\mathbb{Z}}} \boldsymbol{1}_d$, where $1 \in K_{{\mathfrak{S}}(0)}(S^0) \cong {\mathbb{Q}}$. \\ \noindent I would like to thank Andrei Okounkov for teaching me much of the beautiful mathematics contained in this note. I gratefully acknowledge NSF grants DMS--1760264 and DMS--1845034, as well as support from the Alfred P. Sloan Foundation. \\ \section{The case of the affine plane} \label{sec:affine} \subsection{} \label{sub:frobenius} Even before dealing with ${\mathbb{A}}^2$, let us discuss the situation of ${\mathfrak{S}}(k)$--equivariant coherent sheaves on a point $\circ$, which is just another word for finite-dimensional ${\mathfrak{S}}(k)$--modules. We have the well-known Frobenius character isomorphism: \begin{equation} \label{eqn:degree k} K_{{\mathfrak{S}}(k)} (\circ) \cong \Big\{ \text{degree }k \text{ symmetric polynomials in } x_1,x_2,... \Big\} \end{equation} given as a sum over partitions $\lambda = (\lambda_1 \geq ... \geq \lambda_t)$ of size $k$ by the formula: \begin{equation} \label{eqn:frobenius} M \mapsto \sum_{|\lambda| = k} \frac {p_\lambda}{z_\lambda} \cdot \text{Tr}_M(\omega_\lambda) \end{equation} where we let $\omega_\lambda \in {\mathfrak{S}}(k)$ be any permutation of cycle type $\lambda$, and define: \begin{equation} \label{eqn:basis} p_{\lambda} = p_{\lambda_1} \dots p_{\lambda_t} \end{equation} with $p_n = x_1^n+x_2^n+...$ being the \underline{power sum functions}. In \eqref{eqn:frobenius}, we let $z_\lambda = \lambda! \prod_i \lambda_i$, where $\lambda!$ is the product of factorials of the number of times each integer appears in $\lambda$.
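For instance (a standard check of \eqref{eqn:frobenius}, spelled out here for concreteness): take $k = 2$. The partitions of $2$ are $(1,1)$ and $(2)$, with $z_{(1,1)} = z_{(2)} = 2$, and the trace of a transposition equals $1$ on the trivial representation and $-1$ on the sign representation, so: $$ \text{triv} \ \mapsto \ \frac {p_1^2+p_2}2 = h_2 \qquad \text{and} \qquad \text{sign} \ \mapsto \ \frac {p_1^2-p_2}2 = e_2 $$ which matches the description of the complete and elementary symmetric functions recalled in Subsection \ref{sub:plethysm}. \\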
It is useful to take the direct sum of \eqref{eqn:degree k} over all $k \in {\mathbb{N}} \sqcup 0$, and obtain: \begin{equation} \label{eqn:degree any} \Lambda:= \bigoplus_{k=0}^\infty K_{{\mathfrak{S}}(k)} (\circ) \cong \Big\{\text{symmetric polynomials in } x_1,x_2,... \Big\} \end{equation} This is beneficial because $\Lambda$ is manifestly a commutative ring, in fact the polynomial ring generated by $p_1,p_2,...$. In terms of representations of ${\mathfrak{S}}(k)$, the operation of multiplication by power sum functions corresponds to parabolic induction: \begin{equation} \label{eqn:induction} K_{{\mathfrak{S}}(l)} (\circ) \xrightarrow{p_k} K_{{\mathfrak{S}}(k+l)} (\circ), \qquad M \mapsto \text{Ind}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} (p_k \boxtimes M) \end{equation} However, the ring $\Lambda$ is also endowed with a symmetric pairing, determined by $\langle p_\lambda, p_\mu \rangle = \delta_\lambda^\mu z_\lambda$, or, in the language of finite-dimensional ${\mathfrak{S}}(k)$--modules: \begin{equation} \label{eqn:pairing} \langle M, M' \rangle = \text{dim } \text{Hom}_{{\mathfrak{S}}(k)}(M,M') \end{equation} With this in mind, Frobenius reciprocity states that the adjoints of the operators \eqref{eqn:induction} are the parabolic restriction operators: \begin{equation} \label{eqn:restriction} K_{{\mathfrak{S}}(k+l)} (\circ) \xrightarrow{p^\dagger_k} K_{{\mathfrak{S}}(l)} (\circ), \qquad M \mapsto \text{Hom}_{{\mathfrak{S}}(k)} \left(p_k, \text{Res}^{{\mathfrak{S}}(k+l)}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)} (M) \right) \end{equation} A reformulation of the main result of \cite{Ges} is the following: \\ \begin{theorem} \label{thm:classic} The operators $p_k, p_k^\dagger : \Lambda \rightarrow \Lambda$ satisfy the relations: $$ [p_k^\dagger, p_l] = k \delta_k^l \cdot \emph{Id} $$ for all $k,l \in {\mathbb{N}}$, as well as the obvious relations $[p_k,p_l] = [p_k^\dagger, p_l^\dagger] = 0$. \\ \end{theorem} \subsection{} \label{sub:a2} We will follow the presentation of \cite{CNO} in the present Subsection, and we will recycle the notation used in the previous Subsection. We will consider ${\mathbb{A}}^2$ with the standard action of ${\mathbb{C}}^* \times {\mathbb{C}}^*$ that dilates the coordinate axes, and then the induced action of ${\mathbb{C}}^* \times {\mathbb{C}}^*$ on ${\mathbb{A}}^{2k} = ({\mathbb{A}}^2)^k$ commutes with the action of ${\mathfrak{S}}(k)$ that permutes the factors. Then we will consider the analogue of \eqref{eqn:degree any}: $$ \Lambda_{\BA^2} = \bigoplus_{k=0}^\infty K_{{\mathbb{C}}^* \times {\mathbb{C}}^* \times {\mathfrak{S}}(k)} ({\mathbb{A}}^{2k}) $$ If we let $q_1$ and $q_2$ denote the elementary characters of ${\mathbb{C}}^* \times {\mathbb{C}}^*$, then the inclusion of the origin $\circ \hookrightarrow {\mathbb{A}}^2$ induces a map: $$ \Lambda \otimes_{{\mathbb{Q}}} {\mathbb{Q}}[q_1^{\pm 1}, q_2^{\pm 1}] \rightarrow \Lambda_{\BA^2} $$ which sends a finite-dimensional ${\mathfrak{S}}(k)$--module to the same module supported at the origin $\circ^k \hookrightarrow {\mathbb{A}}^{2k}$. 
With this in mind, we may consider the following elements: $$ \left[ p_k \otimes {\mathcal{O}}_{\circ^k} \right] \in K_{{\mathbb{C}}^* \times {\mathbb{C}}^* \times {\mathfrak{S}}(k)} ({\mathbb{A}}^{2k}) $$ (the skyscraper sheaf at the origin tensored with the ${\mathfrak{S}}(k)$--character $p_k$) which induce the following analogues of the operators \eqref{eqn:induction} and \eqref{eqn:restriction}: \begin{equation} \label{eqn:ind res} K_{{\mathbb{C}}^* \times {\mathbb{C}}^* \times {\mathfrak{S}}(l)} ({\mathbb{A}}^{2l}) \xrightleftharpoons[p_k^\dagger]{p_k} K_{{\mathbb{C}}^* \times {\mathbb{C}}^* \times {\mathfrak{S}}(k+l)} ({\mathbb{A}}^{2(k+l)}) \end{equation} given by: \begin{align} &p_k(M) = \text{Ind}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} \Big( \underbrace{\left[ p_k \otimes {\mathcal{O}}_{\circ^k} \right]}_{\text{sheaf on }{\mathbb{A}}^{2k}} \ \boxtimes \underbrace{M}_{\text{sheaf on }{\mathbb{A}}^{2l}} \Big) \label{eqn:ind} \\ &p_k^\dagger(M) = \text{Hom}_{{\mathfrak{S}}(k)} \left(p_k, \text{Res}^{{\mathfrak{S}}(k+l)}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)} (M) \Big|_{\circ^k \times {\mathbb{A}}^{2l} }\right) \label{eqn:res} \end{align} It is easy to see that the operators \eqref{eqn:ind} and \eqref{eqn:res} are adjoint with respect to the pairing on $\Lambda_{\BA^2}$ given by formula \eqref{eqn:pairing}, with the caveat that the symbol ``dim Hom" must be understood to mean the ${\mathbb{C}}^* \times {\mathbb{C}}^*$ character of the space of ${\mathfrak{S}}(k)$--equivariant global homomorphisms over ${\mathbb{A}}^{2k}$. With respect to this pairing, we have: \begin{equation} \label{eqn:pairing plane} \langle p_\lambda, p_\mu \rangle_{{\mathbb{A}}^2} = \delta_\lambda^\mu z_\lambda \prod_{i=1}^t \left[ (1-q_1^{\lambda_i})(1-q_2^{\lambda_i}) \right] \end{equation} for any $\lambda = (\lambda_1 \geq ... \geq \lambda_t)$. The natural analogue of Theorem \ref{thm:classic} reads: \\ \begin{proposition} \label{prop:classic plane} The operators $p_k, p_k^\dagger : \Lambda_{\BA^2} \rightarrow \Lambda_{\BA^2}$ satisfy the relations: \begin{equation} \label{eqn:classic heis} [p_k^\dagger, p_l] = k \delta_k^l (1-q_1^{k})(1-q_2^{k})\cdot \emph{Id} \end{equation} for all $k,l \in {\mathbb{N}}$, as well as the obvious relations $[p_k,p_l] = [p_k^\dagger, p_l^\dagger] = 0$. \\ \end{proposition} \begin{proof} See \cite{CNO}, although the proof is analogous to that of Proposition \ref{prop:classic surface}. \end{proof} \subsection{} \label{sub:plethysm} Two very important classes of symmetric polynomials are the \underline{elementary} and \underline{complete} ones, whose generating series are given by: \begin{align*} &\sum_{k=0}^\infty \frac {h_k}{z^k} = \exp\left[\sum_{k=1}^\infty \frac {p_k}{k z^k} \right] = \prod_{i=1}^\infty \left(1-\frac {x_i}{z} \right)^{-1} \\ &\sum_{k=0}^\infty \frac {e_k}{(-z)^k} = \exp\left[- \sum_{k=1}^\infty \frac {p_k}{k z^k} \right] = \prod_{i=1}^\infty \left(1-\frac {x_i}{z} \right) \end{align*} As elements of $\Lambda$ and $\Lambda_{\BA^2}$ (i.e. as ${\mathfrak{S}}(k)$--modules or ${\mathfrak{S}}(k)$--modules supported at the origin of ${\mathbb{A}}^{2k}$, respectively), the symmetric polynomials $h_k$ and $e_k$ correspond to the trivial and sign one-dimensional representation, respectively. Let: $$ h_k^\dagger, e_k^\dagger : \Lambda_{\BA^2} \rightarrow \Lambda_{\BA^2} $$ denote the adjoints, with respect to the pairing \eqref{eqn:pairing plane}, of the operators of multiplication by $h_k$ and $e_k$ (respectively).
Clearly, we have: \begin{align*} &\sum_{k=0}^\infty h_k^\dagger z^k = \exp\left[\sum_{k=1}^\infty \frac {p^\dagger_k z^k}k \right] \\ &\sum_{k=0}^\infty e_k^\dagger (-z)^k = \exp\left[-\sum_{k=1}^\infty \frac {p^\dagger_k z^k}k \right] \end{align*} The following computations are simple consequences of \eqref{eqn:classic heis}: \begin{align} &\exp\left[-\sum_{k=1}^\infty \frac {p^\dagger_k z^k}k \right] \exp\left[\sum_{k=1}^\infty \frac {p_kw^{-k}}{k} \right] = \exp\left[\sum_{k=1}^\infty \frac {p_kw^{-k}}{k} \right] \exp\left[-\sum_{k=1}^\infty \frac {p^\dagger_k z^k}k \right] \zeta \left( \frac zw \right)^{-1} \label{eqn:computation 1} \\ &\exp\left[\sum_{k=1}^\infty \frac {p^\dagger_k z^k}k \right] \exp\left[- \sum_{k=1}^\infty \frac {p_k w^{-k}}{kq^k} \right] = \exp\left[- \sum_{k=1}^\infty \frac {p_k w^{-k}}{k q^k} \right] \exp\left[\sum_{k=1}^\infty \frac {p^\dagger_k z^k}k \right] \zeta \left( \frac wz \right)^{-1} \label{eqn:computation 2} \end{align} where we let $q = q_1q_2$ and write: \begin{equation} \label{eqn:zeta} \zeta(x) = \frac {(1-xq_1)(1-xq_2)}{(1-x)(1-xq)} = \exp \left[ \sum_{k=1}^\infty \frac {x^k}k \cdot (1-q_1^k)(1-q_2^k) \right] \end{equation} Note that $\zeta(x) = \zeta\left(\frac 1{xq} \right)$. \\ \subsection{} \label{sub:enter a} Consider the following half planes in ${\mathbb{Z}}^2$: $$ {\mathbb{Z}}_+^2 = \{(n,m) \in {\mathbb{Z}}^2 \text{ s.t. } n>0 \text{ or } n=0,m>0\} $$ $$ {\mathbb{Z}}_-^2 = \{(n,m) \in {\mathbb{Z}}^2 \text{ s.t. } n<0 \text{ or } n=0,m<0\} $$ The following is a model for the Hall algebra of the category of coherent sheaves over an elliptic curve, as defined in \cite{BS} (although we follow the normalization of \cite{W}). \\ \begin{definition} \label{def:eha} Consider the algebra: $$ {\mathcal{A}}_{\emph{loc}} = {\mathbb{Q}}(q_1,q_2) \Big \langle P_{n,m}, c_1^{\pm 1}, c_2^{\pm 1} \Big \rangle_{(n,m) \in {\mathbb{Z}}^2 \backslash (0,0)} \Big /^{c_1, c_2 \text{ central, and}}_{\text{relations \eqref{eqn:relation 1}, \eqref{eqn:relation 2}}} $$ where we impose the following relations. The first of these is: \begin{equation} \label{eqn:relation 1} [P_{n,m}, P_{n',m'}] = \delta_{n+n'}^0 \frac {d(1-q_1^d)(1-q_2^d)}{q^{-d} - 1} \left(1 - c_1^{-n} c_2^{-m} \right) \end{equation} if $nm'=n'm$ and $(n,m) \in {\mathbb{Z}}_+^2$, with $d = \gcd(m,n)$. The second relation states that whenever $nm'>n'm$ and the triangle with vertices $(0,0), (n,m), (n+n',m+m')$ contains no lattice points inside nor on one of the edges, then we have the relation: \begin{equation} \label{eqn:relation 2} [P_{n,m}, P_{n',m'}] = \frac {(1-q_1^d)(1-q_2^d)}{q^{-1} - 1} Q_{n+n',m+m'} \end{equation} $$ \cdot \ \begin{cases} c_1^n c_2^m & \text{if } (n,m) \in {\mathbb{Z}}_-^2, (n',m') \in {\mathbb{Z}}_+^2, (n+n',m+m') \in {\mathbb{Z}}_+^2 \\ c_1^{-n'} c_2^{-m'} & \text{if } (n,m) \in {\mathbb{Z}}_-^2, (n',m') \in {\mathbb{Z}}_+^2, (n+n',m+m') \in {\mathbb{Z}}_-^2 \\ 1 & \text{otherwise} \end{cases} $$ where $d = \gcd(n,m)\gcd(n',m')$ (by the assumption on the triangle, we note that at most one of the pairs $(n,m), (n',m'), (n+n',m+m')$ can fail to be coprime), and: \begin{equation} \label{eqn:qmn} \sum_{k=0}^{\infty} Q_{ka,kb} \cdot x^k = \exp \left[ \sum_{k=1}^\infty \frac {P_{ka,kb}}k \cdot x^k \left(1 - q^{-k} \right) \right] \end{equation} for all coprime integers $a,b$. Note that $Q_{0,0} = 1$. 
\\ \end{definition} \subsection{} \label{sub:more a} Let us consider $H_{n,m} \in {\mathcal{A}}_{\text{loc}}$ defined for all coprime integers $a,b$ by: \begin{equation} \label{eqn:emn} \sum_{k=0}^{\infty} H_{ka,kb} \cdot x^k = \exp \left[\sum_{k=1}^\infty \frac {P_{ka,kb}}k \cdot x^k \right] \end{equation} In other words, for every fixed pair of coprime integers $a,b$, the elements $H_{ka,kb}$ will be to complete symmetric functions as the elements $P_{ka,kb}$ are to power-sum functions. In the present paper, we will work with the subalgebra: \begin{equation} \label{eqn:def a} {\mathcal{A}}_{\text{loc}} \supset {\mathcal{A}} = {\mathbb{Z}}[q_1^{\pm 1}, q_2^{\pm 1}] \Big \langle H_{n,m}, c_1^{\pm 1}, c_2^{\pm 1} \Big \rangle_{(n,m) \in {\mathbb{Z}}^2 \backslash (0,0)} \end{equation} We note a slight abuse in \eqref{eqn:def a}: the notation implies that the structure constants of products of $H_{n,m}$'s lie in ${\mathbb{Z}}[q_1^{\pm 1}, q_2^{\pm 1}]$, but this is not quite true. The reason is the presence of denominators in \eqref{eqn:relation 1} and \eqref{eqn:relation 2}. However, in \eqref{eqn:relation 2}, the denominator is canceled by $Q_{n,m}$, which is by definition a multiple of $1-q^{-1}$. In \eqref{eqn:relation 1}, the denominator will be canceled by the numerator in all representations in which: $$ (c_1,c_2) \ \mapsto \ (q^r,1) \ \text{or} \ (1,q^{-1}) $$ which will be the case throughout the present paper. However, for all $r \in {\mathbb{N}}$, the following subalgebra of ${\mathcal{A}}$ is unambiguously well-defined over ${\mathbb{Z}}[q_1^{\pm 1}, q_2^{\pm 1}]$: $$ {\CA^{(r)}} = {\mathbb{Z}}[q_1^{\pm 1}, q_2^{\pm 1}] \Big \langle H_{n,m} \Big \rangle_{m > - nr} $$ The subalgebra ${\CA^{(r)}}$ is half of ${\CA}$ with respect to a triangular decomposition. \\ \subsection{} \label{sub:action functions} The following is obtained by combining the action of \cite{FHHSY} with the explicit formulas obtained in \cite{Shuf} (see Theorem 2.15 of \cite{Ops} for the explicit formulas, although the normalization of \emph{loc. cit. } is somewhat different from that of the present paper). \\ \begin{theorem} \label{thm:action functions} There is an action ${\mathcal{A}} \stackrel{\Psi}\curvearrowright \Lambda_{\BA^2}$ given by: \begin{equation} \label{eqn:action up 1} c_1 \mapsto 1, \qquad c_2 \mapsto q^{-1}, \end{equation} \begin{equation} \label{eqn:action up 2} P_{0,m} \ \mapsto p_m , \qquad P_{0,-m} \mapsto -q^m \cdot p_m^\dagger \end{equation} while for all $n > 0$ and $m \in {\mathbb{Z}}$, we have: \begin{equation} \label{eqn:action up 4} H_{n,m} \mapsto \int_{|z_1| \gg ... \gg |z_n|} \frac {\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} \end{equation} $$ \exp \left[\sum_{k=1}^\infty \frac {z_1^{-k}+...+z_n^{-k}}k \cdot p_k \right] \exp \left[-\sum_{k=1}^\infty \frac {z_1^k+...+z_n^k}k \cdot p_k^\dagger \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} $$ and: \begin{equation} \label{eqn:action up 5} H_{-n,m} \mapsto \int_{|z_1| \ll ... 
\ll |z_n|} \frac {(-q)^n \prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} \end{equation} $$ \exp \left[-\sum_{k=1}^\infty \frac {z_1^{-k}+...+z_n^{-k}}{k \cdot q^k} \cdot p_k \right] \exp \left[\sum_{k=1}^\infty \frac {z_1^k+...+z_n^k}k \cdot p_k^\dagger \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} $$ The integrals go over concentric circles, contained inside each other in the order depicted in the subscript of each integral, and far away from each other relative to the size of the parameters $q_1$ and $q_2$ (which are assumed to be complex numbers). \\ \end{theorem} \begin{proof} We will sketch the proof, in order to prepare for the analogous argument in Theorem \ref{thm:action functions surface}. There is a well-known triangular decomposition: $$ {\mathcal{A}} = {\mathcal{A}}^+ \otimes {\mathcal{A}}^0 \otimes {\mathcal{A}}^- $$ where ${\mathcal{A}}^\pm$ are the subalgebras of ${\mathcal{A}}$ generated by $H_{\pm n,m}$ for $(n,m) \in {\mathbb{N}} \times {\mathbb{Z}}$, and ${\mathcal{A}}^0$ is generated by $P_{0,\pm m}$ and the central elements $c_1,c_2$. The main result of \cite{S} implies that, in order to show that formulas \eqref{eqn:action up 1}--\eqref{eqn:action up 5} yield an action ${\mathcal{A}} \curvearrowright \Lambda_{\BA^2}$, one needs to prove the following two things: \\ \begin{itemize}[leftmargin=*] \item Formulas \eqref{eqn:action up 4} and \eqref{eqn:action up 5} induce actions of the subalgebras ${\mathcal{A}}^+$ and ${\mathcal{A}}^-$ on $\Lambda_{\BA^2}$. \\ \item The particular cases of \eqref{eqn:relation 1} and \eqref{eqn:relation 2} when $n,n' \in \{-1,0,1\}$ hold, i.e.: \begin{align} &\Big[\Psi(P_{0,\pm m}), \Psi(P_{0,\pm m'})\Big] = 0 \label{eqn:need 1} \\ &\Big[ \Psi(P_{0,m}), \Psi(P_{0,-m'}) \Big] = \delta_{m'}^m m (1-q_1^m)(1-q_2^m)q^m \label{eqn:need 2} \\ &\Big[ \Psi(H_{\pm 1,k}), \Psi(P_{0,\pm m}) \Big]= - (1-q_1^m)(1-q_2^m) \cdot \Psi(H_{\pm 1,k\pm m}) \label{eqn:need 3} \\ &\Big[ \Psi(H_{\pm 1,k}), \Psi(P_{0,\mp m}) \Big]= (1-q_1^m)(1-q_2^m)q^{m\delta_\pm^+} \cdot \Psi(H_{\pm 1,k\mp m}) \label{eqn:need 4} \\ &\Big[ \Psi(H_{1,k}), \Psi(H_{-1,k'}) \Big] = \frac {(1-q_1)(1-q_2)}{q^{-1}-1} \begin{cases} \Psi(A_{k+k'}) &\text{if } k+k' > 0 \\ 1-q^k &\text{if } k+k' = 0 \\ - q^k \Psi(B_{-k-k'}) &\text{if } k+k' < 0 \end{cases} \label{eqn:need 5} \end{align} for all $m,m' \in {\mathbb{N}}$ and $k,k' \in {\mathbb{Z}}$, where in the last expression, we write: \begin{align*} &\sum_{m=0}^\infty \frac {A_m}{x^m} = \exp \left[ \sum_{m=1}^\infty \frac {p_m}{m x^m}(1-q^{-m}) \right] \\ &\sum_{m=0}^\infty \frac {B_m}{x^m} = \exp \left[ \sum_{m=1}^\infty \frac {p^\dagger_m}{m x^m}(1-q^m) \right] \end{align*} \end{itemize} \noindent The second bullet is a consequence of straightforward computations using Proposition \ref{prop:classic plane}, which we leave as exercises to the interested reader. As for the first bullet, we note that formula \eqref{eqn:action up 5} reads: \begin{equation} \label{eqn:kp} \Psi(H_{-n,m}) = \int_{|z_1| \ll ... \ll |z_n|} r_{n,m}(z_1,...,z_n) X(z_1,...,z_n) \end{equation} where $r_{n,m}(z_1,...,z_n)$ (resp. $X(z_1,...,z_n)$) is the rational function (resp. the expression) in $z_1,...,z_n$ on the first (resp. second) line of \eqref{eqn:action up 5}. 
If we assume $q_1$ and $q_2$ to be complex numbers with absolute value greater than 1, then one can move the contours in the integral of \eqref{eqn:kp} to $|z_1|=...=|z_n|$, without picking up any new poles. Once one does this, since $X(z_1,...,z_n)$ is symmetric in $z_1,...,z_n$, replacing $r_{n,m}$ with its symmetrization only changes the value of the integral by an overall factor of $n!$. Explicitly, this means that \eqref{eqn:kp} is equivalent to: \begin{equation} \label{eqn:jp} \Psi(H_{-n,m}) = \frac 1{n!} \int_{|z_1| = ... = |z_n|} R_{n,m}(z_1,...,z_n) X(z_1,...,z_n) \end{equation} where $R_{n,m} = \text{Sym } r_{n,m}$. An elementary application of \eqref{eqn:computation 2} shows that: $$ X(z_1,...,z_n) X(z_{n+1},...,z_{n+n'}) = X(z_1,...,z_{n+n'}) \prod^{1\leq i \leq n}_{n+1 \leq j \leq n+n'} \zeta \left(\frac {z_j}{z_i} \right)^{-1} $$ Therefore, by applying \eqref{eqn:jp} twice, we obtain: \begin{multline} \label{eqn:lp} \Psi(H_{-n,m}) \Psi(H_{-n',m'}) = \\ = \frac 1{(n+n')!}\int_{|z_1| = ... = |z_{n+n'}|} (R_{n,m} * R_{n',m'})(z_1,...,z_{n+n'}) X(z_1,...,z_{n+n'}) \end{multline} where $R_{n,m} * R_{n',m'}$ denotes the rational function in $z_1,...,z_{n+n'}$ given by: $$ \frac 1{n! n'!} \cdot \textrm{Sym} \left[R_{n,m}(z_1,...,z_n) R_{n',m'}(z_{n+1},...,z_{n+n'}) \prod^{1\leq i \leq n}_{n+1 \leq j \leq n+n'} \zeta \left(\frac {z_i}{z_j} \right) \right] $$ The operation $*$ gives rise to an associative product on the vector space ${\mathcal{S}}$ of symmetric rational functions with certain poles (\cite{FHHSY}), called the shuffle product. It was shown in \cite{Shuf} that the assignment: $$ ({\mathcal{A}}^-,\cdot) \rightarrow ({\mathcal{S}},*) \qquad H_{-n,m} \mapsto R_{n,m} $$ induces an algebra homomorphism. As we have seen by comparing formulas \eqref{eqn:jp} and \eqref{eqn:lp}, the assignment: $$ ({\mathcal{S}},*) \rightarrow (\text{End}(\Lambda_{\BA^2}), \cdot) \qquad R_{n,m} \mapsto \text{RHS of \eqref{eqn:jp}} $$ is also an algebra homomorphism. Composing the aforementioned homomorphisms implies that formulas \eqref{eqn:jp} give a well-defined action of ${\mathcal{A}}^-$ on $\Lambda_{\BA^2}$. The fact that formulas \eqref{eqn:action up 4} give rise to a well-defined action of ${\mathcal{A}}^+$ on $\Lambda_{\BA^2}$ is proved analogously. \end{proof} \subsection{} \label{sub:pleth not} We will use the symbol $X$ to refer to the totality of the variables $x_1,x_2,...$, and thus we will denote the complete and elementary symmetric functions by: \begin{align} &\sum_{k=0}^\infty \frac {h_k}{z^k} = \wedge^\bullet \left(- \frac Xz \right) \label{eqn:complete} \\ &\sum_{k=0}^\infty \frac {e_k}{(-z)^k} = \wedge^\bullet \left(\frac Xz \right) \label{eqn:elementary} \end{align} where $\wedge^\bullet$ is a multiplicative symbol determined by the property that if a vector space $V$ has torus character $\chi$, then $\wedge^\bullet(\chi)$ denotes the torus character of the total exterior power $\wedge^\bullet(V)$. Elements of $\Lambda_{\BA^2}$ will generally be denoted by $f[X]$.
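\noindent Concretely, if $X = x_1 + x_2 + ...$, then the multiplicativity of $\wedge^\bullet$ gives: $$ \wedge^\bullet \left(- \frac Xz \right) = \prod_{i} \left(1 - \frac {x_i}z \right)^{-1}, \qquad \wedge^\bullet \left(\frac Xz \right) = \prod_{i} \left(1 - \frac {x_i}z \right) $$ so \eqref{eqn:complete} and \eqref{eqn:elementary} recover the classical generating series of the complete and elementary symmetric functions. \\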
We will adopt \underline{plethystic notation}, according to which one defines: \begin{equation} \label{eqn:plethysm} f[X \pm (1-q_1)(1-q_2) z] \in \Lambda_{\BA^2} [z] \end{equation} to be the image of $f[X]$ under the ring homomorphism $\Lambda_{\BA^2} \rightarrow \Lambda_{\BA^2}[z]$ that sends: \begin{equation} \label{eqn:plethysm 2} p_n \mapsto p_n \pm (1-q_1^n)(1-q_2^n)z^n \end{equation} In other words, one computes the plethysm \eqref{eqn:plethysm} by expanding $f[X]$ in the basis \eqref{eqn:basis}, and then replacing each $p_n$ therein according to \eqref{eqn:plethysm 2}. For instance, $(p_1p_2)[X - (1-q_1)(1-q_2)z] = (p_1 - (1-q_1)(1-q_2)z)(p_2 - (1-q_1^2)(1-q_2^2)z^2)$. The reader may find a description of plethysm in the language of equivariant $K$--theory in Proposition \ref{prop:plethystic identity}. The following is a well-known and straightforward exercise: \\ \begin{proposition} \label{prop:plethysm} For any $f[X] \in \Lambda_{\BA^2}$ and any variable $z$, we have: \begin{equation} \label{eqn:pleth} f \left[ X \pm \left(1-q_1 \right)\left(1- q_2 \right) z \right] = \exp \left[\pm \sum_{k=1}^\infty \frac {p_k^\dagger z^k}k \right] \cdot f[X] \end{equation} where $p_k^\dagger$ is the adjoint operator defined in Subsection \ref{sub:a2}. \\ \end{proposition} \subsection{} \label{sub:new plane} Using \eqref{eqn:complete}, \eqref{eqn:elementary} and \eqref{eqn:pleth}, formulas \eqref{eqn:action up 4}--\eqref{eqn:action up 5} take the form: $$ \Psi(H_{n,m})(f[X]) = \int_{0, X \prec |z_n| \prec ... \prec |z_1| \prec \infty} \frac {\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action left} \wedge^\bullet \left( - \frac X{z_1} \right) ... \wedge^\bullet \left( - \frac X{z_n} \right) \cdot f \left[ X - \left(1-q_1 \right)\left(1- q_2 \right) \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} and: $$ \Psi(H_{-n,m})(f[X]) = \int_{0, X \prec |z_1| \prec ... \prec |z_n| \prec \infty} \frac {(-q)^n \prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action right} \wedge^\bullet \left( \frac X{z_1 q} \right) ... \wedge^\bullet \left( \frac X{z_n q} \right) \cdot f \left[ X + \left(1-q_1 \right)\left(1- q_2 \right) \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} Above, the notation $0, X \prec |z_n| \prec ... \prec |z_1| \prec \infty$ means that we integrate the variables $z_1,...,z_n$ over concentric circles that go in the prescribed order, and are contained between the poles at $0, x_1,x_2,...$ and the pole at $\infty$. Indeed, the variables $z_i$ must have absolute value larger than those of the variables $x_1, x_2,...$, in order for us to be able to replace the symbols $p_k$ in \eqref{eqn:action up 4}--\eqref{eqn:action up 5} by $x_1^k+x_2^k+...$. \\ \subsection{} \label{sub:torus} We will work over an algebraically closed field of characteristic 0, henceforth denoted by ${\mathbb{C}}$. Fix a line $\infty \subset {\mathbb{P}}^2$, and let us write ${\mathbb{A}}^2 = {\mathbb{P}}^2 \backslash \infty$ for the complement. \\ \begin{definition} Fix $r \in {\mathbb{N}}$.
For any $d \geq 0$, consider the moduli space: \begin{equation} \label{eqn:framed sheaves} {\mathcal{M}}^f_d = \Big\{ ({\mathcal{F}}, \phi), \ {\mathcal{F}} \text{ a torsion free sheaf on } {\mathbb{P}}^2, {\mathcal{F}}|_\infty \stackrel{\phi}\cong {\mathcal{O}}_\infty^{\oplus r}, c_2({\mathcal{F}}) = d \Big\} \end{equation} It is a smooth quasiprojective algebraic variety of dimension $2rd$. \\ \end{definition} \noindent An isomorphism $\phi$ as in \eqref{eqn:framed sheaves} is called a framing of the torsion-free sheaf ${\mathcal{F}}$, and the pair $({\mathcal{F}},\phi)$ is called a framed sheaf. We will write: $$ {\mathcal{M}}^f = \bigsqcup_{d = 0}^\infty {\mathcal{M}}^f_d $$ (the rank $r$ of our sheaves will be fixed throughout the present paper). The torus: $$ T = {\mathbb{C}}^* \times {\mathbb{C}}^* \times ({\mathbb{C}}^*)^r $$ acts on ${\mathcal{M}}^f$ as follows: the first two factors ${\mathbb{C}}^* \times {\mathbb{C}}^*$ act on sheaves by their underlying action on the standard coordinate directions of ${\mathbb{A}}^2$, while $({\mathbb{C}}^*)^r$ acts by multiplication on the isomorphism $\phi$ in \eqref{eqn:framed sheaves}. Therefore, we may consider: \begin{equation} \label{eqn:decomposition} K_{T}({\mathcal{M}}^f) = \prod_{d = 0}^\infty K_{T}({\mathcal{M}}^f_d) \end{equation} Let $\circ \in {\mathbb{A}}^2$ denote the origin, and let us consider the derived restriction: $$ \xymatrix{{\mathcal{U}}_\circ \ar@{.>}[d]\\ {\mathcal{M}}^f} $$ of the universal sheaf ${\mathcal{U}}$ on ${\mathcal{M}}^f \times {\mathbb{A}}^2$ to ${\mathcal{M}}^f \times \{\circ\} \cong {\mathcal{M}}^f$. \\ \subsection{} \label{sub:correspondences} We will now define certain operators on $K_T({\mathcal{M}})$, which were shown in \cite{K-theory} to give rise to the elliptic Hall algebra action that was discovered earlier in \cite{FT,SV 1}. \\ \begin{definition} \label{def:corr} The following moduli spaces are smooth quasiprojective varieties: \begin{align*} &{\mathfrak{Z}}_1 = \Big\{ ({\mathcal{F}} \supset_\circ {\mathcal{F}}')\Big\}\\ &{\mathfrak{Z}}_2^\bullet = \Big\{ ({\mathcal{F}} \supset_\circ {\mathcal{F}}' \supset_\circ {\mathcal{F}}'') \Big\} \end{align*} where ${\mathcal{F}} \supset_\circ {\mathcal{F}}'$ means that ${\mathcal{F}} \supset {\mathcal{F}}'$ (as framed sheaves) and the quotient ${\mathcal{F}}/{\mathcal{F}}'$ is isomorphic to the length 1 coherent sheaf supported at $\circ \in {\mathbb{A}}^2$. 
Consider the maps: $$ \xymatrix{& {\mathfrak{Z}}_1 \ar[ld]_{p_-} \ar[rd]^{p_+} & \\ {\mathcal{M}} & & {\mathcal{M}}} \qquad \qquad \xymatrix{& ({\mathcal{F}} \supset_\circ {\mathcal{F}}') \ar[ld] \ar[rd] & \\ {\mathcal{F}} & & {\mathcal{F}}'} $$ $$ \xymatrix{& {\mathfrak{Z}}_2^\bullet \ar[ld]_{\pi_-} \ar[rd]^{\pi_+} & \\ {\mathfrak{Z}}_1 & & {\mathfrak{Z}}_1} \qquad \xymatrix{& ({\mathcal{F}} \supset_\circ {\mathcal{F}}' \supset_\circ {\mathcal{F}}'') \ar[ld] \ar[rd] & \\ ({\mathcal{F}} \supset_\circ {\mathcal{F}}') & & ({\mathcal{F}}' \supset_\circ {\mathcal{F}}'')} $$ and the line bundles: $$ \xymatrix{{\mathcal{L}} \ar@{.>}[d] \\ {\mathfrak{Z}}_1} \qquad \qquad \qquad \xymatrix{{\mathcal{F}}_\circ/{\mathcal{F}}'_\circ \ar@{.>}[d] \\ ({\mathcal{F}} \supset_\circ {\mathcal{F}}')} $$ $$ \xymatrix{{\mathcal{L}}_1,{\mathcal{L}}_2 \ar@{.>}[d] \\ {\mathfrak{Z}}^\bullet_2} \ \ \qquad \xymatrix{{\mathcal{F}}'_\circ/{\mathcal{F}}''_\circ, {\mathcal{F}}_\circ/{\mathcal{F}}'_\circ \ar@{.>}[d] \\ ({\mathcal{F}} \supset_\circ {\mathcal{F}}' \supset_\circ {\mathcal{F}}'')} $$ \end{definition} \noindent The smoothness of these moduli spaces is proved by analogy with the corresponding statements in Definition \ref{def:corr surface}. However, all we need at the moment is the structure of ${\mathfrak{Z}}_1$ and ${\mathfrak{Z}}_2^\bullet$ as dg schemes, which was developed in \cite{K-theory}. The following is the main result of \emph{loc. cit. } (see also \cite{Hecke} for notation closer to ours): \\ \begin{theorem} \label{thm:action moduli} There exists an action ${\mathcal{A}} \stackrel{\Phi}\curvearrowright K_T({\mathcal{M}}^f)$ given by: \begin{equation} \label{eqn:action right 1} c_1 \mapsto q^r, \qquad c_2 \mapsto 1, \end{equation} \begin{align} &P_{0,m} \ \mapsto \text{tensoring with } p_m({\mathcal{U}}_\circ) \label{eqn:action right 2} \\ &P_{0,-m} \mapsto \text{tensoring with } - q^m \cdot p_m({\mathcal{U}}^\vee_\circ) \label{eqn:action right 3} \end{align} \footnote{Above, $p_m({\mathcal{U}}_\circ)$ means the $m$--th power sum functor: if ${\mathcal{U}}_\circ = \sum_i \pm y_i \in K_T({\mathcal{M}}^f)$, then: $$ p_m({\mathcal{U}}_\circ) = \sum_i \pm y_i^m \in K_T({\mathcal{M}}^f) $$} while for all $n > 0$ and $m \in {\mathbb{Z}}$, we have: \begin{equation} \label{eqn:action right 4} H_{n,m} \mapsto p_{-*} \Big[ {\mathcal{L}}^{d_n} \otimes \pi_{-*} \pi_+^* \Big[ {\mathcal{L}}^{d_{n-1}} \otimes ... \otimes \pi_{-*} \pi_+^* \Big[ {\mathcal{L}}^{d_1} \otimes p_+^* \Big] ... \Big] \Big] \end{equation} and: \begin{equation} \label{eqn:action right 5} H_{-n,m} \mapsto \left[ \frac {\det {\mathcal{U}}_\circ}{(-q)^{r-1}}\right]^n \otimes p_{+*} \Big[ {\mathcal{L}}^{d_1-r} \otimes ... \otimes \pi_{+*}\pi_-^* \Big[ {\mathcal{L}}^{d_n-r} \otimes p_-^* \Big] ... \Big] \end{equation} where $d_i = \left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor$. \\ \end{theorem} \subsection{} \label{sub:universal} Recall the decomposition \eqref{eqn:decomposition}, and consider the class of the structure sheaf: \begin{equation} \label{eqn:fundamental} \boldsymbol{1}_d \in K_T({\mathcal{M}}^f_d) \end{equation} For any symmetric polynomial $f[X] \in \Lambda_{\BA^2}$, we consider the so-called \underline{universal class}: \begin{equation} \label{eqn:universal class} f[{\mathcal{U}}_\circ] \in K_T({\mathcal{M}}^f_d) \end{equation} obtained by applying the symmetric polynomial $f$ to the Chern roots of the universal sheaf ${\mathcal{U}}_\circ$ on ${\mathcal{M}}^f_d$.
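\noindent For example, taking $f = p_m$ in \eqref{eqn:universal class} produces precisely the class $p_m({\mathcal{U}}_\circ)$ featured in \eqref{eqn:action right 2}, namely the $m$--th power sum of the Chern roots of ${\mathcal{U}}_\circ$. \\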
It is well-known that $K_T({\mathcal{M}}^f_d)$ is spanned by universal classes for every $d \geq 0$, i.e. by \eqref{eqn:universal class} as $f[X]$ ranges over $\Lambda_{\BA^2}$ (this fact holds for all Nakajima quiver varieties, of which ${\mathcal{M}}^f_d$ is an example). Then formula \eqref{eqn:action right 2} implies that $K_T({\mathcal{M}}^f_d)$ is generated by the operators $P_{0,1},P_{0,2},...$ acting on the class \eqref{eqn:fundamental}, for every $d \geq 0$. This also happens in the case of general surfaces, as we will see in Section \ref{sec:surfaces}. \\ \begin{proposition} \label{prop:action universal} (\cite{W surf}) In terms of universal classes, \eqref{eqn:action right 4}--\eqref{eqn:action right 5} read: $$ \Phi(H_{n,m})(f[{\mathcal{U}}_\circ]) = \int_{{\mathcal{U}}_\circ \prec |z_n| \prec ... \prec |z_1| \prec 0, \infty} \frac {\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action universal 1} \wedge^\bullet \left(- \frac {{\mathcal{U}}_\circ}{z_1} \right) ... \wedge^\bullet \left(- \frac {{\mathcal{U}}_\circ}{z_n} \right) \otimes f \left[{\mathcal{U}}_\circ-(1-q_1)(1-q_2) \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} and: $$ \Phi(H_{-n,m})(f[{\mathcal{U}}_\circ]) = \int_{{\mathcal{U}}_\circ \prec |z_1| \prec ... \prec |z_n| \prec 0, \infty} \frac {(-q)^{n}\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action universal 2} \wedge^\bullet \left(\frac {{\mathcal{U}}_\circ}{z_1q} \right) ... \wedge^\bullet \left(\frac {{\mathcal{U}}_\circ}{z_nq} \right) \otimes f \left[{\mathcal{U}}_\circ+(1-q_1)(1-q_2) \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} where $\wedge^\bullet \left(\frac {{\mathcal{U}}_\circ}z \right) = \sum_{i=0}^\infty (-z)^{-i} [\wedge^i ( {\mathcal{U}}_\circ )]$. \\ \end{proposition} \noindent Recall from the last paragraph of Subsection \ref{sub:new plane} that the notation ${\mathcal{U}}_\circ \prec |z_n| \prec ... \prec |z_1| \prec 0, \infty$ means that we integrate the variables $z_1,...,z_n$ over concentric circles that go in the prescribed order, and are contained between the Chern roots of the universal sheaf ${\mathcal{U}}_\circ$ and the poles at $0$ and $\infty$. \\ \begin{proof}\emph{of Theorem \ref{thm:main}:} It is easy to see that the operator $\Gamma_{\BA^2}$ of \eqref{eqn:gamma} is given by: $$ \Gamma_{\BA^2}(f[X]) = f[{\mathcal{U}}_\circ] $$ in the notations of Subsections \ref{sub:pleth not} and \ref{sub:universal}, respectively. The fact that $\Gamma_{\BA^2}$ commutes with $P_{0,m}$ for any $m > 0$ is an immediate consequence of comparing \eqref{eqn:action up 2} and \eqref{eqn:action right 2}. As for the fact that $\Gamma$ intertwines $\Psi(H_{\pm n, m})$ with $\Phi(H_{\pm n, m})$ for all $n>0$ and $m > \mp nr$, this follows by comparing formulas \eqref{eqn:action left}--\eqref{eqn:action right} with \eqref{eqn:action universal 1}--\eqref{eqn:action universal 2}: both pairs of formulas involve one and the same integrand, and the only distinction between them is the location of the contours.
Specifically, the contours in \eqref{eqn:action left}--\eqref{eqn:action right} differ from the ones in \eqref{eqn:action universal 1}--\eqref{eqn:action universal 2} only in the side of the contour on which the pole at $0$ lies. The integrals are equal because the integrands are regular at 0 in each variable among $z_1$,...,$z_n$, which is easily seen to be the case for \eqref{eqn:action universal 1}--\eqref{eqn:action universal 2} when $m > \mp nr$. \\ \noindent Concerning the uniqueness statement, let us show that there exists at most one map: \begin{equation} \label{eqn:gamma unique} \Gamma = \prod_{d=0}^\infty \Gamma_d \quad \text{with} \quad \Gamma_d : \Lambda_{{\mathbb{A}}^2,\text{loc}} \rightarrow K_T({\mathcal{M}}^f_d)_{\text{loc}} \end{equation} where: \begin{align*} &\Lambda_{{\mathbb{A}}^2,\text{loc}} = \Lambda_{\BA^2} \bigotimes_{{\mathbb{Z}}[q_1^{\pm 1}, q_2^{\pm 1}]} {\mathbb{Q}}(q_1,q_2) \\ &K_T({\mathcal{M}}^f_d)_{\text{loc}} = K_T({\mathcal{M}}^f_d) \bigotimes_{K_T(\circ)} \text{Frac}(K_T(\circ)) \end{align*} such that $\Gamma_0(1) = \boldsymbol{1}_0$ and such that $\Gamma$ commutes with the action of ${\CA^{(r)}}$, in the sense of diagram \eqref{eqn:main}. The commutativity with the operators $P_{0,m}$ for $m>0$ uniquely determines $\Gamma_0$. Meanwhile, we have the following. \\ \begin{claim} \label{claim:generate} Any class $\gamma \in K_T({\mathcal{M}}^f_d)_{\emph{loc}}$ is uniquely determined by the collection: $$ \Big\{ \Phi(H_{1,m_1}...H_{1,m_d})(\gamma) \in K_T({\mathcal{M}}^f_0)_{\emph{loc}} \Big\} $$ as $m_1,...,m_d$ range over the integers $>- r$. \\ \end{claim} \noindent The commutativity of diagram \eqref{eqn:main} implies that: $$ \Phi(H_{1,m_1}...H_{1,m_d})(\Gamma_d(f)) = \Gamma_0(\Psi(H_{1,m_1}...H_{1,m_d})(f)) $$ for all $f \in \Lambda_{{\mathbb{A}}^2,\text{loc}}$. Since we have already seen that $\Gamma_0$ is uniquely determined, Claim \ref{claim:generate} implies that $\Gamma_d(f)$ is uniquely determined, for all $d \geq 0$ and all $f$. \\ \begin{proof}\emph{of Claim \ref{claim:generate}:} Let ${\mathbb{F}} = \text{Frac}(K_T(\circ)) = K_T({\mathcal{M}}^f_0)_{\text{loc}}$, where the last equality is due to the fact that ${\mathcal{M}}^f_0$ is a point. The Thomason equivariant localization theorem gives us the following isomorphism of ${\mathbb{F}}$--vector spaces: \begin{equation} \label{eqn:localization} K_T({\mathcal{M}}_d^f)_{\text{loc}} \cong \bigoplus_{{\boldsymbol{\la}} \text{ of size } d} {\mathbb{F}} \cdot |{\boldsymbol{\la}} \rangle \end{equation} where $|{\boldsymbol{\la}} \rangle$ denotes the (renormalized) skyscraper sheaf at the $T$--fixed point of ${\mathcal{M}}_d^f$ indexed by an $r$--partition ${\boldsymbol{\la}}$ of size $d$ (i.e. an $r$--tuple of partitions of total size $d$; we refer to \cite{K-theory, W} for a discussion of the connection between fixed points and $r$--partitions). The classes $|{\boldsymbol{\la}} \rangle$ form an orthogonal basis of \eqref{eqn:localization}, with respect to the equivariant Euler characteristic pairing.
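\noindent For instance, when $r = 1$, the moduli space ${\mathcal{M}}^f_d$ is isomorphic to the Hilbert scheme of $d$ points on ${\mathbb{A}}^2$, and \eqref{eqn:localization} recovers the well-known fact that its localized equivariant $K$--theory has a basis indexed by the partitions of $d$ (equivalently, by monomial ideals). \\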
Since the adjoint of $H_{1,m}$ with respect to this pairing is a multiple of $H_{-1,m+r}$, the claim is equivalent to proving that: $$ \Big\{ \Phi(H_{-1,m_1}...H_{-1,m_d})(K_T({\mathcal{M}}^f_0)_{\text{loc}}) \Big\}_{m_i > 0} \quad \text{span} \quad K_T({\mathcal{M}}^f_d)_{\text{loc}} $$ We may prove this claim by induction on $d$, and it suffices to establish that: \begin{equation} \label{eqn:generate} \Big\{ \Phi(H_{-1,m})(K_T({\mathcal{M}}^f_{d-1})_{\text{loc}}) \Big\}_{m > 0} \quad \text{span} \quad K_T({\mathcal{M}}^f_d)_{\text{loc}} \end{equation} To prove the claim above, let us consider an $r$--partition ${\boldsymbol{\mu}}$ of size $d-1$. We have: \begin{equation} \label{eqn:matrix} \Phi(H_{-1,m})|{\boldsymbol{\mu}}\rangle = \sum_{{\boldsymbol{\la}} = {\boldsymbol{\mu}}+{\blacksquare}} \chi_{\blacksquare}^m \cdot \tau_{\boldsymbol{\mu}}^{\boldsymbol{\la}} |{\boldsymbol{\la}} \rangle \end{equation} where the right-hand side goes over all $r$--partitions obtained by adding a single box ${\blacksquare}$ to ${\boldsymbol{\mu}}$, and if this box is located at coordinates $(x,y)$ in the $i$--th constituent partition of ${\boldsymbol{\mu}}$, its weight is defined by $\chi_{\blacksquare} = u_i q_1^x q_2^y$ (see \cite{K-theory, W} for the aforementioned notions and formulas for the coefficients $\tau_{\boldsymbol{\mu}}^{\boldsymbol{\la}}$ that appear in \eqref{eqn:matrix}, but we remark that they do not depend on $m$). Since there are only finitely many ways to add a single box to the $r$--partition ${\boldsymbol{\mu}}$, and all of these boxes have different weights, the matrix $(\chi_{\blacksquare}^m)$ is of Vandermonde type and thus invertible, so there exists an ${\mathbb{F}}$--linear combination $H$ of the operators $H_{-1,1}, H_{-1,2},...$ such that $\Phi(H)|{\boldsymbol{\mu}}\rangle = |{\boldsymbol{\la}} \rangle$ for any fixed ${\boldsymbol{\la}}$. This completes the proof of \eqref{eqn:generate}. \end{proof} \end{proof} \section{The case of general surfaces} \label{sec:surfaces} \subsection{} \label{sub:assumption} Consider a smooth projective surface $S$ with an ample divisor $H$, and also fix $(r,c_1) \in {\mathbb{N}} \times H^2(S, {\mathbb{Z}})$. Consider the moduli space ${\mathcal{M}}^s$ of $H$--stable sheaves on the surface $S$ with the numerical invariants $r,c_1$ and any $c_2$. We make the following: \begin{align} &\textbf{Assumption A:} \qquad \gcd(r, c_1 \cdot H) = 1, \ \text{ and} \label{eqn:assumption a} \\ &\textbf{Assumption S:} \ \qquad \text{either } \begin{cases} {\mathcal{K}}_S \cong {\mathcal{O}}_S \quad \ \ \text{or} \\ c_1({\mathcal{K}}_S) \cdot H < 0 \end{cases} \label{eqn:assumption s} \end{align} Assumption A implies that ${\mathcal{M}}^s$ is representable, i.e. there exists a universal sheaf: $$ \xymatrix{{\mathcal{U}} \ar@{.>}[d] \\ {\mathcal{M}}^s \times S} $$ We fix a choice of ${\mathcal{U}}$ throughout this paper. If we let ${\mathcal{M}}^s_d \subset {\mathcal{M}}^s$ denote the subspace of sheaves with $c_2 = d$, then we have a disjoint union: $$ {\mathcal{M}}^s = \bigsqcup_{d = \left \lceil \frac {r-1}{2r} c_1^2 \right \rceil}^\infty {\mathcal{M}}^s_{d} $$ where the fact that $d$ is bounded below is a consequence of Bogomolov's inequality. Each ${\mathcal{M}}^s_{d}$ is projective (by Assumption A) and smooth (by Assumption S).
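\noindent For example, Assumption A is automatic when $r = 1$, while Assumption S holds when $S$ is a K3 or abelian surface (the first option) or a del Pezzo surface such as ${\mathbb{P}}^2$ (the second option, since $-{\mathcal{K}}_S$ is ample). \\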
In the present paper, we will work with the $K$--theory groups: $$ K({\mathcal{M}}^s) = \prod_{d = \left \lceil \frac {r-1}{2r} c_1^2 \right \rceil}^\infty K({\mathcal{M}}^s_d) $$ We refer the reader to \cite{Shuf surf, Hecke} for an introduction to basic facts on the moduli space of stable sheaves, as they pertain to the present paper. \\ \subsection{} \label{sub:weak} Since $K({\mathcal{M}}^s \times S) \not \cong K({\mathcal{M}}^s)$, as opposed to the case $S={\mathbb{A}}^2$ studied previously, we must take care in defining what we mean by ``algebras acting on $K$--theory groups". In Definitions \ref{def:weak} and \ref{def:strong}, we let $X$ be any smooth quasiprojective algebraic variety. \\ \begin{definition} \label{def:weak} A \underline{weak action} ${\mathcal{A}} \stackrel{\Phi}\curvearrowright_S K(X)$ is an abelian group homomorphism: \begin{equation} \label{eqn:def phi} {\CA} \xrightarrow{\Phi} \emph{Hom}(K(X), K(X \times S)) \end{equation} such that: \\ \begin{itemize}[leftmargin=*] \item $\Phi(1)$ is the standard pull-back map; \\ \item for all $a \in {\mathcal{A}}$ and $f \in {\mathbb{Z}}[q_1^{\pm 1}, q_2^{\pm 1}]^{\emph{Sym}}$, we require $\Phi(f \cdot a)$ to equal the composition: \begin{equation} \label{eqn:eq par} K(X) \xrightarrow{\Phi(a)} K(X \times S) \xrightarrow{\emph{Id}_{X} \boxtimes f(q_1,q_2)} K(X \times S) \end{equation} where $q_1,q_2$ are identified with the Chern roots of $[\Omega_S^1] \in K(S)$; \\ \item for all $a,b \in {\mathcal{A}}$, we require $\Phi(ab)$ to equal the composition: \begin{equation} \label{eqn:hom} K(X) \xrightarrow{\Phi(b)} K(X \times S) \xrightarrow{\Phi(a) \boxtimes \emph{Id}_S} K(X \times S \times S) \xrightarrow{\emph{Id}_{X} \boxtimes \Delta^*} K(X \times S) \end{equation} (where $\Delta : S \hookrightarrow S \times S$ is the diagonal). \\ \end{itemize} \end{definition} \noindent We will apply the definition above when $X = {\mathcal{M}}^s$ or $X = \bigsqcup_{k=0}^\infty S^k$. \\ \subsection{} \label{sub:strong} Note that in the definition of a weak action, the composition of operators $\Phi(a)$ and $\Phi(b)$ only records what happens on the diagonal of $S \times S$. To understand the behavior off the diagonal, we introduce the following stronger notion. \\ \begin{definition} \label{def:strong} A weak action as in Definition \ref{def:weak} is called \underline{strong} if, for all $a,b \in {\mathcal{A}}$, we have the following equality of operators $K(X) \rightarrow K(X \times S \times S)$: \begin{equation} \label{eqn:comm phi} [\Phi(a), \Phi(b)] = (\emph{Id}_X \boxtimes \Delta)_*\left[\Phi \left( \frac {[a,b]}{(1-q_1)(1-q_2)} \right) \right] \end{equation} where the left-hand side of \eqref{eqn:comm phi} denotes the difference of the compositions: \begin{align} &K(X) \xrightarrow{\Phi^{(2)}(b)} K(X \times S_2) \xrightarrow{\Phi^{(1)}(a) \boxtimes \emph{Id}_{S_2}} K(X \times S_1 \times S_2) \label{eqn:diff 1} \\ &K(X) \xrightarrow{\Phi^{(1)}(a)} K(X \times S_1) \xrightarrow{\Phi^{(2)}(b) \boxtimes \emph{Id}_{S_1}} K(X \times S_1 \times S_2) \label{eqn:diff 2} \end{align} (we write $S_i$ instead of $S$ and $K(X) \xrightarrow{\Phi^{(i)}(a)} K(X \times S_i)$ instead of $\Phi(a)$, $\forall i \in \{1,2\}$, in order to better illustrate the two factors of $S$ involved in \eqref{eqn:diff 1}--\eqref{eqn:diff 2}). \\ \end{definition} \noindent The right-hand side of \eqref{eqn:comm phi} is well-defined, because (see \cite{W surf}) the commutator of any two elements of ${\mathcal{A}}$ is a multiple of $(1-q_1)(1-q_2)$.
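\noindent As a simple illustration of this divisibility, the scalar in the right-hand side of \eqref{eqn:need 2} satisfies: $$ m (1-q_1^m)(1-q_2^m)q^m = m(1-q_1)(1-q_2) \left(\sum_{i=0}^{m-1} q_1^i \right) \left(\sum_{j=0}^{m-1} q_2^j \right) q^m $$ so dividing it by $(1-q_1)(1-q_2)$ yields a genuine Laurent polynomial in $q_1,q_2$. \\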
The following operators: \begin{equation} \label{eqn:reduced} [\Phi(a), \Phi(b)]_{\text{red}} = \Phi \left( \frac {[a,b]}{(1-q_1)(1-q_2)} \right) \end{equation} which are operators from $K(X)$ to $K(X \times S)$, will be called the \underline{reduced commutators}. \\ \begin{remark} \label{rem:leibniz jacobi} Consider any $\alpha, \beta, \gamma : K(X) \rightarrow K(X \times S)$, and any $f \in {\mathbb{Z}}[q_1^{\pm 1}, q_2^{\pm 1}]^{\emph{Sym}}$. Let us define the operators: $$ f \alpha, \ \alpha \beta : K(X) \rightarrow K(X \times S) $$ and: $$ [\alpha, \beta] : K(X) \rightarrow K(X \times S \times S) $$ by replacing $\Phi(a)$ and $\Phi(b)$ in \eqref{eqn:eq par}, \eqref{eqn:hom}, \eqref{eqn:diff 1}, \eqref{eqn:diff 2} with $\alpha$ and $\beta$. Then we have the following associativity properties: \begin{equation} \label{eqn:associativity} (f \alpha) \beta = f (\alpha \beta), \qquad (\alpha \beta)\gamma = \alpha (\beta \gamma) \end{equation} Moreover, assume that the commutator of any two of $\alpha,\beta,\gamma$ is supported on the diagonal $\Delta \subset S \times S$, i.e. we have the following equality $K(X) \rightarrow K(X \times S \times S)$: \begin{equation} \label{eqn:red} [\alpha, \beta] = (\emph{Id}_X \boxtimes \Delta)_*([\alpha, \beta]_{\emph{red}}) \end{equation} for some operator $[\alpha, \beta]_{\emph{red}} : K(X) \rightarrow K(X \times S)$, and the analogous formulas for the pairs $(\beta, \gamma)$ and $(\alpha, \gamma)$. \footnote{If \eqref{eqn:red} holds for some operator $[\alpha, \beta]_{\text{red}}$, then this operator is unique, due to the fact that the map $(\text{Id}_X \boxtimes \Delta)_*$ has a left inverse given by $(\text{Id}_X \boxtimes \text{proj}_1)_*$} Then the following Leibniz rule holds: \begin{equation} \label{eqn:leibniz} [\alpha \beta, \gamma]_{\emph{red}} = \alpha [\beta,\gamma]_{\emph{red}} + [\alpha, \gamma]_{\emph{red}} \beta \end{equation} and the following Jacobi identity holds: \begin{equation} \label{eqn:jacobi} \sum_{\emph{cyclic}} [\alpha,[\beta, \gamma]_{\emph{red}}]_{\emph{red}} = 0 \end{equation} The claims \eqref{eqn:associativity}, \eqref{eqn:leibniz} and \eqref{eqn:jacobi} are straightforward exercises. \\ \end{remark} \subsection{} Let us apply Definitions \ref{def:weak} and \ref{def:strong} to the case $X = {\mathcal{M}}^f$ and $S = {\mathbb{A}}^2$, in $T$--equivariant $K$--theory. In this case, composing the action map: $$ {\mathcal{A}} \xrightarrow{\Phi} \text{Hom}(K_T({\mathcal{M}}^f), K_T({\mathcal{M}}^f \times {\mathbb{A}}^2)) $$ with the restriction to the origin $\circ \in {\mathbb{A}}^2$ (which is an isomorphism), we obtain: $$ {\CA} \xrightarrow{\Phi'} \text{End}(K_T({\mathcal{M}}^f)) $$ Property \eqref{eqn:eq par} states that $q_1$ and $q_2$ are the equivariant Chern roots of $[\Omega_{{\mathbb{A}}^2}^1]$, property \eqref{eqn:hom} states that $\Phi'(ab) = \Phi'(a)\Phi'(b)$, while property \eqref{eqn:comm phi} states that: $$ [\Phi'(a), \Phi'(b)] = \Phi'([a,b]) $$ the reason being that $\Delta^*\Delta_* = (1-q_1)(1-q_2)$ if $\Delta : {\mathbb{A}}^2 \hookrightarrow {\mathbb{A}}^2 \times {\mathbb{A}}^2$ is the diagonal. The conclusion is that $\Phi'$ yields an honest action of ${\mathcal{A}}$ on $K_T({\mathcal{M}}^f)$. \\ \begin{remark} Definitions \ref{def:weak} and \ref{def:strong} are inspired by the Heisenberg algebra action on the cohomology groups of Hilbert schemes that was developed by Grojnowski (\cite{G}) and Nakajima (\cite{Nak}).
This construction can be interpreted as ``operators on the cohomology groups of Hilbert schemes of points on a surface $S$, indexed by a cohomology class on $S$". Indeed, if: \begin{equation} \label{eqn:phi gamma} \Phi(a)^{(\gamma)} : K({\mathcal{M}}^s) \rightarrow K({\mathcal{M}}^s) \end{equation} denotes the composition: $$ K({\mathcal{M}}^s) \xrightarrow{\Phi(a)} K({\mathcal{M}}^s \times S) \xrightarrow{\emph{Id}_{{\mathcal{M}}^s} \boxtimes \gamma} K({\mathcal{M}}^s \times S) \xrightarrow{\pi_*} K({\mathcal{M}}^s) $$ for any $\gamma \in K(S)$ (where $\pi:{\mathcal{M}}^s \times S \rightarrow {\mathcal{M}}^s$ is the projection), then \eqref{eqn:comm phi} reads: $$ \left[ \Phi(a)^{(\gamma)}, \Phi(b)^{(\delta)} \right] = \Phi \left( \frac {[a,b]}{(1-q_1)(1-q_2)} \right)^{(\gamma \delta)} $$ for any $\gamma, \delta \in K(S)$. The particular case of the formula above when $a = P_{n,0}$ and $b = P_{n',0}$ yields precisely a Heisenberg algebra action in the sense of \emph{loc. cit. } However, since in $K$--theory one does not have a K\"unneth decomposition, the datum of the homomorphism $\Phi(a)$ is stronger than the totality of the endomorphisms \eqref{eqn:phi gamma}. \\ \end{remark} \subsection{} Let us present the analogues of the correspondences of Subsection \ref{sub:correspondences} with $({\mathcal{M}}^f, {\mathbb{A}}^2)$ replaced by $({\mathcal{M}}^s, S)$, and use them to construct an action ${\mathcal{A}} \curvearrowright_S K({\mathcal{M}}^s)$. \\ \begin{definition} \label{def:corr surface} The following moduli spaces are smooth projective varieties: \begin{align*} &{\mathfrak{Z}}_1 = \Big\{ ({\mathcal{F}} \supset_x {\mathcal{F}}') \text{ for some } x \in S\Big\}\\ &{\mathfrak{Z}}_2^\bullet = \Big\{ ({\mathcal{F}} \supset_x {\mathcal{F}}' \supset_x {\mathcal{F}}'') \text{ for some } x \in S \Big\} \end{align*} Consider the maps: $$ \xymatrix{& {\mathfrak{Z}}_1 \ar[ld]_{p_-} \ar[d]^{p_S} \ar[rd]^{p_+} & \\ {\mathcal{M}} & S & {\mathcal{M}}} \qquad \qquad \xymatrix{& ({\mathcal{F}} \supset_x {\mathcal{F}}') \ar[ld] \ar[d] \ar[rd] & \\ {\mathcal{F}} & x & {\mathcal{F}}'} $$ $$ \xymatrix{& {\mathfrak{Z}}_2^\bullet \ar[ld]_{\pi_-} \ar[rd]^{\pi_+} & \\ {\mathfrak{Z}}_1 & & {\mathfrak{Z}}_1} \qquad \xymatrix{& ({\mathcal{F}} \supset_x {\mathcal{F}}' \supset_x {\mathcal{F}}'') \ar[ld] \ar[rd] & \\ ({\mathcal{F}} \supset_x {\mathcal{F}}') & & ({\mathcal{F}}' \supset_x {\mathcal{F}}'')} $$ and the line bundles: $$ \xymatrix{{\mathcal{L}} \ar@{.>}[d] \\ {\mathfrak{Z}}_1} \qquad \qquad \qquad \xymatrix{{\mathcal{F}}_x/{\mathcal{F}}'_x \ar@{.>}[d] \\ ({\mathcal{F}} \supset_x {\mathcal{F}}')} $$ $$ \xymatrix{{\mathcal{L}}_1,{\mathcal{L}}_2 \ar@{.>}[d] \\ {\mathfrak{Z}}^\bullet_2} \ \ \qquad \xymatrix{{\mathcal{F}}'_x/{\mathcal{F}}''_x, {\mathcal{F}}_x/{\mathcal{F}}'_x \ar@{.>}[d] \\ ({\mathcal{F}} \supset_x {\mathcal{F}}' \supset_x {\mathcal{F}}'')} $$ \end{definition} \noindent We refer the reader to \cite{Shuf surf} for the statements pertaining to ${\mathfrak{Z}}_1$ (although they were known for a long time, see \cite{ES}) and to \cite{W surf} for the statements pertaining to ${\mathfrak{Z}}_2^\bullet$. \\ \subsection{} The following analogue of Theorem \ref{thm:action moduli} was proved in \cite{Hecke}. 
\\ \begin{theorem} \label{thm:action surface} There exists a strong action ${\mathcal{A}} \stackrel{\Phi}\curvearrowright_S K({\mathcal{M}}^s)$ given by: \begin{equation} \label{eqn:action right surface 1} c_1 \mapsto q^r, \qquad c_2 \mapsto 1, \end{equation} (recall from Definition \ref{def:weak} that $q_1,q_2$ are identified with the Chern roots of $[\Omega_S^1]$, hence $q = q_1q_2$ is identified with the canonical line bundle $[{\mathcal{K}}_S]$) and: \begin{align} &P_{0,m} \ \ \mapsto \left[ K({\mathcal{M}}^s) \xrightarrow{\text{pull-back}} K({\mathcal{M}}^s \times S)\xrightarrow{\otimes p_m({\mathcal{U}})} K({\mathcal{M}}^s \times S) \right] \label{eqn:action right surface 2} \\ &P_{0,-m} \mapsto \left[ K({\mathcal{M}}^s) \xrightarrow{\text{pull-back}} K({\mathcal{M}}^s \times S)\xrightarrow{\otimes (-q^m p_m({\mathcal{U}}^\vee))} K({\mathcal{M}}^s \times S) \right] \label{eqn:action right surface 3} \end{align} while for all $n > 0$ and $m \in {\mathbb{Z}}$, we have: \begin{equation} \label{eqn:action right surface 4} H_{n,m} \mapsto (p_- \times p_S)_* \Big[ {\mathcal{L}}_n^{d_n} \otimes \pi_{-*} \pi_+^* \Big[ {\mathcal{L}}_{n-1}^{d_{n-1}} \otimes ... \pi_{-*} \pi_+^* \Big[ {\mathcal{L}}_1^{d_1} \otimes p_+^* \Big] ... \Big] \Big] \end{equation} and: \begin{equation} \label{eqn:action right surface 5} H_{-n,m} \mapsto \left[ \frac {\det {\mathcal{U}}}{(-q)^{r-1}} \right]^n \otimes (p_+ \times p_S)_* \Big[ {\mathcal{L}}_1^{d_1-r} \otimes ... \otimes \pi_{+*} \pi_-^* \Big[ {\mathcal{L}}_n^{d_n-r} \otimes p_-^* \Big] ... \Big] \end{equation} where $d_i = \left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor$. \\ \end{theorem} \subsection{} The analogue of the ring of symmetric functions for an arbitrary surface is: $$ \Lambda_S = \bigoplus_{k=0}^\infty K_{{\mathfrak{S}}(k)} (S^k) $$ where ${\mathfrak{S}}(k)$ permutes the factors of $S^k$. We will (slightly abusively) write: \begin{align*} &\Lambda_S \times K(S) = \bigoplus_{k=0}^\infty K_{{\mathfrak{S}}(k)} (S^k \times S) \\ &\Lambda_S \times K(S \times S) = \bigoplus_{k=0}^\infty K_{{\mathfrak{S}}(k)} (S^k \times S \times S) \end{align*} where ${\mathfrak{S}}(k)$ does not act on the last factor of $S$ or $S \times S$. There exist analogues of the operators \eqref{eqn:ind res} of parabolic induction and restriction, respectively: \begin{equation} \label{eqn:operators} p_k, p_k^\dagger : \Lambda_S \longrightarrow \Lambda_S \times K(S) \end{equation} which take the form: \begin{align} &K_{{\mathfrak{S}}(l)} (S^l) \xrightarrow{p_k} K_{{\mathfrak{S}}(k+l)} (S^{k+l} \times S) \label{eqn:induction surface} \\ &K_{{\mathfrak{S}}(k+l)} (S^{k+l}) \xrightarrow{p^\dagger_k} K_{{\mathfrak{S}}(l)} (S^l \times S) \label{eqn:restriction surface} \end{align} and are explicitly given by: \begin{align} &p_k(M) = \text{Ind}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} \Big( \underbrace{ \left[p_k \otimes {\mathcal{O}}_{\Delta_{1...k\bullet}} \right] }_{\text{sheaf on } S^k \times S} \ \boxtimes \underbrace{M}_{\text{sheaf on }S^l} \Big) \label{eqn:ind surf} \\ &p_k^\dagger(M) = \text{Hom}_{{\mathfrak{S}}(k)}\left( p_k, \text{Res}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} (M)\Big|_{\Delta_{1...k \bullet \times S^l}} \right) \label{eqn:res surf} \end{align} where $\Delta_{1...k\bullet} \subset S^k \times S$ is the small diagonal. 
In the right-hand side of \eqref{eqn:res surf}, we implicitly pull back $M$ from $S^{k+l}$ to $S^{k+l} \times S$, and then restrict it to the diagonal obtained by identifying the first $k$ factors with the last one, thus obtaining a sheaf on $S^l \times S$. \\ \begin{proposition} \label{prop:classic surface} The operators $p_k$, $p_k^\dagger$ give rise to a strong action (in the sense of Definitions \ref{def:weak} and \ref{def:strong}) of the infinite-dimensional Heisenberg algebra, i.e.: \begin{equation} \label{eqn:heis surface} [p_k^\dagger, p_l] = (\emph{Id}_{\Lambda_S} \boxtimes \Delta)_* \left( \rho^*\left[\delta_k^l k \frac {(1-q_1^k)(1-q_2^k)}{(1-q_1)(1-q_2)} \right] \otimes \pi^* \right) \end{equation} as well as $[p_k,p_l] = [p_k^\dagger, p_l^\dagger] = 0$, as homomorphisms $\Lambda_S \rightarrow \Lambda_S \times K(S \times S)$, where: $$ \xymatrix{& {\bigsqcup_{n=0}^{\infty}} S^n \times S \ar[ld]_\pi \ar[rd]^\rho & \\ {\bigsqcup_{n=0}^{\infty}} S^n & & S} $$ are the standard projections. In the left-hand side of \eqref{eqn:heis surface}, the operator $p_k^\dagger$ (respectively $p_l$) acts only on the first (respectively second) factor of $S \times S$. \\ \end{proposition} \begin{remark} Proposition \ref{prop:classic surface} actually holds for an arbitrary smooth variety $S$, although we will only need the surface case. If $S$ has dimension $d$, then the constant in the square brackets in the right-hand side of \eqref{eqn:heis surface} must be replaced by: $$ \delta_k^l k \frac {(1-q_1^k)(1-q_2^k)...(1-q_d^k)}{(1-q_1)(1-q_2)...(1-q_d)} $$ where $q_1+q_2+...+q_d = [\Omega_S^1] \in K(S)$. The proof below applies to arbitrary $d$. \\ \end{remark} \begin{proof} Recall that elements of $K(S^n)$ represent vector bundles $M$ on $S^n$. Given any permutation $\sigma : \{1,...,n\} \rightarrow \{1,...,n\}$, $\sigma(i) = a_i$, we will use the notation $M_{a_1...a_n}$ for the vector bundle $\sigma^*(M)$ on $S^n$ and the associated $K$--theory class. Similarly, elements of $K_{{\mathfrak{S}}(n)}(S^n)$ represent ${\mathfrak{S}}(n)$--equivariant vector bundles, i.e. vector bundles $M$ on $S^n$ endowed with isomorphisms $M \cong \sigma^*(M)$ for all $\sigma \in {\mathfrak{S}}(n)$, compatibly with the composition of permutations. In this language, formula \eqref{eqn:ind surf} reads: \begin{equation} \label{eqn:zero} p_l(M) = \bigoplus^{(l,n)}_{\text{shuffles}} [p_l \otimes {\mathcal{O}}_{\Delta_{a_1...a_l\bullet}}] \boxtimes M_{b_1....b_n} \end{equation} where the right-hand side is a vector bundle on $S^{n+l} \times S$ (the index $\bullet$ represents the last factor of $S$) with the action of ${\mathfrak{S}}(n+l)$ given by permutation of the indices. The term ``$(l,n)$--shuffles" above and henceforth refers to the set of all partitions: \begin{equation} \label{eqn:shuffle} \{a_1<...<a_l\} \sqcup \{b_1<...<b_n\} =\{1,...,n+l\} \end{equation} Iterating \eqref{eqn:zero} twice implies that: \begin{equation} \label{eqn:dodi} p_k p_l(M) = \bigoplus^{(k,l,n)}_{\text{shuffles}} [p_k \otimes {\mathcal{O}}_{\Delta_{a_1...a_k\circ}} ] \otimes [p_l \otimes {\mathcal{O}}_{\Delta_{b_1...b_l\bullet}}] \boxtimes M_{c_1....c_n} \end{equation} where $(k,l,n)$--shuffles are partitions of $\{1,...,n+k+l\}$ into three sets, of sizes $k$, $l$ and $n$, respectively. The right-hand side of \eqref{eqn:dodi} is a vector bundle on $S^{n+k+l} \times S \times S$, where the latter two factors of $S$ are indexed by the symbols $\circ$ and $\bullet$, respectively.
It is clear that the right-hand side of \eqref{eqn:dodi} is symmetric under permuting $k \leftrightarrow l$, if we also permute $\circ \leftrightarrow \bullet$. This implies $[p_k,p_l] = 0$, and the statement that $[p_k^\dagger, p_l^\dagger] = 0$ is analogous. As for the commutator \eqref{eqn:heis surface}, we note that \eqref{eqn:res surf} implies: \begin{equation} \label{eqn:unu} p_k^\dagger p_l(M) = \textrm{Hom}_{{\mathfrak{S}}(k)} \left(p_k, \bigoplus^{(l,n)}_{\text{shuffles}} [p_l \otimes {\mathcal{O}}_{\Delta_{a_1...a_l\bullet}}] \boxtimes M_{b_1....b_n} \Big|_{\Delta_{1...k\circ} \times S^{n+l-k}} \right) \end{equation} as a vector bundle on $S^{n+l-k} \times S \times S$ (the indices $\bullet$ and $\circ$ represent the two latter factors of $S$) with the action of ${\mathfrak{S}}(n+l-k)$ given by permutation of the indices $>k$. As a virtual representation of ${\mathfrak{S}}(l)$, the power sum $p_l$ has the property that: \begin{equation} \label{eqn:restriction zero} p_l \Big|_{{\mathfrak{S}}(i) \times {\mathfrak{S}}(l-i)} = 0 \end{equation} for all $i \in \{1,...,l-1\}$. We will call any shuffle as in \eqref{eqn:shuffle} ``of type $i$'' if: $$ \{a_1,...,a_i\} \sqcup \{b_1,...,b_{k-i}\} = \{1,...,k\} $$ Because of \eqref{eqn:restriction zero}, the only shuffles which contribute non-trivially to \eqref{eqn:unu} are those of type $0$ and type $l$. The shuffles of type $0$ correspond to the case when $\{1,...,k\} \subset \{b_1,...,b_n\}$, and their contribution to \eqref{eqn:unu} may be identified with: \begin{multline} \label{eqn:doi} p_l p_k^\dagger(M) = \bigoplus^{\text{type 0}}_{(l,n)\text{--shuffles}} [p_l \otimes {\mathcal{O}}_{\Delta_{a_1...a_l\bullet}}] \boxtimes \\ \boxtimes \textrm{Hom}_{{\mathfrak{S}}(k)} \left(p_k, M_{b_1....b_n} \Big|_{\Delta_{1...k\circ} \times S^{n-k}} \right) \end{multline} Therefore, the difference between \eqref{eqn:unu} and \eqref{eqn:doi} consists precisely of the sum over type $l$ shuffles, i.e. those such that $\{a_1,...,a_l\} \subset \{1,...,k\}$. However, if $k>l$, then the $\text{Hom}_{{\mathfrak{S}}(k)}(p_k,...)$ space in \eqref{eqn:unu} vanishes because of \eqref{eqn:restriction zero} for $k \leftrightarrow l$. Therefore, the only shuffle which has a non-zero contribution to the difference of \eqref{eqn:unu} and \eqref{eqn:doi} is the one corresponding to $\{a_1,...,a_l\} = \{1,...,k\}$. We thus conclude that: \begin{equation} \label{eqn:trei} [p_k^\dagger, p_l](M) = M \boxtimes \delta_k^l \textrm{Hom}_{{\mathfrak{S}}(k)} \left(p_k, p_k \otimes {\mathcal{O}}_{\Delta_{1...k\bullet}} \Big|_{\Delta_{1...k\circ}}\right) \end{equation} In $K$--theory, the restriction of the structure sheaf of a regularly embedded subvariety (in the situation above, the small diagonal $\Delta_{1...k} : S \hookrightarrow S^k$) to the subvariety itself is equal to the exterior algebra of the conormal bundle of this subvariety. In the situation of \eqref{eqn:trei}, this leads to: \begin{equation} \label{eqn:patru} [p_k^\dagger, p_l](M) = M \boxtimes \Delta_* \left[ \delta_k^l \textrm{Hom}_{{\mathfrak{S}}(k)} \left(p_k, p_k \otimes \wedge^\bullet(N^*_{S|S^k}) \right) \right] \end{equation} where $\Delta \hookrightarrow S \times S$ is the diagonal that identifies the points $\bullet$ and $\circ$. Recall that: $$ \textrm{Hom}_{{\mathfrak{S}}(k)}(p_k,V) = \text{Tr}_V(\omega_k) $$ where $\omega_k \in {\mathfrak{S}}(k)$ is a $k$--cycle.
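\noindent As a sanity check when $k = 2$: $p_2 = h_2 - e_2$ is the difference of the trivial and sign representations of ${\mathfrak{S}}(2)$, so for any representation $V$ we indeed have: $$ \textrm{Hom}_{{\mathfrak{S}}(2)}(p_2,V) = \frac {\text{Tr}_V(1) + \text{Tr}_V(\omega_2)}2 - \frac {\text{Tr}_V(1) - \text{Tr}_V(\omega_2)}2 = \text{Tr}_V(\omega_2) $$ \\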
Therefore, \eqref{eqn:patru} implies \eqref{eqn:heis surface} because of the well-known fact that $\text{Tr}_{p_k}(\omega_k) = k$ and the claim below: \\ \begin{claim} If $N_{S|S^k}$ denotes the normal bundle of the small diagonal in $S^k$, then: $$ \emph{Tr}_{\wedge^\bullet(N^*_{S|S^k})}(\omega_k) = \frac {(1-q_1^k)(1-q_2^k)}{(1-q_1)(1-q_2)} $$ where $q_1+q_2 = [\Omega_S^1]$. \\ \end{claim} \noindent The normal bundle arises from the short exact sequence: $$ 0 \longrightarrow TS \xrightarrow{\text{diag}} (TS)^{\oplus k} \longrightarrow N_{S|S^k} \longrightarrow 0 $$ where the ${\mathfrak{S}}(k)$--action permutes the factors of $TS$. Therefore, we have: $$ \wedge^\bullet(N^*_{S|S^k}) = \frac {\wedge^\bullet \left( (\Omega_S^1)^{\oplus k} \right)}{\wedge^\bullet (\Omega_S^1)} $$ Since the denominator of the expression above is a trivial ${\mathfrak{S}}(k)$--module with $K$--theory class $(1-q_1)(1-q_2)$, it remains to show that: \begin{equation} \label{eqn:cinci} \text{Tr}_{\wedge^\bullet \left( (T^*S)^{\oplus k} \right)}(\omega_k) = (1-q_1^k)(1-q_2^k) \end{equation} By the splitting principle, we can assume $\Omega_S^1 \cong {\mathcal{L}}_1 \oplus {\mathcal{L}}_2$, where ${\mathcal{L}}_1$ and ${\mathcal{L}}_2$ are line bundles with $K$--theory classes $q_1$ and $q_2$. If we let $l_a$ denote a local section of the line bundle ${\mathcal{L}}_a$, then a basis for the local sections of $\wedge^\bullet({\mathcal{L}}_1^{\oplus k} \oplus {\mathcal{L}}_2^{\oplus k})$ consists of: \begin{equation} \label{eqn:wedges} l_1^{(i_1)} \wedge l_1^{(i_2)} \wedge ... \wedge l_1^{(i_a)} \wedge l_2^{(j_1)} \wedge l_2^{(j_2)} \wedge ... \wedge l_2^{(j_b)} \end{equation} where $l_a^{(i)}$ denotes the section $l_a$ on the $i$--th copy of ${\mathcal{L}}_a$ inside ${\mathcal{L}}_a^{\oplus k}$. The cycle $\omega_k$ acts on the basis \eqref{eqn:wedges} by increasing the indices $i_a, j_b \in {\mathbb{Z}}/k{\mathbb{Z}}$ by 1. Therefore, there are only 4 basis elements which are preserved up to sign by $\omega_k$, namely the cases $(a,b) = (0,0)$, $(k,0)$, $(0,k)$, $(k,k)$ of \eqref{eqn:wedges}. These 4 basis elements contribute precisely $1$, $-q_1^k$, $-q_2^k$, $q^k$ (respectively) to the trace, thus implying \eqref{eqn:cinci}. \end{proof} \subsection{} By analogy with Subsection \ref{sub:plethysm}, we have: \begin{align} &\sum_{k=0}^\infty \frac {h_k}{z^k} = \exp \left[ \sum_{k=1}^\infty \frac {p_k}{k z^k} \right] \Big|_\Delta \label{eqn:complete surface} \\ &\sum_{k=0}^\infty \frac {e_k}{(-z)^k} = \exp \left[- \sum_{k=1}^\infty \frac {p_k}{k z^k} \right] \Big|_\Delta \label{eqn:elementary surface} \end{align} where $h_k,e_k$ are operators defined by analogy with \eqref{eqn:ind surf}, specifically: \begin{align*} h_k(M) = \text{Ind}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} \Big([\text{triv}_{{\mathfrak{S}}(k)} \otimes {\mathcal{O}}_{\Delta_{1...k\bullet}}] \boxtimes M \Big) \\ e_k(M) = \text{Ind}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} \Big([\text{sign}_{{\mathfrak{S}}(k)} \otimes {\mathcal{O}}_{\Delta_{1...k\bullet}}] \boxtimes M \Big) \end{align*} for all $M \in K_{{\mathfrak{S}}(l)}(S^l)$.
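\noindent For example, since the trivial and sign representations of ${\mathfrak{S}}(1)$ both coincide with $p_1$, the operators above satisfy $h_1 = e_1 = p_1$, in agreement with the classical identity among symmetric functions. \\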
Moreover, we consider: \begin{align} &\sum_{k=0}^\infty h_k^\dagger z^k = \exp \left[ \sum_{k=1}^\infty \frac {p^\dagger_k z^k}{k} \right] \Big|_\Delta \label{eqn:complete surface adj} \\ &\sum_{k=0}^\infty e_k^\dagger (-z)^k = \exp \left[- \sum_{k=1}^\infty \frac {p^\dagger_k z^k}{k} \right] \Big|_\Delta \label{eqn:elementary surface adj} \end{align} which by analogy with \eqref{eqn:res surf} satisfy: \begin{align} &h_k^\dagger(M) = \text{Hom}_{{\mathfrak{S}}(k)}\left(\text{triv}_{{\mathfrak{S}}(k)}, \text{Res}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} (M) \Big|_{\Delta_{1...k \bullet \times S^l}} \right) \label{eqn:triv surf} \\ &e_k^\dagger(M) = \text{Hom}_{{\mathfrak{S}}(k)}\left(\text{sign}_{{\mathfrak{S}}(k)}, \text{Res}_{{\mathfrak{S}}(k) \times {\mathfrak{S}}(l)}^{{\mathfrak{S}}(k+l)} (M) \Big|_{\Delta_{1...k \bullet \times S^l}} \right) \label{eqn:sign surf} \end{align} for all $M \in K_{{\mathfrak{S}}(k+l)}(S^{k+l})$. \\ \begin{remark} In \cite{K}, a result similar to (a categorification of) Proposition \ref{prop:classic surface} was proved by using certain operators $\Lambda_S \rightarrow \Lambda_S$ indexed by classes $\gamma \in K(S)$. While similar in overall shape to our: $$ e_k^{(\gamma)} : \Lambda_S \xrightarrow{e_k} \Lambda_S \times K(S) \xrightarrow{\emph{Id}_{\Lambda_S} \boxtimes \gamma} \Lambda_S \times K(S) \xrightarrow{\pi_*} \Lambda_S $$ and their adjoints, the operators of \emph{loc. cit. } are not linear in $\gamma$. Linearity is necessary in order for operators indexed by $\gamma \in K(S)$ to ``glue" to an operator: $$ \Lambda_S \rightarrow \Lambda_S \times K(S) $$ which is required for the framework of Definitions \ref{def:weak} and \ref{def:strong}. \\ \end{remark} \subsection{} We have the following global analogue of the rational function \eqref{eqn:zeta}: \begin{equation} \label{eqn:zeta s} \zeta^S(x) = \wedge^\bullet (-x \cdot {\mathcal{O}}_\Delta) \in K(S \times S)(x) \end{equation} whose restriction to the diagonal is precisely: $$ \wedge^\bullet \Big(-x \cdot \left([{\mathcal{O}}_S]-[\Omega_S^1]+[{\mathcal{K}}_S]\right) \Big) = \frac {(1-xq_1)(1-xq_2)}{(1-x)(1-xq)} = \zeta(x) $$ where $[\Omega_S^1] = q_1+q_2 \in K(S)$. We have the following analogue of Theorem \ref{thm:action functions}: \\ \begin{theorem} \label{thm:action functions surface} There is a strong action ${\mathcal{A}} \stackrel{\Psi}\curvearrowright_S \Lambda_S$ given by: \begin{equation} \label{eqn:action up 1 surface} c_1 \mapsto 1, \qquad c_2 \mapsto q^{-1}, \end{equation} \begin{equation} \label{eqn:action up 2 surface} P_{0,m} \ \mapsto p_m, \qquad P_{0,-m} \mapsto - m q^m \cdot p_m^\dagger, \end{equation} while for all $n > 0$ and $m \in {\mathbb{Z}}$, we have: $$ H_{n,m} \mapsto \int_{|z_1| \gg ... \gg |z_n|} \frac {\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action left surface} \exp \left[\sum_{k=1}^\infty \frac {z_1^{-k}+...+z_n^{-k}}k \cdot p_k \right] \exp \left[-\sum_{k=1}^\infty \frac {z_1^k+...+z_n^k}k \cdot p_k^\dagger \right] \Big|_\Delta \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} and: $$ H_{-n,m} \mapsto \int_{|z_1| \ll ...
\ll |z_n|} \frac {(-q)^n \prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action right surface} \exp \left[-\sum_{k=1}^\infty \frac {z_1^{-k}+...+z_n^{-k}}{k \cdot q^k} \cdot p_k \right] \exp \left[\sum_{k=1}^\infty \frac {z_1^k+...+z_n^k}k \cdot p_k^\dagger \right] \Big|_\Delta \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} (see the last sentence of Theorem \ref{thm:action functions} for the meaning of the contours). \\ \end{theorem} \begin{proof} Let us first show that the assignments \eqref{eqn:action up 1 surface}--\eqref{eqn:action right surface} give rise to a weak action. As shown in \cite{Hecke}, this can be achieved by establishing that the two bullets in the proof of Theorem \ref{thm:action functions} hold. The first bullet is proved almost word-for-word as in the aforementioned Theorem, with the minor modification that the parameters $q_1$ and $q_2$ are now identified with the Chern roots of $\Omega_S^1$. As for the second bullet, it is an immediate consequence of the following analogues of \eqref{eqn:need 1}--\eqref{eqn:need 5}: \begin{align} &\Big[ \Psi(P_{0,\pm m}), \Psi(P_{0,\pm m'}) \Big] = 0 \label{eqn:need new 1} \\ &\Big[ \Psi(P_{0,m}), \Psi(P_{0,-m'})\Big] = \Delta_* \left( \rho^*\left[\delta_{m'}^m m \frac {(1-q_1^m)(1-q_2^m)}{(1-q_1)(1-q_2)} \right] \pi^* \right) \label{eqn:need new 2} \\ &\Big[\Psi(H_{\pm 1,k}), \Psi(P_{0,\pm m})\Big] = \Delta_* \left(- \rho^*\left[ \frac {(1-q_1^m)(1-q_2^m)}{(1-q_1)(1-q_2)} \right] \Psi(H_{\pm 1,k\pm m}) \right) \label{eqn:need new 3} \\ &\Big[\Psi(H_{\pm 1,k}), \Psi(P_{0,\mp m}) \Big] = \Delta_* \left(\rho^*\left[ \frac {(1-q_1^m)(1-q_2^m)q^{m\delta_\pm^+}}{(1-q_1)(1-q_2)} \right] \Psi(H_{\pm 1,k \mp m}) \right) \label{eqn:need new 4} \\ &\Big[\Psi(H_{1,k}), \Psi(H_{-1,k'})\Big] = \Delta_* \left( \frac 1{q^{-1}-1} \begin{cases} \Psi(A_{k+k'}) &\text{if } k+k' > 0 \\ \rho^*(1-q^k)\pi^* &\text{if } k+k' = 0 \\ - \Psi(q^k B_{-k-k'}) &\text{if } k+k' < 0 \end{cases} \quad \right) \label{eqn:need new 5} \end{align} (in the context of a weak action, we only need the restriction of formulas \eqref{eqn:need new 1}--\eqref{eqn:need new 5} to the diagonal $\Delta \subset S \times S$) where $\pi$ and $\rho$ denote the standard projections of Proposition \ref{prop:classic surface}, and $A_m, B_m : \Lambda_S \rightarrow \Lambda_S \times K(S)$ are defined by: \begin{align*} &\sum_{m=0}^\infty \frac {A_m}{x^m} = \exp \left[ \sum_{m=1}^\infty \frac {p_m}{m x^m}(1-q^{-m}) \right] \Big|_\Delta \\ &\sum_{m=0}^\infty \frac {B_m}{x^m} = \exp \left[ \sum_{m=1}^\infty \frac {p^\dagger_m}{m x^m}(1-q^m) \right] \Big|_\Delta \end{align*} Formulas \eqref{eqn:need new 1}--\eqref{eqn:need new 5} are equalities of homomorphisms $\Lambda_S \rightarrow \Lambda_S \times K(S \times S)$, which are straightforward consequences of Proposition \ref{prop:classic surface} (in fact, the first two of these formulas are trivial). Therefore, let us prove \eqref{eqn:need new 5} as an illustration, and leave the remaining formulas as exercises to the interested reader.
We have: $$ \Psi(H_{1,k}) = \sum^{\lambda,\mu \text{ partitions}}_{|\lambda| - |\mu| = k} \frac {(-1)^{|\mu|}}{z_\lambda z_\mu} p_\lambda p_\mu^\dagger \Big|_\Delta $$ $$ \Psi(H_{-1,k'}) = \sum^{\lambda',\mu' \text{ partitions}}_{|\lambda'| - |\mu'| = k'} \frac {(-q)^{1-|\lambda'|}}{z_{\lambda'} z_{\mu'}} p_{\lambda'} p_{\mu'}^\dagger \Big|_\Delta $$ Therefore, we have: $$ [\Psi(H_{1,k}),\Psi(H_{-1,k'})]_{\text{red}} = \sum^{\lambda,\mu,\lambda',\mu' \text{ partitions}}_{|\lambda| - |\mu| = k, |\lambda'|-|\mu'|=k'} \frac {(-1)^{|\mu|}(-q)^{1-|\lambda'|}}{z_\lambda z_\mu z_{\lambda'} z_{\mu'}} \left[ p_\lambda p_\mu^\dagger \Big|_\Delta, p_{\lambda'} p_{\mu'}^\dagger \Big|_\Delta \right]_{\text{red}} $$ Formula \eqref{eqn:leibniz} allows us to compute the reduced commutator in the right-hand side. Specifically, this reduced commutator picks up a contribution from the pairing of any part of $\lambda$ (respectively $\mu$) with an equal part of $\mu'$ (respectively $\lambda'$), and each such contribution is a scalar due to \eqref{eqn:heis surface}. Therefore, one can write the right-hand side of the expression above as a linear combination of expressions of the form: $$ p_{\tilde{\lambda}} p_{\tilde{\mu}}^\dagger \Big|_\Delta $$ whose coefficients are symmetric Laurent polynomials in $q_1$ and $q_2$. However, the right-hand side of \eqref{eqn:need new 5} is also a linear combination of expressions of the same form. The fact that the two sides of \eqref{eqn:need new 5} are equal in the case of $S = {\mathbb{A}}^2$ (when $q_1,q_2$ are formal parameters) implies that they are equal in the case of an arbitrary surface (when $q_1,q_2$ are specialized to the Chern roots of the cotangent bundle). \\ \noindent Now that we have shown that formulas \eqref{eqn:action up 1 surface}--\eqref{eqn:action right surface} give rise to a weak action ${\mathcal{A}} \curvearrowright_S \Lambda_S$, let us show that they in fact produce a strong action. This entails proving \eqref{eqn:comm phi} for all $x,y \in {\mathcal{A}}$. Because of the Leibniz rule \eqref{eqn:leibniz}, it suffices to consider the case when $x,y$ range among the generators of ${\mathcal{A}}$, namely the $H_{n,m}$'s. For illustration, let us start with the case $x = H_{-1,m}$ and $y = H_{-1,m'}$: \begin{equation} \label{eqn:twice} \Psi(H_{-1,m}) = \int (-q) \, z^m \exp \left[- \sum_{k=1}^\infty \frac {p_k}{k z^k q^k} \right] \exp \left[\sum_{k=1}^\infty \frac {p_k^\dagger z^k}k \right] \Big|_\Delta \frac {dz}{2\pi i z} \end{equation} Below, we will consider two copies $S_1=S_2=S$ of the surface, and write: $$ p_k^{(i)}, p_k^{\dagger, (i)} , \Psi^{(i)}(H_{-1,m}) : \Lambda_S \rightarrow \Lambda_S \times K(S_{i}) \quad \text{and} \quad q^{(i)} = [{\mathcal{K}}_{S_i}] \in K(S_i) $$ for each $i \in \{1,2\}$.
By applying relation \eqref{eqn:twice} twice, we have: \begin{equation} \label{eqn:two operators} \left(\Psi^{(1)}(H_{-1,m}) \boxtimes \text{Id}_{S_2} \right) \circ \Psi^{(2)}(H_{-1,m'}) = \int_{|z_1|\ll |z_2|} q^{(1)} q^{(2)} z_1^m z_2^{m'} \end{equation} $$ \exp \left[- \sum_{k=1}^\infty \frac {p^{(1)}_k}{k z_1^kq^{(1)k}} \right] \exp \left[\sum_{k=1}^\infty \frac {p_k^{\dagger,(1)} z_1^k}k \right] \Big|_\Delta \exp \left[- \sum_{k=1}^\infty \frac {p^{(2)}_k}{k z_2^kq^{(2)k}} \right] \exp \left[\sum_{k=1}^\infty \frac {p_k^{\dagger,(2)} z_2^k}k \right] \Big|_\Delta \frac {dz_1}{2\pi i z_1} \frac {dz_2}{2\pi i z_2} $$ As a consequence of \eqref{eqn:heis surface}, we have the following analogue of \eqref{eqn:computation 2}: \begin{multline*} \exp\left[\sum_{k=1}^\infty \frac {p^{\dagger,(1)}_k z_1^k}k \right] \exp\left[- \sum_{k=1}^\infty \frac {p_k^{(2)}}{k z_2^k q^{(2)k}} \right] = \\ = \exp\left[- \sum_{k=1}^\infty \frac {p_k^{(2)}}{k z_2^k q^{(2)k}} \right] \exp\left[\sum_{k=1}^\infty \frac {p^{\dagger,(1)}_k z_1^k}k \right] \zeta^S \left( \frac {z_2}{z_1} \right)^{-1} \end{multline*} as an equality of operators in $\text{Hom}(\Lambda_S, \Lambda_S \times K(S_1 \times S_2))$, where $\zeta^S$ is the rational function \eqref{eqn:zeta s}. Therefore, formula \eqref{eqn:two operators} may be rewritten as: \begin{equation} \label{eqn:comp 1} \left(\Psi^{(1)}(H_{-1,m}) \boxtimes \text{Id}_{S_2} \right) \circ \Psi^{(2)}(H_{-1,m'}) = \int_{|z_1|\ll |z_2|} q^{(1)} q^{(2)} z_1^m z_2^{m'} \zeta^S \left( \frac {z_2}{z_1} \right)^{-1} \end{equation} $$ \exp \left[-\sum_{k=1}^\infty \left( \frac {p^{(1)}_k}{k z_1^kq^{(1)k}} + \frac {p^{(2)}_k}{k z_2^kq^{(2)k}} \right) \right] \exp \left[\sum_{k=1}^\infty \left( \frac {p_k^{\dagger,(1)} z_1^k}k + \frac {p_k^{\dagger,(2)} z_2^k}k \right) \right] \frac {dz_1}{2\pi i z_1} \frac {dz_2}{2\pi i z_2} $$ Similarly, we have: \begin{equation} \label{eqn:comp 2} \left(\Psi^{(2)}(H_{-1,m'}) \boxtimes \text{Id}_{S_1} \right) \circ \Psi^{(1)}(H_{-1,m}) = \int_{|z_1|\gg |z_2|} q^{(1)} q^{(2)} z_1^m z_2^{m'} \zeta^S \left( \frac {z_1}{z_2} \right)^{-1} \end{equation} $$ \exp \left[-\sum_{k=1}^\infty \left( \frac {p^{(1)}_k}{k z_1^kq^{(1)k}} + \frac {p^{(2)}_k}{k z_2^kq^{(2)k}} \right) \right] \exp \left[\sum_{k=1}^\infty \left( \frac {p_k^{\dagger,(1)} z_1^k}k + \frac {p_k^{\dagger,(2)} z_2^k}k \right) \right] \frac {dz_1}{2\pi i z_1} \frac {dz_2}{2\pi i z_2} $$ However, Remark 3.17 of \cite{Shuf surf} gives us the following formula: \begin{equation} \label{eqn:zeta expand} \zeta^S(x)^{-1} = 1 - [{\mathcal{O}}_\Delta] \otimes \frac x{(1-xq_1)(1-xq_2)} \end{equation} where $q_1$ and $q_2$ are the Chern roots of $\Omega_S^1$ on the diagonal inside $S \times S$. Expanding formula \eqref{eqn:zeta expand} in either positive or negative powers of $x$ shows that it is always equal to $1$ plus a multiple of $[{\mathcal{O}}_\Delta]$.
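\noindent Explicitly, one may expand the second term of \eqref{eqn:zeta expand} as: $$ \frac x{(1-xq_1)(1-xq_2)} = \sum_{k=1}^\infty h_{k-1}(q_1,q_2) \cdot x^k = \sum_{k=1}^\infty \frac {h_{k-1}(q_1^{-1},q_2^{-1})}{q \cdot x^k} $$ in the domains $|x| \ll 1$ and $|x| \gg 1$, respectively, where $h_j$ denotes the complete symmetric polynomial; in either case, $\zeta^S(x)^{-1} - 1$ is a multiple of $[{\mathcal{O}}_\Delta]$. \\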
Therefore, the difference of \eqref{eqn:comp 1} and \eqref{eqn:comp 2} is a multiple of $[{\mathcal{O}}_\Delta]$, and we conclude that: \begin{equation} \label{eqn:final 1} \left[\Phi^{(1)}(H_{-1,m}), \Phi^{(2)}(H_{-1,m'}) \right] = (\text{Id}_{\Lambda_S} \boxtimes \Delta)_* (A) \end{equation} where $A$ is a certain difference of integrals of rational functions in $q_1$ and $q_2$, times the symmetric expression: $$ \exp \left[-\sum_{k=1}^\infty \left( \frac {p_k}{k z_1^kq^k} + \frac {p_k}{k z_2^kq^k} \right) \right] \exp \left[\sum_{k=1}^\infty \left( \frac {p_k^{\dagger} z_1^k}k + \frac {p_k^{\dagger} z_2^k}k \right) \right] \Big|_\Delta : \Lambda_S\rightarrow \Lambda_S \times K(S) $$ However, because of Theorem \ref{thm:action functions}, we have: \begin{equation} \label{eqn:final 2} A = \Phi\left( \frac {[H_{-1,m}, H_{-1,m'}]}{(1-q_1)(1-q_2)} \right) \end{equation} in the $S = {\mathbb{A}}^2$ case (when $q_1,q_2$ are formal parameters). Therefore, formula \eqref{eqn:final 2} also holds in the situation at hand (when $q_1,q_2$ are the Chern roots of $\Omega_S^1$). The generalization of the argument above to any $x = H_{n,m}$ and $y = H_{n',m'}$ where $n$ and $n'$ have the same sign is straightforward, and we refer the interested reader to the proof of ``Conjecture 5.7 subject to Assumption B'' from \cite{W}. \\ \noindent Let us now show how to prove \eqref{eqn:comm phi} for $x = H_{n,m}$ and $y = H_{n',m'}$ where $n$ and $n'$ have opposite signs, and we will do so by induction on $|n|+|n'|$. The base cases of the induction are precisely \eqref{eqn:need new 1}--\eqref{eqn:need new 5}, so let us assume without loss of generality that $n \geq 2$. There exist $u,u',v,v'$ with $u+u' = n$, $v+v' = m$, $u,u' > 0$ such that: $$ [H_{u,v}, H_{u',v'}] = (1-q_1)(1-q_2) \Big( c \cdot H_{n,m} + ... \Big) $$ (see \cite{BS}) where the ellipsis denotes a sum of products of $H_{u'',v''}$ with $0 < u'' < n$, and $c$ is a product of expressions of the form $1+q+...+q^{d-1}$. Since $u$ and $u'$ have the same sign, the argument in the previous paragraph implies that: $$ \Psi(H_{n,m}) = c^{-1} \Big[\Psi(H_{u,v}), \Psi(H_{u',v'})\Big]_{\text{red}} - c^{-1} \left( \Psi(...) \Big|_\Delta \right) $$ (see \eqref{eqn:reduced} for the definition of the reduced commutator). We note that $c$ is invertible in $K(S)$ because $q-1$ is nilpotent. Therefore, the Leibniz rule \eqref{eqn:leibniz} and the Jacobi identity \eqref{eqn:jacobi} imply that: $$ \Big[\Psi(H_{n,m}), \Psi(H_{n',m'})\Big]_{\text{red}} = c^{-1} \Big[\Psi(H_{u,v}), \Big[\Psi(H_{u',v'}), \Psi(H_{n',m'})\Big]_{\text{red}}\Big]_{\text{red}} - $$ $$ - c^{-1} \Big[\Psi(H_{u',v'}), \Big[\Psi(H_{u,v}), \Psi(H_{n',m'})\Big]_{\text{red}}\Big]_{\text{red}} - c^{-1} \Big[\Psi(...)\Big|_\Delta, \Psi(H_{n',m'})\Big]_{\text{red}} $$ By the induction hypothesis, the right-hand side of the expression above is equal to: $$ c^{-1} \Psi \left(\frac {[H_{u,v}, [H_{u',v'},H_{n',m'}]] - [H_{u',v'}, [H_{u,v},H_{n',m'}]]}{(1-q_1)^2(1-q_2)^2} - \frac {[...,H_{n',m'}]}{(1-q_1)(1-q_2)} \right) $$ The usual Leibniz rule and Jacobi identity in ${\mathcal{A}}$ (which give $[H_{u,v},[H_{u',v'},H_{n',m'}]] - [H_{u',v'},[H_{u,v},H_{n',m'}]] = [[H_{u,v},H_{u',v'}],H_{n',m'}]$) show that the expression above is $$ \Psi \left( \frac {[H_{n,m}, H_{n',m'}]}{(1-q_1)(1-q_2)} \right) $$ as required.
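\noindent (Let us also spell out the invertibility of $c$ used above, assuming, as elsewhere in this section, that $K$--theory is taken with ${\mathbb{Q}}$ coefficients: writing $q = 1 + \epsilon$ with $\epsilon$ nilpotent, each factor of $c$ satisfies $$ 1+q+...+q^{d-1} = \sum_{j=0}^{d-1} (1+\epsilon)^j = d + \epsilon \cdot \big(\text{element of } K(S)\big) $$ which is a unit because $d$ is invertible and $\epsilon$ is nilpotent, and a finite product of units is a unit.)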
\end{proof} \subsection{} As in Subsection \ref{sub:pleth not}, we will denote elements of $\Lambda_S$ by $f[X]$, and write: \begin{equation} \label{eqn:complete surf} \sum_{k=0}^\infty \frac {h_k}{z^k} = \wedge^\bullet \left( - \frac Xz \right) \end{equation} \begin{equation} \label{eqn:elementary surf} \sum_{k=0}^\infty \frac {e_k}{(-z)^k} = \wedge^\bullet \left( \frac Xz \right) \end{equation} The following definition is analogous to Proposition \ref{prop:plethysm}: \\ \begin{definition} \label{def:plethysm surface} For any $f[X] \in \Lambda_S$ and any variable $z$, define: \begin{equation} \label{eqn:pleth surf} f \left[ X \pm {\mathcal{O}}_{\Delta} z \right] = \exp \left[\pm \sum_{k=1}^\infty \frac {p_k^\dagger z^k}k \right] \Big|_\Delta \cdot f[X] \end{equation} as an element of $\Lambda_S \times K(S)[z]$. \\ \end{definition} \noindent Using \eqref{eqn:complete surf}, \eqref{eqn:elementary surf}, \eqref{eqn:pleth surf}, we may rewrite formulas \eqref{eqn:action left surface} and \eqref{eqn:action right surface} as: $$ \Psi(H_{n,m})(f[X]) = \int_{0, X \prec |z_n| \prec ... \prec |z_1| \prec \infty} \frac {\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action left surf} \wedge^\bullet \left( - \frac X{z_1} \right) ... \wedge^\bullet \left( - \frac X{z_n} \right) \cdot f \left[ X - {\mathcal{O}}_\Delta \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} and: $$ \Psi(H_{-n,m})(f[X]) = \int_{0, X \prec |z_1| \prec ... \prec |z_n| \prec \infty} \frac {(-q)^n \prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action right surf} \wedge^\bullet \left( \frac X{z_1 q} \right) ... \wedge^\bullet \left( \frac X{z_n q} \right) \cdot f \left[ X + {\mathcal{O}}_\Delta \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} These formulas are proved by analogy with \eqref{eqn:action left} and \eqref{eqn:action right}. The contours of integration are explained in the last paragraph of Subsection \ref{sub:new plane}. \\ \subsection{} We will now bridge Theorems \ref{thm:action surface} and \ref{thm:action functions surface}. To any element $f[X] \in K_{{\mathfrak{S}}(k)}(S^k) \subset \Lambda_S$, we may associate \underline{universal classes} on ${\mathcal{M}}^s$ via the construction \eqref{eqn:operator def surface}: \begin{equation} \label{eqn:universal} f[X] \stackrel{\Gamma_S}\leadsto f[{\mathcal{U}}] := \pi_* \Big( {\mathcal{U}}_1 \otimes ... \otimes {\mathcal{U}}_k \otimes \rho^*(f[X]) \Big)^{{\mathfrak{S}}(k)} \end{equation} where ${\mathcal{U}}$ is the universal sheaf on ${\mathcal{M}}^s \times S$, ${\mathcal{U}}_i$ denotes its pull-back to ${\mathcal{M}}^s \times S^k$ via the $i$--th projection map $S^k \rightarrow S$, and $\pi : {\mathcal{M}}^s \times S^k \rightarrow {\mathcal{M}}^s$, $\rho : {\mathcal{M}}^s \times S^k \rightarrow S^k$ are the projection maps. Just as in the case of ${\mathbb{A}}^2$, the universal classes generate the $K$--theory groups of moduli spaces of stable sheaves, as the following result shows. \\ \begin{lemma} \label{lem:generate} Any element of $K({\mathcal{M}}^s_d)$ is of the form \eqref{eqn:universal}, for some $k \gg d$.
\\ \end{lemma} \begin{proof} Theorem 1 of \cite{M} shows that the class of the diagonal: $$ \Delta_{{\mathcal{M}}^s_d} \hookrightarrow {\mathcal{M}}^s_d \times {\mathcal{M}}^s_d $$ can be expressed as a Chern class of the virtual bundle: $$ \xymatrix{{\mathcal{E}} \ar@{.>}[d] & \ \sum_{i=0}^2 (-1)^i\text{Ext}^i({\mathcal{F}},{\mathcal{F}}') \ar@{.>}[d] \ar@{_{(}->}[l] \\ {\mathcal{M}}^s_d \times {\mathcal{M}}^s_d & \ ({\mathcal{F}},{\mathcal{F}}') \ar@{_{(}->}[l]} $$ Although the result of \emph{loc. cit.} is stated in cohomology, it holds at the level of Chow groups $A^*({\mathcal{M}}_d^s)$ with rational coefficients. Also, while this result is stated for a surface with trivial canonical class (the top option in Assumption S), an analogous argument works for a surface with negative canonical class (the bottom option in Assumption S), as shown in \cite{GT} for Hilbert schemes. Based on these facts, it is standard to show that any element of $A^*({\mathcal{M}}_d^s)$ can be written in the form: $$ \pi_{*} \Big(\text{ch}({\mathcal{U}}_1) \cdot ... \cdot \text{ch}({\mathcal{U}}_k) \cdot \rho^*(g) \Big) \in A^*({\mathcal{M}}_d^s) $$ for some $k \gg d$ and some $g \in A^*(S^k)$. Since the Chern character $K({\mathcal{M}}_d^s) \rightarrow A^*({\mathcal{M}}_d^s)$ is an isomorphism (over ${\mathbb{Q}}$, as a consequence of Assumption S), a simple application of the Grothendieck-Hirzebruch-Riemann-Roch theorem shows that any element of $K({\mathcal{M}}_d^s)$ can be written as: $$ \pi_* \Big( {\mathcal{U}}_1 \otimes ... \otimes {\mathcal{U}}_k \otimes \rho^*(g) \Big) $$ for some $g \in K(S^k)$. The formula above is equal to \eqref{eqn:universal} for $f[X] = \sum_{\sigma \in {\mathfrak{S}}(k)} \sigma^*(g)$. \end{proof} \begin{remark} It would be very interesting to prove Lemma \ref{lem:generate} without Assumption S (although in this case, one should rather work with a dg scheme model of the moduli space of stable sheaves, instead of the singular scheme ${\mathcal{M}}^s$). However, the argument provided above requires Assumption S in several crucial places. \\ \end{remark} \subsection{} We will need a geometric incarnation of the plethysm operation \eqref{eqn:pleth surf}: \\ \begin{proposition} \label{prop:plethystic identity} For any $f[X] \in K_{{\mathfrak{S}}(n)}(S^n)$, we have the following identity: \begin{equation} \label{eqn:plethystic identity} f \left[{\mathcal{U}}\pm {\mathcal{O}}_{\Delta} z \right] = \pi_{*} \Big[ \left({\mathcal{U}}_1 \pm {\mathcal{O}}_{\Delta_{1\bullet}} z \right) ... \left({\mathcal{U}}_n \pm {\mathcal{O}}_{\Delta_{n\bullet}} z \right) \otimes f[X] \Big]^{{\mathfrak{S}}(n)} \end{equation} as elements of $K({\mathcal{M}}^s \times S)[z]$, where $\pi : {\mathcal{M}}^s \times S^n \times S \rightarrow {\mathcal{M}}^s \times S$ is the projection map that forgets the middle $n$ factors of $S$, and $\bullet$ indicates the surviving factor of $S$. \\ \end{proposition} \begin{proof} Let us prove \eqref{eqn:plethystic identity} in the case $\pm = +$, as the case $\pm = -$ is analogous, and so we leave it to the interested reader. The right-hand side of \eqref{eqn:plethystic identity} is: $$ \sum_{k=0}^n z^k \cdot \left( \sum_{\{a_1,...,a_k\} \subset \{1,...,n\}} \pi_* \left[ \bigotimes_{s \in \{1,...,n\} \backslash \{a_1,...,a_k\}} {\mathcal{U}}_s \otimes {\mathcal{O}}_{\Delta_{a_1...a_k\bullet}} \otimes f[X] \right] \right)^{{\mathfrak{S}}(n)} = $$ $$ = \sum_{k=0}^n z^k \cdot \left( \pi^{(k)}_*\left[{\mathcal{U}}_1 \otimes ...
\otimes {\mathcal{U}}_{n-k} \otimes f[X] \Big|_{S^{n-k} \times \Delta_{n-k+1...n\bullet}} \right] \right)^{{\mathfrak{S}}(n-k) \times {\mathfrak{S}}(k)} $$ where $\pi^{(k)} : {\mathcal{M}}^s \times S^{n-k} \times S \rightarrow {\mathcal{M}}^s \times S$ is the projection map that forgets the middle $n-k$ factors of $S$ (the two rows above are equal because of Frobenius reciprocity). By \eqref{eqn:triv surf}, the bottom row of the expression above is equal to: $$ \sum_{k=0}^n z^k \cdot \pi^{(k)}_*\left[{\mathcal{U}}_1 \otimes ... \otimes {\mathcal{U}}_{n-k} \otimes h_k^\dagger(f[X]) \right]^{{\mathfrak{S}}(n-k)} \stackrel{\eqref{eqn:universal}}= $$ $$ = \sum_{k=0}^n z^k \cdot \Gamma_S \left(h_k^\dagger(f[X]) \right) \stackrel{\eqref{eqn:complete surf}}= \Gamma_S \left( \exp \left[\sum_{k=1}^\infty \frac {p_k^\dagger z^k}k \right] \Big|_\Delta \cdot f[X] \right) = \Gamma_S \left( f \left[X + {\mathcal{O}}_{\Delta} z \right] \right) $$ which is equal to the left-hand side of \eqref{eqn:plethystic identity}. \end{proof} \noindent The following is Proposition 5.12 of \cite{W surf} (see also Theorem 3.16 of \cite{W} for the $S = {\mathbb{A}}^2$ case). In \emph{loc. cit.}, the objects denoted by $f[...]$ in formulas \eqref{eqn:action universal surface 1} and \eqref{eqn:action universal surface 2} were actually understood to be the right-hand side of \eqref{eqn:plethystic identity}. The fact that they match our current notion of plethysm \eqref{eqn:pleth surf} is the content of Proposition \ref{prop:plethystic identity}. \\ \begin{proposition} \label{prop:action universal surface} In terms of universal classes, \eqref{eqn:action right surface 4}--\eqref{eqn:action right surface 5} read: $$ \Phi(H_{n,m})(f[{\mathcal{U}}]) = \int_{{\mathcal{U}} \prec |z_n| \prec ... \prec |z_1| \prec 0, \infty} \frac {\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action universal surface 1} \wedge^\bullet \left(- \frac {{\mathcal{U}}}{z_1} \right) ... \wedge^\bullet \left(- \frac {{\mathcal{U}}}{z_n} \right) \otimes f \left[{\mathcal{U}} - {\mathcal{O}}_\Delta \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} and: $$ \Phi(H_{-n,m})(f[{\mathcal{U}}]) = \int_{{\mathcal{U}} \prec |z_1| \prec ... \prec |z_n| \prec 0, \infty} \frac {(-q)^{n}\prod_{i=1}^n z_i^{\left \lfloor \frac {mi}n \right \rfloor - \left \lfloor \frac {m(i-1)}n \right \rfloor}}{\prod_{i=1}^{n-1} \left(1 - \frac {z_{i+1}q}{z_i} \right) \prod_{i<j} \zeta \left( \frac {z_j}{z_i} \right)} $$ \begin{equation} \label{eqn:action universal surface 2} \wedge^\bullet \left(\frac {{\mathcal{U}}}{z_1q} \right) ... \wedge^\bullet \left(\frac {{\mathcal{U}}}{z_nq} \right) \otimes f \left[{\mathcal{U}} + {\mathcal{O}}_\Delta \sum_{i=1}^n z_i \right] \prod_{a=1}^n \frac {dz_a}{2\pi i z_a} \end{equation} as elements of $K({\mathcal{M}}^s \times S)$, where $q_1 + q_2 = [\Omega_S^1]$ and $q=q_1q_2 = [{\mathcal{K}}_S]$. \\ \end{proposition} \begin{proof}\emph{of Theorem \ref{thm:surface}:} Comparing formulas \eqref{eqn:action left surf}--\eqref{eqn:action right surf} with \eqref{eqn:action universal surface 1}--\eqref{eqn:action universal surface 2}, we observe that they are one and the same integral but over slightly different contours. The difference is in which side of the $z_1,...,z_n$ contours the pole at $0$ lies.
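\noindent Schematically, in a single variable: if $\omega$ denotes the full integrand (including the measure $\frac {dz}{2\pi i z}$ and the monomial prefactors), then moving the contour across the origin changes the integral by exactly the residue there: $$ \oint_{0 \text{ inside}} \omega - \oint_{0 \text{ outside}} \omega = \text{Res}_{z=0} \, \omega $$ provided all other poles remain on the same side of the contour.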
As we explained in the proof of Theorem \ref{thm:main}, when $m > \pm nr$, the integrand is actually regular at 0, so formulas \eqref{eqn:action left surf}--\eqref{eqn:action right surf} produce the same result as \eqref{eqn:action universal surface 1}--\eqref{eqn:action universal surface 2}. \\ \end{proof} \subsection{} Consider a weak action ${\mathcal{A}} \stackrel{\Phi}\curvearrowright_S K(X)$. Given any element $v \in K(X)$, the \underline{submodule} generated by $v$ will refer to the subset ${\mathcal{A}} \cdot v \subset K(X)$ consisting of linear combinations of the following operators applied to $v$: \begin{multline*} K(X) \xrightarrow{\Phi(a_k)} K(X \times S) \xrightarrow{\Phi(a_{k-1}) \boxtimes \text{Id}_S} ... \\ ... \xrightarrow{\Phi(a_1) \boxtimes \text{Id}_{S^{k-1}}} K(X \times S^k) \xrightarrow{\text{Id}_X \boxtimes \gamma} K(X \times S^k) \xrightarrow{\pi_*} K(X) \end{multline*} where $k \in {\mathbb{N}}$, $a_1,...,a_k \in {\mathcal{A}}$, $\gamma \in K(S^k)$ are arbitrary, and $\pi : X \times S^k \rightarrow X$ denotes the projection. For the action of Theorem \ref{thm:action surface}, we consider the submodules: \begin{equation} \label{eqn:submodules} {\mathcal{A}} \cdot \boldsymbol{1}_d \subset K({\mathcal{M}}^s) \end{equation} where $\boldsymbol{1}_d \in K({\mathcal{M}}^s_d)$ denotes the structure sheaf of the subvariety ${\mathcal{M}}_d^s \subset {\mathcal{M}}^s$. Since the algebra ${\mathcal{A}}$ contains the operators \eqref{eqn:action right surface 2} of tensor product with the universal sheaf, Lemma \ref{lem:generate} implies that: $$ K({\mathcal{M}}^s_d) \subset {\mathcal{A}} \cdot \boldsymbol{1}_d $$ for all $d \geq \left \lceil \frac {r-1}{2r} c_1^2 \right \rceil$. Therefore, we conclude the following: \\ \begin{proposition} \label{prop:generators} The ${\mathcal{A}}^{(r)}$--module $K({\mathcal{M}}^s)$ is generated by $\{\boldsymbol{1}_d\}_{d \geq \left \lceil \frac {r-1}{2r} c_1^2 \right \rceil}$, i.e. \begin{equation} \label{eqn:union} K({\mathcal{M}}^s) = \bigcup_{d = \left \lceil \frac {r-1}{2r} c_1^2 \right \rceil}^\infty {\mathcal{A}} \cdot \boldsymbol{1}_d \end{equation} Moreover, these submodules are nested inside each other, i.e.: \begin{equation} \label{eqn:nested} {\mathcal{A}} \cdot \boldsymbol{1}_d \supset {\mathcal{A}} \cdot \boldsymbol{1}_{d-1} \end{equation} for all $d$. \\ \end{proposition} \noindent The final claim in Proposition \ref{prop:generators} follows from: \begin{equation} \label{eqn:identity} \Phi(H_{n,m})^{(\circ)} \cdot \boldsymbol{1}_d = \begin{cases} \boldsymbol{1}_{d-n} &\text{if } m = 0 \\ 0 &\text{if } m \in \{-1,...,-nr+1\} \end{cases} \end{equation} where $\circ \in K(S)$ denotes the skyscraper sheaf at any point of $S$. Formula \eqref{eqn:identity} is an immediate consequence of relation \eqref{eqn:action universal surface 1} for $f = 1$ (see also Proposition 4.15 of \cite{K-theory}). \\
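\noindent Indeed, taking $n = 1$ and $m = 0$ in \eqref{eqn:identity} yields: $$ \Phi(H_{1,0})^{(\circ)} \cdot \boldsymbol{1}_d = \boldsymbol{1}_{d-1} $$ so $\boldsymbol{1}_{d-1} \in {\mathcal{A}} \cdot \boldsymbol{1}_d$, and therefore ${\mathcal{A}} \cdot \boldsymbol{1}_{d-1} \subset {\mathcal{A}} \cdot \boldsymbol{1}_d$, which is precisely \eqref{eqn:nested}.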